Traditional language learning apps rely on flashcards and repetitive drills. Lingput takes a different approach: it gives learners comprehensible input - AI-generated stories tailored to their vocabulary level, complete with audio and intelligent word tracking.
85% faster API responses (600ms → 85ms) by implementing Redis caching to eliminate redundant database queries
Non-blocking user experience replacing 30-second blocking API requests with async job queues and real-time progress updates
80% deployment time reduction (25min → 5min) with automated CI/CD pipeline
Production-ready architecture handling concurrent AI processing and real-time progress updates
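The caching win above comes from a read-through pattern: check Redis first, fall back to the database only on a miss, then store the result with a TTL. A minimal sketch follows; the `CacheClient` interface, the in-memory stand-in, and the key names are illustrative assumptions, not Lingput's actual identifiers.

```typescript
// Read-through cache sketch. CacheClient mirrors the small subset of a
// Redis client (e.g. ioredis GET / SET EX) that this pattern needs.
interface CacheClient {
  get(key: string): Promise<string | null>;
  set(key: string, value: string, ttlSeconds: number): Promise<void>;
}

// In-memory stand-in so the sketch runs without a Redis server.
class InMemoryCache implements CacheClient {
  private store = new Map<string, string>();
  async get(key: string) { return this.store.get(key) ?? null; }
  async set(key: string, value: string, _ttl: number) { this.store.set(key, value); }
}

// Wrap an expensive lookup (e.g. a user's vocabulary) so repeated
// requests within the TTL never touch the database.
async function getOrCompute<T>(
  cache: CacheClient,
  key: string,
  ttlSeconds: number,
  compute: () => Promise<T>,
): Promise<T> {
  const hit = await cache.get(key);
  if (hit !== null) return JSON.parse(hit) as T; // cache hit: skip the DB
  const value = await compute();                 // cache miss: query once
  await cache.set(key, JSON.stringify(value), ttlSeconds);
  return value;
}
```

On a hit the handler skips the database entirely, which is how several redundant queries per page load collapse into a single cached read.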
Built with production-grade practices and scalable system design:
Async Job Processing: BullMQ + Redis queue system replaced blocking 30-second API requests with background processing, enabling responsive API with real-time progress tracking
Clean Architecture: Multi-layered Express.js backend (Controller/Service/Repository) with dependency injection for maintainability
Performance Optimization: Redis-powered multi-layer caching eliminated repetitive database queries, dramatically reducing database load and API latency
Secure Authentication: HTTP-only cookies with an access/refresh token flow, keeping tokens out of reach of XSS attacks
Advanced Frontend: Next.js with custom React hooks for intelligent job lifecycle management and optimistic UI updates
Production DevOps: Fully containerized with Docker Compose, automated CI/CD pipeline, and zero-downtime deployments
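The authentication approach listed above hinges on cookie attributes: HTTP-only cookies are invisible to `document.cookie`, so an injected script cannot exfiltrate the tokens. Here is a minimal sketch of that delivery; the cookie names, lifetimes, and refresh path are assumptions for illustration.

```typescript
// Sketch of HTTP-only cookie delivery for an access/refresh token pair.
// The options object matches the shape Express's res.cookie() accepts.
interface CookieSpec {
  name: string;
  value: string;
  options: {
    httpOnly: boolean; // not readable from client-side JavaScript
    secure: boolean;   // sent over HTTPS only
    sameSite: "strict" | "lax";
    maxAge: number;    // lifetime in milliseconds
    path: string;
  };
}

function buildAuthCookies(accessToken: string, refreshToken: string): CookieSpec[] {
  return [
    {
      name: "access_token", // short-lived, sent with every API request
      value: accessToken,
      options: { httpOnly: true, secure: true, sameSite: "strict",
                 maxAge: 15 * 60 * 1000, path: "/" },
    },
    {
      name: "refresh_token", // long-lived, scoped to the refresh endpoint only
      value: refreshToken,
      options: { httpOnly: true, secure: true, sameSite: "strict",
                 maxAge: 7 * 24 * 60 * 60 * 1000, path: "/auth/refresh" },
    },
  ];
}
```

In an Express handler this maps directly onto `res.cookie(spec.name, spec.value, spec.options)`; scoping the refresh token's `path` means it is only ever transmitted to the rotation endpoint.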
This project demonstrates end-to-end ownership of complex technical challenges:
Async Processing Pipeline: Story generation → translation → lemmatization → audio synthesis → asset upload
UX Blocking Issues: Replaced single 30-second blocking API requests with background job processing and streaming progress updates to the frontend
Database Performance: Eliminated slow page loads caused by redundant database queries through an intelligent Redis caching strategy
Resource Management: Efficient handling of long-running AI tasks without blocking the main application
Production Deployment: Zero-downtime deployments with comprehensive monitoring and error handling
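The five-stage pipeline above can be sketched as a background worker that reports progress after each stage, in the shape of a BullMQ-style job (where the callback would be `job.updateProgress`). Stage names follow the list; the stage signatures and stub implementations are assumptions.

```typescript
// Sequential background pipeline with per-stage progress reporting.
// Each stage transforms the job state; onProgress streams the percent
// complete, which the frontend can poll or receive over SSE/WebSocket.
type Stage = { name: string; run: (input: string) => Promise<string> };

async function runPipeline(
  stages: Stage[],
  input: string,
  onProgress: (percent: number, stage: string) => void,
): Promise<string> {
  let state = input;
  for (let i = 0; i < stages.length; i++) {
    state = await stages[i].run(state);
    onProgress(Math.round(((i + 1) / stages.length) * 100), stages[i].name);
  }
  return state;
}

// Stub stages standing in for the real AI / TTS / storage calls.
const storyStages: Stage[] = [
  "generate-story",
  "translate",
  "lemmatize",
  "synthesize-audio",
  "upload-assets",
].map((name) => ({ name, run: async (s) => `${s}->${name}` }));
```

Because the API handler only enqueues the job and returns a job ID immediately, the 30-second wait moves off the request path; the client watches the progress stream instead of blocking on a single response.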