# Voice Language Translator
A mobile-friendly web application that translates spoken language between multiple languages using:
- the open-source Gemma 3 LLM, served through Ollama, for translation
- OpenAI Whisper for speech-to-text
- OpenAI Edge TTS for text-to-speech
## Supported Languages
- Arabic
- Armenian
- Azerbaijani
- English
- French
- Georgian
- Kazakh
- Mandarin
- Farsi
- Portuguese
- Russian
- Spanish
- Turkish
- Uzbek
## Setup Instructions

1. Install the required Python packages:

   ```bash
   pip install -r requirements.txt
   ```

2. Make sure you have Ollama installed and the Gemma 3 model loaded:

   ```bash
   ollama pull gemma3
   ```

3. Ensure your OpenAI Edge TTS server is running on port 5050 (a quick connectivity check for both services is sketched after this list).

4. Run the application:

   ```bash
   python app.py
   ```

5. Open your browser and navigate to:

   http://localhost:8000
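Before launching the app, you can optionally confirm that Ollama and the Edge TTS server are reachable. The snippet below is a minimal sketch that only checks TCP connectivity on the assumed default ports (11434 for Ollama, 5050 for the TTS server); it is a standalone helper, not part of `app.py`.

```python
# check_services.py - optional helper, not part of the project itself.
# Verifies that the Ollama and Edge TTS servers accept TCP connections
# on their expected ports before you start the Flask app.
import socket

SERVICES = {
    "Ollama": ("localhost", 11434),   # Ollama's default API port
    "Edge TTS": ("localhost", 5050),  # port this app expects for TTS
}

def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for name, (host, port) in SERVICES.items():
        status = "OK" if is_reachable(host, port) else "NOT REACHABLE"
        print(f"{name} ({host}:{port}): {status}")
```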
## Usage
- Select your source language from the dropdown menu
- Press the microphone button and speak
- Press the button again to stop recording
- Wait for the transcription to complete
- Select your target language
- Press the "Translate" button
- Use the play buttons to hear the original or translated text
## Technical Details
- The app uses Flask for the web server
- Audio is captured client-side using the MediaRecorder API
- OpenAI Whisper performs speech recognition, using the selected source language as a hint
- Ollama provides access to the Gemma 3 model for translation
- OpenAI Edge TTS delivers natural-sounding speech output
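To illustrate how these pieces fit together, here is a hedged sketch of a transcribe-then-translate pipeline. It assumes the `openai-whisper` Python package and Ollama's default REST API on port 11434; the actual routes, prompt, and model names used in `app.py` may differ.

```python
# translation_pipeline_sketch.py - illustrative only; app.py may be organized differently.
import requests
import whisper  # openai-whisper package

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default generate endpoint


def transcribe(audio_path: str, language: str | None = None) -> dict:
    """Transcribe audio with Whisper; pass a language code as a hint, or None to auto-detect."""
    model = whisper.load_model("base")  # model size is an assumption
    return model.transcribe(audio_path, language=language)


def translate(text: str, source: str, target: str) -> str:
    """Ask Gemma 3 (via Ollama) to translate text between two languages."""
    prompt = (
        f"Translate the following text from {source} to {target}. "
        f"Return only the translation.\n\n{text}"
    )
    resp = requests.post(
        OLLAMA_URL,
        json={"model": "gemma3", "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"].strip()


if __name__ == "__main__":
    result = transcribe("recording.wav", language="en")
    print("Transcript:", result["text"])
    print("Translation:", translate(result["text"], "English", "Spanish"))
```

In the running app, the browser supplies audio captured with the MediaRecorder API and the equivalent calls are made from the Flask request handlers.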
## Mobile Support
The interface is fully responsive and designed to work well on mobile devices.