- Create llm_pool/router.py: LiteLLM Router with fast (Ollama) and quality (Anthropic/OpenAI) model groups
- Configure fallback chain: quality providers fail -> fast group
- Pin LiteLLM to ==1.82.5 (avoid September 2025 OOM regression in later releases)
- Create llm_pool/main.py: FastAPI service on port 8004 with /complete and /health endpoints
- Add providers/__init__.py: reserved for future per-provider customization
- Update docker-compose.yml: add llm-pool and celery-worker service stubs
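The router and service described above can be sketched roughly as follows. This is a configuration sketch, not the committed code: the concrete model names (`ollama/llama3.1`, the Anthropic/OpenAI model strings), the `api_base` for Ollama, and the request schema are all assumptions for illustration.

```python
# llm_pool/router.py -- sketch: LiteLLM Router with "fast" and "quality" groups.
from litellm import Router

router = Router(
    model_list=[
        {
            # "fast" group: local Ollama deployment (model name is an assumption)
            "model_name": "fast",
            "litellm_params": {
                "model": "ollama/llama3.1",
                "api_base": "http://ollama:11434",
            },
        },
        {
            # "quality" group: hosted providers; API keys are read from the
            # environment by LiteLLM (model strings are assumptions)
            "model_name": "quality",
            "litellm_params": {"model": "anthropic/claude-3-5-sonnet-20241022"},
        },
        {
            "model_name": "quality",
            "litellm_params": {"model": "openai/gpt-4o"},
        },
    ],
    # Fallback chain: if all "quality" deployments fail, retry on "fast"
    fallbacks=[{"quality": ["fast"]}],
)


# llm_pool/main.py -- sketch: minimal FastAPI service exposing the router.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class CompleteRequest(BaseModel):
    prompt: str
    group: str = "quality"  # route to "quality" or "fast" group


@app.get("/health")
async def health():
    return {"status": "ok"}


@app.post("/complete")
async def complete(req: CompleteRequest):
    # Router handles provider selection and fallback transparently
    resp = await router.acompletion(
        model=req.group,
        messages=[{"role": "user", "content": req.prompt}],
    )
    return {"text": resp.choices[0].message.content}
```

Serving on port 8004 would then be something like `uvicorn llm_pool.main:app --host 0.0.0.0 --port 8004`, matching the docker-compose stub.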
"""
|
|
Provider-specific customization package.
|
|
|
|
Reserved for future per-provider configuration, credential handling, and
|
|
provider-specific parameter transformations. Currently empty — the LiteLLM
|
|
Router in router.py is the sole configuration point.
|
|
"""
|