feat: make Ollama model configurable via OLLAMA_MODEL env var
- Add OLLAMA_MODEL setting to shared config (default: qwen3:32b)
- LLM router reads from settings instead of hardcoded model name
- Create .env file with all configurable settings documented
- docker-compose passes OLLAMA_MODEL to llm-pool container

To change the model: edit OLLAMA_MODEL in .env and restart llm-pool.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
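The docker-compose wiring described above can be sketched as the fragment below. The service name `llm-pool` comes from the commit message; the exact file layout and the use of `${VAR:-default}` interpolation are assumptions, not the repo's actual file:

```yaml
# docker-compose.yml (illustrative fragment, not the actual file)
services:
  llm-pool:
    environment:
      # Forward the value from .env / the host shell;
      # fall back to the config default when unset.
      OLLAMA_MODEL: ${OLLAMA_MODEL:-qwen3:32b}
```

With this in place, `docker compose up -d llm-pool` after editing `.env` is enough to switch models.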
@@ -112,6 +112,10 @@ class Settings(BaseSettings):
         default="http://localhost:11434",
         description="Ollama inference server base URL",
     )
+    ollama_model: str = Field(
+        default="qwen3:32b",
+        description="Ollama model to use for local inference (e.g., qwen3:32b, llama3.1:70b)",
+    )
 
     # -------------------------------------------------------------------------
     # Auth / Security
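A minimal sketch of the lookup the router performs, per the commit message ("reads from settings instead of hardcoded model name"). This is plain stdlib Python mimicking the behaviour of the pydantic `Field` shown in the diff; the function name `resolve_ollama_model` is hypothetical, not the repo's actual API:

```python
import os

# Default mirrors the Field(default=...) in the diff above.
DEFAULT_OLLAMA_MODEL = "qwen3:32b"

def resolve_ollama_model(env=None):
    """Return the model named by OLLAMA_MODEL, falling back to the default.

    `env` defaults to os.environ; it is a parameter only so the
    fallback behaviour is easy to exercise in isolation.
    """
    env = os.environ if env is None else env
    return env.get("OLLAMA_MODEL", DEFAULT_OLLAMA_MODEL)

print(resolve_ollama_model({}))                               # qwen3:32b
print(resolve_ollama_model({"OLLAMA_MODEL": "llama3.1:70b"}))  # llama3.1:70b
```

The indirection through settings is what lets the .env / docker-compose change take effect without touching router code.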