docs(01-foundation): create phase plan
@@ -29,7 +29,7 @@ Decimal phases appear between their surrounding integers in numeric order.
4. The LLM backend pool routes requests through LiteLLM to both Ollama (local) and Anthropic/OpenAI, with automatic fallback when a provider is unavailable
5. A new AI employee can be configured with a custom name, role, and persona — and that persona is reflected in responses
6. An operator can create tenants and design agents (name, role, persona, system prompt, tools, escalation rules) via the admin portal
-**Plans**: TBD
+**Plans**: 4 plans

Plans:
- [ ] 01-01: Monorepo scaffolding, Docker Compose dev environment, shared Pydantic models, DB schema with RLS
@@ -47,7 +47,7 @@ Plans:
3. The agent can invoke a registered tool (e.g., knowledge base search) and incorporate the result into its response
4. When a configured escalation rule triggers (e.g., failed resolution attempts), the conversation and full context are handed off to a human with no information lost
5. Every LLM call, tool invocation, and handoff event is recorded in an immutable audit trail queryable by tenant
-**Plans**: TBD
+**Plans**: 4 plans

Plans:
- [ ] 02-01: Conversational memory layer (Redis sliding window + pgvector long-term storage with HNSW index)
@@ -64,7 +64,7 @@ Plans:
3. A new tenant completes the full onboarding sequence (connect channel → configure agent → send test message) in under 15 minutes
4. An operator can subscribe, upgrade, and cancel their plan through Stripe — and feature limits are enforced automatically based on subscription state
5. The portal displays per-tenant agent cost and token usage, giving operators visibility into spending without requiring access to backend logs
-**Plans**: TBD
+**Plans**: 4 plans

Plans:
- [ ] 03-01: Channel connection wizard (Slack + WhatsApp), onboarding flow, portal polish
264 .planning/phases/01-foundation/01-01-PLAN.md Normal file
@@ -0,0 +1,264 @@
---
phase: 01-foundation
plan: 01
type: execute
wave: 1
depends_on: []
files_modified:
  - pyproject.toml
  - docker-compose.yml
  - .env.example
  - packages/shared/models/message.py
  - packages/shared/models/tenant.py
  - packages/shared/models/auth.py
  - packages/shared/db.py
  - packages/shared/rls.py
  - packages/shared/config.py
  - packages/shared/redis_keys.py
  - packages/shared/__init__.py
  - packages/shared/models/__init__.py
  - migrations/env.py
  - migrations/versions/001_initial_schema.py
  - tests/conftest.py
  - tests/unit/test_normalize.py
  - tests/unit/test_tenant_resolution.py
  - tests/unit/test_redis_namespacing.py
  - tests/integration/test_tenant_isolation.py
autonomous: true
requirements:
  - CHAN-01
  - TNNT-01
  - TNNT-02
  - TNNT-03
  - TNNT-04

must_haves:
  truths:
    - "KonstructMessage Pydantic model validates and normalizes a Slack event payload into the unified internal format"
    - "Tenant A cannot query Tenant B's rows from the agents or channel_connections tables — enforced at the PostgreSQL layer via RLS"
    - "A channel workspace ID resolves to the correct Konstruct tenant ID via the channel_connections table"
    - "All Redis keys include {tenant_id}: prefix — no bare keys are possible through the shared utility"
    - "PostgreSQL and Redis are reachable via Docker Compose with TLS-ready configuration"
  artifacts:
    - path: "packages/shared/models/message.py"
      provides: "KonstructMessage, ChannelType, SenderInfo, MessageContent Pydantic models"
      exports: ["KonstructMessage", "ChannelType", "SenderInfo", "MessageContent"]
    - path: "packages/shared/models/tenant.py"
      provides: "Tenant, Agent, ChannelConnection SQLAlchemy models with RLS"
      exports: ["Tenant", "Agent", "ChannelConnection"]
    - path: "packages/shared/db.py"
      provides: "Async SQLAlchemy engine, session factory, get_session dependency"
      exports: ["engine", "async_session_factory", "get_session"]
    - path: "packages/shared/rls.py"
      provides: "current_tenant_id ContextVar and SQLAlchemy event hook for SET LOCAL"
      exports: ["current_tenant_id", "configure_rls_hook"]
    - path: "packages/shared/redis_keys.py"
      provides: "Namespaced Redis key constructors"
      exports: ["rate_limit_key", "idempotency_key", "session_key"]
    - path: "migrations/versions/001_initial_schema.py"
      provides: "Initial DB schema with RLS policies and FORCE ROW LEVEL SECURITY"
      contains: "FORCE ROW LEVEL SECURITY"
    - path: "tests/integration/test_tenant_isolation.py"
      provides: "Two-tenant RLS isolation test"
      contains: "tenant_a.*tenant_b"
  key_links:
    - from: "packages/shared/rls.py"
      to: "packages/shared/db.py"
      via: "SQLAlchemy before_cursor_execute event hook on engine"
      pattern: "event\\.listens_for.*before_cursor_execute"
    - from: "migrations/versions/001_initial_schema.py"
      to: "packages/shared/models/tenant.py"
      via: "Schema must match SQLAlchemy model definitions"
      pattern: "CREATE TABLE (tenants|agents|channel_connections)"
    - from: "packages/shared/redis_keys.py"
      to: "Redis"
      via: "All key functions prepend tenant_id"
      pattern: "f\"{tenant_id}:"
---

<objective>
Scaffold the Python monorepo, Docker Compose dev environment, shared Pydantic/SQLAlchemy models, PostgreSQL schema with RLS tenant isolation, Redis namespacing utilities, and the foundational test infrastructure.

Purpose: Establish the secure multi-tenant data layer that every subsequent plan builds on. Tenant isolation is the most dangerous failure mode in Phase 1 — it must be proven correct before any channel or LLM code exists.

Output: Working monorepo with `uv` workspaces, Docker Compose running PostgreSQL 16 + Redis 7 + Ollama, shared data models (KonstructMessage, Tenant, Agent, ChannelConnection), Alembic migrations with RLS, Redis key namespacing, and green isolation tests.
</objective>

<execution_context>
@/home/adelorenzo/.claude/get-shit-done/workflows/execute-plan.md
@/home/adelorenzo/.claude/get-shit-done/templates/summary.md
</execution_context>

<context>
@.planning/PROJECT.md
@.planning/ROADMAP.md
@.planning/STATE.md
@.planning/phases/01-foundation/01-CONTEXT.md
@.planning/phases/01-foundation/01-RESEARCH.md
@CLAUDE.md
</context>

<tasks>

<task type="auto">
<name>Task 1: Monorepo scaffolding, Docker Compose, and shared data models</name>
<files>
pyproject.toml,
docker-compose.yml,
.env.example,
packages/shared/__init__.py,
packages/shared/models/__init__.py,
packages/shared/models/message.py,
packages/shared/models/tenant.py,
packages/shared/models/auth.py,
packages/shared/config.py,
packages/shared/db.py,
packages/shared/rls.py,
packages/shared/redis_keys.py
</files>
<action>
1. Initialize the Python monorepo with `uv init` and configure `pyproject.toml` with workspace members for packages/shared, packages/gateway, packages/router, packages/orchestrator, packages/llm-pool. Add core dependencies: fastapi[standard], pydantic[email], sqlalchemy[asyncio], asyncpg, alembic, litellm, redis, celery[redis], slack-bolt, httpx, slowapi. Add dev dependencies: ruff, mypy, pytest, pytest-asyncio, pytest-httpx. Configure `[tool.pytest.ini_options]` with `asyncio_mode = "auto"` and `testpaths = ["tests"]`. Configure `[tool.ruff]` with line-length=120 and basic rules.

2. Create `docker-compose.yml` with services:
   - `postgres`: PostgreSQL 16 with `POSTGRES_DB=konstruct`, creates the `konstruct_app` role via an init script, port 5432
   - `redis`: Redis 7, port 6379
   - `ollama`: Ollama with GPU optional (deploy.resources.reservations.devices with count:all, but the service starts regardless), port 11434
   - Shared `konstruct-net` bridge network
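
To make the service layout concrete, here is a minimal sketch of what that compose file could look like. Image tags, the volume mount, and the init-script path are illustrative assumptions, not the final configuration; the optional GPU reservation for `ollama` is omitted for brevity:

```yaml
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: konstruct
    ports: ["5432:5432"]
    volumes:
      # Assumed location for the init script that creates the konstruct_app role
      - ./docker/initdb:/docker-entrypoint-initdb.d
    networks: [konstruct-net]
  redis:
    image: redis:7
    ports: ["6379:6379"]
    networks: [konstruct-net]
  ollama:
    image: ollama/ollama
    ports: ["11434:11434"]
    networks: [konstruct-net]
networks:
  konstruct-net:
    driver: bridge
```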

3. Create `.env.example` with all required environment variables: DATABASE_URL (using the konstruct_app role, not the postgres superuser), REDIS_URL, SLACK_BOT_TOKEN, SLACK_SIGNING_SECRET, ANTHROPIC_API_KEY, OPENAI_API_KEY, OLLAMA_BASE_URL, AUTH_SECRET.

4. Create `packages/shared/config.py` using Pydantic Settings to load all env vars with sensible defaults for local dev.

5. Create `packages/shared/models/message.py` with the KonstructMessage Pydantic model exactly as specified in RESEARCH.md: ChannelType (StrEnum: slack, whatsapp, mattermost), SenderInfo, MessageContent, KonstructMessage. The `tenant_id` field is `str | None = None` (populated by the Router after tenant resolution).

6. Create `packages/shared/models/tenant.py` with SQLAlchemy 2.0 async models:
   - `Tenant`: id (UUID PK), name (str, unique), slug (str, unique), settings (JSON), created_at, updated_at
   - `Agent`: id (UUID PK), tenant_id (FK to Tenant, NOT NULL), name (str), role (str), persona (text), system_prompt (text), model_preference (str, default "quality"), tool_assignments (JSON, default []), escalation_rules (JSON, default []), is_active (bool, default True), created_at, updated_at
   - `ChannelConnection`: id (UUID PK), tenant_id (FK to Tenant, NOT NULL), channel_type (ChannelType enum), workspace_id (str, unique per channel_type), config (JSON — stores bot tokens and channel IDs per tenant), created_at
   Use SQLAlchemy 2.0 `Mapped[]` and `mapped_column()` style — never 1.x `Column()` style.

7. Create `packages/shared/models/auth.py` with a `PortalUser` SQLAlchemy model: id (UUID PK), email (str, unique), hashed_password (str), name (str), is_admin (bool, default False), created_at, updated_at. This is for portal authentication (Auth.js v5 will validate against this via a FastAPI endpoint).

8. Create `packages/shared/db.py` with an async SQLAlchemy engine (asyncpg driver) and session factory. Use `create_async_engine` with `DATABASE_URL` from config. Export `get_session` as an async generator for FastAPI dependency injection.

9. Create `packages/shared/rls.py` with a `current_tenant_id` ContextVar and a `configure_rls_hook(engine)` function that registers a `before_cursor_execute` event listener which applies the tenant setting whenever `current_tenant_id` is set. CRITICAL: never interpolate tenant_id into the SQL string. PostgreSQL does not accept bind parameters in a plain `SET LOCAL` statement under asyncpg, so use the parameterized equivalent — `SELECT set_config('app.current_tenant', <tenant_id param>, true)`, where the final `true` makes the setting transaction-local, matching SET LOCAL semantics.

10. Create `packages/shared/redis_keys.py` with typed key constructor functions: `rate_limit_key(tenant_id, channel)`, `idempotency_key(tenant_id, message_id)`, `session_key(tenant_id, thread_id)`, `engaged_thread_key(tenant_id, thread_id)`. Every function prepends `{tenant_id}:`. No Redis key should ever be constructable without a tenant_id.
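
A sketch of these constructors. The `ratelimit` and `dedup` segments match the expected values in the Task 2 unit tests; the `session` and `engaged` segment names are assumptions:

```python
def _key(tenant_id: str, *parts: str) -> str:
    # Every key is forced through this constructor, so a bare
    # (un-namespaced) key cannot be built by accident.
    if not tenant_id:
        raise ValueError("tenant_id is required for every Redis key")
    return ":".join((tenant_id, *parts))

def rate_limit_key(tenant_id: str, channel: str) -> str:
    return _key(tenant_id, "ratelimit", channel)

def idempotency_key(tenant_id: str, message_id: str) -> str:
    return _key(tenant_id, "dedup", message_id)

def session_key(tenant_id: str, thread_id: str) -> str:
    return _key(tenant_id, "session", thread_id)

def engaged_thread_key(tenant_id: str, thread_id: str) -> str:
    return _key(tenant_id, "engaged", thread_id)
```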

11. Create minimal `__init__.py` files for packages/shared and packages/shared/models with appropriate re-exports.
</action>
<verify>
<automated>cd /home/adelorenzo/repos/konstruct && uv sync && python -c "from packages.shared.models.message import KonstructMessage; from packages.shared.models.tenant import Tenant, Agent, ChannelConnection; from packages.shared.redis_keys import rate_limit_key; print('imports OK')"</automated>
</verify>
<done>
- pyproject.toml configures uv workspaces with all dependencies
- docker-compose.yml defines PostgreSQL 16, Redis 7, and Ollama services
- KonstructMessage, Tenant, Agent, ChannelConnection, PortalUser models importable
- RLS hook configurable on the engine
- All Redis key functions require a tenant_id parameter
- .env.example documents all required env vars with the konstruct_app role (not the postgres superuser)
</done>
</task>

<task type="auto">
<name>Task 2: Alembic migrations with RLS and tenant isolation tests</name>
<files>
migrations/env.py,
migrations/script.py.mako,
migrations/versions/001_initial_schema.py,
tests/conftest.py,
tests/unit/__init__.py,
tests/unit/test_normalize.py,
tests/unit/test_tenant_resolution.py,
tests/unit/test_redis_namespacing.py,
tests/integration/__init__.py,
tests/integration/test_tenant_isolation.py
</files>
<action>
1. Initialize Alembic with `alembic init migrations`. Modify `migrations/env.py` to use the async engine (asyncpg) — follow the SQLAlchemy 2.0 async Alembic pattern with `run_async_migrations()`. Import the SQLAlchemy Base metadata from `packages/shared/models/tenant.py` so autogenerate works.

2. Create `migrations/versions/001_initial_schema.py`:
   - Create the `konstruct_app` role: `CREATE ROLE konstruct_app WITH LOGIN PASSWORD 'konstruct_dev'` (dev password; .env overrides in prod)
   - Create tables: tenants, agents, channel_connections, portal_users — matching the SQLAlchemy models from Task 1
   - Apply RLS to tenant-scoped tables (agents, channel_connections):
     ```sql
     ALTER TABLE agents ENABLE ROW LEVEL SECURITY;
     ALTER TABLE agents FORCE ROW LEVEL SECURITY;
     CREATE POLICY tenant_isolation ON agents USING (tenant_id = current_setting('app.current_tenant')::uuid);
     ```
     Same pattern for channel_connections.
   - Do NOT apply RLS to the `tenants` table itself (platform admin needs to list all tenants) or the `portal_users` table.
   - GRANT SELECT, INSERT, UPDATE, DELETE on all tables to `konstruct_app`.
   - GRANT USAGE ON SCHEMA public TO `konstruct_app`.

3. Create `tests/conftest.py` with shared fixtures:
   - `db_engine`: Creates a test PostgreSQL database, runs migrations, yields an async engine connected as `konstruct_app` (not the postgres superuser), drops the test DB after
   - `db_session`: Async session from the engine with the RLS hook configured
   - `tenant_a` / `tenant_b`: Two-tenant fixture — creates two tenants, yields their IDs
   - `redis_client`: Creates a real Redis connection (or fakeredis if Docker is not available) scoped to a test prefix
   - Use `pytest.mark.asyncio` for async tests (auto mode from the pyproject.toml config)
   - IMPORTANT: The test DB connection MUST use the `konstruct_app` role to actually exercise RLS. If it uses the postgres superuser, RLS is bypassed and the tests are worthless.

4. Create `tests/unit/test_normalize.py` (CHAN-01):
   - Test that a raw Slack `message` event payload normalizes to a valid KonstructMessage
   - Test that ChannelType is set to "slack"
   - Test that sender info is extracted correctly
   - Test that thread_id is populated from Slack's `thread_ts`
   - Test that channel_metadata contains workspace_id

5. Create `tests/unit/test_tenant_resolution.py` (TNNT-02):
   - Test that given a workspace_id and channel_type, the correct tenant_id is returned from a mock channel_connections lookup
   - Test that an unknown workspace_id returns None
   - Test that a workspace_id from the wrong channel_type doesn't match

6. Create `tests/unit/test_redis_namespacing.py` (TNNT-03):
   - Test that `rate_limit_key("tenant-a", "slack")` returns `"tenant-a:ratelimit:slack"`
   - Test that `idempotency_key("tenant-a", "msg-123")` returns `"tenant-a:dedup:msg-123"`
   - Test that all key functions include the tenant_id prefix
   - Test that no function can produce a key without a tenant_id

7. Create `tests/integration/test_tenant_isolation.py` (TNNT-01) — THIS IS THE MOST CRITICAL TEST IN PHASE 1:
   - Uses the two-tenant fixture (tenant_a, tenant_b)
   - Creates an agent for tenant_a
   - Sets current_tenant_id to tenant_b
   - Queries the agents table — MUST return zero rows (tenant_b cannot see tenant_a's agent)
   - Sets current_tenant_id to tenant_a
   - Queries the agents table — MUST return one row
   - Repeat for the channel_connections table
   - Verify with `SELECT relforcerowsecurity FROM pg_class WHERE relname = 'agents'` — must be True

8. Create empty `__init__.py` files for tests/unit/ and tests/integration/.
</action>
<verify>
<automated>cd /home/adelorenzo/repos/konstruct && docker compose up -d postgres redis && sleep 3 && alembic upgrade head && pytest tests/unit -x -q && pytest tests/integration/test_tenant_isolation.py -x -q</automated>
</verify>
<done>
- Alembic migration creates all tables with RLS policies and FORCE ROW LEVEL SECURITY
- konstruct_app role exists and is used by all application connections
- Unit tests pass for KonstructMessage normalization, tenant resolution logic, and Redis namespacing
- Integration test proves tenant_a cannot see tenant_b's data through PostgreSQL RLS
- `relforcerowsecurity` is True for the agents and channel_connections tables
</done>
</task>

</tasks>

<verification>
- `docker compose up -d` starts PostgreSQL 16, Redis 7, and Ollama without errors
- `alembic upgrade head` applies the initial schema with RLS
- `pytest tests/unit -x -q` passes all unit tests (normalize, tenant resolution, Redis namespacing)
- `pytest tests/integration/test_tenant_isolation.py -x -q` proves RLS isolation
- All imports from packages/shared work correctly
- No Redis key can be constructed without a tenant_id
</verification>

<success_criteria>
- Green test suite proving tenant A cannot access tenant B's data
- KonstructMessage model validates Slack event payloads
- Docker Compose dev environment boots cleanly
- All subsequent plans can import from packages/shared without modification
</success_criteria>

<output>
After completion, create `.planning/phases/01-foundation/01-01-SUMMARY.md`
</output>
277 .planning/phases/01-foundation/01-02-PLAN.md Normal file
@@ -0,0 +1,277 @@
---
phase: 01-foundation
plan: 02
type: execute
wave: 2
depends_on: ["01-01"]
files_modified:
  - packages/llm-pool/__init__.py
  - packages/llm-pool/main.py
  - packages/llm-pool/router.py
  - packages/llm-pool/providers/__init__.py
  - packages/orchestrator/__init__.py
  - packages/orchestrator/main.py
  - packages/orchestrator/tasks.py
  - packages/orchestrator/agents/__init__.py
  - packages/orchestrator/agents/builder.py
  - packages/orchestrator/agents/runner.py
  - docker-compose.yml
  - tests/integration/test_llm_fallback.py
  - tests/integration/test_llm_providers.py
autonomous: true
requirements:
  - LLM-01
  - LLM-02

must_haves:
  truths:
    - "A completion request to the LLM pool service returns an LLM-generated response from the configured provider"
    - "When the primary provider is unavailable, the LLM pool automatically falls back to the next provider in the chain"
    - "Both Ollama (local) and Anthropic/OpenAI (commercial) are configured as available providers"
    - "Celery worker dispatches handle_message tasks asynchronously without blocking the caller"
  artifacts:
    - path: "packages/llm-pool/main.py"
      provides: "FastAPI service exposing /complete endpoint"
      exports: ["app"]
    - path: "packages/llm-pool/router.py"
      provides: "LiteLLM Router with model groups and fallback chains"
      exports: ["llm_router", "complete"]
    - path: "packages/orchestrator/tasks.py"
      provides: "Celery task handle_message (sync def, uses asyncio.run)"
      exports: ["handle_message"]
    - path: "packages/orchestrator/agents/builder.py"
      provides: "System prompt assembly from agent persona fields"
      exports: ["build_system_prompt"]
    - path: "packages/orchestrator/agents/runner.py"
      provides: "LLM call via llm-pool HTTP endpoint, response parsing"
      exports: ["run_agent"]
  key_links:
    - from: "packages/orchestrator/agents/runner.py"
      to: "packages/llm-pool/main.py"
      via: "HTTP POST to /complete endpoint"
      pattern: "httpx.*llm.pool.*complete"
    - from: "packages/orchestrator/tasks.py"
      to: "packages/orchestrator/agents/runner.py"
      via: "Celery task calls run_agent"
      pattern: "run_agent"
    - from: "packages/llm-pool/router.py"
      to: "LiteLLM"
      via: "Router.acompletion() with model_list and fallbacks"
      pattern: "router\\.acompletion"
---

<objective>
Build the LLM Backend Pool service (LiteLLM Router with Ollama + Anthropic/OpenAI providers and fallback routing) and the Celery-based Agent Orchestrator skeleton (async task dispatch, system prompt assembly, LLM call via the pool).

Purpose: Provide the LLM inference layer that the Channel Gateway (Plan 03) will dispatch work to. Establishes the critical Celery sync-def pattern and the LiteLLM Router configuration before any channel integration exists.

Output: Running LLM pool FastAPI service on port 8002, a Celery worker processing handle_message tasks, the system prompt builder, and green integration tests for fallback routing.
</objective>

<execution_context>
@/home/adelorenzo/.claude/get-shit-done/workflows/execute-plan.md
@/home/adelorenzo/.claude/get-shit-done/templates/summary.md
</execution_context>

<context>
@.planning/PROJECT.md
@.planning/ROADMAP.md
@.planning/STATE.md
@.planning/phases/01-foundation/01-CONTEXT.md
@.planning/phases/01-foundation/01-RESEARCH.md
@.planning/phases/01-foundation/01-01-SUMMARY.md

<interfaces>
<!-- From Plan 01 — shared models and DB layer the orchestrator depends on -->

From packages/shared/models/message.py:
```python
class ChannelType(StrEnum):
    SLACK = "slack"
    WHATSAPP = "whatsapp"
    MATTERMOST = "mattermost"

class KonstructMessage(BaseModel):
    id: str
    tenant_id: str | None = None
    channel: ChannelType
    channel_metadata: dict
    sender: SenderInfo
    content: MessageContent
    timestamp: datetime
    thread_id: str | None = None
    reply_to: str | None = None
    context: dict = Field(default_factory=dict)
```

From packages/shared/models/tenant.py:
```python
class Agent(Base):
    id: Mapped[uuid.UUID]
    tenant_id: Mapped[uuid.UUID]
    name: Mapped[str]
    role: Mapped[str]
    persona: Mapped[str | None]
    system_prompt: Mapped[str | None]
    model_preference: Mapped[str]  # "quality" | "fast"
    tool_assignments: Mapped[list]  # JSON
    escalation_rules: Mapped[list]  # JSON
    is_active: Mapped[bool]
```

From packages/shared/db.py:
```python
async def get_session() -> AsyncGenerator[AsyncSession, None]: ...
```

From packages/shared/config.py:
```python
class Settings(BaseSettings):
    anthropic_api_key: str
    openai_api_key: str
    ollama_base_url: str = "http://ollama:11434"
    redis_url: str = "redis://redis:6379/0"
    # ...
```
</interfaces>
</context>

<tasks>

<task type="auto">
<name>Task 1: LLM Backend Pool service with LiteLLM Router and fallback</name>
<files>
packages/llm-pool/__init__.py,
packages/llm-pool/main.py,
packages/llm-pool/router.py,
packages/llm-pool/providers/__init__.py,
docker-compose.yml
</files>
<action>
1. Create `packages/llm-pool/router.py`:
   - Configure a LiteLLM `Router` with a `model_list` containing three model entries:
     - `"fast"` group: `ollama/qwen3:8b` pointing to `settings.ollama_base_url`
     - `"quality"` group: `anthropic/claude-sonnet-4-20250514` with `settings.anthropic_api_key`
     - `"quality"` group (fallback): `openai/gpt-4o` with `settings.openai_api_key`
   - Configure `fallbacks=[{"quality": ["fast"]}]` — if all quality providers fail, fall back to fast
   - Set `routing_strategy="latency-based-routing"`, `num_retries=2`, `set_verbose=False`
   - Pin LiteLLM to `1.82.5` in pyproject.toml (not latest — September 2025 OOM issue)
   - Export an async `complete(model_group: str, messages: list[dict], tenant_id: str)` function that calls `router.acompletion()` and returns the response content string. Include tenant_id in the metadata for cost tracking.
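
The fallback behavior being configured — try every deployment in the requested group, then walk the configured fallback groups — amounts to the following, sketched in plain Python with hypothetical provider callables rather than the actual LiteLLM API:

```python
import asyncio

class AllProvidersUnavailable(Exception):
    pass

async def try_providers(providers, messages):
    # Try each deployment in a model group in order; first success wins.
    last_exc = None
    for call in providers:
        try:
            return await call(messages)
        except Exception as exc:  # provider down, timeout, rate limit, ...
            last_exc = exc
    raise AllProvidersUnavailable("group exhausted") from last_exc

async def complete(model_group, messages, groups, fallbacks):
    # groups maps a group name to provider callables, e.g.
    # {"quality": [anthropic_call, openai_call], "fast": [ollama_call]}.
    # fallbacks mirrors LiteLLM's fallbacks=[{"quality": ["fast"]}] config.
    chain = [model_group, *fallbacks.get(model_group, [])]
    for group in chain:
        try:
            return await try_providers(groups[group], messages)
        except AllProvidersUnavailable:
            continue
    raise AllProvidersUnavailable(f"no provider in {chain} responded")
```

When the whole chain is exhausted, the service surfaces this as the 503 described in step 2 below.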

2. Create `packages/llm-pool/main.py`:
   - FastAPI app on port 8002
   - `POST /complete` endpoint accepting `{ model: str, messages: list[dict], tenant_id: str }` — model is the group name ("quality" or "fast"); messages is the OpenAI-format message list
   - Returns `{ content: str, model: str, usage: { prompt_tokens: int, completion_tokens: int } }`
   - `GET /health` endpoint returning `{ status: "ok" }`
   - Error handling: if LiteLLM raises an exception (all providers down), return 503 with `{ error: "All providers unavailable" }`

3. Update `docker-compose.yml` to add the `llm-pool` service:
   - Build from packages/llm-pool or use a uvicorn command
   - Port 8002
   - Depends on: ollama, redis
   - Environment: all LLM-related env vars from .env

4. Create `packages/llm-pool/providers/__init__.py` — empty for now, prepared for future per-provider customization.

5. Create `packages/llm-pool/__init__.py` with minimal exports.
</action>
<verify>
<automated>cd /home/adelorenzo/repos/konstruct && python -c "from packages.llm_pool.router import complete; from packages.llm_pool.main import app; print('LLM pool imports OK')"</automated>
</verify>
<done>
- LiteLLM Router configured with fast (Ollama) and quality (Anthropic + OpenAI) model groups
- Fallback chain: quality providers -> fast
- /complete endpoint accepts model group, messages, tenant_id and returns the LLM response
- LiteLLM pinned to 1.82.5
- Docker Compose includes the llm-pool service
</done>
</task>

<task type="auto">
<name>Task 2: Celery orchestrator with system prompt builder and integration tests</name>
<files>
packages/orchestrator/__init__.py,
packages/orchestrator/main.py,
packages/orchestrator/tasks.py,
packages/orchestrator/agents/__init__.py,
packages/orchestrator/agents/builder.py,
packages/orchestrator/agents/runner.py,
tests/integration/test_llm_fallback.py,
tests/integration/test_llm_providers.py
</files>
<action>
1. Create `packages/orchestrator/main.py`:
   - Celery app configured with the Redis broker (`settings.redis_url`)
   - Result backend: Redis
   - Include tasks from `packages.orchestrator.tasks`

2. Create `packages/orchestrator/tasks.py`:
   - CRITICAL PATTERN: All Celery tasks MUST be `def` (synchronous), NOT `async def`.
   - `@app.task def handle_message(message_data: dict) -> dict`: Deserializes message_data into a KonstructMessage, calls `asyncio.run(_process_message(msg))`, returns a result dict.
   - `async def _process_message(msg: KonstructMessage) -> dict`: Loads the agent config from the DB (using tenant_id + RLS), builds the system prompt, calls the LLM pool, returns the response content.
   - Add a clear comment block at the top: "# CELERY TASKS MUST BE SYNC def — async def causes RuntimeError or a silent hang. Use asyncio.run() for async work."
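
The sync-def bridge pattern looks like this. The `@app.task` decorator is omitted so the sketch stays self-contained, and `_process_message` is stubbed where the real code would load the agent and call the LLM pool:

```python
import asyncio

# CELERY TASKS MUST BE SYNC def — an async def task body is returned to the
# worker as an un-awaited coroutine, causing errors or a silent hang.
# Bridge into async code with asyncio.run() inside the sync task.

async def _process_message(message_data: dict) -> dict:
    # Stub for the real pipeline: load agent (RLS-scoped), build the
    # system prompt, call the LLM pool, return the response content.
    await asyncio.sleep(0)
    return {"status": "ok", "message_id": message_data.get("id")}

# In tasks.py this function carries the @app.task decorator from the
# Celery app; it is left off here so the sketch runs standalone.
def handle_message(message_data: dict) -> dict:
    return asyncio.run(_process_message(message_data))
```

Each task invocation gets its own event loop via `asyncio.run()`, which is simple and safe for the prefork worker pool.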

3. Create `packages/orchestrator/agents/builder.py`:
   - `build_system_prompt(agent: Agent) -> str`: Assembles the system prompt from agent fields:
     - Starts with agent.system_prompt if provided
     - Appends persona context: "Your name is {agent.name}. Your role is {agent.role}."
     - If agent.persona is set, appends: "Persona: {agent.persona}"
     - Appends the AI transparency clause: "If asked directly whether you are an AI, always respond honestly that you are an AI assistant."
     - Per user decision: professional + warm tone is the default persona
   - `build_messages(system_prompt: str, user_message: str, history: list[dict] | None = None) -> list[dict]`: Returns an OpenAI-format messages list with the system prompt, optional history, and the user message.
|
||||
|
||||
4. Create `packages/orchestrator/agents/runner.py`:
|
||||
- `async def run_agent(msg: KonstructMessage, agent: Agent) -> str`: Builds system prompt, constructs messages, calls LLM pool via `httpx.AsyncClient` POST to `http://llm-pool:8002/complete` with `{ model: agent.model_preference, messages: messages, tenant_id: msg.tenant_id }`. Returns the content string from the response.
|
||||
- Handle errors: If LLM pool returns non-200, log error and return a polite fallback message ("I'm having trouble processing your request right now. Please try again in a moment.").
|
||||
|
||||
5. Create `tests/integration/test_llm_fallback.py` (LLM-01):
|
||||
- Mock LiteLLM Router to simulate primary provider failure
|
||||
- Verify that when "quality" primary (Anthropic) raises an exception, the request automatically retries with fallback (OpenAI), then falls back to "fast" (Ollama)
|
||||
- Test that a successful fallback still returns a valid response
|
||||
- Test that when ALL providers fail, a 503 is returned
|
||||
|
||||
6. Create `tests/integration/test_llm_providers.py` (LLM-02):
|
||||
- Mock LiteLLM Router to verify both Ollama and commercial API configurations are present
|
||||
- Test that a request with model="fast" routes to Ollama
|
||||
- Test that a request with model="quality" routes to Anthropic or OpenAI
|
||||
- Verify the model_list contains entries for all three providers
|
||||
|
||||
7. Create `__init__.py` files for orchestrator and orchestrator/agents packages.
|
||||
|
||||
8. Update docker-compose.yml to add `celery-worker` service:
|
||||
- Command: `celery -A packages.orchestrator.main worker --loglevel=info`
|
||||
- Depends on: redis, postgres, llm-pool
|
||||
</action>
|
||||
<verify>
|
||||
<automated>cd /home/adelorenzo/repos/konstruct && pytest tests/integration/test_llm_fallback.py tests/integration/test_llm_providers.py -x -q</automated>
|
||||
</verify>
|
||||
<done>
|
||||
- Celery worker starts and accepts handle_message tasks
|
||||
- All Celery tasks are sync def with asyncio.run() pattern (never async def)
|
||||
- System prompt builder assembles persona, role, name, and AI transparency clause
|
||||
- LLM pool fallback: quality -> fast verified by integration tests
|
||||
- Both Ollama and commercial providers configured and routable
|
||||
- handle_message pipeline: deserialize -> load agent -> build prompt -> call LLM pool -> return response
|
||||
</done>
|
||||
</task>
|
||||
|
||||
</tasks>
|
||||
|
||||
<verification>
- `pytest tests/integration/test_llm_fallback.py -x` proves fallback routing works
- `pytest tests/integration/test_llm_providers.py -x` proves both local and commercial providers are configured
- LLM pool /complete endpoint returns valid responses
- Celery worker processes handle_message tasks without RuntimeError
- No `async def` Celery tasks exist (grep confirms)
</verification>
<success_criteria>
- LLM Backend Pool routes requests through LiteLLM to configured providers with automatic fallback
- Celery orchestrator dispatches and completes handle_message tasks asynchronously
- System prompt reflects agent's name, role, persona, and AI transparency clause
- All tests green
</success_criteria>
<output>
After completion, create `.planning/phases/01-foundation/01-02-SUMMARY.md`
</output>
287
.planning/phases/01-foundation/01-03-PLAN.md
Normal file
@@ -0,0 +1,287 @@
---
phase: 01-foundation
plan: 03
type: execute
wave: 3
depends_on: ["01-01", "01-02"]
files_modified:
- packages/gateway/__init__.py
- packages/gateway/main.py
- packages/gateway/channels/__init__.py
- packages/gateway/channels/slack.py
- packages/gateway/normalize.py
- packages/gateway/verify.py
- packages/router/__init__.py
- packages/router/main.py
- packages/router/tenant.py
- packages/router/ratelimit.py
- packages/router/idempotency.py
- packages/router/context.py
- docker-compose.yml
- tests/unit/test_ratelimit.py
- tests/integration/test_slack_flow.py
- tests/integration/test_agent_persona.py
- tests/integration/test_ratelimit.py
autonomous: true
requirements:
- CHAN-02
- CHAN-05
- AGNT-01
must_haves:
  truths:
    - "A Slack @mention or DM to the AI employee triggers an LLM-generated response posted back in the same Slack thread"
    - "The Slack event handler returns HTTP 200 within 3 seconds — LLM work is dispatched to Celery, not done inline"
    - "A request exceeding the per-tenant or per-channel rate limit is rejected with an informative Slack message rather than silently dropped"
    - "The agent's response reflects the configured name, role, and persona from the Agent table"
    - "A typing indicator (placeholder message) appears while the LLM is generating"
  artifacts:
    - path: "packages/gateway/main.py"
      provides: "FastAPI app with /slack/events endpoint mounting slack-bolt AsyncApp"
      exports: ["app"]
    - path: "packages/gateway/channels/slack.py"
      provides: "Slack event handlers for @mentions and DMs, dispatches to Celery"
      exports: ["register_slack_handlers"]
    - path: "packages/gateway/normalize.py"
      provides: "Slack event -> KonstructMessage normalization"
      exports: ["normalize_slack_event"]
    - path: "packages/router/tenant.py"
      provides: "Workspace ID -> tenant_id resolution from DB"
      exports: ["resolve_tenant"]
    - path: "packages/router/ratelimit.py"
      provides: "Redis token bucket rate limiter per tenant per channel"
      exports: ["check_rate_limit", "RateLimitExceeded"]
    - path: "packages/router/idempotency.py"
      provides: "Redis-based message deduplication"
      exports: ["is_duplicate", "mark_processed"]
  key_links:
    - from: "packages/gateway/channels/slack.py"
      to: "packages/orchestrator/tasks.py"
      via: "handle_message_task.delay(msg.model_dump())"
      pattern: "handle_message.*delay"
    - from: "packages/gateway/channels/slack.py"
      to: "packages/router/tenant.py"
      via: "resolve_tenant(workspace_id, channel_type)"
      pattern: "resolve_tenant"
    - from: "packages/gateway/channels/slack.py"
      to: "packages/router/ratelimit.py"
      via: "check_rate_limit(tenant_id, channel) before dispatch"
      pattern: "check_rate_limit"
    - from: "packages/orchestrator/agents/runner.py"
      to: "Slack API"
      via: "chat.update to replace placeholder with real response"
      pattern: "chat_update|chat\\.update"
---
<objective>
Build the Channel Gateway (Slack adapter with slack-bolt AsyncApp), Message Router (tenant resolution, rate limiting, idempotency), and wire them to the Celery orchestrator from Plan 02 to complete the end-to-end Slack message -> LLM response flow.

Purpose: Close the vertical loop — a Slack user @mentions the AI employee, a response appears in-thread. This is the core value demonstration of the entire platform.

Output: Working Slack integration where @mentions and DMs trigger LLM responses in-thread, with rate limiting, tenant resolution, deduplication, and a typing indicator.
</objective>
<execution_context>
@/home/adelorenzo/.claude/get-shit-done/workflows/execute-plan.md
@/home/adelorenzo/.claude/get-shit-done/templates/summary.md
</execution_context>
<context>
@.planning/PROJECT.md
@.planning/ROADMAP.md
@.planning/STATE.md
@.planning/phases/01-foundation/01-CONTEXT.md
@.planning/phases/01-foundation/01-RESEARCH.md
@.planning/phases/01-foundation/01-01-SUMMARY.md
@.planning/phases/01-foundation/01-02-SUMMARY.md
<interfaces>
<!-- From Plan 01 — shared models -->

From packages/shared/models/message.py:
```python
class KonstructMessage(BaseModel):
    id: str
    tenant_id: str | None = None
    channel: ChannelType
    channel_metadata: dict
    sender: SenderInfo
    content: MessageContent
    timestamp: datetime
    thread_id: str | None = None
```

From packages/shared/redis_keys.py:
```python
def rate_limit_key(tenant_id: str, channel: str) -> str: ...
def idempotency_key(tenant_id: str, message_id: str) -> str: ...
def engaged_thread_key(tenant_id: str, thread_id: str) -> str: ...
```

From packages/shared/rls.py:
```python
current_tenant_id: ContextVar[str | None]
```
<!-- From Plan 02 — orchestrator tasks and agent runner -->

From packages/orchestrator/tasks.py:
```python
@app.task
def handle_message(message_data: dict) -> dict: ...
```

From packages/orchestrator/agents/builder.py:
```python
def build_system_prompt(agent: Agent) -> str: ...
def build_messages(system_prompt: str, user_message: str, history: list[dict] | None = None) -> list[dict]: ...
```
</interfaces>
</context>
<tasks>

<task type="auto">
<name>Task 1: Channel Gateway (Slack adapter) and Message Router</name>
<files>
packages/gateway/__init__.py,
packages/gateway/main.py,
packages/gateway/channels/__init__.py,
packages/gateway/channels/slack.py,
packages/gateway/normalize.py,
packages/gateway/verify.py,
packages/router/__init__.py,
packages/router/main.py,
packages/router/tenant.py,
packages/router/ratelimit.py,
packages/router/idempotency.py,
packages/router/context.py,
docker-compose.yml
</files>
<action>
1. Create `packages/gateway/normalize.py`:
- `normalize_slack_event(event: dict, workspace_id: str) -> KonstructMessage`: Converts a Slack Events API payload into a KonstructMessage. Extracts: user ID, text, thread_ts -> thread_id, channel ID, workspace_id into channel_metadata. Sets channel=ChannelType.SLACK.
- Handle both @mention events (strip the `<@BOT_ID>` prefix from text) and DM events.

2. Create `packages/gateway/channels/slack.py`:
- `register_slack_handlers(slack_app: AsyncApp)`: Registers event handlers on the slack-bolt AsyncApp.
- Handle `app_mention` event: Normalize message, resolve tenant, check rate limit, check idempotency, post placeholder "Thinking..." message in thread (typing indicator per user decision), dispatch to Celery with `handle_message_task.delay(msg.model_dump() | {"placeholder_ts": placeholder_msg["ts"], "channel_id": event["channel"]})`.
- Handle `message` event (DMs only — filter `channel_type == "im"`): Same flow as app_mention.
- Thread follow-up behavior (Claude's discretion): Implement auto-follow for engaged threads. After the first @mention in a thread, store `engaged_thread_key(tenant_id, thread_id)` in Redis with a 30-minute TTL. Subsequent messages in that thread (even without @mention) trigger a response. Per research recommendation.
- If the rate limit is exceeded: Post an ephemeral message to the user: "I'm receiving too many requests right now. Please try again in a moment." Do NOT dispatch to Celery.
- If tenant resolution fails (unknown workspace): Log a warning and ignore the event silently.
- CRITICAL: Return HTTP 200 immediately. NO LLM work inside the handler. Slack retries any event not acknowledged within 3 seconds.
3. Create `packages/gateway/main.py`:
- FastAPI app mounting the slack-bolt AsyncApp via `AsyncSlackRequestHandler`
- `POST /slack/events` endpoint handled by the slack handler
- `GET /health` endpoint
- Port 8001

4. Create `packages/router/tenant.py`:
- `async def resolve_tenant(workspace_id: str, channel_type: ChannelType, session: AsyncSession) -> str | None`: Queries the `channel_connections` table for a matching workspace_id + channel_type, returns tenant_id or None. Uses an RLS-free query (tenant resolution must work across all tenants — this is the one pre-RLS operation).

5. Create `packages/router/ratelimit.py`:
- `async def check_rate_limit(tenant_id: str, channel: str, redis: Redis) -> bool`: Implements a token bucket using Redis. Uses `rate_limit_key(tenant_id, channel)` from shared redis_keys. Default: 30 requests per minute per tenant per channel (configurable). Returns True if allowed, raises `RateLimitExceeded` if not.
- `class RateLimitExceeded(Exception)`: Custom exception with a remaining_seconds attribute.
6. Create `packages/router/idempotency.py`:
- `async def is_duplicate(tenant_id: str, message_id: str, redis: Redis) -> bool`: Checks Redis for `idempotency_key(tenant_id, message_id)`. If it exists, return True (duplicate). Otherwise, set it with a 24-hour TTL and return False.

7. Create `packages/router/context.py`:
- `async def load_agent_for_tenant(tenant_id: str, session: AsyncSession) -> Agent | None`: Loads the active agent for the tenant (Phase 1 = single agent per tenant). Sets the `current_tenant_id` ContextVar before querying.

8. Update the Celery `handle_message` task in `packages/orchestrator/tasks.py` (or instruct the Plan 02 SUMMARY reader to expect this):
- After generating the LLM response, use `slack_bolt` or `httpx` to call `chat.update` on the placeholder message (replace "Thinking..." with the real response). Use `placeholder_ts` and `channel_id` from the task payload.
- Per user decision: Always reply in threads.

9. Update `docker-compose.yml` to add a `gateway` service on port 8001, depending on redis, postgres, celery-worker.

10. Create `__init__.py` files for the gateway, gateway/channels, and router packages.
</action>
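The refill and TTL arithmetic behind steps 5 and 6 can be sketched in-memory. Production state lives in Redis under `rate_limit_key()` and `idempotency_key()`; the injected `now` clock and dict store here are assumptions made only to expose the logic, not the real async Redis signatures.

```python
# In-memory sketch of the token bucket and the SET-NX-EX style dedup check.
from dataclasses import dataclass

class RateLimitExceeded(Exception):
    def __init__(self, remaining_seconds: float) -> None:
        self.remaining_seconds = remaining_seconds

@dataclass
class TokenBucket:
    capacity: int = 30            # 30 requests...
    refill_per_sec: float = 0.5   # ...per 60-second window
    tokens: float = 30.0
    updated: float = 0.0          # caller injects a monotonic clock

    def check(self, now: float) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.refill_per_sec)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        raise RateLimitExceeded((1 - self.tokens) / self.refill_per_sec)

def is_duplicate(seen: dict[str, float], key: str, ttl: float, now: float) -> bool:
    """Dict-backed equivalent of Redis SET NX EX: True when the key is still live."""
    expiry = seen.get(key)
    if expiry is not None and expiry > now:
        return True
    seen[key] = now + ttl
    return False
```

The unit tests in Task 2 (29-of-30 allowed, 31st rejected, reset after the window) map directly onto this arithmetic.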
<verify>
<automated>cd /home/adelorenzo/repos/konstruct && python -c "from packages.gateway.normalize import normalize_slack_event; from packages.router.tenant import resolve_tenant; from packages.router.ratelimit import check_rate_limit; print('Gateway + Router imports OK')"</automated>
</verify>
<done>
- Slack @mentions and DMs are handled by slack-bolt AsyncApp in HTTP mode
- Messages are normalized to KonstructMessage format
- Tenant resolution maps workspace_id to tenant_id
- Rate limiting enforces per-tenant per-channel limits with Redis token bucket
- Idempotency deduplication prevents double-processing of Slack retries
- Placeholder "Thinking..." message posted immediately, replaced with LLM response
- Auto-follow engaged threads with 30-minute idle timeout
- HTTP 200 returned within 3 seconds, all LLM work dispatched to Celery
</done>
</task>
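The handler-side ordering from Task 1 (normalize, resolve tenant, rate limit, dedupe, placeholder, dispatch) can be sketched with every collaborator injected as a callable, so the control flow reads without slack-bolt, Redis, or Celery. Names mirror the planned modules, but the signatures are assumptions.

```python
# Sketch of the gateway dispatch pipeline; all side effects are injected.
from typing import Callable, Optional

def handle_slack_event(
    event: dict,
    resolve_tenant: Callable[[str], Optional[str]],
    allow: Callable[[str], bool],              # check_rate_limit; False = over limit
    is_duplicate: Callable[[str, str], bool],
    post_placeholder: Callable[[], str],       # posts "Thinking...", returns its ts
    dispatch: Callable[[dict], None],          # handle_message_task.delay(...)
    warn_user: Callable[[str], None],          # ephemeral rate-limit message
) -> str:
    tenant_id = resolve_tenant(event["workspace_id"])
    if tenant_id is None:
        return "ignored"        # unknown workspace: log a warning, drop silently
    if not allow(tenant_id):
        warn_user("I'm receiving too many requests right now. "
                  "Please try again in a moment.")
        return "rate_limited"   # never reaches Celery
    if is_duplicate(tenant_id, event["id"]):
        return "duplicate"      # Slack retry of an already-acked event
    placeholder_ts = post_placeholder()
    dispatch({**event, "placeholder_ts": placeholder_ts})
    return "dispatched"         # the HTTP handler itself acks 200 immediately
```

The integration tests in Task 2 exercise exactly these branch outcomes: dispatched, rate_limited (no Celery call), duplicate, and ignored.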
<task type="auto">
<name>Task 2: End-to-end integration tests for Slack flow, rate limiting, and agent persona</name>
<files>
tests/unit/test_ratelimit.py,
tests/integration/test_slack_flow.py,
tests/integration/test_agent_persona.py,
tests/integration/test_ratelimit.py
</files>
<action>
1. Create `tests/unit/test_ratelimit.py` (CHAN-05 unit):
- Test the token bucket allows requests under the limit (29 of 30)
- Test the token bucket rejects the 31st request in a 1-minute window
- Test that rate limit keys are namespaced per tenant (tenant_a's limit is independent of tenant_b's)
- Test that the rate limit resets after the window expires
- Use fakeredis or the real Redis from Docker Compose

2. Create `tests/integration/test_ratelimit.py` (CHAN-05 integration):
- Test the full flow: send a Slack event that exceeds the rate limit, verify that an ephemeral "too many requests" message is sent back via the Slack API (mock Slack client)
- Verify the event is NOT dispatched to Celery when rate-limited
3. Create `tests/integration/test_slack_flow.py` (CHAN-02):
- Mock Slack client (no real Slack workspace needed)
- Mock LLM pool response (no real LLM call needed)
- Test the full flow: Slack app_mention event -> normalize -> resolve tenant -> dispatch Celery -> LLM call -> chat.update with response
- Verify the response is posted in-thread (thread_ts is set)
- Verify the placeholder "Thinking..." message is posted before the Celery dispatch
- Verify the placeholder is replaced with the real response
- Test the DM flow: a message event with channel_type="im" triggers the same pipeline
- Test that bot messages are ignored (no infinite loop)
- Test that events from an unknown workspace_id are silently ignored

4. Create `tests/integration/test_agent_persona.py` (AGNT-01):
- Create a tenant with an agent configured with name="Mara", role="Customer Support", persona="Professional and empathetic"
- Mock the LLM pool to capture the messages array sent to /complete
- Trigger a message through the pipeline
- Verify the system prompt contains: "Your name is Mara", "Your role is Customer Support", "Professional and empathetic", and the AI transparency clause
- Verify the model_preference from the agent config is passed to the LLM pool
</action>
<verify>
<automated>cd /home/adelorenzo/repos/konstruct && pytest tests/unit/test_ratelimit.py tests/integration/test_slack_flow.py tests/integration/test_agent_persona.py tests/integration/test_ratelimit.py -x -q</automated>
</verify>
<done>
- Rate limiting unit tests verify token bucket behavior per tenant per channel
- Slack flow integration test proves end-to-end: event -> normalize -> tenant resolve -> Celery -> LLM -> thread reply
- Agent persona test proves system prompt reflects name, role, persona, and AI transparency clause
- Rate limit integration test proves over-limit requests get informative rejection
- All tests pass with mocked Slack client and mocked LLM pool
</done>
</task>

</tasks>
<verification>
- `pytest tests/unit/test_ratelimit.py -x` verifies token bucket logic
- `pytest tests/integration/test_slack_flow.py -x` proves end-to-end Slack -> LLM -> reply
- `pytest tests/integration/test_agent_persona.py -x` proves persona reflected in system prompt
- `pytest tests/integration/test_ratelimit.py -x` proves rate limit produces informative rejection
- `grep -rA1 "@app.task" packages/orchestrator/ | grep "async def"` returns NO matches (the decorator sits on the line above the def, so the check must span both lines; no async Celery tasks)
</verification>
<success_criteria>
- A Slack @mention or DM triggers an LLM response in the same thread (mocked end-to-end)
- Rate-limited requests are rejected with an informative message, not silently dropped
- Agent persona (name, role, persona) is reflected in the LLM system prompt
- Typing indicator (placeholder message) appears before the LLM response
- All tests green, no async Celery tasks
</success_criteria>
<output>
After completion, create `.planning/phases/01-foundation/01-03-SUMMARY.md`
</output>
353
.planning/phases/01-foundation/01-04-PLAN.md
Normal file
@@ -0,0 +1,353 @@
---
phase: 01-foundation
plan: 04
type: execute
wave: 2
depends_on: ["01-01"]
files_modified:
- packages/portal/package.json
- packages/portal/tsconfig.json
- packages/portal/tailwind.config.ts
- packages/portal/app/layout.tsx
- packages/portal/app/page.tsx
- packages/portal/app/(auth)/login/page.tsx
- packages/portal/app/dashboard/layout.tsx
- packages/portal/app/dashboard/page.tsx
- packages/portal/app/tenants/page.tsx
- packages/portal/app/tenants/[id]/page.tsx
- packages/portal/app/tenants/new/page.tsx
- packages/portal/app/agents/page.tsx
- packages/portal/app/agents/[id]/page.tsx
- packages/portal/app/agents/new/page.tsx
- packages/portal/app/api/auth/[...nextauth]/route.ts
- packages/portal/lib/auth.ts
- packages/portal/lib/api.ts
- packages/portal/lib/queries.ts
- packages/portal/components/tenant-form.tsx
- packages/portal/components/agent-designer.tsx
- packages/portal/components/nav.tsx
- packages/portal/middleware.ts
- packages/shared/api/__init__.py
- packages/shared/api/portal.py
- tests/integration/test_portal_tenants.py
- tests/integration/test_portal_agents.py
autonomous: true
requirements:
- PRTA-01
- PRTA-02
user_setup:
  - service: none
    why: "Portal uses email/password auth against local DB — no external OAuth provider needed in Phase 1"

must_haves:
  truths:
    - "Operator can log in to the portal with email and password"
    - "Operator can create a new tenant with name and slug"
    - "Operator can view, edit, and delete existing tenants"
    - "Operator can create an AI employee via the Agent Designer with name, role, persona, system prompt, tool assignments, and escalation rules"
    - "Operator can view, edit, and delete existing agents"
    - "Agent Designer is a prominent, dedicated module — not buried in settings"
  artifacts:
    - path: "packages/portal/app/(auth)/login/page.tsx"
      provides: "Login page with email/password form"
    - path: "packages/portal/app/tenants/page.tsx"
      provides: "Tenant list page with create/edit/delete"
    - path: "packages/portal/app/agents/new/page.tsx"
      provides: "Agent Designer form — the primary way operators define AI employees"
    - path: "packages/portal/components/agent-designer.tsx"
      provides: "Agent Designer form component with all fields"
    - path: "packages/portal/lib/auth.ts"
      provides: "Auth.js v5 configuration with Credentials provider"
      exports: ["auth", "signIn", "signOut", "handlers"]
    - path: "packages/shared/api/portal.py"
      provides: "FastAPI endpoints for tenant CRUD and agent CRUD"
      exports: ["portal_router"]
  key_links:
    - from: "packages/portal/lib/api.ts"
      to: "packages/shared/api/portal.py"
      via: "TanStack Query hooks calling FastAPI CRUD endpoints"
      pattern: "fetch.*api.*(tenants|agents)"
    - from: "packages/portal/lib/auth.ts"
      to: "packages/shared/api/portal.py"
      via: "Credentials provider validates against /auth/verify endpoint"
      pattern: "authorize.*fetch.*auth/verify"
    - from: "packages/portal/middleware.ts"
      to: "packages/portal/lib/auth.ts"
      via: "Auth middleware protects dashboard routes"
      pattern: "auth.*middleware"
---
<objective>
Build the Next.js admin portal with Auth.js v5 authentication, tenant CRUD, and the Agent Designer module, backed by FastAPI CRUD endpoints. The Agent Designer is the primary interface for operators to define their AI employees.

Purpose: Give operators a real admin interface to create tenants and configure AI employees. Per user decision, the portal starts in Phase 1 with Auth.js v5 — no hardcoded credentials or throwaway auth.

Output: Working portal at localhost:3000 with login, tenant management (create/list/view/edit/delete), and Agent Designer (name, role, persona, system prompt, tool assignments, escalation rules). Backed by FastAPI endpoints with integration tests.
</objective>
<execution_context>
@/home/adelorenzo/.claude/get-shit-done/workflows/execute-plan.md
@/home/adelorenzo/.claude/get-shit-done/templates/summary.md
</execution_context>
<context>
@.planning/PROJECT.md
@.planning/ROADMAP.md
@.planning/STATE.md
@.planning/phases/01-foundation/01-CONTEXT.md
@.planning/phases/01-foundation/01-RESEARCH.md
@.planning/phases/01-foundation/01-01-SUMMARY.md
<interfaces>
<!-- From Plan 01 — SQLAlchemy models the API endpoints operate on -->

From packages/shared/models/tenant.py:
```python
class Tenant(Base):
    id: Mapped[uuid.UUID]
    name: Mapped[str]  # unique
    slug: Mapped[str]  # unique
    settings: Mapped[dict]  # JSON
    created_at: Mapped[datetime]
    updated_at: Mapped[datetime]


class Agent(Base):
    id: Mapped[uuid.UUID]
    tenant_id: Mapped[uuid.UUID]  # FK -> Tenant
    name: Mapped[str]
    role: Mapped[str]
    persona: Mapped[str | None]
    system_prompt: Mapped[str | None]
    model_preference: Mapped[str]  # "quality" | "fast"
    tool_assignments: Mapped[list]  # JSON
    escalation_rules: Mapped[list]  # JSON
    is_active: Mapped[bool]
    created_at: Mapped[datetime]
    updated_at: Mapped[datetime]
```
From packages/shared/models/auth.py:
```python
class PortalUser(Base):
    id: Mapped[uuid.UUID]
    email: Mapped[str]  # unique
    hashed_password: Mapped[str]
    name: Mapped[str]
    is_admin: Mapped[bool]
    created_at: Mapped[datetime]
    updated_at: Mapped[datetime]
```

From packages/shared/db.py:
```python
async def get_session() -> AsyncGenerator[AsyncSession, None]: ...
```
</interfaces>
</context>
<tasks>

<task type="auto">
<name>Task 1: FastAPI portal API endpoints (tenant CRUD, agent CRUD, auth verify)</name>
<files>
packages/shared/api/__init__.py,
packages/shared/api/portal.py,
tests/integration/test_portal_tenants.py,
tests/integration/test_portal_agents.py
</files>
<action>
1. Create `packages/shared/api/portal.py` with a FastAPI `APIRouter` (prefix="/api/portal"):

**Auth endpoints:**
- `POST /auth/verify`: Accepts `{ email: str, password: str }`, validates against the PortalUser table using bcrypt, returns `{ id, email, name, is_admin }` or 401. Used by the Auth.js Credentials provider.
- `POST /auth/register`: Accepts `{ email, password, name }`, creates a PortalUser with a bcrypt-hashed password. Returns 201 with user info. (Needed for initial setup — consider restricting to admin-only in production.)

**Tenant endpoints (PRTA-01):**
- `GET /tenants`: List all tenants (paginated, 20 per page). No RLS — the platform admin sees all tenants.
- `POST /tenants`: Create a tenant. Accepts `{ name: str, slug: str, settings: dict? }`. Validates name length 2-100 and slug format (lowercase, hyphens, 2-50 chars). Returns 201 with the tenant object.
- `GET /tenants/{id}`: Get a tenant by ID. Returns 404 if not found.
- `PUT /tenants/{id}`: Update a tenant. Accepts partial updates. Returns the updated tenant.
- `DELETE /tenants/{id}`: Delete a tenant. Returns 204. Cascade-deletes agents and channel_connections.

**Agent endpoints (PRTA-02):**
- `GET /tenants/{tenant_id}/agents`: List agents for a tenant.
- `POST /tenants/{tenant_id}/agents`: Create an agent. Accepts `{ name, role, persona?, system_prompt?, model_preference?, tool_assignments?, escalation_rules? }`. Name required, min 1 char. Role required, min 1 char. Returns 201.
- `GET /tenants/{tenant_id}/agents/{id}`: Get an agent by ID.
- `PUT /tenants/{tenant_id}/agents/{id}`: Update an agent. Accepts partial updates.
- `DELETE /tenants/{tenant_id}/agents/{id}`: Delete an agent. Returns 204.

Use Pydantic v2 request/response schemas (TenantCreate, TenantResponse, AgentCreate, AgentResponse, etc.). Use SQLAlchemy 2.0 `select()` style — never 1.x `session.query()`.
2. Create `tests/integration/test_portal_tenants.py` (PRTA-01):
- Test create tenant with valid data returns 201
- Test create tenant with duplicate slug returns 409
- Test list tenants returns created tenants
- Test get tenant by ID returns correct tenant
- Test update tenant name
- Test delete tenant returns 204 and tenant is gone
- Test create tenant with invalid slug (uppercase, too short) returns 422
- Use `httpx.AsyncClient` with the FastAPI app

3. Create `tests/integration/test_portal_agents.py` (PRTA-02):
- Test create agent with all fields returns 201
- Test create agent with minimal fields (name + role only) returns 201 with defaults
- Test list agents for a tenant returns only that tenant's agents
- Test get agent by ID
- Test update agent persona and system prompt
- Test delete agent
- Test Agent Designer fields are all stored and retrievable: name, role, persona, system_prompt, model_preference, tool_assignments (JSON array), escalation_rules (JSON array)
- Use `httpx.AsyncClient`
</action>
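The slug and name rules from step 1 can be sketched as plain validation logic. The exact regex is an assumption (digits allowed, no leading or trailing hyphen); the real endpoints would express the same rules as Pydantic v2 field validators, which is what the 422 test above exercises.

```python
# Sketch of the tenant payload validation rules; regex details are assumed.
import re

SLUG_RE = re.compile(r"^[a-z0-9]+(?:-[a-z0-9]+)*$")

def tenant_payload_errors(name: str, slug: str) -> list[str]:
    """Return the 422-style validation errors for a tenant create payload."""
    errors: list[str] = []
    if not 2 <= len(name) <= 100:
        errors.append("name must be 2-100 characters")
    if not (2 <= len(slug) <= 50 and SLUG_RE.fullmatch(slug)):
        errors.append("slug must be 2-50 chars: lowercase letters, digits, hyphens")
    return errors
```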
<verify>
<automated>cd /home/adelorenzo/repos/konstruct && pytest tests/integration/test_portal_tenants.py tests/integration/test_portal_agents.py -x -q</automated>
</verify>
<done>
- Tenant CRUD endpoints all functional with proper validation and error responses
- Agent CRUD endpoints support all Agent Designer fields
- Auth verify endpoint validates email/password against PortalUser table
- Integration tests prove all CRUD operations work correctly
- Pydantic schemas enforce input validation
</done>
</task>
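The "paginated, 20 per page" listing in Task 1 reduces to simple page arithmetic. This is a minimal sketch, assuming 1-indexed pages; the real endpoint would translate the result into LIMIT/OFFSET on the SQLAlchemy `select()`.

```python
# Tiny sketch of the page arithmetic behind `GET /tenants` (20 per page).
def page_bounds(page: int, per_page: int = 20) -> tuple[int, int]:
    """Return (offset, limit) for a 1-indexed page number."""
    if page < 1:
        raise ValueError("page is 1-indexed")
    return (page - 1) * per_page, per_page
```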
<task type="auto">
<name>Task 2: Next.js portal with Auth.js v5, tenant management, and Agent Designer</name>
<files>
packages/portal/package.json,
packages/portal/tsconfig.json,
packages/portal/tailwind.config.ts,
packages/portal/app/layout.tsx,
packages/portal/app/page.tsx,
packages/portal/app/(auth)/login/page.tsx,
packages/portal/app/dashboard/layout.tsx,
packages/portal/app/dashboard/page.tsx,
packages/portal/app/tenants/page.tsx,
packages/portal/app/tenants/[id]/page.tsx,
packages/portal/app/tenants/new/page.tsx,
packages/portal/app/agents/page.tsx,
packages/portal/app/agents/[id]/page.tsx,
packages/portal/app/agents/new/page.tsx,
packages/portal/app/api/auth/[...nextauth]/route.ts,
packages/portal/lib/auth.ts,
packages/portal/lib/api.ts,
packages/portal/lib/queries.ts,
packages/portal/components/tenant-form.tsx,
packages/portal/components/agent-designer.tsx,
packages/portal/components/nav.tsx,
packages/portal/middleware.ts
</files>
<action>
1. Initialize Next.js 16 project in `packages/portal/`:
- `npx create-next-app@latest . --typescript --tailwind --eslint --app`
- Install: `@tanstack/react-query react-hook-form zod next-auth@5 @hookform/resolvers`
- Initialize shadcn/ui: `npx shadcn@latest init`, then add components: button, input, textarea, card, table, form, label, select, dialog, toast, navigation-menu, separator, badge
2. Create `packages/portal/lib/auth.ts`:
- Auth.js v5 with Credentials provider per research Pattern 7
- Credentials provider calls `POST ${API_URL}/api/portal/auth/verify` with email + password
- JWT session strategy (stateless, no DB session table needed for Phase 1)
- Custom pages: signIn -> "/login"
- Export `{ handlers, auth, signIn, signOut }`

3. Create `packages/portal/app/api/auth/[...nextauth]/route.ts`:
- Re-export `handlers.GET` and `handlers.POST` from lib/auth.ts

4. Create `packages/portal/middleware.ts`:
- Protect all routes except `/login` and `/api/auth/*`
- Redirect unauthenticated users to `/login`
5. Create `packages/portal/app/(auth)/login/page.tsx`:
- Email + password login form using shadcn/ui Input, Button, Card
- Form validation with React Hook Form + Zod (email format, password min 8 chars)
- Error display for invalid credentials
- On success, redirect to /dashboard

6. Create `packages/portal/lib/api.ts`:
- API client configured with base URL from env (NEXT_PUBLIC_API_URL)
- Typed fetch wrapper with error handling

7. Create `packages/portal/lib/queries.ts`:
- TanStack Query hooks: `useTenants()`, `useTenant(id)`, `useCreateTenant()`, `useUpdateTenant()`, `useDeleteTenant()`
- TanStack Query hooks: `useAgents(tenantId)`, `useAgent(tenantId, id)`, `useCreateAgent()`, `useUpdateAgent()`, `useDeleteAgent()`
- Proper cache invalidation on mutations
8. Create `packages/portal/components/nav.tsx`:
- Sidebar navigation with links: Dashboard, Tenants, Employees (label it "Employees", not "Agents" — per the AI employee branding)
- Active state highlighting
- Logout button calling signOut

9. Create `packages/portal/app/dashboard/layout.tsx`:
- Layout with sidebar nav + main content area
- TanStack QueryClientProvider wrapping children

10. Create `packages/portal/app/dashboard/page.tsx`:
- Simple dashboard landing page with tenant count and agent count stats
11. Create tenant management pages:
- `app/tenants/page.tsx`: Table listing all tenants with name, slug, created date. "New Tenant" button. Row click navigates to detail.
- `app/tenants/new/page.tsx`: Tenant creation form (name, slug). Slug auto-generated from name (lowercase, hyphenated).
- `app/tenants/[id]/page.tsx`: Tenant detail with edit form and delete button. Shows agents for this tenant.

12. Create `packages/portal/components/tenant-form.tsx`:
- Reusable form for create/edit tenant. React Hook Form + Zod validation.
13. Create Agent Designer pages — PER USER DECISION this is a PROMINENT, DEDICATED module:
- `app/agents/page.tsx`: Card grid of all agents across tenants. Each card shows agent name, role, tenant name, active status. "New Employee" button.
- `app/agents/new/page.tsx`: Full Agent Designer form. Grouped into sections:
  - **Identity:** Name (text), Role (text) — e.g., "Customer Support Lead"
  - **Personality:** Persona (textarea — personality description), System Prompt (textarea — raw system prompt override)
  - **Configuration:** Model Preference (select: "quality" / "fast"), Tenant (select dropdown)
  - **Capabilities:** Tool Assignments (JSON editor or tag-style input — list of tool names)
  - **Escalation:** Escalation Rules (JSON editor or structured form — condition + action pairs)
  - **Status:** Active toggle
- `app/agents/[id]/page.tsx`: Edit existing agent with same form, pre-populated. Delete button.

14. Create `packages/portal/components/agent-designer.tsx`:
- The Agent Designer form component. React Hook Form + Zod validation.
- Zod schema: name (min 1), role (min 1), persona (optional), system_prompt (optional), model_preference (enum: quality|fast), tool_assignments (string array), escalation_rules (array of {condition: string, action: string}), is_active (boolean).
- Use the "employee" language in labels and placeholders: "Employee Name", "Job Title" (for role), "Job Description" (for persona), "Statement of Work" (for system_prompt) — per user's specific vision that the Agent Designer is about defining an employee.
- shadcn/ui components: Card for section grouping, Textarea for persona/system_prompt, Input for name/role, Select for model_preference, Badge for tool tags.

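To make the validation rules concrete, here is a plain-TypeScript mirror of that Zod schema (the real component would express this with `z.object(...)`; the field names follow the schema listed above, the error-message strings are assumptions):

```typescript
// Hand-rolled mirror of the Agent Designer schema, for illustration only.
interface EscalationRule { condition: string; action: string }

interface AgentFormValues {
  name: string;
  role: string;
  persona?: string;
  system_prompt?: string;
  model_preference: "quality" | "fast";
  tool_assignments: string[];
  escalation_rules: EscalationRule[];
  is_active: boolean;
}

function validateAgentForm(v: AgentFormValues): string[] {
  const errors: string[] = [];
  if (v.name.trim().length < 1) errors.push("name: required");
  if (v.role.trim().length < 1) errors.push("role: required");
  v.escalation_rules.forEach((r, i) => {
    if (!r.condition.trim() || !r.action.trim())
      errors.push(`escalation_rules[${i}]: condition and action are required`);
  });
  return errors; // empty array means the form is valid
}
```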
15. Create `packages/portal/app/layout.tsx`:
- Root layout with Tailwind, font, metadata (title: "Konstruct Portal")

16. Create `packages/portal/app/page.tsx`:
- Redirect to /dashboard if authenticated, /login if not

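The root-page redirect in step 16, together with the protected-route behavior listed in the done criteria, reduces to a small decision function; the public-path list here is an assumption:

```typescript
// Hypothetical redirect decision; PUBLIC_PATHS is an assumed list.
const PUBLIC_PATHS = ["/login"];

// Returns the path to redirect to, or null when the request may proceed.
function redirectFor(pathname: string, isAuthenticated: boolean): string | null {
  if (!isAuthenticated && !PUBLIC_PATHS.includes(pathname)) return "/login";
  if (isAuthenticated && (pathname === "/" || pathname === "/login")) return "/dashboard";
  return null;
}
```

Keeping the decision pure makes it trivial to unit-test, whether it ends up in `app/page.tsx` or in middleware.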
17. Update `docker-compose.yml` to add portal service on port 3000 with env vars.
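The compose entry for step 17 might look like the fragment below; the service names (`portal`, `api`), env-var values, and build context are assumptions about this repo's layout:

```yaml
# Sketch of a portal service entry under `services:`; names and values are assumptions.
  portal:
    build: ./packages/portal
    ports:
      - "3000:3000"
    environment:
      NEXT_PUBLIC_API_URL: http://api:8000   # assumes the API service is named "api"
      AUTH_SECRET: ${AUTH_SECRET}
    depends_on:
      - api
```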
</action>

<verify>
<automated>cd /home/adelorenzo/repos/konstruct/packages/portal && npm run build</automated>
</verify>

<done>
- Portal builds successfully with Next.js 16
- Login page authenticates against FastAPI /auth/verify via Auth.js v5 Credentials provider
- Protected routes redirect to /login when unauthenticated
- Tenant CRUD: list, create, view, edit, delete all functional
- Agent Designer: all fields (name, role, persona, system prompt, model preference, tool assignments, escalation rules) saveable and loadable
- Agent Designer uses employee-centric language (Employee Name, Job Title, Job Description, Statement of Work)
- Agent Designer is a prominent top-level module, not buried in settings
- shadcn/ui styling with Tailwind CSS
</done>
</task>

</tasks>

<verification>
- `pytest tests/integration/test_portal_tenants.py tests/integration/test_portal_agents.py -x` proves API CRUD works
- `cd packages/portal && npm run build` compiles without errors
- Portal pages render tenant list, tenant create/edit, agent designer
- Auth.js v5 login flow works with email/password
</verification>

<success_criteria>
- Operator can log in, create tenants, and configure AI employees through the portal
- Agent Designer prominently accessible with all required fields
- All API CRUD operations validated by integration tests
- Portal builds cleanly with Next.js 16
</success_criteria>

<output>
After completion, create `.planning/phases/01-foundation/01-04-SUMMARY.md`
</output>