Add Kubernetes and Stacks MCP servers

- Implement portainer-kubernetes server with 30 tools for comprehensive K8s management
  - Namespace, pod, deployment, and service operations
  - ConfigMap and Secret management with base64 encoding
  - Storage operations (PV/PVC)
  - Ingress configuration
  - Node information and pod logs

- Implement portainer-stacks server with 13 tools for stack management
  - Docker Compose and Kubernetes manifest support
  - Git repository integration for stack deployments
  - Stack lifecycle management (create, update, start, stop, delete)
  - Environment variable management
  - Stack migration between environments

- Add comprehensive README documentation for both servers
- Make server files executable
Adolfo Delorenzo, 2025-07-18 19:45:03 -03:00
parent e27251b922 · commit 2dfe3c8bc1
4 changed files with 3470 additions and 0 deletions

README_KUBERNETES.md · new file · 430 lines
# Portainer Kubernetes MCP Server
This MCP server provides comprehensive Kubernetes cluster management capabilities through Portainer's API.
## Features
- **Namespace Management**: List, create, and delete namespaces
- **Pod Operations**: List, view, delete pods, and access logs
- **Deployment Management**: Create, scale, update, restart, and delete deployments
- **Service Management**: List, create, and delete services
- **Ingress Configuration**: List and create ingress rules
- **ConfigMap & Secret Management**: Full CRUD operations with base64 encoding
- **Storage Management**: PersistentVolume and PersistentVolumeClaim operations
- **Node Information**: View cluster node details
## Installation
1. Clone the Portainer MCP servers repository (if you haven't already):
   ```bash
   git clone https://github.com/yourusername/portainer-mcp.git
   cd portainer-mcp
   ```
2. Install dependencies:
   ```bash
   pip install -r requirements.txt
   ```
3. Configure environment variables:
   ```bash
   cp .env.example .env
   # Edit .env with your Portainer URL and API key
   ```
4. Make the server executable:
   ```bash
   chmod +x portainer_kubernetes_server.py
   ```
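For step 3, a minimal `.env` might look like the following. The two variable names are the ones both servers actually read; the values shown are placeholders:

```bash
# Example .env (placeholder values)
PORTAINER_URL=https://your-portainer-instance.com
PORTAINER_API_KEY=your-api-key
```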
## Configuration
Add to your Claude Desktop configuration:
```json
{
  "portainer-kubernetes": {
    "command": "python",
    "args": ["/path/to/portainer-mcp/portainer_kubernetes_server.py"],
    "env": {
      "PORTAINER_URL": "https://your-portainer-instance.com",
      "PORTAINER_API_KEY": "your-api-key"
    }
  }
}
```
## Available Tools

### Namespace Management

#### list_namespaces
List all namespaces in the cluster.
- **Parameters**:
  - `environment_id` (required): Target environment ID

#### create_namespace
Create a new namespace.
- **Parameters**:
  - `environment_id` (required): Target environment ID
  - `name` (required): Namespace name

#### delete_namespace
Delete a namespace.
- **Parameters**:
  - `environment_id` (required): Target environment ID
  - `name` (required): Namespace name

### Pod Management

#### list_pods
List pods in a namespace.
- **Parameters**:
  - `environment_id` (required): Target environment ID
  - `namespace` (optional): Namespace (default: "default")
  - `label_selector` (optional): Label selector (e.g., "app=nginx")

#### get_pod
Get detailed information about a specific pod.
- **Parameters**:
  - `environment_id` (required): Target environment ID
  - `namespace` (required): Namespace
  - `name` (required): Pod name

#### delete_pod
Delete a pod.
- **Parameters**:
  - `environment_id` (required): Target environment ID
  - `namespace` (required): Namespace
  - `name` (required): Pod name
  - `grace_period` (optional): Grace period in seconds

#### get_pod_logs
Get logs from a pod.
- **Parameters**:
  - `environment_id` (required): Target environment ID
  - `namespace` (required): Namespace
  - `name` (required): Pod name
  - `container` (optional): Container name
  - `previous` (optional): Get previous container logs
  - `tail_lines` (optional): Number of lines from end

### Deployment Management

#### list_deployments
List deployments in a namespace.
- **Parameters**:
  - `environment_id` (required): Target environment ID
  - `namespace` (optional): Namespace (default: "default")

#### get_deployment
Get deployment details.
- **Parameters**:
  - `environment_id` (required): Target environment ID
  - `namespace` (required): Namespace
  - `name` (required): Deployment name

#### create_deployment
Create a new deployment.
- **Parameters**:
  - `environment_id` (required): Target environment ID
  - `namespace` (required): Namespace
  - `name` (required): Deployment name
  - `image` (required): Container image
  - `replicas` (optional): Number of replicas (default: 1)
  - `port` (optional): Container port
  - `env_vars` (optional): Environment variables object
  - `labels` (optional): Labels object

#### scale_deployment
Scale a deployment.
- **Parameters**:
  - `environment_id` (required): Target environment ID
  - `namespace` (required): Namespace
  - `name` (required): Deployment name
  - `replicas` (required): Desired replica count

#### update_deployment_image
Update deployment container image.
- **Parameters**:
  - `environment_id` (required): Target environment ID
  - `namespace` (required): Namespace
  - `name` (required): Deployment name
  - `container` (required): Container name
  - `image` (required): New image

#### restart_deployment
Restart a deployment by updating annotation.
- **Parameters**:
  - `environment_id` (required): Target environment ID
  - `namespace` (required): Namespace
  - `name` (required): Deployment name

#### delete_deployment
Delete a deployment.
- **Parameters**:
  - `environment_id` (required): Target environment ID
  - `namespace` (required): Namespace
  - `name` (required): Deployment name

### Service Management

#### list_services
List services in a namespace.
- **Parameters**:
  - `environment_id` (required): Target environment ID
  - `namespace` (optional): Namespace (default: "default")

#### create_service
Create a service for a deployment.
- **Parameters**:
  - `environment_id` (required): Target environment ID
  - `namespace` (required): Namespace
  - `name` (required): Service name
  - `selector` (required): Pod selector object
  - `ports` (required): Array of port mappings
  - `type` (optional): Service type (ClusterIP, NodePort, LoadBalancer)

#### delete_service
Delete a service.
- **Parameters**:
  - `environment_id` (required): Target environment ID
  - `namespace` (required): Namespace
  - `name` (required): Service name

### Ingress Management

#### list_ingresses
List ingress rules in a namespace.
- **Parameters**:
  - `environment_id` (required): Target environment ID
  - `namespace` (optional): Namespace (default: "default")

#### create_ingress
Create an ingress rule.
- **Parameters**:
  - `environment_id` (required): Target environment ID
  - `namespace` (required): Namespace
  - `name` (required): Ingress name
  - `rules` (required): Array of ingress rules
  - `tls` (optional): TLS configuration array

### ConfigMap Management

#### list_configmaps
List ConfigMaps in a namespace.
- **Parameters**:
  - `environment_id` (required): Target environment ID
  - `namespace` (optional): Namespace (default: "default")

#### create_configmap
Create a ConfigMap.
- **Parameters**:
  - `environment_id` (required): Target environment ID
  - `namespace` (required): Namespace
  - `name` (required): ConfigMap name
  - `data` (required): Key-value data object

#### delete_configmap
Delete a ConfigMap.
- **Parameters**:
  - `environment_id` (required): Target environment ID
  - `namespace` (required): Namespace
  - `name` (required): ConfigMap name

### Secret Management

#### list_secrets
List secrets in a namespace.
- **Parameters**:
  - `environment_id` (required): Target environment ID
  - `namespace` (optional): Namespace (default: "default")

#### create_secret
Create a secret (automatically base64 encodes).
- **Parameters**:
  - `environment_id` (required): Target environment ID
  - `namespace` (required): Namespace
  - `name` (required): Secret name
  - `data` (required): Key-value data object
  - `type` (optional): Secret type (default: "Opaque")

#### delete_secret
Delete a secret.
- **Parameters**:
  - `environment_id` (required): Target environment ID
  - `namespace` (required): Namespace
  - `name` (required): Secret name
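The Kubernetes API requires Secret values to be base64-encoded, so `create_secret` performs the encoding for you. Conceptually the transformation is equivalent to the following minimal sketch (the function name is illustrative, not the server's actual code):

```python
import base64


def encode_secret_data(data: dict) -> dict:
    """Base64-encode each value, as the Kubernetes Secret API expects."""
    return {
        key: base64.b64encode(value.encode("utf-8")).decode("ascii")
        for key, value in data.items()
    }


encoded = encode_secret_data({"username": "dbuser", "password": "secretpassword"})
print(encoded["username"])  # ZGJ1c2Vy
```

Decoding with `base64.b64decode` recovers the original plaintext, which is why Secrets are obfuscation, not encryption.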
### Storage Management

#### list_persistent_volumes
List PersistentVolumes in the cluster.
- **Parameters**:
  - `environment_id` (required): Target environment ID

#### list_persistent_volume_claims
List PersistentVolumeClaims in a namespace.
- **Parameters**:
  - `environment_id` (required): Target environment ID
  - `namespace` (optional): Namespace (default: "default")

#### create_persistent_volume_claim
Create a PersistentVolumeClaim.
- **Parameters**:
  - `environment_id` (required): Target environment ID
  - `namespace` (required): Namespace
  - `name` (required): PVC name
  - `storage` (required): Storage size (e.g., "1Gi")
  - `access_modes` (optional): Array of access modes
  - `storage_class` (optional): Storage class name

#### delete_persistent_volume_claim
Delete a PersistentVolumeClaim.
- **Parameters**:
  - `environment_id` (required): Target environment ID
  - `namespace` (required): Namespace
  - `name` (required): PVC name

### Node Management

#### list_nodes
List cluster nodes with details.
- **Parameters**:
  - `environment_id` (required): Target environment ID
## Usage Examples
### Create and manage a deployment
```javascript
// Create a deployment
await use_mcp_tool("portainer-kubernetes", "create_deployment", {
  environment_id: "2",
  namespace: "production",
  name: "nginx-app",
  image: "nginx:latest",
  replicas: 3,
  port: 80,
  labels: { app: "nginx", tier: "frontend" }
});

// Scale the deployment
await use_mcp_tool("portainer-kubernetes", "scale_deployment", {
  environment_id: "2",
  namespace: "production",
  name: "nginx-app",
  replicas: 5
});

// Create a service for the deployment
await use_mcp_tool("portainer-kubernetes", "create_service", {
  environment_id: "2",
  namespace: "production",
  name: "nginx-service",
  selector: { app: "nginx" },
  ports: [{ port: 80, targetPort: 80 }],
  type: "LoadBalancer"
});
```
### Manage ConfigMaps and Secrets
```javascript
// Create a ConfigMap
await use_mcp_tool("portainer-kubernetes", "create_configmap", {
  environment_id: "2",
  namespace: "production",
  name: "app-config",
  data: {
    "app.properties": "debug=false\nport=8080",
    "database.conf": "host=db.example.com"
  }
});

// Create a Secret (automatically base64 encoded)
await use_mcp_tool("portainer-kubernetes", "create_secret", {
  environment_id: "2",
  namespace: "production",
  name: "db-credentials",
  data: {
    username: "dbuser",
    password: "secretpassword"
  }
});
```
### Storage operations
```javascript
// Create a PersistentVolumeClaim
await use_mcp_tool("portainer-kubernetes", "create_persistent_volume_claim", {
  environment_id: "2",
  namespace: "production",
  name: "data-storage",
  storage: "10Gi",
  access_modes: ["ReadWriteOnce"],
  storage_class: "fast-ssd"
});
```
### Pod troubleshooting
```javascript
// Get pod logs
await use_mcp_tool("portainer-kubernetes", "get_pod_logs", {
  environment_id: "2",
  namespace: "production",
  name: "nginx-app-7d9c5b5b6-abc123",
  tail_lines: 100
});

// Get pod details
await use_mcp_tool("portainer-kubernetes", "get_pod", {
  environment_id: "2",
  namespace: "production",
  name: "nginx-app-7d9c5b5b6-abc123"
});
```
## Error Handling
The server includes comprehensive error handling:
- Network timeouts and retries
- Invalid Kubernetes resource specifications
- Authentication failures
- Resource not found errors
- Namespace conflicts
All errors are returned with descriptive messages to help diagnose issues.
## Security Notes
- The server automatically handles base64 encoding for Kubernetes secrets
- API tokens are never logged or exposed
- All communications use HTTPS when configured
- Follows Kubernetes RBAC permissions
## Troubleshooting
### Common Issues
1. **Authentication failures**: Ensure your API key is valid and has appropriate permissions
2. **Resource not found**: Verify the namespace and resource names
3. **Permission denied**: Check RBAC permissions for the service account
4. **Timeout errors**: Increase HTTP_TIMEOUT in environment variables
### Debug Mode
Enable debug logging by setting in your environment:
```bash
DEBUG=true
LOG_LEVEL=DEBUG
```
## Requirements
- Python 3.8+
- Portainer Business Edition 2.19+ with Kubernetes endpoints
- Valid Portainer API token
- Kubernetes cluster connected to Portainer

README_STACKS.md · new file · 353 lines
# Portainer Stacks MCP Server
This MCP server provides comprehensive stack deployment and management capabilities through Portainer's API, supporting both Docker Compose and Kubernetes deployments.
## Features
- **Stack Management**: List, create, update, start, stop, and delete stacks
- **Docker Compose Support**: Deploy stacks from file content or Git repositories
- **Kubernetes Support**: Deploy Kubernetes manifests as stacks
- **Git Integration**: Create and update stacks directly from Git repositories
- **Environment Variables**: Manage stack environment variables
- **Stack Migration**: Copy stacks between environments
- **Multi-Environment**: Work with stacks across different Portainer environments
## Installation
1. Clone the Portainer MCP servers repository (if you haven't already):
   ```bash
   git clone https://github.com/yourusername/portainer-mcp.git
   cd portainer-mcp
   ```
2. Install dependencies:
   ```bash
   pip install -r requirements.txt
   ```
3. Configure environment variables:
   ```bash
   cp .env.example .env
   # Edit .env with your Portainer URL and API key
   ```
4. Make the server executable:
   ```bash
   chmod +x portainer_stacks_server.py
   ```
## Configuration
Add to your Claude Desktop configuration:
```json
{
  "portainer-stacks": {
    "command": "python",
    "args": ["/path/to/portainer-mcp/portainer_stacks_server.py"],
    "env": {
      "PORTAINER_URL": "https://your-portainer-instance.com",
      "PORTAINER_API_KEY": "your-api-key"
    }
  }
}
```
## Available Tools

### Stack Information

#### list_stacks
List all stacks across environments.
- **Parameters**:
  - `environment_id` (optional): Filter by environment ID

#### get_stack
Get detailed information about a specific stack.
- **Parameters**:
  - `stack_id` (required): Stack ID

#### get_stack_file
Get the stack file content (Docker Compose or Kubernetes manifest).
- **Parameters**:
  - `stack_id` (required): Stack ID

### Stack Creation

#### create_compose_stack_from_file
Create a new Docker Compose stack from file content.
- **Parameters**:
  - `environment_id` (required): Target environment ID
  - `name` (required): Stack name
  - `compose_file` (required): Docker Compose file content (YAML)
  - `env_vars` (optional): Array of environment variables
    - `name`: Variable name
    - `value`: Variable value

#### create_compose_stack_from_git
Create a new Docker Compose stack from a Git repository.
- **Parameters**:
  - `environment_id` (required): Target environment ID
  - `name` (required): Stack name
  - `repository_url` (required): Git repository URL
  - `repository_ref` (optional): Git reference (default: "main")
  - `compose_path` (optional): Path to compose file (default: "docker-compose.yml")
  - `repository_auth` (optional): Use authentication (default: false)
  - `repository_username` (optional): Git username
  - `repository_password` (optional): Git password/token
  - `env_vars` (optional): Array of environment variables

#### create_kubernetes_stack
Create a new Kubernetes stack from a manifest.
- **Parameters**:
  - `environment_id` (required): Target environment ID
  - `name` (required): Stack name
  - `namespace` (optional): Kubernetes namespace (default: "default")
  - `manifest` (required): Kubernetes manifest content (YAML)

### Stack Management

#### update_stack
Update an existing stack.
- **Parameters**:
  - `stack_id` (required): Stack ID
  - `compose_file` (optional): Updated compose file or manifest
  - `env_vars` (optional): Updated environment variables array
  - `pull_image` (optional): Pull latest images (default: true)

#### update_git_stack
Update a Git-based stack (pull latest changes).
- **Parameters**:
  - `stack_id` (required): Stack ID
  - `pull_image` (optional): Pull latest images (default: true)

#### start_stack
Start a stopped stack.
- **Parameters**:
  - `stack_id` (required): Stack ID

#### stop_stack
Stop a running stack.
- **Parameters**:
  - `stack_id` (required): Stack ID

#### delete_stack
Delete a stack and optionally its volumes.
- **Parameters**:
  - `stack_id` (required): Stack ID
  - `delete_volumes` (optional): Delete associated volumes (default: false)

### Stack Operations

#### migrate_stack
Migrate a stack to another environment.
- **Parameters**:
  - `stack_id` (required): Stack ID
  - `target_environment_id` (required): Target environment ID
  - `new_name` (optional): New stack name

#### get_stack_logs
Get logs from all containers in a stack.
- **Parameters**:
  - `stack_id` (required): Stack ID
  - `tail` (optional): Number of lines from end (default: 100)
  - `timestamps` (optional): Show timestamps (default: true)
## Usage Examples
### Deploy a Docker Compose stack
```javascript
// From file content
await use_mcp_tool("portainer-stacks", "create_compose_stack_from_file", {
  environment_id: "2",
  name: "wordpress-blog",
  compose_file: `version: '3.8'
services:
  wordpress:
    image: wordpress:latest
    ports:
      - "8080:80"
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: \${DB_PASSWORD}
    volumes:
      - wordpress_data:/var/www/html
  db:
    image: mysql:5.7
    environment:
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: \${DB_PASSWORD}
      MYSQL_RANDOM_ROOT_PASSWORD: '1'
    volumes:
      - db_data:/var/lib/mysql
volumes:
  wordpress_data:
  db_data:`,
  env_vars: [
    { name: "DB_PASSWORD", value: "secure_password" }
  ]
});

// From Git repository
await use_mcp_tool("portainer-stacks", "create_compose_stack_from_git", {
  environment_id: "2",
  name: "microservices-app",
  repository_url: "https://github.com/myorg/microservices.git",
  repository_ref: "main",
  compose_path: "docker/docker-compose.prod.yml",
  env_vars: [
    { name: "ENVIRONMENT", value: "production" },
    { name: "API_KEY", value: "secret_key" }
  ]
});
```
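Under the hood, the `env_vars` array is passed through unchanged as the `Env` field of the payload sent to Portainer's stack-creation endpoint, which is where `${DB_PASSWORD}` references in the compose file get resolved. A simplified sketch of the payload construction (mirroring the server's `create_compose_stack_from_file` handler; the function name here is illustrative):

```python
def build_stack_payload(name: str, compose_file: str, environment_id: str,
                        env_vars=None) -> dict:
    """Assemble the JSON body for POST /api/stacks, as the server does."""
    payload = {
        "Name": name,
        "StackFileContent": compose_file,
        "EndpointId": int(environment_id),  # Portainer expects a numeric ID
    }
    if env_vars:
        # e.g. [{"name": "DB_PASSWORD", "value": "secure_password"}]
        payload["Env"] = env_vars
    return payload


payload = build_stack_payload("wordpress-blog", "version: '3.8'\n...", "2",
                              [{"name": "DB_PASSWORD", "value": "secure_password"}])
```

Note that omitting `env_vars` simply leaves `Env` out of the payload rather than sending an empty array.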
### Deploy a Kubernetes stack
```javascript
await use_mcp_tool("portainer-stacks", "create_kubernetes_stack", {
  environment_id: "3",
  name: "nginx-deployment",
  namespace: "production",
  manifest: `apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
  type: LoadBalancer`
});
```
### Manage stacks
```javascript
// Update a stack
await use_mcp_tool("portainer-stacks", "update_stack", {
  stack_id: "5",
  compose_file: "updated compose content...",
  env_vars: [
    { name: "VERSION", value: "2.0" }
  ],
  pull_image: true
});

// Update Git-based stack
await use_mcp_tool("portainer-stacks", "update_git_stack", {
  stack_id: "7",
  pull_image: true
});

// Stop and start stacks
await use_mcp_tool("portainer-stacks", "stop_stack", {
  stack_id: "5"
});
await use_mcp_tool("portainer-stacks", "start_stack", {
  stack_id: "5"
});

// Delete stack with volumes
await use_mcp_tool("portainer-stacks", "delete_stack", {
  stack_id: "5",
  delete_volumes: true
});
```
### Migrate stack between environments
```javascript
await use_mcp_tool("portainer-stacks", "migrate_stack", {
  stack_id: "5",
  target_environment_id: "4",
  new_name: "wordpress-prod"
});
```
## Stack Types
The server supports three types of stacks:
1. **Swarm Stacks** (Type 1): Docker Swarm mode stacks
2. **Compose Stacks** (Type 2): Standard Docker Compose deployments
3. **Kubernetes Stacks** (Type 3): Kubernetes manifest deployments
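These numeric codes come from the `Type` field that Portainer's API returns for each stack; the server decodes them with a simple mapping, equivalent to its `format_stack_type` helper:

```python
# Portainer API stack type codes -> human-readable names
STACK_TYPES = {1: "Swarm", 2: "Compose", 3: "Kubernetes"}


def format_stack_type(stack_type: int) -> str:
    # Unrecognized codes fall back to "Unknown", matching the server's behavior
    return STACK_TYPES.get(stack_type, "Unknown")


print(format_stack_type(2))  # Compose
```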
## Error Handling
The server includes comprehensive error handling:
- Invalid stack configurations
- Git repository access errors
- Environment permission issues
- Network timeouts and retries
- Resource conflicts
All errors are returned with descriptive messages to help diagnose issues.
## Security Notes
- Git credentials are transmitted securely but stored in Portainer
- Environment variables may contain sensitive data
- Stack files can include secrets - handle with care
- API tokens are never logged or exposed
- Use RBAC to control stack access
## Best Practices
1. **Environment Variables**: Use environment variables for configuration instead of hardcoding values
2. **Git Integration**: Use Git repositories for version control and automated deployments
3. **Naming Convention**: Use consistent naming for stacks across environments
4. **Volume Management**: Be careful when deleting stacks with volumes
5. **Migration Testing**: Test stack migrations in non-production environments first
## Troubleshooting
### Common Issues
1. **Stack creation fails**: Check compose file syntax and image availability
2. **Git authentication errors**: Ensure credentials are correct and have repository access
3. **Permission denied**: Verify user has appropriate Portainer permissions
4. **Stack update fails**: Check for resource conflicts or invalid configurations
### Debug Mode
Enable debug logging by setting in your environment:
```bash
DEBUG=true
LOG_LEVEL=DEBUG
```
## Requirements
- Python 3.8+
- Portainer Business Edition 2.19+
- Valid Portainer API token with stack management permissions
- Docker or Kubernetes environments configured in Portainer

portainer_kubernetes_server.py · new executable file · 1847 lines (diff suppressed because it is too large)

portainer_stacks_server.py · new executable file · 840 lines
#!/usr/bin/env python3
"""
Portainer Stacks MCP Server
Provides stack deployment and management functionality through Portainer's API.
Supports Docker Compose stacks and Kubernetes manifests.
"""
import os
import sys
import json
import asyncio
import aiohttp
import logging
from typing import Any, Optional
from mcp.server import Server, NotificationOptions
from mcp.server.models import InitializationOptions
import mcp.server.stdio
import mcp.types as types

# Set up logging
MCP_MODE = os.getenv("MCP_MODE", "true").lower() == "true"
if MCP_MODE:
    # In MCP mode, suppress all logs to stdout/stderr
    logging.basicConfig(level=logging.CRITICAL + 1)
else:
    logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)

# Environment variables
PORTAINER_URL = os.getenv("PORTAINER_URL", "").rstrip("/")
PORTAINER_API_KEY = os.getenv("PORTAINER_API_KEY", "")

# Validate environment
if not PORTAINER_URL or not PORTAINER_API_KEY:
    if not MCP_MODE:
        logger.error("PORTAINER_URL and PORTAINER_API_KEY must be set")
    sys.exit(1)

# Helper functions
async def make_request(
    method: str,
    endpoint: str,
    json_data: Optional[dict] = None,
    params: Optional[dict] = None,
    data: Optional[Any] = None,
    headers: Optional[dict] = None
) -> dict:
    """Make an authenticated request to Portainer API."""
    url = f"{PORTAINER_URL}{endpoint}"
    default_headers = {
        "X-API-Key": PORTAINER_API_KEY
    }
    if headers:
        default_headers.update(headers)
    timeout = aiohttp.ClientTimeout(total=30)
    try:
        async with aiohttp.ClientSession(timeout=timeout) as session:
            async with session.request(
                method,
                url,
                json=json_data,
                params=params,
                data=data,
                headers=default_headers
            ) as response:
                response_text = await response.text()
                if response.status >= 400:
                    error_msg = f"API request failed: {response.status}"
                    try:
                        error_data = json.loads(response_text)
                        if "message" in error_data:
                            error_msg = f"{error_msg} - {error_data['message']}"
                        elif "details" in error_data:
                            error_msg = f"{error_msg} - {error_data['details']}"
                    except (json.JSONDecodeError, ValueError):
                        if response_text:
                            error_msg = f"{error_msg} - {response_text}"
                    return {"error": error_msg}
                if response_text:
                    return json.loads(response_text)
                return {}
    except asyncio.TimeoutError:
        return {"error": "Request timeout"}
    except Exception as e:
        return {"error": f"Request failed: {str(e)}"}

def format_stack_status(stack: dict) -> str:
    """Format stack status with emoji."""
    status = stack.get("Status", 0)
    if status == 1:
        return "✅ Active"
    elif status == 2:
        return "⚠️ Inactive"
    else:
        return "❓ Unknown"

def format_stack_type(stack_type: int) -> str:
    """Format stack type."""
    if stack_type == 1:
        return "Swarm"
    elif stack_type == 2:
        return "Compose"
    elif stack_type == 3:
        return "Kubernetes"
    else:
        return "Unknown"

# Create server instance
server = Server("portainer-stacks")
@server.list_tools()
async def handle_list_tools() -> list[types.Tool]:
    """List all available tools."""
    return [
        types.Tool(
            name="list_stacks",
            description="List all stacks across environments",
            inputSchema={
                "type": "object",
                "properties": {
                    "environment_id": {
                        "type": "string",
                        "description": "Filter by environment ID (optional)"
                    }
                }
            }
        ),
        types.Tool(
            name="get_stack",
            description="Get detailed information about a specific stack",
            inputSchema={
                "type": "object",
                "properties": {
                    "stack_id": {
                        "type": "string",
                        "description": "Stack ID"
                    }
                },
                "required": ["stack_id"]
            }
        ),
        types.Tool(
            name="get_stack_file",
            description="Get the stack file content (Docker Compose or Kubernetes manifest)",
            inputSchema={
                "type": "object",
                "properties": {
                    "stack_id": {
                        "type": "string",
                        "description": "Stack ID"
                    }
                },
                "required": ["stack_id"]
            }
        ),
        types.Tool(
            name="create_compose_stack_from_file",
            description="Create a new Docker Compose stack from file content",
            inputSchema={
                "type": "object",
                "properties": {
                    "environment_id": {
                        "type": "string",
                        "description": "Target environment ID"
                    },
                    "name": {
                        "type": "string",
                        "description": "Stack name"
                    },
                    "compose_file": {
                        "type": "string",
                        "description": "Docker Compose file content (YAML)"
                    },
                    "env_vars": {
                        "type": "array",
                        "items": {
                            "type": "object",
                            "properties": {
                                "name": {"type": "string"},
                                "value": {"type": "string"}
                            }
                        },
                        "description": "Environment variables"
                    }
                },
                "required": ["environment_id", "name", "compose_file"]
            }
        ),
        types.Tool(
            name="create_compose_stack_from_git",
            description="Create a new Docker Compose stack from Git repository",
            inputSchema={
                "type": "object",
                "properties": {
                    "environment_id": {
                        "type": "string",
                        "description": "Target environment ID"
                    },
                    "name": {
                        "type": "string",
                        "description": "Stack name"
                    },
                    "repository_url": {
                        "type": "string",
                        "description": "Git repository URL"
                    },
                    "repository_ref": {
                        "type": "string",
                        "description": "Git reference (branch/tag)",
                        "default": "main"
                    },
                    "compose_path": {
                        "type": "string",
                        "description": "Path to compose file in repository",
                        "default": "docker-compose.yml"
                    },
                    "repository_auth": {
                        "type": "boolean",
                        "description": "Use repository authentication",
                        "default": False
                    },
                    "repository_username": {
                        "type": "string",
                        "description": "Git username (if auth required)"
                    },
                    "repository_password": {
                        "type": "string",
                        "description": "Git password/token (if auth required)"
                    },
                    "env_vars": {
                        "type": "array",
                        "items": {
                            "type": "object",
                            "properties": {
                                "name": {"type": "string"},
                                "value": {"type": "string"}
                            }
                        },
                        "description": "Environment variables"
                    }
                },
                "required": ["environment_id", "name", "repository_url"]
            }
        ),
        types.Tool(
            name="create_kubernetes_stack",
            description="Create a new Kubernetes stack from manifest",
            inputSchema={
                "type": "object",
                "properties": {
                    "environment_id": {
                        "type": "string",
                        "description": "Target environment ID"
                    },
                    "name": {
                        "type": "string",
                        "description": "Stack name"
                    },
                    "namespace": {
                        "type": "string",
                        "description": "Kubernetes namespace",
                        "default": "default"
                    },
                    "manifest": {
                        "type": "string",
                        "description": "Kubernetes manifest content (YAML)"
                    }
                },
                "required": ["environment_id", "name", "manifest"]
            }
        ),
        types.Tool(
            name="update_stack",
            description="Update an existing stack",
            inputSchema={
                "type": "object",
                "properties": {
                    "stack_id": {
                        "type": "string",
                        "description": "Stack ID"
                    },
                    "compose_file": {
                        "type": "string",
                        "description": "Updated compose file or manifest (required for file-based stacks)"
                    },
                    "env_vars": {
                        "type": "array",
                        "items": {
                            "type": "object",
                            "properties": {
                                "name": {"type": "string"},
                                "value": {"type": "string"}
                            }
                        },
                        "description": "Updated environment variables"
                    },
                    "pull_image": {
                        "type": "boolean",
                        "description": "Pull latest images before updating",
                        "default": True
                    }
                },
                "required": ["stack_id"]
            }
        ),
        types.Tool(
            name="update_git_stack",
            description="Update a Git-based stack (pull latest changes)",
            inputSchema={
                "type": "object",
                "properties": {
                    "stack_id": {
                        "type": "string",
                        "description": "Stack ID"
                    },
                    "pull_image": {
                        "type": "boolean",
                        "description": "Pull latest images after updating",
                        "default": True
                    }
                },
                "required": ["stack_id"]
            }
        ),
        types.Tool(
            name="start_stack",
            description="Start a stopped stack",
            inputSchema={
                "type": "object",
                "properties": {
                    "stack_id": {
                        "type": "string",
                        "description": "Stack ID"
                    }
                },
                "required": ["stack_id"]
            }
        ),
        types.Tool(
            name="stop_stack",
            description="Stop a running stack",
            inputSchema={
                "type": "object",
                "properties": {
                    "stack_id": {
                        "type": "string",
                        "description": "Stack ID"
                    }
                },
                "required": ["stack_id"]
            }
        ),
        types.Tool(
            name="delete_stack",
            description="Delete a stack and optionally its volumes",
            inputSchema={
                "type": "object",
                "properties": {
                    "stack_id": {
                        "type": "string",
                        "description": "Stack ID"
                    },
                    "delete_volumes": {
                        "type": "boolean",
                        "description": "Also delete associated volumes",
                        "default": False
                    }
                },
                "required": ["stack_id"]
            }
        ),
        types.Tool(
            name="migrate_stack",
            description="Migrate a stack to another environment",
            inputSchema={
                "type": "object",
                "properties": {
                    "stack_id": {
                        "type": "string",
                        "description": "Stack ID"
                    },
                    "target_environment_id": {
                        "type": "string",
                        "description": "Target environment ID"
                    },
                    "new_name": {
                        "type": "string",
                        "description": "New stack name (optional)"
                    }
                },
                "required": ["stack_id", "target_environment_id"]
            }
        ),
        types.Tool(
            name="get_stack_logs",
            description="Get logs from all containers in a stack",
            inputSchema={
                "type": "object",
                "properties": {
                    "stack_id": {
                        "type": "string",
                        "description": "Stack ID"
                    },
                    "tail": {
                        "type": "integer",
                        "description": "Number of lines to show from the end",
                        "default": 100
                    },
                    "timestamps": {
                        "type": "boolean",
                        "description": "Show timestamps",
                        "default": True
                    }
                },
                "required": ["stack_id"]
            }
        )
    ]
@server.call_tool()
async def handle_call_tool(
name: str,
arguments: dict | None
) -> list[types.TextContent | types.ImageContent | types.EmbeddedResource]:
"""Handle tool execution."""
if not arguments:
arguments = {}
try:
# List stacks
if name == "list_stacks":
endpoint = "/api/stacks"
params = {}
result = await make_request("GET", endpoint, params=params)
if "error" in result:
return [types.TextContent(type="text", text=f"Error: {result['error']}")]
# Filter by environment if specified
stacks = result
if arguments.get("environment_id"):
env_id = int(arguments["environment_id"])
stacks = [s for s in stacks if s.get("EndpointId") == env_id]
if not stacks:
return [types.TextContent(type="text", text="No stacks found")]
output = "📚 Stacks:\n\n"
# Group by environment
env_groups = {}
for stack in stacks:
env_id = stack.get("EndpointId", "Unknown")
if env_id not in env_groups:
env_groups[env_id] = []
env_groups[env_id].append(stack)
for env_id, env_stacks in env_groups.items():
output += f"Environment {env_id}:\n"
for stack in env_stacks:
status = format_stack_status(stack)
stack_type = format_stack_type(stack.get("Type", 0))
output += f"{stack['Name']} (ID: {stack['Id']})\n"
output += f" Type: {stack_type} | Status: {status}\n"
if stack.get("GitConfig"):
output += f" Git: {stack['GitConfig']['URL']} ({stack['GitConfig']['ReferenceName']})\n"
output += "\n"
return [types.TextContent(type="text", text=output)]
# Get stack details
elif name == "get_stack":
stack_id = arguments["stack_id"]
result = await make_request("GET", f"/api/stacks/{stack_id}")
if "error" in result:
return [types.TextContent(type="text", text=f"Error: {result['error']}")]
output = f"📚 Stack: {result['Name']}\n\n"
output += f"ID: {result['Id']}\n"
output += f"Type: {format_stack_type(result.get('Type', 0))}\n"
output += f"Status: {format_stack_status(result)}\n"
output += f"Environment ID: {result.get('EndpointId', 'Unknown')}\n"
output += f"Created by: {result.get('CreatedBy', 'Unknown')}\n"
if result.get("GitConfig"):
git = result["GitConfig"]
output += f"\n🔗 Git Configuration:\n"
output += f" Repository: {git['URL']}\n"
output += f" Reference: {git['ReferenceName']}\n"
output += f" Path: {git.get('ComposeFilePathInRepository', 'N/A')}\n"
if result.get("Env"):
output += f"\n🔧 Environment Variables:\n"
for env in result["Env"]:
output += f" {env['name']} = {env['value']}\n"
if result.get("ResourceControl"):
rc = result["ResourceControl"]
output += f"\n🔒 Access Control:\n"
output += f" Public: {'Yes' if rc.get('Public') else 'No'}\n"
if rc.get("Users"):
output += f" Users: {len(rc['Users'])} users\n"
if rc.get("Teams"):
output += f" Teams: {len(rc['Teams'])} teams\n"
return [types.TextContent(type="text", text=output)]
# Get stack file
elif name == "get_stack_file":
stack_id = arguments["stack_id"]
result = await make_request("GET", f"/api/stacks/{stack_id}/file")
if "error" in result:
return [types.TextContent(type="text", text=f"Error: {result['error']}")]
content = result.get("StackFileContent", "")
if not content:
return [types.TextContent(type="text", text="Stack file is empty")]
output = f"📄 Stack File Content:\n\n```yaml\n{content}\n```"
return [types.TextContent(type="text", text=output)]
# Create compose stack from file
elif name == "create_compose_stack_from_file":
env_id = arguments["environment_id"]
# Build request data
data = {
"Name": arguments["name"],
"StackFileContent": arguments["compose_file"],
"EndpointId": int(env_id)
}
# Add environment variables if provided
if arguments.get("env_vars"):
data["Env"] = arguments["env_vars"]
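# Shape note (illustrative, matching how Env entries are read elsewhere in
# this server): Portainer expects env_vars as a list of name/value objects,
# e.g.
#   [{"name": "IMAGE_TAG", "value": "1.4.2"},
#    {"name": "REPLICAS", "value": "3"}]
# The names and values above are hypothetical placeholders.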
result = await make_request("POST", "/api/stacks", json_data=data)
if "error" in result:
return [types.TextContent(type="text", text=f"Error: {result['error']}")]
output = f"✅ Stack created successfully!\n\n"
output += f"Name: {result['Name']}\n"
output += f"ID: {result['Id']}\n"
output += f"Type: Compose\n"
output += f"Environment: {result.get('EndpointId', 'Unknown')}\n"
return [types.TextContent(type="text", text=output)]
# Create compose stack from Git
elif name == "create_compose_stack_from_git":
env_id = arguments["environment_id"]
# Build request data
data = {
"Name": arguments["name"],
"EndpointId": int(env_id),
"GitConfig": {
"URL": arguments["repository_url"],
"ReferenceName": arguments.get("repository_ref", "main"),
"ComposeFilePathInRepository": arguments.get("compose_path", "docker-compose.yml")
}
}
# Add authentication if provided
if arguments.get("repository_auth") and arguments.get("repository_username"):
data["GitConfig"]["Authentication"] = {
"Username": arguments["repository_username"],
"Password": arguments.get("repository_password", "")
}
# Add environment variables if provided
if arguments.get("env_vars"):
data["Env"] = arguments["env_vars"]
result = await make_request("POST", "/api/stacks", json_data=data)
if "error" in result:
return [types.TextContent(type="text", text=f"Error: {result['error']}")]
output = f"✅ Git-based stack created successfully!\n\n"
output += f"Name: {result['Name']}\n"
output += f"ID: {result['Id']}\n"
output += f"Repository: {arguments['repository_url']}\n"
output += f"Branch/Tag: {arguments.get('repository_ref', 'main')}\n"
return [types.TextContent(type="text", text=output)]
# Create Kubernetes stack
elif name == "create_kubernetes_stack":
env_id = arguments["environment_id"]
# Build request data
data = {
"Name": arguments["name"],
"StackFileContent": arguments["manifest"],
"EndpointId": int(env_id),
"Type": 3, # Kubernetes type
"Namespace": arguments.get("namespace", "default")
}
result = await make_request("POST", "/api/stacks", json_data=data)
if "error" in result:
return [types.TextContent(type="text", text=f"Error: {result['error']}")]
output = f"✅ Kubernetes stack created successfully!\n\n"
output += f"Name: {result['Name']}\n"
output += f"ID: {result['Id']}\n"
output += f"Namespace: {arguments.get('namespace', 'default')}\n"
output += f"Environment: {result.get('EndpointId', 'Unknown')}\n"
return [types.TextContent(type="text", text=output)]
# Update stack
elif name == "update_stack":
stack_id = arguments["stack_id"]
# Get current stack info first
stack_info = await make_request("GET", f"/api/stacks/{stack_id}")
if "error" in stack_info:
return [types.TextContent(type="text", text=f"Error: {stack_info['error']}")]
# Build update data
data = {}
if arguments.get("compose_file"):
data["StackFileContent"] = arguments["compose_file"]
else:
# The update endpoint expects the file content, so fall back to the
# stack's current file when no new compose file is supplied
current_file = await make_request("GET", f"/api/stacks/{stack_id}/file")
if "error" not in current_file:
data["StackFileContent"] = current_file.get("StackFileContent", "")
if arguments.get("env_vars"):
data["Env"] = arguments["env_vars"]
data["Prune"] = arguments.get("prune", False)
data["PullImage"] = arguments.get("pull_image", True)
endpoint = f"/api/stacks/{stack_id}"
params = {"endpointId": stack_info["EndpointId"]}
result = await make_request("PUT", endpoint, json_data=data, params=params)
if "error" in result:
return [types.TextContent(type="text", text=f"Error: {result['error']}")]
return [types.TextContent(type="text", text=f"✅ Stack '{stack_info['Name']}' updated successfully!")]
# Update Git stack
elif name == "update_git_stack":
stack_id = arguments["stack_id"]
# Get current stack info
stack_info = await make_request("GET", f"/api/stacks/{stack_id}")
if "error" in stack_info:
return [types.TextContent(type="text", text=f"Error: {stack_info['error']}")]
if not stack_info.get("GitConfig"):
return [types.TextContent(type="text", text="Error: This is not a Git-based stack")]
endpoint = f"/api/stacks/{stack_id}/git/redeploy"
params = {
"endpointId": stack_info["EndpointId"],
"pullImage": str(arguments.get("pull_image", True)).lower()
}
result = await make_request("PUT", endpoint, params=params)
if "error" in result:
return [types.TextContent(type="text", text=f"Error: {result['error']}")]
return [types.TextContent(type="text", text=f"✅ Git stack '{stack_info['Name']}' updated from repository!")]
# Start stack
elif name == "start_stack":
stack_id = arguments["stack_id"]
# Get stack info
stack_info = await make_request("GET", f"/api/stacks/{stack_id}")
if "error" in stack_info:
return [types.TextContent(type="text", text=f"Error: {stack_info['error']}")]
endpoint = f"/api/stacks/{stack_id}/start"
params = {"endpointId": stack_info["EndpointId"]}
result = await make_request("POST", endpoint, params=params)
if "error" in result:
return [types.TextContent(type="text", text=f"Error: {result['error']}")]
return [types.TextContent(type="text", text=f"✅ Stack '{stack_info['Name']}' started successfully!")]
# Stop stack
elif name == "stop_stack":
stack_id = arguments["stack_id"]
# Get stack info
stack_info = await make_request("GET", f"/api/stacks/{stack_id}")
if "error" in stack_info:
return [types.TextContent(type="text", text=f"Error: {stack_info['error']}")]
endpoint = f"/api/stacks/{stack_id}/stop"
params = {"endpointId": stack_info["EndpointId"]}
result = await make_request("POST", endpoint, params=params)
if "error" in result:
return [types.TextContent(type="text", text=f"Error: {result['error']}")]
return [types.TextContent(type="text", text=f"⏹️ Stack '{stack_info['Name']}' stopped successfully!")]
# Delete stack
elif name == "delete_stack":
stack_id = arguments["stack_id"]
# Get stack info
stack_info = await make_request("GET", f"/api/stacks/{stack_id}")
if "error" in stack_info:
return [types.TextContent(type="text", text=f"Error: {stack_info['error']}")]
endpoint = f"/api/stacks/{stack_id}"
params = {
"endpointId": stack_info["EndpointId"],
"external": "false"
}
if arguments.get("delete_volumes"):
# For compose stacks, this deletes volumes
data = {"removeVolumes": True}
result = await make_request("DELETE", endpoint, params=params, json_data=data)
else:
result = await make_request("DELETE", endpoint, params=params)
if "error" in result:
return [types.TextContent(type="text", text=f"Error: {result['error']}")]
output = f"🗑️ Stack '{stack_info['Name']}' deleted successfully!"
if arguments.get("delete_volumes"):
output += " (including volumes)"
return [types.TextContent(type="text", text=output)]
# Migrate stack
elif name == "migrate_stack":
stack_id = arguments["stack_id"]
target_env = arguments["target_environment_id"]
# Get current stack info and file
stack_info = await make_request("GET", f"/api/stacks/{stack_id}")
if "error" in stack_info:
return [types.TextContent(type="text", text=f"Error: {stack_info['error']}")]
stack_file = await make_request("GET", f"/api/stacks/{stack_id}/file")
if "error" in stack_file:
return [types.TextContent(type="text", text=f"Error: {stack_file['error']}")]
# Create new stack in target environment
new_name = arguments.get("new_name", f"{stack_info['Name']}-migrated")
data = {
"Name": new_name,
"StackFileContent": stack_file.get("StackFileContent", ""),
"EndpointId": int(target_env),
"Type": stack_info.get("Type", 2)
}
# Copy environment variables if any
if stack_info.get("Env"):
data["Env"] = stack_info["Env"]
# For Kubernetes stacks, copy namespace
if stack_info.get("Type") == 3 and stack_info.get("Namespace"):
data["Namespace"] = stack_info["Namespace"]
result = await make_request("POST", "/api/stacks", json_data=data)
if "error" in result:
return [types.TextContent(type="text", text=f"Error: {result['error']}")]
output = f"✅ Stack migrated successfully!\n\n"
output += f"Original: {stack_info['Name']} (Environment {stack_info['EndpointId']})\n"
output += f"New: {new_name} (Environment {target_env})\n"
output += f"New Stack ID: {result['Id']}\n"
output += "\nNote: Original stack was not deleted."
return [types.TextContent(type="text", text=output)]
# Get stack logs
elif name == "get_stack_logs":
stack_id = arguments["stack_id"]
# Stub: full log aggregation would need to (1) fetch the stack details,
# (2) list every container that belongs to the stack, and (3) merge the
# logs from each of those containers
return [types.TextContent(
type="text",
text="Note: stack log aggregation is not implemented yet. It requires listing all containers in the stack; use `docker logs` on individual containers for now."
)]
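# A hedged sketch of what the aggregation could look like, left as comments
# because it is untested and the proxy endpoint and filter label below are
# assumptions, not verified API calls. It assumes the stack details were
# fetched first (e.g. stack_info = await make_request("GET", f"/api/stacks/{stack_id}")):
#
#   env_id = stack_info["EndpointId"]
#   filters = json.dumps(
#       {"label": [f"com.docker.compose.project={stack_info['Name'].lower()}"]}
#   )
#   containers = await make_request(
#       "GET",
#       f"/api/endpoints/{env_id}/docker/containers/json",
#       params={"all": "true", "filters": filters},
#   )
#   # ...then fetch /containers/{id}/logs for each result and concatenate.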
else:
return [types.TextContent(type="text", text=f"Unknown tool: {name}")]
except Exception as e:
logger.error(f"Error in {name}: {str(e)}", exc_info=True)
return [types.TextContent(type="text", text=f"Error: {str(e)}")]
async def main():
# Run the server using stdin/stdout streams
async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):
await server.run(
read_stream,
write_stream,
InitializationOptions(
server_name="portainer-stacks",
server_version="0.1.0",
capabilities=server.get_capabilities(
notification_options=NotificationOptions(),
experimental_capabilities={},
),
),
)
if __name__ == "__main__":
asyncio.run(main())