This commit is contained in:
2026-03-10 14:30:51 -03:00
parent c2d54dd915
commit 290f05be87
8 changed files with 0 additions and 1198 deletions

View File

@@ -1,8 +0,0 @@
Hello, Adolfo from Portainer here.
If you don't persist data and use a replica count of 1: Deployment
If you persist data using a shared access policy and use a replica count >1: Deployment
If you don't persist data and use a global deployment: DaemonSet
If you persist data using an isolated access policy: StatefulSet
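For the first three cases, a chart such as traefik's (its values.yaml appears later in this commit) exposes the choice directly; a minimal sketch of the relevant values, assuming the chart's `deployment.kind` field:

```yaml
# Sketch: expressing the workload-kind decision in a helm chart that
# supports both kinds (field names follow the traefik chart).
deployment:
  # "Deployment" for the replica-count cases,
  # "DaemonSet" for the global (one pod per node) case
  kind: Deployment
  replicas: 1
```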

View File

@@ -1,3 +0,0 @@
# Portainer Video Scripts
This is the repository of scripts for the Portainer content videos I will produce.

View File

@@ -1,72 +0,0 @@
Hello, Adolfo from Portainer here
In this video I want to show a basic deployment of the traefik ingress controller on a Charmed Kubernetes cluster and how to use it with Portainer.
We have a video on how to deploy Portainer on a Charmed Kubernetes cluster that you can watch here, and I highly recommend it as it is a prerequisite for this tutorial.
Charmed Kubernetes comes with a default nginx-ingress controller that uses ports 80 and 443 that are commonly used to access websites, apps and APIs over the internet.
Traefik also requires that these ports are available so in this exercise I am going to remove the default nginx-ingress controller. You could have both running on your cluster but that would require more complex configuration of firewall rules that can vary from one cluster environment or cloud provider to another.
I am assuming that you already have Portainer deployed on your Charmed Kubernetes cluster and have access to the cluster via kubectl. helm is also required to deploy traefik, so make sure you have that command installed as well. It is available via snap.
snap install helm --classic
Let's start by doing some initial prep-work.
I am going to download the default values.yaml file from the traefik git repository to my machine. We will need to modify this helm values.yaml file slightly so we can make it work on our Charmed Kubernetes cluster:
wget https://raw.githubusercontent.com/traefik/traefik-helm-chart/master/traefik/values.yaml
With the sed command I am going to uncomment the hostPort values so that traefik binds ports 80 and 443.
sed -i 's/\# hostPort: 8000/hostPort: 80/g' values.yaml
sed -i 's/\# hostPort: 8443/hostPort: 443/g' values.yaml
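Before running the substitutions against the real file, a quick self-contained reproduction (the two sample lines are assumed to match the chart's formatting):

```shell
# Reproduce the substitution on a throwaway sample file; the two
# commented lines mimic the ports section of the chart's values.yaml.
printf '  # hostPort: 8000\n  # hostPort: 8443\n' > /tmp/values-sample.yaml
sed -i 's/# hostPort: 8000/hostPort: 80/g' /tmp/values-sample.yaml
sed -i 's/# hostPort: 8443/hostPort: 443/g' /tmp/values-sample.yaml
cat /tmp/values-sample.yaml
# expected:
#   hostPort: 80
#   hostPort: 443
```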
Let's add the traefik repository to helm.
helm repo add traefik https://helm.traefik.io/traefik
helm repo update
The next step is to remove the default nginx-ingress controller.
You can do this by typing
kubectl delete namespace ingress-nginx-kubernetes-worker
Once the nginx-ingress controller is removed, we can deploy Traefik using helm, making sure that it uses the values.yaml file we just edited.
helm install traefik traefik/traefik -f values.yaml
I am going to test this with a couple of apps. But before doing so, it is important to mention that I am using a wildcard domain name set up on my DNS server. This greatly simplifies my deployment, given I won't need to add a host entry to my DNS server every time I publish an app, website, or service on my cluster.
My wildcard domain is pointing to the worker-0 machine's IP address. You can get the IP address by typing
juju status | grep kubernetes-worker/0
Ok, now let's go to our Portainer instance and deploy some apps to be routed via traefik. I am going to connect to my Portainer instance via an ssh tunnel using juju ssh kubernetes-master/0 -L 30777:localhost:30777 -fN
I am going to test deploying caddy and pointing the app to a domain I use for testing purposes called zz11.net
I will start by creating a Resource pool for this app here.
The hostname will be caddy.zz11.net
Now I am going to deploy the Application caddy and use the Resource pool so that traefik can route the incoming request accordingly.
Let's try with another tiny app called whoami.
containous/whoami
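When you publish an application like whoami through a resource pool with an ingress hostname, Portainer wires up an Ingress object for traefik behind the scenes; a rough sketch of the equivalent manifest (the service name and port are assumptions):

```yaml
# Sketch of the kind of Ingress Portainer creates when a hostname
# is set on the resource pool; names and ports are illustrative.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - host: whoami.zz11.net
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: whoami
                port:
                  number: 80
```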
dokuwiki
bitnami/dokuwiki
8080
16
Deploy and Manage Traefik with Portainer on a Charmed Kubernetes cluster

View File

@@ -1,418 +0,0 @@
# Default values for Traefik
image:
name: traefik
# defaults to appVersion
tag: ""
pullPolicy: IfNotPresent
#
# Configure the deployment
#
deployment:
enabled: true
# Can be either Deployment or DaemonSet
kind: Deployment
# Number of pods of the deployment (only applies when kind == Deployment)
replicas: 1
# Additional deployment annotations (e.g. for jaeger-operator sidecar injection)
annotations: {}
# Additional deployment labels (e.g. for filtering deployment by custom labels)
labels: {}
# Additional pod annotations (e.g. for mesh injection or prometheus scraping)
podAnnotations: {}
# Additional Pod labels (e.g. for filtering Pod by custom labels)
podLabels: {}
# Additional containers (e.g. for metric offloading sidecars)
additionalContainers: []
# https://docs.datadoghq.com/developers/dogstatsd/unix_socket/?tab=host
# - name: socat-proxy
# image: alpine/socat:1.0.5
# args: ["-s", "-u", "udp-recv:8125", "unix-sendto:/socket/socket"]
# volumeMounts:
# - name: dsdsocket
# mountPath: /socket
# Additional volumes available for use with initContainers and additionalContainers
additionalVolumes: []
# - name: dsdsocket
# hostPath:
# path: /var/run/statsd-exporter
# Additional initContainers (e.g. for setting file permission as shown below)
initContainers: []
# The "volume-permissions" init container is required if you run into permission issues.
# Related issue: https://github.com/traefik/traefik/issues/6972
# - name: volume-permissions
# image: busybox:1.31.1
# command: ["sh", "-c", "chmod -Rv 600 /data/*"]
# volumeMounts:
# - name: data
# mountPath: /data
# Custom pod DNS policy. Apply if `hostNetwork: true`
# dnsPolicy: ClusterFirstWithHostNet
# Additional imagePullSecrets
imagePullSecrets: []
# - name: myRegistryKeySecretName
# Pod disruption budget
podDisruptionBudget:
enabled: false
# maxUnavailable: 1
# minAvailable: 0
# Use ingressClass. Ignored if Traefik version < 2.3 / kubernetes < 1.18.x
ingressClass:
# true is not unit-testable yet, pending https://github.com/rancher/helm-unittest/pull/12
enabled: true
isDefaultClass: true
# Activate Pilot integration
pilot:
enabled: false
token: ""
dashboard: true
# Enable experimental features
experimental:
plugins:
enabled: false
kubernetesGateway:
enabled: false
appLabelSelector: "traefik"
certificates: []
# - group: "core"
# kind: "Secret"
# name: "mysecret"
# Create an IngressRoute for the dashboard
ingressRoute:
dashboard:
enabled: true
# Additional ingressRoute annotations (e.g. for kubernetes.io/ingress.class)
annotations: {}
# Additional ingressRoute labels (e.g. for filtering IngressRoute by custom labels)
labels: {}
rollingUpdate:
maxUnavailable: 1
maxSurge: 1
#
# Configure providers
#
providers:
kubernetesCRD:
enabled: true
namespaces: []
# - "default"
kubernetesIngress:
enabled: true
# labelSelector: environment=production,method=traefik
namespaces: []
# - "default"
# IP used for Kubernetes Ingress endpoints
publishedService:
enabled: false
# Published Kubernetes Service to copy status from. Format: namespace/servicename
# By default this Traefik service
# pathOverride: ""
#
# Add volumes to the traefik pod. The volume name will be passed to tpl.
# This can be used to mount a cert pair or a configmap that holds a config.toml file.
# After the volume has been mounted, add the configs into traefik by using the `additionalArguments` list below, eg:
# additionalArguments:
# - "--providers.file.filename=/config/dynamic.toml"
# - "--ping"
# - "--ping.entrypoint=web"
volumes: []
# - name: public-cert
# mountPath: "/certs"
# type: secret
# - name: '{{ printf "%s-configs" .Release.Name }}'
# mountPath: "/config"
# type: configMap
# Additional volumeMounts to add to the Traefik container
additionalVolumeMounts: []
# For instance when using a logshipper for access logs
# - name: traefik-logs
# mountPath: /var/log/traefik
# Logs
# https://docs.traefik.io/observability/logs/
logs:
# Traefik logs concern everything that happens to Traefik itself (startup, configuration, events, shutdown, and so on).
general:
# By default, the logs use a text format (common), but you can
# also ask for the json format in the format option
# format: json
# By default, the level is set to ERROR. Alternative logging levels are DEBUG, PANIC, FATAL, ERROR, WARN, and INFO.
level: ERROR
access:
# To enable access logs
enabled: false
# By default, logs are written using the Common Log Format (CLF).
# To write logs in JSON, use json in the format option.
# If the given format is unsupported, the default (CLF) is used instead.
# format: json
# To write the logs in an asynchronous fashion, specify a bufferingSize option.
# This option represents the number of log lines Traefik will keep in memory before writing
# them to the selected output. In some cases, this option can greatly help performances.
# bufferingSize: 100
# Filtering https://docs.traefik.io/observability/access-logs/#filtering
filters: {}
# statuscodes: "200,300-302"
# retryattempts: true
# minduration: 10ms
# Fields
# https://docs.traefik.io/observability/access-logs/#limiting-the-fieldsincluding-headers
fields:
general:
defaultmode: keep
names: {}
# Examples:
# ClientUsername: drop
headers:
defaultmode: drop
names: {}
# Examples:
# User-Agent: redact
# Authorization: drop
# Content-Type: keep
globalArguments:
- "--global.checknewversion"
- "--global.sendanonymoususage"
# Configure Traefik static configuration
# Additional arguments to be passed at Traefik's binary
# All available options available on https://docs.traefik.io/reference/static-configuration/cli/
## Use curly braces to pass values: `helm install --set="additionalArguments={--providers.kubernetesingress.ingressclass=traefik-internal,--log.level=DEBUG}"`
additionalArguments:
- "--providers.kubernetesingress.ingressclass=traefik"
- "--log.level=DEBUG"
- "--log.format=json"
- "--certificatesresolvers.le.acme.caserver=https://acme-v02.api.letsencrypt.org/directory"
- "--certificatesresolvers.le.acme.tlschallenge=true"
- "--certificatesresolvers.le.acme.email=adelorenzo@oe74.net"
- "--certificatesresolvers.le.acme.storage=/data/acme.json"
# Environment variables to be passed to Traefik's binary
env: []
# - name: SOME_VAR
# value: some-var-value
# - name: SOME_VAR_FROM_CONFIG_MAP
# valueFrom:
# configMapRef:
# name: configmap-name
# key: config-key
# - name: SOME_SECRET
# valueFrom:
# secretKeyRef:
# name: secret-name
# key: secret-key
envFrom: []
# - configMapRef:
# name: config-map-name
# - secretRef:
# name: secret-name
# Configure ports
ports:
# The name of this one can't be changed as it is used for the readiness and
# liveness probes, but you can adjust its config to your liking
traefik:
port: 9000
# Use hostPort if set.
# hostPort: 9000
#
# Use hostIP if set. If not set, Kubernetes will default to 0.0.0.0, which
# means it's listening on all your interfaces and all your IPs. You may want
# to set this value if you need traefik to listen on specific interface
# only.
# hostIP: 192.168.100.10
# Override the liveness/readiness port. This is useful to integrate traefik
# with an external Load Balancer that performs healthchecks.
# healthchecksPort: 9000
# Defines whether the port is exposed if service.type is LoadBalancer or
# NodePort.
#
# You SHOULD NOT expose the traefik port on production deployments.
# If you want to access it from outside of your cluster,
# use `kubectl port-forward` or create a secure ingress
expose: false
# The exposed port for this service
exposedPort: 9000
# The port protocol (TCP/UDP)
protocol: TCP
web:
port: 8000
hostPort: 80
expose: true
exposedPort: 80
# The port protocol (TCP/UDP)
protocol: TCP
# Use nodeport if set. This is useful if you have configured Traefik in a
# LoadBalancer
# nodePort: 32080
# Port Redirections
# Added in 2.2, you can make permanent redirects via entrypoints.
# https://docs.traefik.io/routing/entrypoints/#redirection
redirectTo: websecure
websecure:
port: 8443
hostPort: 443
expose: true
exposedPort: 443
# The port protocol (TCP/UDP)
protocol: TCP
# nodePort: 32443
# Set TLS at the entrypoint
# https://doc.traefik.io/traefik/routing/entrypoints/#tls
tls:
enabled: true
# this is the name of a TLSOption definition
options: ""
certResolver: "le"
domains:
- main: zz11.net
# sans:
# - foo.example.com
# - bar.example.com
# TLS Options are created as TLSOption CRDs
# https://doc.traefik.io/traefik/https/tls/#tls-options
# Example:
# tlsOptions:
# default:
# sniStrict: true
# preferServerCipherSuites: true
# foobar:
# curvePreferences:
# - CurveP521
# - CurveP384
tlsOptions: {}
# Options for the main traefik service, where the entrypoints traffic comes
# from.
service:
enabled: true
type: LoadBalancer
# Additional annotations (e.g. for cloud provider specific config)
annotations: {}
# Additional service labels (e.g. for filtering Service by custom labels)
labels: {}
# Additional entries here will be added to the service spec. Cannot contains
# type, selector or ports entries.
spec: {}
# externalTrafficPolicy: Cluster
# loadBalancerIP: "1.2.3.4"
# clusterIP: "2.3.4.5"
loadBalancerSourceRanges: []
# - 192.168.0.1/32
# - 172.16.0.0/16
externalIPs: []
# - 1.2.3.4
## Create HorizontalPodAutoscaler object.
##
autoscaling:
enabled: false
# minReplicas: 1
# maxReplicas: 10
# metrics:
# - type: Resource
# resource:
# name: cpu
# targetAverageUtilization: 60
# - type: Resource
# resource:
# name: memory
# targetAverageUtilization: 60
# Enable persistence using Persistent Volume Claims
# ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
# After the pvc has been mounted, add the configs into traefik by using the `additionalArguments` list below, eg:
# additionalArguments:
# - "--certificatesresolvers.le.acme.storage=/data/acme.json"
# It will persist TLS certificates.
persistence:
enabled: true
name: data
# existingClaim: ""
accessMode: ReadWriteOnce
size: 128Mi
# storageClass: ""
path: /data
annotations: {}
# subPath: "" # only mount a subpath of the Volume into the pod
# If hostNetwork is true, runs traefik in the host network namespace
# To prevent unschedulable pods due to port collisions, if hostNetwork=true
# and replicas>1, a pod anti-affinity is recommended and will be set if the
# affinity is left as default.
hostNetwork: false
# Whether Role Based Access Control objects like roles and rolebindings should be created
rbac:
enabled: true
# If set to false, installs ClusterRole and ClusterRoleBinding so Traefik can be used across namespaces.
# If set to true, installs namespace-specific Role and RoleBinding and requires provider configuration be set to that same namespace
namespaced: false
# Enable to create a PodSecurityPolicy and assign it to the Service Account via RoleBinding or ClusterRoleBinding
podSecurityPolicy:
enabled: false
# The service account the pods will use to interact with the Kubernetes API
serviceAccount:
# If set, an existing service account is used
# If not set, a service account is created automatically using the fullname template
name: ""
# Additional serviceAccount annotations (e.g. for oidc authentication)
serviceAccountAnnotations: {}
resources: {}
# requests:
# cpu: "100m"
# memory: "50Mi"
# limits:
# cpu: "300m"
# memory: "150Mi"
affinity: {}
# # This example pod anti-affinity forces the scheduler to put traefik pods
# # on nodes where no other traefik pods are scheduled.
# # It should be used when hostNetwork: true to prevent port conflicts
# podAntiAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# - labelSelector:
# matchExpressions:
# - key: app
# operator: In
# values:
# - {{ template "traefik.name" . }}
# topologyKey: failure-domain.beta.kubernetes.io/zone
nodeSelector: {}
tolerations: []
# Pods can have priority.
# Priority indicates the importance of a Pod relative to other Pods.
priorityClassName: ""
# Set the container security context
# To run the container with ports below 1024 this will need to be adjusted to run as root
securityContext:
capabilities:
drop: [ALL]
readOnlyRootFilesystem: true
runAsGroup: 65532
runAsNonRoot: true
runAsUser: 65532
podSecurityContext:
fsGroup: 65532

View File

@@ -1,71 +0,0 @@
Hello, Adolfo from Portainer here
We have prepared a set of comparison videos of Portainer vs 4 different Kubernetes management tools:
Kubernetes Dashboard
Lens
Crossplane
Rancher UI
The idea is to show the steps required to deploy an application on each of the tools vs Portainer, and I am going to use a basic implementation of the redis database.
Here I start with Portainer vs Kubernetes Dashboard with a redis server deployment. In both cases I use microk8s, and the process starts with the search for the proper container image.
In Portainer I used the Applications menu option and deployed redis with the image
bitnami/redis
apiVersion: v1
kind: ConfigMap
metadata:
name: example-redis-config
data:
redis-config: ""
---
apiVersion: v1
kind: Pod
metadata:
name: redis
spec:
containers:
- name: redis
image: redis:5.0.4
command:
- redis-server
- "/redis-master/redis.conf"
env:
- name: MASTER
value: "true"
ports:
- containerPort: 6379
resources:
limits:
cpu: "0.1"
volumeMounts:
- mountPath: /redis-master-data
name: data
- mountPath: /redis-master
name: config
volumes:
- name: data
emptyDir: {}
- name: config
configMap:
name: example-redis-config
items:
- key: redis-config
path: redis.conf
token=$(microk8s kubectl -n kube-system get secret | grep default-token | cut -d " " -f1)
microk8s kubectl -n kube-system describe secret $token
kubectl create clusterrolebinding --user system:serviceaccount:kube-system:default kube-system-cluster-admin --clusterrole cluster-admin
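The token lookup above is plain text processing; a local simulation with canned `kubectl get secret` output (the secret name is made up):

```shell
# Simulate the grep/cut token-name extraction with canned output
# resembling `kubectl get secret`; the secret name is illustrative.
sample='default-token-x7k2p   kubernetes.io/service-account-token   3      5d'
token=$(printf '%s\n' "$sample" | grep default-token | cut -d " " -f1)
echo "$token"   # default-token-x7k2p
```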

View File

@@ -1,29 +0,0 @@
Recommended content:
From a single-server MVP to scaling easily with Portainer
We assume you already have your web-app running within a container
Select your orchestration (what is best practice, what are reasons, what are limits ???)
- Kubernetes
- Docker Swarm
10-100 servers: Docker Swarm
50-200 servers: microk8s (as easy as Docker Swarm, but prepared for further growth ???)
100-100,000 servers: Kubernetes
Setup infrastructure (swarm or microk8s)
Launch servers (virtual or bare metal)
Create master
Join further nodes
Launch Portainer
Launch reverse proxy via Portainer
The reverse proxy will automatically load balance all incoming requests to the web-app containers
The proxy will hot reload when containers change, not interrupt ongoing and long-running requests (?)
The proxy can automatically forward to services based on sub-domains and/or paths via labels (?)
Launch services (web-app and others)
Launch your services with Portainer, set labels for sub-domain and/or path
Example: www (wordpress), api (nodejs) (?)
Database: only one instance per server on dedicated servers
Scale up
Check metrics
Easily scale services up and down, add more servers
7. b. Manage credentials, pass them to the web-app so it can connect to the database.

View File

@@ -1,581 +0,0 @@
{
"version": "3",
"templates": [
{
"id": 1,
"type": 3,
"title": "Ollama",
"description": "Local LLM inference engine supporting Llama, Mistral, Qwen, Gemma, Phi and 100+ models with GPU acceleration",
"note": "Requires NVIDIA GPU with Docker GPU runtime configured. Pull models after deployment with: <code>docker exec ollama ollama pull llama3.1</code>",
"categories": ["ai", "llm", "inference"],
"platform": "linux",
"logo": "https://ollama.com/public/ollama.png",
"repository": {
"url": "https://git.oe74.net/adelorenzo/portainer_scripts",
"stackfile": "ai-templates/stacks/ollama/docker-compose.yml"
},
"env": [
{
"name": "OLLAMA_PORT",
"label": "Ollama API port",
"default": "11434"
},
{
"name": "OLLAMA_NUM_PARALLEL",
"label": "Max parallel requests",
"default": "4"
},
{
"name": "OLLAMA_MAX_LOADED_MODELS",
"label": "Max models loaded in VRAM",
"default": "2"
}
]
},
{
"id": 2,
"type": 3,
"title": "Open WebUI + Ollama",
"description": "Full-featured ChatGPT-like web interface bundled with Ollama backend for local LLM inference",
"note": "Access the web UI at the configured port. First user to register becomes admin. Requires NVIDIA GPU.",
"categories": ["ai", "llm", "chat-ui"],
"platform": "linux",
"logo": "https://docs.openwebui.com/img/logo.png",
"repository": {
"url": "https://git.oe74.net/adelorenzo/portainer_scripts",
"stackfile": "ai-templates/stacks/open-webui/docker-compose.yml"
},
"env": [
{
"name": "OPEN_WEBUI_PORT",
"label": "Web UI port",
"default": "3000"
},
{
"name": "OLLAMA_PORT",
"label": "Ollama API port",
"default": "11434"
},
{
"name": "WEBUI_SECRET_KEY",
"label": "Secret key for sessions",
"default": "changeme"
},
{
"name": "ENABLE_SIGNUP",
"label": "Allow user registration",
"default": "true"
}
]
},
{
"id": 3,
"type": 3,
"title": "LocalAI",
"description": "Drop-in OpenAI API compatible replacement. Run LLMs, generate images, audio locally with GPU acceleration",
"note": "Exposes an OpenAI-compatible API at /v1/. Models can be loaded via the API or placed in the models volume.",
"categories": ["ai", "llm", "openai-api"],
"platform": "linux",
"logo": "https://localai.io/logo.png",
"repository": {
"url": "https://git.oe74.net/adelorenzo/portainer_scripts",
"stackfile": "ai-templates/stacks/localai/docker-compose.yml"
},
"env": [
{
"name": "LOCALAI_PORT",
"label": "API port",
"default": "8080"
},
{
"name": "THREADS",
"label": "CPU threads for inference",
"default": "4"
},
{
"name": "CONTEXT_SIZE",
"label": "Default context window size",
"default": "4096"
}
]
},
{
"id": 4,
"type": 3,
"title": "vLLM",
"description": "High-throughput LLM serving engine with PagedAttention, continuous batching, and OpenAI-compatible API",
"note": "Requires NVIDIA GPU with sufficient VRAM for the chosen model. HuggingFace token needed for gated models.",
"categories": ["ai", "llm", "inference", "high-performance"],
"platform": "linux",
"logo": "https://docs.vllm.ai/en/latest/_static/vllm-logo-text-light.png",
"repository": {
"url": "https://git.oe74.net/adelorenzo/portainer_scripts",
"stackfile": "ai-templates/stacks/vllm/docker-compose.yml"
},
"env": [
{
"name": "VLLM_PORT",
"label": "API port",
"default": "8000"
},
{
"name": "MODEL_NAME",
"label": "HuggingFace model ID",
"default": "meta-llama/Llama-3.1-8B-Instruct"
},
{
"name": "HF_TOKEN",
"label": "HuggingFace access token"
},
{
"name": "MAX_MODEL_LEN",
"label": "Max sequence length",
"default": "4096"
},
{
"name": "GPU_MEM_UTIL",
"label": "GPU memory utilization (0-1)",
"default": "0.90"
},
{
"name": "TENSOR_PARALLEL",
"label": "Tensor parallel GPU count",
"default": "1"
}
]
},
{
"id": 5,
"type": 3,
"title": "Text Generation WebUI",
"description": "Comprehensive web UI for running LLMs locally (oobabooga). Supports GGUF, GPTQ, AWQ, EXL2, and HF formats",
"note": "Requires NVIDIA GPU. Models should be placed in the models volume. Supports extensions for RAG, TTS, and more.",
"categories": ["ai", "llm", "chat-ui"],
"platform": "linux",
"logo": "https://raw.githubusercontent.com/oobabooga/text-generation-webui/main/docs/logo.png",
"repository": {
"url": "https://git.oe74.net/adelorenzo/portainer_scripts",
"stackfile": "ai-templates/stacks/text-generation-webui/docker-compose.yml"
},
"env": [
{
"name": "WEBUI_PORT",
"label": "Web UI port",
"default": "7860"
},
{
"name": "API_PORT",
"label": "API port",
"default": "5000"
},
{
"name": "STREAM_PORT",
"label": "Streaming API port",
"default": "5005"
},
{
"name": "EXTRA_LAUNCH_ARGS",
"label": "Extra launch arguments",
"default": "--listen --api"
}
]
},
{
"id": 6,
"type": 3,
"title": "LiteLLM Proxy",
"description": "Unified LLM API gateway supporting 100+ providers (OpenAI, Anthropic, Ollama, vLLM, etc.) with spend tracking and load balancing",
"note": "Configure models in /app/config/litellm_config.yaml after deployment. Includes PostgreSQL for usage tracking.",
"categories": ["ai", "llm", "api-gateway", "proxy"],
"platform": "linux",
"logo": "https://litellm.ai/favicon.ico",
"repository": {
"url": "https://git.oe74.net/adelorenzo/portainer_scripts",
"stackfile": "ai-templates/stacks/litellm/docker-compose.yml"
},
"env": [
{
"name": "LITELLM_PORT",
"label": "Proxy API port",
"default": "4000"
},
{
"name": "LITELLM_MASTER_KEY",
"label": "Master API key",
"default": "sk-master-key"
},
{
"name": "PG_USER",
"label": "PostgreSQL user",
"default": "litellm"
},
{
"name": "PG_PASSWORD",
"label": "PostgreSQL password",
"default": "litellm"
}
]
},
{
"id": 7,
"type": 3,
"title": "ComfyUI",
"description": "Node-based Stable Diffusion workflow engine for image and video generation with GPU acceleration",
"note": "Requires NVIDIA GPU. Access the node editor at the configured port. Models go in the models volume.",
"categories": ["ai", "image-generation", "stable-diffusion"],
"platform": "linux",
"logo": "https://raw.githubusercontent.com/comfyanonymous/ComfyUI/master/web/assets/comfyui-logo.png",
"repository": {
"url": "https://git.oe74.net/adelorenzo/portainer_scripts",
"stackfile": "ai-templates/stacks/comfyui/docker-compose.yml"
},
"env": [
{
"name": "COMFYUI_PORT",
"label": "Web UI port",
"default": "8188"
},
{
"name": "CLI_ARGS",
"label": "Launch arguments",
"default": "--listen 0.0.0.0 --port 8188"
}
]
},
{
"id": 8,
"type": 3,
"title": "Stable Diffusion WebUI",
"description": "AUTOMATIC1111 web interface for Stable Diffusion image generation with extensive extension ecosystem",
"note": "Requires NVIDIA GPU with 8GB+ VRAM. First startup downloads the base model and may take several minutes.",
"categories": ["ai", "image-generation", "stable-diffusion"],
"platform": "linux",
"logo": "https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/html/logo.png",
"repository": {
"url": "https://git.oe74.net/adelorenzo/portainer_scripts",
"stackfile": "ai-templates/stacks/stable-diffusion-webui/docker-compose.yml"
},
"env": [
{
"name": "SD_PORT",
"label": "Web UI port",
"default": "7860"
},
{
"name": "CLI_ARGS",
"label": "Launch arguments",
"default": "--listen --api --xformers"
}
]
},
{
"id": 9,
"type": 3,
"title": "Langflow",
"description": "Visual framework for building multi-agent and RAG applications. Drag-and-drop LLM pipeline builder",
"note": "Access the visual editor at the configured port. Connect to Ollama, OpenAI, or any LLM backend.",
"categories": ["ai", "agents", "rag", "workflows"],
"platform": "linux",
"logo": "https://avatars.githubusercontent.com/u/128686189",
"repository": {
"url": "https://git.oe74.net/adelorenzo/portainer_scripts",
"stackfile": "ai-templates/stacks/langflow/docker-compose.yml"
},
"env": [
{
"name": "LANGFLOW_PORT",
"label": "Web UI port",
"default": "7860"
},
{
"name": "AUTO_LOGIN",
"label": "Skip login screen",
"default": "true"
}
]
},
{
"id": 10,
"type": 3,
"title": "Flowise",
"description": "Drag-and-drop LLM orchestration tool. Build chatbots, agents, and RAG pipelines without coding",
"note": "Default credentials are admin/changeme. Connect to any OpenAI-compatible API backend.",
"categories": ["ai", "agents", "rag", "chatbots"],
"platform": "linux",
"logo": "https://flowiseai.com/favicon.ico",
"repository": {
"url": "https://git.oe74.net/adelorenzo/portainer_scripts",
"stackfile": "ai-templates/stacks/flowise/docker-compose.yml"
},
"env": [
{
"name": "FLOWISE_PORT",
"label": "Web UI port",
"default": "3000"
},
{
"name": "FLOWISE_USERNAME",
"label": "Admin username",
"default": "admin"
},
{
"name": "FLOWISE_PASSWORD",
"label": "Admin password",
"default": "changeme"
}
]
},
{
"id": 11,
"type": 3,
"title": "n8n (AI-Enabled)",
"description": "Workflow automation platform with built-in AI agent nodes, LLM chains, and vector store integrations",
"note": "AI features include: AI Agent nodes, LLM Chain, Document Loaders, Vector Stores, Text Splitters, and Memory nodes.",
"categories": ["ai", "automation", "workflows", "agents"],
"platform": "linux",
"logo": "https://n8n.io/favicon.ico",
"repository": {
"url": "https://git.oe74.net/adelorenzo/portainer_scripts",
"stackfile": "ai-templates/stacks/n8n-ai/docker-compose.yml"
},
"env": [
{
"name": "N8N_PORT",
"label": "Web UI port",
"default": "5678"
},
{
"name": "N8N_USER",
"label": "Admin username",
"default": "admin"
},
{
"name": "N8N_PASSWORD",
"label": "Admin password",
"default": "changeme"
},
{
"name": "WEBHOOK_URL",
"label": "External webhook URL",
"default": "http://localhost:5678/"
}
]
},
{
"id": 12,
"type": 3,
"title": "Qdrant",
"description": "High-performance vector similarity search engine for RAG, semantic search, and AI applications",
"note": "REST API on port 6333, gRPC on 6334. Supports filtering, payload indexing, and distributed mode.",
"categories": ["ai", "vector-database", "rag", "embeddings"],
"platform": "linux",
"logo": "https://qdrant.tech/images/logo_with_text.png",
"repository": {
"url": "https://git.oe74.net/adelorenzo/portainer_scripts",
"stackfile": "ai-templates/stacks/qdrant/docker-compose.yml"
},
"env": [
{
"name": "QDRANT_HTTP_PORT",
"label": "REST API port",
"default": "6333"
},
{
"name": "QDRANT_GRPC_PORT",
"label": "gRPC port",
"default": "6334"
},
{
"name": "QDRANT_API_KEY",
"label": "API key (optional)"
}
]
},
{
"id": 13,
"type": 3,
"title": "ChromaDB",
"description": "AI-native open-source embedding database. The easiest vector store to get started with for RAG applications",
"note": "Persistent storage enabled by default. Compatible with LangChain, LlamaIndex, and all major AI frameworks.",
"categories": ["ai", "vector-database", "rag", "embeddings"],
"platform": "linux",
"logo": "https://www.trychroma.com/chroma-logo.png",
"repository": {
"url": "https://git.oe74.net/adelorenzo/portainer_scripts",
"stackfile": "ai-templates/stacks/chromadb/docker-compose.yml"
},
"env": [
{
"name": "CHROMA_PORT",
"label": "API port",
"default": "8000"
},
{
"name": "CHROMA_TOKEN",
"label": "Auth token (optional)"
},
{
"name": "TELEMETRY",
"label": "Anonymous telemetry",
"default": "FALSE"
}
]
},
{
"id": 14,
"type": 3,
"title": "Weaviate",
"description": "AI-native vector database with built-in vectorization modules and hybrid search capabilities",
"note": "Supports text2vec-transformers, generative-openai, and many other modules. Configure modules via environment variables.",
"categories": ["ai", "vector-database", "rag", "search"],
"platform": "linux",
"logo": "https://weaviate.io/img/site/weaviate-logo-light.png",
"repository": {
"url": "https://git.oe74.net/adelorenzo/portainer_scripts",
"stackfile": "ai-templates/stacks/weaviate/docker-compose.yml"
},
"env": [
{
"name": "WEAVIATE_HTTP_PORT",
"label": "HTTP API port",
"default": "8080"
},
{
"name": "WEAVIATE_GRPC_PORT",
"label": "gRPC port",
"default": "50051"
},
{
"name": "VECTORIZER",
"label": "Default vectorizer module",
"default": "none"
},
{
"name": "MODULES",
"label": "Enabled modules",
"default": "text2vec-transformers,generative-openai"
},
{
"name": "ANON_ACCESS",
"label": "Anonymous access enabled",
"default": "true"
}
]
},
{
"id": 15,
"type": 3,
"title": "MLflow",
"description": "Open-source ML lifecycle platform — experiment tracking, model registry, and model serving",
"note": "Access the tracking UI at the configured port. Uses SQLite backend by default — switch to PostgreSQL for production.",
"categories": ["ai", "mlops", "experiment-tracking", "model-registry"],
"platform": "linux",
"logo": "https://mlflow.org/img/mlflow-black.svg",
"repository": {
"url": "https://git.oe74.net/adelorenzo/portainer_scripts",
"stackfile": "ai-templates/stacks/mlflow/docker-compose.yml"
},
"env": [
{
"name": "MLFLOW_PORT",
"label": "Tracking UI port",
"default": "5000"
}
]
},
{
"id": 16,
"type": 3,
"title": "Label Studio",
"description": "Multi-type data labeling and annotation platform for training ML and AI models",
"note": "Supports image, text, audio, video, and time-series annotation. Export to all major ML formats.",
"categories": ["ai", "mlops", "data-labeling", "annotation"],
"platform": "linux",
"logo": "https://labelstud.io/images/ls-logo.png",
"repository": {
"url": "https://git.oe74.net/adelorenzo/portainer_scripts",
"stackfile": "ai-templates/stacks/label-studio/docker-compose.yml"
},
"env": [
{
"name": "LS_PORT",
"label": "Web UI port",
"default": "8080"
},
{
"name": "LS_USER",
"label": "Admin email",
"default": "admin@example.com"
},
{
"name": "LS_PASSWORD",
"label": "Admin password",
"default": "changeme"
}
]
},
{
"id": 17,
"type": 3,
"title": "Jupyter (GPU / PyTorch)",
"description": "GPU-accelerated Jupyter Lab with PyTorch, CUDA, and data science libraries pre-installed",
"note": "Requires NVIDIA GPU. Access with the configured token. Workspace persists in the work volume.",
"categories": ["ai", "ml-development", "notebooks", "pytorch"],
"platform": "linux",
"logo": "https://jupyter.org/assets/homepage/main-logo.svg",
"repository": {
"url": "https://git.oe74.net/adelorenzo/portainer_scripts",
"stackfile": "ai-templates/stacks/jupyter-gpu/docker-compose.yml"
},
"env": [
{
"name": "JUPYTER_PORT",
"label": "Jupyter Lab port",
"default": "8888"
},
{
"name": "JUPYTER_TOKEN",
"label": "Access token",
"default": "changeme"
},
{
"name": "GRANT_SUDO",
"label": "Allow sudo in notebooks",
"default": "yes"
}
]
},
{
"id": 18,
"type": 3,
"title": "Whisper ASR",
"description": "OpenAI Whisper speech-to-text API server with GPU acceleration. Supports transcription and translation",
"note": "Requires NVIDIA GPU. API documentation available at /docs. Supports models: tiny, base, small, medium, large-v3.",
"categories": ["ai", "speech-to-text", "transcription", "audio"],
"platform": "linux",
"logo": "https://upload.wikimedia.org/wikipedia/commons/0/04/ChatGPT_logo.svg",
"repository": {
"url": "https://git.oe74.net/adelorenzo/portainer_scripts",
"stackfile": "ai-templates/stacks/whisper/docker-compose.yml"
},
"env": [
{
"name": "WHISPER_PORT",
"label": "API port",
"default": "9000"
},
{
"name": "ASR_MODEL",
"label": "Whisper model size",
"description": "Options: tiny, base, small, medium, large-v3",
"default": "base"
},
{
"name": "ASR_ENGINE",
"label": "ASR engine",
"default": "openai_whisper"
}
]
}
]
}
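Before pointing Portainer's app-template URL at a file like this one, it is worth checking that the JSON parses and declares schema version 3; a small local check (the sample file stands in for the real one):

```shell
# Validate that a template file parses as JSON and declares the "3"
# schema version; the sample file stands in for the real one.
cat > /tmp/templates-sample.json <<'EOF'
{ "version": "3", "templates": [] }
EOF
python3 -m json.tool /tmp/templates-sample.json > /dev/null && echo "valid JSON"
version=$(python3 -c 'import json; print(json.load(open("/tmp/templates-sample.json"))["version"])')
echo "$version"   # 3
```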

View File

@@ -1,16 +0,0 @@
kubectl create -f - <<EOY
apiVersion: v1
kind: PersistentVolume
metadata:
name: traefik
labels:
type: local-storage
spec:
storageClassName: local-storage
capacity:
storage: 128Mi
accessModes:
- ReadWriteOnce
hostPath:
path: "/data"
EOY
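The traefik chart's persistence block (enabled, 128Mi, ReadWriteOnce) generates a claim that should bind to this volume; a sketch of roughly what that claim looks like (the claim name depends on the release name, here assumed to be traefik):

```yaml
# Sketch of the claim the traefik chart generates to bind this PV;
# the metadata name follows the chart's release-name convention.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: traefik
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 128Mi
```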