Kilo

Kilo is a multi-cloud network overlay built on WireGuard and designed for Kubernetes.


Overview

Kilo connects nodes in a cluster by providing an encrypted layer 3 network that can span across data centers and public clouds. By allowing pools of nodes in different locations to communicate securely, Kilo enables the operation of multi-cloud clusters.

How it works

Kilo uses WireGuard, a performant and secure VPN, to create a mesh between the different logical locations in a cluster. The Kilo agent, kg (k8s + wg = kg), runs on every node in the cluster, setting up the public and private keys for the VPN as well as the necessary rules to route packets between locations.
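Conceptually, the per-node setup that kg automates is similar to configuring a WireGuard interface by hand. The sketch below is purely illustrative: the interface name kilo0, subnets, endpoint, and key files are placeholders, not Kilo's actual defaults.

# Generate a key pair for this node (kg does this automatically).
wg genkey | tee private.key | wg pubkey > public.key

# Create the WireGuard interface and give it an address on the mesh subnet (addresses are examples).
ip link add kilo0 type wireguard
ip address add 10.4.0.1/16 dev kilo0
wg set kilo0 listen-port 51820 private-key ./private.key
ip link set kilo0 up

# Add a peer for another location and route that location's pod subnet over the tunnel.
wg set kilo0 peer <peer-public-key> endpoint 198.51.100.10:51820 allowed-ips 10.4.1.0/24,10.244.1.0/24
ip route add 10.244.1.0/24 dev kilo0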

Kilo can operate as an add-on complementing the cluster-networking solution currently installed on a cluster. This means that if a cluster uses, for example, Calico for networking, Kilo can be installed on top to enable pools of nodes in different locations to join the cluster; Kilo will take care of the network between locations, while Calico will take care of the network within locations.

Installing on Kubernetes

Kilo can be installed on any Kubernetes cluster, either during initial cluster bring-up or on an existing cluster.

Step 1: install WireGuard

Kilo requires the WireGuard kernel module on all nodes in the cluster. For most Linux distributions, this can be installed using the system package manager. For Container Linux, WireGuard can be easily installed using a DaemonSet:

kubectl apply -f https://raw.githubusercontent.com/squat/modulus/master/wireguard/daemonset.yaml
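On distributions that ship a WireGuard package, the module can instead be installed on each node with the system package manager. Package names and repositories vary by distribution and release; the following is only an example for a Debian/Ubuntu node:

sudo apt-get update
sudo apt-get install -y wireguard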

Step 2: open WireGuard port

The nodes in the mesh will require an open UDP port in order to communicate. By default, Kilo uses UDP port 51820.
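How the port is opened depends on the environment, e.g. a host firewall or a cloud security group. As an illustration, on a node whose host firewall is managed directly with iptables, a rule like the following could be added (adjust to whatever actually filters traffic in front of the nodes):

iptables -A INPUT -p udp --dport 51820 -j ACCEPT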

Step 3: specify locations

Kilo needs to know which nodes are in each location. If the cluster does not automatically set the failure-domain.beta.kubernetes.io/region node label, then the kilo.squat.ai/location annotation can be used. For example, the following snippet could be used to annotate all nodes with GCP in the name:

for node in $(kubectl get nodes | grep -i gcp | awk '{print $1}'); do kubectl annotate node $node kilo.squat.ai/location="gcp"; done

Step 4: ensure nodes have public IP

At least one node in each location must have a public IP address. If the public IP address is not automatically configured on the node's Ethernet device, it can be manually specified using the kilo.squat.ai/force-external-ip annotation.
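For example, a node could be annotated by hand as follows; the node name and IP address are placeholders, and the exact value format expected by the annotation should be checked against the Kilo documentation:

kubectl annotate node $NODE kilo.squat.ai/force-external-ip="198.51.100.10/32"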

Step 5: install Kilo!

Kilo can be installed by deploying a DaemonSet to the cluster.

To run Kilo on kubeadm:

kubectl apply -f https://raw.githubusercontent.com/squat/kilo/master/manifests/kilo-kubeadm.yaml

To run Kilo on bootkube:

kubectl apply -f https://raw.githubusercontent.com/squat/kilo/master/manifests/kilo-bootkube.yaml

To run Kilo on Typhoon:

kubectl apply -f https://raw.githubusercontent.com/squat/kilo/master/manifests/kilo-typhoon.yaml
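Once the DaemonSet is applied, a quick sanity check is to confirm that a Kilo pod is running on every node. The exact namespace and labels depend on the chosen manifest, so a broad filter is the simplest way to look:

kubectl get pods --all-namespaces -o wide | grep kilo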

Analysis

The topology of a Kilo network can be analyzed using the kgctl binary. For example, the graph command can be used to generate a graph of the network in Graphviz format:

kgctl graph --kubeconfig=$KUBECONFIG | twopi -Tsvg > cluster.svg
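If Graphviz is not available on the machine running kgctl, the raw graph can be saved and rendered elsewhere, for example:

kgctl graph --kubeconfig=$KUBECONFIG > cluster.gv
twopi -Tsvg cluster.gv > cluster.svg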