<p align="center"><img src="./kilo.svg" width="150" /></p>
# Kilo
Kilo is a multi-cloud network overlay built on WireGuard and designed for Kubernetes.
[![Build Status](https://github.com/squat/kilo/workflows/CI/badge.svg)](https://github.com/squat/kilo/actions?query=workflow%3ACI)
[![Go Report Card](https://goreportcard.com/badge/github.com/squat/kilo)](https://goreportcard.com/report/github.com/squat/kilo)
## Overview
Kilo connects nodes in a cluster by providing an encrypted layer 3 network that can span across data centers and public clouds.
By allowing pools of nodes in different locations to communicate securely, Kilo enables the operation of multi-cloud clusters.
Kilo's design allows clients to VPN to a cluster in order to securely access services running on the cluster.
In addition to creating multi-cloud clusters, Kilo enables the creation of multi-cluster services, i.e. services that span across different Kubernetes clusters.
An introductory video about Kilo from KubeCon EU 2019 can be found on [YouTube](https://www.youtube.com/watch?v=iPz_DAOOCKA).
## How it works
Kilo uses [WireGuard](https://www.wireguard.com/), a performant and secure VPN, to create a mesh between the different nodes in a cluster.
The Kilo agent, `kg`, runs on every node in the cluster, setting up the public and private keys for the VPN as well as the necessary rules to route packets between locations.
Kilo can operate both as a complete, independent networking provider and as an add-on complementing the cluster-networking solution currently installed on a cluster.
This means that if a cluster uses, for example, Flannel for networking, Kilo can be installed on top to enable pools of nodes in different locations to join; Kilo will take care of the network between locations, while Flannel will take care of the network within locations.
## Installing on Kubernetes
Kilo can be installed on any Kubernetes cluster either pre- or post-bring-up.
### Step 1: get WireGuard
Kilo requires the WireGuard kernel module to be loaded on all nodes in the cluster.
Starting with Linux 5.6, the kernel includes WireGuard in-tree; Linux distributions with older kernels will need to install WireGuard separately.
For most Linux distributions, this can be done using the system package manager.
[See the WireGuard website for up-to-date instructions for installing WireGuard](https://www.wireguard.com/install/).
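To check in advance whether a node needs a separate WireGuard installation, a kernel-version test can help. This is an illustrative sketch; the `kernel_has_wireguard` helper is not part of Kilo:

```shell
# Kernels >= 5.6 ship the WireGuard module in-tree; older kernels need the
# distro package. This helper (illustrative, not part of Kilo) takes a kernel
# release string such as "5.4.0-42-generic".
kernel_has_wireguard() {
    local major minor
    IFS=. read -r major minor _ <<<"$1"
    [ "$major" -gt 5 ] || { [ "$major" -eq 5 ] && [ "$minor" -ge 6 ]; }
}

if kernel_has_wireguard "$(uname -r)"; then
    echo "kernel ships WireGuard in-tree"
else
    echo "install WireGuard with the system package manager"
fi
```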
Clusters with nodes on which the WireGuard kernel module cannot be installed can use Kilo by leveraging a [userspace WireGuard implementation](./docs/userspace-wireguard.md).
### Step 2: open WireGuard port
The nodes in the mesh will require an open UDP port in order to communicate.
By default, Kilo uses UDP port 51820.
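How the port is opened depends on the environment: a cloud security group, firewalld, plain iptables, etc. As a sketch, the snippet below only prints the iptables rule that would accept this traffic, so it can be reviewed and adapted rather than applied blindly:

```shell
# Kilo's default WireGuard port; if kg is configured with a different port,
# substitute it here. The rule is printed rather than applied so it can be
# adapted to your firewall or cloud security group.
wg_port=51820
echo "iptables -A INPUT -p udp --dport ${wg_port} -j ACCEPT"
```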
### Step 3: specify topology
By default, Kilo creates a mesh between the different logical locations in the cluster, e.g. data centers, cloud providers, etc.
For this, Kilo needs to know which groups of nodes are in each location.
If the cluster does not automatically set the [topology.kubernetes.io/region](https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#topologykubernetesioregion) node label, then the [kilo.squat.ai/location](./docs/annotations.md#location) annotation can be used.
For example, the following snippet could be used to annotate all nodes with `GCP` in the name:
```shell
for node in $(kubectl get nodes | grep -i gcp | awk '{print $1}'); do
    kubectl annotate node "$node" kilo.squat.ai/location="gcp"
done
```
Kilo allows the topology of the encrypted network to be completely customized.
[See the topology docs for more details](./docs/topology.md).
### Step 4: ensure nodes have public IP
At least one node in each location must have an IP address that is routable from the other locations.
If the locations are in different clouds or private networks, then this must be a public IP address.
If this IP address is not automatically configured on the node's Ethernet device, it can be manually specified using the [kilo.squat.ai/force-endpoint](./docs/annotations.md#force-endpoint) annotation.
### Step 5: install Kilo!
Kilo can be installed by deploying a DaemonSet to the cluster.
To run Kilo on kubeadm:
```shell
kubectl apply -f https://raw.githubusercontent.com/squat/kilo/main/manifests/kilo-kubeadm.yaml
```
To run Kilo on bootkube:
```shell
kubectl apply -f https://raw.githubusercontent.com/squat/kilo/main/manifests/kilo-bootkube.yaml
```
To run Kilo on Typhoon:
```shell
kubectl apply -f https://raw.githubusercontent.com/squat/kilo/main/manifests/kilo-typhoon.yaml
```
To run Kilo on k3s:
```shell
kubectl apply -f https://raw.githubusercontent.com/squat/kilo/main/manifests/kilo-k3s.yaml
```
## Add-on Mode
Administrators of existing clusters who do not want to swap out the existing networking solution can run Kilo in add-on mode.
In this mode, Kilo will add advanced features to the cluster, such as VPN and multi-cluster services, while delegating CNI management and local networking to the cluster's current networking provider.
Kilo currently supports running on top of Flannel.
For example, to run Kilo on a Typhoon cluster running Flannel:
```shell
kubectl apply -f https://raw.githubusercontent.com/squat/kilo/main/manifests/kilo-typhoon-flannel.yaml
```
[See the manifests directory for more examples](https://github.com/squat/kilo/tree/main/manifests).
## VPN
Kilo also enables peers outside of a Kubernetes cluster to connect to the VPN, allowing cluster applications to securely access external services and permitting developers and support staff to securely debug cluster resources.
In order to declare a peer, start by defining a Kilo peer resource:
```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: kilo.squat.ai/v1alpha1
kind: Peer
metadata:
  name: squat
spec:
  allowedIPs:
  - 10.5.0.1/32
  publicKey: GY5aT1N9dTR/nJnT1N2f4ClZWVj0jOAld0r8ysWLyjg=
  persistentKeepalive: 10
EOF
```
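The `publicKey` above corresponds to a WireGuard private key held by the peer; on the peer's machine, a key pair is typically generated with the `wg` tool. A well-formed key is 32 random bytes, base64-encoded (44 characters ending in `=`), which can be sanity-checked as sketched here:

```shell
# Generate a key pair on the peer (requires the wg tool):
#   wg genkey | tee privatekey | wg pubkey > publickey
# Sanity-check that a public key is well-formed: 32 bytes once base64-decoded.
key="GY5aT1N9dTR/nJnT1N2f4ClZWVj0jOAld0r8ysWLyjg="
if [ "$(printf %s "$key" | base64 -d | wc -c)" -eq 32 ]; then
    echo "key is well-formed"
else
    echo "key is malformed"
fi
```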
This configuration can then be applied to a local WireGuard interface, e.g. `wg0`, to give it access to the cluster with the help of the `kgctl` tool:
```shell
kgctl showconf peer squat > peer.ini
sudo wg setconf wg0 peer.ini
```
[See the VPN docs for more details](./docs/vpn.md).
## Multi-cluster Services
A logical application of Kilo's VPN is to connect two different Kubernetes clusters.
This allows workloads running in one cluster to access services running in another.
For example, if `cluster1` is running a Kubernetes Service that we need to access from Pods running in `cluster2`, we could do the following:
```shell
# Register the nodes in cluster1 as peers of cluster2.
for n in $(kubectl --kubeconfig $KUBECONFIG1 get no -o name | cut -d'/' -f2); do
kgctl --kubeconfig $KUBECONFIG1 showconf node $n --as-peer -o yaml --allowed-ips $SERVICECIDR1 | kubectl --kubeconfig $KUBECONFIG2 apply -f -
done
# Register the nodes in cluster2 as peers of cluster1.
for n in $(kubectl --kubeconfig $KUBECONFIG2 get no -o name | cut -d'/' -f2); do
kgctl --kubeconfig $KUBECONFIG2 showconf node $n --as-peer -o yaml --allowed-ips $SERVICECIDR2 | kubectl --kubeconfig $KUBECONFIG1 apply -f -
done
# Create a Service in cluster2 to mirror the Service in cluster1.
cat <<EOF | kubectl --kubeconfig $KUBECONFIG2 apply -f -
apiVersion: v1
kind: Service
metadata:
  name: important-service
spec:
  ports:
  - port: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: important-service
subsets:
- addresses:
  - ip: $CLUSTERIP # The cluster IP of the important service on cluster1.
  ports:
  - port: 80
EOF
```
Now, `important-service` can be used on `cluster2` just like any other Kubernetes Service.
[See the multi-cluster services docs for more details](./docs/multi-cluster-services.md).
## Analysis
The topology and configuration of a Kilo network can be analyzed using the [`kgctl` command line tool](./docs/kgctl.md).
For example, the `graph` command can be used to generate a graph of the network in Graphviz format:
```shell
kgctl graph | circo -Tsvg > cluster.svg
```
<img src="./docs/graphs/location.svg" />