88 Commits

Author SHA1 Message Date
leonnicolas
6684d5bca3 Merge pull request #144 from squat/fix_graph_newlines
pkg/mesh/graph.go: fix format
2021-04-12 00:41:25 +02:00
leonnicolas
a6fcab6878 pkg/mesh/graph.go: fix format
Previously the newlines were ignored by circo.
This led to very flat ellipses.
Masked newlines "\\n" are correctly handled.

Signed-off-by: leonnicolas <leonloechner@gmx.de>
2021-03-26 11:12:05 +01:00
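As a hedged illustration of the fix described above (not Kilo's actual code; the label contents are made up): a raw newline character inside a DOT label can be ignored by layout engines such as circo, whereas emitting the two-character sequence `\n` lets Graphviz render the line break itself.

```go
package main

import (
	"fmt"
	"strings"
)

// dotLabel joins label lines with the literal two-character sequence `\n`,
// which Graphviz interprets as a line break; a raw newline character inside
// the quoted label can be dropped by layout engines such as circo.
func dotLabel(lines ...string) string {
	return strings.Join(lines, `\n`)
}

func main() {
	label := dotLabel("node-1", "10.4.0.1/16", "location: gcp-east")
	fmt.Printf("a [label=\"%s\"];\n", label)
	// prints: a [label="node-1\n10.4.0.1/16\nlocation: gcp-east"];
}
```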
leonnicolas
b59210ef48 Merge pull request #142 from squat/force_internal_ip_docs
docs/annotations.md: docs for disable private ip
2021-03-25 18:07:33 +01:00
leonnicolas
299fab7f2f Update docs/annotations.md
Co-authored-by: Lucas Servén Marín <lserven@gmail.com>
2021-03-25 16:38:18 +01:00
leonnicolas
c1680372df docs/annotations.md: docs for disable private ip
Signed-off-by: leonnicolas <leonloechner@gmx.de>
2021-03-25 14:12:16 +01:00
Lucas Servén Marín
9a6ec98343 Merge pull request #141 from squat/fix_graph
pkg/mesh: fix panic in graph
2021-03-25 13:05:55 +01:00
Lucas Servén Marín
d1948acd77 pkg/mesh: fix panic in graph
Commit 4d00bc56fe introduced a bug in the
Kilo graph generation logic. That commit used the WireGuard CIDR from
the topology struct as the graph title; however, this field is nil
whenever the selected node is not a leader, causing the program to
panic.

This commit changes the meaning of the topology struct's wireGuardCIDR
field so that the field is always defined and the normalized value will
always be equal to the Kilo subnet CIDR. When the selected node is a
leader node, then the field's IP will be the IP allocated to the node
within the subnet. This effectively prevents the program from panicking.

Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
2021-03-25 02:59:54 +01:00
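A minimal sketch of the failure mode and the fix described above (type and field names are illustrative, not Kilo's): dereferencing a nil *net.IPNet panics, so the topology's CIDR field is now always populated with the Kilo subnet, and only a leader's entry carries its allocated IP.

```go
package main

import (
	"fmt"
	"net"
)

// topology is an illustrative stand-in for Kilo's topology struct.
type topology struct {
	leader        bool
	wireGuardCIDR *net.IPNet
}

func main() {
	_, subnet, _ := net.ParseCIDR("10.4.0.0/16")

	// Before the fix: the CIDR was only set for leaders, so any dereference
	// (e.g. building a graph title from it) panicked for non-leader nodes.
	broken := topology{leader: false}
	if broken.wireGuardCIDR == nil {
		fmt.Println("nil CIDR: dereferencing broken.wireGuardCIDR.IP would panic")
	}

	// After the fix: the field always holds the Kilo subnet; for leaders the
	// IP portion is the address allocated to the node within that subnet.
	fixed := topology{leader: false, wireGuardCIDR: subnet}
	fmt.Println("graph title:", fixed.wireGuardCIDR.String()) // 10.4.0.0/16
}
```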
Lucas Servén Marín
dc34682909 Merge pull request #127 from squat/disable_private_ip
FEATURE: allow disabling private IPs
2021-03-24 21:09:35 +01:00
leonnicolas
9d10d4a3de FEATURE: allow disabling private IPs
When forcing the internal IP to "" or "-", private IPs won't be used.
2021-03-13 23:33:18 +01:00
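A sketch of the annotation semantics (the helper below is hypothetical and not Kilo's code): an empty string or "-" in the force-internal-ip annotation disables the private IP, while any other value must parse as a CIDR.

```go
package main

import (
	"fmt"
	"net"
)

// parseForcedInternalIP is a hypothetical helper illustrating the semantics
// described above: "" or "-" means the node's private IP should be ignored;
// any other value must be a valid CIDR.
func parseForcedInternalIP(annotation string) (*net.IPNet, bool, error) {
	switch annotation {
	case "", "-":
		return nil, true, nil
	default:
		ip, cidr, err := net.ParseCIDR(annotation)
		if err != nil {
			return nil, false, err
		}
		cidr.IP = ip // keep the host address, not just the network
		return cidr, false, nil
	}
}

func main() {
	for _, a := range []string{"10.0.0.5/24", "-", ""} {
		ipNet, disabled, err := parseForcedInternalIP(a)
		fmt.Printf("%q -> ip=%v disabled=%v err=%v\n", a, ipNet, disabled, err)
	}
}
```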
Lucas Servén Marín
3882d1baae Merge pull request #139 from squat/bug_ipip_rule_reconciliation
pkg/encapsulation/ipip.go: fix order of flags
2021-03-13 22:36:44 +01:00
leonnicolas
50ba744e74 pkg/encapsulation/ipip.go: fix order of flags 2021-03-13 19:55:00 +01:00
Lucas Servén Marín
dc33521374 Merge pull request #138 from squat/bug_resync_period
pkg/mesh/mesh.go: actually add resync period
2021-03-13 16:37:17 +01:00
leonnicolas
db62b273c0 pkg/mesh/mesh.go: actually add resync period
The resync period was not added to the mesh struct.

Signed-off-by: leonnicolas <leonloechner@gmx.de>
2021-03-13 16:31:09 +01:00
leonnicolas
ba37d913e4 Merge pull request #136 from squat/ipip_protocol_name
pkg/encapsulation/ipip*: fix ipip iptables rules
2021-03-13 15:37:23 +01:00
Lucas Servén Marín
ede3118cc8 pkg/encapsulation/ipip*: fix ipip iptables rules
Since #116 implemented fragile comparisons of iptables rules to avoid
calling the iptables binary excessively during every reconciliation, the
iptables rules for IPIP encapsulation must be updated to match the
expected output. One complication is that rather than returning the
protocol number in the rule, iptables resolves the protocol number to a
name by looking up the number in the netd protocols database. This name
can vary depending on the host's environment. This commit adds two
solutions for resolving the protocol name:
1. a fixed mapping to the string `ipencap`, which should always work
for Kilo whenever it runs in the Alpine Linux container; and
2. a runtime lookup using the netd database, which only works if Kilo is
compiled with CGO and is meant to be used only if Kilo is not running in
the normal container environment.

Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
2021-03-13 15:24:55 +01:00
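A hedged sketch of the first solution, the fixed mapping (the rule text printed in main is illustrative, not necessarily a rule Kilo installs): IANA protocol number 4 (IP-in-IP) resolves to the name `ipencap` in Alpine's protocols database, so that is the string a textual rule comparison has to expect. The CGO-based runtime lookup mentioned as the second solution would go through the C library's protocols database and is omitted here.

```go
package main

import (
	"fmt"
	"strconv"
)

// ipipProtocolNumber is the IANA protocol number for IP-in-IP encapsulation.
const ipipProtocolNumber = 4

// protocolNames is a fixed mapping from protocol number to the name iptables
// prints on Alpine Linux, whose /etc/protocols lists protocol 4 as "ipencap".
// Other environments may resolve the number to a different name, which is why
// a CGO-based runtime lookup is described as the alternative.
var protocolNames = map[int]string{
	ipipProtocolNumber: "ipencap",
}

// protocolName returns the textual form of a protocol as expected in an
// iptables rule listing, falling back to the bare number if unknown.
func protocolName(number int) string {
	if name, ok := protocolNames[number]; ok {
		return name
	}
	return strconv.Itoa(number)
}

func main() {
	// The string a rule comparison against `iptables -S` output must match.
	fmt.Printf("-A INPUT -p %s -j ACCEPT\n", protocolName(ipipProtocolNumber))
}
```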
Lucas Servén Marín
ce1b95251e Merge pull request #124 from squat/update_docusaurus
website: update docusaurus
2021-03-10 14:41:56 +01:00
Lucas Servén Marín
259d2a3d8b website: update docusaurus
Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
2021-03-10 14:31:55 +01:00
Lucas Servén Marín
c85fbde2ba docs/vpn.md: add clarification
Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
2021-03-09 16:58:07 +01:00
Lucas Servén Marín
251e8fac40 Merge pull request #132 from squat/172-16-slash-12
pkg/mesh: correctly check 172.16/12 IP range
2021-03-06 01:06:36 +01:00
Lucas Servén Marín
39803cef66 pkg/mesh: correctly check 172.16/12 IP range
Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
2021-03-06 00:50:48 +01:00
Lucas Servén Marín
90a2540487 Merge pull request #131 from squat/172-16-slash-12
pkg/mesh: correctly identify 172.16/12 IPs
2021-03-05 18:32:59 +01:00
Lucas Servén Marín
7cc707f335 pkg/mesh: correctly identify 172.16/12 IPs
Currently Kilo incorrectly identifies the 172.16/12 private IP range.
This commit fixes the logic.

Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
2021-03-05 18:27:12 +01:00
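For reference, a standalone check of the range with the standard library (a sketch, not Kilo's code): 172.16.0.0/12 spans 172.16.0.0 through 172.31.255.255, so a narrower or string-prefix-based test misclassifies addresses near the boundaries.

```go
package main

import (
	"fmt"
	"net"
)

// private172 is the 172.16.0.0/12 block reserved by RFC 1918; it covers
// 172.16.0.0 through 172.31.255.255.
var private172 = mustParseCIDR("172.16.0.0/12")

func mustParseCIDR(s string) *net.IPNet {
	_, n, err := net.ParseCIDR(s)
	if err != nil {
		panic(err)
	}
	return n
}

func main() {
	for _, s := range []string{"172.16.0.1", "172.31.255.254", "172.32.0.1", "172.160.0.1"} {
		fmt.Printf("%-15s in 172.16/12: %v\n", s, private172.Contains(net.ParseIP(s)))
	}
	// The first two addresses are inside the range; the last two are not.
}
```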
Lucas Servén Marín
4d1756c23a Merge pull request #130 from squat/kubeamd_cni_path
manifests: fix kubeadm CNI path
2021-03-04 14:06:13 +01:00
Lucas Servén Marín
a408ce9f35 manifests: fix kubeadm CNI path
As discussed in
https://github.com/squat/kilo/issues/129#issuecomment-789651850,
the Kilo manifests for kubeadm install the CNI configuration in the
wrong directory. They are using /etc/kubernetes/cni/net.d [0] when they
should be using /etc/cni/net.d [1].

[0]
https://github.com/squat/kilo/blob/main/manifests/kilo-kubeadm.yaml#L163
[1]
https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#cni

Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
2021-03-04 12:53:46 +01:00
Lucas Servén Marín
2b959f7020 Merge pull request #125 from squat/resync-period
cmd/kg,pkg: add --resync-period flag
2021-03-02 13:18:49 +01:00
Lucas Servén Marín
7a74d87cc7 Merge pull request #126 from squat/dependabot/npm_and_yarn/website/prismjs-1.23.0
build(deps): bump prismjs from 1.21.0 to 1.23.0 in /website
2021-03-02 08:56:23 +01:00
dependabot[bot]
b0e670eb76 build(deps): bump prismjs from 1.21.0 to 1.23.0 in /website
Bumps [prismjs](https://github.com/PrismJS/prism) from 1.21.0 to 1.23.0.
- [Release notes](https://github.com/PrismJS/prism/releases)
- [Changelog](https://github.com/PrismJS/prism/blob/master/CHANGELOG.md)
- [Commits](https://github.com/PrismJS/prism/compare/v1.21.0...v1.23.0)

Signed-off-by: dependabot[bot] <support@github.com>
2021-03-01 21:04:33 +00:00
Lucas Servén Marín
8dbbc636b5 cmd/kg,pkg: add --resync-period flag
This commit introduces a new `--resync-period` flag to control how often
the Kilo controllers should reconcile.

Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
2021-03-01 18:20:06 +01:00
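A minimal sketch of such a flag (Kilo defines it with github.com/spf13/pflag and passes the value into the mesh; the standard library's flag package is used here only to keep the example self-contained):

```go
package main

import (
	"flag"
	"fmt"
	"time"
)

func main() {
	resyncPeriod := flag.Duration("resync-period", 30*time.Second, "How often should the controllers reconcile?")
	flag.Parse()

	// A ticker with the configured period drives periodic reconciliation.
	ticker := time.NewTicker(*resyncPeriod)
	defer ticker.Stop()

	for i := 0; i < 3; i++ {
		<-ticker.C
		fmt.Println("reconciling at", time.Now().Format(time.RFC3339))
	}
}
```

Running the sketch with `go run . --resync-period=1s` prints a reconciliation line every second.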
Lucas Servén Marín
c060bf24e2 Merge pull request #116 from squat/reduce_iptables_calls
pkg/iptables: reduce calls to iptables
2021-02-26 22:17:04 +01:00
Lucas Servén Marín
4b32c49ae1 pkg/iptables: add logger to iptables controller
This commit adds a logger to the iptables controller using the options
pattern. It also logs when the controller needs to reset rules, to be
able to identify costly reconciliations.

Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
2021-02-26 20:54:16 +01:00
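A compact sketch of the functional options pattern with a go-kit logger (the Controller type here is an illustrative stand-in, not Kilo's iptables controller):

```go
package main

import (
	"os"

	"github.com/go-kit/kit/log"
	"github.com/go-kit/kit/log/level"
)

// Controller stands in for a controller that accepts optional configuration.
type Controller struct {
	logger log.Logger
}

// Option mutates a Controller; this is the options pattern referenced above.
type Option func(*Controller)

// WithLogger sets the controller's logger.
func WithLogger(l log.Logger) Option {
	return func(c *Controller) { c.logger = l }
}

// New builds a Controller with a no-op logger unless one is supplied.
func New(opts ...Option) *Controller {
	c := &Controller{logger: log.NewNopLogger()}
	for _, o := range opts {
		o(c)
	}
	return c
}

func main() {
	logger := log.NewLogfmtLogger(log.NewSyncWriter(os.Stderr))
	c := New(WithLogger(logger))
	// Logging a rule reset makes costly reconciliations visible.
	level.Info(c.logger).Log("msg", "resetting iptables rules")
}
```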
Lucas Servén Marín
4c43548bd6 Merge pull request #123 from squat/simply
docs: remove use of 'simply'
2021-02-26 11:25:33 +01:00
Lucas Servén Marín
18e2e752f6 docs: remove use of 'simply'
Let's make the documentation more inclusive and sensitive of the
familiarity and comfort of users.

Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
2021-02-26 11:16:08 +01:00
Lucas Servén Marín
7fc50871ca Merge pull request #122 from squat/rename_branch_to_main
*: rename branch to main
2021-02-26 11:04:05 +01:00
Lucas Servén Marín
c5d0debab6 *: rename branch to main
This commit renames the principal branch of the repository to `main`!

Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
2021-02-26 10:46:31 +01:00
leonnicolas
0d0fdda619 Merge pull request #117 from squat/maintainers
MAINTAINERS.md: propose @leonnicolas as maintainer
2021-02-20 20:58:39 +01:00
Lucas Servén Marín
f032c1182d MAINTAINERS.md: propose @leonnicolas as maintainer
This commit proposes [Leon](https://github.com/leonnicolas) as a
maintainer of Kilo. Leon has done tons of great work in the project in
feature development, bug triaging, and documentation. It would be a
privilege to have you join!

Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
2021-02-20 19:35:35 +01:00
Lucas Servén Marín
acfd0bbaec pkg/iptables: reduce calls to iptables
Currently, every time the iptables controller syncs rules, it spawns an
iptables process for every rule it checks. This causes two problems:
1. it creates unnecessary load on the system; and
2. it causes contention on the xtables lock file.

This commit creates a lazy cache for iptables rules and chains that
avoids spawning iptables processes. This means that each time the
iptables rules are reconciled, if no rules need to be changed then at
most one iptables process should be spawned to check all of the rules in
a chain and at most one process should be spawned to check all of the
chains in a table.

Note: the success of this reduction in calls to iptables depends on a
somewhat fragile comparison of iptables rule text. The text of any rule
must match exactly, including the order of the flags. An improvement to
come would be to implement an iptables rule parser that can be used to
check semantic equivalence between iptables rules.

Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
2021-02-20 19:24:06 +01:00
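A hedged sketch of the caching idea (heavily simplified; Kilo's controller is built on github.com/coreos/go-iptables): list a chain once per reconciliation, answer existence checks from the cached text, and invalidate the cache whenever a rule is changed.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// ruleCache lazily caches the output of `iptables -S <chain>` per chain so
// that checking many rules costs at most one iptables invocation per chain.
type ruleCache struct {
	table  string
	chains map[string][]string
}

func newRuleCache(table string) *ruleCache {
	return &ruleCache{table: table, chains: map[string][]string{}}
}

// rules returns the cached listing for a chain, populating it on first use.
func (c *ruleCache) rules(chain string) ([]string, error) {
	if rs, ok := c.chains[chain]; ok {
		return rs, nil
	}
	out, err := exec.Command("iptables", "-t", c.table, "-S", chain).Output()
	if err != nil {
		return nil, err
	}
	rs := strings.Split(strings.TrimSpace(string(out)), "\n")
	c.chains[chain] = rs
	return rs, nil
}

// exists reports whether the exact rule text appears in the chain. As the
// commit notes, this textual comparison is fragile: flag order must match.
func (c *ruleCache) exists(chain, rule string) (bool, error) {
	rs, err := c.rules(chain)
	if err != nil {
		return false, err
	}
	for _, r := range rs {
		if r == rule {
			return true, nil
		}
	}
	return false, nil
}

// invalidate drops a chain's cache after a rule is added or removed.
func (c *ruleCache) invalidate(chain string) {
	delete(c.chains, chain)
}

func main() {
	c := newRuleCache("filter")
	ok, err := c.exists("FORWARD", "-A FORWARD -s 10.2.0.0/16 -j ACCEPT")
	fmt.Println(ok, err)
	c.invalidate("FORWARD")
}
```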
Lucas Servén Marín
afea50a388 Merge pull request #115 from leonnicolas/bug_encapsulation
pkg/mesh/mesh.go: iptables rules in encapsulation
2021-02-20 09:13:07 +01:00
leonnicolas
52d8d13047 pkg/mesh/mesh.go: iptables rules in encapsulation
Because of new naming conventions for locations, the CIDRs were not
being set within locations.
This led to no iptables rules being added for nodes in the same location.
2021-02-20 02:00:57 +01:00
Lucas Servén Marín
4ae1ccf1e8 Merge pull request #112 from SerialVelocity/patch-1
Vulnerability: Don't add generic ACCEPT rules to the filter chain
2021-02-15 14:08:36 +01:00
Ben Grabham
709c1ec6c0 Don't add generic ACCEPT rules to the filter chain 2021-02-15 12:00:25 +00:00
Lucas Servén Marín
0eaefc5e6e Merge pull request #111 from leonnicolas/release_binaries
.github/workflows/ci.yml: publish binaries
2021-02-14 19:53:58 +01:00
leonnicolas
2164e7003f .github/workflows/ci.yml: publish binaries
All kgctl binaries will be published with each new release.
The naming convention is kgctl-<os name>-<architecture>.
2021-02-14 19:33:24 +01:00
Lucas Servén Marín
7ea8c1bc64 .github: allow workflow to be triggered manually
Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
2021-02-08 19:49:07 +01:00
Lucas Servén Marín
539a139a16 Merge pull request #108 from squat/migrate_to_github_actions
.github/workflows: migrate to github actions
2021-01-31 15:13:27 +01:00
Lucas Servén Marín
c4c8fe81cc .github/workflows: migrate to github actions
Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
2021-01-31 15:09:11 +01:00
Lucas Servén Marín
fd8bee718b Merge pull request #107 from squat/no_private_iface
pkg/mesh: don't shadow privIface
2021-01-30 20:16:49 +01:00
Lucas Servén Marín
03545d674f pkg/mesh: don't shadow privIface
This commit fixes a bug where the variable holding the index of the
private interface was shadowed, causing it to always be "0".

Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
2021-01-30 20:09:50 +01:00
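The bug class is easy to reproduce in isolation (the variable names here are illustrative): `:=` inside an inner scope declares a new variable, so the outer one keeps its zero value.

```go
package main

import "fmt"

func findIndex(names []string, want string) int {
	privIface := 0

	// Bug: `:=` declares a *new* privIface scoped to the loop body, so the
	// outer privIface is never updated and stays 0.
	for i, n := range names {
		if n == want {
			privIface := i
			_ = privIface
		}
	}
	fmt.Println("shadowed result:", privIface) // always 0

	// Fix: plain assignment updates the outer variable.
	for i, n := range names {
		if n == want {
			privIface = i
		}
	}
	fmt.Println("fixed result:", privIface)
	return privIface
}

func main() {
	findIndex([]string{"lo", "eth0", "eth1"}, "eth1")
}
```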
Lucas Servén Marín
f61b902128 Merge pull request #106 from leonnicolas/bug_iptables
BUG: iptables rules
2021-01-30 17:42:28 +01:00
Lucas Servén Marín
64fb06a383 pkg/k8s: bump headers for 2021
This commit re-generates all generated files to include the new year in
the comment.

Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
2021-01-30 17:40:01 +01:00
leonnicolas
448f618c60 BUG: iptables rules
Add default iptables rules to allow forwarding traffic to and from the pod CIDR.

Previously Kilo expected the default behaviour of the forward chain to
accept packets, which cannot be guaranteed.
2021-01-30 12:52:30 +01:00
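A sketch of installing such rules with github.com/coreos/go-iptables (the pod CIDR and the exact rule specs are illustrative, not necessarily the rules Kilo adds); AppendUnique avoids duplicating a rule that is already present.

```go
package main

import (
	"log"

	"github.com/coreos/go-iptables/iptables"
)

func main() {
	ipt, err := iptables.New()
	if err != nil {
		log.Fatal(err)
	}

	podCIDR := "10.2.0.0/16" // illustrative pod CIDR

	// Do not rely on the FORWARD chain's default policy being ACCEPT;
	// explicitly allow traffic from and to the pod CIDR.
	rules := [][]string{
		{"-s", podCIDR, "-j", "ACCEPT"},
		{"-d", podCIDR, "-j", "ACCEPT"},
	}
	for _, r := range rules {
		if err := ipt.AppendUnique("filter", "FORWARD", r...); err != nil {
			log.Fatal(err)
		}
	}
}
```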
Lucas Servén Marín
3563e660dc Merge pull request #105 from squat/fix_graph_title
pkg/mesh/graph.go: use WireGuard CIDR as title
2021-01-29 18:21:11 +01:00
Lucas Servén Marín
4d00bc56fe pkg/mesh/graph.go: use WireGuard CIDR as title
This commit changes the graph so that the WireGuard CIDR is used as the
title rather than the pod subnet assigned to a node in the cluster.

Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
2021-01-29 15:49:42 +01:00
Lucas Servén Marín
27ca2d4b17 Merge pull request #104 from leonnicolas/nodes_without_private_ips
Nodes without private IPs
2021-01-24 23:21:51 +01:00
leonnicolas
3a201ba0fa Nodes without private IPs
Allow nodes to have no private IPs.
Nodes without private IPs will automatically be put into
their own location.
2021-01-24 22:37:24 +01:00
Lucas Servén Marín
92825ba0c7 Merge pull request #103 from squat/ignore_kilo_ip_when_finding_internal_ips
pkg/mesh/mesh.go: ignore Kilo IP during discovery
2021-01-20 11:08:58 +01:00
Lucas Servén Marín
95c0143b1a pkg/mesh/mesh.go: ignore Kilo IP during discovery
This ensures that Kilo will not select an IP assigned to the Kilo
interface when discovering public and private IPs.

Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
2021-01-19 20:25:50 +01:00
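A simplified sketch of the discovery rule (not Kilo's actual heuristics): when enumerating interface addresses to pick public and private IPs, skip the Kilo WireGuard interface so that its own assigned address is never selected.

```go
package main

import (
	"fmt"
	"net"
)

const kiloInterface = "kilo0" // default Kilo interface name

// candidateIPs returns all addresses on the host except those assigned to
// the Kilo interface itself.
func candidateIPs() ([]net.IP, error) {
	ifaces, err := net.Interfaces()
	if err != nil {
		return nil, err
	}
	var ips []net.IP
	for _, iface := range ifaces {
		if iface.Name == kiloInterface {
			continue // never select the IP Kilo assigned to its own interface
		}
		addrs, err := iface.Addrs()
		if err != nil {
			return nil, err
		}
		for _, a := range addrs {
			if ipNet, ok := a.(*net.IPNet); ok {
				ips = append(ips, ipNet.IP)
			}
		}
	}
	return ips, nil
}

func main() {
	ips, err := candidateIPs()
	fmt.Println(ips, err)
}
```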
Lucas Servén Marín
e7855825cf docs/userspace-wireguard.md: add details
This commit clarifies a few lines from the userspace doc and notes in
the README that Kilo works with userspace WireGuard.

Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
2021-01-07 13:48:10 +01:00
Lucas Servén Marín
f6f0b8c791 README.md: typo fix
Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
2021-01-07 13:17:47 +01:00
Lucas Servén Marín
dafae4bafb Merge pull request #100 from leonnicolas/master
FEATURE: user space wireguard
2020-12-29 19:23:07 +01:00
leonnicolas
e30cff5293 FEATURE: user space wireguard
Add the possibility to use a userspace implementation of WireGuard, specifically the Rust implementation boringtun.
2020-12-29 18:50:58 +01:00
Lucas Servén Marín
2d12d9ef81 docs/topology.md: grammar fix
Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
2020-12-19 14:59:22 +01:00
Lucas Servén Marín
a789003a58 Merge pull request #97 from castai/add-custom-topology-label
feat: add support for custom topology label
2020-12-19 14:56:05 +01:00
Tadeuš Varnas
a5684a97e0 Update topology.md 2020-12-14 10:53:21 +02:00
Tadeuš Varnas
849449890d Apply suggestions from code review
Co-authored-by: Lucas Servén Marín <lserven@gmail.com>
2020-12-14 10:20:53 +02:00
Lucas Servén Marín
12798add5f Merge pull request #99 from squat/dependabot/npm_and_yarn/website/ini-1.3.8
build(deps): bump ini from 1.3.5 to 1.3.8 in /website
2020-12-12 11:36:16 +01:00
dependabot[bot]
a6a7f98c29 build(deps): bump ini from 1.3.5 to 1.3.8 in /website
Bumps [ini](https://github.com/isaacs/ini) from 1.3.5 to 1.3.8.
- [Release notes](https://github.com/isaacs/ini/releases)
- [Commits](https://github.com/isaacs/ini/compare/v1.3.5...v1.3.8)

Signed-off-by: dependabot[bot] <support@github.com>
2020-12-12 10:09:14 +00:00
varnastadues
cb12666fc1 feat: add support for custom topology label 2020-12-11 16:44:20 +02:00
Lucas Servén Marín
42c895f70a Makefile: no darwin+arm windows+arm build matrix
This commit excludes Darwin+ARM and Windows+ARM combinations from the
build matrix.

Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
2020-11-23 11:16:35 +01:00
Lucas Servén Marín
f52efc212c Makefile: variable detection for cross-compilation
The PR to add support for cross-compilation to other OSes introduced a
bug in ARCH and OS variable detection. This commit fixes it.

Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
2020-11-23 10:18:08 +01:00
Lucas Servén Marín
ab24242a44 Merge pull request #93 from squat/remove_coreos_specific_code
README.md: remove CoreOS-specific install step
2020-11-14 13:06:56 +01:00
Lucas Servén Marín
f205c9bfab README.md: remove CoreOS-specific install step
This commit removes a code snippet that is specific to CoreOS Container
Linux. Including this in the installation instructions for WireGuard can
give the impression that this code works for any cluster.

Fixes: #89.

Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
2020-11-14 13:04:50 +01:00
Lucas Servén Marín
6ab9913c7b Merge pull request #92 from squat/non_linux
pkg/*: allow kgctl to compile for other OSes
2020-11-14 12:31:28 +01:00
Lucas Servén Marín
45cedbb84a pkg/*: allow kgctl to compile for other OSes
This commit enables the compilation of kgctl when GOOS!=linux.
This fixes #56.

Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
2020-11-14 12:16:07 +01:00
Lucas Servén Marín
b802489826 Merge pull request #87 from pratikbalar/patch-1
Highlighting Note
2020-11-11 08:41:22 +01:00
Praitk
ae8f0655b3 Highlighting Note
Highlighting the note in order to make it visible first, because after analyzing some issues, I found that people were ignoring this note section (including myself).
2020-11-11 11:53:26 +05:30
Lucas Servén Marín
425796ec4e Merge pull request #80 from hansbogert/patch-1
doc: Add video reference to README
2020-10-24 15:13:46 +02:00
Hans van den Bogert
33eac74d4a doc: Add video reference to README 2020-10-24 14:51:40 +02:00
Lucas Servén Marín
410a014daf vendor: revendor
Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
2020-09-23 11:38:32 +02:00
Lucas Servén Marín
0cc1a2ff8c docs,website: add doc for kg
This commit adds a doc for `kg`, the Kilo agent that runs on every node
in the mesh. This includes: the doc itself, files needed for the
website, and tooling to generate the document using `embedmd`.

Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
2020-09-23 11:38:25 +02:00
Lucas Servén Marín
5e970d8b42 pkg/mesh: small change for clarity
Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
2020-09-18 16:11:01 +02:00
Lucas Servén Marín
ac7fa37fd0 Merge pull request #42 from squat/peer-dns-names
pkg/k8s: enable peers to use DNS names
2020-09-17 15:20:52 +02:00
Lucas Servén Marín
116fb7337a pkg/k8s: enable peers to use DNS names
This commit enables peers defined using the Peer CRD to declare their
endpoints using DNS names.

Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
2020-09-17 14:48:38 +02:00
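A minimal sketch of the idea (the hostname and port are made up; this is not Kilo's resolution code): an endpoint declared as a DNS name has to be resolved to an IP and port before it can be written into a WireGuard peer configuration.

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// A peer declared with a DNS endpoint, e.g. in a Peer CRD.
	const endpoint = "vpn.example.com:51820"

	// WireGuard itself needs an IP:port, so the DNS name is resolved first.
	addr, err := net.ResolveUDPAddr("udp", endpoint)
	if err != nil {
		fmt.Println("could not resolve peer endpoint:", err)
		return
	}
	fmt.Printf("Endpoint = %s:%d\n", addr.IP, addr.Port)
}
```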
Lucas Servén Marín
e3cb7d7958 .travis.yml: only tag latest images on master
Ensure that only images built from the master branch get tagged with
`latest`.

Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
2020-09-17 14:47:40 +02:00
Lucas Servén Marín
d3492a72cb website: add dependency resolutions
Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
2020-09-17 14:28:07 +02:00
Lucas Servén Marín
7750a08019 website: update syntax for new docusaurus version
Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
2020-09-17 13:05:47 +02:00
Lucas Servén Marín
5d7fb96274 website/yarn.lock: bump npm deps
Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
2020-09-17 13:05:20 +02:00
Lucas Servén Marín
b5cadfe3de .travis.yml: only tag latest image if not git tag
If we tag a release for, e.g. 0.1.1, after we've already cut a 0.2.0
tag, then CI would tag the 0.1.1 image as `latest`, which is confusing.
This commit ensures that we only tag the `latest` image when building
from master.

Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
2020-09-15 15:58:15 +02:00
93 changed files with 9208 additions and 5155 deletions

141
.github/workflows/ci.yml vendored Normal file

@@ -0,0 +1,141 @@
name: CI
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
schedule:
- cron: '0 0 * * *'
workflow_dispatch:
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Set up Go
uses: actions/setup-go@v2
with:
go-version: 1.15.7
- name: Build
run: make
linux:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Set up Go
uses: actions/setup-go@v2
with:
go-version: 1.15.7
- name: Build kg and kgctl for all Linux Architectures
run: make all-build
darwin:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Set up Go
uses: actions/setup-go@v2
with:
go-version: 1.15.7
- name: Build kgctl for Darwin
run: make OS=darwin
windows:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Set up Go
uses: actions/setup-go@v2
with:
go-version: 1.15.7
- name: Build kgctl for Windows
run: make OS=windows
unit:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Set up Go
uses: actions/setup-go@v2
with:
go-version: 1.15.7
- name: Run Unit Tests
run: make unit
lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Set up Go
uses: actions/setup-go@v2
with:
go-version: 1.15.7
- name: Lint Code
run: make lint
container:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Set up Go
uses: actions/setup-go@v2
with:
go-version: 1.15.7
- name: Enable Experimental Docker CLI
run: |
echo $'{\n "experimental": true\n}' | sudo tee /etc/docker/daemon.json
mkdir -p ~/.docker
echo $'{\n "experimental": "enabled"\n}' | sudo tee ~/.docker/config.json
sudo service docker restart
docker version -f '{{.Client.Experimental}}'
docker version -f '{{.Server.Experimental}}'
docker buildx version
- name: Container
run: make container
push:
if: github.event_name != 'pull_request'
needs:
- build
- linux
- darwin
- windows
- unit
- lint
- container
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Set up Go
uses: actions/setup-go@v2
with:
go-version: 1.15.7
- name: Enable Experimental Docker CLI
run: |
echo $'{\n "experimental": true\n}' | sudo tee /etc/docker/daemon.json
mkdir -p ~/.docker
echo $'{\n "experimental": "enabled"\n}' | sudo tee ~/.docker/config.json
sudo service docker restart
docker version -f '{{.Client.Experimental}}'
docker version -f '{{.Server.Experimental}}'
docker buildx version
- name: Set up QEMU
uses: docker/setup-qemu-action@master
with:
platforms: all
- name: Login to DockerHub
if: github.event_name != 'pull_request'
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Build and push
if: github.event_name != 'pull_request'
run: make manifest
- name: Build and push latest
if: github.event_name != 'pull_request' && github.ref == 'refs/heads/main'
run: make manifest-latest

21
.github/workflows/release.yaml vendored Normal file

@@ -0,0 +1,21 @@
on:
release:
types: [created]
name: Handle Release
jobs:
linux:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Set up Go
uses: actions/setup-go@v2
with:
go-version: 1.15.7
- name: Make Directory with kgctl Binaries to Be Released
run: make release
- name: Publish Release
uses: skx/github-action-publish-binaries@master
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
args: 'bin/release/kgctl-*'

1
.gitignore vendored

@@ -3,3 +3,4 @@
.manifest*
.push*
bin/
tmp/

View File

@@ -1,31 +0,0 @@
sudo: required
dist: xenial
language: go
services:
- docker
go:
- 1.14.2
env:
- GO111MODULE=on DOCKER_CLI_EXPERIMENTAL=enabled
before_install:
- sudo apt-get update && sudo apt-get -y install jq
install: true
script:
- make
- make clean
- make unit
- make lint
- make container
after_success:
- docker login -u="$DOCKER_USERNAME" -p="$DOCKER_PASSWORD"
- docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
- make manifest && make manifest-latest

View File

@@ -11,5 +11,5 @@ LABEL maintainer="squat <lserven@gmail.com>"
RUN echo -e "https://alpine.global.ssl.fastly.net/alpine/v3.12/main\nhttps://alpine.global.ssl.fastly.net/alpine/v3.12/community" > /etc/apk/repositories && \
apk add --no-cache ipset iptables ip6tables wireguard-tools
COPY --from=cni bridge host-local loopback portmap /opt/cni/bin/
COPY bin/$GOARCH/kg /opt/bin/
COPY bin/linux/$GOARCH/kg /opt/bin/
ENTRYPOINT ["/opt/bin/kg"]

6
MAINTAINERS.md Normal file

@@ -0,0 +1,6 @@
# Core Maintainers of This Repository
| Name | GitHub |
|--------------------|------------------------------------------------|
| Lucas Servén Marín | [@squat](https://github.com/squat) |
| Leon Löchner | [@leonnicolas](https://github.com/leonnicolas) |

View File

@@ -1,10 +1,18 @@
export GO111MODULE=on
.PHONY: push container clean container-name container-latest push-latest fmt lint test unit vendor header generate client deepcopy informer lister openapi manifest manfest-latest manifest-annotate manifest manfest-latest manifest-annotate
.PHONY: push container clean container-name container-latest push-latest fmt lint test unit vendor header generate client deepcopy informer lister openapi manifest manfest-latest manifest-annotate manifest manfest-latest manifest-annotate release
ARCH ?= amd64
OS ?= $(shell go env GOOS)
ARCH ?= $(shell go env GOARCH)
ALL_OS := linux darwin windows
ALL_ARCH := amd64 arm arm64
DOCKER_ARCH := "amd64" "arm v7" "arm64 v8"
BINS := $(addprefix bin/$(ARCH)/,kg kgctl)
ifeq ($(OS),linux)
BINS := bin/$(OS)/$(ARCH)/kg bin/$(OS)/$(ARCH)/kgctl
else
BINS := bin/$(OS)/$(ARCH)/kgctl
endif
RELEASE_BINS := $(addprefix bin/release/kgctl-, $(addprefix linux-, $(ALL_ARCH)) darwin-amd64 windows-amd64)
CLIENT_BINS := $(addsuffix /kgctl, $(addprefix bin/, $(addprefix linux/, $(ALL_ARCH)) darwin/amd64 windows/amd64))
PROJECT := kilo
PKG := github.com/squat/$(PROJECT)
REGISTRY ?= index.docker.io
@@ -31,14 +39,15 @@ INFORMER_GEN_BINARY := bin/informer-gen
LISTER_GEN_BINARY := bin/lister-gen
OPENAPI_GEN_BINARY := bin/openapi-gen
GOLINT_BINARY := bin/golint
EMBEDMD_BINARY := bin/embedmd
BUILD_IMAGE ?= golang:1.14.2-alpine
BUILD_IMAGE ?= golang:1.15.7-alpine
BASE_IMAGE ?= alpine:3.12
build: $(BINS)
build-%:
@$(MAKE) --no-print-directory ARCH=$* build
@$(MAKE) --no-print-directory OS=$(word 1,$(subst -, ,$*)) ARCH=$(word 2,$(subst -, ,$*)) build
container-latest-%:
@$(MAKE) --no-print-directory ARCH=$* container-latest
@@ -52,7 +61,7 @@ push-latest-%:
push-%:
@$(MAKE) --no-print-directory ARCH=$* push
all-build: $(addprefix build-, $(ALL_ARCH))
all-build: $(addprefix build-$(OS)-, $(ALL_ARCH))
all-container: $(addprefix container-, $(ALL_ARCH))
@@ -132,7 +141,7 @@ pkg/k8s/apis/kilo/v1alpha1/openapi_generated.go: pkg/k8s/apis/kilo/v1alpha1/type
go fmt $@
$(BINS): $(SRC) go.mod
@mkdir -p bin/$(ARCH)
@mkdir -p bin/$(word 2,$(subst /, ,$@))/$(word 3,$(subst /, ,$@))
@echo "building: $@"
@docker run --rm \
-u $$(id -u):$$(id -g) \
@@ -140,8 +149,8 @@ $(BINS): $(SRC) go.mod
-w /$(PROJECT) \
$(BUILD_IMAGE) \
/bin/sh -c " \
GOARCH=$(ARCH) \
GOOS=linux \
GOARCH=$(word 3,$(subst /, ,$@)) \
GOOS=$(word 2,$(subst /, ,$@)) \
GOCACHE=/$(PROJECT)/.cache \
CGO_ENABLED=0 \
go build -mod=vendor -o $@ \
@@ -190,7 +199,7 @@ header: .header
FILES=; \
for f in $(GO_FILES); do \
for i in 0 1 2 3 4 5; do \
FILE=$$(tail -n +$$i $$f | head -n $$HEADER_LEN | sed "s/[0-9]\{4\}/YEAR/"); \
FILE=$$(t=$$(mktemp) && tail -n +$$i $$f > $$t && head -n $$HEADER_LEN $$t | sed "s/[0-9]\{4\}/YEAR/"); \
[ "$$FILE" = "$$HEADER" ] && continue 2; \
done; \
FILES="$$FILES$$f "; \
@@ -200,6 +209,13 @@ header: .header
exit 1; \
fi
tmp/help.txt: bin/$(OS)/$(ARCH)/kg
mkdir -p tmp
bin//$(OS)/$(ARCH)/kg --help 2>&1 | head -n -1 > $@
docs/kg.md: $(EMBEDMD_BINARY) tmp/help.txt
$(EMBEDMD_BINARY) -w $@
website/docs/README.md: README.md
rm -rf website/static/img/graphs
find docs -type f -name '*.md' | xargs -I{} sh -c 'cat $(@D)/$$(basename {} .md) > website/{}'
@@ -216,7 +232,7 @@ website/build/index.html: website/docs/README.md
yarn --cwd website build
container: .container-$(ARCH)-$(VERSION) container-name
.container-$(ARCH)-$(VERSION): $(BINS) Dockerfile
.container-$(ARCH)-$(VERSION): bin/linux/$(ARCH)/kg Dockerfile
@i=0; for a in $(ALL_ARCH); do [ "$$a" = $(ARCH) ] && break; i=$$((i+1)); done; \
ia=""; iv=""; \
j=0; for a in $(DOCKER_ARCH); do \
@@ -281,6 +297,12 @@ push-latest: container-latest
push-name:
@echo "pushed: $(IMAGE):$(ARCH)-$(VERSION)"
release: $(RELEASE_BINS)
$(RELEASE_BINS):
@make OS=$(word 2,$(subst -, ,$(@F))) ARCH=$(word 3,$(subst -, ,$(@F)))
mkdir -p $(@D)
cp bin/$(word 2,$(subst -, ,$(@F)))/$(word 3,$(subst -, ,$(@F)))/kgctl $@
clean: container-clean bin-clean
rm -rf .cache
@@ -311,3 +333,6 @@ $(OPENAPI_GEN_BINARY):
$(GOLINT_BINARY):
go build -mod=vendor -o $@ golang.org/x/lint/golint
$(EMBEDMD_BINARY):
go build -mod=vendor -o $@ github.com/campoy/embedmd

View File

@@ -4,7 +4,7 @@
Kilo is a multi-cloud network overlay built on WireGuard and designed for Kubernetes.
[![Build Status](https://travis-ci.org/squat/kilo.svg?branch=master)](https://travis-ci.org/squat/kilo)
[![Build Status](https://github.com/squat/kilo/workflows/CI/badge.svg)](https://github.com/squat/kilo/actions?query=workflow%3ACI)
[![Go Report Card](https://goreportcard.com/badge/github.com/squat/kilo)](https://goreportcard.com/report/github.com/squat/kilo)
## Overview
@@ -14,6 +14,8 @@ By allowing pools of nodes in different locations to communicate securely, Kilo
Kilo's design allows clients to VPN to a cluster in order to securely access services running on the cluster.
In addition to creating multi-cloud clusters, Kilo enables the creation of multi-cluster services, i.e. services that span across different Kubernetes clusters.
An introductory video about Kilo from KubeCon EU 2019 can be found on [youtube](https://www.youtube.com/watch?v=iPz_DAOOCKA).
## How it works
Kilo uses [WireGuard](https://www.wireguard.com/), a performant and secure VPN, to create a mesh between the different nodes in a cluster.
@@ -26,15 +28,14 @@ This means that if a cluster uses, for example, Flannel for networking, Kilo can
Kilo can be installed on any Kubernetes cluster either pre- or post-bring-up.
### Step 1: install WireGuard
### Step 1: get WireGuard
Kilo requires the WireGuard kernel module on all nodes in the cluster.
For most Linux distributions, this can be installed using the system package manager.
For Container Linux, WireGuard can be easily installed using a DaemonSet:
Kilo requires the WireGuard kernel module to be loaded on all nodes in the cluster.
Starting at Linux 5.6, the kernel includes WireGuard in-tree; Linux distributions with older kernels will need to install WireGuard.
For most Linux distributions, this can be done using the system package manager.
[See the WireGuard website for up-to-date instructions for installing WireGuard](https://www.wireguard.com/install/).
```shell
kubectl apply -f https://raw.githubusercontent.com/squat/modulus/master/wireguard/daemonset.yaml
```
Clusters with nodes on which the WireGuard kernel module cannot be installed can use Kilo by leveraging a [userspace WireGuard implementation](./docs/userspace-wireguard.md).
### Step 2: open WireGuard port
@@ -68,25 +69,25 @@ Kilo can be installed by deploying a DaemonSet to the cluster.
To run Kilo on kubeadm:
```shell
kubectl apply -f https://raw.githubusercontent.com/squat/kilo/master/manifests/kilo-kubeadm.yaml
kubectl apply -f https://raw.githubusercontent.com/squat/kilo/main/manifests/kilo-kubeadm.yaml
```
To run Kilo on bootkube:
```shell
kubectl apply -f https://raw.githubusercontent.com/squat/kilo/master/manifests/kilo-bootkube.yaml
kubectl apply -f https://raw.githubusercontent.com/squat/kilo/main/manifests/kilo-bootkube.yaml
```
To run Kilo on Typhoon:
```shell
kubectl apply -f https://raw.githubusercontent.com/squat/kilo/master/manifests/kilo-typhoon.yaml
kubectl apply -f https://raw.githubusercontent.com/squat/kilo/main/manifests/kilo-typhoon.yaml
```
To run Kilo on k3s:
```shell
kubectl apply -f https://raw.githubusercontent.com/squat/kilo/master/manifests/kilo-k3s.yaml
kubectl apply -f https://raw.githubusercontent.com/squat/kilo/main/manifests/kilo-k3s.yaml
```
## Add-on Mode
@@ -98,10 +99,10 @@ Kilo currently supports running on top of Flannel.
For example, to run Kilo on a Typhoon cluster running Flannel:
```shell
kubectl apply -f https://raw.githubusercontent.com/squat/kilo/master/manifests/kilo-typhoon-flannel.yaml
kubectl apply -f https://raw.githubusercontent.com/squat/kilo/main/manifests/kilo-typhoon-flannel.yaml
```
[See the manifests directory for more examples](https://github.com/squat/kilo/tree/master/manifests).
[See the manifests directory for more examples](https://github.com/squat/kilo/tree/main/manifests).
## VPN

View File

@@ -16,7 +16,6 @@ package main
import (
"errors"
"flag"
"fmt"
"net"
"net/http"
@@ -24,12 +23,14 @@ import (
"os/signal"
"strings"
"syscall"
"time"
"github.com/go-kit/kit/log"
"github.com/go-kit/kit/log/level"
"github.com/oklog/run"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promhttp"
flag "github.com/spf13/pflag"
apiextensions "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"
@@ -80,6 +81,7 @@ var (
func Main() error {
backend := flag.String("backend", k8s.Backend, fmt.Sprintf("The backend for the mesh. Possible values: %s", availableBackends))
cleanUpIface := flag.Bool("clean-up-interface", false, "Should Kilo delete its interface when it shuts down?")
createIface := flag.Bool("create-interface", true, "Should kilo create an interface on startup?")
cni := flag.Bool("cni", true, "Should Kilo manage the node's CNI configuration?")
cniPath := flag.String("cni-path", mesh.DefaultCNIPath, "Path to CNI config.")
compatibility := flag.String("compatibility", "", fmt.Sprintf("Should Kilo run in compatibility mode? Possible values: %s", availableCompatibilities))
@@ -92,9 +94,11 @@ func Main() error {
local := flag.Bool("local", true, "Should Kilo manage routes within a location?")
logLevel := flag.String("log-level", logLevelInfo, fmt.Sprintf("Log level to use. Possible values: %s", availableLogLevels))
master := flag.String("master", "", "The address of the Kubernetes API server (overrides any value in kubeconfig).")
topologyLabel := flag.String("topology-label", k8s.RegionLabelKey, "Kubernetes node label used to group nodes into logical locations.")
var port uint
flag.UintVar(&port, "port", mesh.DefaultKiloPort, "The port over which WireGuard peers should communicate.")
subnet := flag.String("subnet", mesh.DefaultKiloSubnet.String(), "CIDR from which to allocate addresses for WireGuard interfaces.")
resyncPeriod := flag.Duration("resync-period", 30*time.Second, "How often should the Kilo controllers reconcile?")
printVersion := flag.Bool("version", false, "Print version and exit")
flag.Parse()
@@ -171,12 +175,12 @@ func Main() error {
c := kubernetes.NewForConfigOrDie(config)
kc := kiloclient.NewForConfigOrDie(config)
ec := apiextensions.NewForConfigOrDie(config)
b = k8s.New(c, kc, ec)
b = k8s.New(c, kc, ec, *topologyLabel)
default:
return fmt.Errorf("backend %v unknown; possible values are: %s", *backend, availableBackends)
}
m, err := mesh.New(b, enc, gr, *hostname, uint32(port), s, *local, *cni, *cniPath, *iface, *cleanUpIface, log.With(logger, "component", "kilo"))
m, err := mesh.New(b, enc, gr, *hostname, uint32(port), s, *local, *cni, *cniPath, *iface, *cleanUpIface, *createIface, *resyncPeriod, log.With(logger, "component", "kilo"))
if err != nil {
return fmt.Errorf("failed to create Kilo mesh: %v", err)
}

View File

@@ -60,9 +60,10 @@ var (
granularity mesh.Granularity
port uint32
}
backend string
granularity string
kubeconfig string
backend string
granularity string
kubeconfig string
topologyLabel string
)
func runRoot(_ *cobra.Command, _ []string) error {
@@ -83,7 +84,7 @@ func runRoot(_ *cobra.Command, _ []string) error {
c := kubernetes.NewForConfigOrDie(config)
kc := kiloclient.NewForConfigOrDie(config)
ec := apiextensions.NewForConfigOrDie(config)
opts.backend = k8s.New(c, kc, ec)
opts.backend = k8s.New(c, kc, ec, topologyLabel)
default:
return fmt.Errorf("backend %v unknown; posible values are: %s", backend, availableBackends)
}
@@ -110,6 +111,7 @@ func main() {
cmd.PersistentFlags().StringVar(&granularity, "mesh-granularity", string(mesh.LogicalGranularity), fmt.Sprintf("The granularity of the network mesh to create. Possible values: %s", availableGranularities))
cmd.PersistentFlags().StringVar(&kubeconfig, "kubeconfig", os.Getenv("KUBECONFIG"), "Path to kubeconfig.")
cmd.PersistentFlags().Uint32Var(&opts.port, "port", mesh.DefaultKiloPort, "The WireGuard port over which the nodes communicate.")
cmd.PersistentFlags().StringVar(&topologyLabel, "topology-label", k8s.RegionLabelKey, "Kubernetes node label used to group nodes into logical locations.")
for _, subCmd := range []*cobra.Command{
graph(),

View File

@@ -289,9 +289,16 @@ func translatePeer(peer *wireguard.Peer) *v1alpha1.Peer {
aips = append(aips, aip.String())
}
var endpoint *v1alpha1.PeerEndpoint
if peer.Endpoint != nil && peer.Endpoint.Port > 0 && peer.Endpoint.IP != nil {
if peer.Endpoint != nil && peer.Endpoint.Port > 0 && (peer.Endpoint.IP != nil || peer.Endpoint.DNS != "") {
var ip string
if peer.Endpoint.IP != nil {
ip = peer.Endpoint.IP.String()
}
endpoint = &v1alpha1.PeerEndpoint{
IP: peer.Endpoint.IP.String(),
DNSOrIP: v1alpha1.DNSOrIP{
DNS: peer.Endpoint.DNS,
IP: ip,
},
Port: peer.Endpoint.Port,
}
}

View File

@@ -5,7 +5,7 @@ The following annotations can be added to any Kubernetes Node object to configur
|Name|type|examples|
|----|----|-------|
|[kilo.squat.ai/force-endpoint](#force-endpoint)|host:port|`55.55.55.55:51820`, `example.com:1337`|
|[kilo.squat.ai/force-internal-ip](#force-internal-ip)|CIDR|`55.55.55.55/32`|
|[kilo.squat.ai/force-internal-ip](#force-internal-ip)|CIDR|`55.55.55.55/32`, `"-"`,`""`|
|[kilo.squat.ai/leader](#leader)|string|`""`, `true`|
|[kilo.squat.ai/location](#location)|string|`gcp-east`, `lab`|
|[kilo.squat.ai/persistent-keepalive](#persistent-keepalive)|uint|`10`|
@@ -25,6 +25,7 @@ Kilo routes packets destined for nodes inside the same logical location using th
The Kilo agent running on each node will use heuristics to automatically detect a private IP address for the node; however, in some circumstances it may be necessary to explicitly configure the IP address, for example:
* _multiple private IP addresses_: if a node has multiple private IPs but one is preferred, then the preferred IP address should be specified;
* _IPv6_: if a node has both private IPv4 and IPv6 addresses and the Kilo network should operate over IPv6, then the IPv6 address should be specified.
* _disable private IP with "-" or ""_: a node has a private and public address, but the private address ought to be ignored.
### leader
By default, Kilo creates a network mesh at the data-center granularity.

40
docs/kg.md Normal file

@@ -0,0 +1,40 @@
# kg
`kg` is the Kilo agent that runs on every Kubernetes node in a Kilo mesh.
It performs several key functions, including:
* adding the node to the Kilo mesh;
* installing CNI configuration on the node;
* configuring the WireGuard network interface; and
* maintaining routing table entries and iptables rules.
`kg` is typically installed on all nodes of a Kubernetes cluster using a DaemonSet.
Example manifests can be found [in the manifests directory](https://github.com/squat/kilo/tree/main/manifests).
## Usage
The behavior of `kg` can be configured using the command line flags listed below.
[embedmd]:# (../tmp/help.txt)
```txt
Usage of bin//linux/amd64/kg:
--backend string The backend for the mesh. Possible values: kubernetes (default "kubernetes")
--clean-up-interface Should Kilo delete its interface when it shuts down?
--cni Should Kilo manage the node's CNI configuration? (default true)
--cni-path string Path to CNI config. (default "/etc/cni/net.d/10-kilo.conflist")
--compatibility string Should Kilo run in compatibility mode? Possible values: flannel
--create-interface Should kilo create an interface on startup? (default true)
--encapsulate string When should Kilo encapsulate packets within a location? Possible values: never, crosssubnet, always (default "always")
--hostname string Hostname of the node on which this process is running.
--interface string Name of the Kilo interface to use; if it does not exist, it will be created. (default "kilo0")
--kubeconfig string Path to kubeconfig.
--listen string The address at which to listen for health and metrics. (default ":1107")
--local Should Kilo manage routes within a location? (default true)
--log-level string Log level to use. Possible values: all, debug, info, warn, error, none (default "info")
--master string The address of the Kubernetes API server (overrides any value in kubeconfig).
--mesh-granularity string The granularity of the network mesh to create. Possible values: location, full (default "location")
--port uint The port over which WireGuard peers should communicate. (default 51820)
--resync-period duration How often should the Kilo controllers reconcile? (default 30s)
--subnet string CIDR from which to allocate addresses for WireGuard interfaces. (default "10.4.0.0/16")
--topology-label string Kubernetes node label used to group nodes into logical locations. (default "topology.kubernetes.io/region")
--version Print version and exit
```

View File

@@ -8,7 +8,7 @@ This tool can be used to understand a mesh's topology, get the WireGuard configu
Installing `kgctl` currently requires building the binary from source.
*Note*: the [Go toolchain must be installed](https://golang.org/doc/install) in order to build the binary.
To build and install `kgctl`, simply run:
To build and install `kgctl`, run:
```shell
go install github.com/squat/kilo/cmd/kgctl

View File

@@ -5,7 +5,7 @@ This enables clusters to provide services to other clusters over a secure connec
For example, a cluster on AWS with access to GPUs could run a machine learning service that could be consumed by workloads running in a another location, e.g. an on-prem cluster without GPUs.
Unlike services exposed via Ingresses or NodePort Services, multi-cluster services can remain private and internal to the clusters.
*Note*: in order for connected clusters to be fully routable, the allowed IPs that they declare must be non-overlapping, i.e. the Kilo, pod, and service CIDRs.
> **Note**: in order for connected clusters to be fully routable, the allowed IPs that they declare must be non-overlapping, i.e. the Kilo, pod, and service CIDRs.
## Getting Started

View File

@@ -10,7 +10,7 @@ Support for [Kubernetes network policies](https://kubernetes.io/docs/concepts/se
The following command adds network policy support by deploying kube-router to work alongside Kilo:
```shell
kubectl apply -f kubectl apply -f https://raw.githubusercontent.com/squat/kilo/master/manifests/kube-router.yaml
kubectl apply -f kubectl apply -f https://raw.githubusercontent.com/squat/kilo/main/manifests/kube-router.yaml
```
## Examples

View File

@@ -11,9 +11,10 @@ This allows the encrypted network to serve several purposes, for example:
By default, Kilo creates a mesh between the different logical locations in the cluster, e.g. data-centers, cloud providers, etc.
Kilo will try to infer the location of the node using the [topology.kubernetes.io/region](https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#topologykubernetesioregion) node label.
Additionally, Kilo supports using a custom topology label by setting the command line flag `--topology-label=<label>`.
If this label is not set, then the [kilo.squat.ai/location](./annotations.md#location) node annotation can be used.
For example, in order to join nodes in Google Cloud and AWS into a single cluster, an administrator could use the following snippet could to annotate all nodes with `GCP` in the name:
For example, in order to join nodes in Google Cloud and AWS into a single cluster, an administrator could use the following snippet to annotate all nodes with `GCP` in the name:
```shell
for node in $(kubectl get nodes | grep -i gcp | awk '{print $1}'); do kubectl annotate node $node kilo.squat.ai/location="gcp"; done

View File

@@ -0,0 +1,34 @@
# Userspace WireGuard
It is possible to use a userspace implementation of WireGuard with Kilo.
This can make sense in cases where
* not all nodes in a cluster have WireGuard installed; or
* nodes are effectively immutable and kernel modules cannot be installed.
## Homogeneous Clusters
In a homogeneous cluster where no node has the WireGuard kernel module, a userspace WireGuard implementation can be made available by deploying a DaemonSet.
This DaemonSet creates a WireGuard interface that Kilo will manage.
In order to avoid race conditions, `kg` needs to be passed the `--create-interface=false` flag.
An example configuration for a k3s cluster with [boringtun](https://github.com/cloudflare/boringtun) can be applied with:
```shell
kubectl apply -f https://raw.githubusercontent.com/squat/kilo/main/manifests/kilo-k3s-userspace.yaml
```
__Note:__ even if some nodes have the WireGuard kernel module, this configuration will cause all nodes to use the userspace implementation of WireGuard.
## Heterogeneous Clusters
In a heterogeneous cluster where some nodes are missing the WireGuard kernel module, a userspace WireGuard implementation can be provided only to the nodes that need it while enabling the other nodes to leverage WireGuard via the kernel module.
An example of such a configuration for a k3s cluster can by applied with:
```shell
kubectl apply -f https://raw.githubusercontent.com/squat/kilo/main/manifests/kilo-k3s-userspace-heterogeneous.yaml
```
This configuration will deploy [nkml](https://github.com/leonnicolas/nkml) as a DaemonSet to label all nodes according to the presence of the WireGuard kernel module.
It will also create two different DaemonSets with Kilo: `kilo` without userspace WireGuard and `kilo-userspace` with boringtun as a sidecar.
__Note:__ because Kilo is dependant on nkml, nkml must be run on the host network before CNI is available and requires a kubeconfig in order to access the Kubernetes API.

View File

@@ -3,6 +3,7 @@
Kilo enables peers outside of a Kubernetes cluster to connect to the created WireGuard network.
This enables several use cases, for example:
* giving cluster applications secure access to external services, e.g. services behind a corporate VPN;
* improving the development flow of applications by running them locally and connecting them to the cluster;
* allowing external services to access the cluster; and
* enabling developers and support to securely debug cluster resources.
@@ -74,7 +75,7 @@ From any node or Pod on the cluster, one can now ping the peer:
ping 10.5.0.1
```
If the peer exposes a layer 4 service, for example an HTTP service, then one could also make requests against that endpoint from the cluster:
If the peer exposes a layer 4 service, for example an HTTP server listening on TCP port 80, then one could also make requests against that endpoint from the cluster:
```shell
curl http://10.5.0.1

4
go.mod

@@ -1,11 +1,12 @@
module github.com/squat/kilo
go 1.14
go 1.15
require (
github.com/PuerkitoBio/purell v1.1.1 // indirect
github.com/ant31/crd-validation v0.0.0-20180801212718-38f6a293f140
github.com/awalterschulze/gographviz v0.0.0-20181013152038-b2885df04310
github.com/campoy/embedmd v1.0.0
github.com/containernetworking/cni v0.6.0
github.com/containernetworking/plugins v0.6.0
github.com/coreos/go-iptables v0.4.0
@@ -37,6 +38,7 @@ require (
github.com/onsi/gomega v1.5.0 // indirect
github.com/prometheus/client_golang v0.9.2
github.com/spf13/cobra v0.0.4-0.20190321000552-67fc4837d267
github.com/spf13/pflag v1.0.3
github.com/stretchr/testify v1.3.0 // indirect
github.com/vishvananda/netlink v1.0.0
github.com/vishvananda/netns v0.0.0-20180720170159-13995c7128cc // indirect

15
go.sum

@@ -1,6 +1,5 @@
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/PuerkitoBio/purell v1.1.0 h1:rmGxhojJlM0tuKtfdvliR84CFHljx9ag64t2xmVkjK4=
github.com/PuerkitoBio/purell v1.1.0/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
github.com/PuerkitoBio/purell v1.1.1 h1:WEQqlqaGbrPkxLJWfBwQmfEAE1Z7ONdDLqrN38tNFfI=
github.com/PuerkitoBio/purell v1.1.1/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
@@ -13,6 +12,8 @@ github.com/awalterschulze/gographviz v0.0.0-20181013152038-b2885df04310 h1:t+qxR
github.com/awalterschulze/gographviz v0.0.0-20181013152038-b2885df04310/go.mod h1:GEV5wmg4YquNw7v1kkyoX9etIk8yVmXj+AkDHuuETHs=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973 h1:xJ4a3vCFaGF/jqvzLMYoU8P317H5OQ+Via4RmuPwCS0=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/campoy/embedmd v1.0.0 h1:V4kI2qTJJLf4J29RzI/MAt2c3Bl4dQSYPuflzwFH2hY=
github.com/campoy/embedmd v1.0.0/go.mod h1:oxyr9RCiSXg0M3VJ3ks0UGfp98BpSSGr0kpiX3MzVl8=
github.com/containernetworking/cni v0.6.0 h1:FXICGBZNMtdHlW65trpoHviHctQD3seWhRRcqp2hMOU=
github.com/containernetworking/cni v0.6.0/go.mod h1:LGwApLUm2FpoOfxTDEeq8T9ipbpZ61X79hmU3w8FmsY=
github.com/containernetworking/plugins v0.6.0 h1:bqPT7yYisnWs+FrtgY5/qLEB9QZ/6z11wMNCwSdzZm0=
@@ -38,17 +39,14 @@ github.com/go-kit/kit v0.8.0 h1:Wz+5lgoB0kkuqLEc6NVmwRknTKP6dTGbSqvhZtBI/j0=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-logfmt/logfmt v0.4.0 h1:MP4Eh7ZCb31lleYCFuwm0oe4/YGak+5l1vA2NOE80nA=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-openapi/jsonpointer v0.17.0 h1:nH6xp8XdXHx8dqveo0ZuJBluCO2qGrPbDNZ0dwoRHP0=
github.com/go-openapi/jsonpointer v0.17.0/go.mod h1:cOnomiV+CVVwFLk0A/MExoFMjwdsUdVpsRhURCKh+3M=
github.com/go-openapi/jsonpointer v0.19.0 h1:FTUMcX77w5rQkClIzDtTxvn6Bsa894CcrzNj2MMfeg8=
github.com/go-openapi/jsonpointer v0.19.0/go.mod h1:cOnomiV+CVVwFLk0A/MExoFMjwdsUdVpsRhURCKh+3M=
github.com/go-openapi/jsonreference v0.17.0 h1:yJW3HCkTHg7NOA+gZ83IPHzUSnUzGXhGmsdiCcMexbA=
github.com/go-openapi/jsonreference v0.17.0/go.mod h1:g4xxGn04lDIRh0GJb5QlpE3HfopLOL6uZrK/VgnsK9I=
github.com/go-openapi/jsonreference v0.19.0 h1:BqWKpV1dFd+AuiKlgtddwVIFQsuMpxfBDBHGfM2yNpk=
github.com/go-openapi/jsonreference v0.19.0/go.mod h1:g4xxGn04lDIRh0GJb5QlpE3HfopLOL6uZrK/VgnsK9I=
github.com/go-openapi/spec v0.19.0 h1:A4SZ6IWh3lnjH0rG0Z5lkxazMGBECtrZcbyYQi+64k4=
github.com/go-openapi/spec v0.19.0/go.mod h1:XkF/MOi14NmjsfZ8VtAKf8pIlbZzyoTvZsdfssdxcBI=
github.com/go-openapi/swag v0.17.0 h1:iqrgMg7Q7SvtbWLlltPrkMs0UBJI6oTSs79JFRUi880=
github.com/go-openapi/swag v0.17.0/go.mod h1:AByQ+nYG6gQg71GINrmuDXCPWdL640yX49/kXLo40Tg=
github.com/go-openapi/swag v0.19.0 h1:Kg7Wl7LkTPlmc393QZQ/5rQadPhi7pBVEMZxyTi0Ii8=
github.com/go-openapi/swag v0.19.0/go.mod h1:AByQ+nYG6gQg71GINrmuDXCPWdL640yX49/kXLo40Tg=
@@ -58,7 +56,6 @@ github.com/gogo/protobuf v1.2.1 h1:/s5zKNz0uPFCZ5hddgPdo2TK2TVrUNMn0OOX8/aZMTE=
github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
github.com/golang/groupcache v0.0.0-20181024230925-c65c006176ff h1:kOkM9whyQYodu09SJ6W3NCsHG7crFaJILQ22Gozp3lg=
github.com/golang/groupcache v0.0.0-20181024230925-c65c006176ff/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/protobuf v1.2.0 h1:P3YflyNX/ehuJFLhxviNdFxQPkGK5cDcApsge1SqnvM=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1 h1:YF8+flBXS5eO826T4nzqPrxfhQThhXl0YzfuUPu4SBg=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
@@ -89,7 +86,6 @@ github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kylelemons/godebug v0.0.0-20170820004349-d65d576e9348 h1:MtvEpTB6LX3vkb4ax0b5D2DHbNAUsen0Gx5wZoq3lV4=
github.com/kylelemons/godebug v0.0.0-20170820004349-d65d576e9348/go.mod h1:B69LEHPfb2qLo0BaaOLcbitczOKLWTsrBG9LczfCD4k=
github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/mailru/easyjson v0.0.0-20180823135443-60711f1a8329 h1:2gxZ0XQIU/5z3Z3bUBu+FXuk2pFbkN6tcwi/pjyaDic=
github.com/mailru/easyjson v0.0.0-20180823135443-60711f1a8329/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mailru/easyjson v0.0.0-20190403194419-1ea4449da983 h1:wL11wNW7dhKIcRCHSm4sHKPWz0tt4mwBsVodG7+Xyqg=
github.com/mailru/easyjson v0.0.0-20190403194419-1ea4449da983/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
@@ -129,7 +125,6 @@ github.com/spf13/pflag v1.0.3 h1:zPAT6CGy6wXeQ7NtTnaTerfKOsV6V6F8agHXFiazDkg=
github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.2.2 h1:bSDNvY7ZPG5RlJ8otE/7V6gMiyenm9RtJ7IUVIAoJ1w=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0 h1:TivCn/peBQ7UY8ooIcPgZFpTNSz0Q2U6UrFlUfqbe0Q=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
@@ -147,12 +142,10 @@ golang.org/x/lint v0.0.0-20190409202823-959b441ac422 h1:QzoH/1pFpZguR8NrRHLcO6jK
golang.org/x/lint v0.0.0-20190409202823-959b441ac422/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181005035420-146acd28ed58 h1:otZG8yDCO4LVps5+9bxOeNiCvgmOyt96J3roHTYs7oE=
golang.org/x/net v0.0.0-20181005035420-146acd28ed58/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181201002055-351d144fa1fc/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3 h1:0GoQqolDA55aaLxZyTzK/Y2ePZzZTUrRacwib7cNsYQ=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190501004415-9ce7a6920f09 h1:KaQtG+aDELoNmXYas3TVkGNYRuq8JQ1aa7LJt8EXVyo=
golang.org/x/net v0.0.0-20190501004415-9ce7a6920f09/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
@@ -164,18 +157,15 @@ golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJ
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d h1:+R4KGOnez64A81RvjARKc4UT5/tI9ujCIVX+P5KiHuI=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190429190828-d89cdac9e872 h1:cGjJzUd8RgBw428LXP65YXni0aiGNA4Bl+ls8SmLOm8=
golang.org/x/sys v0.0.0-20190429190828-d89cdac9e872/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.3.0 h1:g61tztE5qeGQ89tm6NTjjM9VPIm088od1l6aSorWRWg=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c h1:fqgJT0MGcGpPgpWU7VRdRjuArfcOvC4AoJmILihzhDg=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e h1:FDhOuMEY4JVRztM/gsbk+IKUQ8kj74bxZrgw87eMMVc=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190328211700-ab21143f2384 h1:TFlARGu6Czu1z7q93HTxcP1P+/ZFC/IKythI5RzrnRg=
@@ -183,7 +173,6 @@ golang.org/x/tools v0.0.0-20190328211700-ab21143f2384/go.mod h1:LCzVGOaR6xXOjkQ3
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.5.0 h1:KxkO13IPW4Lslp2bz+KHP2E3gtFlrIGNThxkZQ3g+4c=
google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=

View File

@@ -0,0 +1,349 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: kilo
namespace: kube-system
labels:
app.kubernetes.io/name: kilo
data:
cni-conf.json: |
{
"cniVersion":"0.3.1",
"name":"kilo",
"plugins":[
{
"name":"kubernetes",
"type":"bridge",
"bridge":"kube-bridge",
"isDefaultGateway":true,
"forceAddress":true,
"mtu": 1420,
"ipam":{
"type":"host-local"
}
},
{
"type":"portmap",
"snat":true,
"capabilities":{
"portMappings":true
}
}
]
}
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: kilo
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: kilo
rules:
- apiGroups:
- ""
resources:
- nodes
verbs:
- list
- get
- patch
- watch
- apiGroups:
- kilo.squat.ai
resources:
- peers
verbs:
- list
- update
- watch
- apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions
verbs:
- create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kilo
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kilo
subjects:
- kind: ServiceAccount
name: kilo
namespace: kube-system
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kilo
namespace: kube-system
labels:
app.kubernetes.io/name: kilo
spec:
selector:
matchLabels:
app.kubernetes.io/name: kilo
template:
metadata:
labels:
app.kubernetes.io/name: kilo
spec:
nodeSelector:
nkml.squat.ai/wireguard: "true"
serviceAccountName: kilo
hostNetwork: true
containers:
- name: kilo
image: squat/kilo
args:
- --kubeconfig=/etc/kubernetes/kubeconfig
- --hostname=$(NODE_NAME)
- --interface=kilo0
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
securityContext:
privileged: true
volumeMounts:
- name: cni-conf-dir
mountPath: /etc/cni/net.d
- name: kilo-dir
mountPath: /var/lib/kilo
- name: kubeconfig
mountPath: /etc/kubernetes/kubeconfig
readOnly: true
- name: lib-modules
mountPath: /lib/modules
readOnly: true
- name: xtables-lock
mountPath: /run/xtables.lock
readOnly: false
initContainers:
- name: install-cni
image: squat/kilo
command:
- /bin/sh
- -c
- set -e -x;
cp /opt/cni/bin/* /host/opt/cni/bin/;
TMP_CONF="$CNI_CONF_NAME".tmp;
echo "$CNI_NETWORK_CONFIG" > $TMP_CONF;
rm -f /host/etc/cni/net.d/*;
mv $TMP_CONF /host/etc/cni/net.d/$CNI_CONF_NAME
env:
- name: CNI_CONF_NAME
value: 10-kilo.conflist
- name: CNI_NETWORK_CONFIG
valueFrom:
configMapKeyRef:
name: kilo
key: cni-conf.json
volumeMounts:
- name: cni-bin-dir
mountPath: /host/opt/cni/bin
- name: cni-conf-dir
mountPath: /host/etc/cni/net.d
tolerations:
- effect: NoSchedule
operator: Exists
- effect: NoExecute
operator: Exists
volumes:
- name: cni-bin-dir
hostPath:
path: /opt/cni/bin
- name: cni-conf-dir
hostPath:
path: /etc/cni/net.d
- name: kilo-dir
hostPath:
path: /var/lib/kilo
- name: kubeconfig
hostPath:
# Since kilo runs as a daemonset, it is recommended that you copy the
# k3s.yaml kubeconfig file from the master node to all worker nodes
# with the same path structure.
path: /etc/rancher/k3s/k3s.yaml
- name: lib-modules
hostPath:
path: /lib/modules
- name: xtables-lock
hostPath:
path: /run/xtables.lock
type: FileOrCreate
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kilo-userspace
namespace: kube-system
labels:
app.kubernetes.io/name: kilo-userspace
spec:
selector:
matchLabels:
app.kubernetes.io/name: kilo-userspace
template:
metadata:
labels:
app.kubernetes.io/name: kilo-userspace
spec:
nodeSelector:
nkml.squat.ai/wireguard: "false"
serviceAccountName: kilo
hostNetwork: true
containers:
- name: kilo
image: squat/kilo
args:
- --kubeconfig=/etc/kubernetes/kubeconfig
- --hostname=$(NODE_NAME)
- --create-interface=false
- --interface=kilo0
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
securityContext:
privileged: true
volumeMounts:
- name: cni-conf-dir
mountPath: /etc/cni/net.d
- name: kilo-dir
mountPath: /var/lib/kilo
- name: kubeconfig
mountPath: /etc/kubernetes/kubeconfig
readOnly: true
- name: lib-modules
mountPath: /lib/modules
readOnly: true
- name: xtables-lock
mountPath: /run/xtables.lock
readOnly: false
- name: wireguard
mountPath: /var/run/wireguard
readOnly: false
- name: boringtun
image: leonnicolas/boringtun
args:
- --disable-drop-privileges=true
- --foreground
- kilo0
securityContext:
privileged: true
volumeMounts:
- name: wireguard
mountPath: /var/run/wireguard
readOnly: false
initContainers:
- name: install-cni
image: squat/kilo
command:
- /bin/sh
- -c
- set -e -x;
cp /opt/cni/bin/* /host/opt/cni/bin/;
TMP_CONF="$CNI_CONF_NAME".tmp;
echo "$CNI_NETWORK_CONFIG" > $TMP_CONF;
rm -f /host/etc/cni/net.d/*;
mv $TMP_CONF /host/etc/cni/net.d/$CNI_CONF_NAME
env:
- name: CNI_CONF_NAME
value: 10-kilo.conflist
- name: CNI_NETWORK_CONFIG
valueFrom:
configMapKeyRef:
name: kilo
key: cni-conf.json
volumeMounts:
- name: cni-bin-dir
mountPath: /host/opt/cni/bin
- name: cni-conf-dir
mountPath: /host/etc/cni/net.d
tolerations:
- effect: NoSchedule
operator: Exists
- effect: NoExecute
operator: Exists
volumes:
- name: cni-bin-dir
hostPath:
path: /opt/cni/bin
- name: cni-conf-dir
hostPath:
path: /etc/cni/net.d
- name: kilo-dir
hostPath:
path: /var/lib/kilo
- name: kubeconfig
hostPath:
# Since kilo runs as a daemonset, it is recommended that you copy the
# k3s.yaml kubeconfig file from the master node to all worker nodes
# with the same path structure.
path: /etc/rancher/k3s/k3s.yaml
- name: lib-modules
hostPath:
path: /lib/modules
- name: xtables-lock
hostPath:
path: /run/xtables.lock
type: FileOrCreate
- name: wireguard
hostPath:
path: /var/run/wireguard
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: nkml
namespace: kube-system
labels:
app.kubernetes.io/name: nkml
spec:
selector:
matchLabels:
app.kubernetes.io/name: nkml
template:
metadata:
labels:
app.kubernetes.io/name: nkml
spec:
hostNetwork: true
containers:
- name: nkml
image: leonnicolas/nkml
args:
- --hostname=$(NODE_NAME)
- --label-mod=wireguard
- --kubeconfig=/etc/kubernetes/kubeconfig
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
ports:
- name: http
containerPort: 8080
volumeMounts:
- name: kubeconfig
mountPath: /etc/kubernetes/kubeconfig
readOnly: true
volumes:
- name: kubeconfig
hostPath:
# Since the above DaemonSets are dependent on the labels
# and nkml would need a CNI to start,
# it needs to run on the host network and use the kubeconfig
# to label the nodes.
path: /etc/rancher/k3s/k3s.yaml

View File

@@ -0,0 +1,199 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: kilo
namespace: kube-system
labels:
app.kubernetes.io/name: kilo
data:
cni-conf.json: |
{
"cniVersion":"0.3.1",
"name":"kilo",
"plugins":[
{
"name":"kubernetes",
"type":"bridge",
"bridge":"kube-bridge",
"isDefaultGateway":true,
"forceAddress":true,
"mtu": 1420,
"ipam":{
"type":"host-local"
}
},
{
"type":"portmap",
"snat":true,
"capabilities":{
"portMappings":true
}
}
]
}
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: kilo
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: kilo
rules:
- apiGroups:
- ""
resources:
- nodes
verbs:
- list
- patch
- watch
- apiGroups:
- kilo.squat.ai
resources:
- peers
verbs:
- list
- update
- watch
- apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions
verbs:
- create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kilo
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kilo
subjects:
- kind: ServiceAccount
name: kilo
namespace: kube-system
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kilo
namespace: kube-system
labels:
app.kubernetes.io/name: kilo
spec:
selector:
matchLabels:
app.kubernetes.io/name: kilo
template:
metadata:
labels:
app.kubernetes.io/name: kilo
spec:
serviceAccountName: kilo
hostNetwork: true
containers:
- name: kilo
image: squat/kilo
args:
- --kubeconfig=/etc/kubernetes/kubeconfig
- --hostname=$(NODE_NAME)
- --create-interface=false
- --interface=kilo0
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
securityContext:
privileged: true
volumeMounts:
- name: cni-conf-dir
mountPath: /etc/cni/net.d
- name: kilo-dir
mountPath: /var/lib/kilo
- name: kubeconfig
mountPath: /etc/kubernetes/kubeconfig
readOnly: true
- name: lib-modules
mountPath: /lib/modules
readOnly: true
- name: xtables-lock
mountPath: /run/xtables.lock
readOnly: false
- name: wireguard
mountPath: /var/run/wireguard
readOnly: false
- name: boringtun
image: leonnicolas/boringtun
args:
- --disable-drop-privileges=true
- --foreground
- kilo0
securityContext:
privileged: true
volumeMounts:
- name: wireguard
mountPath: /var/run/wireguard
readOnly: false
initContainers:
- name: install-cni
image: squat/kilo
command:
- /bin/sh
- -c
- set -e -x;
cp /opt/cni/bin/* /host/opt/cni/bin/;
TMP_CONF="$CNI_CONF_NAME".tmp;
echo "$CNI_NETWORK_CONFIG" > $TMP_CONF;
rm -f /host/etc/cni/net.d/*;
mv $TMP_CONF /host/etc/cni/net.d/$CNI_CONF_NAME
env:
- name: CNI_CONF_NAME
value: 10-kilo.conflist
- name: CNI_NETWORK_CONFIG
valueFrom:
configMapKeyRef:
name: kilo
key: cni-conf.json
volumeMounts:
- name: cni-bin-dir
mountPath: /host/opt/cni/bin
- name: cni-conf-dir
mountPath: /host/etc/cni/net.d
tolerations:
- effect: NoSchedule
operator: Exists
- effect: NoExecute
operator: Exists
volumes:
- name: cni-bin-dir
hostPath:
path: /opt/cni/bin
- name: cni-conf-dir
hostPath:
path: /etc/cni/net.d
- name: kilo-dir
hostPath:
path: /var/lib/kilo
- name: kubeconfig
hostPath:
# Since kilo runs as a daemonset, it is recommended that you copy the
# k3s.yaml kubeconfig file from the master node to all worker nodes
# with the same path structure.
path: /etc/rancher/k3s/k3s.yaml
- name: lib-modules
hostPath:
path: /lib/modules
- name: xtables-lock
hostPath:
path: /run/xtables.lock
type: FileOrCreate
- name: wireguard
hostPath:
path: /var/run/wireguard

View File

@@ -160,7 +160,7 @@ spec:
path: /opt/cni/bin
- name: cni-conf-dir
hostPath:
path: /etc/kubernetes/cni/net.d
path: /etc/cni/net.d
- name: kilo-dir
hostPath:
path: /var/lib/kilo

View File

@@ -67,17 +67,18 @@ func (i *ipip) Init(base int) error {
// when traffic between nodes must be encapsulated.
func (i *ipip) Rules(nodes []*net.IPNet) []iptables.Rule {
var rules []iptables.Rule
proto := ipipProtocolName()
rules = append(rules, iptables.NewIPv4Chain("filter", "KILO-IPIP"))
rules = append(rules, iptables.NewIPv6Chain("filter", "KILO-IPIP"))
rules = append(rules, iptables.NewIPv4Rule("filter", "INPUT", "-m", "comment", "--comment", "Kilo: jump to IPIP chain", "-p", "4", "-j", "KILO-IPIP"))
rules = append(rules, iptables.NewIPv6Rule("filter", "INPUT", "-m", "comment", "--comment", "Kilo: jump to IPIP chain", "-p", "4", "-j", "KILO-IPIP"))
rules = append(rules, iptables.NewIPv4Rule("filter", "INPUT", "-p", proto, "-m", "comment", "--comment", "Kilo: jump to IPIP chain", "-j", "KILO-IPIP"))
rules = append(rules, iptables.NewIPv6Rule("filter", "INPUT", "-p", proto, "-m", "comment", "--comment", "Kilo: jump to IPIP chain", "-j", "KILO-IPIP"))
for _, n := range nodes {
// Accept encapsulated traffic from peers.
rules = append(rules, iptables.NewRule(iptables.GetProtocol(len(n.IP)), "filter", "KILO-IPIP", "-m", "comment", "--comment", "Kilo: allow IPIP traffic", "-s", n.IP.String(), "-j", "ACCEPT"))
rules = append(rules, iptables.NewRule(iptables.GetProtocol(len(n.IP)), "filter", "KILO-IPIP", "-s", n.String(), "-m", "comment", "--comment", "Kilo: allow IPIP traffic", "-j", "ACCEPT"))
}
// Drop all other IPIP traffic.
rules = append(rules, iptables.NewIPv4Rule("filter", "INPUT", "-m", "comment", "--comment", "Kilo: reject other IPIP traffic", "-p", "4", "-j", "DROP"))
rules = append(rules, iptables.NewIPv6Rule("filter", "INPUT", "-m", "comment", "--comment", "Kilo: reject other IPIP traffic", "-p", "4", "-j", "DROP"))
rules = append(rules, iptables.NewIPv4Rule("filter", "INPUT", "-p", proto, "-m", "comment", "--comment", "Kilo: reject other IPIP traffic", "-j", "DROP"))
rules = append(rules, iptables.NewIPv6Rule("filter", "INPUT", "-p", proto, "-m", "comment", "--comment", "Kilo: reject other IPIP traffic", "-j", "DROP"))
return rules
}

View File

@@ -0,0 +1,26 @@
// Copyright 2021 the Kilo authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// +build cgo
package encapsulation
/*
#include <netdb.h>
*/
import "C"
func ipipProtocolName() string {
return C.GoString(C.getprotobynumber(4).p_name)
}

View File

@@ -0,0 +1,24 @@
// Copyright 2021 the Kilo authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// +build !cgo
package encapsulation
// If we cannot look up the protocol name at runtime
// in the protocols database, e.g. when building without cgo,
// assume `ipencap`, as this is the value in Kilo's container image.
func ipipProtocolName() string {
return "ipencap"
}
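Editor's note: the snippet below is an illustrative sketch and not part of this change set. It shows how the protocol name returned by ipipProtocolName would feed into the IPIP jump rule and how that rule is rendered under the revised rule string format introduced further down in pkg/iptables; the import path and the printed output are assumptions based on the hunks in this diff.
package main

import (
	"fmt"

	"github.com/squat/kilo/pkg/iptables"
)

func main() {
	// In the Alpine-based Kilo image, protocol number 4 resolves to "ipencap".
	proto := "ipencap"
	r := iptables.NewIPv4Rule("filter", "INPUT", "-p", proto, "-m", "comment", "--comment", "Kilo: jump to IPIP chain", "-j", "KILO-IPIP")
	// Expected to print an iptables-save style line, e.g.:
	// filter -A INPUT -p ipencap -m comment --comment "Kilo: jump to IPIP chain" -j KILO-IPIP
	fmt.Println(r.String())
}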

pkg/encapsulation/noop.go Normal file (59 lines)
View File

@@ -0,0 +1,59 @@
// Copyright 2021 the Kilo authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package encapsulation
import (
"net"
"github.com/squat/kilo/pkg/iptables"
)
// Noop is an encapsulation that does nothing.
type Noop Strategy
// CleanUp will also do nothing.
func (n Noop) CleanUp() error {
return nil
}
// Gw will also do nothing.
func (n Noop) Gw(_ net.IP, _ net.IP, _ *net.IPNet) net.IP {
return nil
}
// Index will also do nothing.
func (n Noop) Index() int {
return 0
}
// Init will also do nothing.
func (n Noop) Init(_ int) error {
return nil
}
// Rules will also do nothing.
func (n Noop) Rules(_ []*net.IPNet) []iptables.Rule {
return nil
}
// Set will also do nothing.
func (n Noop) Set(_ *net.IPNet) error {
return nil
}
// Strategy will finally do nothing.
func (n Noop) Strategy() Strategy {
return Strategy(n)
}

View File

@@ -16,6 +16,8 @@ package iptables
import (
"fmt"
"strings"
"sync/atomic"
"github.com/coreos/go-iptables/iptables"
)
@@ -38,12 +40,14 @@ func (s statusError) ExitStatus() int {
}
type fakeClient struct {
calls uint64
storage []Rule
}
var _ Client = &fakeClient{}
func (f *fakeClient) AppendUnique(table, chain string, spec ...string) error {
atomic.AddUint64(&f.calls, 1)
exists, err := f.Exists(table, chain, spec...)
if err != nil {
return err
@@ -56,6 +60,7 @@ func (f *fakeClient) AppendUnique(table, chain string, spec ...string) error {
}
func (f *fakeClient) Delete(table, chain string, spec ...string) error {
atomic.AddUint64(&f.calls, 1)
r := &rule{table: table, chain: chain, spec: spec}
for i := range f.storage {
if f.storage[i].String() == r.String() {
@@ -69,6 +74,7 @@ func (f *fakeClient) Delete(table, chain string, spec ...string) error {
}
func (f *fakeClient) Exists(table, chain string, spec ...string) (bool, error) {
atomic.AddUint64(&f.calls, 1)
r := &rule{table: table, chain: chain, spec: spec}
for i := range f.storage {
if f.storage[i].String() == r.String() {
@@ -78,7 +84,22 @@ func (f *fakeClient) Exists(table, chain string, spec ...string) (bool, error) {
return false, nil
}
func (f *fakeClient) List(table, chain string) ([]string, error) {
atomic.AddUint64(&f.calls, 1)
var rs []string
for i := range f.storage {
switch r := f.storage[i].(type) {
case *rule:
if r.table == table && r.chain == chain {
rs = append(rs, strings.TrimSpace(strings.TrimPrefix(r.String(), table)))
}
}
}
return rs, nil
}
func (f *fakeClient) ClearChain(table, name string) error {
atomic.AddUint64(&f.calls, 1)
for i := range f.storage {
r, ok := f.storage[i].(*rule)
if !ok {
@@ -90,10 +111,14 @@ func (f *fakeClient) ClearChain(table, name string) error {
}
}
}
return f.DeleteChain(table, name)
if err := f.DeleteChain(table, name); err != nil {
return err
}
return f.NewChain(table, name)
}
func (f *fakeClient) DeleteChain(table, name string) error {
atomic.AddUint64(&f.calls, 1)
for i := range f.storage {
r, ok := f.storage[i].(*rule)
if !ok {
@@ -116,6 +141,7 @@ func (f *fakeClient) DeleteChain(table, name string) error {
}
func (f *fakeClient) NewChain(table, name string) error {
atomic.AddUint64(&f.calls, 1)
c := &chain{table: table, chain: name}
for i := range f.storage {
if f.storage[i].String() == c.String() {
@@ -125,3 +151,17 @@ func (f *fakeClient) NewChain(table, name string) error {
f.storage = append(f.storage, c)
return nil
}
func (f *fakeClient) ListChains(table string) ([]string, error) {
atomic.AddUint64(&f.calls, 1)
var cs []string
for i := range f.storage {
switch c := f.storage[i].(type) {
case *chain:
if c.table == table {
cs = append(cs, c.chain)
}
}
}
return cs, nil
}

View File

@@ -17,11 +17,12 @@ package iptables
import (
"fmt"
"net"
"strings"
"sync"
"time"
"github.com/coreos/go-iptables/iptables"
"github.com/go-kit/kit/log"
"github.com/go-kit/kit/log/level"
)
// Protocol represents an IP protocol.
@@ -47,9 +48,11 @@ type Client interface {
AppendUnique(table string, chain string, rule ...string) error
Delete(table string, chain string, rule ...string) error
Exists(table string, chain string, rule ...string) (bool, error)
List(table string, chain string) ([]string, error)
ClearChain(table string, chain string) error
DeleteChain(table string, chain string) error
NewChain(table string, chain string) error
ListChains(table string) ([]string, error)
}
// Rule is an interface for interacting with iptables objects.
@@ -107,7 +110,17 @@ func (r *rule) String() string {
if r == nil {
return ""
}
return fmt.Sprintf("%s_%s_%s", r.table, r.chain, strings.Join(r.spec, "_"))
spec := r.table + " -A " + r.chain
for i, s := range r.spec {
spec += " "
// If this is the content of a comment, wrap the value in quotes.
if i > 0 && r.spec[i-1] == "--comment" {
spec += `"` + s + `"`
} else {
spec += s
}
}
return spec
}
func (r *rule) Proto() Protocol {
@@ -132,6 +145,7 @@ func NewIPv6Chain(table, name string) Rule {
}
func (c *chain) Add(client Client) error {
// Note: `ClearChain` creates a chain if it does not exist.
if err := client.ClearChain(c.table, c.chain); err != nil {
return fmt.Errorf("failed to add iptables chain: %v", err)
}
@@ -171,41 +185,81 @@ func (c *chain) String() string {
if c == nil {
return ""
}
return fmt.Sprintf("%s_%s", c.table, c.chain)
return chainToString(c.table, c.chain)
}
func (c *chain) Proto() Protocol {
return c.proto
}
func chainToString(table, chain string) string {
return fmt.Sprintf("%s -N %s", table, chain)
}
// Controller is able to reconcile a given set of iptables rules.
type Controller struct {
v4 Client
v6 Client
errors chan error
v4 Client
v6 Client
errors chan error
logger log.Logger
resyncPeriod time.Duration
sync.Mutex
rules []Rule
subscribed bool
}
// ControllerOption modifies the controller's configuration.
type ControllerOption func(h *Controller)
// WithLogger adds a logger to the controller.
func WithLogger(logger log.Logger) ControllerOption {
return func(c *Controller) {
c.logger = logger
}
}
// WithResyncPeriod modifies how often the controller reconciles.
func WithResyncPeriod(resyncPeriod time.Duration) ControllerOption {
return func(c *Controller) {
c.resyncPeriod = resyncPeriod
}
}
// WithClients adds iptables clients to the controller.
func WithClients(v4, v6 Client) ControllerOption {
return func(c *Controller) {
c.v4 = v4
c.v6 = v6
}
}
// New generates a new iptables rules controller.
// It expects an IP address length to determine
// whether to operate in IPv4 or IPv6 mode.
func New() (*Controller, error) {
v4, err := iptables.NewWithProtocol(iptables.ProtocolIPv4)
if err != nil {
return nil, fmt.Errorf("failed to create iptables IPv4 client: %v", err)
}
v6, err := iptables.NewWithProtocol(iptables.ProtocolIPv6)
if err != nil {
return nil, fmt.Errorf("failed to create iptables IPv6 client: %v", err)
}
return &Controller{
v4: v4,
v6: v6,
// If no options are given, IPv4 and IPv6 clients
// will be instantiated using the regular iptables backend.
func New(opts ...ControllerOption) (*Controller, error) {
c := &Controller{
errors: make(chan error),
}, nil
logger: log.NewNopLogger(),
}
for _, o := range opts {
o(c)
}
if c.v4 == nil {
v4, err := iptables.NewWithProtocol(iptables.ProtocolIPv4)
if err != nil {
return nil, fmt.Errorf("failed to create iptables IPv4 client: %v", err)
}
c.v4 = v4
}
if c.v6 == nil {
v6, err := iptables.NewWithProtocol(iptables.ProtocolIPv6)
if err != nil {
return nil, fmt.Errorf("failed to create iptables IPv6 client: %v", err)
}
c.v6 = v6
}
return c, nil
}
// Run watches for changes to iptables rules and reconciles
@@ -220,16 +274,18 @@ func (c *Controller) Run(stop <-chan struct{}) (<-chan error, error) {
c.subscribed = true
c.Unlock()
go func() {
t := time.NewTimer(c.resyncPeriod)
defer close(c.errors)
for {
select {
case <-time.After(5 * time.Second):
case <-t.C:
if err := c.reconcile(); err != nil {
nonBlockingSend(c.errors, fmt.Errorf("failed to reconcile rules: %v", err))
}
t.Reset(c.resyncPeriod)
case <-stop:
return
}
if err := c.reconcile(); err != nil {
nonBlockingSend(c.errors, fmt.Errorf("failed to reconcile rules: %v", err))
}
}
}()
return c.errors, nil
@@ -242,12 +298,14 @@ func (c *Controller) Run(stop <-chan struct{}) (<-chan error, error) {
func (c *Controller) reconcile() error {
c.Lock()
defer c.Unlock()
var rc ruleCache
for i, r := range c.rules {
ok, err := r.Exists(c.client(r.Proto()))
ok, err := rc.exists(c.client(r.Proto()), r)
if err != nil {
return fmt.Errorf("failed to check if rule exists: %v", err)
}
if !ok {
level.Info(c.logger).Log("msg", fmt.Sprintf("applying %d iptables rules", len(c.rules)-i))
if err := c.resetFromIndex(i, c.rules); err != nil {
return fmt.Errorf("failed to add rule: %v", err)
}
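Editor's note: the following is a minimal usage sketch, not part of the diff, of the controller with the functional options added above. The resync period, logger, and rules are arbitrary example values, and the import paths are assumed from the hunk headers; see the tests that follow for how the options are exercised with the fake client.
package main

import (
	"fmt"
	"time"

	"github.com/go-kit/kit/log"
	"github.com/squat/kilo/pkg/iptables"
)

func main() {
	// Construct a controller that reconciles every 30 seconds and logs nothing.
	c, err := iptables.New(
		iptables.WithLogger(log.NewNopLogger()),
		iptables.WithResyncPeriod(30*time.Second),
	)
	if err != nil {
		panic(err)
	}
	// Declare the desired rule set; Set applies it and records it for future reconciliation.
	rules := []iptables.Rule{
		iptables.NewIPv4Chain("filter", "KILO-IPIP"),
		iptables.NewIPv4Rule("filter", "KILO-IPIP", "-s", "10.4.0.1/32", "-m", "comment", "--comment", "Kilo: allow IPIP traffic", "-j", "ACCEPT"),
	}
	if err := c.Set(rules); err != nil {
		panic(err)
	}
	// Reconcile in the background until stop is closed; drain reconciliation errors.
	stop := make(chan struct{})
	errs, err := c.Run(stop)
	if err != nil {
		panic(err)
	}
	for e := range errs {
		fmt.Println(e)
	}
	// ... close(stop) on shutdown.
}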

View File

@@ -83,10 +83,11 @@ func TestSet(t *testing.T) {
},
},
} {
controller := &Controller{}
client := &fakeClient{}
controller.v4 = client
controller.v6 = client
controller, err := New(WithClients(client, client))
if err != nil {
t.Fatalf("test case %q: got unexpected error instantiating controller: %v", tc.name, err)
}
for i := range tc.sets {
if err := controller.Set(tc.sets[i]); err != nil {
t.Fatalf("test case %q: got unexpected error seting rule set %d: %v", tc.name, i, err)
@@ -139,10 +140,11 @@ func TestCleanUp(t *testing.T) {
rules: []Rule{rules[0], rules[1]},
},
} {
controller := &Controller{}
client := &fakeClient{}
controller.v4 = client
controller.v6 = client
controller, err := New(WithClients(client, client))
if err != nil {
t.Fatalf("test case %q: got unexpected error instantiating controller: %v", tc.name, err)
}
if err := controller.Set(tc.rules); err != nil {
t.Fatalf("test case %q: Set should not fail: %v", tc.name, err)
}

pkg/iptables/rulecache.go Normal file (106 lines)
View File

@@ -0,0 +1,106 @@
// Copyright 2021 the Kilo authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package iptables
import (
"fmt"
"strings"
)
type ruleCacheFlag byte
const (
exists ruleCacheFlag = 1 << iota
populated
)
type isNotExistError interface {
error
IsNotExist() bool
}
// ruleCache is a lazy cache that can be used to
// check if a given rule or chain exists in an iptables
// table.
type ruleCache [2]map[string]ruleCacheFlag
func (rc *ruleCache) populateTable(c Client, proto Protocol, table string) error {
// If the table already exists in the destination map,
// exit early since it has already been populated.
if rc[proto][table]&populated != 0 {
return nil
}
cs, err := c.ListChains(table)
if err != nil {
return fmt.Errorf("failed to populate chains for table %q: %v", table, err)
}
rc[proto][table] = exists | populated
for i := range cs {
rc[proto][chainToString(table, cs[i])] |= exists
}
return nil
}
func (rc *ruleCache) populateChain(c Client, proto Protocol, table, chain string) error {
// If the destination chain is already marked as populated, exit early.
if rc[proto][chainToString(table, chain)]&populated != 0 {
return nil
}
rs, err := c.List(table, chain)
if err != nil {
if existsErr, ok := err.(isNotExistError); ok && existsErr.IsNotExist() {
rc[proto][chainToString(table, chain)] = populated
return nil
}
return fmt.Errorf("failed to populate rules in chain %q for table %q: %v", chain, table, err)
}
for i := range rs {
rc[proto][strings.Join([]string{table, rs[i]}, " ")] = exists
}
// If there are rules on the chain, then the chain exists too.
if len(rs) > 0 {
rc[proto][chainToString(table, chain)] = exists
}
rc[proto][chainToString(table, chain)] |= populated
return nil
}
func (rc *ruleCache) populateRules(c Client, r Rule) error {
// Ensure a map for the proto exists.
if rc[r.Proto()] == nil {
rc[r.Proto()] = make(map[string]ruleCacheFlag)
}
if ch, ok := r.(*chain); ok {
return rc.populateTable(c, r.Proto(), ch.table)
}
ru := r.(*rule)
return rc.populateChain(c, r.Proto(), ru.table, ru.chain)
}
func (rc *ruleCache) exists(c Client, r Rule) (bool, error) {
// Exit early if the exact rule exists by name.
if rc[r.Proto()][r.String()]&exists != 0 {
return true, nil
}
// Otherwise, populate the respective rules.
if err := rc.populateRules(c, r); err != nil {
return false, err
}
return rc[r.Proto()][r.String()]&exists != 0, nil
}

View File

@@ -0,0 +1,125 @@
// Copyright 2021 the Kilo authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package iptables
import (
"testing"
)
func TestRuleCache(t *testing.T) {
for _, tc := range []struct {
name string
rules []Rule
check []Rule
out []bool
calls uint64
}{
{
name: "empty",
rules: nil,
check: []Rule{rules[0]},
out: []bool{false},
calls: 1,
},
{
name: "single negative",
rules: []Rule{rules[1]},
check: []Rule{rules[0]},
out: []bool{false},
calls: 1,
},
{
name: "single positive",
rules: []Rule{rules[1]},
check: []Rule{rules[1]},
out: []bool{true},
calls: 1,
},
{
name: "single chain",
rules: []Rule{&chain{"nat", "KILO-NAT", ProtocolIPv4}},
check: []Rule{&chain{"nat", "KILO-NAT", ProtocolIPv4}},
out: []bool{true},
calls: 1,
},
{
name: "rule on chain means chain exists",
rules: []Rule{rules[0]},
check: []Rule{rules[0], &chain{"filter", "FORWARD", ProtocolIPv4}},
out: []bool{true, true},
calls: 1,
},
{
name: "rule on chain does not mean table is fully populated",
rules: []Rule{rules[0], &chain{"filter", "INPUT", ProtocolIPv4}},
check: []Rule{rules[0], &chain{"filter", "OUTPUT", ProtocolIPv4}, &chain{"filter", "INPUT", ProtocolIPv4}},
out: []bool{true, false, true},
calls: 2,
},
{
name: "multiple rules on chain",
rules: []Rule{rules[0], rules[1]},
check: []Rule{rules[0], rules[1], &chain{"filter", "FORWARD", ProtocolIPv4}},
out: []bool{true, true, true},
calls: 1,
},
{
name: "checking rule on chain does not mean chain exists",
rules: nil,
check: []Rule{rules[0], &chain{"filter", "FORWARD", ProtocolIPv4}},
out: []bool{false, false},
calls: 2,
},
{
name: "multiple chains on same table",
rules: nil,
check: []Rule{&chain{"filter", "INPUT", ProtocolIPv4}, &chain{"filter", "FORWARD", ProtocolIPv4}},
out: []bool{false, false},
calls: 1,
},
{
name: "multiple chains on different table",
rules: nil,
check: []Rule{&chain{"filter", "INPUT", ProtocolIPv4}, &chain{"nat", "POSTROUTING", ProtocolIPv4}},
out: []bool{false, false},
calls: 2,
},
} {
controller := &Controller{}
client := &fakeClient{}
controller.v4 = client
controller.v6 = client
if err := controller.Set(tc.rules); err != nil {
t.Fatalf("test case %q: Set should not fail: %v", tc.name, err)
}
// Reset the client's calls so we can examine how many times
// the rule cache performs operations.
client.calls = 0
var rc ruleCache
for i := range tc.check {
ok, err := rc.exists(controller.client(tc.check[i].Proto()), tc.check[i])
if err != nil {
t.Fatalf("test case %q check %d: check should not fail: %v", tc.name, i, err)
}
if ok != tc.out[i] {
t.Errorf("test case %q check %d: expected %t, got %t", tc.name, i, tc.out[i], ok)
}
}
if client.calls != tc.calls {
t.Errorf("test case %q: expected client to be called %d times, got %d", tc.name, tc.calls, client.calls)
}
}
}

View File

@@ -1,6 +1,6 @@
// +build !ignore_autogenerated
// Copyright 2020 the Kilo authors
// Copyright 2021 the Kilo authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.

View File

@@ -19,9 +19,11 @@ import (
"errors"
"fmt"
"net"
"strings"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/util/validation"
)
const (
@@ -81,12 +83,22 @@ type PeerSpec struct {
// PeerEndpoint represents a WireGuard endpoint, which is an IP:port tuple.
type PeerEndpoint struct {
// IP must be a valid IP address.
IP string `json:"ip"`
DNSOrIP
// Port must be a valid port number.
Port uint32 `json:"port"`
}
// DNSOrIP represents either a DNS name or an IP address.
// IPs, as they are more specific, are preferred.
type DNSOrIP struct {
// DNS must be a valid RFC 1123 subdomain.
// +optional
DNS string `json:"dns,omitempty"`
// IP must be a valid IP address.
// +optional
IP string `json:"ip,omitempty"`
}
// PeerName is the peer resource's FQDN.
var PeerName = PeerPlural + "." + GroupName
@@ -127,10 +139,18 @@ func (p *Peer) Validate() error {
}
}
if p.Spec.Endpoint != nil {
if net.ParseIP(p.Spec.Endpoint.IP) == nil {
if p.Spec.Endpoint.IP == "" && p.Spec.Endpoint.DNS == "" {
return errors.New("either an endpoint DNS name IP address must be given")
}
if p.Spec.Endpoint.DNS != "" {
if errs := validation.IsDNS1123Subdomain(p.Spec.Endpoint.DNS); len(errs) != 0 {
return errors.New(strings.Join(errs, "; "))
}
}
if p.Spec.Endpoint.IP != "" && net.ParseIP(p.Spec.Endpoint.IP) == nil {
return fmt.Errorf("failed to parse %q as a valid IP address", p.Spec.Endpoint.IP)
}
if p.Spec.Endpoint.Port == 0 {
if 1 > p.Spec.Endpoint.Port || p.Spec.Endpoint.Port > 65535 {
return fmt.Errorf("port must be a valid UDP port number, got %d", p.Spec.Endpoint.Port)
}
}
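Editor's note: a minimal sketch, not part of the diff, of what the new DNSOrIP embedding looks like from Go when building a peer endpoint. The import path and peer values are assumptions, and Validate also checks fields omitted here (such as the public key and allowed IPs), so a partial object like this is expected to fail validation for reasons unrelated to the DNS endpoint.
package main

import (
	"fmt"

	"github.com/squat/kilo/pkg/k8s/apis/kilo/v1alpha1"
)

func main() {
	p := &v1alpha1.Peer{
		Spec: v1alpha1.PeerSpec{
			Endpoint: &v1alpha1.PeerEndpoint{
				// Either DNS or IP may be set; IPs are preferred when both are given.
				DNSOrIP: v1alpha1.DNSOrIP{DNS: "peer.example.com"},
				Port:    51820,
			},
		},
	}
	// Expect an error for the fields not filled in here, but not for the DNS endpoint itself.
	fmt.Println(p.Validate())
}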

View File

@@ -1,6 +1,6 @@
// +build !ignore_autogenerated
// Copyright 2020 the Kilo authors
// Copyright 2021 the Kilo authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -22,6 +22,22 @@ import (
runtime "k8s.io/apimachinery/pkg/runtime"
)
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *DNSOrIP) DeepCopyInto(out *DNSOrIP) {
*out = *in
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DNSOrIP.
func (in *DNSOrIP) DeepCopy() *DNSOrIP {
if in == nil {
return nil
}
out := new(DNSOrIP)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Peer) DeepCopyInto(out *Peer) {
*out = *in
@@ -52,6 +68,7 @@ func (in *Peer) DeepCopyObject() runtime.Object {
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *PeerEndpoint) DeepCopyInto(out *PeerEndpoint) {
*out = *in
out.DNSOrIP = in.DNSOrIP
return
}

View File

@@ -59,8 +59,8 @@ const (
locationAnnotationKey = "kilo.squat.ai/location"
persistentKeepaliveKey = "kilo.squat.ai/persistent-keepalive"
wireGuardIPAnnotationKey = "kilo.squat.ai/wireguard-ip"
regionLabelKey = "topology.kubernetes.io/region"
// RegionLabelKey is the key for the well-known Kubernetes topology region label.
RegionLabelKey = "topology.kubernetes.io/region"
jsonPatchSlash = "~1"
jsonRemovePatch = `{"op": "remove", "path": "%s"}`
)
@@ -81,10 +81,11 @@ func (b *backend) Peers() mesh.PeerBackend {
}
type nodeBackend struct {
client kubernetes.Interface
events chan *mesh.NodeEvent
informer cache.SharedIndexInformer
lister v1listers.NodeLister
client kubernetes.Interface
events chan *mesh.NodeEvent
informer cache.SharedIndexInformer
lister v1listers.NodeLister
topologyLabel string
}
type peerBackend struct {
@@ -96,16 +97,17 @@ type peerBackend struct {
}
// New creates a new instance of a mesh.Backend.
func New(c kubernetes.Interface, kc kiloclient.Interface, ec apiextensions.Interface) mesh.Backend {
func New(c kubernetes.Interface, kc kiloclient.Interface, ec apiextensions.Interface, topologyLabel string) mesh.Backend {
ni := v1informers.NewNodeInformer(c, 5*time.Minute, nil)
pi := v1alpha1informers.NewPeerInformer(kc, 5*time.Minute, nil)
return &backend{
&nodeBackend{
client: c,
events: make(chan *mesh.NodeEvent),
informer: ni,
lister: v1listers.NewNodeLister(ni.GetIndexer()),
client: c,
events: make(chan *mesh.NodeEvent),
informer: ni,
lister: v1listers.NewNodeLister(ni.GetIndexer()),
topologyLabel: topologyLabel,
},
&peerBackend{
client: kc,
@@ -138,7 +140,7 @@ func (nb *nodeBackend) Get(name string) (*mesh.Node, error) {
if err != nil {
return nil, err
}
return translateNode(n), nil
return translateNode(n, nb.topologyLabel), nil
}
// Init initializes the backend; for this backend that means
@@ -158,7 +160,7 @@ func (nb *nodeBackend) Init(stop <-chan struct{}) error {
// Failed to decode Node; ignoring...
return
}
nb.events <- &mesh.NodeEvent{Type: mesh.AddEvent, Node: translateNode(n)}
nb.events <- &mesh.NodeEvent{Type: mesh.AddEvent, Node: translateNode(n, nb.topologyLabel)}
},
UpdateFunc: func(old, obj interface{}) {
n, ok := obj.(*v1.Node)
@@ -171,7 +173,7 @@ func (nb *nodeBackend) Init(stop <-chan struct{}) error {
// Failed to decode Node; ignoring...
return
}
nb.events <- &mesh.NodeEvent{Type: mesh.UpdateEvent, Node: translateNode(n), Old: translateNode(o)}
nb.events <- &mesh.NodeEvent{Type: mesh.UpdateEvent, Node: translateNode(n, nb.topologyLabel), Old: translateNode(o, nb.topologyLabel)}
},
DeleteFunc: func(obj interface{}) {
n, ok := obj.(*v1.Node)
@@ -179,7 +181,7 @@ func (nb *nodeBackend) Init(stop <-chan struct{}) error {
// Failed to decode Node; ignoring...
return
}
nb.events <- &mesh.NodeEvent{Type: mesh.DeleteEvent, Node: translateNode(n)}
nb.events <- &mesh.NodeEvent{Type: mesh.DeleteEvent, Node: translateNode(n, nb.topologyLabel)}
},
},
)
@@ -194,7 +196,7 @@ func (nb *nodeBackend) List() ([]*mesh.Node, error) {
}
nodes := make([]*mesh.Node, len(ns))
for i := range ns {
nodes[i] = translateNode(ns[i])
nodes[i] = translateNode(ns[i], nb.topologyLabel)
}
return nodes, nil
}
@@ -207,7 +209,11 @@ func (nb *nodeBackend) Set(name string, node *mesh.Node) error {
}
n := old.DeepCopy()
n.ObjectMeta.Annotations[endpointAnnotationKey] = node.Endpoint.String()
n.ObjectMeta.Annotations[internalIPAnnotationKey] = node.InternalIP.String()
if node.InternalIP == nil {
n.ObjectMeta.Annotations[internalIPAnnotationKey] = ""
} else {
n.ObjectMeta.Annotations[internalIPAnnotationKey] = node.InternalIP.String()
}
n.ObjectMeta.Annotations[keyAnnotationKey] = string(node.Key)
n.ObjectMeta.Annotations[lastSeenAnnotationKey] = strconv.FormatInt(node.LastSeen, 10)
if node.WireGuardIP == nil {
@@ -239,7 +245,7 @@ func (nb *nodeBackend) Watch() <-chan *mesh.NodeEvent {
}
// translateNode translates a Kubernetes Node to a mesh.Node.
func translateNode(node *v1.Node) *mesh.Node {
func translateNode(node *v1.Node, topologyLabel string) *mesh.Node {
if node == nil {
return nil
}
@@ -253,7 +259,7 @@ func translateNode(node *v1.Node) *mesh.Node {
// Allow the region to be overridden by an explicit location.
location, ok := node.ObjectMeta.Annotations[locationAnnotationKey]
if !ok {
location = node.ObjectMeta.Labels[regionLabelKey]
location = node.ObjectMeta.Labels[topologyLabel]
}
// Allow the endpoint to be overridden.
endpoint := parseEndpoint(node.ObjectMeta.Annotations[forceEndpointAnnotationKey])
@@ -265,6 +271,12 @@ func translateNode(node *v1.Node) *mesh.Node {
if internalIP == nil {
internalIP = normalizeIP(node.ObjectMeta.Annotations[internalIPAnnotationKey])
}
// Set the NoInternalIP flag if the force-internal-ip annotation was set to "" or "-".
noInternalIP := false
if s, ok := node.ObjectMeta.Annotations[forceInternalIPAnnotationKey]; ok && (s == "" || s == "-") {
noInternalIP = true
internalIP = nil
}
// Set Wireguard PersistentKeepalive setting for the node.
var persistentKeepalive int64
if keepAlive, ok := node.ObjectMeta.Annotations[persistentKeepaliveKey]; !ok {
@@ -287,7 +299,10 @@ func translateNode(node *v1.Node) *mesh.Node {
// remote node's agent has not yet set its IP address;
// in this case the IP will be nil and
// the mesh can wait for the node to be updated.
// It is valid for the InternalIP to be nil,
// if the given node only has public IP addresses.
Endpoint: endpoint,
NoInternalIP: noInternalIP,
InternalIP: internalIP,
Key: []byte(node.ObjectMeta.Annotations[keyAnnotationKey]),
LastSeen: lastSeen,
@@ -325,10 +340,13 @@ func translatePeer(peer *v1alpha1.Peer) *mesh.Peer {
} else {
ip = ip.To16()
}
if peer.Spec.Endpoint.Port > 0 && ip != nil {
if peer.Spec.Endpoint.Port > 0 && (ip != nil || peer.Spec.Endpoint.DNS != "") {
endpoint = &wireguard.Endpoint{
DNSOrIP: wireguard.DNSOrIP{IP: ip},
Port: peer.Spec.Endpoint.Port,
DNSOrIP: wireguard.DNSOrIP{
DNS: peer.Spec.Endpoint.DNS,
IP: ip,
},
Port: peer.Spec.Endpoint.Port,
}
}
}
@@ -464,8 +482,15 @@ func (pb *peerBackend) Set(name string, peer *mesh.Peer) error {
p.Spec.AllowedIPs[i] = peer.AllowedIPs[i].String()
}
if peer.Endpoint != nil {
var ip string
if peer.Endpoint.IP != nil {
ip = peer.Endpoint.IP.String()
}
p.Spec.Endpoint = &v1alpha1.PeerEndpoint{
IP: peer.Endpoint.IP.String(),
DNSOrIP: v1alpha1.DNSOrIP{
IP: ip,
DNS: peer.Endpoint.DNS,
},
Port: peer.Endpoint.Port,
}
}

View File

@@ -83,7 +83,7 @@ func TestTranslateNode(t *testing.T) {
{
name: "region",
labels: map[string]string{
regionLabelKey: "a",
RegionLabelKey: "a",
},
out: &mesh.Node{
Location: "a",
@@ -95,7 +95,7 @@ func TestTranslateNode(t *testing.T) {
locationAnnotationKey: "b",
},
labels: map[string]string{
regionLabelKey: "a",
RegionLabelKey: "a",
},
out: &mesh.Node{
Location: "b",
@@ -137,7 +137,8 @@ func TestTranslateNode(t *testing.T) {
forceInternalIPAnnotationKey: "-10.1.0.2/24",
},
out: &mesh.Node{
InternalIP: &net.IPNet{IP: net.ParseIP("10.1.0.1"), Mask: net.CIDRMask(24, 32)},
InternalIP: &net.IPNet{IP: net.ParseIP("10.1.0.1"), Mask: net.CIDRMask(24, 32)},
NoInternalIP: false,
},
},
{
@@ -147,7 +148,8 @@ func TestTranslateNode(t *testing.T) {
forceInternalIPAnnotationKey: "10.1.0.2/24",
},
out: &mesh.Node{
InternalIP: &net.IPNet{IP: net.ParseIP("10.1.0.2"), Mask: net.CIDRMask(24, 32)},
InternalIP: &net.IPNet{IP: net.ParseIP("10.1.0.2"), Mask: net.CIDRMask(24, 32)},
NoInternalIP: false,
},
},
{
@@ -172,10 +174,11 @@ func TestTranslateNode(t *testing.T) {
wireGuardIPAnnotationKey: "10.4.0.1/16",
},
labels: map[string]string{
regionLabelKey: "a",
RegionLabelKey: "a",
},
out: &mesh.Node{
Endpoint: &wireguard.Endpoint{DNSOrIP: wireguard.DNSOrIP{IP: net.ParseIP("10.0.0.2")}, Port: 51821},
NoInternalIP: false,
InternalIP: &net.IPNet{IP: net.ParseIP("10.1.0.2"), Mask: net.CIDRMask(32, 32)},
Key: []byte("foo"),
LastSeen: 1000000000,
@@ -187,12 +190,68 @@ func TestTranslateNode(t *testing.T) {
},
subnet: "10.2.1.0/24",
},
{
name: "no InternalIP",
annotations: map[string]string{
endpointAnnotationKey: "10.0.0.1:51820",
internalIPAnnotationKey: "",
keyAnnotationKey: "foo",
lastSeenAnnotationKey: "1000000000",
locationAnnotationKey: "b",
persistentKeepaliveKey: "25",
wireGuardIPAnnotationKey: "10.4.0.1/16",
},
labels: map[string]string{
RegionLabelKey: "a",
},
out: &mesh.Node{
Endpoint: &wireguard.Endpoint{DNSOrIP: wireguard.DNSOrIP{IP: net.ParseIP("10.0.0.1")}, Port: 51820},
InternalIP: nil,
Key: []byte("foo"),
LastSeen: 1000000000,
Leader: false,
Location: "b",
PersistentKeepalive: 25,
Subnet: &net.IPNet{IP: net.ParseIP("10.2.1.0"), Mask: net.CIDRMask(24, 32)},
WireGuardIP: &net.IPNet{IP: net.ParseIP("10.4.0.1"), Mask: net.CIDRMask(16, 32)},
},
subnet: "10.2.1.0/24",
},
{
name: "Force no internal IP",
annotations: map[string]string{
endpointAnnotationKey: "10.0.0.1:51820",
internalIPAnnotationKey: "10.1.0.1/32",
forceInternalIPAnnotationKey: "",
keyAnnotationKey: "foo",
lastSeenAnnotationKey: "1000000000",
locationAnnotationKey: "b",
persistentKeepaliveKey: "25",
wireGuardIPAnnotationKey: "10.4.0.1/16",
},
labels: map[string]string{
RegionLabelKey: "a",
},
out: &mesh.Node{
Endpoint: &wireguard.Endpoint{DNSOrIP: wireguard.DNSOrIP{IP: net.ParseIP("10.0.0.1")}, Port: 51820},
NoInternalIP: true,
InternalIP: nil,
Key: []byte("foo"),
LastSeen: 1000000000,
Leader: false,
Location: "b",
PersistentKeepalive: 25,
Subnet: &net.IPNet{IP: net.ParseIP("10.2.1.0"), Mask: net.CIDRMask(24, 32)},
WireGuardIP: &net.IPNet{IP: net.ParseIP("10.4.0.1"), Mask: net.CIDRMask(16, 32)},
},
subnet: "10.2.1.0/24",
},
} {
n := &v1.Node{}
n.ObjectMeta.Annotations = tc.annotations
n.ObjectMeta.Labels = tc.labels
n.Spec.PodCIDR = tc.subnet
node := translateNode(n)
node := translateNode(n, RegionLabelKey)
if diff := pretty.Compare(node, tc.out); diff != "" {
t.Errorf("test case %q: got diff: %v", tc.name, diff)
}
@@ -240,17 +299,30 @@ func TestTranslatePeer(t *testing.T) {
name: "invalid endpoint ip",
spec: v1alpha1.PeerSpec{
Endpoint: &v1alpha1.PeerEndpoint{
IP: "foo",
DNSOrIP: v1alpha1.DNSOrIP{
IP: "foo",
},
Port: mesh.DefaultKiloPort,
},
},
out: &mesh.Peer{},
},
{
name: "valid endpoint",
name: "only endpoint port",
spec: v1alpha1.PeerSpec{
Endpoint: &v1alpha1.PeerEndpoint{
IP: "10.0.0.1",
Port: mesh.DefaultKiloPort,
},
},
out: &mesh.Peer{},
},
{
name: "valid endpoint ip",
spec: v1alpha1.PeerSpec{
Endpoint: &v1alpha1.PeerEndpoint{
DNSOrIP: v1alpha1.DNSOrIP{
IP: "10.0.0.1",
},
Port: mesh.DefaultKiloPort,
},
},
@@ -263,6 +335,25 @@ func TestTranslatePeer(t *testing.T) {
},
},
},
{
name: "valid endpoint DNS",
spec: v1alpha1.PeerSpec{
Endpoint: &v1alpha1.PeerEndpoint{
DNSOrIP: v1alpha1.DNSOrIP{
DNS: "example.com",
},
Port: mesh.DefaultKiloPort,
},
},
out: &mesh.Peer{
Peer: wireguard.Peer{
Endpoint: &wireguard.Endpoint{
DNSOrIP: wireguard.DNSOrIP{DNS: "example.com"},
Port: mesh.DefaultKiloPort,
},
},
},
},
{
name: "empty key",
spec: v1alpha1.PeerSpec{

View File

@@ -1,4 +1,4 @@
// Copyright 2020 the Kilo authors
// Copyright 2021 the Kilo authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.

View File

@@ -1,4 +1,4 @@
// Copyright 2020 the Kilo authors
// Copyright 2021 the Kilo authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.

View File

@@ -1,4 +1,4 @@
// Copyright 2020 the Kilo authors
// Copyright 2021 the Kilo authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.

View File

@@ -1,4 +1,4 @@
// Copyright 2020 the Kilo authors
// Copyright 2021 the Kilo authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.

View File

@@ -1,4 +1,4 @@
// Copyright 2020 the Kilo authors
// Copyright 2021 the Kilo authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.

View File

@@ -1,4 +1,4 @@
// Copyright 2020 the Kilo authors
// Copyright 2021 the Kilo authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.

View File

@@ -1,4 +1,4 @@
// Copyright 2020 the Kilo authors
// Copyright 2021 the Kilo authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.

View File

@@ -1,4 +1,4 @@
// Copyright 2020 the Kilo authors
// Copyright 2021 the Kilo authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.

View File

@@ -1,4 +1,4 @@
// Copyright 2020 the Kilo authors
// Copyright 2021 the Kilo authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.

View File

@@ -1,4 +1,4 @@
// Copyright 2020 the Kilo authors
// Copyright 2021 the Kilo authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.

View File

@@ -1,4 +1,4 @@
// Copyright 2020 the Kilo authors
// Copyright 2021 the Kilo authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.

View File

@@ -1,4 +1,4 @@
// Copyright 2020 the Kilo authors
// Copyright 2021 the Kilo authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.

View File

@@ -1,4 +1,4 @@
// Copyright 2020 the Kilo authors
// Copyright 2021 the Kilo authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.

View File

@@ -1,4 +1,4 @@
// Copyright 2020 the Kilo authors
// Copyright 2021 the Kilo authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.

View File

@@ -1,4 +1,4 @@
// Copyright 2020 the Kilo authors
// Copyright 2021 the Kilo authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.

View File

@@ -1,4 +1,4 @@
// Copyright 2020 the Kilo authors
// Copyright 2021 the Kilo authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.

View File

@@ -1,4 +1,4 @@
// Copyright 2020 the Kilo authors
// Copyright 2021 the Kilo authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.

View File

@@ -1,4 +1,4 @@
// Copyright 2020 the Kilo authors
// Copyright 2021 the Kilo authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.

View File

@@ -1,4 +1,4 @@
// Copyright 2020 the Kilo authors
// Copyright 2021 the Kilo authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.

View File

@@ -1,4 +1,4 @@
// Copyright 2020 the Kilo authors
// Copyright 2021 the Kilo authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.

View File

@@ -1,4 +1,4 @@
// Copyright 2020 the Kilo authors
// Copyright 2021 the Kilo authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.

View File

@@ -1,4 +1,4 @@
// Copyright 2020 the Kilo authors
// Copyright 2021 the Kilo authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.

pkg/mesh/backend.go Normal file (153 lines)
View File

@@ -0,0 +1,153 @@
// Copyright 2019 the Kilo authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package mesh
import (
"net"
"time"
"github.com/squat/kilo/pkg/wireguard"
)
const (
// checkInPeriod is how often nodes should check-in.
checkInPeriod = 30 * time.Second
// DefaultKiloInterface is the default interface created and used by Kilo.
DefaultKiloInterface = "kilo0"
// DefaultKiloPort is the default UDP port Kilo uses.
DefaultKiloPort = 51820
// DefaultCNIPath is the default path to the CNI config file.
DefaultCNIPath = "/etc/cni/net.d/10-kilo.conflist"
)
// DefaultKiloSubnet is the default CIDR for Kilo.
var DefaultKiloSubnet = &net.IPNet{IP: []byte{10, 4, 0, 0}, Mask: []byte{255, 255, 0, 0}}
// Granularity represents the abstraction level at which the network
// should be meshed.
type Granularity string
const (
// LogicalGranularity indicates that the network should create
// a mesh between logical locations, e.g. data-centers, but not between
// all nodes within a single location.
LogicalGranularity Granularity = "location"
// FullGranularity indicates that the network should create
// a mesh between every node.
FullGranularity Granularity = "full"
)
// Node represents a node in the network.
type Node struct {
Endpoint *wireguard.Endpoint
Key []byte
NoInternalIP bool
InternalIP *net.IPNet
// LastSeen is a Unix time for the last time
// the node confirmed it was live.
LastSeen int64
// Leader is a suggestion to Kilo that
// the node wants to lead its segment.
Leader bool
Location string
Name string
PersistentKeepalive int
Subnet *net.IPNet
WireGuardIP *net.IPNet
}
// Ready indicates whether or not the node is ready.
func (n *Node) Ready() bool {
// Nodes that are not leaders will not have WireGuardIPs, so it is not required.
return n != nil && n.Endpoint != nil && !(n.Endpoint.IP == nil && n.Endpoint.DNS == "") && n.Endpoint.Port != 0 && n.Key != nil && n.Subnet != nil && time.Now().Unix()-n.LastSeen < int64(checkInPeriod)*2/int64(time.Second)
}
// Peer represents a peer in the network.
type Peer struct {
wireguard.Peer
Name string
}
// Ready indicates whether or not the peer is ready.
// Peers can have empty endpoints because they may not have an
// IP, for example if they are behind a NAT, and thus
// will not declare their endpoint and instead allow it to be
// discovered.
func (p *Peer) Ready() bool {
return p != nil && p.AllowedIPs != nil && len(p.AllowedIPs) != 0 && p.PublicKey != nil
}
// EventType describes what kind of an action an event represents.
type EventType string
const (
// AddEvent represents an action where an item was added.
AddEvent EventType = "add"
// DeleteEvent represents an action where an item was removed.
DeleteEvent EventType = "delete"
// UpdateEvent represents an action where an item was updated.
UpdateEvent EventType = "update"
)
// NodeEvent represents an event concerning a node in the cluster.
type NodeEvent struct {
Type EventType
Node *Node
Old *Node
}
// PeerEvent represents an event concerning a peer in the cluster.
type PeerEvent struct {
Type EventType
Peer *Peer
Old *Peer
}
// Backend can create clients for all of the
// primitive types that Kilo deals with, namely:
// * nodes; and
// * peers.
type Backend interface {
Nodes() NodeBackend
Peers() PeerBackend
}
// NodeBackend can get nodes by name, init itself,
// list the nodes that should be meshed,
// set Kilo properties for a node,
// clean up any changes applied to the backend,
// and watch for changes to nodes.
type NodeBackend interface {
CleanUp(string) error
Get(string) (*Node, error)
Init(<-chan struct{}) error
List() ([]*Node, error)
Set(string, *Node) error
Watch() <-chan *NodeEvent
}
// PeerBackend can get peers by name, init itself,
// list the peers that should be in the mesh,
// set fields for a peer,
// clean up any changes applied to the backend,
// and watch for changes to peers.
type PeerBackend interface {
CleanUp(string) error
Get(string) (*Peer, error)
Init(<-chan struct{}) error
List() ([]*Peer, error)
Set(string, *Peer) error
Watch() <-chan *PeerEvent
}
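Editor's note: the stub below is an illustrative in-memory implementation of the NodeBackend interface declared above, not part of this change set; it only demonstrates the interface shape and would live in package mesh.
package mesh

// fakeNodeBackend is a hypothetical, illustration-only stub of NodeBackend.
type fakeNodeBackend struct {
	nodes  map[string]*Node
	events chan *NodeEvent
}

func newFakeNodeBackend() *fakeNodeBackend {
	return &fakeNodeBackend{nodes: make(map[string]*Node), events: make(chan *NodeEvent)}
}

func (f *fakeNodeBackend) CleanUp(name string) error { delete(f.nodes, name); return nil }

// Get returns nil without error when the node is unknown; a real backend would return an error.
func (f *fakeNodeBackend) Get(name string) (*Node, error) { return f.nodes[name], nil }

func (f *fakeNodeBackend) Init(_ <-chan struct{}) error { return nil }

func (f *fakeNodeBackend) List() ([]*Node, error) {
	ns := make([]*Node, 0, len(f.nodes))
	for _, n := range f.nodes {
		ns = append(ns, n)
	}
	return ns, nil
}

func (f *fakeNodeBackend) Set(name string, node *Node) error { f.nodes[name] = node; return nil }

func (f *fakeNodeBackend) Watch() <-chan *NodeEvent { return f.events }

// Compile-time check that the stub satisfies the interface.
var _ NodeBackend = &fakeNodeBackend{}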

View File

@@ -12,6 +12,8 @@
// See the License for the specific language governing permissions and
// limitations under the License.
// +build linux
package mesh
import (

pkg/mesh/discoverips.go Normal file (287 lines)
View File

@@ -0,0 +1,287 @@
// Copyright 2019 the Kilo authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// +build linux
package mesh
import (
"errors"
"fmt"
"net"
"sort"
"github.com/vishvananda/netlink"
)
// getIP returns a private and public IP address for the local node.
// It selects the private IP address in the following order:
// - private IP to which hostname resolves
// - private IP assigned to interface of default route
// - private IP assigned to local interface
// - nil if no private IP was found
// It selects the public IP address in the following order:
// - public IP to which hostname resolves
// - public IP assigned to interface of default route
// - public IP assigned to local interface
// - private IP to which hostname resolves
// - private IP assigned to interface of default route
// - private IP assigned to local interface
// - if no IP was found, return nil and an error.
func getIP(hostname string, ignoreIfaces ...int) (*net.IPNet, *net.IPNet, error) {
ignore := make(map[string]struct{})
for i := range ignoreIfaces {
if ignoreIfaces[i] == 0 {
// Only ignore valid interfaces.
continue
}
iface, err := net.InterfaceByIndex(ignoreIfaces[i])
if err != nil {
return nil, nil, fmt.Errorf("failed to find interface %d: %v", ignoreIfaces[i], err)
}
ips, err := ipsForInterface(iface)
if err != nil {
return nil, nil, err
}
for _, ip := range ips {
ignore[ip.String()] = struct{}{}
ignore[oneAddressCIDR(ip.IP).String()] = struct{}{}
}
}
var hostPriv, hostPub []*net.IPNet
{
// Check IPs to which hostname resolves first.
ips := ipsForHostname(hostname)
for _, ip := range ips {
ok, mask, err := assignedToInterface(ip)
if err != nil {
return nil, nil, fmt.Errorf("failed to search locally assigned addresses: %v", err)
}
if !ok {
continue
}
ip.Mask = mask
if isPublic(ip.IP) {
hostPub = append(hostPub, ip)
continue
}
hostPriv = append(hostPriv, ip)
}
sortIPs(hostPriv)
sortIPs(hostPub)
}
var defaultPriv, defaultPub []*net.IPNet
{
// Check IPs on interface for default route next.
iface, err := defaultInterface()
if err != nil {
return nil, nil, err
}
ips, err := ipsForInterface(iface)
if err != nil {
return nil, nil, err
}
for _, ip := range ips {
if isLocal(ip.IP) {
continue
}
if isPublic(ip.IP) {
defaultPub = append(defaultPub, ip)
continue
}
defaultPriv = append(defaultPriv, ip)
}
sortIPs(defaultPriv)
sortIPs(defaultPub)
}
var interfacePriv, interfacePub []*net.IPNet
{
// Finally look for IPs on all interfaces.
ips, err := ipsForAllInterfaces()
if err != nil {
return nil, nil, err
}
for _, ip := range ips {
if isLocal(ip.IP) {
continue
}
if isPublic(ip.IP) {
interfacePub = append(interfacePub, ip)
continue
}
interfacePriv = append(interfacePriv, ip)
}
sortIPs(interfacePriv)
sortIPs(interfacePub)
}
var priv, pub, tmpPriv, tmpPub []*net.IPNet
tmpPriv = append(tmpPriv, hostPriv...)
tmpPriv = append(tmpPriv, defaultPriv...)
tmpPriv = append(tmpPriv, interfacePriv...)
tmpPub = append(tmpPub, hostPub...)
tmpPub = append(tmpPub, defaultPub...)
tmpPub = append(tmpPub, interfacePub...)
for i := range tmpPriv {
if _, ok := ignore[tmpPriv[i].String()]; ok {
continue
}
priv = append(priv, tmpPriv[i])
}
for i := range tmpPub {
if _, ok := ignore[tmpPub[i].String()]; ok {
continue
}
pub = append(pub, tmpPub[i])
}
if len(priv) == 0 && len(pub) == 0 {
return nil, nil, errors.New("no valid IP was found")
}
if len(priv) == 0 {
// If no private IPs were found, use nil.
priv = append(priv, nil)
}
if len(pub) == 0 {
pub = priv
}
return priv[0], pub[0], nil
}
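// discoverNodeIPs is a hypothetical caller sketch (not part of Kilo) showing
// how the possibly-nil private IP returned by getIP must be handled; kiloIface
// and cniIndex stand in for interface indices the caller wants to exclude, and
// the "fmt" package is assumed to be imported.
func discoverNodeIPs(hostname string, kiloIface, cniIndex int) (*net.IPNet, *net.IPNet, error) {
	priv, pub, err := getIP(hostname, kiloIface, cniIndex)
	if err != nil {
		return nil, nil, fmt.Errorf("failed to find an IP: %v", err)
	}
	if priv == nil {
		// The host has no private address: run without one, e.g. skip IPIP
		// encapsulation and private-IP routes for this node. On success pub
		// is always set, because getIP falls back to a private IP otherwise.
		return nil, pub, nil
	}
	return priv, pub, nil
}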
func assignedToInterface(ip *net.IPNet) (bool, net.IPMask, error) {
links, err := netlink.LinkList()
if err != nil {
return false, nil, fmt.Errorf("failed to list interfaces: %v", err)
}
// Sort the links for stability.
sort.Slice(links, func(i, j int) bool {
return links[i].Attrs().Name < links[j].Attrs().Name
})
for _, link := range links {
addrs, err := netlink.AddrList(link, netlink.FAMILY_ALL)
if err != nil {
return false, nil, fmt.Errorf("failed to list addresses for %s: %v", link.Attrs().Name, err)
}
// Sort the IPs for stability.
sort.Slice(addrs, func(i, j int) bool {
return addrs[i].String() < addrs[j].String()
})
for i := range addrs {
if ip.IP.Equal(addrs[i].IP) {
return true, addrs[i].Mask, nil
}
}
}
return false, nil, nil
}
// ipsForHostname returns a slice of IPs to which the
// given hostname resolves.
func ipsForHostname(hostname string) []*net.IPNet {
if ip := net.ParseIP(hostname); ip != nil {
return []*net.IPNet{oneAddressCIDR(ip)}
}
ips, err := net.LookupIP(hostname)
if err != nil {
// Most likely the hostname is not resolvable.
return nil
}
nets := make([]*net.IPNet, len(ips))
for i := range ips {
nets[i] = oneAddressCIDR(ips[i])
}
return nets
}
// ipsForAllInterfaces returns a slice of IPs assigned to all the
// interfaces on the host.
func ipsForAllInterfaces() ([]*net.IPNet, error) {
ifaces, err := net.Interfaces()
if err != nil {
return nil, fmt.Errorf("failed to list interfaces: %v", err)
}
var nets []*net.IPNet
for _, iface := range ifaces {
ips, err := ipsForInterface(&iface)
if err != nil {
return nil, fmt.Errorf("failed to list addresses for %s: %v", iface.Name, err)
}
nets = append(nets, ips...)
}
return nets, nil
}
// ipsForInterface returns a slice of IPs assigned to the given interface.
func ipsForInterface(iface *net.Interface) ([]*net.IPNet, error) {
link, err := netlink.LinkByIndex(iface.Index)
if err != nil {
return nil, fmt.Errorf("failed to get link: %s", err)
}
addrs, err := netlink.AddrList(link, netlink.FAMILY_ALL)
if err != nil {
return nil, fmt.Errorf("failed to list addresses for %s: %v", iface.Name, err)
}
var ips []*net.IPNet
for _, a := range addrs {
if a.IPNet != nil {
ips = append(ips, a.IPNet)
}
}
return ips, nil
}
// interfacesForIP returns a slice of interfaces with the given IP.
func interfacesForIP(ip *net.IPNet) ([]net.Interface, error) {
ifaces, err := net.Interfaces()
if err != nil {
return nil, fmt.Errorf("failed to list interfaces: %v", err)
}
var interfaces []net.Interface
for _, iface := range ifaces {
ips, err := ipsForInterface(&iface)
if err != nil {
return nil, fmt.Errorf("failed to list addresses for %s: %v", iface.Name, err)
}
for i := range ips {
if ip.IP.Equal(ips[i].IP) {
interfaces = append(interfaces, iface)
break
}
}
}
if len(interfaces) == 0 {
return nil, fmt.Errorf("no interface has %s assigned", ip.String())
}
return interfaces, nil
}
// defaultInterface returns the interface for the default route of the host.
func defaultInterface() (*net.Interface, error) {
routes, err := netlink.RouteList(nil, netlink.FAMILY_ALL)
if err != nil {
return nil, err
}
for _, route := range routes {
if route.Dst == nil || route.Dst.String() == "0.0.0.0/0" || route.Dst.String() == "::/0" {
if route.LinkIndex <= 0 {
return nil, errors.New("failed to determine interface of route")
}
return net.InterfaceByIndex(route.LinkIndex)
}
}
return nil, errors.New("failed to find default route")
}


@@ -27,7 +27,7 @@ import (
func (t *Topology) Dot() (string, error) {
g := gographviz.NewGraph()
g.Name = "kilo"
if err := g.AddAttr("kilo", string(gographviz.Label), graphEscape(t.subnet.String())); err != nil {
if err := g.AddAttr("kilo", string(gographviz.Label), graphEscape((&net.IPNet{IP: t.wireGuardCIDR.IP.Mask(t.wireGuardCIDR.Mask), Mask: t.wireGuardCIDR.Mask}).String())); err != nil {
return "", fmt.Errorf("failed to add label to graph")
}
if err := g.AddAttr("kilo", string(gographviz.LabelLOC), "t"); err != nil {
@@ -70,7 +70,11 @@ func (t *Topology) Dot() (string, error) {
return "", fmt.Errorf("failed to add rank to node")
}
}
if err := g.Nodes.Lookup[graphEscape(s.hostnames[j])].Attrs.Add(string(gographviz.Label), nodeLabel(s.location, s.hostnames[j], s.cidrs[j], s.privateIPs[j], wg, endpoint)); err != nil {
var priv net.IP
if s.privateIPs != nil {
priv = s.privateIPs[j]
}
if err := g.Nodes.Lookup[graphEscape(s.hostnames[j])].Attrs.Add(string(gographviz.Label), nodeLabel(s.location, s.hostnames[j], s.cidrs[j], priv, wg, endpoint)); err != nil {
return "", fmt.Errorf("failed to add label to node")
}
}
@@ -155,7 +159,9 @@ func nodeLabel(location, name string, cidr *net.IPNet, priv, wgIP net.IP, endpoi
location,
name,
cidr.String(),
priv.String(),
}
if priv != nil {
label = append(label, priv.String())
}
if wgIP != nil {
label = append(label, wgIP.String())
@@ -163,9 +169,9 @@ func nodeLabel(location, name string, cidr *net.IPNet, priv, wgIP net.IP, endpoi
if endpoint != nil {
label = append(label, endpoint.String())
}
return graphEscape(strings.Join(label, "\n"))
return graphEscape(strings.Join(label, "\\n"))
}
func peerLabel(peer *Peer) string {
return graphEscape(fmt.Sprintf("%s\n%s\n", peer.Name, peer.Endpoint.String()))
return graphEscape(fmt.Sprintf("%s\\n%s\n", peer.Name, peer.Endpoint.String()))
}


@@ -15,150 +15,10 @@
package mesh
import (
"errors"
"fmt"
"net"
"sort"
"github.com/vishvananda/netlink"
)
// getIP returns a private and public IP address for the local node.
// It selects the private IP address in the following order:
// - private IP to which hostname resolves
// - private IP assigned to interface of default route
// - private IP assigned to local interface
// - public IP to which hostname resolves
// - public IP assigned to interface of default route
// - public IP assigned to local interface
// It selects the public IP address in the following order:
// - public IP to which hostname resolves
// - public IP assigned to interface of default route
// - public IP assigned to local interface
// - private IP to which hostname resolves
// - private IP assigned to interface of default route
// - private IP assigned to local interface
// - if no IP was found, return nil and an error.
func getIP(hostname string, ignoreIfaces ...int) (*net.IPNet, *net.IPNet, error) {
ignore := make(map[string]struct{})
for i := range ignoreIfaces {
if ignoreIfaces[i] == 0 {
// Only ignore valid interfaces.
continue
}
iface, err := net.InterfaceByIndex(ignoreIfaces[i])
if err != nil {
return nil, nil, fmt.Errorf("failed to find interface %d: %v", ignoreIfaces[i], err)
}
ips, err := ipsForInterface(iface)
if err != nil {
return nil, nil, err
}
for _, ip := range ips {
ignore[ip.String()] = struct{}{}
ignore[oneAddressCIDR(ip.IP).String()] = struct{}{}
}
}
var hostPriv, hostPub []*net.IPNet
{
// Check IPs to which hostname resolves first.
ips := ipsForHostname(hostname)
for _, ip := range ips {
ok, mask, err := assignedToInterface(ip)
if err != nil {
return nil, nil, fmt.Errorf("failed to search locally assigned addresses: %v", err)
}
if !ok {
continue
}
ip.Mask = mask
if isPublic(ip.IP) {
hostPub = append(hostPub, ip)
continue
}
hostPriv = append(hostPriv, ip)
}
sortIPs(hostPriv)
sortIPs(hostPub)
}
var defaultPriv, defaultPub []*net.IPNet
{
// Check IPs on interface for default route next.
iface, err := defaultInterface()
if err != nil {
return nil, nil, err
}
ips, err := ipsForInterface(iface)
if err != nil {
return nil, nil, err
}
for _, ip := range ips {
if isLocal(ip.IP) {
continue
}
if isPublic(ip.IP) {
defaultPub = append(defaultPub, ip)
continue
}
defaultPriv = append(defaultPriv, ip)
}
sortIPs(defaultPriv)
sortIPs(defaultPub)
}
var interfacePriv, interfacePub []*net.IPNet
{
// Finally look for IPs on all interfaces.
ips, err := ipsForAllInterfaces()
if err != nil {
return nil, nil, err
}
for _, ip := range ips {
if isLocal(ip.IP) {
continue
}
if isPublic(ip.IP) {
interfacePub = append(interfacePub, ip)
continue
}
interfacePriv = append(interfacePriv, ip)
}
sortIPs(interfacePriv)
sortIPs(interfacePub)
}
var priv, pub, tmpPriv, tmpPub []*net.IPNet
tmpPriv = append(tmpPriv, hostPriv...)
tmpPriv = append(tmpPriv, defaultPriv...)
tmpPriv = append(tmpPriv, interfacePriv...)
tmpPub = append(tmpPub, hostPub...)
tmpPub = append(tmpPub, defaultPub...)
tmpPub = append(tmpPub, interfacePub...)
for i := range tmpPriv {
if _, ok := ignore[tmpPriv[i].String()]; ok {
continue
}
priv = append(priv, tmpPriv[i])
}
for i := range tmpPub {
if _, ok := ignore[tmpPub[i].String()]; ok {
continue
}
pub = append(pub, tmpPub[i])
}
if len(priv) == 0 && len(pub) == 0 {
return nil, nil, errors.New("no valid IP was found")
}
if len(priv) == 0 {
priv = pub
}
if len(pub) == 0 {
pub = priv
}
return priv[0], pub[0], nil
}
// sortIPs sorts IPs so the result is stable.
// It will first sort IPs by type, to prefer selecting
// IPs of the same type, and then by value.
@@ -175,33 +35,6 @@ func sortIPs(ips []*net.IPNet) {
})
}
func assignedToInterface(ip *net.IPNet) (bool, net.IPMask, error) {
links, err := netlink.LinkList()
if err != nil {
return false, nil, fmt.Errorf("failed to list interfaces: %v", err)
}
// Sort the links for stability.
sort.Slice(links, func(i, j int) bool {
return links[i].Attrs().Name < links[j].Attrs().Name
})
for _, link := range links {
addrs, err := netlink.AddrList(link, netlink.FAMILY_ALL)
if err != nil {
return false, nil, fmt.Errorf("failed to list addresses for %s: %v", link.Attrs().Name, err)
}
// Sort the IPs for stability.
sort.Slice(addrs, func(i, j int) bool {
return addrs[i].String() < addrs[j].String()
})
for i := range addrs {
if ip.IP.Equal(addrs[i].IP) {
return true, addrs[i].Mask, nil
}
}
}
return false, nil, nil
}
func isLocal(ip net.IP) bool {
return ip.IsLoopback() || ip.IsLinkLocalMulticast() || ip.IsLinkLocalUnicast()
}
@@ -209,12 +42,12 @@ func isLocal(ip net.IP) bool {
func isPublic(ip net.IP) bool {
// Check RFC 1918 addresses.
if ip4 := ip.To4(); ip4 != nil {
switch true {
switch {
// Check for 10.0.0.0/8.
case ip4[0] == 10:
return false
// Check for 172.16.0.0/12.
case ip4[0] == 172 && ip4[1]&0xf0 == 0x01:
case ip4[0] == 172 && ip4[1]&0xf0 == 0x10:
return false
// Check for 192.168.0.0/16.
case ip4[0] == 192 && ip4[1] == 168:
@@ -225,7 +58,7 @@ func isPublic(ip net.IP) bool {
}
// Check RFC 4193 addresses.
if len(ip) == net.IPv6len {
switch true {
switch {
// Check for fd00::/8.
case ip[0] == 0xfd && ip[1] == 0x00:
return false
@@ -236,105 +69,6 @@ func isPublic(ip net.IP) bool {
return false
}
// ipsForHostname returns a slice of IPs to which the
// given hostname resolves.
func ipsForHostname(hostname string) []*net.IPNet {
if ip := net.ParseIP(hostname); ip != nil {
return []*net.IPNet{oneAddressCIDR(ip)}
}
ips, err := net.LookupIP(hostname)
if err != nil {
// Most likely the hostname is not resolvable.
return nil
}
nets := make([]*net.IPNet, len(ips))
for i := range ips {
nets[i] = oneAddressCIDR(ips[i])
}
return nets
}
// ipsForAllInterfaces returns a slice of IPs assigned to all the
// interfaces on the host.
func ipsForAllInterfaces() ([]*net.IPNet, error) {
ifaces, err := net.Interfaces()
if err != nil {
return nil, fmt.Errorf("failed to list interfaces: %v", err)
}
var nets []*net.IPNet
for _, iface := range ifaces {
ips, err := ipsForInterface(&iface)
if err != nil {
return nil, fmt.Errorf("failed to list addresses for %s: %v", iface.Name, err)
}
nets = append(nets, ips...)
}
return nets, nil
}
// ipsForInterface returns a slice of IPs assigned to the given interface.
func ipsForInterface(iface *net.Interface) ([]*net.IPNet, error) {
link, err := netlink.LinkByIndex(iface.Index)
if err != nil {
return nil, fmt.Errorf("failed to get link: %s", err)
}
addrs, err := netlink.AddrList(link, netlink.FAMILY_ALL)
if err != nil {
return nil, fmt.Errorf("failed to list addresses for %s: %v", iface.Name, err)
}
var ips []*net.IPNet
for _, a := range addrs {
if a.IPNet != nil {
ips = append(ips, a.IPNet)
}
}
return ips, nil
}
// interfacesForIP returns a slice of interfaces with the given IP.
func interfacesForIP(ip *net.IPNet) ([]net.Interface, error) {
ifaces, err := net.Interfaces()
if err != nil {
return nil, fmt.Errorf("failed to list interfaces: %v", err)
}
var interfaces []net.Interface
for _, iface := range ifaces {
ips, err := ipsForInterface(&iface)
if err != nil {
return nil, fmt.Errorf("failed to list addresses for %s: %v", iface.Name, err)
}
for i := range ips {
if ip.IP.Equal(ips[i].IP) {
interfaces = append(interfaces, iface)
break
}
}
}
if len(interfaces) == 0 {
return nil, fmt.Errorf("no interface has %s assigned", ip.String())
}
return interfaces, nil
}
// defaultInterface returns the interface for the default route of the host.
func defaultInterface() (*net.Interface, error) {
routes, err := netlink.RouteList(nil, netlink.FAMILY_ALL)
if err != nil {
return nil, err
}
for _, route := range routes {
if route.Dst == nil || route.Dst.String() == "0.0.0.0/0" || route.Dst.String() == "::/0" {
if route.LinkIndex <= 0 {
return nil, errors.New("failed to determine interface of route")
}
return net.InterfaceByIndex(route.LinkIndex)
}
}
return nil, errors.New("failed to find default route")
}
type allocator struct {
bits int
ones int


@@ -127,3 +127,46 @@ func TestSortIPs(t *testing.T) {
}
}
}
func TestIsPublic(t *testing.T) {
for _, tc := range []struct {
name string
ip net.IP
out bool
}{
{
name: "10/8",
ip: net.ParseIP("10.0.0.1"),
out: false,
},
{
name: "172.16/12",
ip: net.ParseIP("172.16.0.0"),
out: false,
},
{
name: "172.16/12 random",
ip: net.ParseIP("172.24.135.46"),
out: false,
},
{
name: "below 172.16/12",
ip: net.ParseIP("172.15.255.255"),
out: true,
},
{
name: "above 172.16/12",
ip: net.ParseIP("172.160.255.255"),
out: true,
},
{
name: "192.168/16",
ip: net.ParseIP("192.168.0.0"),
out: false,
},
} {
if isPublic(tc.ip) != tc.out {
t.Errorf("test case %q: expected %t, got %t", tc.name, tc.out, !tc.out)
}
}
}


@@ -12,6 +12,8 @@
// See the License for the specific language governing permissions and
// limitations under the License.
// +build linux
package mesh
import (
@@ -35,142 +37,15 @@ import (
"github.com/squat/kilo/pkg/wireguard"
)
const resyncPeriod = 30 * time.Second
const (
// KiloPath is the directory where Kilo stores its configuration.
KiloPath = "/var/lib/kilo"
// PrivateKeyPath is the filepath where the WireGuard private key is stored.
PrivateKeyPath = KiloPath + "/key"
// ConfPath is the filepath where the WireGuard configuration is stored.
ConfPath = KiloPath + "/conf"
// DefaultKiloInterface is the default interface created and used by Kilo.
DefaultKiloInterface = "kilo0"
// DefaultKiloPort is the default UDP port Kilo uses.
DefaultKiloPort = 51820
// DefaultCNIPath is the default path to the CNI config file.
DefaultCNIPath = "/etc/cni/net.d/10-kilo.conflist"
// kiloPath is the directory where Kilo stores its configuration.
kiloPath = "/var/lib/kilo"
// privateKeyPath is the filepath where the WireGuard private key is stored.
privateKeyPath = kiloPath + "/key"
// confPath is the filepath where the WireGuard configuration is stored.
confPath = kiloPath + "/conf"
)
// DefaultKiloSubnet is the default CIDR for Kilo.
var DefaultKiloSubnet = &net.IPNet{IP: []byte{10, 4, 0, 0}, Mask: []byte{255, 255, 0, 0}}
// Granularity represents the abstraction level at which the network
// should be meshed.
type Granularity string
const (
// LogicalGranularity indicates that the network should create
// a mesh between logical locations, e.g. data-centers, but not between
// all nodes within a single location.
LogicalGranularity Granularity = "location"
// FullGranularity indicates that the network should create
// a mesh between every node.
FullGranularity Granularity = "full"
)
// Node represents a node in the network.
type Node struct {
Endpoint *wireguard.Endpoint
Key []byte
InternalIP *net.IPNet
// LastSeen is a Unix time for the last time
// the node confirmed it was live.
LastSeen int64
// Leader is a suggestion to Kilo that
// the node wants to lead its segment.
Leader bool
Location string
Name string
PersistentKeepalive int
Subnet *net.IPNet
WireGuardIP *net.IPNet
}
// Ready indicates whether or not the node is ready.
func (n *Node) Ready() bool {
// Nodes that are not leaders will not have WireGuardIPs, so it is not required.
return n != nil && n.Endpoint != nil && !(n.Endpoint.IP == nil && n.Endpoint.DNS == "") && n.Endpoint.Port != 0 && n.Key != nil && n.InternalIP != nil && n.Subnet != nil && time.Now().Unix()-n.LastSeen < int64(resyncPeriod)*2/int64(time.Second)
}
// Peer represents a peer in the network.
type Peer struct {
wireguard.Peer
Name string
}
// Ready indicates whether or not the peer is ready.
// Peers can have empty endpoints because they may not have an
// IP, for example if they are behind a NAT, and thus
// will not declare their endpoint and instead allow it to be
// discovered.
func (p *Peer) Ready() bool {
return p != nil && p.AllowedIPs != nil && len(p.AllowedIPs) != 0 && p.PublicKey != nil
}
// EventType describes what kind of an action an event represents.
type EventType string
const (
// AddEvent represents an action where an item was added.
AddEvent EventType = "add"
// DeleteEvent represents an action where an item was removed.
DeleteEvent EventType = "delete"
// UpdateEvent represents an action where an item was updated.
UpdateEvent EventType = "update"
)
// NodeEvent represents an event concerning a node in the cluster.
type NodeEvent struct {
Type EventType
Node *Node
Old *Node
}
// PeerEvent represents an event concerning a peer in the cluster.
type PeerEvent struct {
Type EventType
Peer *Peer
Old *Peer
}
// Backend can create clients for all of the
// primitive types that Kilo deals with, namely:
// * nodes; and
// * peers.
type Backend interface {
Nodes() NodeBackend
Peers() PeerBackend
}
// NodeBackend can get nodes by name, init itself,
// list the nodes that should be meshed,
// set Kilo properties for a node,
// clean up any changes applied to the backend,
// and watch for changes to nodes.
type NodeBackend interface {
CleanUp(string) error
Get(string) (*Node, error)
Init(<-chan struct{}) error
List() ([]*Node, error)
Set(string, *Node) error
Watch() <-chan *NodeEvent
}
// PeerBackend can get peers by name, init itself,
// list the peers that should be in the mesh,
// set fields for a peer,
// clean up any changes applied to the backend,
// and watch for changes to peers.
type PeerBackend interface {
CleanUp(string) error
Get(string) (*Peer, error)
Init(<-chan struct{}) error
List() ([]*Peer, error)
Set(string, *Peer) error
Watch() <-chan *PeerEvent
}
// Mesh is able to create Kilo network meshes.
type Mesh struct {
Backend
@@ -190,13 +65,14 @@ type Mesh struct {
priv []byte
privIface int
pub []byte
resyncPeriod time.Duration
stop chan struct{}
subnet *net.IPNet
table *route.Table
wireGuardIP *net.IPNet
// nodes and peers are mutable fields in the struct
// and needs to be guarded.
// and need to be guarded.
nodes map[string]*Node
peers map[string]*Peer
mu sync.Mutex
@@ -210,11 +86,11 @@ type Mesh struct {
}
// New returns a new Mesh instance.
func New(backend Backend, enc encapsulation.Encapsulator, granularity Granularity, hostname string, port uint32, subnet *net.IPNet, local, cni bool, cniPath, iface string, cleanUpIface bool, logger log.Logger) (*Mesh, error) {
if err := os.MkdirAll(KiloPath, 0700); err != nil {
func New(backend Backend, enc encapsulation.Encapsulator, granularity Granularity, hostname string, port uint32, subnet *net.IPNet, local, cni bool, cniPath, iface string, cleanUpIface bool, createIface bool, resyncPeriod time.Duration, logger log.Logger) (*Mesh, error) {
if err := os.MkdirAll(kiloPath, 0700); err != nil {
return nil, fmt.Errorf("failed to create directory to store configuration: %v", err)
}
private, err := ioutil.ReadFile(PrivateKeyPath)
private, err := ioutil.ReadFile(privateKeyPath)
private = bytes.Trim(private, "\n")
if err != nil {
level.Warn(logger).Log("msg", "no private key found on disk; generating one now")
@@ -226,34 +102,49 @@ func New(backend Backend, enc encapsulation.Encapsulator, granularity Granularit
if err != nil {
return nil, err
}
if err := ioutil.WriteFile(PrivateKeyPath, private, 0600); err != nil {
if err := ioutil.WriteFile(privateKeyPath, private, 0600); err != nil {
return nil, fmt.Errorf("failed to write private key to disk: %v", err)
}
cniIndex, err := cniDeviceIndex()
if err != nil {
return nil, fmt.Errorf("failed to query netlink for CNI device: %v", err)
}
privateIP, publicIP, err := getIP(hostname, enc.Index(), cniIndex)
var kiloIface int
if createIface {
kiloIface, _, err = wireguard.New(iface)
if err != nil {
return nil, fmt.Errorf("failed to create WireGuard interface: %v", err)
}
} else {
link, err := netlink.LinkByName(iface)
if err != nil {
return nil, fmt.Errorf("failed to get interface index: %v", err)
}
kiloIface = link.Attrs().Index
}
privateIP, publicIP, err := getIP(hostname, kiloIface, enc.Index(), cniIndex)
if err != nil {
return nil, fmt.Errorf("failed to find public IP: %v", err)
}
ifaces, err := interfacesForIP(privateIP)
if err != nil {
return nil, fmt.Errorf("failed to find interface for private IP: %v", err)
}
privIface := ifaces[0].Index
kiloIface, _, err := wireguard.New(iface)
if err != nil {
return nil, fmt.Errorf("failed to create WireGuard interface: %v", err)
}
if enc.Strategy() != encapsulation.Never {
if err := enc.Init(privIface); err != nil {
return nil, fmt.Errorf("failed to initialize encapsulator: %v", err)
var privIface int
if privateIP != nil {
ifaces, err := interfacesForIP(privateIP)
if err != nil {
return nil, fmt.Errorf("failed to find interface for private IP: %v", err)
}
privIface = ifaces[0].Index
if enc.Strategy() != encapsulation.Never {
if err := enc.Init(privIface); err != nil {
return nil, fmt.Errorf("failed to initialize encapsulator: %v", err)
}
}
level.Debug(logger).Log("msg", fmt.Sprintf("using %s as the private IP address", privateIP.String()))
} else {
enc = encapsulation.Noop(enc.Strategy())
level.Debug(logger).Log("msg", "running without a private IP address")
}
level.Debug(logger).Log("msg", fmt.Sprintf("using %s as the private IP address", privateIP.String()))
level.Debug(logger).Log("msg", fmt.Sprintf("using %s as the public IP address", publicIP.String()))
ipTables, err := iptables.New()
ipTables, err := iptables.New(iptables.WithLogger(log.With(logger, "component", "iptables")), iptables.WithResyncPeriod(resyncPeriod))
if err != nil {
return nil, fmt.Errorf("failed to IP tables controller: %v", err)
}
@@ -275,6 +166,7 @@ func New(backend Backend, enc encapsulation.Encapsulator, granularity Granularit
priv: private,
privIface: privIface,
pub: public,
resyncPeriod: resyncPeriod,
local: local,
stop: make(chan struct{}),
subnet: subnet,
@@ -344,7 +236,8 @@ func (m *Mesh) Run() error {
}
}()
defer m.cleanUp()
t := time.NewTimer(resyncPeriod)
resync := time.NewTimer(m.resyncPeriod)
checkIn := time.NewTimer(checkInPeriod)
nw := m.Nodes().Watch()
pw := m.Peers().Watch()
var ne *NodeEvent
@@ -355,13 +248,15 @@ func (m *Mesh) Run() error {
m.syncNodes(ne)
case pe = <-pw:
m.syncPeers(pe)
case <-t.C:
case <-checkIn.C:
m.checkIn()
checkIn.Reset(checkInPeriod)
case <-resync.C:
if m.cni {
m.updateCNIConfig()
}
m.applyTopology()
t.Reset(resyncPeriod)
resync.Reset(m.resyncPeriod)
case <-m.stop:
return nil
}
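// runMesh is a hypothetical kg-style caller (not Kilo's actual main) showing
// the new New signature with the createIface and resyncPeriod arguments and
// the Run loop above; backend, enc, hostname, and logger are assumed to be
// constructed elsewhere, and the "time" and go-kit "log" packages are assumed.
func runMesh(backend mesh.Backend, enc encapsulation.Encapsulator, hostname string, logger log.Logger) error {
	m, err := mesh.New(
		backend,
		enc,
		mesh.LogicalGranularity,
		hostname,
		mesh.DefaultKiloPort,
		mesh.DefaultKiloSubnet,
		false, // local: do not manage routes within a location
		true,  // cni: manage the CNI configuration
		mesh.DefaultCNIPath,
		mesh.DefaultKiloInterface,
		true,           // cleanUpIface: remove the WireGuard interface on shutdown
		true,           // createIface: create the interface rather than reuse an existing link
		30*time.Second, // resyncPeriod
		logger,
	)
	if err != nil {
		return err
	}
	return m.Run()
}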
@@ -476,7 +371,7 @@ func (m *Mesh) handleLocal(n *Node) {
if n.Endpoint == nil || (n.Endpoint.DNS == "" && n.Endpoint.IP == nil) {
n.Endpoint = &wireguard.Endpoint{DNSOrIP: wireguard.DNSOrIP{IP: m.externalIP.IP}, Port: m.port}
}
if n.InternalIP == nil {
if n.InternalIP == nil && !n.NoInternalIP {
n.InternalIP = m.internalIP
}
// Compare the given node to the calculated local node.
@@ -485,6 +380,7 @@ func (m *Mesh) handleLocal(n *Node) {
local := &Node{
Endpoint: n.Endpoint,
Key: m.pub,
NoInternalIP: n.NoInternalIP,
InternalIP: n.InternalIP,
LastSeen: time.Now().Unix(),
Leader: n.Leader,
@@ -581,7 +477,11 @@ func (m *Mesh) applyTopology() {
return
}
// Update the node's WireGuard IP.
m.wireGuardIP = t.wireGuardCIDR
if t.leader {
m.wireGuardIP = t.wireGuardCIDR
} else {
m.wireGuardIP = nil
}
conf := t.Conf()
buf, err := conf.Bytes()
if err != nil {
@@ -589,21 +489,21 @@ func (m *Mesh) applyTopology() {
m.errorCounter.WithLabelValues("apply").Inc()
return
}
if err := ioutil.WriteFile(ConfPath, buf, 0600); err != nil {
if err := ioutil.WriteFile(confPath, buf, 0600); err != nil {
level.Error(m.logger).Log("error", err)
m.errorCounter.WithLabelValues("apply").Inc()
return
}
var ipRules []iptables.Rule
if m.cni {
ipRules = append(ipRules, t.Rules(m.cni)...)
}
ipRules := t.Rules(m.cni)
// If we are handling local routes, ensure the local
// tunnel has an IP address and IPIP traffic is allowed.
if m.enc.Strategy() != encapsulation.Never && m.local {
var cidrs []*net.IPNet
for _, s := range t.segments {
if s.location == nodes[m.hostname].Location {
// If the location prefix is not logicalLocation, but nodeLocation,
// we don't need to set any extra rules for encapsulation anyway,
// because traffic will go over WireGuard.
if s.location == logicalLocationPrefix+nodes[m.hostname].Location {
for i := range s.privateIPs {
cidrs = append(cidrs, oneAddressCIDR(s.privateIPs[i]))
}
@@ -636,7 +536,7 @@ func (m *Mesh) applyTopology() {
equal := conf.Equal(oldConf)
if !equal {
level.Info(m.logger).Log("msg", "WireGuard configurations are different")
if err := wireguard.SetConf(link.Attrs().Name, ConfPath); err != nil {
if err := wireguard.SetConf(link.Attrs().Name, confPath); err != nil {
level.Error(m.logger).Log("error", err)
m.errorCounter.WithLabelValues("apply").Inc()
return
@@ -691,7 +591,7 @@ func (m *Mesh) cleanUp() {
level.Error(m.logger).Log("error", fmt.Sprintf("failed to clean up routes: %v", err))
m.errorCounter.WithLabelValues("cleanUp").Inc()
}
if err := os.Remove(ConfPath); err != nil {
if err := os.Remove(confPath); err != nil {
level.Error(m.logger).Log("error", fmt.Sprintf("failed to delete configuration file: %v", err))
m.errorCounter.WithLabelValues("cleanUp").Inc()
}
@@ -770,7 +670,7 @@ func isSelf(hostname string, node *Node) bool {
}
func nodesAreEqual(a, b *Node) bool {
if !(a != nil) == (b != nil) {
if (a != nil) != (b != nil) {
return false
}
if a == b {

pkg/mesh/routes.go Normal file

@@ -0,0 +1,256 @@
// Copyright 2019 the Kilo authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// +build linux
package mesh
import (
"net"
"github.com/vishvananda/netlink"
"golang.org/x/sys/unix"
"github.com/squat/kilo/pkg/encapsulation"
"github.com/squat/kilo/pkg/iptables"
)
const kiloTableIndex = 1107
// Routes generates a slice of routes for a given Topology.
func (t *Topology) Routes(kiloIfaceName string, kiloIface, privIface, tunlIface int, local bool, enc encapsulation.Encapsulator) ([]*netlink.Route, []*netlink.Rule) {
var routes []*netlink.Route
var rules []*netlink.Rule
if !t.leader {
// Find the GW for this segment.
// This will be an IP of the leader.
// In an IPIP encapsulated mesh it is the leader's private IP.
var gw net.IP
for _, segment := range t.segments {
if segment.location == t.location {
gw = enc.Gw(segment.endpoint.IP, segment.privateIPs[segment.leader], segment.cidrs[segment.leader])
break
}
}
for _, segment := range t.segments {
// First, add a route to the WireGuard IP of the segment.
routes = append(routes, encapsulateRoute(&netlink.Route{
Dst: oneAddressCIDR(segment.wireGuardIP),
Flags: int(netlink.FLAG_ONLINK),
Gw: gw,
LinkIndex: privIface,
Protocol: unix.RTPROT_STATIC,
}, enc.Strategy(), t.privateIP, tunlIface))
// Add routes for the current segment if local is true.
if segment.location == t.location {
if local {
for i := range segment.cidrs {
// Don't add routes for the local node.
if segment.privateIPs[i].Equal(t.privateIP.IP) {
continue
}
routes = append(routes, encapsulateRoute(&netlink.Route{
Dst: segment.cidrs[i],
Flags: int(netlink.FLAG_ONLINK),
Gw: segment.privateIPs[i],
LinkIndex: privIface,
Protocol: unix.RTPROT_STATIC,
}, enc.Strategy(), t.privateIP, tunlIface))
// Encapsulate packets from the host's Pod subnet headed
// to private IPs.
if enc.Strategy() == encapsulation.Always || (enc.Strategy() == encapsulation.CrossSubnet && !t.privateIP.Contains(segment.privateIPs[i])) {
routes = append(routes, &netlink.Route{
Dst: oneAddressCIDR(segment.privateIPs[i]),
Flags: int(netlink.FLAG_ONLINK),
Gw: segment.privateIPs[i],
LinkIndex: tunlIface,
Protocol: unix.RTPROT_STATIC,
Table: kiloTableIndex,
})
rules = append(rules, defaultRule(&netlink.Rule{
Src: t.subnet,
Dst: oneAddressCIDR(segment.privateIPs[i]),
Table: kiloTableIndex,
}))
}
}
}
continue
}
for i := range segment.cidrs {
// Add routes to the Pod CIDRs of nodes in other segments.
routes = append(routes, encapsulateRoute(&netlink.Route{
Dst: segment.cidrs[i],
Flags: int(netlink.FLAG_ONLINK),
Gw: gw,
LinkIndex: privIface,
Protocol: unix.RTPROT_STATIC,
}, enc.Strategy(), t.privateIP, tunlIface))
}
for i := range segment.privateIPs {
// Add routes to the private IPs of nodes in other segments.
routes = append(routes, encapsulateRoute(&netlink.Route{
Dst: oneAddressCIDR(segment.privateIPs[i]),
Flags: int(netlink.FLAG_ONLINK),
Gw: gw,
LinkIndex: privIface,
Protocol: unix.RTPROT_STATIC,
}, enc.Strategy(), t.privateIP, tunlIface))
}
}
// Add routes for the allowed IPs of peers.
for _, peer := range t.peers {
for i := range peer.AllowedIPs {
routes = append(routes, encapsulateRoute(&netlink.Route{
Dst: peer.AllowedIPs[i],
Flags: int(netlink.FLAG_ONLINK),
Gw: gw,
LinkIndex: privIface,
Protocol: unix.RTPROT_STATIC,
}, enc.Strategy(), t.privateIP, tunlIface))
}
}
return routes, rules
}
for _, segment := range t.segments {
// Add routes for the current segment if local is true.
if segment.location == t.location {
// If the local node does not have a private IP address,
// then skip adding routes, because the node is in its own location.
if local && t.privateIP != nil {
for i := range segment.cidrs {
// Don't add routes for the local node.
if segment.privateIPs[i].Equal(t.privateIP.IP) {
continue
}
routes = append(routes, encapsulateRoute(&netlink.Route{
Dst: segment.cidrs[i],
Flags: int(netlink.FLAG_ONLINK),
Gw: segment.privateIPs[i],
LinkIndex: privIface,
Protocol: unix.RTPROT_STATIC,
}, enc.Strategy(), t.privateIP, tunlIface))
// Encapsulate packets from the host's Pod subnet headed
// to private IPs.
if enc.Strategy() == encapsulation.Always || (enc.Strategy() == encapsulation.CrossSubnet && !t.privateIP.Contains(segment.privateIPs[i])) {
routes = append(routes, &netlink.Route{
Dst: oneAddressCIDR(segment.privateIPs[i]),
Flags: int(netlink.FLAG_ONLINK),
Gw: segment.privateIPs[i],
LinkIndex: tunlIface,
Protocol: unix.RTPROT_STATIC,
Table: kiloTableIndex,
})
rules = append(rules, defaultRule(&netlink.Rule{
Src: t.subnet,
Dst: oneAddressCIDR(segment.privateIPs[i]),
Table: kiloTableIndex,
}))
// Also encapsulate packets from the Kilo interface
// headed to private IPs.
rules = append(rules, defaultRule(&netlink.Rule{
Dst: oneAddressCIDR(segment.privateIPs[i]),
Table: kiloTableIndex,
IifName: kiloIfaceName,
}))
}
}
}
// Continuing here prevents leaders from adding routes via WireGuard to
// nodes in their own location.
continue
}
for i := range segment.cidrs {
// Add routes to the Pod CIDRs of nodes in other segments.
routes = append(routes, &netlink.Route{
Dst: segment.cidrs[i],
Flags: int(netlink.FLAG_ONLINK),
Gw: segment.wireGuardIP,
LinkIndex: kiloIface,
Protocol: unix.RTPROT_STATIC,
})
// Don't add routes through Kilo if the private IP
// equals the external IP. This means that the node
// is only accessible through an external IP and we
// cannot encapsulate traffic to an IP through the IP.
if segment.privateIPs == nil || segment.privateIPs[i].Equal(segment.endpoint.IP) {
continue
}
// Add routes to the private IPs of nodes in other segments.
// Number of CIDRs and private IPs always match so
// we can reuse the loop.
routes = append(routes, &netlink.Route{
Dst: oneAddressCIDR(segment.privateIPs[i]),
Flags: int(netlink.FLAG_ONLINK),
Gw: segment.wireGuardIP,
LinkIndex: kiloIface,
Protocol: unix.RTPROT_STATIC,
})
}
}
// Add routes for the allowed IPs of peers.
for _, peer := range t.peers {
for i := range peer.AllowedIPs {
routes = append(routes, &netlink.Route{
Dst: peer.AllowedIPs[i],
LinkIndex: kiloIface,
Protocol: unix.RTPROT_STATIC,
})
}
}
return routes, rules
}
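// applyRoutesAndRules is a minimal, hypothetical applier (not Kilo's
// route.Table controller) showing how the routes and rules returned by Routes
// map onto netlink calls; it assumes the "fmt" package in addition to the
// imports above.
func applyRoutesAndRules(routes []*netlink.Route, rules []*netlink.Rule) error {
	for _, r := range routes {
		// RouteReplace creates the route or updates it in place if it exists.
		if err := netlink.RouteReplace(r); err != nil {
			return fmt.Errorf("failed to replace route: %v", err)
		}
	}
	for _, r := range rules {
		if err := netlink.RuleAdd(r); err != nil {
			return fmt.Errorf("failed to add rule: %v", err)
		}
	}
	return nil
}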
func encapsulateRoute(route *netlink.Route, encapsulate encapsulation.Strategy, subnet *net.IPNet, tunlIface int) *netlink.Route {
if encapsulate == encapsulation.Always || (encapsulate == encapsulation.CrossSubnet && !subnet.Contains(route.Gw)) {
route.LinkIndex = tunlIface
}
return route
}
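// crossSubnetExample is a hypothetical illustration (not part of Kilo) of the
// decision made by encapsulateRoute: with the CrossSubnet strategy, a gateway
// outside the node's own private subnet is reached via the tunnel device,
// while one inside it keeps the plain private interface.
func crossSubnetExample(privIface, tunlIface int) *netlink.Route {
	_, nodeSubnet, _ := net.ParseCIDR("10.0.1.0/24")
	route := &netlink.Route{Gw: net.ParseIP("10.0.2.7"), LinkIndex: privIface}
	// 10.0.2.7 is outside 10.0.1.0/24, so the route is moved onto tunlIface.
	return encapsulateRoute(route, encapsulation.CrossSubnet, nodeSubnet, tunlIface)
}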
// Rules returns the iptables rules required by the local node.
func (t *Topology) Rules(cni bool) []iptables.Rule {
var rules []iptables.Rule
rules = append(rules, iptables.NewIPv4Chain("nat", "KILO-NAT"))
rules = append(rules, iptables.NewIPv6Chain("nat", "KILO-NAT"))
if cni {
rules = append(rules, iptables.NewRule(iptables.GetProtocol(len(t.subnet.IP)), "nat", "POSTROUTING", "-s", t.subnet.String(), "-m", "comment", "--comment", "Kilo: jump to KILO-NAT chain", "-j", "KILO-NAT"))
}
for _, s := range t.segments {
rules = append(rules, iptables.NewRule(iptables.GetProtocol(len(s.wireGuardIP)), "nat", "KILO-NAT", "-d", oneAddressCIDR(s.wireGuardIP).String(), "-m", "comment", "--comment", "Kilo: do not NAT packets destined for WireGuard IPs", "-j", "RETURN"))
for _, aip := range s.allowedIPs {
rules = append(rules, iptables.NewRule(iptables.GetProtocol(len(aip.IP)), "nat", "KILO-NAT", "-d", aip.String(), "-m", "comment", "--comment", "Kilo: do not NAT packets destined for known IPs", "-j", "RETURN"))
}
}
for _, p := range t.peers {
for _, aip := range p.AllowedIPs {
rules = append(rules,
iptables.NewRule(iptables.GetProtocol(len(aip.IP)), "nat", "POSTROUTING", "-s", aip.String(), "-m", "comment", "--comment", "Kilo: jump to NAT chain", "-j", "KILO-NAT"),
iptables.NewRule(iptables.GetProtocol(len(aip.IP)), "nat", "KILO-NAT", "-d", aip.String(), "-m", "comment", "--comment", "Kilo: do not NAT packets destined for peers", "-j", "RETURN"),
)
}
}
rules = append(rules, iptables.NewIPv4Rule("nat", "KILO-NAT", "-m", "comment", "--comment", "Kilo: NAT remaining packets", "-j", "MASQUERADE"))
rules = append(rules, iptables.NewIPv6Rule("nat", "KILO-NAT", "-m", "comment", "--comment", "Kilo: NAT remaining packets", "-j", "MASQUERADE"))
return rules
}
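// buildPodSubnetJumpRule is a stand-alone, hypothetical illustration of the
// POSTROUTING jump rule that Rules emits for the node's Pod subnet when CNI
// management is enabled; the subnet value is made up for the example and the
// constructors come from the pkg/iptables helpers used above.
func buildPodSubnetJumpRule() iptables.Rule {
	_, podSubnet, _ := net.ParseCIDR("10.2.0.0/24")
	return iptables.NewRule(iptables.GetProtocol(len(podSubnet.IP)), "nat", "POSTROUTING", "-s", podSubnet.String(), "-m", "comment", "--comment", "Kilo: jump to KILO-NAT chain", "-j", "KILO-NAT")
}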
func defaultRule(rule *netlink.Rule) *netlink.Rule {
base := netlink.NewRule()
base.Src = rule.Src
base.Dst = rule.Dst
base.IifName = rule.IifName
base.Table = rule.Table
return base
}

pkg/mesh/routes_test.go Normal file
File diff suppressed because it is too large


@@ -19,14 +19,13 @@ import (
"net"
"sort"
"github.com/squat/kilo/pkg/encapsulation"
"github.com/squat/kilo/pkg/iptables"
"github.com/squat/kilo/pkg/wireguard"
"github.com/vishvananda/netlink"
"golang.org/x/sys/unix"
)
const kiloTableIndex = 1107
const (
logicalLocationPrefix = "location:"
nodeLocationPrefix = "node:"
)
// Topology represents the logical structure of the overlay network.
type Topology struct {
@@ -51,8 +50,10 @@ type Topology struct {
// subnet is the Pod subnet of the local node.
subnet *net.IPNet
// wireGuardCIDR is the allocated CIDR of the WireGuard
// interface of the local node. If the local node is not
// the leader, then it is nil.
// interface of the local node within the Kilo subnet.
// If the local node is not the leader of a location, then
// the IP is the 0th address in the subnet, i.e. the CIDR
// is equal to the Kilo subnet.
wireGuardCIDR *net.IPNet
}
@@ -83,21 +84,29 @@ func NewTopology(nodes map[string]*Node, peers map[string]*Peer, granularity Gra
var location string
switch granularity {
case LogicalGranularity:
location = node.Location
location = logicalLocationPrefix + node.Location
// Put node in a different location, if no private
// IP was found.
if node.InternalIP == nil {
location = nodeLocationPrefix + node.Name
}
case FullGranularity:
location = node.Name
location = nodeLocationPrefix + node.Name
}
topoMap[location] = append(topoMap[location], node)
}
var localLocation string
switch granularity {
case LogicalGranularity:
localLocation = nodes[hostname].Location
localLocation = logicalLocationPrefix + nodes[hostname].Location
if nodes[hostname].InternalIP == nil {
localLocation = nodeLocationPrefix + hostname
}
case FullGranularity:
localLocation = hostname
localLocation = nodeLocationPrefix + hostname
}
t := Topology{key: key, port: port, hostname: hostname, location: localLocation, persistentKeepalive: persistentKeepalive, privateIP: nodes[hostname].InternalIP, subnet: nodes[hostname].Subnet}
t := Topology{key: key, port: port, hostname: hostname, location: localLocation, persistentKeepalive: persistentKeepalive, privateIP: nodes[hostname].InternalIP, subnet: nodes[hostname].Subnet, wireGuardCIDR: subnet}
for location := range topoMap {
// Sort the location so the result is stable.
sort.Slice(topoMap[location], func(i, j int) bool {
@@ -116,10 +125,13 @@ func NewTopology(nodes map[string]*Node, peers map[string]*Peer, granularity Gra
// - the node's allocated subnet
// - the node's WireGuard IP
// - the node's internal IP
allowedIPs = append(allowedIPs, node.Subnet, oneAddressCIDR(node.InternalIP.IP))
allowedIPs = append(allowedIPs, node.Subnet)
if node.InternalIP != nil {
allowedIPs = append(allowedIPs, oneAddressCIDR(node.InternalIP.IP))
privateIPs = append(privateIPs, node.InternalIP.IP)
}
cidrs = append(cidrs, node.Subnet)
hostnames = append(hostnames, node.Name)
privateIPs = append(privateIPs, node.InternalIP.IP)
}
t.segments = append(t.segments, &segment{
allowedIPs: allowedIPs,
@@ -164,193 +176,6 @@ func NewTopology(nodes map[string]*Node, peers map[string]*Peer, granularity Gra
return &t, nil
}
// Routes generates a slice of routes for a given Topology.
func (t *Topology) Routes(kiloIfaceName string, kiloIface, privIface, tunlIface int, local bool, enc encapsulation.Encapsulator) ([]*netlink.Route, []*netlink.Rule) {
var routes []*netlink.Route
var rules []*netlink.Rule
if !t.leader {
// Find the GW for this segment.
// This will be an IP of the leader.
// In an IPIP encapsulated mesh it is the leader's private IP.
var gw net.IP
for _, segment := range t.segments {
if segment.location == t.location {
gw = enc.Gw(segment.endpoint.IP, segment.privateIPs[segment.leader], segment.cidrs[segment.leader])
break
}
}
for _, segment := range t.segments {
// First, add a route to the WireGuard IP of the segment.
routes = append(routes, encapsulateRoute(&netlink.Route{
Dst: oneAddressCIDR(segment.wireGuardIP),
Flags: int(netlink.FLAG_ONLINK),
Gw: gw,
LinkIndex: privIface,
Protocol: unix.RTPROT_STATIC,
}, enc.Strategy(), t.privateIP, tunlIface))
// Add routes for the current segment if local is true.
if segment.location == t.location {
if local {
for i := range segment.cidrs {
// Don't add routes for the local node.
if segment.privateIPs[i].Equal(t.privateIP.IP) {
continue
}
routes = append(routes, encapsulateRoute(&netlink.Route{
Dst: segment.cidrs[i],
Flags: int(netlink.FLAG_ONLINK),
Gw: segment.privateIPs[i],
LinkIndex: privIface,
Protocol: unix.RTPROT_STATIC,
}, enc.Strategy(), t.privateIP, tunlIface))
// Encapsulate packets from the host's Pod subnet headed
// to private IPs.
if enc.Strategy() == encapsulation.Always || (enc.Strategy() == encapsulation.CrossSubnet && !t.privateIP.Contains(segment.privateIPs[i])) {
routes = append(routes, &netlink.Route{
Dst: oneAddressCIDR(segment.privateIPs[i]),
Flags: int(netlink.FLAG_ONLINK),
Gw: segment.privateIPs[i],
LinkIndex: tunlIface,
Protocol: unix.RTPROT_STATIC,
Table: kiloTableIndex,
})
rules = append(rules, defaultRule(&netlink.Rule{
Src: t.subnet,
Dst: oneAddressCIDR(segment.privateIPs[i]),
Table: kiloTableIndex,
}))
}
}
}
continue
}
for i := range segment.cidrs {
// Add routes to the Pod CIDRs of nodes in other segments.
routes = append(routes, encapsulateRoute(&netlink.Route{
Dst: segment.cidrs[i],
Flags: int(netlink.FLAG_ONLINK),
Gw: gw,
LinkIndex: privIface,
Protocol: unix.RTPROT_STATIC,
}, enc.Strategy(), t.privateIP, tunlIface))
// Add routes to the private IPs of nodes in other segments.
// Number of CIDRs and private IPs always match so
// we can reuse the loop.
routes = append(routes, encapsulateRoute(&netlink.Route{
Dst: oneAddressCIDR(segment.privateIPs[i]),
Flags: int(netlink.FLAG_ONLINK),
Gw: gw,
LinkIndex: privIface,
Protocol: unix.RTPROT_STATIC,
}, enc.Strategy(), t.privateIP, tunlIface))
}
}
// Add routes for the allowed IPs of peers.
for _, peer := range t.peers {
for i := range peer.AllowedIPs {
routes = append(routes, encapsulateRoute(&netlink.Route{
Dst: peer.AllowedIPs[i],
Flags: int(netlink.FLAG_ONLINK),
Gw: gw,
LinkIndex: privIface,
Protocol: unix.RTPROT_STATIC,
}, enc.Strategy(), t.privateIP, tunlIface))
}
}
return routes, rules
}
for _, segment := range t.segments {
// Add routes for the current segment if local is true.
if segment.location == t.location {
if local {
for i := range segment.cidrs {
// Don't add routes for the local node.
if segment.privateIPs[i].Equal(t.privateIP.IP) {
continue
}
routes = append(routes, encapsulateRoute(&netlink.Route{
Dst: segment.cidrs[i],
Flags: int(netlink.FLAG_ONLINK),
Gw: segment.privateIPs[i],
LinkIndex: privIface,
Protocol: unix.RTPROT_STATIC,
}, enc.Strategy(), t.privateIP, tunlIface))
// Encapsulate packets from the host's Pod subnet headed
// to private IPs.
if enc.Strategy() == encapsulation.Always || (enc.Strategy() == encapsulation.CrossSubnet && !t.privateIP.Contains(segment.privateIPs[i])) {
routes = append(routes, &netlink.Route{
Dst: oneAddressCIDR(segment.privateIPs[i]),
Flags: int(netlink.FLAG_ONLINK),
Gw: segment.privateIPs[i],
LinkIndex: tunlIface,
Protocol: unix.RTPROT_STATIC,
Table: kiloTableIndex,
})
rules = append(rules, defaultRule(&netlink.Rule{
Src: t.subnet,
Dst: oneAddressCIDR(segment.privateIPs[i]),
Table: kiloTableIndex,
}))
// Also encapsulate packets from the Kilo interface
// headed to private IPs.
rules = append(rules, defaultRule(&netlink.Rule{
Dst: oneAddressCIDR(segment.privateIPs[i]),
Table: kiloTableIndex,
IifName: kiloIfaceName,
}))
}
}
}
continue
}
for i := range segment.cidrs {
// Add routes to the Pod CIDRs of nodes in other segments.
routes = append(routes, &netlink.Route{
Dst: segment.cidrs[i],
Flags: int(netlink.FLAG_ONLINK),
Gw: segment.wireGuardIP,
LinkIndex: kiloIface,
Protocol: unix.RTPROT_STATIC,
})
// Don't add routes through Kilo if the private IP
// equals the external IP. This means that the node
// is only accessible through an external IP and we
// cannot encapsulate traffic to an IP through the IP.
if segment.privateIPs[i].Equal(segment.endpoint.IP) {
continue
}
// Add routes to the private IPs of nodes in other segments.
// Number of CIDRs and private IPs always match so
// we can reuse the loop.
routes = append(routes, &netlink.Route{
Dst: oneAddressCIDR(segment.privateIPs[i]),
Flags: int(netlink.FLAG_ONLINK),
Gw: segment.wireGuardIP,
LinkIndex: kiloIface,
Protocol: unix.RTPROT_STATIC,
})
}
}
// Add routes for the allowed IPs of peers.
for _, peer := range t.peers {
for i := range peer.AllowedIPs {
routes = append(routes, &netlink.Route{
Dst: peer.AllowedIPs[i],
LinkIndex: kiloIface,
Protocol: unix.RTPROT_STATIC,
})
}
}
return routes, rules
}
func encapsulateRoute(route *netlink.Route, encapsulate encapsulation.Strategy, subnet *net.IPNet, tunlIface int) *netlink.Route {
if encapsulate == encapsulation.Always || (encapsulate == encapsulation.CrossSubnet && !subnet.Contains(route.Gw)) {
route.LinkIndex = tunlIface
}
return route
}
// Conf generates a WireGuard configuration file for a given Topology.
func (t *Topology) Conf() *wireguard.Conf {
c := &wireguard.Conf{
@@ -437,33 +262,6 @@ func (t *Topology) PeerConf(name string) *wireguard.Conf {
return c
}
// Rules returns the iptables rules required by the local node.
func (t *Topology) Rules(cni bool) []iptables.Rule {
var rules []iptables.Rule
rules = append(rules, iptables.NewIPv4Chain("nat", "KILO-NAT"))
rules = append(rules, iptables.NewIPv6Chain("nat", "KILO-NAT"))
if cni {
rules = append(rules, iptables.NewRule(iptables.GetProtocol(len(t.subnet.IP)), "nat", "POSTROUTING", "-m", "comment", "--comment", "Kilo: jump to NAT chain", "-s", t.subnet.String(), "-j", "KILO-NAT"))
}
for _, s := range t.segments {
rules = append(rules, iptables.NewRule(iptables.GetProtocol(len(s.wireGuardIP)), "nat", "KILO-NAT", "-m", "comment", "--comment", "Kilo: do not NAT packets destined for WireGuard IPs", "-d", s.wireGuardIP.String(), "-j", "RETURN"))
for _, aip := range s.allowedIPs {
rules = append(rules, iptables.NewRule(iptables.GetProtocol(len(aip.IP)), "nat", "KILO-NAT", "-m", "comment", "--comment", "Kilo: do not NAT packets destined for known IPs", "-d", aip.String(), "-j", "RETURN"))
}
}
for _, p := range t.peers {
for _, aip := range p.AllowedIPs {
rules = append(rules,
iptables.NewRule(iptables.GetProtocol(len(aip.IP)), "nat", "POSTROUTING", "-m", "comment", "--comment", "Kilo: jump to NAT chain", "-s", aip.String(), "-j", "KILO-NAT"),
iptables.NewRule(iptables.GetProtocol(len(aip.IP)), "nat", "KILO-NAT", "-m", "comment", "--comment", "Kilo: do not NAT packets destined for peers", "-d", aip.String(), "-j", "RETURN"),
)
}
}
rules = append(rules, iptables.NewIPv4Rule("nat", "KILO-NAT", "-m", "comment", "--comment", "Kilo: NAT remaining packets", "-j", "MASQUERADE"))
rules = append(rules, iptables.NewIPv6Rule("nat", "KILO-NAT", "-m", "comment", "--comment", "Kilo: NAT remaining packets", "-j", "MASQUERADE"))
return rules
}
// oneAddressCIDR takes an IP address and returns a CIDR
// that contains only that address.
func oneAddressCIDR(ip net.IP) *net.IPNet {
@@ -521,12 +319,3 @@ func deduplicatePeerIPs(peers []*Peer) []*Peer {
}
return ps
}
func defaultRule(rule *netlink.Rule) *netlink.Rule {
base := netlink.NewRule()
base.Src = rule.Src
base.Dst = rule.Dst
base.IifName = rule.IifName
base.Table = rule.Table
return base
}

File diff suppressed because it is too large


@@ -12,6 +12,8 @@
// See the License for the specific language governing permissions and
// limitations under the License.
// +build linux
package wireguard
import (


@@ -17,6 +17,7 @@
package main
import (
_ "github.com/campoy/embedmd"
_ "golang.org/x/lint/golint"
_ "k8s.io/code-generator/cmd/client-gen"
_ "k8s.io/code-generator/cmd/deepcopy-gen"

vendor/github.com/campoy/embedmd/.gitignore generated vendored Normal file

@@ -0,0 +1,4 @@
.vscode/
*.test
embed
.DS_Store

vendor/github.com/campoy/embedmd/.travis.yml generated vendored Normal file

@@ -0,0 +1,9 @@
language: go
go:
- 1.7.x
- 1.8.x
- 1.x
- master
script:
- go test ./...

vendor/github.com/campoy/embedmd/LICENSE generated vendored Normal file

@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

152
vendor/github.com/campoy/embedmd/README.md generated vendored Normal file
View File

@@ -0,0 +1,152 @@
[![Build Status](https://travis-ci.org/campoy/embedmd.svg)](https://travis-ci.org/campoy/embedmd) [![Go Report Card](https://goreportcard.com/badge/github.com/campoy/embedmd)](https://goreportcard.com/report/github.com/campoy/embedmd)
# embedmd
Are you tired of copy pasting your code into your `README.md` file, just to
forget about it later on and have unsynced copies? Or even worse, code
that does not even compile?
Then `embedmd` is for you!
`embedmd` embeds files or fractions of files into Markdown files. It does
so by searching for `embedmd` commands, which are a subset of the Markdown
syntax for comments. This means they are invisible when Markdown is
rendered, so they can be kept in the file as pointers to the origin of
the embedded text.
The command receives a list of Markdown files. If no list is given, the command
reads from the standard input.
The format of an `embedmd` command is:
```Markdown
[embedmd]:# (pathOrURL language /start regexp/ /end regexp/)
```
The embedded code will be extracted from the file at `pathOrURL`,
which can either be a relative path to a file in the local file
system (always using forward slashes as the directory separator) or
a URL starting with `http://` or `https://`.
If `pathOrURL` is a URL, the tool will fetch the content at that URL.
The embedded content starts at the first line that matches `/start regexp/`
and finishes at the first line matching `/end regexp/`.
Omitting the second regular expression will embed only the piece of text
that matches `/regexp/`:
```Markdown
[embedmd]:# (pathOrURL language /regexp/)
```
To embed the whole line matching a regular expression you can use:
```Markdown
[embedmd]:# (pathOrURL language /.*regexp.*/)
```
To embed from a point to the end you should use:
```Markdown
[embedmd]:# (pathOrURL language /start regexp/ $)
```
To embed a whole file, omit both regular expressions:
```Markdown
[embedmd]:# (pathOrURL language)
```
You can omit the language in any of the previous commands, and the extension
of the file will be used for the snippet syntax highlighting.
This works when the file extension matches the name of the language (like Go
files, since `.go` matches `go`). However, this will fail with other files like
`.md` whose language name is `markdown`.
```Markdown
[embedmd]:# (file.ext)
```
## Installation
> You can install Go by following [these instructions](https://golang.org/doc/install).
`embedmd` is written in Go, so if you have Go installed you can install it with
`go get`:
```
go get github.com/campoy/embedmd
```
This will download the code, compile it, and leave an `embedmd` binary
in `$GOPATH/bin`.
Eventually, and if there's enough interest, I will provide binaries for
every OS and architecture out there ... _eventually_.
## Usage
Given the two files in [sample](sample):
*hello.go:*
[embedmd]:# (sample/hello.go)
```go
// Copyright 2016 Google Inc. All rights reserved.
// Use of this source code is governed by the Apache 2.0
// license that can be found in the LICENSE file.
package main
import (
"fmt"
"time"
)
func main() {
fmt.Println("Hello, there, it is", time.Now())
}
```
*docs.md:*
[embedmd]:# (sample/docs.md Markdown /./ /embedmd.*time.*/)
```Markdown
# A hello world in Go
Go is very simple, here you can see a whole "hello, world" program.
[embedmd]:# (hello.go)
We can try to embed a file from a directory.
[embedmd]:# (test/hello.go /func main/ $)
You always start with a `package` statement like:
[embedmd]:# (hello.go /package.*/)
Followed by an `import` statement:
[embedmd]:# (hello.go /import/ /\)/)
You can also see how to get the current time:
[embedmd]:# (hello.go /time\.[^)]*\)/)
```
## Flags
* `-w`: Executing `embedmd -w docs.md` will modify `docs.md`
and add the corresponding code snippets, as shown in
[sample/result.md](sample/result.md).
* `-d`: Executing `embedmd -d docs.md` will display the difference
between the contents of `docs.md` and the output of
`embedmd docs.md`.
### Disclaimer
This is not an official Google product (experimental or otherwise), it is just
code that happens to be owned by Google.

101
vendor/github.com/campoy/embedmd/embedmd/command.go generated vendored Normal file
View File

@@ -0,0 +1,101 @@
// Copyright 2016 Google Inc. All rights reserved.
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied.
//
// See the License for the specific language governing permissions and
// limitations under the License.
package embedmd
import (
"errors"
"path/filepath"
"strings"
)
type command struct {
path, lang string
start, end *string
}
func parseCommand(s string) (*command, error) {
s = strings.TrimSpace(s)
if len(s) < 2 || s[0] != '(' || s[len(s)-1] != ')' {
return nil, errors.New("argument list should be in parenthesis")
}
args, err := fields(s[1 : len(s)-1])
if err != nil {
return nil, err
}
if len(args) == 0 {
return nil, errors.New("missing file name")
}
cmd := &command{path: args[0]}
args = args[1:]
if len(args) > 0 && args[0][0] != '/' {
cmd.lang, args = args[0], args[1:]
} else {
ext := filepath.Ext(cmd.path[1:])
if len(ext) == 0 {
return nil, errors.New("language is required when file has no extension")
}
cmd.lang = ext[1:]
}
switch {
case len(args) == 1:
cmd.start = &args[0]
case len(args) == 2:
cmd.start, cmd.end = &args[0], &args[1]
case len(args) > 2:
return nil, errors.New("too many arguments")
}
return cmd, nil
}
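// For illustration (example values, not part of the upstream source):
// parseCommand(`(hello.go go /func main/ $)`) yields a command with
// path "hello.go", lang "go", start "/func main/", and end "$".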
// fields returns a list of the groups of text separated by blanks,
// keeping all text surrounded by / as a group.
func fields(s string) ([]string, error) {
var args []string
for s = strings.TrimSpace(s); len(s) > 0; s = strings.TrimSpace(s) {
if s[0] == '/' {
sep := nextSlash(s[1:])
if sep < 0 {
return nil, errors.New("unbalanced /")
}
args, s = append(args, s[:sep+2]), s[sep+2:]
} else {
sep := strings.IndexByte(s[1:], ' ')
if sep < 0 {
return append(args, s), nil
}
args, s = append(args, s[:sep+1]), s[sep+1:]
}
}
return args, nil
}
// nextSlash will find the index of the next unescaped slash in a string.
func nextSlash(s string) int {
for sep := 0; ; sep++ {
i := strings.IndexByte(s[sep:], '/')
if i < 0 {
return -1
}
sep += i
if sep == 0 || s[sep-1] != '\\' {
return sep
}
}
}

51
vendor/github.com/campoy/embedmd/embedmd/content.go generated vendored Normal file
View File

@@ -0,0 +1,51 @@
// Copyright 2016 Google Inc. All rights reserved.
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied.
//
// See the License for the specific language governing permissions and
// limitations under the License.
package embedmd
import (
"fmt"
"io/ioutil"
"net/http"
"path/filepath"
"strings"
)
// Fetcher provides an abstraction on a file system.
// The Fetch function is called anytime some content needs to be fetched.
// For now this includes files and URLs.
// The first parameter is the base directory that could be used to resolve
// relative paths. This base directory will be ignored for absolute paths,
// such as URLs.
type Fetcher interface {
Fetch(dir, path string) ([]byte, error)
}
type fetcher struct{}
func (fetcher) Fetch(dir, path string) ([]byte, error) {
if !strings.HasPrefix(path, "http://") && !strings.HasPrefix(path, "https://") {
path = filepath.Join(dir, filepath.FromSlash(path))
return ioutil.ReadFile(path)
}
res, err := http.Get(path)
if err != nil {
return nil, err
}
defer res.Body.Close()
if res.StatusCode != http.StatusOK {
return nil, fmt.Errorf("status %s", res.Status)
}
return ioutil.ReadAll(res.Body)
}
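// A minimal illustrative sketch (not part of the upstream source): a Fetcher
// backed by an in-memory map, which could be handed to Process via the
// package's WithFetcher option, e.g. in tests. The name mapFetcher and the
// keys it serves are hypothetical.
type mapFetcher map[string][]byte

func (m mapFetcher) Fetch(dir, path string) ([]byte, error) {
	b, ok := m[path]
	if !ok {
		return nil, fmt.Errorf("no content for %s", path)
	}
	return b, nil
}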

153
vendor/github.com/campoy/embedmd/embedmd/embedmd.go generated vendored Normal file
View File

@@ -0,0 +1,153 @@
// Copyright 2016 Google Inc. All rights reserved.
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied.
//
// See the License for the specific language governing permissions and
// limitations under the License.
// Package embedmd provides a single function, Process, that parses markdown
// searching for markdown comments.
//
// The format of an embedmd command is:
//
// [embedmd]:# (pathOrURL language /start regexp/ /end regexp/)
//
// The embedded code will be extracted from the file at pathOrURL,
// which can either be a relative path to a file in the local file
// system (always using forward slashes as the directory separator) or
// a URL starting with http:// or https://.
// If pathOrURL is a URL, the tool will fetch the content at that URL.
// The embedded content starts at the first line that matches /start regexp/
// and finishes at the first line matching /end regexp/.
//
// Omitting the second regular expression will embed only the piece of
// text that matches /regexp/:
//
// [embedmd]:# (pathOrURL language /regexp/)
//
// To embed the whole line matching a regular expression you can use:
//
// [embedmd]:# (pathOrURL language /.*regexp.*\n/)
//
// If you want to embed from a point to the end you should use:
//
// [embedmd]:# (pathOrURL language /start regexp/ $)
//
// Finally you can embed a whole file by omitting both regular expressions:
//
// [embedmd]:# (pathOrURL language)
//
// You can omit the language in any of the previous commands, and the extension
// of the file will be used for the snippet syntax highlighting. Note that while
// this works for Go files, since the file extension .go matches the name of the
// language go, it will fail with other files like .md whose language name is markdown.
//
// [embedmd]:# (file.ext)
//
package embedmd
import (
"fmt"
"io"
"regexp"
)
// Process reads markdown from the given io.Reader searching for an embedmd
// command. When a command is found, it is executed and the output is written
// into the given io.Writer with the rest of standard markdown.
func Process(out io.Writer, in io.Reader, opts ...Option) error {
e := embedder{Fetcher: fetcher{}}
for _, opt := range opts {
opt.f(&e)
}
return process(out, in, e.runCommand)
}
// An Option provides a way to adapt the Process function to your needs.
type Option struct{ f func(*embedder) }
// WithBaseDir indicates that the given path should be used to resolve relative
// paths.
func WithBaseDir(path string) Option {
return Option{func(e *embedder) { e.baseDir = path }}
}
// WithFetcher provides a custom Fetcher to be used whenever a path or URL needs
// to be fetched.
func WithFetcher(c Fetcher) Option {
return Option{func(e *embedder) { e.Fetcher = c }}
}
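// An illustrative usage sketch (the file name "docs.md" is hypothetical, and
// this example is not part of the upstream source): process a markdown file,
// resolving relative embedmd paths against the file's directory.
//
//	f, err := os.Open("docs.md")
//	if err != nil {
//		log.Fatal(err)
//	}
//	defer f.Close()
//	var buf bytes.Buffer
//	if err := Process(&buf, f, WithBaseDir(filepath.Dir("docs.md"))); err != nil {
//		log.Fatal(err)
//	}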
type embedder struct {
Fetcher
baseDir string
}
func (e *embedder) runCommand(w io.Writer, cmd *command) error {
b, err := e.Fetch(e.baseDir, cmd.path)
if err != nil {
return fmt.Errorf("could not read %s: %v", cmd.path, err)
}
b, err = extract(b, cmd.start, cmd.end)
if err != nil {
return fmt.Errorf("could not extract content from %s: %v", cmd.path, err)
}
if len(b) > 0 && b[len(b)-1] != '\n' {
b = append(b, '\n')
}
fmt.Fprintln(w, "```"+cmd.lang)
w.Write(b)
fmt.Fprintln(w, "```")
return nil
}
func extract(b []byte, start, end *string) ([]byte, error) {
if start == nil && end == nil {
return b, nil
}
match := func(s string) ([]int, error) {
if len(s) <= 2 || s[0] != '/' || s[len(s)-1] != '/' {
return nil, fmt.Errorf("missing slashes (/) around %q", s)
}
re, err := regexp.CompilePOSIX(s[1 : len(s)-1])
if err != nil {
return nil, err
}
loc := re.FindIndex(b)
if loc == nil {
return nil, fmt.Errorf("could not match %q", s)
}
return loc, nil
}
if *start != "" {
loc, err := match(*start)
if err != nil {
return nil, err
}
if end == nil {
return b[loc[0]:loc[1]], nil
}
b = b[loc[0]:]
}
if *end != "$" {
loc, err := match(*end)
if err != nil {
return nil, err
}
b = b[:loc[1]]
}
return b, nil
}

117
vendor/github.com/campoy/embedmd/embedmd/parser.go generated vendored Normal file
View File

@@ -0,0 +1,117 @@
// Copyright 2016 Google Inc. All rights reserved.
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied.
//
// See the License for the specific language governing permissions and
// limitations under the License.
package embedmd
import (
"bufio"
"fmt"
"io"
"strings"
)
type commandRunner func(io.Writer, *command) error
func process(out io.Writer, in io.Reader, run commandRunner) error {
s := &countingScanner{bufio.NewScanner(in), 0}
state := parsingText
var err error
for state != nil {
state, err = state(out, s, run)
if err != nil {
return fmt.Errorf("%d: %v", s.line, err)
}
}
if err := s.Err(); err != nil {
return fmt.Errorf("%d: %v", s.line, err)
}
return nil
}
type countingScanner struct {
*bufio.Scanner
line int
}
func (c *countingScanner) Scan() bool {
b := c.Scanner.Scan()
if b {
c.line++
}
return b
}
type textScanner interface {
Text() string
Scan() bool
}
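// A state consumes input from the scanner and returns the next state:
// parsingText copies lines through until it finds an embedmd command or a code
// fence, parsingCmd echoes the command line, runs the command (writing a fresh
// snippet), and skips any stale fenced block that immediately follows it, and
// codeParser consumes a fenced code section, printing it only when it was not
// just regenerated by a command.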
type state func(io.Writer, textScanner, commandRunner) (state, error)
func parsingText(out io.Writer, s textScanner, run commandRunner) (state, error) {
if !s.Scan() {
return nil, nil // end of file, which is fine.
}
switch line := s.Text(); {
case strings.HasPrefix(line, "[embedmd]:#"):
return parsingCmd, nil
case strings.HasPrefix(line, "```"):
return codeParser{print: true}.parse, nil
default:
fmt.Fprintln(out, s.Text())
return parsingText, nil
}
}
func parsingCmd(out io.Writer, s textScanner, run commandRunner) (state, error) {
line := s.Text()
fmt.Fprintln(out, line)
args := line[strings.Index(line, "#")+1:]
cmd, err := parseCommand(args)
if err != nil {
return nil, err
}
if err := run(out, cmd); err != nil {
return nil, err
}
if !s.Scan() {
return nil, nil // end of file, which is fine.
}
if strings.HasPrefix(s.Text(), "```") {
return codeParser{print: false}.parse, nil
}
fmt.Fprintln(out, s.Text())
return parsingText, nil
}
type codeParser struct{ print bool }
func (c codeParser) parse(out io.Writer, s textScanner, run commandRunner) (state, error) {
if c.print {
fmt.Fprintln(out, s.Text())
}
if !s.Scan() {
return nil, fmt.Errorf("unbalanced code section")
}
if !strings.HasPrefix(s.Text(), "```") {
return c.parse, nil
}
// print the end of the code section if needed and go back to parsing text.
if c.print {
fmt.Fprintln(out, s.Text())
}
return parsingText, nil
}

3
vendor/github.com/campoy/embedmd/go.mod generated vendored Normal file
View File

@@ -0,0 +1,3 @@
module github.com/campoy/embedmd
require github.com/pmezard/go-difflib v1.0.0

2
vendor/github.com/campoy/embedmd/go.sum generated vendored Normal file
View File

@@ -0,0 +1,2 @@
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=

185
vendor/github.com/campoy/embedmd/main.go generated vendored Normal file
View File

@@ -0,0 +1,185 @@
// Copyright 2016 Google Inc. All rights reserved.
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied.
//
// See the License for the specific language governing permissions and
// limitations under the License.
// embedmd
//
// embedmd embeds files or fractions of files into markdown files.
// It does so by searching for embedmd commands, which are a subset of the
// markdown syntax for comments. This means they are invisible when
// markdown is rendered, so they can be kept in the file as pointers
// to the origin of the embedded text.
//
// The command receives a list of markdown files; if none is given, it
// reads from the standard input.
//
// embedmd supports the following flags:
// -d: prints the difference between the contents of the input file and what
// the output would have been if executed.
// -w: rewrites the given files rather than writing the output to the standard
// output.
// -v: prints the embedmd version and exits.
//
// For more information on the format of the commands, read the documentation
// of the github.com/campoy/embedmd/embedmd package.
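//
// Illustrative invocations (file names are hypothetical):
//
//	embedmd -w README.md   # rewrite README.md in place
//	embedmd -d docs.md     # print a diff and exit with status 2 if it is non-empty
//	cat docs.md | embedmd  # read from standard input, write to standard output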
package main
import (
"bytes"
"flag"
"fmt"
"io"
"io/ioutil"
"os"
"path/filepath"
"github.com/campoy/embedmd/embedmd"
"github.com/pmezard/go-difflib/difflib"
)
// modified while building by -ldflags.
var version = "unknown"
func usage() {
fmt.Fprintf(os.Stderr, "usage: embedmd [flags] [path ...]\n")
flag.PrintDefaults()
}
func main() {
rewrite := flag.Bool("w", false, "write result to (markdown) file instead of stdout")
doDiff := flag.Bool("d", false, "display diffs instead of rewriting files")
printVersion := flag.Bool("v", false, "display embedmd version")
flag.Usage = usage
flag.Parse()
if *printVersion {
fmt.Println("embedmd version: " + version)
return
}
diff, err := embed(flag.Args(), *rewrite, *doDiff)
if err != nil {
fmt.Fprintln(os.Stderr, err)
os.Exit(2)
}
if diff && *doDiff {
os.Exit(2)
}
}
var (
stdout io.Writer = os.Stdout
stdin io.Reader = os.Stdin
)
func embed(paths []string, rewrite, doDiff bool) (foundDiff bool, err error) {
if rewrite && doDiff {
return false, fmt.Errorf("error: cannot use -w and -d simultaneously")
}
if len(paths) == 0 {
if rewrite {
return false, fmt.Errorf("error: cannot use -w with standard input")
}
if !doDiff {
return false, embedmd.Process(stdout, stdin)
}
var out, in bytes.Buffer
if err := embedmd.Process(&out, io.TeeReader(stdin, &in)); err != nil {
return false, err
}
d, err := diff(in.String(), out.String())
if err != nil || len(d) == 0 {
return false, err
}
fmt.Fprintf(stdout, "%s", d)
return true, nil
}
for _, path := range paths {
d, err := processFile(path, rewrite, doDiff)
if err != nil {
return false, fmt.Errorf("%s:%v", path, err)
}
foundDiff = foundDiff || d
}
return foundDiff, nil
}
type file interface {
io.ReadCloser
io.WriterAt
Truncate(int64) error
}
// replaced by testing functions.
var openFile = func(name string) (file, error) {
return os.OpenFile(name, os.O_RDWR, 0666)
}
func readFile(path string) ([]byte, error) {
f, err := openFile(path)
if err != nil {
return nil, err
}
defer f.Close()
return ioutil.ReadAll(f)
}
func processFile(path string, rewrite, doDiff bool) (foundDiff bool, err error) {
if filepath.Ext(path) != ".md" {
return false, fmt.Errorf("not a markdown file")
}
f, err := openFile(path)
if err != nil {
return false, err
}
defer f.Close()
buf := new(bytes.Buffer)
if err := embedmd.Process(buf, f, embedmd.WithBaseDir(filepath.Dir(path))); err != nil {
return false, err
}
if doDiff {
f, err := readFile(path)
if err != nil {
return false, fmt.Errorf("could not read %s for diff: %v", path, err)
}
data, err := diff(string(f), buf.String())
if err != nil || len(data) == 0 {
return false, err
}
fmt.Fprintf(stdout, "%s", data)
return true, nil
}
if rewrite {
n, err := f.WriteAt(buf.Bytes(), 0)
if err != nil {
return false, fmt.Errorf("could not write: %v", err)
}
return false, f.Truncate(int64(n))
}
io.Copy(stdout, buf)
return false, nil
}
func diff(a, b string) (string, error) {
return difflib.GetUnifiedDiffString(difflib.UnifiedDiff{
A: difflib.SplitLines(a),
B: difflib.SplitLines(b),
Context: 3,
})
}

27
vendor/github.com/pmezard/go-difflib/LICENSE generated vendored Normal file
View File

@@ -0,0 +1,27 @@
Copyright (c) 2013, Patrick Mezard
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
The names of its contributors may not be used to endorse or promote
products derived from this software without specific prior written
permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

772
vendor/github.com/pmezard/go-difflib/difflib/difflib.go generated vendored Normal file
View File

@@ -0,0 +1,772 @@
// Package difflib is a partial port of the Python difflib module.
//
// It provides tools to compare sequences of strings and generate textual diffs.
//
// The following class and functions have been ported:
//
// - SequenceMatcher
//
// - unified_diff
//
// - context_diff
//
// Getting unified diffs was the main goal of the port. Keep in mind this code
// is mostly suitable for outputting text differences in a human-friendly way; there
// are no guarantees that generated diffs are consumable by patch(1).
package difflib
import (
"bufio"
"bytes"
"fmt"
"io"
"strings"
)
func min(a, b int) int {
if a < b {
return a
}
return b
}
func max(a, b int) int {
if a > b {
return a
}
return b
}
func calculateRatio(matches, length int) float64 {
if length > 0 {
return 2.0 * float64(matches) / float64(length)
}
return 1.0
}
type Match struct {
A int
B int
Size int
}
type OpCode struct {
Tag byte
I1 int
I2 int
J1 int
J2 int
}
// SequenceMatcher compares sequence of strings. The basic
// algorithm predates, and is a little fancier than, an algorithm
// published in the late 1980's by Ratcliff and Obershelp under the
// hyperbolic name "gestalt pattern matching". The basic idea is to find
// the longest contiguous matching subsequence that contains no "junk"
// elements (R-O doesn't address junk). The same idea is then applied
// recursively to the pieces of the sequences to the left and to the right
// of the matching subsequence. This does not yield minimal edit
// sequences, but does tend to yield matches that "look right" to people.
//
// SequenceMatcher tries to compute a "human-friendly diff" between two
// sequences. Unlike e.g. UNIX(tm) diff, the fundamental notion is the
// longest *contiguous* & junk-free matching subsequence. That's what
// catches peoples' eyes. The Windows(tm) windiff has another interesting
// notion, pairing up elements that appear uniquely in each sequence.
// That, and the method here, appear to yield more intuitive difference
// reports than does diff. This method appears to be the least vulnerable
// to synching up on blocks of "junk lines", though (like blank lines in
// ordinary text files, or maybe "<P>" lines in HTML files). That may be
// because this is the only method of the 3 that has a *concept* of
// "junk" <wink>.
//
// Timing: Basic R-O is cubic time worst case and quadratic time expected
// case. SequenceMatcher is quadratic time for the worst case and has
// expected-case behavior dependent in a complicated way on how many
// elements the sequences have in common; best case time is linear.
type SequenceMatcher struct {
a []string
b []string
b2j map[string][]int
IsJunk func(string) bool
autoJunk bool
bJunk map[string]struct{}
matchingBlocks []Match
fullBCount map[string]int
bPopular map[string]struct{}
opCodes []OpCode
}
func NewMatcher(a, b []string) *SequenceMatcher {
m := SequenceMatcher{autoJunk: true}
m.SetSeqs(a, b)
return &m
}
func NewMatcherWithJunk(a, b []string, autoJunk bool,
isJunk func(string) bool) *SequenceMatcher {
m := SequenceMatcher{IsJunk: isJunk, autoJunk: autoJunk}
m.SetSeqs(a, b)
return &m
}
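// Illustrative example (not part of the original source); the values follow
// from the implementation below:
//
//	m := NewMatcher([]string{"one", "two", "three"}, []string{"one", "three"})
//	r := m.Ratio() // 2 matches out of 5 total elements: 2.0*2/5 = 0.8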
// Set two sequences to be compared.
func (m *SequenceMatcher) SetSeqs(a, b []string) {
m.SetSeq1(a)
m.SetSeq2(b)
}
// Set the first sequence to be compared. The second sequence to be compared is
// not changed.
//
// SequenceMatcher computes and caches detailed information about the second
// sequence, so if you want to compare one sequence S against many sequences,
// use .SetSeq2(s) once and call .SetSeq1(x) repeatedly for each of the other
// sequences.
//
// See also SetSeqs() and SetSeq2().
func (m *SequenceMatcher) SetSeq1(a []string) {
if &a == &m.a {
return
}
m.a = a
m.matchingBlocks = nil
m.opCodes = nil
}
// Set the second sequence to be compared. The first sequence to be compared is
// not changed.
func (m *SequenceMatcher) SetSeq2(b []string) {
if &b == &m.b {
return
}
m.b = b
m.matchingBlocks = nil
m.opCodes = nil
m.fullBCount = nil
m.chainB()
}
func (m *SequenceMatcher) chainB() {
// Populate line -> index mapping
b2j := map[string][]int{}
for i, s := range m.b {
indices := b2j[s]
indices = append(indices, i)
b2j[s] = indices
}
// Purge junk elements
m.bJunk = map[string]struct{}{}
if m.IsJunk != nil {
junk := m.bJunk
for s, _ := range b2j {
if m.IsJunk(s) {
junk[s] = struct{}{}
}
}
for s, _ := range junk {
delete(b2j, s)
}
}
// Purge remaining popular elements
popular := map[string]struct{}{}
n := len(m.b)
if m.autoJunk && n >= 200 {
ntest := n/100 + 1
for s, indices := range b2j {
if len(indices) > ntest {
popular[s] = struct{}{}
}
}
for s, _ := range popular {
delete(b2j, s)
}
}
m.bPopular = popular
m.b2j = b2j
}
func (m *SequenceMatcher) isBJunk(s string) bool {
_, ok := m.bJunk[s]
return ok
}
// Find longest matching block in a[alo:ahi] and b[blo:bhi].
//
// If IsJunk is not defined:
//
// Return (i,j,k) such that a[i:i+k] is equal to b[j:j+k], where
// alo <= i <= i+k <= ahi
// blo <= j <= j+k <= bhi
// and for all (i',j',k') meeting those conditions,
// k >= k'
// i <= i'
// and if i == i', j <= j'
//
// In other words, of all maximal matching blocks, return one that
// starts earliest in a, and of all those maximal matching blocks that
// start earliest in a, return the one that starts earliest in b.
//
// If IsJunk is defined, first the longest matching block is
// determined as above, but with the additional restriction that no
// junk element appears in the block. Then that block is extended as
// far as possible by matching (only) junk elements on both sides. So
// the resulting block never matches on junk except as identical junk
// happens to be adjacent to an "interesting" match.
//
// If no blocks match, return (alo, blo, 0).
func (m *SequenceMatcher) findLongestMatch(alo, ahi, blo, bhi int) Match {
// CAUTION: stripping common prefix or suffix would be incorrect.
// E.g.,
// ab
// acab
// Longest matching block is "ab", but if common prefix is
// stripped, it's "a" (tied with "b"). UNIX(tm) diff does so
// strip, so ends up claiming that ab is changed to acab by
// inserting "ca" in the middle. That's minimal but unintuitive:
// "it's obvious" that someone inserted "ac" at the front.
// Windiff ends up at the same place as diff, but by pairing up
// the unique 'b's and then matching the first two 'a's.
besti, bestj, bestsize := alo, blo, 0
// find longest junk-free match
// during an iteration of the loop, j2len[j] = length of longest
// junk-free match ending with a[i-1] and b[j]
j2len := map[int]int{}
for i := alo; i != ahi; i++ {
// look at all instances of a[i] in b; note that because
// b2j has no junk keys, the loop is skipped if a[i] is junk
newj2len := map[int]int{}
for _, j := range m.b2j[m.a[i]] {
// a[i] matches b[j]
if j < blo {
continue
}
if j >= bhi {
break
}
k := j2len[j-1] + 1
newj2len[j] = k
if k > bestsize {
besti, bestj, bestsize = i-k+1, j-k+1, k
}
}
j2len = newj2len
}
// Extend the best by non-junk elements on each end. In particular,
// "popular" non-junk elements aren't in b2j, which greatly speeds
// the inner loop above, but also means "the best" match so far
// doesn't contain any junk *or* popular non-junk elements.
for besti > alo && bestj > blo && !m.isBJunk(m.b[bestj-1]) &&
m.a[besti-1] == m.b[bestj-1] {
besti, bestj, bestsize = besti-1, bestj-1, bestsize+1
}
for besti+bestsize < ahi && bestj+bestsize < bhi &&
!m.isBJunk(m.b[bestj+bestsize]) &&
m.a[besti+bestsize] == m.b[bestj+bestsize] {
bestsize += 1
}
// Now that we have a wholly interesting match (albeit possibly
// empty!), we may as well suck up the matching junk on each
// side of it too. Can't think of a good reason not to, and it
// saves post-processing the (possibly considerable) expense of
// figuring out what to do with it. In the case of an empty
// interesting match, this is clearly the right thing to do,
// because no other kind of match is possible in the regions.
for besti > alo && bestj > blo && m.isBJunk(m.b[bestj-1]) &&
m.a[besti-1] == m.b[bestj-1] {
besti, bestj, bestsize = besti-1, bestj-1, bestsize+1
}
for besti+bestsize < ahi && bestj+bestsize < bhi &&
m.isBJunk(m.b[bestj+bestsize]) &&
m.a[besti+bestsize] == m.b[bestj+bestsize] {
bestsize += 1
}
return Match{A: besti, B: bestj, Size: bestsize}
}
// Return list of triples describing matching subsequences.
//
// Each triple is of the form (i, j, n), and means that
// a[i:i+n] == b[j:j+n]. The triples are monotonically increasing in
// i and in j. It's also guaranteed that if (i, j, n) and (i', j', n') are
// adjacent triples in the list, and the second is not the last triple in the
// list, then i+n != i' or j+n != j'. IOW, adjacent triples never describe
// adjacent equal blocks.
//
// The last triple is a dummy, (len(a), len(b), 0), and is the only
// triple with n==0.
func (m *SequenceMatcher) GetMatchingBlocks() []Match {
if m.matchingBlocks != nil {
return m.matchingBlocks
}
var matchBlocks func(alo, ahi, blo, bhi int, matched []Match) []Match
matchBlocks = func(alo, ahi, blo, bhi int, matched []Match) []Match {
match := m.findLongestMatch(alo, ahi, blo, bhi)
i, j, k := match.A, match.B, match.Size
if match.Size > 0 {
if alo < i && blo < j {
matched = matchBlocks(alo, i, blo, j, matched)
}
matched = append(matched, match)
if i+k < ahi && j+k < bhi {
matched = matchBlocks(i+k, ahi, j+k, bhi, matched)
}
}
return matched
}
matched := matchBlocks(0, len(m.a), 0, len(m.b), nil)
// It's possible that we have adjacent equal blocks in the
// matching_blocks list now.
nonAdjacent := []Match{}
i1, j1, k1 := 0, 0, 0
for _, b := range matched {
// Is this block adjacent to i1, j1, k1?
i2, j2, k2 := b.A, b.B, b.Size
if i1+k1 == i2 && j1+k1 == j2 {
// Yes, so collapse them -- this just increases the length of
// the first block by the length of the second, and the first
// block so lengthened remains the block to compare against.
k1 += k2
} else {
// Not adjacent. Remember the first block (k1==0 means it's
// the dummy we started with), and make the second block the
// new block to compare against.
if k1 > 0 {
nonAdjacent = append(nonAdjacent, Match{i1, j1, k1})
}
i1, j1, k1 = i2, j2, k2
}
}
if k1 > 0 {
nonAdjacent = append(nonAdjacent, Match{i1, j1, k1})
}
nonAdjacent = append(nonAdjacent, Match{len(m.a), len(m.b), 0})
m.matchingBlocks = nonAdjacent
return m.matchingBlocks
}
// Return list of 5-tuples describing how to turn a into b.
//
// Each tuple is of the form (tag, i1, i2, j1, j2). The first tuple
// has i1 == j1 == 0, and remaining tuples have i1 == the i2 from the
// tuple preceding it, and likewise for j1 == the previous j2.
//
// The tags are characters, with these meanings:
//
// 'r' (replace): a[i1:i2] should be replaced by b[j1:j2]
//
// 'd' (delete): a[i1:i2] should be deleted, j1==j2 in this case.
//
// 'i' (insert): b[j1:j2] should be inserted at a[i1:i1], i1==i2 in this case.
//
// 'e' (equal): a[i1:i2] == b[j1:j2]
func (m *SequenceMatcher) GetOpCodes() []OpCode {
if m.opCodes != nil {
return m.opCodes
}
i, j := 0, 0
matching := m.GetMatchingBlocks()
opCodes := make([]OpCode, 0, len(matching))
for _, m := range matching {
// invariant: we've pumped out correct diffs to change
// a[:i] into b[:j], and the next matching block is
// a[ai:ai+size] == b[bj:bj+size]. So we need to pump
// out a diff to change a[i:ai] into b[j:bj], pump out
// the matching block, and move (i,j) beyond the match
ai, bj, size := m.A, m.B, m.Size
tag := byte(0)
if i < ai && j < bj {
tag = 'r'
} else if i < ai {
tag = 'd'
} else if j < bj {
tag = 'i'
}
if tag > 0 {
opCodes = append(opCodes, OpCode{tag, i, ai, j, bj})
}
i, j = ai+size, bj+size
// the list of matching blocks is terminated by a
// sentinel with size 0
if size > 0 {
opCodes = append(opCodes, OpCode{'e', ai, i, bj, j})
}
}
m.opCodes = opCodes
return m.opCodes
}
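// For illustration (not part of the original source): with
// a = []string{"x", "y"} and b = []string{"x", "z"}, GetOpCodes returns
//
//	[]OpCode{{'e', 0, 1, 0, 1}, {'r', 1, 2, 1, 2}}
//
// that is, keep "x" and replace "y" with "z".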
// Isolate change clusters by eliminating ranges with no changes.
//
// Return a generator of groups with up to n lines of context.
// Each group is in the same format as returned by GetOpCodes().
func (m *SequenceMatcher) GetGroupedOpCodes(n int) [][]OpCode {
if n < 0 {
n = 3
}
codes := m.GetOpCodes()
if len(codes) == 0 {
codes = []OpCode{OpCode{'e', 0, 1, 0, 1}}
}
// Fixup leading and trailing groups if they show no changes.
if codes[0].Tag == 'e' {
c := codes[0]
i1, i2, j1, j2 := c.I1, c.I2, c.J1, c.J2
codes[0] = OpCode{c.Tag, max(i1, i2-n), i2, max(j1, j2-n), j2}
}
if codes[len(codes)-1].Tag == 'e' {
c := codes[len(codes)-1]
i1, i2, j1, j2 := c.I1, c.I2, c.J1, c.J2
codes[len(codes)-1] = OpCode{c.Tag, i1, min(i2, i1+n), j1, min(j2, j1+n)}
}
nn := n + n
groups := [][]OpCode{}
group := []OpCode{}
for _, c := range codes {
i1, i2, j1, j2 := c.I1, c.I2, c.J1, c.J2
// End the current group and start a new one whenever
// there is a large range with no changes.
if c.Tag == 'e' && i2-i1 > nn {
group = append(group, OpCode{c.Tag, i1, min(i2, i1+n),
j1, min(j2, j1+n)})
groups = append(groups, group)
group = []OpCode{}
i1, j1 = max(i1, i2-n), max(j1, j2-n)
}
group = append(group, OpCode{c.Tag, i1, i2, j1, j2})
}
if len(group) > 0 && !(len(group) == 1 && group[0].Tag == 'e') {
groups = append(groups, group)
}
return groups
}
// Return a measure of the sequences' similarity (float in [0,1]).
//
// Where T is the total number of elements in both sequences, and
// M is the number of matches, this is 2.0*M / T.
// Note that this is 1 if the sequences are identical, and 0 if
// they have nothing in common.
//
// .Ratio() is expensive to compute if you haven't already computed
// .GetMatchingBlocks() or .GetOpCodes(), in which case you may
// want to try .QuickRatio() or .RealQuickRatio() first to get an
// upper bound.
func (m *SequenceMatcher) Ratio() float64 {
matches := 0
for _, m := range m.GetMatchingBlocks() {
matches += m.Size
}
return calculateRatio(matches, len(m.a)+len(m.b))
}
// Return an upper bound on ratio() relatively quickly.
//
// This isn't defined beyond that it is an upper bound on .Ratio(), and
// is faster to compute.
func (m *SequenceMatcher) QuickRatio() float64 {
// viewing a and b as multisets, set matches to the cardinality
// of their intersection; this counts the number of matches
// without regard to order, so is clearly an upper bound
if m.fullBCount == nil {
m.fullBCount = map[string]int{}
for _, s := range m.b {
m.fullBCount[s] = m.fullBCount[s] + 1
}
}
// avail[x] is the number of times x appears in 'b' less the
// number of times we've seen it in 'a' so far ... kinda
avail := map[string]int{}
matches := 0
for _, s := range m.a {
n, ok := avail[s]
if !ok {
n = m.fullBCount[s]
}
avail[s] = n - 1
if n > 0 {
matches += 1
}
}
return calculateRatio(matches, len(m.a)+len(m.b))
}
// Return an upper bound on ratio() very quickly.
//
// This isn't defined beyond that it is an upper bound on .Ratio(), and
// is faster to compute than either .Ratio() or .QuickRatio().
func (m *SequenceMatcher) RealQuickRatio() float64 {
la, lb := len(m.a), len(m.b)
return calculateRatio(min(la, lb), la+lb)
}
// Convert range to the "ed" format
func formatRangeUnified(start, stop int) string {
// Per the diff spec at http://www.unix.org/single_unix_specification/
beginning := start + 1 // lines start numbering with one
length := stop - start
if length == 1 {
return fmt.Sprintf("%d", beginning)
}
if length == 0 {
beginning -= 1 // empty ranges begin at line just before the range
}
return fmt.Sprintf("%d,%d", beginning, length)
}
// Unified diff parameters
type UnifiedDiff struct {
A []string // First sequence lines
FromFile string // First file name
FromDate string // First file time
B []string // Second sequence lines
ToFile string // Second file name
ToDate string // Second file time
Eol string // Headers end of line, defaults to LF
Context int // Number of context lines
}
// Compare two sequences of lines; generate the delta as a unified diff.
//
// Unified diffs are a compact way of showing line changes and a few
// lines of context. The number of context lines is set by 'n' which
// defaults to three.
//
// By default, the diff control lines (those with ---, +++, or @@) are
// created with a trailing newline. This is helpful so that inputs
// created from file.readlines() result in diffs that are suitable for
// file.writelines() since both the inputs and outputs have trailing
// newlines.
//
// For inputs that do not have trailing newlines, set the lineterm
// argument to "" so that the output will be uniformly newline free.
//
// The unidiff format normally has a header for filenames and modification
// times. Any or all of these may be specified using strings for
// 'fromfile', 'tofile', 'fromfiledate', and 'tofiledate'.
// The modification times are normally expressed in the ISO 8601 format.
func WriteUnifiedDiff(writer io.Writer, diff UnifiedDiff) error {
buf := bufio.NewWriter(writer)
defer buf.Flush()
wf := func(format string, args ...interface{}) error {
_, err := buf.WriteString(fmt.Sprintf(format, args...))
return err
}
ws := func(s string) error {
_, err := buf.WriteString(s)
return err
}
if len(diff.Eol) == 0 {
diff.Eol = "\n"
}
started := false
m := NewMatcher(diff.A, diff.B)
for _, g := range m.GetGroupedOpCodes(diff.Context) {
if !started {
started = true
fromDate := ""
if len(diff.FromDate) > 0 {
fromDate = "\t" + diff.FromDate
}
toDate := ""
if len(diff.ToDate) > 0 {
toDate = "\t" + diff.ToDate
}
if diff.FromFile != "" || diff.ToFile != "" {
err := wf("--- %s%s%s", diff.FromFile, fromDate, diff.Eol)
if err != nil {
return err
}
err = wf("+++ %s%s%s", diff.ToFile, toDate, diff.Eol)
if err != nil {
return err
}
}
}
first, last := g[0], g[len(g)-1]
range1 := formatRangeUnified(first.I1, last.I2)
range2 := formatRangeUnified(first.J1, last.J2)
if err := wf("@@ -%s +%s @@%s", range1, range2, diff.Eol); err != nil {
return err
}
for _, c := range g {
i1, i2, j1, j2 := c.I1, c.I2, c.J1, c.J2
if c.Tag == 'e' {
for _, line := range diff.A[i1:i2] {
if err := ws(" " + line); err != nil {
return err
}
}
continue
}
if c.Tag == 'r' || c.Tag == 'd' {
for _, line := range diff.A[i1:i2] {
if err := ws("-" + line); err != nil {
return err
}
}
}
if c.Tag == 'r' || c.Tag == 'i' {
for _, line := range diff.B[j1:j2] {
if err := ws("+" + line); err != nil {
return err
}
}
}
}
}
return nil
}
// Like WriteUnifiedDiff but returns the diff as a string.
func GetUnifiedDiffString(diff UnifiedDiff) (string, error) {
w := &bytes.Buffer{}
err := WriteUnifiedDiff(w, diff)
return string(w.Bytes()), err
}
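// An illustrative usage sketch (file names and contents are hypothetical, not
// part of the original source):
//
//	diff := UnifiedDiff{
//		A:        SplitLines("one\ntwo\nthree\n"),
//		B:        SplitLines("one\ntwo\nfour\n"),
//		FromFile: "before.txt",
//		ToFile:   "after.txt",
//		Context:  3,
//	}
//	text, err := GetUnifiedDiffString(diff)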
// Convert range to the "ed" format.
func formatRangeContext(start, stop int) string {
// Per the diff spec at http://www.unix.org/single_unix_specification/
beginning := start + 1 // lines start numbering with one
length := stop - start
if length == 0 {
beginning -= 1 // empty ranges begin at line just before the range
}
if length <= 1 {
return fmt.Sprintf("%d", beginning)
}
return fmt.Sprintf("%d,%d", beginning, beginning+length-1)
}
type ContextDiff UnifiedDiff
// Compare two sequences of lines; generate the delta as a context diff.
//
// Context diffs are a compact way of showing line changes and a few
// lines of context. The number of context lines is set by diff.Context
// which defaults to three.
//
// By default, the diff control lines (those with *** or ---) are
// created with a trailing newline.
//
// For inputs that do not have trailing newlines, set the diff.Eol
// argument to "" so that the output will be uniformly newline free.
//
// The context diff format normally has a header for filenames and
// modification times. Any or all of these may be specified using
// strings for diff.FromFile, diff.ToFile, diff.FromDate, diff.ToDate.
// The modification times are normally expressed in the ISO 8601 format.
// If not specified, the strings default to blanks.
func WriteContextDiff(writer io.Writer, diff ContextDiff) error {
buf := bufio.NewWriter(writer)
defer buf.Flush()
var diffErr error
wf := func(format string, args ...interface{}) {
_, err := buf.WriteString(fmt.Sprintf(format, args...))
if diffErr == nil && err != nil {
diffErr = err
}
}
ws := func(s string) {
_, err := buf.WriteString(s)
if diffErr == nil && err != nil {
diffErr = err
}
}
if len(diff.Eol) == 0 {
diff.Eol = "\n"
}
prefix := map[byte]string{
'i': "+ ",
'd': "- ",
'r': "! ",
'e': " ",
}
started := false
m := NewMatcher(diff.A, diff.B)
for _, g := range m.GetGroupedOpCodes(diff.Context) {
if !started {
started = true
fromDate := ""
if len(diff.FromDate) > 0 {
fromDate = "\t" + diff.FromDate
}
toDate := ""
if len(diff.ToDate) > 0 {
toDate = "\t" + diff.ToDate
}
if diff.FromFile != "" || diff.ToFile != "" {
wf("*** %s%s%s", diff.FromFile, fromDate, diff.Eol)
wf("--- %s%s%s", diff.ToFile, toDate, diff.Eol)
}
}
first, last := g[0], g[len(g)-1]
ws("***************" + diff.Eol)
range1 := formatRangeContext(first.I1, last.I2)
wf("*** %s ****%s", range1, diff.Eol)
for _, c := range g {
if c.Tag == 'r' || c.Tag == 'd' {
for _, cc := range g {
if cc.Tag == 'i' {
continue
}
for _, line := range diff.A[cc.I1:cc.I2] {
ws(prefix[cc.Tag] + line)
}
}
break
}
}
range2 := formatRangeContext(first.J1, last.J2)
wf("--- %s ----%s", range2, diff.Eol)
for _, c := range g {
if c.Tag == 'r' || c.Tag == 'i' {
for _, cc := range g {
if cc.Tag == 'd' {
continue
}
for _, line := range diff.B[cc.J1:cc.J2] {
ws(prefix[cc.Tag] + line)
}
}
break
}
}
}
return diffErr
}
// Like WriteContextDiff but returns the diff as a string.
func GetContextDiffString(diff ContextDiff) (string, error) {
w := &bytes.Buffer{}
err := WriteContextDiff(w, diff)
return string(w.Bytes()), err
}
// Split a string on "\n" while preserving the newlines. The output can be used
// as input for UnifiedDiff and ContextDiff structures.
func SplitLines(s string) []string {
lines := strings.SplitAfter(s, "\n")
lines[len(lines)-1] += "\n"
return lines
}

7
vendor/modules.txt vendored
View File

@@ -16,6 +16,10 @@ github.com/awalterschulze/gographviz/internal/parser
github.com/awalterschulze/gographviz/internal/token
# github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973
github.com/beorn7/perks/quantile
# github.com/campoy/embedmd v1.0.0
## explicit
github.com/campoy/embedmd
github.com/campoy/embedmd/embedmd
# github.com/containernetworking/cni v0.6.0
## explicit
github.com/containernetworking/cni/libcni
@@ -128,6 +132,8 @@ github.com/oklog/run
## explicit
# github.com/onsi/gomega v1.5.0
## explicit
# github.com/pmezard/go-difflib v1.0.0
github.com/pmezard/go-difflib/difflib
# github.com/prometheus/client_golang v0.9.2
## explicit
github.com/prometheus/client_golang/prometheus
@@ -148,6 +154,7 @@ github.com/prometheus/procfs/xfs
## explicit
github.com/spf13/cobra
# github.com/spf13/pflag v1.0.3
## explicit
github.com/spf13/pflag
# github.com/stretchr/testify v1.3.0
## explicit

4
website/docs/kg Normal file
View File

@@ -0,0 +1,4 @@
---
id: kg
hide_title: true
---

View File

@@ -0,0 +1,5 @@
---
id: userspace-wireguard
title: Userspace WireGuard
hide_title: true
---

View File

@@ -13,7 +13,7 @@ module.exports = {
alt: 'Kilo',
src: 'img/kilo.svg',
},
links: [
items: [
{
to: 'docs/introduction',
activeBasePath: 'docs',

View File

@@ -9,10 +9,9 @@
"deploy": "docusaurus deploy"
},
"dependencies": {
"@docusaurus/core": "^2.0.0-alpha.56",
"@docusaurus/preset-classic": "^2.0.0-alpha.56",
"@docusaurus/core": "2.0.0-alpha.71",
"@docusaurus/preset-classic": "2.0.0-alpha.71",
"classnames": "^2.2.6",
"minimist": "^1.2.3",
"react": "^16.8.4",
"react-dom": "^16.8.4"
},
@@ -27,5 +26,9 @@
"last 1 firefox version",
"last 1 safari version"
]
},
"resolutions": {
"minimist": "^1.2.3",
"node-fetch": "^2.6.1"
}
}

View File

@@ -7,12 +7,12 @@ module.exports = {
{
type: 'category',
label: 'Guides',
items: ['topology', 'vpn', 'vpn-server', 'multi-cluster-services', 'network-policies'],
items: ['topology', 'vpn', 'vpn-server', 'multi-cluster-services', 'network-policies', 'userspace-wireguard'],
},
{
type: 'category',
label: 'Reference',
items: ['annotations', 'kgctl'],
items: ['annotations', 'kg', 'kgctl'],
},
//Features: ['mdx'],
],

File diff suppressed because it is too large