35 Commits

Author SHA1 Message Date
leonnicolas
478a1b9945 manifests/: fix boringtun containers
A change in boringtun's CLI caused the boringtun containers to crash.

Signed-off-by: leonnicolas <leonloechner@gmx.de>
2022-07-11 23:30:10 +02:00
leonnicolas
e328646617 Pin boringtun image tag (#319)
* Pin boringtun image tag

Pin the image to a tag from before boringtun's CLI changed.
Specifically, the --disable-drop-privileges flag needs a boolean parameter.

* Fix image name
2022-07-11 23:17:05 +02:00
dependabot[bot]
6ebc914354 build(deps): bump eventsource from 1.1.0 to 1.1.1 in /website (#315)
Bumps [eventsource](https://github.com/EventSource/eventsource) from 1.1.0 to 1.1.1.
- [Release notes](https://github.com/EventSource/eventsource/releases)
- [Changelog](https://github.com/EventSource/eventsource/blob/master/HISTORY.md)
- [Commits](https://github.com/EventSource/eventsource/compare/v1.1.0...v1.1.1)

---
updated-dependencies:
- dependency-name: eventsource
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-06-13 21:03:15 +02:00
Antoine
4be792ea54 feat: cilium add-mode support (#312)
* feat: cilium add-mode support

When CNI management by Kilo is disabled, we can use the existing cluster's CNI setup thanks to add-on mode.

https://kilo.squat.ai/docs/introduction#add-on-mode

* feat: manifest example for cilium addon mode

* fix: apply comment from PR review

* fix: add mutex to interface retrieval into flannel addon mode
2022-05-20 02:13:07 +02:00
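For context, this is roughly how the new mode plugs in. The sketch below paraphrases the switch added to cmd/kg (visible in the diff further down this page); the constructor and type names come from Kilo's pkg/encapsulation package as shown in that diff, while pickEncapsulator itself is an illustrative helper, not code from the PR.

```go
// A condensed sketch, assuming Kilo's pkg/encapsulation API as it appears in this compare.
package sketch

import "github.com/squat/kilo/pkg/encapsulation"

// pickEncapsulator mirrors the switch in cmd/kg: the --compatibility flag now
// accepts "cilium", which selects an encapsulator that only tracks the
// cilium_host interface; Kilo is then expected to run with --cni=false and
// leave CNI management to the existing Cilium installation.
func pickEncapsulator(compatibility string, strategy encapsulation.Strategy) encapsulation.Encapsulator {
	switch compatibility {
	case "flannel":
		return encapsulation.NewFlannel(strategy)
	case "cilium":
		return encapsulation.NewCilium(strategy) // added by this change
	default:
		return encapsulation.NewIPIP(strategy)
	}
}
```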
Lucas Servén Marín
50fbc2eec2 staticcheck (#313)
* CI: use staticcheck for linting

This commit switches the linter for Go code from golint to staticcheck.
Golint has been deprecated since last year and staticcheck is a
recommended replacement.

Signed-off-by: Lucas Servén Marín <lserven@gmail.com>

* revendor

Signed-off-by: Lucas Servén Marín <lserven@gmail.com>

* cmd,pkg: fix lint warnings

Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
2022-05-19 19:45:43 +02:00
Lucas Servén Marín
93f46e03ea Merge pull request #311 from squat/dependabot/npm_and_yarn/website/cross-fetch-3.1.5
build(deps): bump cross-fetch from 3.1.4 to 3.1.5 in /website
2022-04-29 00:46:08 +02:00
dependabot[bot]
59ed36e81f build(deps): bump cross-fetch from 3.1.4 to 3.1.5 in /website
Bumps [cross-fetch](https://github.com/lquixada/cross-fetch) from 3.1.4 to 3.1.5.
- [Release notes](https://github.com/lquixada/cross-fetch/releases)
- [Commits](https://github.com/lquixada/cross-fetch/compare/v3.1.4...v3.1.5)

---
updated-dependencies:
- dependency-name: cross-fetch
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-04-28 22:26:28 +00:00
leonnicolas
0820a9d32f Remove context.TODO() (#310)
Remove almost all (except the ones created by informer-gen)
context.TODOs.

Signed-off-by: leonnicolas <leonloechner@gmx.de>
2022-04-28 19:39:57 +02:00
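The pattern, in a minimal sketch: a caller-supplied context replaces the bare stop channels and context.TODO() calls, so cancelling the context stops the informers and bounds in-flight API calls. The backend type here is a simplified stand-in for Kilo's node and peer backends in pkg/k8s, not the real implementation.

```go
package sketch

import (
	"context"
	"errors"

	"k8s.io/client-go/tools/cache"
)

// backend is a simplified stand-in for Kilo's node/peer backends.
type backend struct {
	informer cache.SharedIndexInformer
}

// Init now takes a context instead of a throwaway channel or context.TODO():
// the informer runs until the caller cancels, and the cache-sync wait is
// stopped by the same context.
func (b *backend) Init(ctx context.Context) error {
	go b.informer.Run(ctx.Done())
	if ok := cache.WaitForCacheSync(ctx.Done(), b.informer.HasSynced); !ok {
		return errors.New("failed to sync cache")
	}
	return nil
}
```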
Lucas Servén Marín
7aeaa855e7 Merge pull request #309 from squat/release-0.5
Release 0.5
2022-04-27 19:32:56 +02:00
Lucas Servén Marín
01bf238799 Merge pull request #307 from squat/cut-0.5.0
cut 0.5.0
2022-04-27 12:46:00 +02:00
Lucas Servén Marín
37a5aef6ea cut 0.5.0
Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
2022-04-25 10:39:39 +02:00
Lucas Servén Marín
5424c5eb55 Merge pull request #306 from squat/update_packages
go.*: Update k8s packages
2022-04-23 12:28:58 +02:00
leonnicolas
213688fd7d Update autogenerated code and CRD
Also edit Makefile to generate valid manifest.

Signed-off-by: leonnicolas <leonloechner@gmx.de>
2022-04-23 11:39:37 +02:00
leonnicolas
3eaacc01ae go.*: Update k8s packages
- update k8s client_go
 - update k8s apiextensions-apiserver
 - update k8s controller-tools

Signed-off-by: leonnicolas <leonloechner@gmx.de>
2022-04-23 11:09:50 +02:00
Lucas Servén Marín
e20d13ace0 Merge pull request #302 from squat/support_nftables
Dockerfile: support nftables
2022-04-23 09:30:42 +02:00
Lucas Servén Marín
0ddeea3d78 Merge pull request #305 from squat/pprof
Pprof
2022-04-22 18:59:23 +02:00
Lucas Servén Marín
bbc4fe30a6 vendor: revendor
Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
2022-04-22 12:05:46 +02:00
Lucas Servén Marín
7291a3bd71 cmd/kg: add pprof endpoints
This commit enhances the Kilo agent internal HTTP server to include
pprof endpoints. For simplicity, this commit migrates the internal
server creation to https://github.com/metalmatze/signal/internalserver,
which allows for easy registration of common internal server
observability endpoints.

Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
2022-04-22 12:03:56 +02:00
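A small, self-contained illustration of that wiring, assuming only the metalmatze/signal and Prometheus client packages; the handler options and endpoint names match the cmd/kg change, while the trivial health handler and the listen address are placeholders.

```go
package main

import (
	"log"
	"net/http"

	"github.com/metalmatze/signal/internalserver"
	"github.com/prometheus/client_golang/prometheus"
)

func main() {
	registry := prometheus.NewRegistry()

	// One handler serves an index page, /metrics, and the /debug/pprof/* endpoints.
	h := internalserver.NewHandler(
		internalserver.WithName("Internal Kilo API"),
		internalserver.WithPrometheusRegistry(registry),
		internalserver.WithPProf(),
	)
	// Custom endpoints are registered alongside the built-in ones.
	h.AddEndpoint("/health", "Exposes health checks", func(w http.ResponseWriter, _ *http.Request) {
		w.WriteHeader(http.StatusOK)
	})

	log.Fatal(http.ListenAndServe(":1107", h))
}
```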
Lucas Servén Marín
826593d6ba Merge pull request #303 from squat/bump_golang
Bump go and container base image
2022-04-21 21:54:47 +02:00
leonnicolas
6491d7b87f Bump go and container base image
- bump golang 1.17 -> 1.18
 - bump alpine 3.14 -> 3.15
 - revendor

 We need to use golang instead of golang:alpine because the alpine image
 no longer contains git. This should be fine because we do not enable CGO,
 so the binaries are not linked against musl or glibc anyway.

Signed-off-by: leonnicolas <leonloechner@gmx.de>
2022-04-21 21:35:54 +02:00
Lucas Servén Marín
d04da92a23 Dockerfile: support nftables
Currently, Kilo _only_ supports adding firewall rules via the legacy
iptables API. This means that on systems using nftables in the host
network namespace, the namespace will be polluted and both firewall
infrastructures will be used, causing unexpected and difficult
to predict interactions. In other words, networking may not work as
expected on nftables-based systems.

This PR fixes this by using the iptables-wrappers project [0] to install
run-time detection of the in-use iptables backend.

[0] https://github.com/kubernetes-sigs/iptables-wrappers

Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
2022-04-21 20:46:03 +02:00
Lucas Servén Marín
fc741bf444 Merge pull request #301 from squat/check_docs_in_ci
.github: ensure docs are up to date in CI
2022-04-21 20:40:50 +02:00
Lucas Servén Marín
8afe1bea53 Merge pull request #300 from squat/use_cni_0.4.0
manifests: use CNI 0.4.0
2022-04-21 08:26:42 +02:00
Lucas Servén Marín
112772d02d docs: regenerate
Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
2022-04-20 16:15:56 +02:00
Lucas Servén Marín
a385f1ac82 .github: ensure docs are up to date in CI
This commit updates the CI configuration for Kilo to ensure that the
documentation, specifically the generated docs, are up-to-date.

Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
2022-04-20 16:11:07 +02:00
Lucas Servén Marín
1f19133ea8 manifests: use CNI 0.4.0
As mentioned in the Kilo Slack [0], Kubernetes supports CNI 0.4.0 and
does not yet support 1.0.0. Correspondingly, this commit downgrades the
declared CNI version in the configuration to 0.4.0 and crucially updates
the configuration used in the e2e tests to exercise this new CNI
version.

[0] https://kubernetes.slack.com/archives/C022EB4R7TK/p1650455432970199?thread_ts=1650368553.132859&cid=C022EB4R7TK

Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
2022-04-20 14:57:21 +02:00
Lucas Servén Marín
7985ed5091 Merge pull request #299 from READ10/main
bump CNI plugins version and fix spec version
2022-04-19 14:49:47 +02:00
Dave Allan
19c13b7401 reduce cniVersion from 1.0.1 to 1.0.0 to match spec version 2022-04-19 08:28:31 -04:00
Dave Allan
3e6818d0b3 bump CNI plugins version to 1.1.1 2022-04-19 08:27:35 -04:00
Lucas Servén Marín
8cadff2b79 CNI: bump to 1.0.1 (#297)
* CNI: bump to 1.0.1

This commit bumps the declared version of CNI in the Kilo manifests to
1.0.1. This is possible with no changes to the configuration lists
because our simple configuration is not affected by any of the
deprecations, and there was effectively no change between 0.4.0 and
1.0.0, other than the declaration of a stable API. Similarly, this
commit also bumps the version of the CNI library and the plugins
package.

Bumping to CNI 1.0.0 will help ensure that Kilo stays compatible with
container runtimes in the future.

Signed-off-by: Lucas Servén Marín <lserven@gmail.com>

* vendor: revendor

Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
2022-04-18 19:00:37 +02:00
Lucas Servén Marín
6862274e8e Merge pull request #298 from squat/dependabot/npm_and_yarn/website/async-2.6.4
build(deps): bump async from 2.6.3 to 2.6.4 in /website
2022-04-17 00:43:23 +02:00
dependabot[bot]
a02542b529 build(deps): bump async from 2.6.3 to 2.6.4 in /website
Bumps [async](https://github.com/caolan/async) from 2.6.3 to 2.6.4.
- [Release notes](https://github.com/caolan/async/releases)
- [Changelog](https://github.com/caolan/async/blob/v2.6.4/CHANGELOG.md)
- [Commits](https://github.com/caolan/async/compare/v2.6.3...v2.6.4)

---
updated-dependencies:
- dependency-name: async
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-04-16 22:24:04 +00:00
Lucas Servén Marín
7dbbf52e1c Merge pull request #295 from squat/release-0.4
Release 0.4
2022-04-17 00:23:27 +02:00
dependabot[bot]
9a9131d965 build(deps): bump github.com/containernetworking/cni from 0.6.0 to 0.8.1 (#293) 2022-04-14 09:20:22 +00:00
Lucas Servén Marín
a6d50a8046 .github/workflows/release.yaml: clarify job name (#296)
Currently,the job to build kgctl binaries is named `linux`, which
suggests to the reader that the job is only building binaries for Linux,
when it is in fact building binaries for Linux, Darwin, and Windows.

Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
2022-04-13 20:23:13 +02:00
1138 changed files with 126497 additions and 19073 deletions

View File

@@ -19,7 +19,7 @@ jobs:
     - name: Set up Go
       uses: actions/setup-go@v2
       with:
-        go-version: 1.17.1
+        go-version: 1.18
     - name: Vendor
       run: |
         make vendor
@@ -32,10 +32,23 @@ jobs:
     - name: Set up Go
       uses: actions/setup-go@v2
       with:
-        go-version: 1.17.1
+        go-version: 1.18
     - name: Build
       run: make
+  docs:
+    runs-on: ubuntu-latest
+    steps:
+    - uses: actions/checkout@v2
+    - name: Set up Go
+      uses: actions/setup-go@v2
+      with:
+        go-version: 1.18
+    - name: Build docs
+      run: |
+        make gen-docs
+        git diff --exit-code
   linux:
     runs-on: ubuntu-latest
     steps:
@@ -43,7 +56,7 @@ jobs:
     - name: Set up Go
       uses: actions/setup-go@v2
       with:
-        go-version: 1.17.1
+        go-version: 1.18
     - name: Build kg and kgctl for all Linux Architectures
       run: make all-build
@@ -54,7 +67,7 @@ jobs:
     - name: Set up Go
       uses: actions/setup-go@v2
      with:
-        go-version: 1.17.1
+        go-version: 1.18
     - name: Build kgctl for Darwin amd64
       run: make OS=darwin ARCH=amd64
     - name: Build kgctl for Darwin arm64
@@ -67,7 +80,7 @@ jobs:
     - name: Set up Go
       uses: actions/setup-go@v2
       with:
-        go-version: 1.17.1
+        go-version: 1.18
     - name: Build kgctl for Windows
       run: make OS=windows
@@ -78,7 +91,7 @@ jobs:
     - name: Set up Go
       uses: actions/setup-go@v2
       with:
-        go-version: 1.17.1
+        go-version: 1.18
     - name: Run Unit Tests
       run: make unit
@@ -90,7 +103,7 @@ jobs:
     - name: Set up Go
       uses: actions/setup-go@v2
       with:
-        go-version: 1.17.1
+        go-version: 1.18
     - name: Run e2e Tests
       run: make e2e
@@ -101,7 +114,7 @@ jobs:
     - name: Set up Go
       uses: actions/setup-go@v2
       with:
-        go-version: 1.17.1
+        go-version: 1.18
     - name: Lint Code
       run: make lint
@@ -112,7 +125,7 @@ jobs:
     - name: Set up Go
       uses: actions/setup-go@v2
       with:
-        go-version: 1.17.1
+        go-version: 1.18
     - name: Enable Experimental Docker CLI
       run: |
         echo $'{\n "experimental": true\n}' | sudo tee /etc/docker/daemon.json
@@ -142,7 +155,7 @@ jobs:
     - name: Set up Go
       uses: actions/setup-go@v2
       with:
-        go-version: 1.17.1
+        go-version: 1.18
     - name: Enable Experimental Docker CLI
       run: |
         echo $'{\n "experimental": true\n}' | sudo tee /etc/docker/daemon.json

View File

@@ -3,15 +3,15 @@ on:
   types: [created]
 name: Handle Release
 jobs:
-  linux:
+  kgctl:
     runs-on: ubuntu-latest
     steps:
     - uses: actions/checkout@v2
     - name: Set up Go
       uses: actions/setup-go@v2
       with:
-        go-version: 1.17.1
+        go-version: 1.18
-    - name: Make Directory with kgctl Binaries to Be Released
+    - name: Build kgctl Binaries to Be Released
       run: make release
     - name: Publish Release
       uses: skx/github-action-publish-binaries@master

View File

@@ -1,7 +1,7 @@
ARG FROM=alpine ARG FROM=alpine
FROM $FROM AS cni FROM $FROM AS cni
ARG GOARCH=amd64 ARG GOARCH=amd64
ARG CNI_PLUGINS_VERSION=v0.9.1 ARG CNI_PLUGINS_VERSION=v1.1.1
RUN apk add --no-cache curl && \ RUN apk add --no-cache curl && \
curl -Lo cni.tar.gz https://github.com/containernetworking/plugins/releases/download/$CNI_PLUGINS_VERSION/cni-plugins-linux-$GOARCH-$CNI_PLUGINS_VERSION.tgz && \ curl -Lo cni.tar.gz https://github.com/containernetworking/plugins/releases/download/$CNI_PLUGINS_VERSION/cni-plugins-linux-$GOARCH-$CNI_PLUGINS_VERSION.tgz && \
tar -xf cni.tar.gz tar -xf cni.tar.gz
@@ -13,5 +13,7 @@ LABEL maintainer="squat <lserven@gmail.com>"
RUN echo -e "https://alpine.global.ssl.fastly.net/alpine/$ALPINE_VERSION/main\nhttps://alpine.global.ssl.fastly.net/alpine/$ALPINE_VERSION/community" > /etc/apk/repositories && \ RUN echo -e "https://alpine.global.ssl.fastly.net/alpine/$ALPINE_VERSION/main\nhttps://alpine.global.ssl.fastly.net/alpine/$ALPINE_VERSION/community" > /etc/apk/repositories && \
apk add --no-cache ipset iptables ip6tables graphviz font-noto apk add --no-cache ipset iptables ip6tables graphviz font-noto
COPY --from=cni bridge host-local loopback portmap /opt/cni/bin/ COPY --from=cni bridge host-local loopback portmap /opt/cni/bin/
ADD https://raw.githubusercontent.com/kubernetes-sigs/iptables-wrappers/e139a115350974aac8a82ec4b815d2845f86997e/iptables-wrapper-installer.sh /
RUN chmod 700 /iptables-wrapper-installer.sh && /iptables-wrapper-installer.sh --no-sanity-check
COPY bin/linux/$GOARCH/kg /opt/bin/ COPY bin/linux/$GOARCH/kg /opt/bin/
ENTRYPOINT ["/opt/bin/kg"] ENTRYPOINT ["/opt/bin/kg"]

View File

@@ -38,15 +38,15 @@ DOCS_GEN_BINARY := bin/docs-gen
 DEEPCOPY_GEN_BINARY := bin/deepcopy-gen
 INFORMER_GEN_BINARY := bin/informer-gen
 LISTER_GEN_BINARY := bin/lister-gen
-GOLINT_BINARY := bin/golint
+STATICCHECK_BINARY := bin/staticcheck
 EMBEDMD_BINARY := bin/embedmd
 KIND_BINARY := $(shell pwd)/bin/kind
 KUBECTL_BINARY := $(shell pwd)/bin/kubectl
 BASH_UNIT := $(shell pwd)/bin/bash_unit
 BASH_UNIT_FLAGS :=
-BUILD_IMAGE ?= golang:1.17.1-alpine3.14
-BASE_IMAGE ?= alpine:3.14
+BUILD_IMAGE ?= golang:1.18.0
+BASE_IMAGE ?= alpine:3.15
 build: $(BINS)
@@ -81,7 +81,7 @@ crd: manifests/crds.yaml
 manifests/crds.yaml: pkg/k8s/apis/kilo/v1alpha1/types.go $(CONTROLLER_GEN_BINARY)
     $(CONTROLLER_GEN_BINARY) crd \
     paths=./pkg/k8s/apis/kilo/... \
-    output:crd:stdout | tail -n +3 > $@
+    output:crd:stdout > $@
 client: pkg/k8s/clientset/versioned/typed/kilo/v1alpha1/peer.go
 pkg/k8s/clientset/versioned/typed/kilo/v1alpha1/peer.go: .header pkg/k8s/apis/kilo/v1alpha1/types.go $(CLIENT_GEN_BINARY)
@@ -139,7 +139,7 @@ pkg/k8s/listers/kilo/v1alpha1/peer.go: .header pkg/k8s/apis/kilo/v1alpha1/types.
     rm -r github.com || true
     go fmt ./pkg/k8s/listers/...
-gen-docs: generate docs/api.md
+gen-docs: generate docs/api.md docs/kg.md
 docs/api.md: pkg/k8s/apis/kilo/v1alpha1/types.go $(DOCS_GEN_BINARY)
     $(DOCS_GEN_BINARY) $< > $@
@@ -165,7 +165,7 @@ fmt:
     @echo $(GO_PKGS)
     gofmt -w -s $(GO_FILES)
-lint: header $(GOLINT_BINARY)
+lint: header $(STATICCHECK_BINARY)
     @echo 'go vet $(GO_PKGS)'
     @vet_res=$$(GO111MODULE=on go vet -mod=vendor $(GO_PKGS) 2>&1); if [ -n "$$vet_res" ]; then \
         echo ""; \
@@ -174,10 +174,10 @@ lint: header $(GOLINT_BINARY)
         echo "$$vet_res"; \
         exit 1; \
     fi
-    @echo '$(GOLINT_BINARY) $(GO_PKGS)'
-    @lint_res=$$($(GOLINT_BINARY) $(GO_PKGS)); if [ -n "$$lint_res" ]; then \
+    @echo '$(STATICCHECK_BINARY) $(GO_PKGS)'
+    @lint_res=$$($(STATICCHECK_BINARY) $(GO_PKGS)); if [ -n "$$lint_res" ]; then \
         echo ""; \
-        echo "Golint found style issues. Please check the reported issues"; \
+        echo "Staticcheck found style issues. Please check the reported issues"; \
         echo "and fix them if necessary before submitting the code for review:"; \
         echo "$$lint_res"; \
         exit 1; \
@@ -358,8 +358,8 @@ $(LISTER_GEN_BINARY):
 $(DOCS_GEN_BINARY): cmd/docs-gen/main.go
     go build -mod=vendor -o $@ ./cmd/docs-gen
-$(GOLINT_BINARY):
-    go build -mod=vendor -o $@ golang.org/x/lint/golint
+$(STATICCHECK_BINARY):
+    go build -mod=vendor -o $@ honnef.co/go/tools/cmd/staticcheck
 $(EMBEDMD_BINARY):
     go build -mod=vendor -o $@ github.com/campoy/embedmd

View File

@@ -15,6 +15,7 @@
 package main

 import (
+    "context"
     "errors"
     "fmt"
     "net"
@@ -27,10 +28,10 @@ import (
     "github.com/go-kit/kit/log"
     "github.com/go-kit/kit/log/level"
+    "github.com/metalmatze/signal/internalserver"
     "github.com/oklog/run"
     "github.com/prometheus/client_golang/prometheus"
     "github.com/prometheus/client_golang/prometheus/collectors"
-    "github.com/prometheus/client_golang/prometheus/promhttp"
     "github.com/spf13/cobra"
     apiextensions "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
     "k8s.io/client-go/kubernetes"
@@ -212,6 +213,8 @@ func runRoot(_ *cobra.Command, _ []string) error {
     switch compatibility {
     case "flannel":
         enc = encapsulation.NewFlannel(e)
+    case "cilium":
+        enc = encapsulation.NewCilium(e)
     default:
         enc = encapsulation.NewIPIP(e)
     }
@@ -251,18 +254,21 @@ func runRoot(_ *cobra.Command, _ []string) error {
     var g run.Group
     {
+        h := internalserver.NewHandler(
+            internalserver.WithName("Internal Kilo API"),
+            internalserver.WithPrometheusRegistry(registry),
+            internalserver.WithPProf(),
+        )
+        h.AddEndpoint("/health", "Exposes health checks", healthHandler)
+        h.AddEndpoint("/graph", "Exposes Kilo mesh topology graph", (&graphHandler{m, gr, &hostname, s}).ServeHTTP)
         // Run the HTTP server.
-        mux := http.NewServeMux()
-        mux.HandleFunc("/health", healthHandler)
-        mux.Handle("/graph", &graphHandler{m, gr, &hostname, s})
-        mux.Handle("/metrics", promhttp.HandlerFor(registry, promhttp.HandlerOpts{}))
         l, err := net.Listen("tcp", listen)
         if err != nil {
             return fmt.Errorf("failed to listen on %s: %v", listen, err)
         }
         g.Add(func() error {
-            if err := http.Serve(l, mux); err != nil && err != http.ErrServerClosed {
+            if err := http.Serve(l, h); err != nil && err != http.ErrServerClosed {
                 return fmt.Errorf("error: server exited unexpectedly: %v", err)
             }
             return nil
@@ -272,15 +278,16 @@ func runRoot(_ *cobra.Command, _ []string) error {
     }
     {
+        ctx, cancel := context.WithCancel(context.Background())
         // Start the mesh.
         g.Add(func() error {
             logger.Log("msg", fmt.Sprintf("Starting Kilo network mesh '%v'.", version.Version))
-            if err := m.Run(); err != nil {
+            if err := m.Run(ctx); err != nil {
                 return fmt.Errorf("error: Kilo exited unexpectedly: %v", err)
             }
             return nil
         }, func(error) {
-            m.Stop()
+            cancel()
         })
     }

View File

@@ -276,8 +276,6 @@ func cleanUp(iface int, t *route.Table, logger log.Logger) {
     if err := t.CleanUp(); err != nil {
         level.Error(logger).Log("failed to clean up routes: %v", err)
     }
-    return
 }

 func sync(table *route.Table, peerName string, privateKey wgtypes.Key, iface int, logger log.Logger) error {

View File

@@ -71,7 +71,7 @@ var (
     topologyLabel string
 )
-func runRoot(_ *cobra.Command, _ []string) error {
+func runRoot(c *cobra.Command, _ []string) error {
     if opts.port < 1 || opts.port > 1<<16-1 {
         return fmt.Errorf("invalid port: port mus be in range [%d:%d], but got %d", 1, 1<<16-1, opts.port)
     }
@@ -99,11 +99,11 @@ func runRoot(_ *cobra.Command, _ []string) error {
         return fmt.Errorf("backend %s unknown; posible values are: %s", backend, availableBackends)
     }
-    if err := opts.backend.Nodes().Init(make(chan struct{})); err != nil {
+    if err := opts.backend.Nodes().Init(c.Context()); err != nil {
         return fmt.Errorf("failed to initialize node backend: %w", err)
     }
-    if err := opts.backend.Peers().Init(make(chan struct{})); err != nil {
+    if err := opts.backend.Peers().Init(c.Context()); err != nil {
         return fmt.Errorf("failed to initialize peer backend: %w", err)
     }
     return nil

View File

@@ -16,7 +16,22 @@ The behavior of `kg` can be configured using the command line flags listed below
 [embedmd]:# (../tmp/help.txt)
 ```txt
-Usage of bin//linux/amd64/kg:
+kg is the Kilo agent.
+It runs on every node of a cluster,
+setting up the public and private keys for the VPN
+as well as the necessary rules to route packets between locations.
+
+Usage:
+  kg [flags]
+  kg [command]
+
+Available Commands:
+  completion  generate the autocompletion script for the specified shell
+  help        Help about any command
+  version     Print the version and exit.
+  webhook     webhook starts a HTTPS server to validate updates and creations of Kilo peers.
+
+Flags:
       --backend string        The backend for the mesh. Possible values: kubernetes (default "kubernetes")
       --clean-up-interface    Should Kilo delete its interface when it shuts down?
       --cni                   Should Kilo manage the node's CNI configuration? (default true)
@@ -24,8 +39,10 @@ Usage of bin//linux/amd64/kg:
       --compatibility string    Should Kilo run in compatibility mode? Possible values: flannel
       --create-interface        Should kilo create an interface on startup? (default true)
       --encapsulate string      When should Kilo encapsulate packets within a location? Possible values: never, crosssubnet, always (default "always")
+  -h, --help                    help for kg
       --hostname string         Hostname of the node on which this process is running.
       --interface string        Name of the Kilo interface to use; if it does not exist, it will be created. (default "kilo0")
+      --iptables-forward-rules  Add default accept rules to the FORWARD chain in iptables. Warning: this may break firewalls with a deny all policy and is potentially insecure!
       --kubeconfig string       Path to kubeconfig.
       --listen string           The address at which to listen for health and metrics. (default ":1107")
       --local                   Should Kilo manage routes within a location? (default true)
@@ -33,9 +50,11 @@ Usage of bin//linux/amd64/kg:
       --master string             The address of the Kubernetes API server (overrides any value in kubeconfig).
       --mesh-granularity string   The granularity of the network mesh to create. Possible values: location, full (default "location")
       --mtu uint                  The MTU of the WireGuard interface created by Kilo. (default 1420)
-      --port uint                 The port over which WireGuard peers should communicate. (default 51820)
+      --port int                  The port over which WireGuard peers should communicate. (default 51820)
+      --prioritise-private-addresses  Prefer to assign a private IP address to the node's endpoint.
       --resync-period duration    How often should the Kilo controllers reconcile? (default 30s)
       --subnet string             CIDR from which to allocate addresses for WireGuard interfaces. (default "10.4.0.0/16")
       --topology-label string     Kubernetes node label used to group nodes into logical locations. (default "topology.kubernetes.io/region")
       --version                   Print version and exit
 ```

View File

@@ -8,7 +8,7 @@ metadata:
 data:
   cni-conf.json: |
     {
-      "cniVersion":"0.3.1",
+      "cniVersion":"0.4.0",
       "name":"kilo",
       "plugins":[
         {
@@ -136,9 +136,9 @@ spec:
           mountPath: /etc/kubernetes
           readOnly: true
       - name: boringtun
-        image: leonnicolas/boringtun:alpine
+        image: leonnicolas/boringtun
         args:
-        - --disable-drop-privileges=true
+        - --disable-drop-privileges
         - --foreground
         - kilo0
         securityContext:

View File

@@ -65,7 +65,7 @@ build_kind_config() {
 }

 create_interface() {
-    docker run -d --name="$1" --rm --network=host --cap-add=NET_ADMIN --device=/dev/net/tun -v /var/run/wireguard:/var/run/wireguard -e WG_LOG_LEVEL=debug leonnicolas/boringtun --foreground --disable-drop-privileges true "$1"
+    docker run -d --name="$1" --rm --network=host --cap-add=NET_ADMIN --device=/dev/net/tun -v /var/run/wireguard:/var/run/wireguard -e WG_LOG_LEVEL=debug leonnicolas/boringtun --foreground --disable-drop-privileges "$1"
 }

 delete_interface() {

go.mod
View File

@@ -1,50 +1,51 @@
 module github.com/squat/kilo

-go 1.17
+go 1.18

 require (
     github.com/awalterschulze/gographviz v0.0.0-20181013152038-b2885df04310
     github.com/campoy/embedmd v1.0.0
-    github.com/containernetworking/cni v0.6.0
-    github.com/containernetworking/plugins v0.6.0
-    github.com/coreos/go-iptables v0.4.0
+    github.com/containernetworking/cni v1.0.1
+    github.com/containernetworking/plugins v1.1.1
+    github.com/coreos/go-iptables v0.6.0
     github.com/go-kit/kit v0.9.0
-    github.com/imdario/mergo v0.3.6 // indirect
     github.com/kylelemons/godebug v0.0.0-20170820004349-d65d576e9348
+    github.com/metalmatze/signal v0.0.0-20210307161603-1c9aa721a97a
     github.com/oklog/run v1.1.0
     github.com/prometheus/client_golang v1.11.0
-    github.com/spf13/cobra v1.1.3
-    github.com/vishvananda/netlink v1.0.0
-    github.com/vishvananda/netns v0.0.0-20180720170159-13995c7128cc // indirect
-    golang.org/x/lint v0.0.0-20200302205851-738671d3881b
+    github.com/spf13/cobra v1.2.1
+    github.com/vishvananda/netlink v1.1.1-0.20210330154013-f5de75959ad5
     golang.org/x/sys v0.0.0-20211124211545-fe61309f8881
     golang.zx2c4.com/wireguard/wgctrl v0.0.0-20211124212657-dd7407c86d22
-    k8s.io/api v0.21.1
-    k8s.io/apiextensions-apiserver v0.21.1
-    k8s.io/apimachinery v0.21.1
-    k8s.io/client-go v0.21.1
-    k8s.io/code-generator v0.21.1
-    sigs.k8s.io/controller-tools v0.6.0
+    honnef.co/go/tools v0.3.1
+    k8s.io/api v0.23.6
+    k8s.io/apiextensions-apiserver v0.23.6
+    k8s.io/apimachinery v0.23.6
+    k8s.io/client-go v0.23.6
+    k8s.io/code-generator v0.23.6
+    sigs.k8s.io/controller-tools v0.8.0
 )

 require (
+    github.com/BurntSushi/toml v0.4.1 // indirect
     github.com/beorn7/perks v1.0.1 // indirect
     github.com/cespare/xxhash/v2 v2.1.1 // indirect
     github.com/davecgh/go-spew v1.1.1 // indirect
-    github.com/evanphx/json-patch v4.9.0+incompatible // indirect
+    github.com/evanphx/json-patch v4.12.0+incompatible // indirect
     github.com/fatih/color v1.12.0 // indirect
     github.com/go-logfmt/logfmt v0.5.0 // indirect
-    github.com/go-logr/logr v0.4.0 // indirect
-    github.com/gobuffalo/flect v0.2.2 // indirect
+    github.com/go-logr/logr v1.2.0 // indirect
+    github.com/gobuffalo/flect v0.2.3 // indirect
     github.com/gogo/protobuf v1.3.2 // indirect
     github.com/golang/protobuf v1.5.2 // indirect
     github.com/google/go-cmp v0.5.6 // indirect
     github.com/google/gofuzz v1.1.0 // indirect
-    github.com/googleapis/gnostic v0.4.1 // indirect
-    github.com/hashicorp/golang-lru v0.5.1 // indirect
+    github.com/google/uuid v1.2.0 // indirect
+    github.com/googleapis/gnostic v0.5.5 // indirect
+    github.com/imdario/mergo v0.3.11 // indirect
     github.com/inconshreveable/mousetrap v1.0.0 // indirect
     github.com/josharian/native v0.0.0-20200817173448-b6b71def0850 // indirect
-    github.com/json-iterator/go v1.1.11 // indirect
+    github.com/json-iterator/go v1.1.12 // indirect
     github.com/mattn/go-colorable v0.1.8 // indirect
     github.com/mattn/go-isatty v0.0.12 // indirect
     github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369 // indirect
@@ -52,32 +53,36 @@ require (
     github.com/mdlayher/netlink v1.4.1 // indirect
     github.com/mdlayher/socket v0.0.0-20211102153432-57e3fa563ecb // indirect
     github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
-    github.com/modern-go/reflect2 v1.0.1 // indirect
+    github.com/modern-go/reflect2 v1.0.2 // indirect
     github.com/pkg/errors v0.9.1 // indirect
     github.com/pmezard/go-difflib v1.0.0 // indirect
     github.com/prometheus/client_model v0.2.0 // indirect
-    github.com/prometheus/common v0.26.0 // indirect
+    github.com/prometheus/common v0.28.0 // indirect
     github.com/prometheus/procfs v0.6.0 // indirect
+    github.com/safchain/ethtool v0.0.0-20210803160452-9aa261dae9b1 // indirect
     github.com/spf13/pflag v1.0.5 // indirect
+    github.com/vishvananda/netns v0.0.0-20210104183010-2eb08e3e575f // indirect
     golang.org/x/crypto v0.0.0-20211117183948-ae814b36b871 // indirect
-    golang.org/x/mod v0.4.2 // indirect
-    golang.org/x/net v0.0.0-20211123203042-d83791d6bcd9 // indirect
-    golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d // indirect
-    golang.org/x/term v0.0.0-20210220032956-6a3ed077a48d // indirect
-    golang.org/x/text v0.3.6 // indirect
-    golang.org/x/time v0.0.0-20210220033141-f8bda1e9f3ba // indirect
-    golang.org/x/tools v0.1.2 // indirect
+    golang.org/x/exp/typeparams v0.0.0-20220218215828-6cf2b201936e // indirect
+    golang.org/x/mod v0.6.0-dev.0.20220106191415-9b9b3d81d5e3 // indirect
+    golang.org/x/net v0.0.0-20211209124913-491a49abca63 // indirect
+    golang.org/x/oauth2 v0.0.0-20210819190943-2bc19b11175f // indirect
+    golang.org/x/term v0.0.0-20210615171337-6886f2dfbf5b // indirect
+    golang.org/x/text v0.3.7 // indirect
+    golang.org/x/time v0.0.0-20210723032227-1f47c861a9ac // indirect
+    golang.org/x/tools v0.1.11-0.20220316014157-77aa08bb151a // indirect
     golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 // indirect
     golang.zx2c4.com/wireguard v0.0.0-20211123210315-387f7c461a16 // indirect
-    google.golang.org/appengine v1.6.5 // indirect
-    google.golang.org/protobuf v1.26.0 // indirect
+    google.golang.org/appengine v1.6.7 // indirect
+    google.golang.org/protobuf v1.27.1 // indirect
     gopkg.in/inf.v0 v0.9.1 // indirect
     gopkg.in/yaml.v2 v2.4.0 // indirect
     gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b // indirect
-    k8s.io/gengo v0.0.0-20201214224949-b6c5ce23f027 // indirect
-    k8s.io/klog/v2 v2.8.0 // indirect
-    k8s.io/kube-openapi v0.0.0-20210305001622-591a79e4bda7 // indirect
-    k8s.io/utils v0.0.0-20201110183641-67b214c5f920 // indirect
-    sigs.k8s.io/structured-merge-diff/v4 v4.1.0 // indirect
-    sigs.k8s.io/yaml v1.2.0 // indirect
+    k8s.io/gengo v0.0.0-20210813121822-485abfe95c7c // indirect
+    k8s.io/klog/v2 v2.30.0 // indirect
+    k8s.io/kube-openapi v0.0.0-20211115234752-e816edb12b65 // indirect
+    k8s.io/utils v0.0.0-20211116205334-6203023598ed // indirect
+    sigs.k8s.io/json v0.0.0-20211020170558-c049b76a60c6 // indirect
+    sigs.k8s.io/structured-merge-diff/v4 v4.2.1 // indirect
+    sigs.k8s.io/yaml v1.3.0 // indirect
 )

go.sum

File diff suppressed because it is too large.

View File

@@ -1,8 +1,9 @@
+---
 apiVersion: apiextensions.k8s.io/v1
 kind: CustomResourceDefinition
 metadata:
   annotations:
-    controller-gen.kubebuilder.io/version: v0.6.0
+    controller-gen.kubebuilder.io/version: v0.8.0
   creationTimestamp: null
   name: peers.kilo.squat.ai
 spec:

View File

@@ -67,7 +67,7 @@ spec:
       hostNetwork: true
       containers:
       - name: kilo
-        image: squat/kilo:0.4.1
+        image: squat/kilo:0.5.0
         args:
         - --kubeconfig=/etc/kubernetes/kubeconfig
         - --hostname=$(NODE_NAME)

View File

@@ -8,7 +8,7 @@ metadata:
 data:
   cni-conf.json: |
     {
-      "cniVersion":"0.3.1",
+      "cniVersion":"0.4.0",
       "name":"kilo",
       "plugins":[
         {
@@ -101,7 +101,7 @@ spec:
       hostNetwork: true
       containers:
       - name: kilo
-        image: squat/kilo:0.4.1
+        image: squat/kilo:0.5.0
         args:
         - --kubeconfig=/etc/kubernetes/kubeconfig
         - --hostname=$(NODE_NAME)
@@ -131,7 +131,7 @@ spec:
           readOnly: false
       initContainers:
       - name: install-cni
-        image: squat/kilo:0.4.1
+        image: squat/kilo:0.5.0
         command:
         - /bin/sh
         - -c

View File

@@ -96,7 +96,7 @@ spec:
       hostNetwork: true
       containers:
       - name: kilo
-        image: squat/kilo:0.4.1
+        image: squat/kilo:0.5.0
         args:
         - --kubeconfig=/etc/kubernetes/kubeconfig
         - --hostname=$(NODE_NAME)
@@ -127,7 +127,7 @@ spec:
           readOnly: false
       initContainers:
       - name: generate-kubeconfig
-        image: squat/kilo:0.4.1
+        image: squat/kilo:0.5.0
         command:
         - /bin/sh
         args:

View File

@@ -8,7 +8,7 @@ metadata:
 data:
   cni-conf.json: |
     {
-      "cniVersion":"0.3.1",
+      "cniVersion":"0.4.0",
       "name":"kilo",
       "plugins":[
         {
@@ -133,7 +133,7 @@ spec:
       hostNetwork: true
       containers:
       - name: kilo
-        image: squat/kilo:0.4.1
+        image: squat/kilo:0.5.0
         args:
         - --kubeconfig=/etc/kubernetes/kubeconfig
         - --hostname=$(NODE_NAME)
@@ -164,7 +164,7 @@ spec:
           readOnly: false
       initContainers:
       - name: generate-kubeconfig
-        image: squat/kilo:0.4.1
+        image: squat/kilo:0.5.0
         command:
         - /bin/sh
         args:
@@ -185,7 +185,7 @@ spec:
             fieldRef:
               fieldPath: metadata.namespace
       - name: install-cni
-        image: squat/kilo:0.4.1
+        image: squat/kilo:0.5.0
         command:
         - /bin/sh
         - -c
@@ -264,7 +264,7 @@ spec:
       hostNetwork: true
       containers:
       - name: kilo
-        image: squat/kilo:0.4.1
+        image: squat/kilo:0.5.0
         args:
         - --kubeconfig=/etc/kubernetes/kubeconfig
         - --hostname=$(NODE_NAME)
@@ -298,9 +298,9 @@ spec:
           mountPath: /var/run/wireguard
           readOnly: false
       - name: boringtun
-        image: leonnicolas/boringtun
+        image: leonnicolas/boringtun:cc19859
         args:
-        - --disable-drop-privileges=true
+        - --disable-drop-privileges
         - --foreground
         - kilo0
         securityContext:
@@ -311,7 +311,7 @@ spec:
           readOnly: false
       initContainers:
       - name: generate-kubeconfig
-        image: squat/kilo:0.4.1
+        image: squat/kilo:0.5.0
         command:
         - /bin/sh
         args:
@@ -332,7 +332,7 @@ spec:
             fieldRef:
               fieldPath: metadata.namespace
       - name: install-cni
-        image: squat/kilo:0.4.1
+        image: squat/kilo:0.5.0
         command:
         - /bin/sh
         - -c
@@ -428,7 +428,7 @@ spec:
           readOnly: true
       initContainers:
       - name: generate-kubeconfig
-        image: squat/kilo:0.4.1
+        image: squat/kilo:0.5.0
         command:
         - /bin/sh
         args:

View File

@@ -8,7 +8,7 @@ metadata:
 data:
   cni-conf.json: |
     {
-      "cniVersion":"0.3.1",
+      "cniVersion":"0.4.0",
       "name":"kilo",
       "plugins":[
         {
@@ -131,7 +131,7 @@ spec:
       hostNetwork: true
       containers:
       - name: kilo
-        image: squat/kilo:0.4.1
+        image: squat/kilo:0.5.0
         args:
         - --kubeconfig=/etc/kubernetes/kubeconfig
         - --hostname=$(NODE_NAME)
@@ -165,9 +165,9 @@ spec:
           mountPath: /var/run/wireguard
           readOnly: false
       - name: boringtun
-        image: leonnicolas/boringtun
+        image: leonnicolas/boringtun:cc19859
         args:
-        - --disable-drop-privileges=true
+        - --disable-drop-privileges
         - --foreground
         - kilo0
         securityContext:
@@ -178,7 +178,7 @@ spec:
           readOnly: false
       initContainers:
       - name: generate-kubeconfig
-        image: squat/kilo:0.4.1
+        image: squat/kilo:0.5.0
         command:
         - /bin/sh
         args:
@@ -199,7 +199,7 @@ spec:
             fieldRef:
               fieldPath: metadata.namespace
       - name: install-cni
-        image: squat/kilo:0.4.1
+        image: squat/kilo:0.5.0
         command:
         - /bin/sh
         - -c

View File

@@ -8,7 +8,7 @@ metadata:
 data:
   cni-conf.json: |
     {
-      "cniVersion":"0.3.1",
+      "cniVersion":"0.4.0",
       "name":"kilo",
       "plugins":[
         {
@@ -130,7 +130,7 @@ spec:
       hostNetwork: true
       containers:
       - name: kilo
-        image: squat/kilo:0.4.1
+        image: squat/kilo:0.5.0
         args:
         - --kubeconfig=/etc/kubernetes/kubeconfig
         - --hostname=$(NODE_NAME)
@@ -160,7 +160,7 @@ spec:
           readOnly: false
       initContainers:
       - name: generate-kubeconfig
-        image: squat/kilo:0.4.1
+        image: squat/kilo:0.5.0
         command:
         - /bin/sh
         args:
@@ -181,7 +181,7 @@ spec:
             fieldRef:
               fieldPath: metadata.namespace
       - name: install-cni
-        image: squat/kilo:0.4.1
+        image: squat/kilo:0.5.0
         command:
         - /bin/sh
         - -c

View File

@@ -0,0 +1,142 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kilo
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kilo
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - patch
  - watch
- apiGroups:
  - kilo.squat.ai
  resources:
  - peers
  verbs:
  - list
  - watch
- apiGroups:
  - apiextensions.k8s.io
  resources:
  - customresourcedefinitions
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kilo
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kilo
subjects:
- kind: ServiceAccount
  name: kilo
  namespace: kube-system
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kilo
  namespace: kube-system
  labels:
    app.kubernetes.io/name: kilo
    app.kubernetes.io/part-of: kilo
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: kilo
      app.kubernetes.io/part-of: kilo
  template:
    metadata:
      labels:
        app.kubernetes.io/name: kilo
        app.kubernetes.io/part-of: kilo
    spec:
      serviceAccountName: kilo
      hostNetwork: true
      containers:
      - name: kilo
        image: squat/kilo:0.5.0
        args:
        - --kubeconfig=/etc/kubernetes/kubeconfig
        - --hostname=$(NODE_NAME)
        - --cni=false
        - --compatibility=cilium
        - --local=false
        # additional and also optional flags
        - --encapsulate=crosssubnet
        - --clean-up-interface=true
        - --subnet=172.31.254.0/24
        - --log-level=all
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        ports:
        - containerPort: 1107
          name: metrics
        securityContext:
          privileged: true
        volumeMounts:
        - name: kilo-dir
          mountPath: /var/lib/kilo
        # with kube-proxy configmap
        # - name: kubeconfig
        #   mountPath: /etc/kubernetes
        #   readOnly: true
        # without kube-proxy: host kubeconfig binding
        - name: kubeconfig
          mountPath: /etc/kubernetes/kubeconfig
          subPath: admin.conf
          readOnly: true
        - name: lib-modules
          mountPath: /lib/modules
          readOnly: true
        - name: xtables-lock
          mountPath: /run/xtables.lock
          readOnly: false
      tolerations:
      - effect: NoSchedule
        operator: Exists
      - effect: NoExecute
        operator: Exists
      volumes:
      - name: kilo-dir
        hostPath:
          path: /var/lib/kilo
      # with kube-proxy configmap
      # - name: kubeconfig
      #   configMap:
      #     name: kube-proxy
      #     items:
      #     - key: kubeconfig.conf
      #       path: kubeconfig
      # without kube-proxy: host kubeconfig binding
      - name: kubeconfig
        hostPath:
          path: /etc/kubernetes
      - name: lib-modules
        hostPath:
          path: /lib/modules
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate

View File

@@ -67,7 +67,7 @@ spec:
       hostNetwork: true
       containers:
       - name: boringtun
-        image: leonnicolas/boringtun
+        image: leonnicolas/boringtun:cc19859
         args:
         - --disable-drop-privileges=true
         - --foreground
@@ -79,7 +79,7 @@ spec:
           mountPath: /var/run/wireguard
           readOnly: false
       - name: kilo
-        image: squat/kilo:0.4.1
+        image: squat/kilo:0.5.0
         args:
         - --kubeconfig=/etc/kubernetes/kubeconfig
         - --hostname=$(NODE_NAME)

View File

@@ -67,7 +67,7 @@ spec:
       hostNetwork: true
       containers:
       - name: kilo
-        image: squat/kilo:0.4.1
+        image: squat/kilo:0.5.0
         args:
         - --kubeconfig=/etc/kubernetes/kubeconfig
         - --hostname=$(NODE_NAME)

View File

@@ -8,7 +8,7 @@ metadata:
 data:
   cni-conf.json: |
     {
-      "cniVersion":"0.3.1",
+      "cniVersion":"0.4.0",
       "name":"kilo",
       "plugins":[
         {
@@ -101,10 +101,10 @@ spec:
       hostNetwork: true
       containers:
       - name: boringtun
-        image: leonnicolas/boringtun
+        image: leonnicolas/boringtun:cc19859
         imagePullPolicy: IfNotPresent
         args:
-        - --disable-drop-privileges=true
+        - --disable-drop-privileges
         - --foreground
         - kilo0
         securityContext:
@@ -114,7 +114,7 @@ spec:
           mountPath: /var/run/wireguard
           readOnly: false
       - name: kilo
-        image: squat/kilo:0.4.1
+        image: squat/kilo:0.5.0
         imagePullPolicy: IfNotPresent
         args:
         - --kubeconfig=/etc/kubernetes/kubeconfig
@@ -150,7 +150,7 @@ spec:
           readOnly: false
       initContainers:
       - name: install-cni
-        image: squat/kilo:0.4.1
+        image: squat/kilo:0.5.0
         imagePullPolicy: IfNotPresent
         command:
         - /bin/sh

View File

@@ -8,7 +8,7 @@ metadata:
 data:
   cni-conf.json: |
     {
-      "cniVersion":"0.3.1",
+      "cniVersion":"0.4.0",
       "name":"kilo",
       "plugins":[
         {
@@ -101,7 +101,7 @@ spec:
       hostNetwork: true
       containers:
       - name: kilo
-        image: squat/kilo:0.4.1
+        image: squat/kilo:0.5.0
         args:
         - --kubeconfig=/etc/kubernetes/kubeconfig
         - --hostname=$(NODE_NAME)
@@ -131,7 +131,7 @@ spec:
           readOnly: false
       initContainers:
       - name: install-cni
-        image: squat/kilo:0.4.1
+        image: squat/kilo:0.5.0
         command:
         - /bin/sh
         - -c

View File

@@ -67,7 +67,7 @@ spec:
       hostNetwork: true
      containers:
       - name: kilo
-        image: squat/kilo:0.4.1
+        image: squat/kilo:0.5.0
         args:
         - --kubeconfig=/etc/kubernetes/kubeconfig
         - --hostname=$(NODE_NAME)

View File

@@ -8,7 +8,7 @@ metadata:
 data:
   cni-conf.json: |
     {
-      "cniVersion":"0.3.1",
+      "cniVersion":"0.4.0",
       "name":"kilo",
       "plugins":[
         {
@@ -101,7 +101,7 @@ spec:
       hostNetwork: true
       containers:
       - name: kilo
-        image: squat/kilo:0.4.1
+        image: squat/kilo:0.5.0
         args:
         - --kubeconfig=/etc/kubernetes/kubeconfig
         - --hostname=$(NODE_NAME)
@@ -131,7 +131,7 @@ spec:
           readOnly: false
       initContainers:
       - name: install-cni
-        image: squat/kilo:0.4.1
+        image: squat/kilo:0.5.0
         command:
         - /bin/sh
         - -c

View File

@@ -46,7 +46,7 @@ spec:
         runAsUser: 1000
       containers:
      - name: server
-        image: squat/kilo:0.4.1
+        image: squat/kilo:0.5.0
         args:
         - webhook
         - --cert-file=/run/secrets/tls/tls.crt

pkg/encapsulation/cilium.go (new file)
View File

@@ -0,0 +1,111 @@
// Copyright 2019 the Kilo authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package encapsulation
import (
"fmt"
"net"
"sync"
"github.com/vishvananda/netlink"
"github.com/squat/kilo/pkg/iptables"
)
const ciliumDeviceName = "cilium_host"
type cilium struct {
iface int
strategy Strategy
ch chan netlink.LinkUpdate
done chan struct{}
// mu guards updates to the iface field.
mu sync.Mutex
}
// NewCilium returns an encapsulator that uses Cilium.
func NewCilium(strategy Strategy) Encapsulator {
return &cilium{
ch: make(chan netlink.LinkUpdate),
done: make(chan struct{}),
strategy: strategy,
}
}
// CleanUp close done channel
func (f *cilium) CleanUp() error {
close(f.done)
return nil
}
// Gw returns the correct gateway IP associated with the given node.
func (f *cilium) Gw(_, _ net.IP, subnet *net.IPNet) net.IP {
return subnet.IP
}
// Index returns the index of the Cilium interface.
func (f *cilium) Index() int {
f.mu.Lock()
defer f.mu.Unlock()
return f.iface
}
// Init finds the Cilium interface index.
func (f *cilium) Init(_ int) error {
if err := netlink.LinkSubscribe(f.ch, f.done); err != nil {
return fmt.Errorf("failed to subscribe to updates to %s: %v", ciliumDeviceName, err)
}
go func() {
var lu netlink.LinkUpdate
for {
select {
case lu = <-f.ch:
if lu.Attrs().Name == ciliumDeviceName {
f.mu.Lock()
f.iface = lu.Attrs().Index
f.mu.Unlock()
}
case <-f.done:
return
}
}
}()
i, err := netlink.LinkByName(ciliumDeviceName)
if _, ok := err.(netlink.LinkNotFoundError); ok {
return nil
}
if err != nil {
return fmt.Errorf("failed to query for Cilium interface: %v", err)
}
f.mu.Lock()
f.iface = i.Attrs().Index
f.mu.Unlock()
return nil
}
// Rules is a no-op.
func (f *cilium) Rules(_ []*net.IPNet) []iptables.Rule {
return nil
}
// Set is a no-op.
func (f *cilium) Set(_ *net.IPNet) error {
return nil
}
// Strategy returns the configured strategy for encapsulation.
func (f *cilium) Strategy() Strategy {
return f.strategy
}

View File

@@ -56,6 +56,8 @@ func (f *flannel) Gw(_, _ net.IP, subnet *net.IPNet) net.IP {
 // Index returns the index of the Flannel interface.
 func (f *flannel) Index() int {
+    f.mu.Lock()
+    defer f.mu.Unlock()
     return f.iface
 }

View File

@@ -128,7 +128,7 @@ func New(c kubernetes.Interface, kc kiloclient.Interface, ec apiextensions.Inter
 }
 // CleanUp removes configuration applied to the backend.
-func (nb *nodeBackend) CleanUp(name string) error {
+func (nb *nodeBackend) CleanUp(ctx context.Context, name string) error {
 	patch := []byte("[" + strings.Join([]string{
 		fmt.Sprintf(jsonRemovePatch, path.Join("/metadata", "annotations", strings.Replace(endpointAnnotationKey, "/", jsonPatchSlash, 1))),
 		fmt.Sprintf(jsonRemovePatch, path.Join("/metadata", "annotations", strings.Replace(internalIPAnnotationKey, "/", jsonPatchSlash, 1))),
@@ -138,7 +138,7 @@ func (nb *nodeBackend) CleanUp(name string) error {
 		fmt.Sprintf(jsonRemovePatch, path.Join("/metadata", "annotations", strings.Replace(discoveredEndpointsKey, "/", jsonPatchSlash, 1))),
 		fmt.Sprintf(jsonRemovePatch, path.Join("/metadata", "annotations", strings.Replace(granularityKey, "/", jsonPatchSlash, 1))),
 	}, ",") + "]")
-	if _, err := nb.client.CoreV1().Nodes().Patch(context.TODO(), name, types.JSONPatchType, patch, metav1.PatchOptions{}); err != nil {
+	if _, err := nb.client.CoreV1().Nodes().Patch(ctx, name, types.JSONPatchType, patch, metav1.PatchOptions{}); err != nil {
 		return fmt.Errorf("failed to patch node: %v", err)
 	}
 	return nil
@@ -155,9 +155,9 @@ func (nb *nodeBackend) Get(name string) (*mesh.Node, error) {
 // Init initializes the backend; for this backend that means
 // syncing the informer cache.
-func (nb *nodeBackend) Init(stop <-chan struct{}) error {
-	go nb.informer.Run(stop)
-	if ok := cache.WaitForCacheSync(stop, func() bool {
+func (nb *nodeBackend) Init(ctx context.Context) error {
+	go nb.informer.Run(ctx.Done())
+	if ok := cache.WaitForCacheSync(ctx.Done(), func() bool {
 		return nb.informer.HasSynced()
 	}); !ok {
 		return errors.New("failed to sync node cache")
@@ -212,7 +212,7 @@ func (nb *nodeBackend) List() ([]*mesh.Node, error) {
 }
 // Set sets the fields of a node.
-func (nb *nodeBackend) Set(name string, node *mesh.Node) error {
+func (nb *nodeBackend) Set(ctx context.Context, name string, node *mesh.Node) error {
 	old, err := nb.lister.Get(name)
 	if err != nil {
 		return fmt.Errorf("failed to find node: %v", err)
@@ -253,7 +253,7 @@ func (nb *nodeBackend) Set(name string, node *mesh.Node) error {
 	if err != nil {
 		return fmt.Errorf("failed to create patch for node %q: %v", n.Name, err)
 	}
-	if _, err = nb.client.CoreV1().Nodes().Patch(context.TODO(), name, types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
+	if _, err = nb.client.CoreV1().Nodes().Patch(ctx, name, types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
 		return fmt.Errorf("failed to patch node: %v", err)
 	}
 	return nil
@@ -431,7 +431,7 @@ func translatePeer(peer *v1alpha1.Peer) *mesh.Peer {
 }
 // CleanUp removes configuration applied to the backend.
-func (pb *peerBackend) CleanUp(name string) error {
+func (pb *peerBackend) CleanUp(_ context.Context, _ string) error {
 	return nil
 }
@@ -446,14 +446,14 @@ func (pb *peerBackend) Get(name string) (*mesh.Peer, error) {
 // Init initializes the backend; for this backend that means
 // syncing the informer cache.
-func (pb *peerBackend) Init(stop <-chan struct{}) error {
+func (pb *peerBackend) Init(ctx context.Context) error {
 	// Check the presents of the CRD peers.kilo.squat.ai.
-	if _, err := pb.extensionsClient.ApiextensionsV1().CustomResourceDefinitions().Get(context.TODO(), strings.Join([]string{v1alpha1.PeerPlural, v1alpha1.GroupName}, "."), metav1.GetOptions{}); err != nil {
+	if _, err := pb.extensionsClient.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, strings.Join([]string{v1alpha1.PeerPlural, v1alpha1.GroupName}, "."), metav1.GetOptions{}); err != nil {
 		return fmt.Errorf("CRD is not present: %v", err)
 	}
-	go pb.informer.Run(stop)
-	if ok := cache.WaitForCacheSync(stop, func() bool {
+	go pb.informer.Run(ctx.Done())
+	if ok := cache.WaitForCacheSync(ctx.Done(), func() bool {
 		return pb.informer.HasSynced()
 	}); !ok {
 		return errors.New("failed to sync peer cache")
@@ -512,7 +512,7 @@ func (pb *peerBackend) List() ([]*mesh.Peer, error) {
 }
 // Set sets the fields of a peer.
-func (pb *peerBackend) Set(name string, peer *mesh.Peer) error {
+func (pb *peerBackend) Set(ctx context.Context, name string, peer *mesh.Peer) error {
 	old, err := pb.lister.Get(name)
 	if err != nil {
 		return fmt.Errorf("failed to find peer: %v", err)
@@ -542,7 +542,7 @@ func (pb *peerBackend) Set(name string, peer *mesh.Peer) error {
 		p.Spec.PresharedKey = peer.PresharedKey.String()
 	}
 	p.Spec.PublicKey = peer.PublicKey.String()
-	if _, err = pb.client.KiloV1alpha1().Peers().Update(context.TODO(), p, metav1.UpdateOptions{}); err != nil {
+	if _, err = pb.client.KiloV1alpha1().Peers().Update(ctx, p, metav1.UpdateOptions{}); err != nil {
 		return fmt.Errorf("failed to update peer: %v", err)
 	}
 	return nil
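
With these signature changes, callers decide how long a backend call may run. A minimal sketch of bounding the new context-aware CleanUp with a deadline (the helper below is illustrative; the ten-second timeout mirrors the one Mesh.cleanUp uses later in this change set):

```go
package example

import (
	"context"
	"log"
	"time"

	"github.com/squat/kilo/pkg/mesh"
)

// cleanUpNode removes Kilo's configuration for the given node, giving the
// underlying API call at most ten seconds before the context is cancelled.
func cleanUpNode(nodes mesh.NodeBackend, hostname string) {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	if err := nodes.CleanUp(ctx, hostname); err != nil {
		log.Printf("failed to clean up node backend: %v", err)
	}
}
```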

View File

@@ -18,6 +18,7 @@ package versioned
 import (
 	"fmt"
+	"net/http"
 	kilov1alpha1 "github.com/squat/kilo/pkg/k8s/clientset/versioned/typed/kilo/v1alpha1"
 	discovery "k8s.io/client-go/discovery"
@@ -53,22 +54,45 @@ func (c *Clientset) Discovery() discovery.DiscoveryInterface {
 // NewForConfig creates a new Clientset for the given config.
 // If config's RateLimiter is not set and QPS and Burst are acceptable,
 // NewForConfig will generate a rate-limiter in configShallowCopy.
+// NewForConfig is equivalent to NewForConfigAndClient(c, httpClient),
+// where httpClient was generated with rest.HTTPClientFor(c).
 func NewForConfig(c *rest.Config) (*Clientset, error) {
 	configShallowCopy := *c
+	if configShallowCopy.UserAgent == "" {
+		configShallowCopy.UserAgent = rest.DefaultKubernetesUserAgent()
+	}
+	// share the transport between all clients
+	httpClient, err := rest.HTTPClientFor(&configShallowCopy)
+	if err != nil {
+		return nil, err
+	}
+	return NewForConfigAndClient(&configShallowCopy, httpClient)
+}
+// NewForConfigAndClient creates a new Clientset for the given config and http client.
+// Note the http client provided takes precedence over the configured transport values.
+// If config's RateLimiter is not set and QPS and Burst are acceptable,
+// NewForConfigAndClient will generate a rate-limiter in configShallowCopy.
+func NewForConfigAndClient(c *rest.Config, httpClient *http.Client) (*Clientset, error) {
+	configShallowCopy := *c
 	if configShallowCopy.RateLimiter == nil && configShallowCopy.QPS > 0 {
 		if configShallowCopy.Burst <= 0 {
 			return nil, fmt.Errorf("burst is required to be greater than 0 when RateLimiter is not set and QPS is set to greater than 0")
 		}
 		configShallowCopy.RateLimiter = flowcontrol.NewTokenBucketRateLimiter(configShallowCopy.QPS, configShallowCopy.Burst)
 	}
 	var cs Clientset
 	var err error
-	cs.kiloV1alpha1, err = kilov1alpha1.NewForConfig(&configShallowCopy)
+	cs.kiloV1alpha1, err = kilov1alpha1.NewForConfigAndClient(&configShallowCopy, httpClient)
 	if err != nil {
 		return nil, err
 	}
-	cs.DiscoveryClient, err = discovery.NewDiscoveryClientForConfig(&configShallowCopy)
+	cs.DiscoveryClient, err = discovery.NewDiscoveryClientForConfigAndClient(&configShallowCopy, httpClient)
 	if err != nil {
 		return nil, err
 	}
@@ -78,11 +102,11 @@ func NewForConfig(c *rest.Config) (*Clientset, error) {
 // NewForConfigOrDie creates a new Clientset for the given config and
 // panics if there is an error in the config.
 func NewForConfigOrDie(c *rest.Config) *Clientset {
-	var cs Clientset
-	cs.kiloV1alpha1 = kilov1alpha1.NewForConfigOrDie(c)
-	cs.DiscoveryClient = discovery.NewDiscoveryClientForConfigOrDie(c)
-	return &cs
+	cs, err := NewForConfig(c)
+	if err != nil {
+		panic(err)
+	}
+	return cs
 }
 // New creates a new Clientset for the given RESTClient.
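
The generated constructors now share a single HTTP transport between the typed and discovery clients. A minimal sketch of using the new pair from caller code (the kubeconfig path is a placeholder; rest.HTTPClientFor and clientcmd are standard client-go helpers):

```go
package main

import (
	"log"

	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"

	kiloclient "github.com/squat/kilo/pkg/k8s/clientset/versioned"
)

func main() {
	// Placeholder kubeconfig path; rest.InClusterConfig works the same way.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	// Build one *http.Client from the config and share it across clients.
	httpClient, err := rest.HTTPClientFor(config)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kiloclient.NewForConfigAndClient(config, httpClient)
	if err != nil {
		log.Fatal(err)
	}
	_ = client // e.g. client.KiloV1alpha1().Peers()
}
```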

View File

@@ -72,7 +72,10 @@ func (c *Clientset) Tracker() testing.ObjectTracker {
 	return c.tracker
 }
-var _ clientset.Interface = &Clientset{}
+var (
+	_ clientset.Interface = &Clientset{}
+	_ testing.FakeClient  = &Clientset{}
+)
 // KiloV1alpha1 retrieves the KiloV1alpha1Client
 func (c *Clientset) KiloV1alpha1() kilov1alpha1.KiloV1alpha1Interface {

View File

@@ -97,7 +97,7 @@ func (c *FakePeers) Update(ctx context.Context, peer *v1alpha1.Peer, opts v1.Upd
 // Delete takes name of the peer and deletes it. Returns an error if one occurs.
 func (c *FakePeers) Delete(ctx context.Context, name string, opts v1.DeleteOptions) error {
 	_, err := c.Fake.
-		Invokes(testing.NewRootDeleteAction(peersResource, name), &v1alpha1.Peer{})
+		Invokes(testing.NewRootDeleteActionWithOptions(peersResource, name, opts), &v1alpha1.Peer{})
 	return err
 }
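
A small, hypothetical test sketch of exercising this path through the generated fake clientset (the peer name is made up; the fake package path follows the generated layout shown in this diff):

```go
package fake_test

import (
	"context"
	"testing"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	"github.com/squat/kilo/pkg/k8s/apis/kilo/v1alpha1"
	"github.com/squat/kilo/pkg/k8s/clientset/versioned/fake"
)

func TestDeletePeer(t *testing.T) {
	// Seed the tracker with one Peer so Delete has something to remove;
	// the delete options are now recorded via NewRootDeleteActionWithOptions.
	client := fake.NewSimpleClientset(&v1alpha1.Peer{ObjectMeta: metav1.ObjectMeta{Name: "example"}})
	if err := client.KiloV1alpha1().Peers().Delete(context.Background(), "example", metav1.DeleteOptions{}); err != nil {
		t.Fatalf("failed to delete peer: %v", err)
	}
}
```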

View File

@@ -17,6 +17,8 @@
 package v1alpha1
 import (
+	"net/http"
 	v1alpha1 "github.com/squat/kilo/pkg/k8s/apis/kilo/v1alpha1"
 	"github.com/squat/kilo/pkg/k8s/clientset/versioned/scheme"
 	rest "k8s.io/client-go/rest"
@@ -37,12 +39,28 @@ func (c *KiloV1alpha1Client) Peers() PeerInterface {
 }
 // NewForConfig creates a new KiloV1alpha1Client for the given config.
+// NewForConfig is equivalent to NewForConfigAndClient(c, httpClient),
+// where httpClient was generated with rest.HTTPClientFor(c).
 func NewForConfig(c *rest.Config) (*KiloV1alpha1Client, error) {
 	config := *c
 	if err := setConfigDefaults(&config); err != nil {
 		return nil, err
 	}
-	client, err := rest.RESTClientFor(&config)
+	httpClient, err := rest.HTTPClientFor(&config)
+	if err != nil {
+		return nil, err
+	}
+	return NewForConfigAndClient(&config, httpClient)
+}
+// NewForConfigAndClient creates a new KiloV1alpha1Client for the given config and http client.
+// Note the http client provided takes precedence over the configured transport values.
+func NewForConfigAndClient(c *rest.Config, h *http.Client) (*KiloV1alpha1Client, error) {
+	config := *c
+	if err := setConfigDefaults(&config); err != nil {
+		return nil, err
+	}
+	client, err := rest.RESTClientForConfigAndClient(&config, h)
 	if err != nil {
 		return nil, err
 	}

View File

@@ -15,6 +15,7 @@
 package mesh
 import (
+	"context"
 	"net"
 	"time"
@@ -146,11 +147,11 @@ type Backend interface {
 // clean up any changes applied to the backend,
 // and watch for changes to nodes.
 type NodeBackend interface {
-	CleanUp(string) error
+	CleanUp(context.Context, string) error
 	Get(string) (*Node, error)
-	Init(<-chan struct{}) error
+	Init(context.Context) error
 	List() ([]*Node, error)
-	Set(string, *Node) error
+	Set(context.Context, string, *Node) error
 	Watch() <-chan *NodeEvent
 }
@@ -160,10 +161,10 @@ type NodeBackend interface {
 // clean up any changes applied to the backend,
 // and watch for changes to peers.
 type PeerBackend interface {
-	CleanUp(string) error
+	CleanUp(context.Context, string) error
 	Get(string) (*Peer, error)
-	Init(<-chan struct{}) error
+	Init(context.Context) error
 	List() ([]*Peer, error)
-	Set(string, *Peer) error
+	Set(context.Context, string, *Peer) error
 	Watch() <-chan *PeerEvent
 }
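
In short, the context's Done channel takes over the role of the old stop channel. A minimal sketch of the new contract from a caller's point of view (the Backend value is assumed to come from the k8s backend constructor):

```go
package example

import (
	"context"

	"github.com/squat/kilo/pkg/mesh"
)

// initBackend starts the node and peer informers; cancelling ctx is what
// stops them, just as closing the old stop channel did.
func initBackend(ctx context.Context, b mesh.Backend) error {
	if err := b.Nodes().Init(ctx); err != nil {
		return err
	}
	return b.Peers().Init(ctx)
}
```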

View File

@@ -19,6 +19,7 @@ package mesh
 import (
 	"bytes"
+	"context"
 	"fmt"
 	"io/ioutil"
 	"net"
@@ -61,7 +62,6 @@ type Mesh struct {
 	ipTables            *iptables.Controller
 	kiloIface           int
 	kiloIfaceName       string
-	key                 []byte
 	local               bool
 	port                int
 	priv                wgtypes.Key
@@ -69,7 +69,6 @@ type Mesh struct {
 	pub                 wgtypes.Key
 	resyncPeriod        time.Duration
 	iptablesForwardRule bool
-	stop                chan struct{}
 	subnet              *net.IPNet
 	table               *route.Table
 	wireGuardIP         *net.IPNet
@@ -94,6 +93,9 @@ func New(backend Backend, enc encapsulation.Encapsulator, granularity Granularit
 		return nil, fmt.Errorf("failed to create directory to store configuration: %v", err)
 	}
 	privateB, err := ioutil.ReadFile(privateKeyPath)
+	if err != nil && !os.IsNotExist(err) {
+		return nil, fmt.Errorf("failed to read private key file: %v", err)
+	}
 	privateB = bytes.Trim(privateB, "\n")
 	private, err := wgtypes.ParseKey(string(privateB))
 	if err != nil {
@@ -180,7 +182,6 @@ func New(backend Backend, enc encapsulation.Encapsulator, granularity Granularit
 		resyncPeriod:        resyncPeriod,
 		iptablesForwardRule: iptablesForwardRule,
 		local:               local,
-		stop:                make(chan struct{}),
 		subnet:              subnet,
 		table:               route.NewTable(),
 		errorCounter: prometheus.NewCounterVec(prometheus.CounterOpts{
@@ -208,8 +209,8 @@ func New(backend Backend, enc encapsulation.Encapsulator, granularity Granularit
 }
 // Run starts the mesh.
-func (m *Mesh) Run() error {
-	if err := m.Nodes().Init(m.stop); err != nil {
+func (m *Mesh) Run(ctx context.Context) error {
+	if err := m.Nodes().Init(ctx); err != nil {
 		return fmt.Errorf("failed to initialize node backend: %v", err)
 	}
 	// Try to set the CNI config quickly.
@@ -221,14 +222,14 @@ func (m *Mesh) Run() error {
 			level.Warn(m.logger).Log("error", fmt.Errorf("failed to get node %q: %v", m.hostname, err))
 		}
 	}
-	if err := m.Peers().Init(m.stop); err != nil {
+	if err := m.Peers().Init(ctx); err != nil {
 		return fmt.Errorf("failed to initialize peer backend: %v", err)
 	}
-	ipTablesErrors, err := m.ipTables.Run(m.stop)
+	ipTablesErrors, err := m.ipTables.Run(ctx.Done())
 	if err != nil {
 		return fmt.Errorf("failed to watch for IP tables updates: %v", err)
 	}
-	routeErrors, err := m.table.Run(m.stop)
+	routeErrors, err := m.table.Run(ctx.Done())
 	if err != nil {
 		return fmt.Errorf("failed to watch for route table updates: %v", err)
 	}
@@ -238,7 +239,7 @@ func (m *Mesh) Run() error {
 		select {
 		case err = <-ipTablesErrors:
 		case err = <-routeErrors:
-		case <-m.stop:
+		case <-ctx.Done():
 			return
 		}
 		if err != nil {
@@ -257,11 +258,11 @@ func (m *Mesh) Run() error {
 	for {
 		select {
 		case ne = <-nw:
-			m.syncNodes(ne)
+			m.syncNodes(ctx, ne)
 		case pe = <-pw:
 			m.syncPeers(pe)
 		case <-checkIn.C:
-			m.checkIn()
+			m.checkIn(ctx)
 			checkIn.Reset(checkInPeriod)
 		case <-resync.C:
 			if m.cni {
@@ -269,18 +270,18 @@ func (m *Mesh) Run() error {
 			}
 			m.applyTopology()
 			resync.Reset(m.resyncPeriod)
-		case <-m.stop:
+		case <-ctx.Done():
 			return nil
 		}
 	}
 }
-func (m *Mesh) syncNodes(e *NodeEvent) {
+func (m *Mesh) syncNodes(ctx context.Context, e *NodeEvent) {
 	logger := log.With(m.logger, "event", e.Type)
 	level.Debug(logger).Log("msg", "syncing nodes", "event", e.Type)
 	if isSelf(m.hostname, e.Node) {
 		level.Debug(logger).Log("msg", "processing local node", "node", e.Node)
-		m.handleLocal(e.Node)
+		m.handleLocal(ctx, e.Node)
 		return
 	}
 	var diff bool
@@ -348,7 +349,7 @@ func (m *Mesh) syncPeers(e *PeerEvent) {
 // checkIn will try to update the local node's LastSeen timestamp
 // in the backend.
-func (m *Mesh) checkIn() {
+func (m *Mesh) checkIn(ctx context.Context) {
 	m.mu.Lock()
 	defer m.mu.Unlock()
 	n := m.nodes[m.hostname]
@@ -358,7 +359,7 @@ func (m *Mesh) checkIn() {
 	}
 	oldTime := n.LastSeen
 	n.LastSeen = time.Now().Unix()
-	if err := m.Nodes().Set(m.hostname, n); err != nil {
+	if err := m.Nodes().Set(ctx, m.hostname, n); err != nil {
 		level.Error(m.logger).Log("error", fmt.Sprintf("failed to set local node: %v", err), "node", n)
 		m.errorCounter.WithLabelValues("checkin").Inc()
 		// Revert time.
@@ -368,7 +369,7 @@
 	level.Debug(m.logger).Log("msg", "successfully checked in local node in backend")
 }
-func (m *Mesh) handleLocal(n *Node) {
+func (m *Mesh) handleLocal(ctx context.Context, n *Node) {
 	// Allow the IPs to be overridden.
 	if !n.Endpoint.Ready() {
 		e := wireguard.NewEndpoint(m.externalIP.IP, m.port)
@@ -399,7 +400,7 @@ func (m *Mesh) handleLocal(n *Node) {
 	}
 	if !nodesAreEqual(n, local) {
 		level.Debug(m.logger).Log("msg", "local node differs from backend")
-		if err := m.Nodes().Set(m.hostname, local); err != nil {
+		if err := m.Nodes().Set(ctx, m.hostname, local); err != nil {
 			level.Error(m.logger).Log("error", fmt.Sprintf("failed to set local node: %v", err), "node", local)
 			m.errorCounter.WithLabelValues("local").Inc()
 			return
@@ -584,11 +585,6 @@ func (m *Mesh) RegisterMetrics(r prometheus.Registerer) {
 	)
 }
-// Stop stops the mesh.
-func (m *Mesh) Stop() {
-	close(m.stop)
-}
 func (m *Mesh) cleanUp() {
 	if err := m.ipTables.CleanUp(); err != nil {
 		level.Error(m.logger).Log("error", fmt.Sprintf("failed to clean up IP tables: %v", err))
@@ -604,14 +600,22 @@ func (m *Mesh) cleanUp() {
 			m.errorCounter.WithLabelValues("cleanUp").Inc()
 		}
 	}
-	if err := m.Nodes().CleanUp(m.hostname); err != nil {
-		level.Error(m.logger).Log("error", fmt.Sprintf("failed to clean up node backend: %v", err))
-		m.errorCounter.WithLabelValues("cleanUp").Inc()
-	}
-	if err := m.Peers().CleanUp(m.hostname); err != nil {
-		level.Error(m.logger).Log("error", fmt.Sprintf("failed to clean up peer backend: %v", err))
-		m.errorCounter.WithLabelValues("cleanUp").Inc()
-	}
+	{
+		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
+		defer cancel()
+		if err := m.Nodes().CleanUp(ctx, m.hostname); err != nil {
+			level.Error(m.logger).Log("error", fmt.Sprintf("failed to clean up node backend: %v", err))
+			m.errorCounter.WithLabelValues("cleanUp").Inc()
+		}
+	}
+	{
+		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
+		defer cancel()
+		if err := m.Peers().CleanUp(ctx, m.hostname); err != nil {
+			level.Error(m.logger).Log("error", fmt.Sprintf("failed to clean up peer backend: %v", err))
+			m.errorCounter.WithLabelValues("cleanUp").Inc()
+		}
+	}
 	if err := m.enc.CleanUp(); err != nil {
 		level.Error(m.logger).Log("error", fmt.Sprintf("failed to clean up encapsulator: %v", err))
 		m.errorCounter.WithLabelValues("cleanUp").Inc()
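
With Stop gone, shutting the mesh down is just cancelling the context passed to Run. A minimal sketch of that pattern (how cmd/kg actually wires this up is not shown in this hunk; signal.NotifyContext is the standard-library helper):

```go
package example

import (
	"context"
	"os"
	"os/signal"
	"syscall"

	"github.com/squat/kilo/pkg/mesh"
)

// runMesh blocks in Run until SIGINT or SIGTERM cancels the context;
// cleanUp then uses its own short-lived contexts, as shown above.
func runMesh(m *mesh.Mesh) error {
	ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
	defer stop()
	return m.Run(ctx)
}
```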

View File

@@ -16,7 +16,6 @@ package mesh
 import (
 	"net"
-	"strings"
 	"testing"
 	"time"
@@ -27,10 +26,6 @@ import (
 	"github.com/squat/kilo/pkg/wireguard"
 )
-func allowedIPs(ips ...string) string {
-	return strings.Join(ips, ", ")
-}
 func mustParseCIDR(s string) (r net.IPNet) {
 	if _, ip, err := net.ParseCIDR(s); err != nil {
 		panic("failed to parse CIDR")

View File

@@ -183,7 +183,7 @@ func (e *Endpoint) Resolved() bool {
 // UDPAddr() will not try to resolve a DN name, if the Endpoint is not yet resolved.
 func (e *Endpoint) UDPAddr(resolve bool) (*net.UDPAddr, error) {
 	if !e.Ready() {
-		return nil, errors.New("Enpoint is not ready")
+		return nil, errors.New("endpoint is not ready")
 	}
 	if e.udpAddr != nil {
 		// Make a copy of the UDPAddr to protect it from modification outside this package.
@@ -191,7 +191,7 @@ func (e *Endpoint) UDPAddr(resolve bool) (*net.UDPAddr, error) {
 		return &h, nil
 	}
 	if !resolve {
-		return nil, errors.New("Endpoint is not resolved")
+		return nil, errors.New("endpoint is not resolved")
 	}
 	var err error
 	if e.udpAddr, err = net.ResolveUDPAddr("udp", e.addr); err != nil {
@@ -358,19 +358,13 @@ func (c *Conf) Equal(d *wgtypes.Device) (bool, string) {
 func sortPeerConfigs(peers []wgtypes.Peer) {
 	sort.Slice(peers, func(i, j int) bool {
-		if peers[i].PublicKey.String() < peers[j].PublicKey.String() {
-			return true
-		}
-		return false
+		return peers[i].PublicKey.String() < peers[j].PublicKey.String()
 	})
 }
 func sortPeers(peers []Peer) {
 	sort.Slice(peers, func(i, j int) bool {
-		if peers[i].PublicKey.String() < peers[j].PublicKey.String() {
-			return true
-		}
-		return false
+		return peers[i].PublicKey.String() < peers[j].PublicKey.String()
 	})
 }

View File

@@ -19,7 +19,7 @@ package main
 import (
 	_ "github.com/campoy/embedmd"
-	_ "golang.org/x/lint/golint"
+	_ "honnef.co/go/tools/cmd/staticcheck"
 	_ "k8s.io/code-generator/cmd/client-gen"
 	_ "k8s.io/code-generator/cmd/deepcopy-gen"
 	_ "k8s.io/code-generator/cmd/informer-gen"

2
vendor/github.com/BurntSushi/toml/.gitignore generated vendored Normal file
View File

@@ -0,0 +1,2 @@
toml.test
/toml-test

1
vendor/github.com/BurntSushi/toml/COMPATIBLE generated vendored Normal file
View File

@@ -0,0 +1 @@
Compatible with TOML version [v1.0.0](https://toml.io/en/v1.0.0).

21
vendor/github.com/BurntSushi/toml/COPYING generated vendored Normal file
View File

@@ -0,0 +1,21 @@
The MIT License (MIT)
Copyright (c) 2013 TOML authors
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

220
vendor/github.com/BurntSushi/toml/README.md generated vendored Normal file
View File

@@ -0,0 +1,220 @@
## TOML parser and encoder for Go with reflection
TOML stands for Tom's Obvious, Minimal Language. This Go package provides a
reflection interface similar to Go's standard library `json` and `xml`
packages. This package also supports the `encoding.TextUnmarshaler` and
`encoding.TextMarshaler` interfaces so that you can define custom data
representations. (There is an example of this below.)
Compatible with TOML version [v1.0.0](https://toml.io/en/v1.0.0).
Documentation: https://godocs.io/github.com/BurntSushi/toml
See the [releases page](https://github.com/BurntSushi/toml/releases) for a
changelog; this information is also in the git tag annotations (e.g. `git show
v0.4.0`).
This library requires Go 1.13 or newer; install it with:
$ go get github.com/BurntSushi/toml
It also comes with a TOML validator CLI tool:
$ go get github.com/BurntSushi/toml/cmd/tomlv
$ tomlv some-toml-file.toml
### Testing
This package passes all tests in
[toml-test](https://github.com/BurntSushi/toml-test) for both the decoder
and the encoder.
### Examples
This package works similarly to how the Go standard library handles XML and
JSON. Namely, data is loaded into Go values via reflection.
For the simplest example, consider some TOML file as just a list of keys
and values:
```toml
Age = 25
Cats = [ "Cauchy", "Plato" ]
Pi = 3.14
Perfection = [ 6, 28, 496, 8128 ]
DOB = 1987-07-05T05:45:00Z
```
Which could be defined in Go as:
```go
type Config struct {
Age int
Cats []string
Pi float64
Perfection []int
DOB time.Time // requires `import time`
}
```
And then decoded with:
```go
var conf Config
if _, err := toml.Decode(tomlData, &conf); err != nil {
// handle error
}
```
You can also use struct tags if your struct field name doesn't map to a TOML
key value directly:
```toml
some_key_NAME = "wat"
```
```go
type TOML struct {
ObscureKey string `toml:"some_key_NAME"`
}
```
Beware that, like most other decoders, **only exported fields** are
considered when encoding and decoding; private fields are silently ignored.
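A quick sketch of that rule with a made-up struct: only the exported field takes part in encoding and decoding.
```go
type keys struct {
	Public string // maps to the TOML key "Public"
	secret string // unexported, so the encoder and decoder skip it
}
```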
### Using the `encoding.TextUnmarshaler` interface
Here's an example that automatically parses duration strings into
`time.Duration` values:
```toml
[[song]]
name = "Thunder Road"
duration = "4m49s"
[[song]]
name = "Stairway to Heaven"
duration = "8m03s"
```
Which can be decoded with:
```go
type song struct {
Name string
Duration duration
}
type songs struct {
Song []song
}
var favorites songs
if _, err := toml.Decode(blob, &favorites); err != nil {
log.Fatal(err)
}
for _, s := range favorites.Song {
fmt.Printf("%s (%s)\n", s.Name, s.Duration)
}
```
And you'll also need a `duration` type that satisfies the
`encoding.TextUnmarshaler` interface:
```go
type duration struct {
time.Duration
}
func (d *duration) UnmarshalText(text []byte) error {
var err error
d.Duration, err = time.ParseDuration(string(text))
return err
}
```
To target TOML specifically you can implement the `UnmarshalTOML` interface in
a similar way.
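For instance, a sketch of such an implementation (the `csv` type is made up, and the `fmt` and `strings` imports are elided as in the snippets above); `UnmarshalTOML` receives the already-parsed TOML value as an `interface{}`:
```go
type csv struct {
	fields []string
}

func (c *csv) UnmarshalTOML(v interface{}) error {
	s, ok := v.(string)
	if !ok {
		return fmt.Errorf("csv: expected a TOML string, got %T", v)
	}
	c.fields = strings.Split(s, ",")
	return nil
}
```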
### More complex usage
Here's an example of how to load the example from the official spec page:
```toml
# This is a TOML document. Boom.
title = "TOML Example"
[owner]
name = "Tom Preston-Werner"
organization = "GitHub"
bio = "GitHub Cofounder & CEO\nLikes tater tots and beer."
dob = 1979-05-27T07:32:00Z # First class dates? Why not?
[database]
server = "192.168.1.1"
ports = [ 8001, 8001, 8002 ]
connection_max = 5000
enabled = true
[servers]
# You can indent as you please. Tabs or spaces. TOML don't care.
[servers.alpha]
ip = "10.0.0.1"
dc = "eqdc10"
[servers.beta]
ip = "10.0.0.2"
dc = "eqdc10"
[clients]
data = [ ["gamma", "delta"], [1, 2] ] # just an update to make sure parsers support it
# Line breaks are OK when inside arrays
hosts = [
"alpha",
"omega"
]
```
And the corresponding Go types are:
```go
type tomlConfig struct {
Title string
Owner ownerInfo
DB database `toml:"database"`
Servers map[string]server
Clients clients
}
type ownerInfo struct {
Name string
Org string `toml:"organization"`
Bio string
DOB time.Time
}
type database struct {
Server string
Ports []int
ConnMax int `toml:"connection_max"`
Enabled bool
}
type server struct {
IP string
DC string
}
type clients struct {
Data [][]interface{}
Hosts []string
}
```
Note that a case insensitive match will be tried if an exact match can't be
found.
A working example of the above can be found in `_examples/example.{go,toml}`.

511
vendor/github.com/BurntSushi/toml/decode.go generated vendored Normal file
View File

@@ -0,0 +1,511 @@
package toml
import (
"encoding"
"fmt"
"io"
"io/ioutil"
"math"
"os"
"reflect"
"strings"
"time"
)
// Unmarshaler is the interface implemented by objects that can unmarshal a
// TOML description of themselves.
type Unmarshaler interface {
UnmarshalTOML(interface{}) error
}
// Unmarshal decodes the contents of `p` in TOML format into a pointer `v`.
func Unmarshal(p []byte, v interface{}) error {
_, err := Decode(string(p), v)
return err
}
// Primitive is a TOML value that hasn't been decoded into a Go value.
//
// This type can be used for any value, which will cause decoding to be delayed.
// You can use the PrimitiveDecode() function to "manually" decode these values.
//
// NOTE: The underlying representation of a `Primitive` value is subject to
// change. Do not rely on it.
//
// NOTE: Primitive values are still parsed, so using them will only avoid the
// overhead of reflection. They can be useful when you don't know the exact type
// of TOML data until runtime.
type Primitive struct {
undecoded interface{}
context Key
}
// PrimitiveDecode is just like the other `Decode*` functions, except it
// decodes a TOML value that has already been parsed. Valid primitive values
// can *only* be obtained from values filled by the decoder functions,
// including this method. (i.e., `v` may contain more `Primitive`
// values.)
//
// Meta data for primitive values is included in the meta data returned by
// the `Decode*` functions with one exception: keys returned by the Undecoded
// method will only reflect keys that were decoded. Namely, any keys hidden
// behind a Primitive will be considered undecoded. Executing this method will
// update the undecoded keys in the meta data. (See the example.)
func (md *MetaData) PrimitiveDecode(primValue Primitive, v interface{}) error {
md.context = primValue.context
defer func() { md.context = nil }()
return md.unify(primValue.undecoded, rvalue(v))
}
// Decoder decodes TOML data.
//
// TOML tables correspond to Go structs or maps (dealer's choice they can be
// used interchangeably).
//
// TOML table arrays correspond to either a slice of structs or a slice of maps.
//
// TOML datetimes correspond to Go time.Time values. Local datetimes are parsed
// in the local timezone.
//
// All other TOML types (float, string, int, bool and array) correspond to the
// obvious Go types.
//
// An exception to the above rules is if a type implements the TextUnmarshaler
// interface, in which case any primitive TOML value (floats, strings, integers,
// booleans, datetimes) will be converted to a []byte and given to the value's
// UnmarshalText method. See the Unmarshaler example for a demonstration with
// time duration strings.
//
// Key mapping
//
// TOML keys can map to either keys in a Go map or field names in a Go struct.
// The special `toml` struct tag can be used to map TOML keys to struct fields
// that don't match the key name exactly (see the example). A case insensitive
// match to struct names will be tried if an exact match can't be found.
//
// The mapping between TOML values and Go values is loose. That is, there may
// exist TOML values that cannot be placed into your representation, and there
// may be parts of your representation that do not correspond to TOML values.
// This loose mapping can be made stricter by using the IsDefined and/or
// Undecoded methods on the MetaData returned.
//
// This decoder does not handle cyclic types. Decode will not terminate if a
// cyclic type is passed.
type Decoder struct {
r io.Reader
}
// NewDecoder creates a new Decoder.
func NewDecoder(r io.Reader) *Decoder {
return &Decoder{r: r}
}
// Decode TOML data in to the pointer `v`.
func (dec *Decoder) Decode(v interface{}) (MetaData, error) {
rv := reflect.ValueOf(v)
if rv.Kind() != reflect.Ptr {
return MetaData{}, e("Decode of non-pointer %s", reflect.TypeOf(v))
}
if rv.IsNil() {
return MetaData{}, e("Decode of nil %s", reflect.TypeOf(v))
}
// TODO: have parser should read from io.Reader? Or at the very least, make
// it read from []byte rather than string
data, err := ioutil.ReadAll(dec.r)
if err != nil {
return MetaData{}, err
}
p, err := parse(string(data))
if err != nil {
return MetaData{}, err
}
md := MetaData{
p.mapping, p.types, p.ordered,
make(map[string]bool, len(p.ordered)), nil,
}
return md, md.unify(p.mapping, indirect(rv))
}
// Decode the TOML data in to the pointer v.
//
// See the documentation on Decoder for a description of the decoding process.
func Decode(data string, v interface{}) (MetaData, error) {
return NewDecoder(strings.NewReader(data)).Decode(v)
}
// DecodeFile is just like Decode, except it will automatically read the
// contents of the file at path and decode it for you.
func DecodeFile(path string, v interface{}) (MetaData, error) {
fp, err := os.Open(path)
if err != nil {
return MetaData{}, err
}
defer fp.Close()
return NewDecoder(fp).Decode(v)
}
// unify performs a sort of type unification based on the structure of `rv`,
// which is the client representation.
//
// Any type mismatch produces an error. Finding a type that we don't know
// how to handle produces an unsupported type error.
func (md *MetaData) unify(data interface{}, rv reflect.Value) error {
// Special case. Look for a `Primitive` value.
// TODO: #76 would make this superfluous after implemented.
if rv.Type() == reflect.TypeOf((*Primitive)(nil)).Elem() {
// Save the undecoded data and the key context into the primitive
// value.
context := make(Key, len(md.context))
copy(context, md.context)
rv.Set(reflect.ValueOf(Primitive{
undecoded: data,
context: context,
}))
return nil
}
// Special case. Unmarshaler Interface support.
if rv.CanAddr() {
if v, ok := rv.Addr().Interface().(Unmarshaler); ok {
return v.UnmarshalTOML(data)
}
}
// Special case. Look for a value satisfying the TextUnmarshaler interface.
if v, ok := rv.Interface().(encoding.TextUnmarshaler); ok {
return md.unifyText(data, v)
}
// TODO:
// The behavior here is incorrect whenever a Go type satisfies the
// encoding.TextUnmarshaler interface but also corresponds to a TOML hash or
// array. In particular, the unmarshaler should only be applied to primitive
// TOML values. But at this point, it will be applied to all kinds of values
// and produce an incorrect error whenever those values are hashes or arrays
// (including arrays of tables).
k := rv.Kind()
// laziness
if k >= reflect.Int && k <= reflect.Uint64 {
return md.unifyInt(data, rv)
}
switch k {
case reflect.Ptr:
elem := reflect.New(rv.Type().Elem())
err := md.unify(data, reflect.Indirect(elem))
if err != nil {
return err
}
rv.Set(elem)
return nil
case reflect.Struct:
return md.unifyStruct(data, rv)
case reflect.Map:
return md.unifyMap(data, rv)
case reflect.Array:
return md.unifyArray(data, rv)
case reflect.Slice:
return md.unifySlice(data, rv)
case reflect.String:
return md.unifyString(data, rv)
case reflect.Bool:
return md.unifyBool(data, rv)
case reflect.Interface:
// we only support empty interfaces.
if rv.NumMethod() > 0 {
return e("unsupported type %s", rv.Type())
}
return md.unifyAnything(data, rv)
case reflect.Float32:
fallthrough
case reflect.Float64:
return md.unifyFloat64(data, rv)
}
return e("unsupported type %s", rv.Kind())
}
func (md *MetaData) unifyStruct(mapping interface{}, rv reflect.Value) error {
tmap, ok := mapping.(map[string]interface{})
if !ok {
if mapping == nil {
return nil
}
return e("type mismatch for %s: expected table but found %T",
rv.Type().String(), mapping)
}
for key, datum := range tmap {
var f *field
fields := cachedTypeFields(rv.Type())
for i := range fields {
ff := &fields[i]
if ff.name == key {
f = ff
break
}
if f == nil && strings.EqualFold(ff.name, key) {
f = ff
}
}
if f != nil {
subv := rv
for _, i := range f.index {
subv = indirect(subv.Field(i))
}
if isUnifiable(subv) {
md.decoded[md.context.add(key).String()] = true
md.context = append(md.context, key)
if err := md.unify(datum, subv); err != nil {
return err
}
md.context = md.context[0 : len(md.context)-1]
} else if f.name != "" {
// Bad user! No soup for you!
return e("cannot write unexported field %s.%s",
rv.Type().String(), f.name)
}
}
}
return nil
}
func (md *MetaData) unifyMap(mapping interface{}, rv reflect.Value) error {
if k := rv.Type().Key().Kind(); k != reflect.String {
return fmt.Errorf(
"toml: cannot decode to a map with non-string key type (%s in %q)",
k, rv.Type())
}
tmap, ok := mapping.(map[string]interface{})
if !ok {
if tmap == nil {
return nil
}
return badtype("map", mapping)
}
if rv.IsNil() {
rv.Set(reflect.MakeMap(rv.Type()))
}
for k, v := range tmap {
md.decoded[md.context.add(k).String()] = true
md.context = append(md.context, k)
rvkey := indirect(reflect.New(rv.Type().Key()))
rvval := reflect.Indirect(reflect.New(rv.Type().Elem()))
if err := md.unify(v, rvval); err != nil {
return err
}
md.context = md.context[0 : len(md.context)-1]
rvkey.SetString(k)
rv.SetMapIndex(rvkey, rvval)
}
return nil
}
func (md *MetaData) unifyArray(data interface{}, rv reflect.Value) error {
datav := reflect.ValueOf(data)
if datav.Kind() != reflect.Slice {
if !datav.IsValid() {
return nil
}
return badtype("slice", data)
}
if l := datav.Len(); l != rv.Len() {
return e("expected array length %d; got TOML array of length %d", rv.Len(), l)
}
return md.unifySliceArray(datav, rv)
}
func (md *MetaData) unifySlice(data interface{}, rv reflect.Value) error {
datav := reflect.ValueOf(data)
if datav.Kind() != reflect.Slice {
if !datav.IsValid() {
return nil
}
return badtype("slice", data)
}
n := datav.Len()
if rv.IsNil() || rv.Cap() < n {
rv.Set(reflect.MakeSlice(rv.Type(), n, n))
}
rv.SetLen(n)
return md.unifySliceArray(datav, rv)
}
func (md *MetaData) unifySliceArray(data, rv reflect.Value) error {
l := data.Len()
for i := 0; i < l; i++ {
err := md.unify(data.Index(i).Interface(), indirect(rv.Index(i)))
if err != nil {
return err
}
}
return nil
}
func (md *MetaData) unifyDatetime(data interface{}, rv reflect.Value) error {
if _, ok := data.(time.Time); ok {
rv.Set(reflect.ValueOf(data))
return nil
}
return badtype("time.Time", data)
}
func (md *MetaData) unifyString(data interface{}, rv reflect.Value) error {
if s, ok := data.(string); ok {
rv.SetString(s)
return nil
}
return badtype("string", data)
}
func (md *MetaData) unifyFloat64(data interface{}, rv reflect.Value) error {
if num, ok := data.(float64); ok {
switch rv.Kind() {
case reflect.Float32:
fallthrough
case reflect.Float64:
rv.SetFloat(num)
default:
panic("bug")
}
return nil
}
return badtype("float", data)
}
func (md *MetaData) unifyInt(data interface{}, rv reflect.Value) error {
if num, ok := data.(int64); ok {
if rv.Kind() >= reflect.Int && rv.Kind() <= reflect.Int64 {
switch rv.Kind() {
case reflect.Int, reflect.Int64:
// No bounds checking necessary.
case reflect.Int8:
if num < math.MinInt8 || num > math.MaxInt8 {
return e("value %d is out of range for int8", num)
}
case reflect.Int16:
if num < math.MinInt16 || num > math.MaxInt16 {
return e("value %d is out of range for int16", num)
}
case reflect.Int32:
if num < math.MinInt32 || num > math.MaxInt32 {
return e("value %d is out of range for int32", num)
}
}
rv.SetInt(num)
} else if rv.Kind() >= reflect.Uint && rv.Kind() <= reflect.Uint64 {
unum := uint64(num)
switch rv.Kind() {
case reflect.Uint, reflect.Uint64:
// No bounds checking necessary.
case reflect.Uint8:
if num < 0 || unum > math.MaxUint8 {
return e("value %d is out of range for uint8", num)
}
case reflect.Uint16:
if num < 0 || unum > math.MaxUint16 {
return e("value %d is out of range for uint16", num)
}
case reflect.Uint32:
if num < 0 || unum > math.MaxUint32 {
return e("value %d is out of range for uint32", num)
}
}
rv.SetUint(unum)
} else {
panic("unreachable")
}
return nil
}
return badtype("integer", data)
}
func (md *MetaData) unifyBool(data interface{}, rv reflect.Value) error {
if b, ok := data.(bool); ok {
rv.SetBool(b)
return nil
}
return badtype("boolean", data)
}
func (md *MetaData) unifyAnything(data interface{}, rv reflect.Value) error {
rv.Set(reflect.ValueOf(data))
return nil
}
func (md *MetaData) unifyText(data interface{}, v encoding.TextUnmarshaler) error {
var s string
switch sdata := data.(type) {
case TextMarshaler:
text, err := sdata.MarshalText()
if err != nil {
return err
}
s = string(text)
case fmt.Stringer:
s = sdata.String()
case string:
s = sdata
case bool:
s = fmt.Sprintf("%v", sdata)
case int64:
s = fmt.Sprintf("%d", sdata)
case float64:
s = fmt.Sprintf("%f", sdata)
default:
return badtype("primitive (string-like)", data)
}
if err := v.UnmarshalText([]byte(s)); err != nil {
return err
}
return nil
}
// rvalue returns a reflect.Value of `v`. All pointers are resolved.
func rvalue(v interface{}) reflect.Value {
return indirect(reflect.ValueOf(v))
}
// indirect returns the value pointed to by a pointer.
// Pointers are followed until the value is not a pointer.
// New values are allocated for each nil pointer.
//
// An exception to this rule is if the value satisfies an interface of
// interest to us (like encoding.TextUnmarshaler).
func indirect(v reflect.Value) reflect.Value {
if v.Kind() != reflect.Ptr {
if v.CanSet() {
pv := v.Addr()
if _, ok := pv.Interface().(encoding.TextUnmarshaler); ok {
return pv
}
}
return v
}
if v.IsNil() {
v.Set(reflect.New(v.Type().Elem()))
}
return indirect(reflect.Indirect(v))
}
func isUnifiable(rv reflect.Value) bool {
if rv.CanSet() {
return true
}
if _, ok := rv.Interface().(encoding.TextUnmarshaler); ok {
return true
}
return false
}
func e(format string, args ...interface{}) error {
return fmt.Errorf("toml: "+format, args...)
}
func badtype(expected string, data interface{}) error {
return e("cannot load TOML value of type %T into a Go %s", data, expected)
}

18
vendor/github.com/BurntSushi/toml/decode_go116.go generated vendored Normal file
View File

@@ -0,0 +1,18 @@
// +build go1.16
package toml
import (
"io/fs"
)
// DecodeFS is just like Decode, except it will automatically read the contents
// of the file at `path` from a fs.FS instance.
func DecodeFS(fsys fs.FS, path string, v interface{}) (MetaData, error) {
fp, err := fsys.Open(path)
if err != nil {
return MetaData{}, err
}
defer fp.Close()
return NewDecoder(fp).Decode(v)
}

123
vendor/github.com/BurntSushi/toml/decode_meta.go generated vendored Normal file
View File

@@ -0,0 +1,123 @@
package toml
import "strings"
// MetaData allows access to meta information about TOML data that may not be
// inferable via reflection. In particular, whether a key has been defined and
// the TOML type of a key.
type MetaData struct {
mapping map[string]interface{}
types map[string]tomlType
keys []Key
decoded map[string]bool
context Key // Used only during decoding.
}
// IsDefined reports if the key exists in the TOML data.
//
// The key should be specified hierarchically, for example to access the TOML
// key "a.b.c" you would use:
//
// IsDefined("a", "b", "c")
//
// IsDefined will return false if an empty key given. Keys are case sensitive.
func (md *MetaData) IsDefined(key ...string) bool {
if len(key) == 0 {
return false
}
var hash map[string]interface{}
var ok bool
var hashOrVal interface{} = md.mapping
for _, k := range key {
if hash, ok = hashOrVal.(map[string]interface{}); !ok {
return false
}
if hashOrVal, ok = hash[k]; !ok {
return false
}
}
return true
}
// Type returns a string representation of the type of the key specified.
//
// Type will return the empty string if given an empty key or a key that does
// not exist. Keys are case sensitive.
func (md *MetaData) Type(key ...string) string {
fullkey := strings.Join(key, ".")
if typ, ok := md.types[fullkey]; ok {
return typ.typeString()
}
return ""
}
// Key represents any TOML key, including key groups. Use (MetaData).Keys to get
// values of this type.
type Key []string
func (k Key) String() string { return strings.Join(k, ".") }
func (k Key) maybeQuotedAll() string {
var ss []string
for i := range k {
ss = append(ss, k.maybeQuoted(i))
}
return strings.Join(ss, ".")
}
func (k Key) maybeQuoted(i int) string {
if k[i] == "" {
return `""`
}
quote := false
for _, c := range k[i] {
if !isBareKeyChar(c) {
quote = true
break
}
}
if quote {
return `"` + quotedReplacer.Replace(k[i]) + `"`
}
return k[i]
}
func (k Key) add(piece string) Key {
newKey := make(Key, len(k)+1)
copy(newKey, k)
newKey[len(k)] = piece
return newKey
}
// Keys returns a slice of every key in the TOML data, including key groups.
//
// Each key is itself a slice, where the first element is the top of the
// hierarchy and the last is the most specific. The list will have the same
// order as the keys appeared in the TOML data.
//
// All keys returned are non-empty.
func (md *MetaData) Keys() []Key {
return md.keys
}
// Undecoded returns all keys that have not been decoded in the order in which
// they appear in the original TOML document.
//
// This includes keys that haven't been decoded because of a Primitive value.
// Once the Primitive value is decoded, the keys will be considered decoded.
//
// Also note that decoding into an empty interface will result in no decoding,
// and so no keys will be considered decoded.
//
// In this sense, the Undecoded keys correspond to keys in the TOML document
// that do not have a concrete type in your representation.
func (md *MetaData) Undecoded() []Key {
undecoded := make([]Key, 0, len(md.keys))
for _, key := range md.keys {
if !md.decoded[key.String()] {
undecoded = append(undecoded, key)
}
}
return undecoded
}

33
vendor/github.com/BurntSushi/toml/deprecated.go generated vendored Normal file
View File

@@ -0,0 +1,33 @@
package toml
import (
"encoding"
"io"
)
// DEPRECATED!
//
// Use the identical encoding.TextMarshaler instead. It is defined here to
// support Go 1.1 and older.
type TextMarshaler encoding.TextMarshaler
// DEPRECATED!
//
// Use the identical encoding.TextUnmarshaler instead. It is defined here to
// support Go 1.1 and older.
type TextUnmarshaler encoding.TextUnmarshaler
// DEPRECATED!
//
// Use MetaData.PrimitiveDecode instead.
func PrimitiveDecode(primValue Primitive, v interface{}) error {
md := MetaData{decoded: make(map[string]bool)}
return md.unify(primValue.undecoded, rvalue(v))
}
// DEPRECATED!
//
// Use NewDecoder(reader).Decode(&v) instead.
func DecodeReader(r io.Reader, v interface{}) (MetaData, error) {
return NewDecoder(r).Decode(v)
}

13
vendor/github.com/BurntSushi/toml/doc.go generated vendored Normal file
View File

@@ -0,0 +1,13 @@
/*
Package toml implements decoding and encoding of TOML files.
This package supports TOML v1.0.0, as listed on https://toml.io
There is also support for delaying decoding with the Primitive type, and
querying the set of keys in a TOML document with the MetaData type.
The github.com/BurntSushi/toml/cmd/tomlv package implements a TOML validator,
and can be used to verify if TOML document is valid. It can also be used to
print the type of each key.
*/
package toml

650
vendor/github.com/BurntSushi/toml/encode.go generated vendored Normal file
View File

@@ -0,0 +1,650 @@
package toml
import (
"bufio"
"encoding"
"errors"
"fmt"
"io"
"math"
"reflect"
"sort"
"strconv"
"strings"
"time"
"github.com/BurntSushi/toml/internal"
)
type tomlEncodeError struct{ error }
var (
errArrayNilElement = errors.New("toml: cannot encode array with nil element")
errNonString = errors.New("toml: cannot encode a map with non-string key type")
errAnonNonStruct = errors.New("toml: cannot encode an anonymous field that is not a struct")
errNoKey = errors.New("toml: top-level values must be Go maps or structs")
errAnything = errors.New("") // used in testing
)
var quotedReplacer = strings.NewReplacer(
"\"", "\\\"",
"\\", "\\\\",
"\x00", `\u0000`,
"\x01", `\u0001`,
"\x02", `\u0002`,
"\x03", `\u0003`,
"\x04", `\u0004`,
"\x05", `\u0005`,
"\x06", `\u0006`,
"\x07", `\u0007`,
"\b", `\b`,
"\t", `\t`,
"\n", `\n`,
"\x0b", `\u000b`,
"\f", `\f`,
"\r", `\r`,
"\x0e", `\u000e`,
"\x0f", `\u000f`,
"\x10", `\u0010`,
"\x11", `\u0011`,
"\x12", `\u0012`,
"\x13", `\u0013`,
"\x14", `\u0014`,
"\x15", `\u0015`,
"\x16", `\u0016`,
"\x17", `\u0017`,
"\x18", `\u0018`,
"\x19", `\u0019`,
"\x1a", `\u001a`,
"\x1b", `\u001b`,
"\x1c", `\u001c`,
"\x1d", `\u001d`,
"\x1e", `\u001e`,
"\x1f", `\u001f`,
"\x7f", `\u007f`,
)
// Encoder encodes a Go to a TOML document.
//
// The mapping between Go values and TOML values should be precisely the same as
// for the Decode* functions. Similarly, the TextMarshaler interface is
// supported by encoding the resulting bytes as strings. If you want to write
// arbitrary binary data then you will need to use something like base64 since
// TOML does not have any binary types.
//
// When encoding TOML hashes (Go maps or structs), keys without any sub-hashes
// are encoded first.
//
// Go maps will be sorted alphabetically by key for deterministic output.
//
// Encoding Go values without a corresponding TOML representation will return an
// error. Examples of this includes maps with non-string keys, slices with nil
// elements, embedded non-struct types, and nested slices containing maps or
// structs. (e.g. [][]map[string]string is not allowed but []map[string]string
// is okay, as is []map[string][]string).
//
// NOTE: Only exported keys are encoded due to the use of reflection. Unexported
// keys are silently discarded.
type Encoder struct {
// The string to use for a single indentation level. The default is two
// spaces.
Indent string
// hasWritten is whether we have written any output to w yet.
hasWritten bool
w *bufio.Writer
}
// NewEncoder create a new Encoder.
func NewEncoder(w io.Writer) *Encoder {
return &Encoder{
w: bufio.NewWriter(w),
Indent: " ",
}
}
// Encode writes a TOML representation of the Go value to the Encoder's writer.
//
// An error is returned if the value given cannot be encoded to a valid TOML
// document.
func (enc *Encoder) Encode(v interface{}) error {
rv := eindirect(reflect.ValueOf(v))
if err := enc.safeEncode(Key([]string{}), rv); err != nil {
return err
}
return enc.w.Flush()
}
func (enc *Encoder) safeEncode(key Key, rv reflect.Value) (err error) {
defer func() {
if r := recover(); r != nil {
if terr, ok := r.(tomlEncodeError); ok {
err = terr.error
return
}
panic(r)
}
}()
enc.encode(key, rv)
return nil
}
func (enc *Encoder) encode(key Key, rv reflect.Value) {
// Special case. Time needs to be in ISO8601 format.
// Special case. If we can marshal the type to text, then we used that.
// Basically, this prevents the encoder for handling these types as
// generic structs (or whatever the underlying type of a TextMarshaler is).
switch t := rv.Interface().(type) {
case time.Time, encoding.TextMarshaler:
enc.writeKeyValue(key, rv, false)
return
// TODO: #76 would make this superfluous after implemented.
case Primitive:
enc.encode(key, reflect.ValueOf(t.undecoded))
return
}
k := rv.Kind()
switch k {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
reflect.Int64,
reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32,
reflect.Uint64,
reflect.Float32, reflect.Float64, reflect.String, reflect.Bool:
enc.writeKeyValue(key, rv, false)
case reflect.Array, reflect.Slice:
if typeEqual(tomlArrayHash, tomlTypeOfGo(rv)) {
enc.eArrayOfTables(key, rv)
} else {
enc.writeKeyValue(key, rv, false)
}
case reflect.Interface:
if rv.IsNil() {
return
}
enc.encode(key, rv.Elem())
case reflect.Map:
if rv.IsNil() {
return
}
enc.eTable(key, rv)
case reflect.Ptr:
if rv.IsNil() {
return
}
enc.encode(key, rv.Elem())
case reflect.Struct:
enc.eTable(key, rv)
default:
encPanic(fmt.Errorf("unsupported type for key '%s': %s", key, k))
}
}
// eElement encodes any value that can be an array element.
func (enc *Encoder) eElement(rv reflect.Value) {
switch v := rv.Interface().(type) {
case time.Time: // Using TextMarshaler adds extra quotes, which we don't want.
format := time.RFC3339Nano
switch v.Location() {
case internal.LocalDatetime:
format = "2006-01-02T15:04:05.999999999"
case internal.LocalDate:
format = "2006-01-02"
case internal.LocalTime:
format = "15:04:05.999999999"
}
switch v.Location() {
default:
enc.wf(v.Format(format))
case internal.LocalDatetime, internal.LocalDate, internal.LocalTime:
enc.wf(v.In(time.UTC).Format(format))
}
return
case encoding.TextMarshaler:
// Use text marshaler if it's available for this value.
if s, err := v.MarshalText(); err != nil {
encPanic(err)
} else {
enc.writeQuoted(string(s))
}
return
}
switch rv.Kind() {
case reflect.String:
enc.writeQuoted(rv.String())
case reflect.Bool:
enc.wf(strconv.FormatBool(rv.Bool()))
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
enc.wf(strconv.FormatInt(rv.Int(), 10))
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
enc.wf(strconv.FormatUint(rv.Uint(), 10))
case reflect.Float32:
f := rv.Float()
if math.IsNaN(f) {
enc.wf("nan")
} else if math.IsInf(f, 0) {
enc.wf("%cinf", map[bool]byte{true: '-', false: '+'}[math.Signbit(f)])
} else {
enc.wf(floatAddDecimal(strconv.FormatFloat(f, 'f', -1, 32)))
}
case reflect.Float64:
f := rv.Float()
if math.IsNaN(f) {
enc.wf("nan")
} else if math.IsInf(f, 0) {
enc.wf("%cinf", map[bool]byte{true: '-', false: '+'}[math.Signbit(f)])
} else {
enc.wf(floatAddDecimal(strconv.FormatFloat(f, 'f', -1, 64)))
}
case reflect.Array, reflect.Slice:
enc.eArrayOrSliceElement(rv)
case reflect.Struct:
enc.eStruct(nil, rv, true)
case reflect.Map:
enc.eMap(nil, rv, true)
case reflect.Interface:
enc.eElement(rv.Elem())
default:
encPanic(fmt.Errorf("unexpected primitive type: %T", rv.Interface()))
}
}
// By the TOML spec, all floats must have a decimal with at least one number on
// either side.
func floatAddDecimal(fstr string) string {
if !strings.Contains(fstr, ".") {
return fstr + ".0"
}
return fstr
}
func (enc *Encoder) writeQuoted(s string) {
enc.wf("\"%s\"", quotedReplacer.Replace(s))
}
func (enc *Encoder) eArrayOrSliceElement(rv reflect.Value) {
length := rv.Len()
enc.wf("[")
for i := 0; i < length; i++ {
elem := rv.Index(i)
enc.eElement(elem)
if i != length-1 {
enc.wf(", ")
}
}
enc.wf("]")
}
func (enc *Encoder) eArrayOfTables(key Key, rv reflect.Value) {
if len(key) == 0 {
encPanic(errNoKey)
}
for i := 0; i < rv.Len(); i++ {
trv := rv.Index(i)
if isNil(trv) {
continue
}
enc.newline()
enc.wf("%s[[%s]]", enc.indentStr(key), key.maybeQuotedAll())
enc.newline()
enc.eMapOrStruct(key, trv, false)
}
}
func (enc *Encoder) eTable(key Key, rv reflect.Value) {
if len(key) == 1 {
// Output an extra newline between top-level tables.
// (The newline isn't written if nothing else has been written though.)
enc.newline()
}
if len(key) > 0 {
enc.wf("%s[%s]", enc.indentStr(key), key.maybeQuotedAll())
enc.newline()
}
enc.eMapOrStruct(key, rv, false)
}
func (enc *Encoder) eMapOrStruct(key Key, rv reflect.Value, inline bool) {
switch rv := eindirect(rv); rv.Kind() {
case reflect.Map:
enc.eMap(key, rv, inline)
case reflect.Struct:
enc.eStruct(key, rv, inline)
default:
// Should never happen?
panic("eTable: unhandled reflect.Value Kind: " + rv.Kind().String())
}
}
func (enc *Encoder) eMap(key Key, rv reflect.Value, inline bool) {
rt := rv.Type()
if rt.Key().Kind() != reflect.String {
encPanic(errNonString)
}
// Sort keys so that we have deterministic output. And write keys directly
// underneath this key first, before writing sub-structs or sub-maps.
var mapKeysDirect, mapKeysSub []string
for _, mapKey := range rv.MapKeys() {
k := mapKey.String()
if typeIsHash(tomlTypeOfGo(rv.MapIndex(mapKey))) {
mapKeysSub = append(mapKeysSub, k)
} else {
mapKeysDirect = append(mapKeysDirect, k)
}
}
var writeMapKeys = func(mapKeys []string, trailC bool) {
sort.Strings(mapKeys)
for i, mapKey := range mapKeys {
val := rv.MapIndex(reflect.ValueOf(mapKey))
if isNil(val) {
continue
}
if inline {
enc.writeKeyValue(Key{mapKey}, val, true)
if trailC || i != len(mapKeys)-1 {
enc.wf(", ")
}
} else {
enc.encode(key.add(mapKey), val)
}
}
}
if inline {
enc.wf("{")
}
writeMapKeys(mapKeysDirect, len(mapKeysSub) > 0)
writeMapKeys(mapKeysSub, false)
if inline {
enc.wf("}")
}
}
func (enc *Encoder) eStruct(key Key, rv reflect.Value, inline bool) {
// Write keys for fields directly under this key first, because if we write
// a field that creates a new table then all keys under it will be in that
// table (not the one we're writing here).
//
// Fields is a [][]int: for fieldsDirect this always has one entry (the
// struct index). For fieldsSub it contains two entries: the parent field
// index from tv, and the field indexes for the fields of the sub.
var (
rt = rv.Type()
fieldsDirect, fieldsSub [][]int
addFields func(rt reflect.Type, rv reflect.Value, start []int)
)
addFields = func(rt reflect.Type, rv reflect.Value, start []int) {
for i := 0; i < rt.NumField(); i++ {
f := rt.Field(i)
if f.PkgPath != "" && !f.Anonymous { /// Skip unexported fields.
continue
}
frv := rv.Field(i)
// Treat anonymous struct fields with tag names as though they are
// not anonymous, like encoding/json does.
//
// Non-struct anonymous fields use the normal encoding logic.
if f.Anonymous {
t := f.Type
switch t.Kind() {
case reflect.Struct:
if getOptions(f.Tag).name == "" {
addFields(t, frv, append(start, f.Index...))
continue
}
case reflect.Ptr:
if t.Elem().Kind() == reflect.Struct && getOptions(f.Tag).name == "" {
if !frv.IsNil() {
addFields(t.Elem(), frv.Elem(), append(start, f.Index...))
}
continue
}
}
}
if typeIsHash(tomlTypeOfGo(frv)) {
fieldsSub = append(fieldsSub, append(start, f.Index...))
} else {
fieldsDirect = append(fieldsDirect, append(start, f.Index...))
}
}
}
addFields(rt, rv, nil)
writeFields := func(fields [][]int) {
for _, fieldIndex := range fields {
fieldType := rt.FieldByIndex(fieldIndex)
fieldVal := rv.FieldByIndex(fieldIndex)
if isNil(fieldVal) { /// Don't write anything for nil fields.
continue
}
opts := getOptions(fieldType.Tag)
if opts.skip {
continue
}
keyName := fieldType.Name
if opts.name != "" {
keyName = opts.name
}
if opts.omitempty && isEmpty(fieldVal) {
continue
}
if opts.omitzero && isZero(fieldVal) {
continue
}
if inline {
enc.writeKeyValue(Key{keyName}, fieldVal, true)
if fieldIndex[0] != len(fields)-1 {
enc.wf(", ")
}
} else {
enc.encode(key.add(keyName), fieldVal)
}
}
}
if inline {
enc.wf("{")
}
writeFields(fieldsDirect)
writeFields(fieldsSub)
if inline {
enc.wf("}")
}
}
// tomlTypeOfGo returns the TOML type of a Go value. It is used to determine
// whether the types of array elements are mixed (which is forbidden). The
// returned type may be `nil`, which means no concrete TOML type could be
// found; a nil Go value, for example, is illegal as an array element.
func tomlTypeOfGo(rv reflect.Value) tomlType {
if isNil(rv) || !rv.IsValid() {
return nil
}
switch rv.Kind() {
case reflect.Bool:
return tomlBool
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
reflect.Int64,
reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32,
reflect.Uint64:
return tomlInteger
case reflect.Float32, reflect.Float64:
return tomlFloat
case reflect.Array, reflect.Slice:
if typeEqual(tomlHash, tomlArrayType(rv)) {
return tomlArrayHash
}
return tomlArray
case reflect.Ptr, reflect.Interface:
return tomlTypeOfGo(rv.Elem())
case reflect.String:
return tomlString
case reflect.Map:
return tomlHash
case reflect.Struct:
switch rv.Interface().(type) {
case time.Time:
return tomlDatetime
case encoding.TextMarshaler:
return tomlString
default:
// Someone used a pointer receiver: we can make it work for pointer
// values.
if rv.CanAddr() {
_, ok := rv.Addr().Interface().(encoding.TextMarshaler)
if ok {
return tomlString
}
}
return tomlHash
}
default:
_, ok := rv.Interface().(encoding.TextMarshaler)
if ok {
return tomlString
}
encPanic(errors.New("unsupported type: " + rv.Kind().String()))
panic("") // Need *some* return value
}
}
// tomlArrayType returns the element type of a TOML array. The type returned
// may be nil if it cannot be determined (e.g., a nil slice or a zero-length
// slice). This function may also panic if it finds a type that cannot be
// expressed in TOML (such as nil elements, heterogeneous arrays or directly
// nested arrays of tables).
func tomlArrayType(rv reflect.Value) tomlType {
if isNil(rv) || !rv.IsValid() || rv.Len() == 0 {
return nil
}
/// Don't allow nil.
rvlen := rv.Len()
for i := 1; i < rvlen; i++ {
if tomlTypeOfGo(rv.Index(i)) == nil {
encPanic(errArrayNilElement)
}
}
firstType := tomlTypeOfGo(rv.Index(0))
if firstType == nil {
encPanic(errArrayNilElement)
}
return firstType
}
type tagOptions struct {
skip bool // "-"
name string
omitempty bool
omitzero bool
}
func getOptions(tag reflect.StructTag) tagOptions {
t := tag.Get("toml")
if t == "-" {
return tagOptions{skip: true}
}
var opts tagOptions
parts := strings.Split(t, ",")
opts.name = parts[0]
for _, s := range parts[1:] {
switch s {
case "omitempty":
opts.omitempty = true
case "omitzero":
opts.omitzero = true
}
}
return opts
}
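// Hypothetical sketch (not part of the upstream source): the `toml` struct tag
// is split on commas; the first part becomes the key name and the remaining
// parts toggle the omitempty/omitzero flags, while "-" skips the field.
func getOptionsExample() []tagOptions {
type conf struct {
Addr string `toml:"address,omitempty"`
Port int `toml:"-"`
}
t := reflect.TypeOf(conf{})
return []tagOptions{
getOptions(t.Field(0).Tag), // {name: "address", omitempty: true}
getOptions(t.Field(1).Tag), // {skip: true}
}
}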
func isZero(rv reflect.Value) bool {
switch rv.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return rv.Int() == 0
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return rv.Uint() == 0
case reflect.Float32, reflect.Float64:
return rv.Float() == 0.0
}
return false
}
func isEmpty(rv reflect.Value) bool {
switch rv.Kind() {
case reflect.Array, reflect.Slice, reflect.Map, reflect.String:
return rv.Len() == 0
case reflect.Bool:
return !rv.Bool()
}
return false
}
func (enc *Encoder) newline() {
if enc.hasWritten {
enc.wf("\n")
}
}
// Write a key/value pair:
//
// key = <any value>
//
// If inline is true it won't add a newline at the end.
func (enc *Encoder) writeKeyValue(key Key, val reflect.Value, inline bool) {
if len(key) == 0 {
encPanic(errNoKey)
}
enc.wf("%s%s = ", enc.indentStr(key), key.maybeQuoted(len(key)-1))
enc.eElement(val)
if !inline {
enc.newline()
}
}
func (enc *Encoder) wf(format string, v ...interface{}) {
if _, err := fmt.Fprintf(enc.w, format, v...); err != nil {
encPanic(err)
}
enc.hasWritten = true
}
func (enc *Encoder) indentStr(key Key) string {
return strings.Repeat(enc.Indent, len(key)-1)
}
func encPanic(err error) {
panic(tomlEncodeError{err})
}
func eindirect(v reflect.Value) reflect.Value {
switch v.Kind() {
case reflect.Ptr, reflect.Interface:
return eindirect(v.Elem())
default:
return v
}
}
func isNil(rv reflect.Value) bool {
switch rv.Kind() {
case reflect.Interface, reflect.Map, reflect.Ptr, reflect.Slice:
return rv.IsNil()
default:
return false
}
}

36
vendor/github.com/BurntSushi/toml/internal/tz.go generated vendored Normal file

@@ -0,0 +1,36 @@
package internal
import "time"
// Timezones used for local datetime, date, and time TOML types.
//
// The exact way times and dates without a timezone should be interpreted is not
// well-defined in the TOML specification and left to the implementation. These
// default to the current local timezone offset of the computer, but this can
// be changed by setting these variables before decoding.
//
// TODO:
// Ideally we'd like to offer people the ability to configure the used timezone
// by setting Decoder.Timezone and Encoder.Timezone; however, this is a bit
// tricky: the reason we use three different variables for this is to support
// round-tripping without these specific TZ names we wouldn't know which
// format to use.
//
// There isn't a good way to encode this right now though, and passing this sort
// of information also ties in to various related issues such as string format
// encoding, encoding of comments, etc.
//
// So, for the time being, just put this in internal until we can write a good
// comprehensive API for doing all of this.
//
// The reason they're exported is that they're referred to from e.g.
// internal/tag.
//
// Note that this behaviour is valid according to the TOML spec as the exact
// behaviour is left up to implementations.
var (
localOffset = func() int { _, o := time.Now().Zone(); return o }()
LocalDatetime = time.FixedZone("datetime-local", localOffset)
LocalDate = time.FixedZone("date-local", localOffset)
LocalTime = time.FixedZone("time-local", localOffset)
)
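// Hypothetical sketch (not part of the upstream source): a caller that wants
// local datetimes, dates, and times interpreted in a fixed zone (say, UTC)
// could overwrite these variables before decoding, as the comment above notes.
func forceUTCExample() {
LocalDatetime = time.UTC
LocalDate = time.UTC
LocalTime = time.UTC
}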

1225
vendor/github.com/BurntSushi/toml/lex.go generated vendored Normal file

File diff suppressed because it is too large

739
vendor/github.com/BurntSushi/toml/parse.go generated vendored Normal file

@@ -0,0 +1,739 @@
package toml
import (
"errors"
"fmt"
"strconv"
"strings"
"time"
"unicode/utf8"
"github.com/BurntSushi/toml/internal"
)
type parser struct {
mapping map[string]interface{}
types map[string]tomlType
lx *lexer
ordered []Key // List of keys in the order that they appear in the TOML data.
context Key // Full key for the current hash in scope.
currentKey string // Base key name for everything except hashes.
approxLine int // Rough approximation of line number
implicits map[string]bool // Record implied keys (e.g. 'key.group.names').
}
// ParseError is used when a file can't be parsed: for example invalid integer
// literals, duplicate keys, etc.
type ParseError struct {
Message string
Line int
LastKey string
}
func (pe ParseError) Error() string {
return fmt.Sprintf("Near line %d (last key parsed '%s'): %s",
pe.Line, pe.LastKey, pe.Message)
}
func parse(data string) (p *parser, err error) {
defer func() {
if r := recover(); r != nil {
var ok bool
if err, ok = r.(ParseError); ok {
return
}
panic(r)
}
}()
// Read over BOM; do this here as the lexer calls utf8.DecodeRuneInString()
// which mangles stuff.
if strings.HasPrefix(data, "\xff\xfe") || strings.HasPrefix(data, "\xfe\xff") {
data = data[2:]
}
// Examine first few bytes for NULL bytes; this probably means it's a UTF-16
// file (second byte in surrogate pair being NULL). Again, do this here to
// avoid having to deal with UTF-8/16 stuff in the lexer.
ex := 6
if len(data) < 6 {
ex = len(data)
}
if strings.ContainsRune(data[:ex], 0) {
return nil, errors.New("files cannot contain NULL bytes; probably using UTF-16; TOML files must be UTF-8")
}
p = &parser{
mapping: make(map[string]interface{}),
types: make(map[string]tomlType),
lx: lex(data),
ordered: make([]Key, 0),
implicits: make(map[string]bool),
}
for {
item := p.next()
if item.typ == itemEOF {
break
}
p.topLevel(item)
}
return p, nil
}
func (p *parser) panicf(format string, v ...interface{}) {
msg := fmt.Sprintf(format, v...)
panic(ParseError{
Message: msg,
Line: p.approxLine,
LastKey: p.current(),
})
}
func (p *parser) next() item {
it := p.lx.nextItem()
//fmt.Printf("ITEM %-18s line %-3d │ %q\n", it.typ, it.line, it.val)
if it.typ == itemError {
p.panicf("%s", it.val)
}
return it
}
func (p *parser) bug(format string, v ...interface{}) {
panic(fmt.Sprintf("BUG: "+format+"\n\n", v...))
}
func (p *parser) expect(typ itemType) item {
it := p.next()
p.assertEqual(typ, it.typ)
return it
}
func (p *parser) assertEqual(expected, got itemType) {
if expected != got {
p.bug("Expected '%s' but got '%s'.", expected, got)
}
}
func (p *parser) topLevel(item item) {
switch item.typ {
case itemCommentStart: // # ..
p.approxLine = item.line
p.expect(itemText)
case itemTableStart: // [ .. ]
name := p.next()
p.approxLine = name.line
var key Key
for ; name.typ != itemTableEnd && name.typ != itemEOF; name = p.next() {
key = append(key, p.keyString(name))
}
p.assertEqual(itemTableEnd, name.typ)
p.addContext(key, false)
p.setType("", tomlHash)
p.ordered = append(p.ordered, key)
case itemArrayTableStart: // [[ .. ]]
name := p.next()
p.approxLine = name.line
var key Key
for ; name.typ != itemArrayTableEnd && name.typ != itemEOF; name = p.next() {
key = append(key, p.keyString(name))
}
p.assertEqual(itemArrayTableEnd, name.typ)
p.addContext(key, true)
p.setType("", tomlArrayHash)
p.ordered = append(p.ordered, key)
case itemKeyStart: // key = ..
outerContext := p.context
/// Read all the key parts (e.g. 'a' and 'b' in 'a.b')
k := p.next()
p.approxLine = k.line
var key Key
for ; k.typ != itemKeyEnd && k.typ != itemEOF; k = p.next() {
key = append(key, p.keyString(k))
}
p.assertEqual(itemKeyEnd, k.typ)
/// The current key is the last part.
p.currentKey = key[len(key)-1]
/// All the other parts (if any) are the context; need to set each part
/// as implicit.
context := key[:len(key)-1]
for i := range context {
p.addImplicitContext(append(p.context, context[i:i+1]...))
}
/// Set value.
val, typ := p.value(p.next(), false)
p.set(p.currentKey, val, typ)
p.ordered = append(p.ordered, p.context.add(p.currentKey))
/// Remove the context we added (preserving any context from [tbl] lines).
p.context = outerContext
p.currentKey = ""
default:
p.bug("Unexpected type at top level: %s", item.typ)
}
}
// Gets a string for a key (or part of a key in a table name).
func (p *parser) keyString(it item) string {
switch it.typ {
case itemText:
return it.val
case itemString, itemMultilineString,
itemRawString, itemRawMultilineString:
s, _ := p.value(it, false)
return s.(string)
default:
p.bug("Unexpected key type: %s", it.typ)
}
panic("unreachable")
}
var datetimeRepl = strings.NewReplacer(
"z", "Z",
"t", "T",
" ", "T")
// value translates an expected value from the lexer into a Go value wrapped
// as an empty interface.
func (p *parser) value(it item, parentIsArray bool) (interface{}, tomlType) {
switch it.typ {
case itemString:
return p.replaceEscapes(it.val), p.typeOfPrimitive(it)
case itemMultilineString:
return p.replaceEscapes(stripFirstNewline(stripEscapedNewlines(it.val))), p.typeOfPrimitive(it)
case itemRawString:
return it.val, p.typeOfPrimitive(it)
case itemRawMultilineString:
return stripFirstNewline(it.val), p.typeOfPrimitive(it)
case itemInteger:
return p.valueInteger(it)
case itemFloat:
return p.valueFloat(it)
case itemBool:
switch it.val {
case "true":
return true, p.typeOfPrimitive(it)
case "false":
return false, p.typeOfPrimitive(it)
default:
p.bug("Expected boolean value, but got '%s'.", it.val)
}
case itemDatetime:
return p.valueDatetime(it)
case itemArray:
return p.valueArray(it)
case itemInlineTableStart:
return p.valueInlineTable(it, parentIsArray)
default:
p.bug("Unexpected value type: %s", it.typ)
}
panic("unreachable")
}
func (p *parser) valueInteger(it item) (interface{}, tomlType) {
if !numUnderscoresOK(it.val) {
p.panicf("Invalid integer %q: underscores must be surrounded by digits", it.val)
}
if numHasLeadingZero(it.val) {
p.panicf("Invalid integer %q: cannot have leading zeroes", it.val)
}
num, err := strconv.ParseInt(it.val, 0, 64)
if err != nil {
// Distinguish integer values. Normally, it'd be a bug if the lexer
// provides an invalid integer, but it's possible that the number is
// out of range of valid values (which the lexer cannot determine).
// So mark the former as a bug but the latter as a legitimate user
// error.
if e, ok := err.(*strconv.NumError); ok && e.Err == strconv.ErrRange {
p.panicf("Integer '%s' is out of the range of 64-bit signed integers.", it.val)
} else {
p.bug("Expected integer value, but got '%s'.", it.val)
}
}
return num, p.typeOfPrimitive(it)
}
func (p *parser) valueFloat(it item) (interface{}, tomlType) {
parts := strings.FieldsFunc(it.val, func(r rune) bool {
switch r {
case '.', 'e', 'E':
return true
}
return false
})
for _, part := range parts {
if !numUnderscoresOK(part) {
p.panicf("Invalid float %q: underscores must be surrounded by digits", it.val)
}
}
if len(parts) > 0 && numHasLeadingZero(parts[0]) {
p.panicf("Invalid float %q: cannot have leading zeroes", it.val)
}
if !numPeriodsOK(it.val) {
// As a special case, numbers like '123.' or '1.e2',
// which are valid as far as Go/strconv are concerned,
// must be rejected because TOML says that a fractional
// part consists of '.' followed by 1+ digits.
p.panicf("Invalid float %q: '.' must be followed by one or more digits", it.val)
}
val := strings.Replace(it.val, "_", "", -1)
if val == "+nan" || val == "-nan" { // Go doesn't support this, but TOML spec does.
val = "nan"
}
num, err := strconv.ParseFloat(val, 64)
if err != nil {
if e, ok := err.(*strconv.NumError); ok && e.Err == strconv.ErrRange {
p.panicf("Float '%s' is out of the range of 64-bit IEEE-754 floating-point numbers.", it.val)
} else {
p.panicf("Invalid float value: %q", it.val)
}
}
return num, p.typeOfPrimitive(it)
}
var dtTypes = []struct {
fmt string
zone *time.Location
}{
{time.RFC3339Nano, time.Local},
{"2006-01-02T15:04:05.999999999", internal.LocalDatetime},
{"2006-01-02", internal.LocalDate},
{"15:04:05.999999999", internal.LocalTime},
}
func (p *parser) valueDatetime(it item) (interface{}, tomlType) {
it.val = datetimeRepl.Replace(it.val)
var (
t time.Time
ok bool
err error
)
for _, dt := range dtTypes {
t, err = time.ParseInLocation(dt.fmt, it.val, dt.zone)
if err == nil {
ok = true
break
}
}
if !ok {
p.panicf("Invalid TOML Datetime: %q.", it.val)
}
return t, p.typeOfPrimitive(it)
}
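// Hypothetical sketch (not part of the upstream source): valueDatetime tries
// each layout/zone pair in dtTypes until one parses, after normalizing the
// lowercase 't'/'z' and space separators that TOML allows. This mirrors that
// loop for a single input string.
func parseDatetimeExample(raw string) (time.Time, error) {
raw = datetimeRepl.Replace(raw) // e.g. "1979-05-27 07:32:00z" -> "1979-05-27T07:32:00Z"
var lastErr error
for _, dt := range dtTypes {
t, err := time.ParseInLocation(dt.fmt, raw, dt.zone)
if err == nil {
return t, nil
}
lastErr = err
}
return time.Time{}, lastErr
}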
func (p *parser) valueArray(it item) (interface{}, tomlType) {
p.setType(p.currentKey, tomlArray)
// p.setType(p.currentKey, typ)
var (
array []interface{}
types []tomlType
)
for it = p.next(); it.typ != itemArrayEnd; it = p.next() {
if it.typ == itemCommentStart {
p.expect(itemText)
continue
}
val, typ := p.value(it, true)
array = append(array, val)
types = append(types, typ)
}
return array, tomlArray
}
func (p *parser) valueInlineTable(it item, parentIsArray bool) (interface{}, tomlType) {
var (
hash = make(map[string]interface{})
outerContext = p.context
outerKey = p.currentKey
)
p.context = append(p.context, p.currentKey)
prevContext := p.context
p.currentKey = ""
p.addImplicit(p.context)
p.addContext(p.context, parentIsArray)
/// Loop over all table key/value pairs.
for it := p.next(); it.typ != itemInlineTableEnd; it = p.next() {
if it.typ == itemCommentStart {
p.expect(itemText)
continue
}
/// Read all key parts.
k := p.next()
p.approxLine = k.line
var key Key
for ; k.typ != itemKeyEnd && k.typ != itemEOF; k = p.next() {
key = append(key, p.keyString(k))
}
p.assertEqual(itemKeyEnd, k.typ)
/// The current key is the last part.
p.currentKey = key[len(key)-1]
/// All the other parts (if any) are the context; need to set each part
/// as implicit.
context := key[:len(key)-1]
for i := range context {
p.addImplicitContext(append(p.context, context[i:i+1]...))
}
/// Set the value.
val, typ := p.value(p.next(), false)
p.set(p.currentKey, val, typ)
p.ordered = append(p.ordered, p.context.add(p.currentKey))
hash[p.currentKey] = val
/// Restore context.
p.context = prevContext
}
p.context = outerContext
p.currentKey = outerKey
return hash, tomlHash
}
// numHasLeadingZero checks if this number has leading zeroes, allowing for '0',
// +/- signs, and base prefixes.
func numHasLeadingZero(s string) bool {
if len(s) > 1 && s[0] == '0' && isDigit(rune(s[1])) { // >1 to allow "0" and isDigit to allow 0x
return true
}
if len(s) > 2 && (s[0] == '-' || s[0] == '+') && s[1] == '0' {
return true
}
return false
}
// numUnderscoresOK checks whether each underscore in s is surrounded by
// characters that are not underscores.
func numUnderscoresOK(s string) bool {
switch s {
case "nan", "+nan", "-nan", "inf", "-inf", "+inf":
return true
}
accept := false
for _, r := range s {
if r == '_' {
if !accept {
return false
}
}
// isHexadecimal is a superset of all the permissible characters
// surrounding an underscore.
accept = isHexadecimal(r)
}
return accept
}
// numPeriodsOK checks whether every period in s is followed by a digit.
func numPeriodsOK(s string) bool {
period := false
for _, r := range s {
if period && !isDigit(r) {
return false
}
period = r == '.'
}
return !period
}
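// Hypothetical sketch (not part of the upstream source): a few literals paired
// with the checks above. Underscores must sit between digits, a '.' must be
// followed by a digit, and decimal integers may not carry a leading zero.
func numberSyntaxExamples() map[string]bool {
return map[string]bool{
"underscore between digits ok": numUnderscoresOK("1_000"), // true
"leading underscore rejected": !numUnderscoresOK("_1000"), // true
"dot followed by digit ok": numPeriodsOK("3.14"), // true
"trailing dot rejected": !numPeriodsOK("123."), // true
"leading zero detected": numHasLeadingZero("0123"), // true
}
}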
// Set the current context of the parser, where the context is either a hash or
// an array of hashes, depending on the value of the `array` parameter.
//
// Establishing the context also makes sure that the key isn't a duplicate, and
// will create implicit hashes automatically.
func (p *parser) addContext(key Key, array bool) {
var ok bool
// Always start at the top level and drill down for our context.
hashContext := p.mapping
keyContext := make(Key, 0)
// We only need implicit hashes for key[0:-1]
for _, k := range key[0 : len(key)-1] {
_, ok = hashContext[k]
keyContext = append(keyContext, k)
// No key? Make an implicit hash and move on.
if !ok {
p.addImplicit(keyContext)
hashContext[k] = make(map[string]interface{})
}
// If the hash context is actually an array of tables, then set
// the hash context to the last element in that array.
//
// Otherwise, it better be a table, since this MUST be a key group (by
// virtue of it not being the last element in a key).
switch t := hashContext[k].(type) {
case []map[string]interface{}:
hashContext = t[len(t)-1]
case map[string]interface{}:
hashContext = t
default:
p.panicf("Key '%s' was already created as a hash.", keyContext)
}
}
p.context = keyContext
if array {
// If this is the first element for this array, then allocate a new
// list of tables for it.
k := key[len(key)-1]
if _, ok := hashContext[k]; !ok {
hashContext[k] = make([]map[string]interface{}, 0, 4)
}
// Add a new table. But make sure the key hasn't already been used
// for something else.
if hash, ok := hashContext[k].([]map[string]interface{}); ok {
hashContext[k] = append(hash, make(map[string]interface{}))
} else {
p.panicf("Key '%s' was already created and cannot be used as an array.", keyContext)
}
} else {
p.setValue(key[len(key)-1], make(map[string]interface{}))
}
p.context = append(p.context, key[len(key)-1])
}
// set calls setValue and setType.
func (p *parser) set(key string, val interface{}, typ tomlType) {
p.setValue(p.currentKey, val)
p.setType(p.currentKey, typ)
}
// setValue sets the given key to the given value in the current context.
// It will make sure that the key hasn't already been defined, account for
// implicit key groups.
func (p *parser) setValue(key string, value interface{}) {
var (
tmpHash interface{}
ok bool
hash = p.mapping
keyContext Key
)
for _, k := range p.context {
keyContext = append(keyContext, k)
if tmpHash, ok = hash[k]; !ok {
p.bug("Context for key '%s' has not been established.", keyContext)
}
switch t := tmpHash.(type) {
case []map[string]interface{}:
// The context is a table of hashes. Pick the most recent table
// defined as the current hash.
hash = t[len(t)-1]
case map[string]interface{}:
hash = t
default:
p.panicf("Key '%s' has already been defined.", keyContext)
}
}
keyContext = append(keyContext, key)
if _, ok := hash[key]; ok {
// Normally redefining keys isn't allowed, but the key could have been
// defined implicitly and it's allowed to be redefined concretely. (See
// the `valid/implicit-and-explicit-after.toml` in toml-test)
//
// But we have to make sure to stop marking it as an implicit. (So that
// another redefinition provokes an error.)
//
// Note that since it has already been defined (as a hash), we don't
// want to overwrite it. So our business is done.
if p.isArray(keyContext) {
p.removeImplicit(keyContext)
hash[key] = value
return
}
if p.isImplicit(keyContext) {
p.removeImplicit(keyContext)
return
}
// Otherwise, we have a concrete key trying to override a previous
// key, which is *always* wrong.
p.panicf("Key '%s' has already been defined.", keyContext)
}
hash[key] = value
}
// setType sets the type of a particular value at a given key.
// It should be called immediately AFTER setValue.
//
// Note that if `key` is empty, then the type given will be applied to the
// current context (which is either a table or an array of tables).
func (p *parser) setType(key string, typ tomlType) {
keyContext := make(Key, 0, len(p.context)+1)
for _, k := range p.context {
keyContext = append(keyContext, k)
}
if len(key) > 0 { // allow type setting for hashes
keyContext = append(keyContext, key)
}
p.types[keyContext.String()] = typ
}
// Implicit keys need to be created when tables are implied in "a.b.c.d = 1" and
// "[a.b.c]" (the "a", "b", and "c" hashes are never created explicitly).
func (p *parser) addImplicit(key Key) { p.implicits[key.String()] = true }
func (p *parser) removeImplicit(key Key) { p.implicits[key.String()] = false }
func (p *parser) isImplicit(key Key) bool { return p.implicits[key.String()] }
func (p *parser) isArray(key Key) bool { return p.types[key.String()] == tomlArray }
func (p *parser) addImplicitContext(key Key) {
p.addImplicit(key)
p.addContext(key, false)
}
// current returns the full key name of the current context.
func (p *parser) current() string {
if len(p.currentKey) == 0 {
return p.context.String()
}
if len(p.context) == 0 {
return p.currentKey
}
return fmt.Sprintf("%s.%s", p.context, p.currentKey)
}
func stripFirstNewline(s string) string {
if len(s) > 0 && s[0] == '\n' {
return s[1:]
}
if len(s) > 1 && s[0] == '\r' && s[1] == '\n' {
return s[2:]
}
return s
}
// Remove newlines inside triple-quoted strings if a line ends with "\".
func stripEscapedNewlines(s string) string {
split := strings.Split(s, "\n")
if len(split) < 1 {
return s
}
escNL := false // Keep track of whether the last non-blank line was escaped.
for i, line := range split {
line = strings.TrimRight(line, " \t\r")
if len(line) == 0 || line[len(line)-1] != '\\' {
split[i] = strings.TrimRight(split[i], "\r")
if !escNL && i != len(split)-1 {
split[i] += "\n"
}
continue
}
escBS := true
for j := len(line) - 1; j >= 0 && line[j] == '\\'; j-- {
escBS = !escBS
}
if escNL {
line = strings.TrimLeft(line, " \t\r")
}
escNL = !escBS
if escBS {
split[i] += "\n"
continue
}
split[i] = line[:len(line)-1] // Remove \
if len(split)-1 > i {
split[i+1] = strings.TrimLeft(split[i+1], " \t\r")
}
}
return strings.Join(split, "")
}
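// Hypothetical sketch (not part of the upstream source): a trailing unescaped
// backslash removes the newline and the leading whitespace of the next line,
// which is TOML's line-ending-backslash rule for multi-line basic strings.
func stripEscapedNewlinesExample() string {
return stripEscapedNewlines("The quick brown \\\n    fox jumps over") // "The quick brown fox jumps over"
}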
func (p *parser) replaceEscapes(str string) string {
var replaced []rune
s := []byte(str)
r := 0
for r < len(s) {
if s[r] != '\\' {
c, size := utf8.DecodeRune(s[r:])
r += size
replaced = append(replaced, c)
continue
}
r += 1
if r >= len(s) {
p.bug("Escape sequence at end of string.")
return ""
}
switch s[r] {
default:
p.bug("Expected valid escape code after \\, but got %q.", s[r])
return ""
case ' ', '\t':
p.panicf("invalid escape: '\\%c'", s[r])
return ""
case 'b':
replaced = append(replaced, rune(0x0008))
r += 1
case 't':
replaced = append(replaced, rune(0x0009))
r += 1
case 'n':
replaced = append(replaced, rune(0x000A))
r += 1
case 'f':
replaced = append(replaced, rune(0x000C))
r += 1
case 'r':
replaced = append(replaced, rune(0x000D))
r += 1
case '"':
replaced = append(replaced, rune(0x0022))
r += 1
case '\\':
replaced = append(replaced, rune(0x005C))
r += 1
case 'u':
// At this point, we know we have a Unicode escape of the form
// `uXXXX` at [r, r+5). (Because the lexer guarantees this
// for us.)
escaped := p.asciiEscapeToUnicode(s[r+1 : r+5])
replaced = append(replaced, escaped)
r += 5
case 'U':
// At this point, we know we have a Unicode escape of the form
// `UXXXXXXXX` at [r, r+9). (Because the lexer guarantees this
// for us.)
escaped := p.asciiEscapeToUnicode(s[r+1 : r+9])
replaced = append(replaced, escaped)
r += 9
}
}
return string(replaced)
}
func (p *parser) asciiEscapeToUnicode(bs []byte) rune {
s := string(bs)
hex, err := strconv.ParseUint(strings.ToLower(s), 16, 32)
if err != nil {
p.bug("Could not parse '%s' as a hexadecimal number, but the "+
"lexer claims it's OK: %s", s, err)
}
if !utf8.ValidRune(rune(hex)) {
p.panicf("Escaped character '\\u%s' is not valid UTF-8.", s)
}
return rune(hex)
}
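// Hypothetical sketch (not part of the upstream source): because the lexer
// guarantees exactly four hex digits after \u (and eight after \U), decoding
// an escape reduces to ParseUint plus a utf8.ValidRune check, just as
// asciiEscapeToUnicode does above.
func unicodeEscapeExample() rune {
hex, err := strconv.ParseUint("00e9", 16, 32)
if err != nil || !utf8.ValidRune(rune(hex)) {
return utf8.RuneError
}
return rune(hex) // 'é'
}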

70
vendor/github.com/BurntSushi/toml/type_check.go generated vendored Normal file

@@ -0,0 +1,70 @@
package toml
// tomlType represents any Go type that corresponds to a TOML type.
// While the first draft of the TOML spec has a simplistic type system that
// probably doesn't need this level of sophistication, we seem to be militating
// toward adding real composite types.
type tomlType interface {
typeString() string
}
// typeEqual accepts any two types and returns true if they are equal.
func typeEqual(t1, t2 tomlType) bool {
if t1 == nil || t2 == nil {
return false
}
return t1.typeString() == t2.typeString()
}
func typeIsHash(t tomlType) bool {
return typeEqual(t, tomlHash) || typeEqual(t, tomlArrayHash)
}
type tomlBaseType string
func (btype tomlBaseType) typeString() string {
return string(btype)
}
func (btype tomlBaseType) String() string {
return btype.typeString()
}
var (
tomlInteger tomlBaseType = "Integer"
tomlFloat tomlBaseType = "Float"
tomlDatetime tomlBaseType = "Datetime"
tomlString tomlBaseType = "String"
tomlBool tomlBaseType = "Bool"
tomlArray tomlBaseType = "Array"
tomlHash tomlBaseType = "Hash"
tomlArrayHash tomlBaseType = "ArrayHash"
)
// typeOfPrimitive returns a tomlType of any primitive value in TOML.
// Primitive values are: Integer, Float, Datetime, String and Bool.
//
// Passing a lexer item other than the following will cause a BUG message
// to occur: itemString, itemBool, itemInteger, itemFloat, itemDatetime.
func (p *parser) typeOfPrimitive(lexItem item) tomlType {
switch lexItem.typ {
case itemInteger:
return tomlInteger
case itemFloat:
return tomlFloat
case itemDatetime:
return tomlDatetime
case itemString:
return tomlString
case itemMultilineString:
return tomlString
case itemRawString:
return tomlString
case itemRawMultilineString:
return tomlString
case itemBool:
return tomlBool
}
p.bug("Cannot infer primitive type of lex item '%s'.", lexItem)
panic("unreachable")
}
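// Hypothetical sketch (not part of the upstream source): typeEqual compares
// TOML types by name, and typeIsHash treats both plain tables and arrays of
// tables as hash-like, which is how eMap and eStruct split direct keys from
// sub-tables.
func typeCheckExample() (bool, bool, bool) {
return typeEqual(tomlString, tomlString), // true
typeIsHash(tomlArrayHash), // true
typeIsHash(tomlInteger) // false
}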

242
vendor/github.com/BurntSushi/toml/type_fields.go generated vendored Normal file

@@ -0,0 +1,242 @@
package toml
// Struct field handling is adapted from code in encoding/json:
//
// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the Go distribution.
import (
"reflect"
"sort"
"sync"
)
// A field represents a single field found in a struct.
type field struct {
name string // the name of the field (`toml` tag included)
tag bool // whether field has a `toml` tag
index []int // represents the depth of an anonymous field
typ reflect.Type // the type of the field
}
// byName sorts field by name, breaking ties with depth,
// then breaking ties with "name came from toml tag", then
// breaking ties with index sequence.
type byName []field
func (x byName) Len() int { return len(x) }
func (x byName) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
func (x byName) Less(i, j int) bool {
if x[i].name != x[j].name {
return x[i].name < x[j].name
}
if len(x[i].index) != len(x[j].index) {
return len(x[i].index) < len(x[j].index)
}
if x[i].tag != x[j].tag {
return x[i].tag
}
return byIndex(x).Less(i, j)
}
// byIndex sorts field by index sequence.
type byIndex []field
func (x byIndex) Len() int { return len(x) }
func (x byIndex) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
func (x byIndex) Less(i, j int) bool {
for k, xik := range x[i].index {
if k >= len(x[j].index) {
return false
}
if xik != x[j].index[k] {
return xik < x[j].index[k]
}
}
return len(x[i].index) < len(x[j].index)
}
// typeFields returns a list of fields that TOML should recognize for the given
// type. The algorithm is breadth-first search over the set of structs to
// include - the top struct and then any reachable anonymous structs.
func typeFields(t reflect.Type) []field {
// Anonymous fields to explore at the current level and the next.
current := []field{}
next := []field{{typ: t}}
// Count of queued names for current level and the next.
count := map[reflect.Type]int{}
nextCount := map[reflect.Type]int{}
// Types already visited at an earlier level.
visited := map[reflect.Type]bool{}
// Fields found.
var fields []field
for len(next) > 0 {
current, next = next, current[:0]
count, nextCount = nextCount, map[reflect.Type]int{}
for _, f := range current {
if visited[f.typ] {
continue
}
visited[f.typ] = true
// Scan f.typ for fields to include.
for i := 0; i < f.typ.NumField(); i++ {
sf := f.typ.Field(i)
if sf.PkgPath != "" && !sf.Anonymous { // unexported
continue
}
opts := getOptions(sf.Tag)
if opts.skip {
continue
}
index := make([]int, len(f.index)+1)
copy(index, f.index)
index[len(f.index)] = i
ft := sf.Type
if ft.Name() == "" && ft.Kind() == reflect.Ptr {
// Follow pointer.
ft = ft.Elem()
}
// Record found field and index sequence.
if opts.name != "" || !sf.Anonymous || ft.Kind() != reflect.Struct {
tagged := opts.name != ""
name := opts.name
if name == "" {
name = sf.Name
}
fields = append(fields, field{name, tagged, index, ft})
if count[f.typ] > 1 {
// If there were multiple instances, add a second,
// so that the annihilation code will see a duplicate.
// It only cares about the distinction between 1 or 2,
// so don't bother generating any more copies.
fields = append(fields, fields[len(fields)-1])
}
continue
}
// Record new anonymous struct to explore in next round.
nextCount[ft]++
if nextCount[ft] == 1 {
f := field{name: ft.Name(), index: index, typ: ft}
next = append(next, f)
}
}
}
}
sort.Sort(byName(fields))
// Delete all fields that are hidden by the Go rules for embedded fields,
// except that fields with TOML tags are promoted.
// The fields are sorted in primary order of name, secondary order
// of field index length. Loop over names; for each name, delete
// hidden fields by choosing the one dominant field that survives.
out := fields[:0]
for advance, i := 0, 0; i < len(fields); i += advance {
// One iteration per name.
// Find the sequence of fields with the name of this first field.
fi := fields[i]
name := fi.name
for advance = 1; i+advance < len(fields); advance++ {
fj := fields[i+advance]
if fj.name != name {
break
}
}
if advance == 1 { // Only one field with this name
out = append(out, fi)
continue
}
dominant, ok := dominantField(fields[i : i+advance])
if ok {
out = append(out, dominant)
}
}
fields = out
sort.Sort(byIndex(fields))
return fields
}
// dominantField looks through the fields, all of which are known to
// have the same name, to find the single field that dominates the
// others using Go's embedding rules, modified by the presence of
// TOML tags. If there are multiple top-level fields, the boolean
// will be false: This condition is an error in Go and we skip all
// the fields.
func dominantField(fields []field) (field, bool) {
// The fields are sorted in increasing index-length order. The winner
// must therefore be one with the shortest index length. Drop all
// longer entries, which is easy: just truncate the slice.
length := len(fields[0].index)
tagged := -1 // Index of first tagged field.
for i, f := range fields {
if len(f.index) > length {
fields = fields[:i]
break
}
if f.tag {
if tagged >= 0 {
// Multiple tagged fields at the same level: conflict.
// Return no field.
return field{}, false
}
tagged = i
}
}
if tagged >= 0 {
return fields[tagged], true
}
// All remaining fields have the same length. If there's more than one,
// we have a conflict (two fields named "X" at the same level) and we
// return no field.
if len(fields) > 1 {
return field{}, false
}
return fields[0], true
}
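// Hypothetical sketch (not part of the upstream source): with two fields named
// "Name" at different embedding depths, the shallower one dominates, so only
// the outer field is encoded; a `toml` tag would win regardless of depth.
func dominantFieldExample() []field {
type Inner struct {
Name string
}
type Outer struct {
Inner
Name string // depth 0: hides Inner.Name
}
return typeFields(reflect.TypeOf(Outer{})) // a single field for Outer.Name
}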
var fieldCache struct {
sync.RWMutex
m map[reflect.Type][]field
}
// cachedTypeFields is like typeFields but uses a cache to avoid repeated work.
func cachedTypeFields(t reflect.Type) []field {
fieldCache.RLock()
f := fieldCache.m[t]
fieldCache.RUnlock()
if f != nil {
return f
}
// Compute fields without lock.
// Might duplicate effort but won't hold other computations back.
f = typeFields(t)
if f == nil {
f = []field{}
}
fieldCache.Lock()
if fieldCache.m == nil {
fieldCache.m = map[reflect.Type][]field{}
}
fieldCache.m[t] = f
fieldCache.Unlock()
return f
}


@@ -14,15 +14,40 @@
package libcni
// Note this is the actual implementation of the CNI specification, which
// is reflected in the https://github.com/containernetworking/cni/blob/master/SPEC.md file
// it is typically bundled into runtime providers (i.e. containerd or cri-o would use this
// before calling runc or hcsshim). It is also bundled into CNI providers as well, for example,
// to add an IP to a container, to parse the configuration of the CNI and so on.
import (
"context"
"encoding/json"
"fmt"
"io/ioutil"
"os"
"path/filepath"
"strings"
"github.com/containernetworking/cni/pkg/invoke"
"github.com/containernetworking/cni/pkg/types"
"github.com/containernetworking/cni/pkg/types/create"
"github.com/containernetworking/cni/pkg/utils"
"github.com/containernetworking/cni/pkg/version"
)
var (
CacheDir = "/var/lib/cni"
)
const (
CNICacheV1 = "cniCacheV1"
)
// A RuntimeConf holds the arguments to one invocation of a CNI plugin
// excepting the network configuration, with the nested exception that
// the `runtimeConfig` from the network configuration is included
// here.
type RuntimeConf struct {
ContainerID string
NetNS string
@@ -34,6 +59,9 @@ type RuntimeConf struct {
// in this map which match the capabilities of the plugin are passed
// to the plugin
CapabilityArgs map[string]interface{}
// DEPRECATED. Will be removed in a future release.
CacheDir string
}
type NetworkConfig struct {
@@ -44,31 +72,62 @@ type NetworkConfig struct {
type NetworkConfigList struct {
Name string
CNIVersion string
DisableCheck bool
Plugins []*NetworkConfig
Bytes []byte
}
type CNI interface {
AddNetworkList(ctx context.Context, net *NetworkConfigList, rt *RuntimeConf) (types.Result, error)
CheckNetworkList(ctx context.Context, net *NetworkConfigList, rt *RuntimeConf) error
DelNetworkList(ctx context.Context, net *NetworkConfigList, rt *RuntimeConf) error
GetNetworkListCachedResult(net *NetworkConfigList, rt *RuntimeConf) (types.Result, error)
GetNetworkListCachedConfig(net *NetworkConfigList, rt *RuntimeConf) ([]byte, *RuntimeConf, error)
AddNetwork(ctx context.Context, net *NetworkConfig, rt *RuntimeConf) (types.Result, error)
CheckNetwork(ctx context.Context, net *NetworkConfig, rt *RuntimeConf) error
DelNetwork(ctx context.Context, net *NetworkConfig, rt *RuntimeConf) error
GetNetworkCachedResult(net *NetworkConfig, rt *RuntimeConf) (types.Result, error)
GetNetworkCachedConfig(net *NetworkConfig, rt *RuntimeConf) ([]byte, *RuntimeConf, error)
ValidateNetworkList(ctx context.Context, net *NetworkConfigList) ([]string, error)
ValidateNetwork(ctx context.Context, net *NetworkConfig) ([]string, error)
}
type CNIConfig struct {
Path []string
exec invoke.Exec
cacheDir string
}
// CNIConfig implements the CNI interface
var _ CNI = &CNIConfig{}
// NewCNIConfig returns a new CNIConfig object that will search for plugins
// in the given paths and use the given exec interface to run those plugins,
// or if the exec interface is not given, will use a default exec handler.
func NewCNIConfig(path []string, exec invoke.Exec) *CNIConfig {
return NewCNIConfigWithCacheDir(path, "", exec)
}
// NewCNIConfigWithCacheDir returns a new CNIConfig object that will search for plugins
// in the given paths and use the given exec interface to run those plugins,
// or if the exec interface is not given, will use a default exec handler.
// The given cache directory will be used for temporary data storage when needed.
func NewCNIConfigWithCacheDir(path []string, cacheDir string, exec invoke.Exec) *CNIConfig {
return &CNIConfig{
Path: path,
cacheDir: cacheDir,
exec: exec,
}
}
func buildOneConfig(name, cniVersion string, orig *NetworkConfig, prevResult types.Result, rt *RuntimeConf) (*NetworkConfig, error) {
var err error
inject := map[string]interface{}{
"name": name,
"cniVersion": cniVersion,
}
// Add previous plugin result
if prevResult != nil {
@@ -92,7 +151,7 @@ func buildOneConfig(list *NetworkConfigList, orig *NetworkConfig, prevResult typ
// These capabilities arguments are filtered through the plugin's advertised
// capabilities from its config JSON, and any keys in the CapabilityArgs
// matching plugin capabilities are added to the "runtimeConfig" dictionary
// sent to the plugin via JSON on stdin. For example, if the plugin's
// capabilities include "portMappings", and the CapabilityArgs map includes a
// "portMappings" key, that key and its value are added to the "runtimeConfig"
// dictionary to be passed to the plugin's stdin.
@@ -119,45 +178,295 @@ func injectRuntimeConfig(orig *NetworkConfig, rt *RuntimeConf) (*NetworkConfig,
return orig, nil
}
// ensure we have a usable exec if the CNIConfig was not given one
func (c *CNIConfig) ensureExec() invoke.Exec {
if c.exec == nil {
c.exec = &invoke.DefaultExec{
RawExec: &invoke.RawExec{Stderr: os.Stderr},
PluginDecoder: version.PluginDecoder{},
}
}
return c.exec
}
type cachedInfo struct {
Kind string `json:"kind"`
ContainerID string `json:"containerId"`
Config []byte `json:"config"`
IfName string `json:"ifName"`
NetworkName string `json:"networkName"`
CniArgs [][2]string `json:"cniArgs,omitempty"`
CapabilityArgs map[string]interface{} `json:"capabilityArgs,omitempty"`
RawResult map[string]interface{} `json:"result,omitempty"`
Result types.Result `json:"-"`
}
// getCacheDir returns the cache directory in this order:
// 1) global cacheDir from CNIConfig object
// 2) deprecated cacheDir from RuntimeConf object
// 3) fall back to default cache directory
func (c *CNIConfig) getCacheDir(rt *RuntimeConf) string {
if c.cacheDir != "" {
return c.cacheDir
}
if rt.CacheDir != "" {
return rt.CacheDir
}
return CacheDir
}
func (c *CNIConfig) getCacheFilePath(netName string, rt *RuntimeConf) (string, error) {
if netName == "" || rt.ContainerID == "" || rt.IfName == "" {
return "", fmt.Errorf("cache file path requires network name (%q), container ID (%q), and interface name (%q)", netName, rt.ContainerID, rt.IfName)
}
return filepath.Join(c.getCacheDir(rt), "results", fmt.Sprintf("%s-%s-%s", netName, rt.ContainerID, rt.IfName)), nil
}
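// Hypothetical sketch (not part of the upstream source, with illustrative
// values): cached results live under <cacheDir>/results and are keyed by
// network name, container ID, and interface name, matching getCacheFilePath.
func cachePathExample() string {
// With the default CacheDir this yields "/var/lib/cni/results/mynet-abc123-eth0".
return filepath.Join(CacheDir, "results", fmt.Sprintf("%s-%s-%s", "mynet", "abc123", "eth0"))
}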
func (c *CNIConfig) cacheAdd(result types.Result, config []byte, netName string, rt *RuntimeConf) error {
cached := cachedInfo{
Kind: CNICacheV1,
ContainerID: rt.ContainerID,
Config: config,
IfName: rt.IfName,
NetworkName: netName,
CniArgs: rt.Args,
CapabilityArgs: rt.CapabilityArgs,
}
// We need to get type.Result into cachedInfo as JSON map
// Marshal to []byte, then Unmarshal into cached.RawResult
data, err := json.Marshal(result)
if err != nil {
return err
}
err = json.Unmarshal(data, &cached.RawResult)
if err != nil {
return err
}
newBytes, err := json.Marshal(&cached)
if err := invoke.ExecPluginWithoutResult(pluginPath, newConf.Bytes, c.args("DEL", rt)); err != nil { newBytes, err := json.Marshal(&cached)
if err != nil {
return err
}
fname, err := c.getCacheFilePath(netName, rt)
if err != nil {
return err
}
if err := os.MkdirAll(filepath.Dir(fname), 0700); err != nil {
return err
}
return ioutil.WriteFile(fname, newBytes, 0600)
}
func (c *CNIConfig) cacheDel(netName string, rt *RuntimeConf) error {
fname, err := c.getCacheFilePath(netName, rt)
if err != nil {
// Ignore error
return nil
}
return os.Remove(fname)
}
func (c *CNIConfig) getCachedConfig(netName string, rt *RuntimeConf) ([]byte, *RuntimeConf, error) {
var bytes []byte
fname, err := c.getCacheFilePath(netName, rt)
if err != nil {
return nil, nil, err
}
bytes, err = ioutil.ReadFile(fname)
if err != nil {
// Ignore read errors; the cached result may not exist on-disk
return nil, nil, nil
}
unmarshaled := cachedInfo{}
if err := json.Unmarshal(bytes, &unmarshaled); err != nil {
return nil, nil, fmt.Errorf("failed to unmarshal cached network %q config: %w", netName, err)
}
if unmarshaled.Kind != CNICacheV1 {
return nil, nil, fmt.Errorf("read cached network %q config has wrong kind: %v", netName, unmarshaled.Kind)
}
newRt := *rt
if unmarshaled.CniArgs != nil {
newRt.Args = unmarshaled.CniArgs
}
newRt.CapabilityArgs = unmarshaled.CapabilityArgs
return unmarshaled.Config, &newRt, nil
}
func (c *CNIConfig) getLegacyCachedResult(netName, cniVersion string, rt *RuntimeConf) (types.Result, error) {
fname, err := c.getCacheFilePath(netName, rt)
if err != nil {
return nil, err
}
data, err := ioutil.ReadFile(fname)
if err != nil {
// Ignore read errors; the cached result may not exist on-disk
return nil, nil
}
// Load the cached result
result, err := create.CreateFromBytes(data)
if err != nil {
return nil, err
}
// Convert to the config version to ensure plugins get prevResult
// in the same version as the config. The cached result version
// should match the config version unless the config was changed
// while the container was running.
result, err = result.GetAsVersion(cniVersion)
if err != nil {
return nil, fmt.Errorf("failed to convert cached result to config version %q: %w", cniVersion, err)
}
return result, nil
}
func (c *CNIConfig) getCachedResult(netName, cniVersion string, rt *RuntimeConf) (types.Result, error) {
fname, err := c.getCacheFilePath(netName, rt)
if err != nil {
return nil, err
}
fdata, err := ioutil.ReadFile(fname)
if err != nil {
// Ignore read errors; the cached result may not exist on-disk
return nil, nil
}
cachedInfo := cachedInfo{}
if err := json.Unmarshal(fdata, &cachedInfo); err != nil || cachedInfo.Kind != CNICacheV1 {
return c.getLegacyCachedResult(netName, cniVersion, rt)
}
newBytes, err := json.Marshal(&cachedInfo.RawResult)
if err != nil {
return nil, fmt.Errorf("failed to marshal cached network %q config: %w", netName, err)
}
// Load the cached result
result, err := create.CreateFromBytes(newBytes)
if err != nil {
return nil, err
}
// Convert to the config version to ensure plugins get prevResult
// in the same version as the config. The cached result version
// should match the config version unless the config was changed
// while the container was running.
result, err = result.GetAsVersion(cniVersion)
if err != nil {
return nil, fmt.Errorf("failed to convert cached result to config version %q: %w", cniVersion, err)
}
return result, nil
}
// GetNetworkListCachedResult returns the cached Result of the previous
// AddNetworkList() operation for a network list, or an error.
func (c *CNIConfig) GetNetworkListCachedResult(list *NetworkConfigList, rt *RuntimeConf) (types.Result, error) {
return c.getCachedResult(list.Name, list.CNIVersion, rt)
}
// GetNetworkCachedResult returns the cached Result of the previous
// AddNetwork() operation for a network, or an error.
func (c *CNIConfig) GetNetworkCachedResult(net *NetworkConfig, rt *RuntimeConf) (types.Result, error) {
return c.getCachedResult(net.Network.Name, net.Network.CNIVersion, rt)
}
// GetNetworkListCachedConfig copies the input RuntimeConf to output
// RuntimeConf with fields updated with info from the cached Config.
func (c *CNIConfig) GetNetworkListCachedConfig(list *NetworkConfigList, rt *RuntimeConf) ([]byte, *RuntimeConf, error) {
return c.getCachedConfig(list.Name, rt)
}
// GetNetworkCachedConfig copies the input RuntimeConf to output
// RuntimeConf with fields updated with info from the cached Config.
func (c *CNIConfig) GetNetworkCachedConfig(net *NetworkConfig, rt *RuntimeConf) ([]byte, *RuntimeConf, error) {
return c.getCachedConfig(net.Network.Name, rt)
}
func (c *CNIConfig) addNetwork(ctx context.Context, name, cniVersion string, net *NetworkConfig, prevResult types.Result, rt *RuntimeConf) (types.Result, error) {
c.ensureExec()
pluginPath, err := c.exec.FindInPath(net.Network.Type, c.Path)
if err != nil {
return nil, err
}
if err := utils.ValidateContainerID(rt.ContainerID); err != nil {
return nil, err
}
if err := utils.ValidateNetworkName(name); err != nil {
return nil, err
}
if err := utils.ValidateInterfaceName(rt.IfName); err != nil {
return nil, err
}
newConf, err := buildOneConfig(name, cniVersion, net, prevResult, rt)
if err != nil {
return nil, err
}
return invoke.ExecPluginWithResult(ctx, pluginPath, newConf.Bytes, c.args("ADD", rt), c.exec)
}
// AddNetworkList executes a sequence of plugins with the ADD command
func (c *CNIConfig) AddNetworkList(ctx context.Context, list *NetworkConfigList, rt *RuntimeConf) (types.Result, error) {
var err error
var result types.Result
for _, net := range list.Plugins {
result, err = c.addNetwork(ctx, list.Name, list.CNIVersion, net, result, rt)
if err != nil {
return nil, fmt.Errorf("plugin %s failed (add): %w", pluginDescription(net.Network), err)
}
}
if err = c.cacheAdd(result, list.Bytes, list.Name, rt); err != nil {
return nil, fmt.Errorf("failed to set network %q cached result: %w", list.Name, err)
}
return result, nil
}
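// Hypothetical usage sketch (not part of the upstream source, illustrative
// values only): a runtime would typically build a RuntimeConf for the
// container and call AddNetworkList with a context that bounds plugin
// execution; the result of the last plugin is cached and returned.
func addNetworkListExample(ctx context.Context, cni CNI, list *NetworkConfigList) (types.Result, error) {
rt := &RuntimeConf{
ContainerID: "abc123",
NetNS: "/var/run/netns/abc123",
IfName: "eth0",
}
return cni.AddNetworkList(ctx, list, rt)
}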
func (c *CNIConfig) checkNetwork(ctx context.Context, name, cniVersion string, net *NetworkConfig, prevResult types.Result, rt *RuntimeConf) error {
c.ensureExec()
pluginPath, err := c.exec.FindInPath(net.Network.Type, c.Path)
if err != nil {
return err
}
newConf, err := buildOneConfig(name, cniVersion, net, prevResult, rt)
if err != nil {
return err
}
return invoke.ExecPluginWithoutResult(ctx, pluginPath, newConf.Bytes, c.args("CHECK", rt), c.exec)
}
// CheckNetworkList executes a sequence of plugins with the CHECK command
func (c *CNIConfig) CheckNetworkList(ctx context.Context, list *NetworkConfigList, rt *RuntimeConf) error {
// CHECK was added in CNI spec version 0.4.0 and higher
if gtet, err := version.GreaterThanOrEqualTo(list.CNIVersion, "0.4.0"); err != nil {
return err
} else if !gtet {
return fmt.Errorf("configuration version %q does not support the CHECK command", list.CNIVersion)
}
if list.DisableCheck {
return nil
}
cachedResult, err := c.getCachedResult(list.Name, list.CNIVersion, rt)
if err != nil {
return fmt.Errorf("failed to get network %q cached result: %w", list.Name, err)
}
for _, net := range list.Plugins {
if err := c.checkNetwork(ctx, list.Name, list.CNIVersion, net, cachedResult, rt); err != nil {
return err
}
}
@@ -165,45 +474,196 @@ func (c *CNIConfig) DelNetworkList(list *NetworkConfigList, rt *RuntimeConf) err
return nil
}
func (c *CNIConfig) delNetwork(ctx context.Context, name, cniVersion string, net *NetworkConfig, prevResult types.Result, rt *RuntimeConf) error {
c.ensureExec()
pluginPath, err := c.exec.FindInPath(net.Network.Type, c.Path)
if err != nil {
return err
}
newConf, err := buildOneConfig(name, cniVersion, net, prevResult, rt)
if err != nil {
return err
}
return invoke.ExecPluginWithoutResult(ctx, pluginPath, newConf.Bytes, c.args("DEL", rt), c.exec)
}
// DelNetworkList executes a sequence of plugins with the DEL command
func (c *CNIConfig) DelNetworkList(ctx context.Context, list *NetworkConfigList, rt *RuntimeConf) error {
var cachedResult types.Result
// Cached result on DEL was added in CNI spec version 0.4.0 and higher
if gtet, err := version.GreaterThanOrEqualTo(list.CNIVersion, "0.4.0"); err != nil {
return err
} else if gtet {
cachedResult, err = c.getCachedResult(list.Name, list.CNIVersion, rt)
if err != nil {
return fmt.Errorf("failed to get network %q cached result: %w", list.Name, err)
}
}
for i := len(list.Plugins) - 1; i >= 0; i-- {
net := list.Plugins[i]
if err := c.delNetwork(ctx, list.Name, list.CNIVersion, net, cachedResult, rt); err != nil {
return fmt.Errorf("plugin %s failed (delete): %w", pluginDescription(net.Network), err)
}
}
_ = c.cacheDel(list.Name, rt)
return nil
}
func pluginDescription(net *types.NetConf) string {
if net == nil {
return "<missing>"
}
pluginType := net.Type
out := fmt.Sprintf("type=%q", pluginType)
name := net.Name
if name != "" {
out += fmt.Sprintf(" name=%q", name)
}
return out
}
// AddNetwork executes the plugin with the ADD command
func (c *CNIConfig) AddNetwork(ctx context.Context, net *NetworkConfig, rt *RuntimeConf) (types.Result, error) {
result, err := c.addNetwork(ctx, net.Network.Name, net.Network.CNIVersion, net, nil, rt)
if err != nil {
return nil, err
}
if err = c.cacheAdd(result, net.Bytes, net.Network.Name, rt); err != nil {
return nil, fmt.Errorf("failed to set network %q cached result: %w", net.Network.Name, err)
}
return result, nil
}
// CheckNetwork executes the plugin with the CHECK command
func (c *CNIConfig) CheckNetwork(ctx context.Context, net *NetworkConfig, rt *RuntimeConf) error {
// CHECK was added in CNI spec version 0.4.0 and higher
if gtet, err := version.GreaterThanOrEqualTo(net.Network.CNIVersion, "0.4.0"); err != nil {
return err
} else if !gtet {
return fmt.Errorf("configuration version %q does not support the CHECK command", net.Network.CNIVersion)
}
cachedResult, err := c.getCachedResult(net.Network.Name, net.Network.CNIVersion, rt)
if err != nil {
return fmt.Errorf("failed to get network %q cached result: %w", net.Network.Name, err)
}
return c.checkNetwork(ctx, net.Network.Name, net.Network.CNIVersion, net, cachedResult, rt)
} }
// DelNetwork executes the plugin with the DEL command
func (c *CNIConfig) DelNetwork(ctx context.Context, net *NetworkConfig, rt *RuntimeConf) error {
var cachedResult types.Result
// Cached result on DEL was added in CNI spec version 0.4.0 and higher
if gtet, err := version.GreaterThanOrEqualTo(net.Network.CNIVersion, "0.4.0"); err != nil {
return err
} else if gtet {
cachedResult, err = c.getCachedResult(net.Network.Name, net.Network.CNIVersion, rt)
if err != nil {
return fmt.Errorf("failed to get network %q cached result: %w", net.Network.Name, err)
}
}
if err := c.delNetwork(ctx, net.Network.Name, net.Network.CNIVersion, net, cachedResult, rt); err != nil {
return err
}
_ = c.cacheDel(net.Network.Name, rt)
return nil
}
// ValidateNetworkList checks that a configuration is reasonably valid.
// - all the specified plugins exist on disk
// - every plugin supports the desired version.
//
// Returns a list of all capabilities supported by the configuration, or error
func (c *CNIConfig) ValidateNetworkList(ctx context.Context, list *NetworkConfigList) ([]string, error) {
version := list.CNIVersion
// holding map for seen caps (in case of duplicates)
caps := map[string]interface{}{}
errs := []error{}
for _, net := range list.Plugins {
if err := c.validatePlugin(ctx, net.Network.Type, version); err != nil {
errs = append(errs, err)
}
for c, enabled := range net.Network.Capabilities {
if !enabled {
continue
}
caps[c] = struct{}{}
}
}
if len(errs) > 0 {
return nil, fmt.Errorf("%v", errs)
}
// make caps list
cc := make([]string, 0, len(caps))
for c := range caps {
cc = append(cc, c)
}
return cc, nil
}
// ValidateNetwork checks that a configuration is reasonably valid.
// It uses the same logic as ValidateNetworkList)
// Returns a list of capabilities
func (c *CNIConfig) ValidateNetwork(ctx context.Context, net *NetworkConfig) ([]string, error) {
caps := []string{}
for c, ok := range net.Network.Capabilities {
if ok {
caps = append(caps, c)
}
}
if err := c.validatePlugin(ctx, net.Network.Type, net.Network.CNIVersion); err != nil {
return nil, err
}
return caps, nil
}
// validatePlugin checks that an individual plugin's configuration is sane
func (c *CNIConfig) validatePlugin(ctx context.Context, pluginName, expectedVersion string) error {
c.ensureExec()
pluginPath, err := c.exec.FindInPath(pluginName, c.Path)
if err != nil {
return err
}
if expectedVersion == "" {
expectedVersion = "0.1.0"
}
vi, err := invoke.GetVersionInfo(ctx, pluginPath, c.exec)
if err != nil {
return err
}
for _, vers := range vi.SupportedVersions() {
if vers == expectedVersion {
return nil
}
}
return fmt.Errorf("plugin %s does not support config version %q", pluginName, expectedVersion)
}

// GetVersionInfo reports which versions of the CNI spec are supported by
// the given plugin.
func (c *CNIConfig) GetVersionInfo(ctx context.Context, pluginType string) (version.PluginInfo, error) {
c.ensureExec()
pluginPath, err := c.exec.FindInPath(pluginType, c.Path)
if err != nil {
return nil, err
}
return invoke.GetVersionInfo(ctx, pluginPath, c.exec)
}
// =====
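The signatures above all gain a context.Context and route execution through the pluggable Exec. A minimal caller-side sketch (not part of this change; the config path, netns path, and container details are hypothetical) of driving the updated libcni API:

package main

import (
    "context"

    "github.com/containernetworking/cni/libcni"
)

func main() {
    // nil Exec selects the default on-disk plugin executor.
    cniConfig := libcni.NewCNIConfig([]string{"/opt/cni/bin"}, nil)

    netConf, err := libcni.ConfFromFile("/etc/cni/net.d/10-example.conf")
    if err != nil {
        panic(err)
    }

    rt := &libcni.RuntimeConf{
        ContainerID: "example-container",
        NetNS:       "/var/run/netns/example",
        IfName:      "eth0",
    }

    // Every operation now takes a context, so callers can cancel or time out plugin execution.
    result, err := cniConfig.AddNetwork(context.Background(), netConf, rt)
    if err != nil {
        panic(err)
    }
    _ = result.Print()

    if err := cniConfig.DelNetwork(context.Background(), netConf, rt); err != nil {
        panic(err)
    }
}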


@@ -43,7 +43,10 @@ func (e NoConfigsFoundError) Error() string {
func ConfFromBytes(bytes []byte) (*NetworkConfig, error) {
conf := &NetworkConfig{Bytes: bytes}
if err := json.Unmarshal(bytes, &conf.Network); err != nil {
return nil, fmt.Errorf("error parsing configuration: %w", err)
}
if conf.Network.Type == "" {
return nil, fmt.Errorf("error parsing configuration: missing 'type'")
}
return conf, nil
}
@@ -51,7 +54,7 @@ func ConfFromBytes(bytes []byte) (*NetworkConfig, error) {
func ConfFromFile(filename string) (*NetworkConfig, error) {
bytes, err := ioutil.ReadFile(filename)
if err != nil {
return nil, fmt.Errorf("error reading %s: %w", filename, err)
}
return ConfFromBytes(bytes)
}
@@ -59,7 +62,7 @@ func ConfFromFile(filename string) (*NetworkConfig, error) {
func ConfListFromBytes(bytes []byte) (*NetworkConfigList, error) {
rawList := make(map[string]interface{})
if err := json.Unmarshal(bytes, &rawList); err != nil {
return nil, fmt.Errorf("error parsing configuration list: %w", err)
}
rawName, ok := rawList["name"]
@@ -80,8 +83,17 @@ func ConfListFromBytes(bytes []byte) (*NetworkConfigList, error) {
}
}
disableCheck := false
if rawDisableCheck, ok := rawList["disableCheck"]; ok {
disableCheck, ok = rawDisableCheck.(bool)
if !ok {
return nil, fmt.Errorf("error parsing configuration list: invalid disableCheck type %T", rawDisableCheck)
}
}
list := &NetworkConfigList{
Name: name,
DisableCheck: disableCheck,
CNIVersion: cniVersion,
Bytes: bytes,
}
@@ -102,11 +114,11 @@ func ConfListFromBytes(bytes []byte) (*NetworkConfigList, error) {
for i, conf := range plugins {
newBytes, err := json.Marshal(conf)
if err != nil {
return nil, fmt.Errorf("failed to marshal plugin config %d: %w", i, err)
}
netConf, err := ConfFromBytes(newBytes)
if err != nil {
return nil, fmt.Errorf("failed to parse plugin config %d: %w", i, err)
}
list.Plugins = append(list.Plugins, netConf)
}
@@ -117,7 +129,7 @@ func ConfListFromBytes(bytes []byte) (*NetworkConfigList, error) {
func ConfListFromFile(filename string) (*NetworkConfigList, error) {
bytes, err := ioutil.ReadFile(filename)
if err != nil {
return nil, fmt.Errorf("error reading %s: %w", filename, err)
}
return ConfListFromBytes(bytes)
}
@@ -206,7 +218,7 @@ func InjectConf(original *NetworkConfig, newValues map[string]interface{}) (*Net
config := make(map[string]interface{})
err := json.Unmarshal(original.Bytes, &config)
if err != nil {
return nil, fmt.Errorf("unmarshal existing network bytes: %w", err)
}
for key, value := range newValues {


@@ -15,6 +15,7 @@
package invoke

import (
"fmt"
"os"
"strings"
)
@@ -22,6 +23,8 @@ import (
type CNIArgs interface {
// For use with os/exec; i.e., return nil to inherit the
// environment from this process
// For use in delegation; inherit the environment from this
// process and allow overrides
AsEnv() []string
}
@@ -29,7 +32,7 @@ type inherited struct{}
var inheritArgsFromEnv inherited

func (*inherited) AsEnv() []string {
return nil
}
@@ -57,17 +60,17 @@ func (args *Args) AsEnv() []string {
pluginArgsStr = stringify(args.PluginArgs)
}

// Duplicated values which come first will be overridden, so we must put the
// custom values in the end to avoid being overridden by the process environments.
env = append(env,
"CNI_COMMAND="+args.Command,
"CNI_CONTAINERID="+args.ContainerID,
"CNI_NETNS="+args.NetNS,
"CNI_ARGS="+pluginArgsStr,
"CNI_IFNAME="+args.IfName,
"CNI_PATH="+args.Path,
)
return dedupEnv(env)
}
// taken from rkt/networking/net_plugin.go
@@ -80,3 +83,46 @@ func stringify(pluginArgs [][2]string) string {
return strings.Join(entries, ";")
}
// DelegateArgs implements the CNIArgs interface
// used for delegation to inherit from environments
// and allow some overrides like CNI_COMMAND
var _ CNIArgs = &DelegateArgs{}
type DelegateArgs struct {
Command string
}
func (d *DelegateArgs) AsEnv() []string {
env := os.Environ()
// The custom values should come in the end to override the existing
// process environment of the same key.
env = append(env,
"CNI_COMMAND="+d.Command,
)
return dedupEnv(env)
}
// dedupEnv returns a copy of env with any duplicates removed, in favor of later values.
// Items not of the normal environment "key=value" form are preserved unchanged.
func dedupEnv(env []string) []string {
out := make([]string, 0, len(env))
envMap := map[string]string{}
for _, kv := range env {
// find the first "=" in environment, if not, just keep it
eq := strings.Index(kv, "=")
if eq < 0 {
out = append(out, kv)
continue
}
envMap[kv[:eq]] = kv[eq+1:]
}
for k, v := range envMap {
out = append(out, fmt.Sprintf("%s=%s", k, v))
}
return out
}
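Because AsEnv now appends the CNI_* values last and dedupEnv keeps the later duplicate, explicit arguments win over whatever the process already had in its environment. A small standalone sketch (hypothetical values) illustrating that ordering:

package main

import (
    "fmt"
    "os"
    "strings"

    "github.com/containernetworking/cni/pkg/invoke"
)

func main() {
    // Pretend the process was started with a stale CNI_COMMAND.
    os.Setenv("CNI_COMMAND", "DEL")

    args := &invoke.Args{
        Command:     "ADD",
        ContainerID: "example",
        NetNS:       "/var/run/netns/example",
        IfName:      "eth0",
        Path:        "/opt/cni/bin",
    }

    // The explicit values are appended last and the later duplicate wins,
    // so CNI_COMMAND resolves to ADD here.
    for _, kv := range args.AsEnv() {
        if strings.HasPrefix(kv, "CNI_COMMAND=") {
            fmt.Println(kv) // CNI_COMMAND=ADD
        }
    }
}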


@@ -15,39 +15,66 @@
package invoke

import (
"context"
"os"
"path/filepath"

"github.com/containernetworking/cni/pkg/types"
)

func delegateCommon(delegatePlugin string, exec Exec) (string, Exec, error) {
if exec == nil {
exec = defaultExec
}

paths := filepath.SplitList(os.Getenv("CNI_PATH"))
pluginPath, err := exec.FindInPath(delegatePlugin, paths)
if err != nil {
return "", nil, err
}

return pluginPath, exec, nil
}

// DelegateAdd calls the given delegate plugin with the CNI ADD action and
// JSON configuration
func DelegateAdd(ctx context.Context, delegatePlugin string, netconf []byte, exec Exec) (types.Result, error) {
pluginPath, realExec, err := delegateCommon(delegatePlugin, exec)
if err != nil {
return nil, err
}

// DelegateAdd will override the original "CNI_COMMAND" env from process with ADD
return ExecPluginWithResult(ctx, pluginPath, netconf, delegateArgs("ADD"), realExec)
}

// DelegateCheck calls the given delegate plugin with the CNI CHECK action and
// JSON configuration
func DelegateCheck(ctx context.Context, delegatePlugin string, netconf []byte, exec Exec) error {
pluginPath, realExec, err := delegateCommon(delegatePlugin, exec)
if err != nil {
return err
}

// DelegateCheck will override the original CNI_COMMAND env from process with CHECK
return ExecPluginWithoutResult(ctx, pluginPath, netconf, delegateArgs("CHECK"), realExec)
}
// DelegateDel calls the given delegate plugin with the CNI DEL action and
// JSON configuration
func DelegateDel(ctx context.Context, delegatePlugin string, netconf []byte, exec Exec) error {
pluginPath, realExec, err := delegateCommon(delegatePlugin, exec)
if err != nil {
return err
}
// DelegateDel will override the original CNI_COMMAND env from process with DEL
return ExecPluginWithoutResult(ctx, pluginPath, netconf, delegateArgs("DEL"), realExec)
}
// return CNIArgs used by delegation
func delegateArgs(action string) *DelegateArgs {
return &DelegateArgs{
Command: action,
}
} }
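A plugin that delegates (for example to an IPAM plugin) now passes a context and may inject its own Exec; nil selects the default on-disk executor. A hedged sketch, assuming the delegate binary can be found via CNI_PATH and using a made-up configuration:

package main

import (
    "context"

    "github.com/containernetworking/cni/pkg/invoke"
)

func main() {
    // Made-up IPAM configuration handed to the delegate on stdin.
    ipamConf := []byte(`{"cniVersion":"0.4.0","name":"example","type":"host-local"}`)

    // DelegateAdd overrides CNI_COMMAND with ADD; nil Exec selects the default
    // executor, which looks the delegate up via CNI_PATH.
    result, err := invoke.DelegateAdd(context.Background(), "host-local", ipamConf, nil)
    if err != nil {
        panic(err)
    }
    _ = result.Print()
}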


@@ -15,57 +15,83 @@
package invoke

import (
"context"
"fmt"
"os"

"github.com/containernetworking/cni/pkg/types"
"github.com/containernetworking/cni/pkg/types/create"
"github.com/containernetworking/cni/pkg/version"
)

// Exec is an interface encapsulates all operations that deal with finding
// and executing a CNI plugin. Tests may provide a fake implementation
// to avoid writing fake plugins to temporary directories during the test.
type Exec interface {
ExecPlugin(ctx context.Context, pluginPath string, stdinData []byte, environ []string) ([]byte, error)
FindInPath(plugin string, paths []string) (string, error)
Decode(jsonBytes []byte) (version.PluginInfo, error)
}

// For example, a testcase could pass an instance of the following fakeExec
// object to ExecPluginWithResult() to verify the incoming stdin and environment
// and provide a tailored response:
//
//import (
// "encoding/json"
// "path"
// "strings"
//)
//
//type fakeExec struct {
// version.PluginDecoder
//}
//
//func (f *fakeExec) ExecPlugin(pluginPath string, stdinData []byte, environ []string) ([]byte, error) {
// net := &types.NetConf{}
// err := json.Unmarshal(stdinData, net)
// if err != nil {
// return nil, fmt.Errorf("failed to unmarshal configuration: %v", err)
// }
// pluginName := path.Base(pluginPath)
// if pluginName != net.Type {
// return nil, fmt.Errorf("plugin name %q did not match config type %q", pluginName, net.Type)
// }
// for _, e := range environ {
// // Check environment for forced failure request
// parts := strings.Split(e, "=")
// if len(parts) > 0 && parts[0] == "FAIL" {
// return nil, fmt.Errorf("failed to execute plugin %s", pluginName)
// }
// }
// return []byte("{\"CNIVersion\":\"0.4.0\"}"), nil
//}
//
//func (f *fakeExec) FindInPath(plugin string, paths []string) (string, error) {
// if len(paths) > 0 {
// return path.Join(paths[0], plugin), nil
// }
// return "", fmt.Errorf("failed to find plugin %s in paths %v", plugin, paths)
//}
func ExecPluginWithResult(ctx context.Context, pluginPath string, netconf []byte, args CNIArgs, exec Exec) (types.Result, error) {
if exec == nil {
exec = defaultExec
}
stdoutBytes, err := exec.ExecPlugin(ctx, pluginPath, netconf, args.AsEnv())
if err != nil {
return nil, err
}

return create.CreateFromBytes(stdoutBytes)
}

func ExecPluginWithoutResult(ctx context.Context, pluginPath string, netconf []byte, args CNIArgs, exec Exec) error {
if exec == nil {
exec = defaultExec
}

_, err := exec.ExecPlugin(ctx, pluginPath, netconf, args.AsEnv())
return err
}
@@ -73,7 +99,10 @@ func (e *PluginExec) WithoutResult(pluginPath string, netconf []byte, args CNIAr
// For recent-enough plugins, it uses the information returned by the VERSION
// command. For older plugins which do not recognize that command, it reports
// version 0.1.0
func GetVersionInfo(ctx context.Context, pluginPath string, exec Exec) (version.PluginInfo, error) {
if exec == nil {
exec = defaultExec
}

args := &Args{
Command: "VERSION",
@@ -83,7 +112,7 @@ func (e *PluginExec) GetVersionInfo(pluginPath string) (version.PluginInfo, erro
Path: "dummy",
}
stdin := []byte(fmt.Sprintf(`{"cniVersion":%q}`, version.Current()))
stdoutBytes, err := exec.ExecPlugin(ctx, pluginPath, stdin, args.AsEnv())
if err != nil {
if err.Error() == "unknown CNI_COMMAND: VERSION" {
return version.PluginSupports("0.1.0"), nil
@@ -91,5 +120,19 @@ func (e *PluginExec) GetVersionInfo(pluginPath string) (version.PluginInfo, erro
return nil, err
}
return exec.Decode(stdoutBytes)
}
// DefaultExec is an object that implements the Exec interface which looks
// for and executes plugins from disk.
type DefaultExec struct {
*RawExec
version.PluginDecoder
}
// DefaultExec implements the Exec interface
var _ Exec = &DefaultExec{}
var defaultExec = &DefaultExec{
RawExec: &RawExec{Stderr: os.Stderr},
} }


@@ -18,6 +18,7 @@ import (
"fmt"
"os"
"path/filepath"
"strings"
)

// FindInPath returns the full path of the plugin by searching in the provided path
@@ -26,6 +27,10 @@ func FindInPath(plugin string, paths []string) (string, error) {
return "", fmt.Errorf("no plugin name provided")
}

if strings.ContainsRune(plugin, os.PathSeparator) {
return "", fmt.Errorf("invalid plugin name: %s", plugin)
}

if len(paths) == 0 {
return "", fmt.Errorf("no paths provided")
}
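The extra check rejects plugin names that contain a path separator, so lookups cannot escape the directories listed in CNI_PATH. A short sketch of the behaviour (paths come from the caller's environment):

package main

import (
    "fmt"
    "os"
    "path/filepath"

    "github.com/containernetworking/cni/pkg/invoke"
)

func main() {
    paths := filepath.SplitList(os.Getenv("CNI_PATH"))

    // Names containing a path separator are rejected outright.
    if _, err := invoke.FindInPath("../bridge", paths); err != nil {
        fmt.Println(err) // invalid plugin name: ../bridge
    }

    // A plain name is resolved against the provided directories.
    if pluginPath, err := invoke.FindInPath("bridge", paths); err == nil {
        fmt.Println(pluginPath)
    }
}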


@@ -12,7 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.

// +build darwin dragonfly freebsd linux netbsd openbsd solaris

package invoke


@@ -16,10 +16,13 @@ package invoke
import (
"bytes"
"context"
"encoding/json"
"fmt"
"io"
"os/exec"
"strings"
"time"

"github.com/containernetworking/cni/pkg/types"
)
@@ -28,32 +31,58 @@ type RawExec struct {
Stderr io.Writer
}

func (e *RawExec) ExecPlugin(ctx context.Context, pluginPath string, stdinData []byte, environ []string) ([]byte, error) {
stdout := &bytes.Buffer{}
stderr := &bytes.Buffer{}
c := exec.CommandContext(ctx, pluginPath)
c.Env = environ
c.Stdin = bytes.NewBuffer(stdinData)
c.Stdout = stdout
c.Stderr = stderr

// Retry the command on "text file busy" errors
for i := 0; i <= 5; i++ {
err := c.Run()

// Command succeeded
if err == nil {
break
}
// If the plugin is currently about to be written, then we wait a
// second and try it again
if strings.Contains(err.Error(), "text file busy") {
time.Sleep(time.Second)
continue
}
// All other errors except than the busy text file
return nil, e.pluginErr(err, stdout.Bytes(), stderr.Bytes())
}
// Copy stderr to caller's buffer in case plugin printed to both
// stdout and stderr for some reason. Ignore failures as stderr is
// only informational.
if e.Stderr != nil && stderr.Len() > 0 {
_, _ = stderr.WriteTo(e.Stderr)
}
return stdout.Bytes(), nil
}

func (e *RawExec) pluginErr(err error, stdout, stderr []byte) error {
emsg := types.Error{}
if len(stdout) == 0 {
if len(stderr) == 0 {
emsg.Msg = fmt.Sprintf("netplugin failed with no error message: %v", err)
} else {
emsg.Msg = fmt.Sprintf("netplugin failed: %q", string(stderr))
}
} else if perr := json.Unmarshal(stdout, &emsg); perr != nil {
emsg.Msg = fmt.Sprintf("netplugin failed but error parsing its diagnostic message %q: %v", string(stdout), perr)
}
return &emsg
}

func (e *RawExec) FindInPath(plugin string, paths []string) (string, error) {
return FindInPath(plugin, paths)
}


@@ -17,29 +17,52 @@ package types020
import (
"encoding/json"
"fmt"
"io"
"net"
"os"

"github.com/containernetworking/cni/pkg/types"
convert "github.com/containernetworking/cni/pkg/types/internal"
)

const ImplementedSpecVersion string = "0.2.0"

var supportedVersions = []string{"", "0.1.0", ImplementedSpecVersion}
// Register converters for all versions less than the implemented spec version
func init() {
convert.RegisterConverter("0.1.0", []string{ImplementedSpecVersion}, convertFrom010)
convert.RegisterConverter(ImplementedSpecVersion, []string{"0.1.0"}, convertTo010)
// Creator
convert.RegisterCreator(supportedVersions, NewResult)
}
// Compatibility types for CNI version 0.1.0 and 0.2.0
// NewResult creates a new Result object from JSON data. The JSON data
// must be compatible with the CNI versions implemented by this type.
func NewResult(data []byte) (types.Result, error) {
result := &Result{}
if err := json.Unmarshal(data, result); err != nil {
return nil, err
}
for _, v := range supportedVersions {
if result.CNIVersion == v {
if result.CNIVersion == "" {
result.CNIVersion = "0.1.0"
}
return result, nil
}
}
return nil, fmt.Errorf("result type supports %v but unmarshalled CNIVersion is %q",
supportedVersions, result.CNIVersion)
} }
// GetResult converts the given Result object to the ImplementedSpecVersion
// and returns the concrete type or an error
func GetResult(r types.Result) (*Result, error) {
result020, err := convert.Convert(r, ImplementedSpecVersion)
if err != nil {
return nil, err
}
@@ -50,6 +73,32 @@ func GetResult(r types.Result) (*Result, error) {
return result, nil
}
func convertFrom010(from types.Result, toVersion string) (types.Result, error) {
if toVersion != "0.2.0" {
panic("only converts to version 0.2.0")
}
fromResult := from.(*Result)
return &Result{
CNIVersion: ImplementedSpecVersion,
IP4: fromResult.IP4.Copy(),
IP6: fromResult.IP6.Copy(),
DNS: *fromResult.DNS.Copy(),
}, nil
}
func convertTo010(from types.Result, toVersion string) (types.Result, error) {
if toVersion != "0.1.0" {
panic("only converts to version 0.1.0")
}
fromResult := from.(*Result)
return &Result{
CNIVersion: "0.1.0",
IP4: fromResult.IP4.Copy(),
IP6: fromResult.IP6.Copy(),
DNS: *fromResult.DNS.Copy(),
}, nil
}
// Result is what gets returned from the plugin (via stdout) to the caller
type Result struct {
CNIVersion string `json:"cniVersion,omitempty"`
@@ -59,42 +108,31 @@ type Result struct {
}

func (r *Result) Version() string {
return r.CNIVersion
}

func (r *Result) GetAsVersion(version string) (types.Result, error) {
// If the creator of the result did not set the CNIVersion, assume it
// should be the highest spec version implemented by this Result
if r.CNIVersion == "" {
r.CNIVersion = ImplementedSpecVersion
}
return convert.Convert(r, version)
}

func (r *Result) Print() error {
return r.PrintTo(os.Stdout)
}
func (r *Result) PrintTo(writer io.Writer) error {
data, err := json.MarshalIndent(r, "", " ")
if err != nil {
return err
}
_, err = writer.Write(data)
return err
}
// IPConfig contains values necessary to configure an interface
type IPConfig struct {
IP net.IPNet
@@ -102,6 +140,22 @@ type IPConfig struct {
Routes []types.Route
}
func (i *IPConfig) Copy() *IPConfig {
if i == nil {
return nil
}
var routes []types.Route
for _, fromRoute := range i.Routes {
routes = append(routes, *fromRoute.Copy())
}
return &IPConfig{
IP: i.IP,
Gateway: i.Gateway,
Routes: routes,
}
}
// net.IPNet is not JSON (un)marshallable so this duality is needed
// for our custom IPNet type
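With the converter registry in place, GetResult no longer round-trips through GetAsVersion. A small sketch (made-up addresses) of parsing a 0.2.0 result and recovering the concrete legacy type:

package main

import (
    "fmt"

    types020 "github.com/containernetworking/cni/pkg/types/020"
)

func main() {
    // A made-up 0.2.0 result as a plugin would print it.
    data := []byte(`{"cniVersion":"0.2.0","ip4":{"ip":"10.1.2.3/24"}}`)

    r, err := types020.NewResult(data)
    if err != nil {
        panic(err)
    }

    // GetResult now goes through the converter registry rather than GetAsVersion.
    legacy, err := types020.GetResult(r)
    if err != nil {
        panic(err)
    }
    fmt.Println(legacy.IP4.IP.String()) // 10.1.2.3/24
}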


@@ -0,0 +1,306 @@
// Copyright 2016 CNI authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package types040
import (
"encoding/json"
"fmt"
"io"
"net"
"os"
"github.com/containernetworking/cni/pkg/types"
types020 "github.com/containernetworking/cni/pkg/types/020"
convert "github.com/containernetworking/cni/pkg/types/internal"
)
const ImplementedSpecVersion string = "0.4.0"
var supportedVersions = []string{"0.3.0", "0.3.1", ImplementedSpecVersion}
// Register converters for all versions less than the implemented spec version
func init() {
// Up-converters
convert.RegisterConverter("0.1.0", supportedVersions, convertFrom02x)
convert.RegisterConverter("0.2.0", supportedVersions, convertFrom02x)
convert.RegisterConverter("0.3.0", supportedVersions, convertInternal)
convert.RegisterConverter("0.3.1", supportedVersions, convertInternal)
// Down-converters
convert.RegisterConverter("0.4.0", []string{"0.3.0", "0.3.1"}, convertInternal)
convert.RegisterConverter("0.4.0", []string{"0.1.0", "0.2.0"}, convertTo02x)
convert.RegisterConverter("0.3.1", []string{"0.1.0", "0.2.0"}, convertTo02x)
convert.RegisterConverter("0.3.0", []string{"0.1.0", "0.2.0"}, convertTo02x)
// Creator
convert.RegisterCreator(supportedVersions, NewResult)
}
func NewResult(data []byte) (types.Result, error) {
result := &Result{}
if err := json.Unmarshal(data, result); err != nil {
return nil, err
}
for _, v := range supportedVersions {
if result.CNIVersion == v {
return result, nil
}
}
return nil, fmt.Errorf("result type supports %v but unmarshalled CNIVersion is %q",
supportedVersions, result.CNIVersion)
}
func GetResult(r types.Result) (*Result, error) {
resultCurrent, err := r.GetAsVersion(ImplementedSpecVersion)
if err != nil {
return nil, err
}
result, ok := resultCurrent.(*Result)
if !ok {
return nil, fmt.Errorf("failed to convert result")
}
return result, nil
}
func NewResultFromResult(result types.Result) (*Result, error) {
newResult, err := convert.Convert(result, ImplementedSpecVersion)
if err != nil {
return nil, err
}
return newResult.(*Result), nil
}
// Result is what gets returned from the plugin (via stdout) to the caller
type Result struct {
CNIVersion string `json:"cniVersion,omitempty"`
Interfaces []*Interface `json:"interfaces,omitempty"`
IPs []*IPConfig `json:"ips,omitempty"`
Routes []*types.Route `json:"routes,omitempty"`
DNS types.DNS `json:"dns,omitempty"`
}
func convert020IPConfig(from *types020.IPConfig, ipVersion string) *IPConfig {
return &IPConfig{
Version: ipVersion,
Address: from.IP,
Gateway: from.Gateway,
}
}
func convertFrom02x(from types.Result, toVersion string) (types.Result, error) {
fromResult := from.(*types020.Result)
toResult := &Result{
CNIVersion: toVersion,
DNS: *fromResult.DNS.Copy(),
Routes: []*types.Route{},
}
if fromResult.IP4 != nil {
toResult.IPs = append(toResult.IPs, convert020IPConfig(fromResult.IP4, "4"))
for _, fromRoute := range fromResult.IP4.Routes {
toResult.Routes = append(toResult.Routes, fromRoute.Copy())
}
}
if fromResult.IP6 != nil {
toResult.IPs = append(toResult.IPs, convert020IPConfig(fromResult.IP6, "6"))
for _, fromRoute := range fromResult.IP6.Routes {
toResult.Routes = append(toResult.Routes, fromRoute.Copy())
}
}
return toResult, nil
}
func convertInternal(from types.Result, toVersion string) (types.Result, error) {
fromResult := from.(*Result)
toResult := &Result{
CNIVersion: toVersion,
DNS: *fromResult.DNS.Copy(),
Routes: []*types.Route{},
}
for _, fromIntf := range fromResult.Interfaces {
toResult.Interfaces = append(toResult.Interfaces, fromIntf.Copy())
}
for _, fromIPC := range fromResult.IPs {
toResult.IPs = append(toResult.IPs, fromIPC.Copy())
}
for _, fromRoute := range fromResult.Routes {
toResult.Routes = append(toResult.Routes, fromRoute.Copy())
}
return toResult, nil
}
func convertTo02x(from types.Result, toVersion string) (types.Result, error) {
fromResult := from.(*Result)
toResult := &types020.Result{
CNIVersion: toVersion,
DNS: *fromResult.DNS.Copy(),
}
for _, fromIP := range fromResult.IPs {
// Only convert the first IP address of each version as 0.2.0
// and earlier cannot handle multiple IP addresses
if fromIP.Version == "4" && toResult.IP4 == nil {
toResult.IP4 = &types020.IPConfig{
IP: fromIP.Address,
Gateway: fromIP.Gateway,
}
} else if fromIP.Version == "6" && toResult.IP6 == nil {
toResult.IP6 = &types020.IPConfig{
IP: fromIP.Address,
Gateway: fromIP.Gateway,
}
}
if toResult.IP4 != nil && toResult.IP6 != nil {
break
}
}
for _, fromRoute := range fromResult.Routes {
is4 := fromRoute.Dst.IP.To4() != nil
if is4 && toResult.IP4 != nil {
toResult.IP4.Routes = append(toResult.IP4.Routes, types.Route{
Dst: fromRoute.Dst,
GW: fromRoute.GW,
})
} else if !is4 && toResult.IP6 != nil {
toResult.IP6.Routes = append(toResult.IP6.Routes, types.Route{
Dst: fromRoute.Dst,
GW: fromRoute.GW,
})
}
}
// 0.2.0 and earlier require at least one IP address in the Result
if toResult.IP4 == nil && toResult.IP6 == nil {
return nil, fmt.Errorf("cannot convert: no valid IP addresses")
}
return toResult, nil
}
func (r *Result) Version() string {
return r.CNIVersion
}
func (r *Result) GetAsVersion(version string) (types.Result, error) {
// If the creator of the result did not set the CNIVersion, assume it
// should be the highest spec version implemented by this Result
if r.CNIVersion == "" {
r.CNIVersion = ImplementedSpecVersion
}
return convert.Convert(r, version)
}
func (r *Result) Print() error {
return r.PrintTo(os.Stdout)
}
func (r *Result) PrintTo(writer io.Writer) error {
data, err := json.MarshalIndent(r, "", " ")
if err != nil {
return err
}
_, err = writer.Write(data)
return err
}
// Interface contains values about the created interfaces
type Interface struct {
Name string `json:"name"`
Mac string `json:"mac,omitempty"`
Sandbox string `json:"sandbox,omitempty"`
}
func (i *Interface) String() string {
return fmt.Sprintf("%+v", *i)
}
func (i *Interface) Copy() *Interface {
if i == nil {
return nil
}
newIntf := *i
return &newIntf
}
// Int returns a pointer to the int value passed in. Used to
// set the IPConfig.Interface field.
func Int(v int) *int {
return &v
}
// IPConfig contains values necessary to configure an IP address on an interface
type IPConfig struct {
// IP version, either "4" or "6"
Version string
// Index into Result structs Interfaces list
Interface *int
Address net.IPNet
Gateway net.IP
}
func (i *IPConfig) String() string {
return fmt.Sprintf("%+v", *i)
}
func (i *IPConfig) Copy() *IPConfig {
if i == nil {
return nil
}
ipc := &IPConfig{
Version: i.Version,
Address: i.Address,
Gateway: i.Gateway,
}
if i.Interface != nil {
intf := *i.Interface
ipc.Interface = &intf
}
return ipc
}
// JSON (un)marshallable types
type ipConfig struct {
Version string `json:"version"`
Interface *int `json:"interface,omitempty"`
Address types.IPNet `json:"address"`
Gateway net.IP `json:"gateway,omitempty"`
}
func (c *IPConfig) MarshalJSON() ([]byte, error) {
ipc := ipConfig{
Version: c.Version,
Interface: c.Interface,
Address: types.IPNet(c.Address),
Gateway: c.Gateway,
}
return json.Marshal(ipc)
}
func (c *IPConfig) UnmarshalJSON(data []byte) error {
ipc := ipConfig{}
if err := json.Unmarshal(data, &ipc); err != nil {
return err
}
c.Version = ipc.Version
c.Interface = ipc.Interface
c.Address = net.IPNet(ipc.Address)
c.Gateway = ipc.Gateway
return nil
}

View File

@@ -0,0 +1,307 @@
// Copyright 2016 CNI authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package types100
import (
"encoding/json"
"fmt"
"io"
"net"
"os"
"github.com/containernetworking/cni/pkg/types"
types040 "github.com/containernetworking/cni/pkg/types/040"
convert "github.com/containernetworking/cni/pkg/types/internal"
)
const ImplementedSpecVersion string = "1.0.0"
var supportedVersions = []string{ImplementedSpecVersion}
// Register converters for all versions less than the implemented spec version
func init() {
// Up-converters
convert.RegisterConverter("0.1.0", supportedVersions, convertFrom02x)
convert.RegisterConverter("0.2.0", supportedVersions, convertFrom02x)
convert.RegisterConverter("0.3.0", supportedVersions, convertFrom04x)
convert.RegisterConverter("0.3.1", supportedVersions, convertFrom04x)
convert.RegisterConverter("0.4.0", supportedVersions, convertFrom04x)
// Down-converters
convert.RegisterConverter("1.0.0", []string{"0.3.0", "0.3.1", "0.4.0"}, convertTo04x)
convert.RegisterConverter("1.0.0", []string{"0.1.0", "0.2.0"}, convertTo02x)
// Creator
convert.RegisterCreator(supportedVersions, NewResult)
}
func NewResult(data []byte) (types.Result, error) {
result := &Result{}
if err := json.Unmarshal(data, result); err != nil {
return nil, err
}
for _, v := range supportedVersions {
if result.CNIVersion == v {
return result, nil
}
}
return nil, fmt.Errorf("result type supports %v but unmarshalled CNIVersion is %q",
supportedVersions, result.CNIVersion)
}
func GetResult(r types.Result) (*Result, error) {
resultCurrent, err := r.GetAsVersion(ImplementedSpecVersion)
if err != nil {
return nil, err
}
result, ok := resultCurrent.(*Result)
if !ok {
return nil, fmt.Errorf("failed to convert result")
}
return result, nil
}
func NewResultFromResult(result types.Result) (*Result, error) {
newResult, err := convert.Convert(result, ImplementedSpecVersion)
if err != nil {
return nil, err
}
return newResult.(*Result), nil
}
// Result is what gets returned from the plugin (via stdout) to the caller
type Result struct {
CNIVersion string `json:"cniVersion,omitempty"`
Interfaces []*Interface `json:"interfaces,omitempty"`
IPs []*IPConfig `json:"ips,omitempty"`
Routes []*types.Route `json:"routes,omitempty"`
DNS types.DNS `json:"dns,omitempty"`
}
func convertFrom02x(from types.Result, toVersion string) (types.Result, error) {
result040, err := convert.Convert(from, "0.4.0")
if err != nil {
return nil, err
}
result100, err := convertFrom04x(result040, ImplementedSpecVersion)
if err != nil {
return nil, err
}
return result100, nil
}
func convertIPConfigFrom040(from *types040.IPConfig) *IPConfig {
to := &IPConfig{
Address: from.Address,
Gateway: from.Gateway,
}
if from.Interface != nil {
intf := *from.Interface
to.Interface = &intf
}
return to
}
func convertInterfaceFrom040(from *types040.Interface) *Interface {
return &Interface{
Name: from.Name,
Mac: from.Mac,
Sandbox: from.Sandbox,
}
}
func convertFrom04x(from types.Result, toVersion string) (types.Result, error) {
fromResult := from.(*types040.Result)
toResult := &Result{
CNIVersion: toVersion,
DNS: *fromResult.DNS.Copy(),
Routes: []*types.Route{},
}
for _, fromIntf := range fromResult.Interfaces {
toResult.Interfaces = append(toResult.Interfaces, convertInterfaceFrom040(fromIntf))
}
for _, fromIPC := range fromResult.IPs {
toResult.IPs = append(toResult.IPs, convertIPConfigFrom040(fromIPC))
}
for _, fromRoute := range fromResult.Routes {
toResult.Routes = append(toResult.Routes, fromRoute.Copy())
}
return toResult, nil
}
func convertIPConfigTo040(from *IPConfig) *types040.IPConfig {
version := "6"
if from.Address.IP.To4() != nil {
version = "4"
}
to := &types040.IPConfig{
Version: version,
Address: from.Address,
Gateway: from.Gateway,
}
if from.Interface != nil {
intf := *from.Interface
to.Interface = &intf
}
return to
}
func convertInterfaceTo040(from *Interface) *types040.Interface {
return &types040.Interface{
Name: from.Name,
Mac: from.Mac,
Sandbox: from.Sandbox,
}
}
func convertTo04x(from types.Result, toVersion string) (types.Result, error) {
fromResult := from.(*Result)
toResult := &types040.Result{
CNIVersion: toVersion,
DNS: *fromResult.DNS.Copy(),
Routes: []*types.Route{},
}
for _, fromIntf := range fromResult.Interfaces {
toResult.Interfaces = append(toResult.Interfaces, convertInterfaceTo040(fromIntf))
}
for _, fromIPC := range fromResult.IPs {
toResult.IPs = append(toResult.IPs, convertIPConfigTo040(fromIPC))
}
for _, fromRoute := range fromResult.Routes {
toResult.Routes = append(toResult.Routes, fromRoute.Copy())
}
return toResult, nil
}
func convertTo02x(from types.Result, toVersion string) (types.Result, error) {
// First convert to 0.4.0
result040, err := convertTo04x(from, "0.4.0")
if err != nil {
return nil, err
}
result02x, err := convert.Convert(result040, toVersion)
if err != nil {
return nil, err
}
return result02x, nil
}
func (r *Result) Version() string {
return r.CNIVersion
}
func (r *Result) GetAsVersion(version string) (types.Result, error) {
// If the creator of the result did not set the CNIVersion, assume it
// should be the highest spec version implemented by this Result
if r.CNIVersion == "" {
r.CNIVersion = ImplementedSpecVersion
}
return convert.Convert(r, version)
}
func (r *Result) Print() error {
return r.PrintTo(os.Stdout)
}
func (r *Result) PrintTo(writer io.Writer) error {
data, err := json.MarshalIndent(r, "", " ")
if err != nil {
return err
}
_, err = writer.Write(data)
return err
}
// Interface contains values about the created interfaces
type Interface struct {
Name string `json:"name"`
Mac string `json:"mac,omitempty"`
Sandbox string `json:"sandbox,omitempty"`
}
func (i *Interface) String() string {
return fmt.Sprintf("%+v", *i)
}
func (i *Interface) Copy() *Interface {
if i == nil {
return nil
}
newIntf := *i
return &newIntf
}
// Int returns a pointer to the int value passed in. Used to
// set the IPConfig.Interface field.
func Int(v int) *int {
return &v
}
// IPConfig contains values necessary to configure an IP address on an interface
type IPConfig struct {
// Index into Result structs Interfaces list
Interface *int
Address net.IPNet
Gateway net.IP
}
func (i *IPConfig) String() string {
return fmt.Sprintf("%+v", *i)
}
func (i *IPConfig) Copy() *IPConfig {
if i == nil {
return nil
}
ipc := &IPConfig{
Address: i.Address,
Gateway: i.Gateway,
}
if i.Interface != nil {
intf := *i.Interface
ipc.Interface = &intf
}
return ipc
}
// JSON (un)marshallable types
type ipConfig struct {
Interface *int `json:"interface,omitempty"`
Address types.IPNet `json:"address"`
Gateway net.IP `json:"gateway,omitempty"`
}
func (c *IPConfig) MarshalJSON() ([]byte, error) {
ipc := ipConfig{
Interface: c.Interface,
Address: types.IPNet(c.Address),
Gateway: c.Gateway,
}
return json.Marshal(ipc)
}
func (c *IPConfig) UnmarshalJSON(data []byte) error {
ipc := ipConfig{}
if err := json.Unmarshal(data, &ipc); err != nil {
return err
}
c.Interface = ipc.Interface
c.Address = net.IPNet(ipc.Address)
c.Gateway = ipc.Gateway
return nil
}
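Down-conversion between spec versions is now driven entirely by the registered converters. A sketch (addresses are made up) that builds a 1.0.0 result and renders it as 0.4.0:

package main

import (
    "net"
    "os"

    types100 "github.com/containernetworking/cni/pkg/types/100"
)

func main() {
    r := &types100.Result{
        CNIVersion: types100.ImplementedSpecVersion,
        Interfaces: []*types100.Interface{{Name: "eth0", Sandbox: "/var/run/netns/example"}},
        IPs: []*types100.IPConfig{{
            Interface: types100.Int(0),
            Address:   net.IPNet{IP: net.ParseIP("10.1.2.3"), Mask: net.CIDRMask(24, 32)},
            Gateway:   net.ParseIP("10.1.2.1"),
        }},
    }

    // The registered down-converter fills in the per-IP "version" field that 0.4.0 expects.
    older, err := r.GetAsVersion("0.4.0")
    if err != nil {
        panic(err)
    }
    _ = older.PrintTo(os.Stdout)
}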


@@ -36,7 +36,7 @@ func (b *UnmarshallableBool) UnmarshalText(data []byte) error {
case "0", "false":
*b = false
default:
return fmt.Errorf("boolean unmarshal error: invalid input %s", s)
}
return nil
}
@@ -91,16 +91,26 @@ func LoadArgs(args string, container interface{}) error {
unknownArgs = append(unknownArgs, pair)
continue
}

var keyFieldInterface interface{}
switch {
case keyField.Kind() == reflect.Ptr:
keyField.Set(reflect.New(keyField.Type().Elem()))
keyFieldInterface = keyField.Interface()
case keyField.CanAddr() && keyField.Addr().CanInterface():
keyFieldInterface = keyField.Addr().Interface()
default:
return UnmarshalableArgsError{fmt.Errorf("field '%s' has no valid interface", keyString)}
}
u, ok := keyFieldInterface.(encoding.TextUnmarshaler)
if !ok {
return UnmarshalableArgsError{fmt.Errorf(
"ARGS: cannot unmarshal into field '%s' - type '%s' does not implement encoding.TextUnmarshaler",
keyString, reflect.TypeOf(keyFieldInterface))}
}

err := u.UnmarshalText([]byte(valueString))
if err != nil {
return fmt.Errorf("ARGS: error parsing value of pair %q: %w", pair, err)
}
}


@@ -0,0 +1,56 @@
// Copyright 2016 CNI authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package create
import (
"encoding/json"
"fmt"
"github.com/containernetworking/cni/pkg/types"
convert "github.com/containernetworking/cni/pkg/types/internal"
)
// DecodeVersion returns the CNI version from CNI configuration or result JSON,
// or an error if the operation could not be performed.
func DecodeVersion(jsonBytes []byte) (string, error) {
var conf struct {
CNIVersion string `json:"cniVersion"`
}
err := json.Unmarshal(jsonBytes, &conf)
if err != nil {
return "", fmt.Errorf("decoding version from network config: %w", err)
}
if conf.CNIVersion == "" {
return "0.1.0", nil
}
return conf.CNIVersion, nil
}
// Create creates a CNI Result using the given JSON with the expected
// version, or an error if the creation could not be performed
func Create(version string, bytes []byte) (types.Result, error) {
return convert.Create(version, bytes)
}
// CreateFromBytes creates a CNI Result from the given JSON, automatically
// detecting the CNI spec version of the result. An error is returned if the
// operation could not be performed.
func CreateFromBytes(bytes []byte) (types.Result, error) {
version, err := DecodeVersion(bytes)
if err != nil {
return nil, err
}
return convert.Create(version, bytes)
}
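CreateFromBytes picks the concrete Result type from the creator registry based on the cniVersion found in the payload. A minimal sketch (the stdout bytes are a fabricated plugin response; the blank import is assumed to be needed so the 0.4.0 creators get registered via package init):

package main

import (
    "fmt"

    _ "github.com/containernetworking/cni/pkg/types/040" // registers the 0.4.0 creators via package init
    "github.com/containernetworking/cni/pkg/types/create"
)

func main() {
    // Fabricated plugin stdout.
    stdout := []byte(`{"cniVersion":"0.4.0","ips":[{"version":"4","address":"10.1.2.3/24"}]}`)

    result, err := create.CreateFromBytes(stdout)
    if err != nil {
        panic(err)
    }
    fmt.Println(result.Version()) // 0.4.0
}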


@@ -1,300 +0,0 @@
// Copyright 2016 CNI authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package current
import (
"encoding/json"
"fmt"
"net"
"os"
"github.com/containernetworking/cni/pkg/types"
"github.com/containernetworking/cni/pkg/types/020"
)
const ImplementedSpecVersion string = "0.3.1"
var SupportedVersions = []string{"0.3.0", ImplementedSpecVersion}
func NewResult(data []byte) (types.Result, error) {
result := &Result{}
if err := json.Unmarshal(data, result); err != nil {
return nil, err
}
return result, nil
}
func GetResult(r types.Result) (*Result, error) {
resultCurrent, err := r.GetAsVersion(ImplementedSpecVersion)
if err != nil {
return nil, err
}
result, ok := resultCurrent.(*Result)
if !ok {
return nil, fmt.Errorf("failed to convert result")
}
return result, nil
}
var resultConverters = []struct {
versions []string
convert func(types.Result) (*Result, error)
}{
{types020.SupportedVersions, convertFrom020},
{SupportedVersions, convertFrom030},
}
func convertFrom020(result types.Result) (*Result, error) {
oldResult, err := types020.GetResult(result)
if err != nil {
return nil, err
}
newResult := &Result{
CNIVersion: ImplementedSpecVersion,
DNS: oldResult.DNS,
Routes: []*types.Route{},
}
if oldResult.IP4 != nil {
newResult.IPs = append(newResult.IPs, &IPConfig{
Version: "4",
Address: oldResult.IP4.IP,
Gateway: oldResult.IP4.Gateway,
})
for _, route := range oldResult.IP4.Routes {
gw := route.GW
if gw == nil {
gw = oldResult.IP4.Gateway
}
newResult.Routes = append(newResult.Routes, &types.Route{
Dst: route.Dst,
GW: gw,
})
}
}
if oldResult.IP6 != nil {
newResult.IPs = append(newResult.IPs, &IPConfig{
Version: "6",
Address: oldResult.IP6.IP,
Gateway: oldResult.IP6.Gateway,
})
for _, route := range oldResult.IP6.Routes {
gw := route.GW
if gw == nil {
gw = oldResult.IP6.Gateway
}
newResult.Routes = append(newResult.Routes, &types.Route{
Dst: route.Dst,
GW: gw,
})
}
}
if len(newResult.IPs) == 0 {
return nil, fmt.Errorf("cannot convert: no valid IP addresses")
}
return newResult, nil
}
func convertFrom030(result types.Result) (*Result, error) {
newResult, ok := result.(*Result)
if !ok {
return nil, fmt.Errorf("failed to convert result")
}
newResult.CNIVersion = ImplementedSpecVersion
return newResult, nil
}
func NewResultFromResult(result types.Result) (*Result, error) {
version := result.Version()
for _, converter := range resultConverters {
for _, supportedVersion := range converter.versions {
if version == supportedVersion {
return converter.convert(result)
}
}
}
return nil, fmt.Errorf("unsupported CNI result22 version %q", version)
}
// Result is what gets returned from the plugin (via stdout) to the caller
type Result struct {
CNIVersion string `json:"cniVersion,omitempty"`
Interfaces []*Interface `json:"interfaces,omitempty"`
IPs []*IPConfig `json:"ips,omitempty"`
Routes []*types.Route `json:"routes,omitempty"`
DNS types.DNS `json:"dns,omitempty"`
}
// Convert to the older 0.2.0 CNI spec Result type
func (r *Result) convertTo020() (*types020.Result, error) {
oldResult := &types020.Result{
CNIVersion: types020.ImplementedSpecVersion,
DNS: r.DNS,
}
for _, ip := range r.IPs {
// Only convert the first IP address of each version as 0.2.0
// and earlier cannot handle multiple IP addresses
if ip.Version == "4" && oldResult.IP4 == nil {
oldResult.IP4 = &types020.IPConfig{
IP: ip.Address,
Gateway: ip.Gateway,
}
} else if ip.Version == "6" && oldResult.IP6 == nil {
oldResult.IP6 = &types020.IPConfig{
IP: ip.Address,
Gateway: ip.Gateway,
}
}
if oldResult.IP4 != nil && oldResult.IP6 != nil {
break
}
}
for _, route := range r.Routes {
is4 := route.Dst.IP.To4() != nil
if is4 && oldResult.IP4 != nil {
oldResult.IP4.Routes = append(oldResult.IP4.Routes, types.Route{
Dst: route.Dst,
GW: route.GW,
})
} else if !is4 && oldResult.IP6 != nil {
oldResult.IP6.Routes = append(oldResult.IP6.Routes, types.Route{
Dst: route.Dst,
GW: route.GW,
})
}
}
if oldResult.IP4 == nil && oldResult.IP6 == nil {
return nil, fmt.Errorf("cannot convert: no valid IP addresses")
}
return oldResult, nil
}
func (r *Result) Version() string {
return ImplementedSpecVersion
}
func (r *Result) GetAsVersion(version string) (types.Result, error) {
switch version {
case "0.3.0", ImplementedSpecVersion:
r.CNIVersion = version
return r, nil
case types020.SupportedVersions[0], types020.SupportedVersions[1], types020.SupportedVersions[2]:
return r.convertTo020()
}
return nil, fmt.Errorf("cannot convert version 0.3.x to %q", version)
}
func (r *Result) Print() error {
data, err := json.MarshalIndent(r, "", " ")
if err != nil {
return err
}
_, err = os.Stdout.Write(data)
return err
}
// String returns a formatted string in the form of "[Interfaces: $1,][ IP: $2,] DNS: $3" where
// $1 represents the receiver's Interfaces, $2 represents the receiver's IP addresses and $3 the
// receiver's DNS. If $1 or $2 are nil, they won't be present in the returned string.
func (r *Result) String() string {
var str string
if len(r.Interfaces) > 0 {
str += fmt.Sprintf("Interfaces:%+v, ", r.Interfaces)
}
if len(r.IPs) > 0 {
str += fmt.Sprintf("IP:%+v, ", r.IPs)
}
if len(r.Routes) > 0 {
str += fmt.Sprintf("Routes:%+v, ", r.Routes)
}
return fmt.Sprintf("%sDNS:%+v", str, r.DNS)
}
// Convert this old version result to the current CNI version result
func (r *Result) Convert() (*Result, error) {
return r, nil
}
// Interface contains values about the created interfaces
type Interface struct {
Name string `json:"name"`
Mac string `json:"mac,omitempty"`
Sandbox string `json:"sandbox,omitempty"`
}
func (i *Interface) String() string {
return fmt.Sprintf("%+v", *i)
}
// Int returns a pointer to the int value passed in. Used to
// set the IPConfig.Interface field.
func Int(v int) *int {
return &v
}
// IPConfig contains values necessary to configure an IP address on an interface
type IPConfig struct {
// IP version, either "4" or "6"
Version string
// Index into Result structs Interfaces list
Interface *int
Address net.IPNet
Gateway net.IP
}
func (i *IPConfig) String() string {
return fmt.Sprintf("%+v", *i)
}
// JSON (un)marshallable types
type ipConfig struct {
Version string `json:"version"`
Interface *int `json:"interface,omitempty"`
Address types.IPNet `json:"address"`
Gateway net.IP `json:"gateway,omitempty"`
}
func (c *IPConfig) MarshalJSON() ([]byte, error) {
ipc := ipConfig{
Version: c.Version,
Interface: c.Interface,
Address: types.IPNet(c.Address),
Gateway: c.Gateway,
}
return json.Marshal(ipc)
}
func (c *IPConfig) UnmarshalJSON(data []byte) error {
ipc := ipConfig{}
if err := json.Unmarshal(data, &ipc); err != nil {
return err
}
c.Version = ipc.Version
c.Interface = ipc.Interface
c.Address = net.IPNet(ipc.Address)
c.Gateway = ipc.Gateway
return nil
}


@@ -0,0 +1,92 @@
// Copyright 2016 CNI authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package convert
import (
"fmt"
"github.com/containernetworking/cni/pkg/types"
)
// ConvertFn should convert from the given arbitrary Result type into a
// Result implementing CNI specification version passed in toVersion.
// The function is guaranteed to be passed a Result type matching the
// fromVersion it was registered with, and is guaranteed to be
// passed a toVersion matching one of the toVersions it was registered with.
type ConvertFn func(from types.Result, toVersion string) (types.Result, error)
type converter struct {
// fromVersion is the CNI Result spec version that convertFn accepts
fromVersion string
// toVersions is a list of versions that convertFn can convert to
toVersions []string
convertFn ConvertFn
}
var converters []*converter
func findConverter(fromVersion, toVersion string) *converter {
for _, c := range converters {
if c.fromVersion == fromVersion {
for _, v := range c.toVersions {
if v == toVersion {
return c
}
}
}
}
return nil
}
// Convert converts a CNI Result to the requested CNI specification version,
// or returns an error if the conversion could not be performed or failed
func Convert(from types.Result, toVersion string) (types.Result, error) {
if toVersion == "" {
toVersion = "0.1.0"
}
fromVersion := from.Version()
// Shortcut for same version
if fromVersion == toVersion {
return from, nil
}
// Otherwise find the right converter
c := findConverter(fromVersion, toVersion)
if c == nil {
return nil, fmt.Errorf("no converter for CNI result version %s to %s",
fromVersion, toVersion)
}
return c.convertFn(from, toVersion)
}
// RegisterConverter registers a CNI Result converter. SHOULD NOT BE CALLED
// EXCEPT FROM CNI ITSELF.
func RegisterConverter(fromVersion string, toVersions []string, convertFn ConvertFn) {
// Make sure there is no converter already registered for these
// from and to versions
for _, v := range toVersions {
if findConverter(fromVersion, v) != nil {
panic(fmt.Sprintf("converter already registered for %s to %s",
fromVersion, v))
}
}
converters = append(converters, &converter{
fromVersion: fromVersion,
toVersions: toVersions,
convertFn: convertFn,
})
}


@@ -0,0 +1,66 @@
// Copyright 2016 CNI authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package convert
import (
"fmt"
"github.com/containernetworking/cni/pkg/types"
)
type ResultFactoryFunc func([]byte) (types.Result, error)
type creator struct {
// CNI Result spec versions that createFn can create a Result for
versions []string
createFn ResultFactoryFunc
}
var creators []*creator
func findCreator(version string) *creator {
for _, c := range creators {
for _, v := range c.versions {
if v == version {
return c
}
}
}
return nil
}
// Create creates a CNI Result using the given JSON, or returns an error if the
// creation could not be performed
func Create(version string, bytes []byte) (types.Result, error) {
if c := findCreator(version); c != nil {
return c.createFn(bytes)
}
return nil, fmt.Errorf("unsupported CNI result version %q", version)
}
// RegisterCreator registers a CNI Result creator. SHOULD NOT BE CALLED
// EXCEPT FROM CNI ITSELF.
func RegisterCreator(versions []string, createFn ResultFactoryFunc) {
// Make sure there is no creator already registered for these versions
for _, v := range versions {
if findCreator(v) != nil {
panic(fmt.Sprintf("creator already registered for %s", v))
}
}
creators = append(creators, &creator{
versions: versions,
createFn: createFn,
})
}
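A short usage sketch of the creator registry, under the assumption that the typed result packages (for example pkg/types/100) register their creators here from `init()`, which is why the blank import below matters; the sample JSON is invented.

```go
package main

import (
	"fmt"

	// Blank import assumed to register the "1.0.0" creator with the
	// create package via init().
	_ "github.com/containernetworking/cni/pkg/types/100"

	"github.com/containernetworking/cni/pkg/types/create"
)

func main() {
	resultJSON := []byte(`{
		"cniVersion": "1.0.0",
		"ips": [{"address": "10.1.0.5/16", "gateway": "10.1.0.1"}]
	}`)

	// Create picks the creator registered for the requested version and
	// lets it unmarshal the raw JSON into a typed Result.
	res, err := create.Create("1.0.0", resultJSON)
	if err != nil {
		panic(err)
	}
	fmt.Println(res.Version()) // 1.0.0
}
```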

View File

@@ -16,8 +16,8 @@ package types
import (
"encoding/json"
-"errors"
"fmt"
"io"
"net"
"os"
)
@@ -63,10 +63,15 @@ type NetConf struct {
Name string `json:"name,omitempty"`
Type string `json:"type,omitempty"`
Capabilities map[string]bool `json:"capabilities,omitempty"`
-IPAM struct {
-Type string `json:"type,omitempty"`
-} `json:"ipam,omitempty"`
IPAM IPAM `json:"ipam,omitempty"`
DNS DNS `json:"dns"`
RawPrevResult map[string]interface{} `json:"prevResult,omitempty"`
PrevResult Result `json:"-"`
}
type IPAM struct {
Type string `json:"type,omitempty"`
}
// NetConfList describes an ordered list of networks.
@@ -74,14 +79,13 @@ type NetConfList struct {
CNIVersion string `json:"cniVersion,omitempty"`
Name string `json:"name,omitempty"`
DisableCheck bool `json:"disableCheck,omitempty"`
Plugins []*NetConf `json:"plugins,omitempty"`
}
-type ResultFactoryFunc func([]byte) (Result, error)
// Result is an interface that provides the result of plugin execution
type Result interface {
-// The highest CNI specification result verison the result supports
// The highest CNI specification result version the result supports
// without having to convert
Version() string
@@ -92,8 +96,8 @@ type Result interface {
// Prints the result in JSON format to stdout
Print() error
-// Returns a JSON string representation of the result
-String() string
// Prints the result in JSON format to provided writer
PrintTo(writer io.Writer) error
}
func PrintResult(result Result, version string) error {
@@ -112,6 +116,24 @@ type DNS struct {
Options []string `json:"options,omitempty"`
}
func (d *DNS) Copy() *DNS {
if d == nil {
return nil
}
to := &DNS{Domain: d.Domain}
for _, ns := range d.Nameservers {
to.Nameservers = append(to.Nameservers, ns)
}
for _, s := range d.Search {
to.Search = append(to.Search, s)
}
for _, o := range d.Options {
to.Options = append(to.Options, o)
}
return to
}
type Route struct {
Dst net.IPNet
GW net.IP
@@ -121,12 +143,30 @@ func (r *Route) String() string {
return fmt.Sprintf("%+v", *r)
}
func (r *Route) Copy() *Route {
if r == nil {
return nil
}
return &Route{
Dst: r.Dst,
GW: r.GW,
}
}
// Well known error codes
// see https://github.com/containernetworking/cni/blob/master/SPEC.md#well-known-error-codes
const (
ErrUnknown uint = iota // 0
ErrIncompatibleCNIVersion // 1
ErrUnsupportedField // 2
ErrUnknownContainer // 3
ErrInvalidEnvironmentVariables // 4
ErrIOFailure // 5
ErrDecodingFailure // 6
ErrInvalidNetworkConfig // 7
ErrTryAgainLater uint = 11
ErrInternal uint = 999
)
type Error struct {
@@ -135,6 +175,14 @@ type Error struct {
Details string `json:"details,omitempty"`
}
func NewError(code uint, msg, details string) *Error {
return &Error{
Code: code,
Msg: msg,
Details: details,
}
}
func (e *Error) Error() string {
details := ""
if e.Details != "" {
@@ -167,7 +215,7 @@ func (r *Route) UnmarshalJSON(data []byte) error {
return nil
}
-func (r *Route) MarshalJSON() ([]byte, error) {
func (r Route) MarshalJSON() ([]byte, error) {
rt := route{
Dst: IPNet(r.Dst),
GW: r.GW,
@@ -184,6 +232,3 @@ func prettyPrint(obj interface{}) error {
_, err = os.Stdout.Write(data)
return err
}
-// NotImplementedError is used to indicate that a method is not implemented for the given platform
-var NotImplementedError = errors.New("Not Implemented")
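A brief sketch of how a plugin might use the error codes and constructor added above; the message text is illustrative.

```go
package main

import (
	"fmt"

	"github.com/containernetworking/cni/pkg/types"
)

func main() {
	// NewError pairs a well-known numeric code with a human-readable message;
	// runtimes key off the code, humans read the message and details.
	err := types.NewError(types.ErrInvalidNetworkConfig, "missing network name", "")
	fmt.Println(err.Code, err.Error()) // 7 missing network name
}
```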

View File

@@ -0,0 +1,84 @@
// Copyright 2019 CNI authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package utils
import (
"bytes"
"fmt"
"regexp"
"unicode"
"github.com/containernetworking/cni/pkg/types"
)
const (
// cniValidNameChars is the regexp used to validate valid characters in
// containerID and networkName
cniValidNameChars = `[a-zA-Z0-9][a-zA-Z0-9_.\-]`
// maxInterfaceNameLength is the maximum length of a valid interface name
maxInterfaceNameLength = 15
)
var cniReg = regexp.MustCompile(`^` + cniValidNameChars + `*$`)
// ValidateContainerID will validate that the supplied containerID is not empty and does not contain invalid characters
func ValidateContainerID(containerID string) *types.Error {
if containerID == "" {
return types.NewError(types.ErrUnknownContainer, "missing containerID", "")
}
if !cniReg.MatchString(containerID) {
return types.NewError(types.ErrInvalidEnvironmentVariables, "invalid characters in containerID", containerID)
}
return nil
}
// ValidateNetworkName will validate that the supplied networkName does not contain invalid characters
func ValidateNetworkName(networkName string) *types.Error {
if networkName == "" {
return types.NewError(types.ErrInvalidNetworkConfig, "missing network name:", "")
}
if !cniReg.MatchString(networkName) {
return types.NewError(types.ErrInvalidNetworkConfig, "invalid characters found in network name", networkName)
}
return nil
}
// ValidateInterfaceName will validate the interface name based on the four rules below
// 1. The name must not be empty
// 2. The name must be less than 16 characters
// 3. The name must not be "." or ".."
// 4. The name must not contain / or : or any whitespace characters
// ref to https://github.com/torvalds/linux/blob/master/net/core/dev.c#L1024
func ValidateInterfaceName(ifName string) *types.Error {
if len(ifName) == 0 {
return types.NewError(types.ErrInvalidEnvironmentVariables, "interface name is empty", "")
}
if len(ifName) > maxInterfaceNameLength {
return types.NewError(types.ErrInvalidEnvironmentVariables, "interface name is too long", fmt.Sprintf("interface name should be less than %d characters", maxInterfaceNameLength+1))
}
if ifName == "." || ifName == ".." {
return types.NewError(types.ErrInvalidEnvironmentVariables, "interface name is . or ..", "")
}
for _, r := range bytes.Runes([]byte(ifName)) {
if r == '/' || r == ':' || unicode.IsSpace(r) {
return types.NewError(types.ErrInvalidEnvironmentVariables, "interface name contains / or : or whitespace characters", "")
}
}
return nil
}
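A small usage sketch of the validators, assuming the package is importable as `github.com/containernetworking/cni/pkg/utils`; the sample names are arbitrary. Note they return *types.Error rather than a plain error, so callers can surface the well-known codes directly.

```go
package main

import (
	"fmt"

	"github.com/containernetworking/cni/pkg/utils"
)

func main() {
	fmt.Println(utils.ValidateContainerID("abc123-def") == nil) // true
	fmt.Println(utils.ValidateNetworkName("my net") != nil)     // true: whitespace is rejected
	if err := utils.ValidateInterfaceName("this-name-is-way-too-long"); err != nil {
		fmt.Println(err.Code, err.Msg) // 4 interface name is too long
	}
}
```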

View File

@@ -15,23 +15,12 @@
package version
import (
-"encoding/json"
-"fmt"
"github.com/containernetworking/cni/pkg/types/create"
)
// ConfigDecoder can decode the CNI version available in network config data
type ConfigDecoder struct{}
func (*ConfigDecoder) Decode(jsonBytes []byte) (string, error) {
-var conf struct {
-CNIVersion string `json:"cniVersion"`
-}
-err := json.Unmarshal(jsonBytes, &conf)
-if err != nil {
-return "", fmt.Errorf("decoding version from network config: %s", err)
-}
-if conf.CNIVersion == "" {
-return "0.1.0", nil
-}
-return conf.CNIVersion, nil
return create.DecodeVersion(jsonBytes)
}

View File

@@ -18,6 +18,8 @@ import (
"encoding/json"
"fmt"
"io"
"strconv"
"strings"
)
// PluginInfo reports information about CNI versioning
@@ -66,7 +68,7 @@ func (*PluginDecoder) Decode(jsonBytes []byte) (PluginInfo, error) {
var info pluginInfo
err := json.Unmarshal(jsonBytes, &info)
if err != nil {
-return nil, fmt.Errorf("decoding version info: %s", err)
return nil, fmt.Errorf("decoding version info: %w", err)
}
if info.CNIVersion_ == "" {
return nil, fmt.Errorf("decoding version info: missing field cniVersion")
@@ -79,3 +81,64 @@ func (*PluginDecoder) Decode(jsonBytes []byte) (PluginInfo, error) {
}
return &info, nil
}
// ParseVersion parses a version string like "3.0.1" or "0.4.5" into major,
// minor, and micro numbers or returns an error
func ParseVersion(version string) (int, int, int, error) {
var major, minor, micro int
if version == "" {
return -1, -1, -1, fmt.Errorf("invalid version %q: the version is empty", version)
}
parts := strings.Split(version, ".")
if len(parts) >= 4 {
return -1, -1, -1, fmt.Errorf("invalid version %q: too many parts", version)
}
major, err := strconv.Atoi(parts[0])
if err != nil {
return -1, -1, -1, fmt.Errorf("failed to convert major version part %q: %w", parts[0], err)
}
if len(parts) >= 2 {
minor, err = strconv.Atoi(parts[1])
if err != nil {
return -1, -1, -1, fmt.Errorf("failed to convert minor version part %q: %w", parts[1], err)
}
}
if len(parts) >= 3 {
micro, err = strconv.Atoi(parts[2])
if err != nil {
return -1, -1, -1, fmt.Errorf("failed to convert micro version part %q: %w", parts[2], err)
}
}
return major, minor, micro, nil
}
// GreaterThanOrEqualTo takes two string versions, parses them into major/minor/micro
// numbers, and compares them to determine whether the first version is greater
// than or equal to the second
func GreaterThanOrEqualTo(version, otherVersion string) (bool, error) {
firstMajor, firstMinor, firstMicro, err := ParseVersion(version)
if err != nil {
return false, err
}
secondMajor, secondMinor, secondMicro, err := ParseVersion(otherVersion)
if err != nil {
return false, err
}
if firstMajor > secondMajor {
return true, nil
} else if firstMajor == secondMajor {
if firstMinor > secondMinor {
return true, nil
} else if firstMinor == secondMinor && firstMicro >= secondMicro {
return true, nil
}
}
return false, nil
}
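A quick sketch of the two helpers above; a plugin might gate newer behaviour (for example CHECK support, added in spec 0.4.0) on the configured version like this.

```go
package main

import (
	"fmt"

	"github.com/containernetworking/cni/pkg/version"
)

func main() {
	major, minor, micro, err := version.ParseVersion("0.4.0")
	fmt.Println(major, minor, micro, err) // 0 4 0 <nil>

	// Compare the version requested in the network config against the
	// version that introduced the feature we care about.
	ok, err := version.GreaterThanOrEqualTo("1.0.0", "0.4.0")
	fmt.Println(ok, err) // true <nil>
}
```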

View File

@@ -15,16 +15,17 @@
package version
import (
"encoding/json"
"fmt"
"github.com/containernetworking/cni/pkg/types"
-"github.com/containernetworking/cni/pkg/types/020"
-"github.com/containernetworking/cni/pkg/types/current"
types100 "github.com/containernetworking/cni/pkg/types/100"
"github.com/containernetworking/cni/pkg/types/create"
)
// Current reports the version of the CNI spec implemented by this library
func Current() string {
-return "0.3.1"
return types100.ImplementedSpecVersion
}
// Legacy PluginInfo describes a plugin that is backwards compatible with the
@@ -35,27 +36,54 @@ func Current() string {
// Any future CNI spec versions which meet this definition should be added to
// this list.
var Legacy = PluginSupports("0.1.0", "0.2.0")
-var All = PluginSupports("0.1.0", "0.2.0", "0.3.0", "0.3.1")
var All = PluginSupports("0.1.0", "0.2.0", "0.3.0", "0.3.1", "0.4.0", "1.0.0")
-var resultFactories = []struct {
-supportedVersions []string
-newResult types.ResultFactoryFunc
-}{
-{current.SupportedVersions, current.NewResult},
-{types020.SupportedVersions, types020.NewResult},
-}
// VersionsFrom returns a list of versions starting from min, inclusive
func VersionsStartingFrom(min string) PluginInfo {
out := []string{}
// cheat, just assume ordered
ok := false
for _, v := range All.SupportedVersions() {
if !ok && v == min {
ok = true
}
if ok {
out = append(out, v)
}
}
return PluginSupports(out...)
}
// Finds a Result object matching the requested version (if any) and asks
// that object to parse the plugin result, returning an error if parsing failed.
func NewResult(version string, resultBytes []byte) (types.Result, error) {
-reconciler := &Reconciler{}
-for _, resultFactory := range resultFactories {
-err := reconciler.CheckRaw(version, resultFactory.supportedVersions)
-if err == nil {
-// Result supports this version
-return resultFactory.newResult(resultBytes)
-}
-}
-return nil, fmt.Errorf("unsupported CNI result version %q", version)
return create.Create(version, resultBytes)
}
// ParsePrevResult parses a prevResult in a NetConf structure and sets
// the NetConf's PrevResult member to the parsed Result object.
func ParsePrevResult(conf *types.NetConf) error {
if conf.RawPrevResult == nil {
return nil
}
// Prior to 1.0.0, Result types may not marshal a CNIVersion. Since the
// result version must match the config version, if the Result's version
// is empty, inject the config version.
if ver, ok := conf.RawPrevResult["CNIVersion"]; !ok || ver == "" {
conf.RawPrevResult["CNIVersion"] = conf.CNIVersion
}
resultBytes, err := json.Marshal(conf.RawPrevResult)
if err != nil {
return fmt.Errorf("could not serialize prevResult: %w", err)
}
conf.RawPrevResult = nil
conf.PrevResult, err = create.Create(conf.CNIVersion, resultBytes)
if err != nil {
return fmt.Errorf("could not parse prevResult: %w", err)
}
return nil
}
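A hedged sketch of the usual calling pattern for ParsePrevResult: a plugin unmarshals its stdin config, then lets this helper turn the raw prevResult map into a typed Result. The config literal is invented, and the blank import is assumed to be what registers the 1.0.0 creator with the create package.

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/containernetworking/cni/pkg/types"
	_ "github.com/containernetworking/cni/pkg/types/100" // assumed to register the 1.0.0 creator
	"github.com/containernetworking/cni/pkg/version"
)

func main() {
	stdin := []byte(`{
		"cniVersion": "1.0.0",
		"name": "mynet",
		"type": "bridge",
		"prevResult": {"ips": [{"address": "10.1.0.5/16"}]}
	}`)

	conf := &types.NetConf{}
	if err := json.Unmarshal(stdin, conf); err != nil {
		panic(err)
	}
	// ParsePrevResult injects cniVersion into the raw prevResult (older
	// results may omit it) and replaces RawPrevResult with a typed Result.
	if err := version.ParsePrevResult(conf); err != nil {
		panic(err)
	}
	fmt.Println(conf.PrevResult.Version()) // 1.0.0
}
```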

View File

@@ -0,0 +1,105 @@
// Copyright 2021 CNI authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package ip
import (
"fmt"
"net"
"strings"
)
// IP is a CNI maintained type inherited from net.IPNet which can
// represent a single IP address with or without prefix.
type IP struct {
net.IPNet
}
// newIP will create an IP with net.IP and net.IPMask
func newIP(ip net.IP, mask net.IPMask) *IP {
return &IP{
IPNet: net.IPNet{
IP: ip,
Mask: mask,
},
}
}
// ParseIP will parse string s as an IP, and return it.
// The string s must be formed like <ip>[/<prefix>].
// If s is not a valid textual representation of an IP,
// ParseIP returns nil.
func ParseIP(s string) *IP {
if strings.ContainsAny(s, "/") {
ip, ipNet, err := net.ParseCIDR(s)
if err != nil {
return nil
}
return newIP(ip, ipNet.Mask)
} else {
ip := net.ParseIP(s)
if ip == nil {
return nil
}
return newIP(ip, nil)
}
}
// ToIP will return a net.IP in standard form from this IP.
// If this IP can not be converted to a valid net.IP, will return nil.
func (i *IP) ToIP() net.IP {
switch {
case i.IP.To4() != nil:
return i.IP.To4()
case i.IP.To16() != nil:
return i.IP.To16()
default:
return nil
}
}
// String returns the string form of this IP.
func (i *IP) String() string {
if len(i.Mask) > 0 {
return i.IPNet.String()
}
return i.IP.String()
}
// MarshalText implements the encoding.TextMarshaler interface.
// The encoding is the same as returned by String,
// but when len(i.IP) is zero it returns an empty slice.
func (i *IP) MarshalText() ([]byte, error) {
if len(i.IP) == 0 {
return []byte{}, nil
}
return []byte(i.String()), nil
}
// UnmarshalText implements the encoding.TextUnmarshaler interface.
// The textual bytes are expected in a form accepted by ParseIP,
// but when len(b) is zero it sets an empty IP.
func (i *IP) UnmarshalText(b []byte) error {
if len(b) == 0 {
*i = IP{}
return nil
}
ip := ParseIP(string(b))
if ip == nil {
return fmt.Errorf("invalid IP address %s", string(b))
}
*i = *ip
return nil
}
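A small sketch of the IP type round-tripping through JSON, assuming the package is importable as `github.com/containernetworking/cni/pkg/types/ip`; the struct and sample values are invented.

```go
package main

import (
	"encoding/json"
	"fmt"

	cniip "github.com/containernetworking/cni/pkg/types/ip"
)

type conf struct {
	Addr    *cniip.IP   `json:"addr"`
	Subnets []*cniip.IP `json:"subnets"`
}

func main() {
	// A bare address and prefixed addresses are both accepted by ParseIP,
	// which backs UnmarshalText.
	raw := []byte(`{"addr": "10.0.0.5", "subnets": ["192.168.1.0/24", "fd00::1/64"]}`)
	var c conf
	if err := json.Unmarshal(raw, &c); err != nil {
		panic(err)
	}
	fmt.Println(c.Addr, c.Subnets[0], c.Subnets[1]) // 10.0.0.5 192.168.1.0/24 fd00::1/64
}
```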

View File

@@ -15,9 +15,10 @@
package ip
import (
"bytes"
"io/ioutil"
-"github.com/containernetworking/cni/pkg/types/current"
current "github.com/containernetworking/cni/pkg/types/100"
)
func EnableIP4Forward() error {
@@ -35,12 +36,13 @@ func EnableForward(ips []*current.IPConfig) error {
v6 := false
for _, ip := range ips {
-if ip.Version == "4" && !v4 {
isV4 := ip.Address.IP.To4() != nil
if isV4 && !v4 {
if err := EnableIP4Forward(); err != nil {
return err
}
v4 = true
-} else if ip.Version == "6" && !v6 {
} else if !isV4 && !v6 {
if err := EnableIP6Forward(); err != nil {
return err
}
@@ -51,5 +53,10 @@ func EnableForward(ips []*current.IPConfig) error {
}
func echo1(f string) error {
if content, err := ioutil.ReadFile(f); err == nil {
if bytes.Equal(bytes.TrimSpace(content), []byte("1")) {
return nil
}
}
return ioutil.WriteFile(f, []byte("1"), 0644)
}

View File

@@ -22,7 +22,7 @@ import (
)
// SetupIPMasq installs iptables rules to masquerade traffic
-// coming from ipn and going outside of it
// coming from ip of ipn and going outside of ipn
func SetupIPMasq(ipn *net.IPNet, chain string, comment string) error {
isV6 := ipn.IP.To4() == nil
@@ -70,23 +70,57 @@ func SetupIPMasq(ipn *net.IPNet, chain string, comment string) error {
return err
}
-return ipt.AppendUnique("nat", "POSTROUTING", "-s", ipn.String(), "-j", chain, "-m", "comment", "--comment", comment)
// Packets from the specific IP of this network will hit the chain
return ipt.AppendUnique("nat", "POSTROUTING", "-s", ipn.IP.String(), "-j", chain, "-m", "comment", "--comment", comment)
}
// TeardownIPMasq undoes the effects of SetupIPMasq
func TeardownIPMasq(ipn *net.IPNet, chain string, comment string) error {
-ipt, err := iptables.New()
isV6 := ipn.IP.To4() == nil
var ipt *iptables.IPTables
var err error
if isV6 {
ipt, err = iptables.NewWithProtocol(iptables.ProtocolIPv6)
} else {
ipt, err = iptables.NewWithProtocol(iptables.ProtocolIPv4)
}
if err != nil {
return fmt.Errorf("failed to locate iptables: %v", err)
}
-if err = ipt.Delete("nat", "POSTROUTING", "-s", ipn.String(), "-j", chain, "-m", "comment", "--comment", comment); err != nil {
err = ipt.Delete("nat", "POSTROUTING", "-s", ipn.IP.String(), "-j", chain, "-m", "comment", "--comment", comment)
if err != nil && !isNotExist(err) {
return err
}
-if err = ipt.ClearChain("nat", chain); err != nil {
// for downward compatibility
err = ipt.Delete("nat", "POSTROUTING", "-s", ipn.String(), "-j", chain, "-m", "comment", "--comment", comment)
if err != nil && !isNotExist(err) {
return err
}
-return ipt.DeleteChain("nat", chain)
err = ipt.ClearChain("nat", chain)
if err != nil && !isNotExist(err) {
return err
}
err = ipt.DeleteChain("nat", chain)
if err != nil && !isNotExist(err) {
return err
}
return nil
}
// isNotExist returns true if the error is from iptables indicating
// that the target does not exist.
func isNotExist(err error) bool {
e, ok := err.(*iptables.Error)
if !ok {
return false
}
return e.IsNotExist()
}

View File

@@ -21,29 +21,45 @@ import (
"net"
"os"
-"github.com/containernetworking/plugins/pkg/ns"
-"github.com/containernetworking/plugins/pkg/utils/hwaddr"
"github.com/safchain/ethtool"
"github.com/vishvananda/netlink"
"github.com/containernetworking/plugins/pkg/ns"
"github.com/containernetworking/plugins/pkg/utils/sysctl"
)
var (
ErrLinkNotFound = errors.New("link not found")
)
-func makeVethPair(name, peer string, mtu int) (netlink.Link, error) {
// makeVethPair is called from within the container's network namespace
func makeVethPair(name, peer string, mtu int, mac string, hostNS ns.NetNS) (netlink.Link, error) {
veth := &netlink.Veth{
LinkAttrs: netlink.LinkAttrs{
Name: name,
-Flags: net.FlagUp,
MTU: mtu,
},
PeerName: peer,
PeerNamespace: netlink.NsFd(int(hostNS.Fd())),
}
if mac != "" {
m, err := net.ParseMAC(mac)
if err != nil {
return nil, err
}
veth.LinkAttrs.HardwareAddr = m
}
if err := netlink.LinkAdd(veth); err != nil {
return nil, err
}
// Re-fetch the container link to get its creation-time parameters, e.g. index and mac
veth2, err := netlink.LinkByName(name)
if err != nil {
netlink.LinkDel(veth) // try and clean up the link if possible.
return nil, err
}
-return veth, nil
return veth2, nil
}
func peerExists(name string) bool {
@@ -53,20 +69,24 @@ func peerExists(name string) bool {
return true
}
-func makeVeth(name string, mtu int) (peerName string, veth netlink.Link, err error) {
func makeVeth(name, vethPeerName string, mtu int, mac string, hostNS ns.NetNS) (peerName string, veth netlink.Link, err error) {
for i := 0; i < 10; i++ {
if vethPeerName != "" {
peerName = vethPeerName
} else {
peerName, err = RandomVethName()
if err != nil {
return
}
}
-veth, err = makeVethPair(name, peerName, mtu)
veth, err = makeVethPair(name, peerName, mtu, mac, hostNS)
switch {
case err == nil:
return
case os.IsExist(err):
-if peerExists(peerName) {
if peerExists(peerName) && vethPeerName == "" {
continue
}
err = fmt.Errorf("container veth name provided (%v) already exists", name)
@@ -86,7 +106,7 @@ func makeVeth(name string, mtu int) (peerName string, veth netlink.Link, err err
// RandomVethName returns string "veth" with random prefix (hashed from entropy)
func RandomVethName() (string, error) {
entropy := make([]byte, 4)
-_, err := rand.Reader.Read(entropy)
_, err := rand.Read(entropy)
if err != nil {
return "", fmt.Errorf("failed to generate random veth name: %v", err)
}
@@ -114,29 +134,18 @@ func ifaceFromNetlinkLink(l netlink.Link) net.Interface {
}
}
// SetupVethWithName sets up a pair of virtual ethernet devices.
// Call SetupVethWithName from inside the container netns. It will create both veth
// devices and move the host-side veth into the provided hostNS namespace.
// hostVethName: If hostVethName is not specified, the host-side veth name will use a random string.
// On success, SetupVethWithName returns (hostVeth, containerVeth, nil)
-func SetupVeth(contVethName string, mtu int, hostNS ns.NetNS) (net.Interface, net.Interface, error) {
-hostVethName, contVeth, err := makeVeth(contVethName, mtu)
func SetupVethWithName(contVethName, hostVethName string, mtu int, contVethMac string, hostNS ns.NetNS) (net.Interface, net.Interface, error) {
hostVethName, contVeth, err := makeVeth(contVethName, hostVethName, mtu, contVethMac, hostNS)
if err != nil {
return net.Interface{}, net.Interface{}, err
}
-if err = netlink.LinkSetUp(contVeth); err != nil {
-return net.Interface{}, net.Interface{}, fmt.Errorf("failed to set %q up: %v", contVethName, err)
-}
-hostVeth, err := netlink.LinkByName(hostVethName)
-if err != nil {
-return net.Interface{}, net.Interface{}, fmt.Errorf("failed to lookup %q: %v", hostVethName, err)
-}
-if err = netlink.LinkSetNsFd(hostVeth, int(hostNS.Fd())); err != nil {
-return net.Interface{}, net.Interface{}, fmt.Errorf("failed to move veth to host netns: %v", err)
-}
var hostVeth netlink.Link
err = hostNS.Do(func(_ ns.NetNS) error {
hostVeth, err = netlink.LinkByName(hostVethName)
if err != nil {
@@ -146,6 +155,9 @@ func SetupVeth(contVethName string, mtu int, hostNS ns.NetNS) (net.Interface, ne
if err = netlink.LinkSetUp(hostVeth); err != nil {
return fmt.Errorf("failed to set %q up: %v", hostVethName, err)
}
// we want to own the routes for this interface
_, _ = sysctl.Sysctl(fmt.Sprintf("net/ipv6/conf/%s/accept_ra", hostVethName), "0")
return nil
})
if err != nil {
@@ -154,10 +166,21 @@ func SetupVeth(contVethName string, mtu int, hostNS ns.NetNS) (net.Interface, ne
return ifaceFromNetlinkLink(hostVeth), ifaceFromNetlinkLink(contVeth), nil
}
// SetupVeth sets up a pair of virtual ethernet devices.
// Call SetupVeth from inside the container netns. It will create both veth
// devices and move the host-side veth into the provided hostNS namespace.
// On success, SetupVeth returns (hostVeth, containerVeth, nil)
func SetupVeth(contVethName string, mtu int, contVethMac string, hostNS ns.NetNS) (net.Interface, net.Interface, error) {
return SetupVethWithName(contVethName, "", mtu, contVethMac, hostNS)
}
// DelLinkByName removes an interface link.
func DelLinkByName(ifName string) error {
iface, err := netlink.LinkByName(ifName)
if err != nil {
if _, ok := err.(netlink.LinkNotFoundError); ok {
return ErrLinkNotFound
}
return fmt.Errorf("failed to lookup %q: %v", ifName, err)
}
@@ -168,19 +191,18 @@ func DelLinkByName(ifName string) error {
return nil
}
-// DelLinkByNameAddr remove an interface returns its IP address
-// of the specified family
-func DelLinkByNameAddr(ifName string, family int) (*net.IPNet, error) {
// DelLinkByNameAddr remove an interface and returns its addresses
func DelLinkByNameAddr(ifName string) ([]*net.IPNet, error) {
iface, err := netlink.LinkByName(ifName)
if err != nil {
-if err != nil && err.Error() == "Link not found" {
if _, ok := err.(netlink.LinkNotFoundError); ok {
return nil, ErrLinkNotFound
}
return nil, fmt.Errorf("failed to lookup %q: %v", ifName, err)
}
-addrs, err := netlink.AddrList(iface, family)
-if err != nil || len(addrs) == 0 {
addrs, err := netlink.AddrList(iface, netlink.FAMILY_ALL)
if err != nil {
return nil, fmt.Errorf("failed to get IP addresses for %q: %v", ifName, err)
}
@@ -188,32 +210,52 @@ func DelLinkByNameAddr(ifName string, family int) (*net.IPNet, error) {
return nil, fmt.Errorf("failed to delete %q: %v", ifName, err)
}
-return addrs[0].IPNet, nil
out := []*net.IPNet{}
for _, addr := range addrs {
if addr.IP.IsGlobalUnicast() {
out = append(out, addr.IPNet)
}
}
return out, nil
}
-func SetHWAddrByIP(ifName string, ip4 net.IP, ip6 net.IP) error {
-iface, err := netlink.LinkByName(ifName)
-if err != nil {
-return fmt.Errorf("failed to lookup %q: %v", ifName, err)
-}
-switch {
-case ip4 == nil && ip6 == nil:
-return fmt.Errorf("neither ip4 or ip6 specified")
-case ip4 != nil:
-{
-hwAddr, err := hwaddr.GenerateHardwareAddr4(ip4, hwaddr.PrivateMACPrefix)
-if err != nil {
-return fmt.Errorf("failed to generate hardware addr: %v", err)
-}
-if err = netlink.LinkSetHardwareAddr(iface, hwAddr); err != nil {
-return fmt.Errorf("failed to add hardware addr to %q: %v", ifName, err)
-}
-}
-case ip6 != nil:
-// TODO: IPv6
-}
-return nil
-}
// GetVethPeerIfindex returns the veth link object, the peer ifindex of the
// veth, or an error. This peer ifindex will only be valid in the peer's
// network namespace.
func GetVethPeerIfindex(ifName string) (netlink.Link, int, error) {
link, err := netlink.LinkByName(ifName)
if err != nil {
return nil, -1, fmt.Errorf("could not look up %q: %v", ifName, err)
}
if _, ok := link.(*netlink.Veth); !ok {
return nil, -1, fmt.Errorf("interface %q was not a veth interface", ifName)
}
// veth supports IFLA_LINK (what vishvananda/netlink calls ParentIndex)
// on 4.1 and higher kernels
peerIndex := link.Attrs().ParentIndex
if peerIndex <= 0 {
// Fall back to ethtool for 4.0 and earlier kernels
e, err := ethtool.NewEthtool()
if err != nil {
return nil, -1, fmt.Errorf("failed to initialize ethtool: %v", err)
}
defer e.Close()
stats, err := e.Stats(link.Attrs().Name)
if err != nil {
return nil, -1, fmt.Errorf("failed to request ethtool stats: %v", err)
}
n, ok := stats["peer_ifindex"]
if !ok {
return nil, -1, fmt.Errorf("failed to find 'peer_ifindex' in ethtool stats")
}
if n > 32767 || n == 0 {
return nil, -1, fmt.Errorf("invalid 'peer_ifindex' %d", n)
}
peerIndex = int(n)
}
return link, peerIndex, nil
}
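A hedged sketch of the new SetupVethWithName call, roughly mirroring how a plugin such as bridge uses it. The netns path, interface names, and MTU are placeholders, and the program needs root plus an existing network namespace to actually run.

```go
package main

import (
	"fmt"

	"github.com/containernetworking/plugins/pkg/ip"
	"github.com/containernetworking/plugins/pkg/ns"
)

func main() {
	// hostNS is the namespace the host-side veth should end up in.
	hostNS, err := ns.GetCurrentNS()
	if err != nil {
		panic(err)
	}
	defer hostNS.Close()

	// Placeholder container netns path; a runtime would supply the real one.
	contNS, err := ns.GetNS("/var/run/netns/example")
	if err != nil {
		panic(err)
	}
	defer contNS.Close()

	// The veth pair is created from inside the container namespace and the
	// host end is moved into hostNS.
	err = contNS.Do(func(_ ns.NetNS) error {
		hostVeth, contVeth, err := ip.SetupVethWithName("eth0", "veth-example", 1500, "", hostNS)
		if err != nil {
			return err
		}
		fmt.Println(hostVeth.Name, contVeth.Name)
		return nil
	})
	if err != nil {
		panic(err)
	}
}
```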

View File

@@ -39,3 +39,9 @@ func AddHostRoute(ipn *net.IPNet, gw net.IP, dev netlink.Link) error {
Gw: gw,
})
}
// AddDefaultRoute sets the default route on the given gateway.
func AddDefaultRoute(gw net.IP, dev netlink.Link) error {
_, defNet, _ := net.ParseCIDR("0.0.0.0/0")
return AddRoute(defNet, gw, dev)
}

View File

@@ -1,34 +0,0 @@
// Copyright 2015-2017 CNI authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// +build !linux
package ip
import (
"net"
"github.com/containernetworking/cni/pkg/types"
"github.com/vishvananda/netlink"
)
// AddRoute adds a universally-scoped route to a device.
func AddRoute(ipn *net.IPNet, gw net.IP, dev netlink.Link) error {
return types.NotImplementedError
}
// AddHostRoute adds a host-scoped route to a device.
func AddHostRoute(ipn *net.IPNet, gw net.IP, dev netlink.Link) error {
return types.NotImplementedError
}

View File

@@ -0,0 +1,116 @@
//go:build linux
// +build linux
// Copyright 2016 CNI authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package ip
import (
"fmt"
"net"
"github.com/containernetworking/cni/pkg/types"
current "github.com/containernetworking/cni/pkg/types/100"
"github.com/vishvananda/netlink"
)
func ValidateExpectedInterfaceIPs(ifName string, resultIPs []*current.IPConfig) error {
// Ensure ips
for _, ips := range resultIPs {
ourAddr := netlink.Addr{IPNet: &ips.Address}
match := false
link, err := netlink.LinkByName(ifName)
if err != nil {
return fmt.Errorf("Cannot find container link %v", ifName)
}
addrList, err := netlink.AddrList(link, netlink.FAMILY_ALL)
if err != nil {
return fmt.Errorf("Cannot obtain List of IP Addresses")
}
for _, addr := range addrList {
if addr.Equal(ourAddr) {
match = true
break
}
}
if match == false {
return fmt.Errorf("Failed to match addr %v on interface %v", ourAddr, ifName)
}
// Convert the host/prefixlen to just prefix for route lookup.
_, ourPrefix, err := net.ParseCIDR(ourAddr.String())
findGwy := &netlink.Route{Dst: ourPrefix}
routeFilter := netlink.RT_FILTER_DST
family := netlink.FAMILY_V6
if ips.Address.IP.To4() != nil {
family = netlink.FAMILY_V4
}
gwy, err := netlink.RouteListFiltered(family, findGwy, routeFilter)
if err != nil {
return fmt.Errorf("Error %v trying to find Gateway %v for interface %v", err, ips.Gateway, ifName)
}
if gwy == nil {
return fmt.Errorf("Failed to find Gateway %v for interface %v", ips.Gateway, ifName)
}
}
return nil
}
func ValidateExpectedRoute(resultRoutes []*types.Route) error {
// Ensure that each static route in prevResults is found in the routing table
for _, route := range resultRoutes {
find := &netlink.Route{Dst: &route.Dst, Gw: route.GW}
routeFilter := netlink.RT_FILTER_DST | netlink.RT_FILTER_GW
var family int
switch {
case route.Dst.IP.To4() != nil:
family = netlink.FAMILY_V4
// Default route needs Dst set to nil
if route.Dst.String() == "0.0.0.0/0" {
find = &netlink.Route{Dst: nil, Gw: route.GW}
routeFilter = netlink.RT_FILTER_DST
}
case len(route.Dst.IP) == net.IPv6len:
family = netlink.FAMILY_V6
// Default route needs Dst set to nil
if route.Dst.String() == "::/0" {
find = &netlink.Route{Dst: nil, Gw: route.GW}
routeFilter = netlink.RT_FILTER_DST
}
default:
return fmt.Errorf("Invalid static route found %v", route)
}
wasFound, err := netlink.RouteListFiltered(family, find, routeFilter)
if err != nil {
return fmt.Errorf("Expected Route %v not route table lookup error %v", route, err)
}
if wasFound == nil {
return fmt.Errorf("Expected Route %v not found in routing table", route)
}
}
return nil
}

View File

@@ -12,10 +12,6 @@ For example, you cannot rely on the `ns.Set()` namespace being the current names
The `ns.Do()` method provides **partial** control over network namespaces for you by implementing these strategies. All code dependent on a particular network namespace (including the root namespace) should be wrapped in the `ns.Do()` method to ensure the correct namespace is selected for the duration of your code. For example:
```go
-targetNs, err := ns.NewNS()
-if err != nil {
-return err
-}
err = targetNs.Do(func(hostNs ns.NetNS) error {
dummy := &netlink.Dummy{
LinkAttrs: netlink.LinkAttrs{
@@ -26,11 +22,16 @@ err = targetNs.Do(func(hostNs ns.NetNS) error {
})
```
Note this requirement to wrap every network call is very onerous - any libraries you call might call out to network services such as DNS, and all such calls need to be protected after you call `ns.Do()`. All goroutines spawned from within the `ns.Do` will not inherit the new namespace. The CNI plugins all exit very soon after calling `ns.Do()` which helps to minimize the problem.
-Also: If the runtime spawns a new OS thread, it will inherit the network namespace of the parent thread, which may have been temporarily switched, and thus the new OS thread will be permanently "stuck in the wrong namespace".
When a new thread is spawned in Linux, it inherits the namespace of its parent. In versions of go **prior to 1.10**, if the runtime spawns a new OS thread, it picks the parent randomly. If the chosen parent thread has been moved to a new namespace (even temporarily), the new OS thread will be permanently "stuck in the wrong namespace", and goroutines will non-deterministically switch namespaces as they are rescheduled.
In short, **there was no safe way to change network namespaces, even temporarily, from within a long-lived, multithreaded Go process**. If you wish to do this, you must use go 1.10 or greater.
### Creating network namespaces
Earlier versions of this library managed namespace creation, but as CNI does not actually utilize this feature (and it was essentially unmaintained), it was removed. If you're writing a container runtime, you should implement namespace management yourself. However, there are some gotchas when doing so, especially around handling `/var/run/netns`. A reasonably correct reference implementation, borrowed from `rkt`, can be found in `pkg/testutils/netns_linux.go` if you're in need of a source of inspiration.
-In short, **there is no safe way to change network namespaces from within a long-lived, multithreaded Go process**. If your daemon process needs to be namespace aware, consider spawning a separate process (like a CNI plugin) for each namespace.
### Further Reading
- https://github.com/golang/go/wiki/LockOSThread

View File

@@ -1,178 +0,0 @@
// Copyright 2015 CNI authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package ns
import (
"fmt"
"os"
"runtime"
"sync"
"syscall"
)
type NetNS interface {
// Executes the passed closure in this object's network namespace,
// attempting to restore the original namespace before returning.
// However, since each OS thread can have a different network namespace,
// and Go's thread scheduling is highly variable, callers cannot
// guarantee any specific namespace is set unless operations that
// require that namespace are wrapped with Do(). Also, no code called
// from Do() should call runtime.UnlockOSThread(), or the risk
// of executing code in an incorrect namespace will be greater. See
// https://github.com/golang/go/wiki/LockOSThread for further details.
Do(toRun func(NetNS) error) error
// Sets the current network namespace to this object's network namespace.
// Note that since Go's thread scheduling is highly variable, callers
// cannot guarantee the requested namespace will be the current namespace
// after this function is called; to ensure this wrap operations that
// require the namespace with Do() instead.
Set() error
// Returns the filesystem path representing this object's network namespace
Path() string
// Returns a file descriptor representing this object's network namespace
Fd() uintptr
// Cleans up this instance of the network namespace; if this instance
// is the last user the namespace will be destroyed
Close() error
}
type netNS struct {
file *os.File
mounted bool
closed bool
}
// netNS implements the NetNS interface
var _ NetNS = &netNS{}
const (
// https://github.com/torvalds/linux/blob/master/include/uapi/linux/magic.h
NSFS_MAGIC = 0x6e736673
PROCFS_MAGIC = 0x9fa0
)
type NSPathNotExistErr struct{ msg string }
func (e NSPathNotExistErr) Error() string { return e.msg }
type NSPathNotNSErr struct{ msg string }
func (e NSPathNotNSErr) Error() string { return e.msg }
func IsNSorErr(nspath string) error {
stat := syscall.Statfs_t{}
if err := syscall.Statfs(nspath, &stat); err != nil {
if os.IsNotExist(err) {
err = NSPathNotExistErr{msg: fmt.Sprintf("failed to Statfs %q: %v", nspath, err)}
} else {
err = fmt.Errorf("failed to Statfs %q: %v", nspath, err)
}
return err
}
switch stat.Type {
case PROCFS_MAGIC, NSFS_MAGIC:
return nil
default:
return NSPathNotNSErr{msg: fmt.Sprintf("unknown FS magic on %q: %x", nspath, stat.Type)}
}
}
// Returns an object representing the namespace referred to by @path
func GetNS(nspath string) (NetNS, error) {
err := IsNSorErr(nspath)
if err != nil {
return nil, err
}
fd, err := os.Open(nspath)
if err != nil {
return nil, err
}
return &netNS{file: fd}, nil
}
func (ns *netNS) Path() string {
return ns.file.Name()
}
func (ns *netNS) Fd() uintptr {
return ns.file.Fd()
}
func (ns *netNS) errorIfClosed() error {
if ns.closed {
return fmt.Errorf("%q has already been closed", ns.file.Name())
}
return nil
}
func (ns *netNS) Do(toRun func(NetNS) error) error {
if err := ns.errorIfClosed(); err != nil {
return err
}
containedCall := func(hostNS NetNS) error {
threadNS, err := GetCurrentNS()
if err != nil {
return fmt.Errorf("failed to open current netns: %v", err)
}
defer threadNS.Close()
// switch to target namespace
if err = ns.Set(); err != nil {
return fmt.Errorf("error switching to ns %v: %v", ns.file.Name(), err)
}
defer threadNS.Set() // switch back
return toRun(hostNS)
}
// save a handle to current network namespace
hostNS, err := GetCurrentNS()
if err != nil {
return fmt.Errorf("Failed to open current namespace: %v", err)
}
defer hostNS.Close()
var wg sync.WaitGroup
wg.Add(1)
var innerError error
go func() {
defer wg.Done()
runtime.LockOSThread()
innerError = containedCall(hostNS)
}()
wg.Wait()
return innerError
}
// WithNetNSPath executes the passed closure under the given network
// namespace, restoring the original namespace afterwards.
func WithNetNSPath(nspath string, toRun func(NetNS) error) error {
ns, err := GetNS(nspath)
if err != nil {
return err
}
defer ns.Close()
return ns.Do(toRun)
}

View File

@@ -15,18 +15,22 @@
package ns
import (
-"crypto/rand"
"fmt"
"os"
-"path"
"runtime"
"sync"
"syscall"
"golang.org/x/sys/unix"
)
// Returns an object representing the current OS thread's network namespace
func GetCurrentNS() (NetNS, error) {
// Lock the thread in case other goroutine executes in it and changes its
// network namespace after getCurrentThreadNetNSPath(), otherwise it might
// return an unexpected network namespace.
runtime.LockOSThread()
defer runtime.UnlockOSThread()
return GetNS(getCurrentThreadNetNSPath())
}
@@ -37,82 +41,6 @@ func getCurrentThreadNetNSPath() string {
return fmt.Sprintf("/proc/%d/task/%d/ns/net", os.Getpid(), unix.Gettid())
}
-// Creates a new persistent network namespace and returns an object
-// representing that namespace, without switching to it
-func NewNS() (NetNS, error) {
-const nsRunDir = "/var/run/netns"
-b := make([]byte, 16)
-_, err := rand.Reader.Read(b)
-if err != nil {
-return nil, fmt.Errorf("failed to generate random netns name: %v", err)
-}
-err = os.MkdirAll(nsRunDir, 0755)
-if err != nil {
-return nil, err
-}
-// create an empty file at the mount point
-nsName := fmt.Sprintf("cni-%x-%x-%x-%x-%x", b[0:4], b[4:6], b[6:8], b[8:10], b[10:])
-nsPath := path.Join(nsRunDir, nsName)
-mountPointFd, err := os.Create(nsPath)
-if err != nil {
-return nil, err
-}
-mountPointFd.Close()
-// Ensure the mount point is cleaned up on errors; if the namespace
-// was successfully mounted this will have no effect because the file
-// is in-use
-defer os.RemoveAll(nsPath)
-var wg sync.WaitGroup
-wg.Add(1)
-// do namespace work in a dedicated goroutine, so that we can safely
-// Lock/Unlock OSThread without upsetting the lock/unlock state of
-// the caller of this function
-var fd *os.File
-go (func() {
-defer wg.Done()
-runtime.LockOSThread()
-var origNS NetNS
-origNS, err = GetNS(getCurrentThreadNetNSPath())
-if err != nil {
-return
-}
-defer origNS.Close()
-// create a new netns on the current thread
-err = unix.Unshare(unix.CLONE_NEWNET)
-if err != nil {
-return
-}
-defer origNS.Set()
-// bind mount the new netns from the current thread onto the mount point
-err = unix.Mount(getCurrentThreadNetNSPath(), nsPath, "none", unix.MS_BIND, "")
-if err != nil {
-return
-}
-fd, err = os.Open(nsPath)
-if err != nil {
-return
-}
-})()
-wg.Wait()
-if err != nil {
-unix.Unmount(nsPath, unix.MNT_DETACH)
-return nil, fmt.Errorf("failed to create namespace: %v", err)
-}
-return &netNS{file: fd, mounted: true}, nil
-}
func (ns *netNS) Close() error {
if err := ns.errorIfClosed(); err != nil {
return err
@@ -123,16 +51,6 @@ func (ns *netNS) Close() error {
}
ns.closed = true
-if ns.mounted {
-if err := unix.Unmount(ns.file.Name(), unix.MNT_DETACH); err != nil {
-return fmt.Errorf("Failed to unmount namespace %s: %v", ns.file.Name(), err)
-}
-if err := os.RemoveAll(ns.file.Name()); err != nil {
-return fmt.Errorf("Failed to clean up namespace %s: %v", ns.file.Name(), err)
-}
-ns.mounted = false
-}
return nil
}
@@ -147,3 +65,170 @@ func (ns *netNS) Set() error {
return nil
}
type NetNS interface {
// Executes the passed closure in this object's network namespace,
// attempting to restore the original namespace before returning.
// However, since each OS thread can have a different network namespace,
// and Go's thread scheduling is highly variable, callers cannot
// guarantee any specific namespace is set unless operations that
// require that namespace are wrapped with Do(). Also, no code called
// from Do() should call runtime.UnlockOSThread(), or the risk
// of executing code in an incorrect namespace will be greater. See
// https://github.com/golang/go/wiki/LockOSThread for further details.
Do(toRun func(NetNS) error) error
// Sets the current network namespace to this object's network namespace.
// Note that since Go's thread scheduling is highly variable, callers
// cannot guarantee the requested namespace will be the current namespace
// after this function is called; to ensure this wrap operations that
// require the namespace with Do() instead.
Set() error
// Returns the filesystem path representing this object's network namespace
Path() string
// Returns a file descriptor representing this object's network namespace
Fd() uintptr
// Cleans up this instance of the network namespace; if this instance
// is the last user the namespace will be destroyed
Close() error
}
type netNS struct {
file *os.File
closed bool
}
// netNS implements the NetNS interface
var _ NetNS = &netNS{}
const (
// https://github.com/torvalds/linux/blob/master/include/uapi/linux/magic.h
NSFS_MAGIC = unix.NSFS_MAGIC
PROCFS_MAGIC = unix.PROC_SUPER_MAGIC
)
type NSPathNotExistErr struct{ msg string }
func (e NSPathNotExistErr) Error() string { return e.msg }
type NSPathNotNSErr struct{ msg string }
func (e NSPathNotNSErr) Error() string { return e.msg }
func IsNSorErr(nspath string) error {
stat := syscall.Statfs_t{}
if err := syscall.Statfs(nspath, &stat); err != nil {
if os.IsNotExist(err) {
err = NSPathNotExistErr{msg: fmt.Sprintf("failed to Statfs %q: %v", nspath, err)}
} else {
err = fmt.Errorf("failed to Statfs %q: %v", nspath, err)
}
return err
}
switch stat.Type {
case PROCFS_MAGIC, NSFS_MAGIC:
return nil
default:
return NSPathNotNSErr{msg: fmt.Sprintf("unknown FS magic on %q: %x", nspath, stat.Type)}
}
}
// Returns an object representing the namespace referred to by @path
func GetNS(nspath string) (NetNS, error) {
err := IsNSorErr(nspath)
if err != nil {
return nil, err
}
fd, err := os.Open(nspath)
if err != nil {
return nil, err
}
return &netNS{file: fd}, nil
}
func (ns *netNS) Path() string {
return ns.file.Name()
}
func (ns *netNS) Fd() uintptr {
return ns.file.Fd()
}
func (ns *netNS) errorIfClosed() error {
if ns.closed {
return fmt.Errorf("%q has already been closed", ns.file.Name())
}
return nil
}
func (ns *netNS) Do(toRun func(NetNS) error) error {
if err := ns.errorIfClosed(); err != nil {
return err
}
containedCall := func(hostNS NetNS) error {
threadNS, err := GetCurrentNS()
if err != nil {
return fmt.Errorf("failed to open current netns: %v", err)
}
defer threadNS.Close()
// switch to target namespace
if err = ns.Set(); err != nil {
return fmt.Errorf("error switching to ns %v: %v", ns.file.Name(), err)
}
defer func() {
err := threadNS.Set() // switch back
if err == nil {
// Unlock the current thread only when we successfully switched back
// to the original namespace; otherwise leave the thread locked which
// will force the runtime to scrap the current thread, that is maybe
// not as optimal but at least always safe to do.
runtime.UnlockOSThread()
}
}()
return toRun(hostNS)
}
// save a handle to current network namespace
hostNS, err := GetCurrentNS()
if err != nil {
return fmt.Errorf("Failed to open current namespace: %v", err)
}
defer hostNS.Close()
var wg sync.WaitGroup
wg.Add(1)
// Start the callback in a new green thread so that if we later fail
// to switch the namespace back to the original one, we can safely
// leave the thread locked to die without a risk of the current thread
// left lingering with incorrect namespace.
var innerError error
go func() {
defer wg.Done()
runtime.LockOSThread()
innerError = containedCall(hostNS)
}()
wg.Wait()
return innerError
}
// WithNetNSPath executes the passed closure under the given network
// namespace, restoring the original namespace afterwards.
func WithNetNSPath(nspath string, toRun func(NetNS) error) error {
ns, err := GetNS(nspath)
if err != nil {
return err
}
defer ns.Close()
return ns.Do(toRun)
}

View File

@@ -1,36 +0,0 @@
// Copyright 2015-2017 CNI authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// +build !linux
package ns
import "github.com/containernetworking/cni/pkg/types"
// Returns an object representing the current OS thread's network namespace
func GetCurrentNS() (NetNS, error) {
return nil, types.NotImplementedError
}
func NewNS() (NetNS, error) {
return nil, types.NotImplementedError
}
func (ns *netNS) Close() error {
return types.NotImplementedError
}
func (ns *netNS) Set() error {
return types.NotImplementedError
}

View File

@@ -1,63 +0,0 @@
// Copyright 2016 CNI authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package hwaddr
import (
"fmt"
"net"
)
const (
ipRelevantByteLen = 4
PrivateMACPrefixString = "0a:58"
)
var (
// private mac prefix safe to use
PrivateMACPrefix = []byte{0x0a, 0x58}
)
type SupportIp4OnlyErr struct{ msg string }
func (e SupportIp4OnlyErr) Error() string { return e.msg }
type MacParseErr struct{ msg string }
func (e MacParseErr) Error() string { return e.msg }
type InvalidPrefixLengthErr struct{ msg string }
func (e InvalidPrefixLengthErr) Error() string { return e.msg }
// GenerateHardwareAddr4 generates 48 bit virtual mac addresses based on the IP4 input.
func GenerateHardwareAddr4(ip net.IP, prefix []byte) (net.HardwareAddr, error) {
switch {
case ip.To4() == nil:
return nil, SupportIp4OnlyErr{msg: "GenerateHardwareAddr4 only supports valid IPv4 address as input"}
case len(prefix) != len(PrivateMACPrefix):
return nil, InvalidPrefixLengthErr{msg: fmt.Sprintf(
"Prefix has length %d instead of %d", len(prefix), len(PrivateMACPrefix)),
}
}
ipByteLen := len(ip)
return (net.HardwareAddr)(
append(
prefix,
ip[ipByteLen-ipRelevantByteLen:ipByteLen]...),
), nil
}

View File

@@ -0,0 +1,78 @@
// Copyright 2016 CNI authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package sysctl
import (
"fmt"
"io/ioutil"
"path/filepath"
"strings"
)
// Sysctl provides a method to set and get values under /proc/sys on Linux systems,
// the interface to variables formerly handled by the sysctl syscall.
// If the optional `params` argument contains exactly one string value,
// this function sets that value in the corresponding sysctl variable.
func Sysctl(name string, params ...string) (string, error) {
if len(params) > 1 {
return "", fmt.Errorf("unexcepted additional parameters")
} else if len(params) == 1 {
return setSysctl(name, params[0])
}
return getSysctl(name)
}
func getSysctl(name string) (string, error) {
fullName := filepath.Join("/proc/sys", toNormalName(name))
data, err := ioutil.ReadFile(fullName)
if err != nil {
return "", err
}
return string(data[:len(data)-1]), nil
}
func setSysctl(name, value string) (string, error) {
fullName := filepath.Join("/proc/sys", toNormalName(name))
if err := ioutil.WriteFile(fullName, []byte(value), 0644); err != nil {
return "", err
}
return getSysctl(name)
}
// Normalize names by using slash as separator
// Sysctl names can use dots or slashes as separator:
// - if dots are used, dots and slashes are interchanged.
// - if slashes are used, slashes and dots are left intact.
// Separator in use is determined by first occurrence.
func toNormalName(name string) string {
interchange := false
for _, c := range name {
if c == '.' {
interchange = true
break
}
if c == '/' {
break
}
}
if interchange {
r := strings.NewReplacer(".", "/", "/", ".")
return r.Replace(name)
}
return name
}

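A small usage sketch for the sysctl helper added above; illustrative only, it assumes the conventional import path github.com/containernetworking/plugins/pkg/utils/sysctl and needs privileges to write under /proc/sys.

    package main

    import (
        "fmt"

        "github.com/containernetworking/plugins/pkg/utils/sysctl"
    )

    func main() {
        // Dotted and slash-separated names are both accepted; toNormalName
        // interchanges dots and slashes before the /proc/sys lookup.
        old, err := sysctl.Sysctl("net.ipv4.ip_forward")
        if err != nil {
            panic(err)
        }
        fmt.Println("ip_forward was", old)

        // Passing exactly one extra argument writes the value and returns the new setting.
        if _, err := sysctl.Sysctl("net/ipv4/ip_forward", "1"); err != nil {
            panic(err)
        }
    }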

@@ -21,7 +21,8 @@ import (
 "os"
 "strconv"
-"github.com/containernetworking/cni/pkg/types/current"
+current "github.com/containernetworking/cni/pkg/types/100"
 "github.com/containernetworking/plugins/pkg/ip"
 "github.com/containernetworking/plugins/plugins/ipam/host-local/backend"
 )
@@ -40,8 +41,8 @@ func NewIPAllocator(s *RangeSet, store backend.Store, id int) *IPAllocator {
 }
 }
-// Get alocates an IP
-func (a *IPAllocator) Get(id string, requestedIP net.IP) (*current.IPConfig, error) {
+// Get allocates an IP
+func (a *IPAllocator) Get(id string, ifname string, requestedIP net.IP) (*current.IPConfig, error) {
 a.store.Lock()
 defer a.store.Unlock()
@@ -62,7 +63,7 @@ func (a *IPAllocator) Get(id string, requestedIP net.IP) (*current.IPConfig, err
 return nil, fmt.Errorf("requested ip %s is subnet's gateway", requestedIP.String())
 }
-reserved, err := a.store.Reserve(id, requestedIP, a.rangeID)
+reserved, err := a.store.Reserve(id, ifname, requestedIP, a.rangeID)
 if err != nil {
 return nil, err
 }
@@ -73,6 +74,17 @@ func (a *IPAllocator) Get(id string, requestedIP net.IP) (*current.IPConfig, err
 gw = r.Gateway
 } else {
+// try to get allocated IPs for this given id, if exists, just return error
+// because duplicate allocation is not allowed in SPEC
+// https://github.com/containernetworking/cni/blob/master/SPEC.md
+allocatedIPs := a.store.GetByID(id, ifname)
+for _, allocatedIP := range allocatedIPs {
+// check whether the existing IP belong to this range set
+if _, err := a.rangeset.RangeFor(allocatedIP); err == nil {
+return nil, fmt.Errorf("%s has been allocated to %s, duplicate allocation is not allowed", allocatedIP.String(), id)
+}
+}
 iter, err := a.GetIter()
 if err != nil {
 return nil, err
@@ -83,7 +95,7 @@ func (a *IPAllocator) Get(id string, requestedIP net.IP) (*current.IPConfig, err
 break
 }
-reserved, err := a.store.Reserve(id, reservedIP.IP, a.rangeID)
+reserved, err := a.store.Reserve(id, ifname, reservedIP.IP, a.rangeID)
 if err != nil {
 return nil, err
 }
@@ -97,24 +109,19 @@ func (a *IPAllocator) Get(id string, requestedIP net.IP) (*current.IPConfig, err
 if reservedIP == nil {
 return nil, fmt.Errorf("no IP addresses available in range set: %s", a.rangeset.String())
 }
-version := "4"
-if reservedIP.IP.To4() == nil {
-version = "6"
-}
 return &current.IPConfig{
-Version: version,
 Address: *reservedIP,
 Gateway: gw,
 }, nil
 }
 // Release clears all IPs allocated for the container with given ID
-func (a *IPAllocator) Release(id string) error {
+func (a *IPAllocator) Release(id string, ifname string) error {
 a.store.Lock()
 defer a.store.Unlock()
-return a.store.ReleaseByID(id)
+return a.store.ReleaseByID(id, ifname)
 }
 type RangeIter struct {
@@ -126,9 +133,8 @@ type RangeIter struct {
 // Our current position
 cur net.IP
-// The IP and range index where we started iterating; if we hit this again, we're done.
+// The IP where we started iterating; if we hit this again, we're done.
 startIP net.IP
-startRange int
 }
 // GetIter encapsulates the strategy for this allocator.
@@ -158,7 +164,6 @@ func (a *IPAllocator) GetIter() (*RangeIter, error) {
 for i, r := range *a.rangeset {
 if r.Contains(lastReservedIP) {
 iter.rangeIdx = i
-iter.startRange = i
 // We advance the cursor on every Next(), so the first call
 // to next() will return lastReservedIP + 1
@@ -168,7 +173,6 @@ func (a *IPAllocator) GetIter() (*RangeIter, error) {
 }
 } else {
 iter.rangeIdx = 0
-iter.startRange = 0
 iter.startIP = (*a.rangeset)[0].RangeStart
 }
 return &iter, nil
@@ -204,7 +208,7 @@ func (i *RangeIter) Next() (*net.IPNet, net.IP) {
 if i.startIP == nil {
 i.startIP = i.cur
-} else if i.rangeIdx == i.startRange && i.cur.Equal(i.startIP) {
+} else if i.cur.Equal(i.startIP) {
 // IF we've looped back to where we started, give up
 return nil, nil
 }

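The hunks above change the allocator's public surface: Get and Release are now keyed by (container ID, interface name), and IPConfig no longer carries a Version field. A sketch of an updated call site, assuming the host-local allocator package path plugins/ipam/host-local/backend/allocator; the helper names are hypothetical.

    package example

    import (
        current "github.com/containernetworking/cni/pkg/types/100"
        "github.com/containernetworking/plugins/plugins/ipam/host-local/backend"
        "github.com/containernetworking/plugins/plugins/ipam/host-local/backend/allocator"
    )

    // allocate reserves the next free address for the given container and interface.
    func allocate(rs *allocator.RangeSet, store backend.Store, id, ifname string) (*current.IPConfig, error) {
        alloc := allocator.NewIPAllocator(rs, store, 0)
        ipConf, err := alloc.Get(id, ifname, nil) // nil: let the allocator pick the next free IP
        if err != nil {
            return nil, err
        }
        // IPConfig.Version is gone; use ipConf.Address.IP.To4() to tell IPv4 from IPv6 if needed.
        return ipConf, nil
    }

    // release frees everything reserved for this (id, ifname) pair.
    func release(rs *allocator.RangeSet, store backend.Store, id, ifname string) error {
        return allocator.NewIPAllocator(rs, store, 0).Release(id, ifname)
    }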

@@ -20,14 +20,22 @@ import (
 "net"
 "github.com/containernetworking/cni/pkg/types"
-types020 "github.com/containernetworking/cni/pkg/types/020"
+"github.com/containernetworking/cni/pkg/version"
+"github.com/containernetworking/plugins/pkg/ip"
 )
-// The top-level network config, just so we can get the IPAM block
+// The top-level network config - IPAM plugins are passed the full configuration
+// of the calling plugin, not just the IPAM section.
 type Net struct {
 Name string `json:"name"`
 CNIVersion string `json:"cniVersion"`
 IPAM *IPAMConfig `json:"ipam"`
+RuntimeConfig struct {
+// The capability arg
+IPRanges []RangeSet `json:"ipRanges,omitempty"`
+IPs []*ip.IP `json:"ips,omitempty"`
+} `json:"runtimeConfig,omitempty"`
 Args *struct {
 A *IPAMArgs `json:"cni"`
 } `json:"args"`
@@ -44,16 +52,16 @@ type IPAMConfig struct {
 DataDir string `json:"dataDir"`
 ResolvConf string `json:"resolvConf"`
 Ranges []RangeSet `json:"ranges"`
-IPArgs []net.IP `json:"-"` // Requested IPs from CNI_ARGS and args
+IPArgs []net.IP `json:"-"` // Requested IPs from CNI_ARGS, args and capabilities
 }
 type IPAMEnvArgs struct {
 types.CommonArgs
-IP net.IP `json:"ip,omitempty"`
+IP ip.IP `json:"ip,omitempty"`
 }
 type IPAMArgs struct {
-IPs []net.IP `json:"ips"`
+IPs []*ip.IP `json:"ips"`
 }
 type RangeSet []Range
@@ -76,7 +84,7 @@ func LoadIPAMConfig(bytes []byte, envArgs string) (*IPAMConfig, string, error) {
 return nil, "", fmt.Errorf("IPAM config missing 'ipam' key")
 }
-// Parse custom IP from both env args *and* the top-level args config
+// parse custom IP from env args
 if envArgs != "" {
 e := IPAMEnvArgs{}
 err := types.LoadArgs(envArgs, &e)
@@ -84,16 +92,26 @@ func LoadIPAMConfig(bytes []byte, envArgs string) (*IPAMConfig, string, error) {
 return nil, "", err
 }
-if e.IP != nil {
-n.IPAM.IPArgs = []net.IP{e.IP}
+if e.IP.ToIP() != nil {
+n.IPAM.IPArgs = []net.IP{e.IP.ToIP()}
 }
 }
+// parse custom IPs from CNI args in network config
 if n.Args != nil && n.Args.A != nil && len(n.Args.A.IPs) != 0 {
-n.IPAM.IPArgs = append(n.IPAM.IPArgs, n.Args.A.IPs...)
+for _, i := range n.Args.A.IPs {
+n.IPAM.IPArgs = append(n.IPAM.IPArgs, i.ToIP())
+}
 }
-for idx, _ := range n.IPAM.IPArgs {
+// parse custom IPs from runtime configuration
+if len(n.RuntimeConfig.IPs) > 0 {
+for _, i := range n.RuntimeConfig.IPs {
+n.IPAM.IPArgs = append(n.IPAM.IPArgs, i.ToIP())
+}
+}
+for idx := range n.IPAM.IPArgs {
 if err := canonicalizeIP(&n.IPAM.IPArgs[idx]); err != nil {
 return nil, "", fmt.Errorf("cannot understand ip: %v", err)
 }
@@ -106,6 +124,11 @@ func LoadIPAMConfig(bytes []byte, envArgs string) (*IPAMConfig, string, error) {
 }
 n.IPAM.Range = nil
+// If a range is supplied as a runtime config, prepend it to the Ranges
+if len(n.RuntimeConfig.IPRanges) > 0 {
+n.IPAM.Ranges = append(n.RuntimeConfig.IPRanges, n.IPAM.Ranges...)
+}
 if len(n.IPAM.Ranges) == 0 {
 return nil, "", fmt.Errorf("no IP ranges specified")
 }
@@ -113,7 +136,7 @@ func LoadIPAMConfig(bytes []byte, envArgs string) (*IPAMConfig, string, error) {
 // Validate all ranges
 numV4 := 0
 numV6 := 0
-for i, _ := range n.IPAM.Ranges {
+for i := range n.IPAM.Ranges {
 if err := n.IPAM.Ranges[i].Canonicalize(); err != nil {
 return nil, "", fmt.Errorf("invalid range set %d: %s", i, err)
 }
@@ -127,12 +150,10 @@ func LoadIPAMConfig(bytes []byte, envArgs string) (*IPAMConfig, string, error) {
 // CNI spec 0.2.0 and below supported only one v4 and v6 address
 if numV4 > 1 || numV6 > 1 {
-for _, v := range types020.SupportedVersions {
-if n.CNIVersion == v {
+if ok, _ := version.GreaterThanOrEqualTo(n.CNIVersion, "0.3.0"); !ok {
 return nil, "", fmt.Errorf("CNI version %v does not support more than 1 address per family", n.CNIVersion)
 }
 }
-}
 // Check for overlaps
 l := len(n.IPAM.Ranges)

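The config hunks above feed the ips and ipRanges capability arguments from runtimeConfig into the IPAM configuration. A rough sketch of what that enables, assuming the same allocator package and that the runtime "ips" strings unmarshal through the plugins ip.IP type, as the diff suggests.

    package main

    import (
        "fmt"

        "github.com/containernetworking/plugins/plugins/ipam/host-local/backend/allocator"
    )

    func main() {
        conf := []byte(`{
            "cniVersion": "0.4.0",
            "name": "example",
            "ipam": {
                "type": "host-local",
                "ranges": [[{"subnet": "10.1.2.0/24"}]]
            },
            "runtimeConfig": {
                "ips": ["10.1.2.11"]
            }
        }`)
        ipamConf, cniVersion, err := allocator.LoadIPAMConfig(conf, "")
        if err != nil {
            panic(err)
        }
        // The runtime-supplied address lands in IPArgs next to CNI_ARGS and args IPs.
        fmt.Println(cniVersion, ipamConf.IPArgs)
    }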

@@ -40,6 +40,12 @@ func (r *Range) Canonicalize() error {
 return fmt.Errorf("IPNet IP and Mask version mismatch")
 }
+// Ensure Subnet IP is the network address, not some other address
+networkIP := r.Subnet.IP.Mask(r.Subnet.Mask)
+if !r.Subnet.IP.Equal(networkIP) {
+return fmt.Errorf("Network has host bits set. For a subnet mask of length %d the network address is %s", ones, networkIP.String())
+}
 // If the gateway is nil, claim .1
 if r.Gateway == nil {
 r.Gateway = ip.NextIP(r.Subnet.IP)
@@ -47,10 +53,6 @@ func (r *Range) Canonicalize() error {
 if err := canonicalizeIP(&r.Gateway); err != nil {
 return err
 }
-subnet := (net.IPNet)(r.Subnet)
-if !subnet.Contains(r.Gateway) {
-return fmt.Errorf("gateway %s not in network %s", r.Gateway.String(), subnet.String())
-}
 }
 // RangeStart: If specified, make sure it's sane (inside the subnet),

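A quick illustration of the new Canonicalize validation: a subnet whose IP has host bits set is now rejected up front rather than relying on the removed gateway-containment check. The allocator import path is assumed.

    package main

    import (
        "encoding/json"
        "fmt"

        "github.com/containernetworking/plugins/plugins/ipam/host-local/backend/allocator"
    )

    func main() {
        var r allocator.Range
        if err := json.Unmarshal([]byte(`{"subnet": "10.1.2.5/24"}`), &r); err != nil {
            panic(err)
        }
        // Expected to fail: 10.1.2.5 is not the network address of 10.1.2.0/24.
        fmt.Println(r.Canonicalize())
    }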

@@ -61,7 +61,7 @@ func (s *RangeSet) Canonicalize() error {
 }
 fam := 0
-for i, _ := range *s {
+for i := range *s {
 if err := (*s)[i].Canonicalize(); err != nil {
 return err
 }


@@ -20,8 +20,9 @@ type Store interface {
 Lock() error
 Unlock() error
 Close() error
-Reserve(id string, ip net.IP, rangeID string) (bool, error)
+Reserve(id string, ifname string, ip net.IP, rangeID string) (bool, error)
 LastReservedIP(rangeID string) (net.IP, error)
 Release(ip net.IP) error
-ReleaseByID(id string) error
+ReleaseByID(id string, ifname string) error
+GetByID(id string, ifname string) []net.IP
 }


@@ -47,9 +47,12 @@ func (e *Error) Error() string {
 // IsNotExist returns true if the error is due to the chain or rule not existing
 func (e *Error) IsNotExist() bool {
-return e.ExitStatus() == 1 &&
-(e.msg == "iptables: Bad rule (does a matching rule exist in that chain?).\n" ||
-e.msg == "iptables: No chain/target/match by that name.\n")
+if e.ExitStatus() != 1 {
+return false
+}
+msgNoRuleExist := "Bad rule (does a matching rule exist in that chain?).\n"
+msgNoChainExist := "No chain/target/match by that name.\n"
+return strings.Contains(e.msg, msgNoRuleExist) || strings.Contains(e.msg, msgNoChainExist)
 }
 // Protocol to differentiate between IPv4 and IPv6
@@ -65,43 +68,91 @@ type IPTables struct {
 proto Protocol
 hasCheck bool
 hasWait bool
+waitSupportSecond bool
 hasRandomFully bool
 v1 int
 v2 int
 v3 int
 mode string // the underlying iptables operating mode, e.g. nf_tables
+timeout int // time to wait for the iptables lock, default waits forever
 }
-// New creates a new IPTables.
-// For backwards compatibility, this always uses IPv4, i.e. "iptables".
-func New() (*IPTables, error) {
-return NewWithProtocol(ProtocolIPv4)
+// Stat represents a structured statistic entry.
+type Stat struct {
+Packets uint64 `json:"pkts"`
+Bytes uint64 `json:"bytes"`
+Target string `json:"target"`
+Protocol string `json:"prot"`
+Opt string `json:"opt"`
+Input string `json:"in"`
+Output string `json:"out"`
+Source *net.IPNet `json:"source"`
+Destination *net.IPNet `json:"destination"`
+Options string `json:"options"`
+}
+type option func(*IPTables)
+func IPFamily(proto Protocol) option {
+return func(ipt *IPTables) {
+ipt.proto = proto
+}
+}
+func Timeout(timeout int) option {
+return func(ipt *IPTables) {
+ipt.timeout = timeout
+}
+}
+// New creates a new IPTables configured with the options passed as parameter.
+// For backwards compatibility, by default always uses IPv4 and timeout 0.
+// i.e. you can create an IPv6 IPTables using a timeout of 5 seconds passing
+// the IPFamily and Timeout options as follow:
+// ip6t := New(IPFamily(ProtocolIPv6), Timeout(5))
+func New(opts ...option) (*IPTables, error) {
+ipt := &IPTables{
+proto: ProtocolIPv4,
+timeout: 0,
+}
+for _, opt := range opts {
+opt(ipt)
+}
+path, err := exec.LookPath(getIptablesCommand(ipt.proto))
+if err != nil {
+return nil, err
+}
+ipt.path = path
+vstring, err := getIptablesVersionString(path)
+if err != nil {
+return nil, fmt.Errorf("could not get iptables version: %v", err)
+}
+v1, v2, v3, mode, err := extractIptablesVersion(vstring)
+if err != nil {
+return nil, fmt.Errorf("failed to extract iptables version from [%s]: %v", vstring, err)
+}
+ipt.v1 = v1
+ipt.v2 = v2
+ipt.v3 = v3
+ipt.mode = mode
+checkPresent, waitPresent, waitSupportSecond, randomFullyPresent := getIptablesCommandSupport(v1, v2, v3)
+ipt.hasCheck = checkPresent
+ipt.hasWait = waitPresent
+ipt.waitSupportSecond = waitSupportSecond
+ipt.hasRandomFully = randomFullyPresent
+return ipt, nil
 }
 // New creates a new IPTables for the given proto.
 // The proto will determine which command is used, either "iptables" or "ip6tables".
 func NewWithProtocol(proto Protocol) (*IPTables, error) {
-path, err := exec.LookPath(getIptablesCommand(proto))
-if err != nil {
-return nil, err
-}
-vstring, err := getIptablesVersionString(path)
-v1, v2, v3, mode, err := extractIptablesVersion(vstring)
-checkPresent, waitPresent, randomFullyPresent := getIptablesCommandSupport(v1, v2, v3)
-ipt := IPTables{
-path: path,
-proto: proto,
-hasCheck: checkPresent,
-hasWait: waitPresent,
-hasRandomFully: randomFullyPresent,
-v1: v1,
-v2: v2,
-v3: v3,
-mode: mode,
-}
-return &ipt, nil
+return New(IPFamily(proto), Timeout(0))
 }
 // Proto returns the protocol used by this IPTables.
@@ -160,6 +211,14 @@ func (ipt *IPTables) Delete(table, chain string, rulespec ...string) error {
 return ipt.run(cmd...)
 }
+func (ipt *IPTables) DeleteIfExists(table, chain string, rulespec ...string) error {
+exists, err := ipt.Exists(table, chain, rulespec...)
+if err == nil && exists {
+err = ipt.Delete(table, chain, rulespec...)
+}
+return err
+}
 // List rules in specified table/chain
 func (ipt *IPTables) List(table, chain string) ([]string, error) {
 args := []string{"-t", table, "-S", chain}
@@ -197,6 +256,21 @@ func (ipt *IPTables) ListChains(table string) ([]string, error) {
 return chains, nil
 }
+// '-S' is fine with non existing rule index as long as the chain exists
+// therefore pass index 1 to reduce overhead for large chains
+func (ipt *IPTables) ChainExists(table, chain string) (bool, error) {
+err := ipt.run("-t", table, "-S", chain, "1")
+eerr, eok := err.(*Error)
+switch {
+case err == nil:
+return true, nil
+case eok && eerr.ExitStatus() == 1:
+return false, nil
+default:
+return false, err
+}
+}
 // Stats lists rules including the byte and packet counts
 func (ipt *IPTables) Stats(table, chain string) ([][]string, error) {
 args := []string{"-t", table, "-L", chain, "-n", "-v", "-x"}
@@ -263,6 +337,63 @@ func (ipt *IPTables) Stats(table, chain string) ([][]string, error) {
 return rows, nil
 }
+// ParseStat parses a single statistic row into a Stat struct. The input should
+// be a string slice that is returned from calling the Stat method.
+func (ipt *IPTables) ParseStat(stat []string) (parsed Stat, err error) {
+// For forward-compatibility, expect at least 10 fields in the stat
+if len(stat) < 10 {
+return parsed, fmt.Errorf("stat contained fewer fields than expected")
+}
+// Convert the fields that are not plain strings
+parsed.Packets, err = strconv.ParseUint(stat[0], 0, 64)
+if err != nil {
+return parsed, fmt.Errorf(err.Error(), "could not parse packets")
+}
+parsed.Bytes, err = strconv.ParseUint(stat[1], 0, 64)
+if err != nil {
+return parsed, fmt.Errorf(err.Error(), "could not parse bytes")
+}
+_, parsed.Source, err = net.ParseCIDR(stat[7])
+if err != nil {
+return parsed, fmt.Errorf(err.Error(), "could not parse source")
+}
+_, parsed.Destination, err = net.ParseCIDR(stat[8])
+if err != nil {
+return parsed, fmt.Errorf(err.Error(), "could not parse destination")
+}
+// Put the fields that are strings
+parsed.Target = stat[2]
+parsed.Protocol = stat[3]
+parsed.Opt = stat[4]
+parsed.Input = stat[5]
+parsed.Output = stat[6]
+parsed.Options = stat[9]
+return parsed, nil
+}
+// StructuredStats returns statistics as structured data which may be further
+// parsed and marshaled.
+func (ipt *IPTables) StructuredStats(table, chain string) ([]Stat, error) {
+rawStats, err := ipt.Stats(table, chain)
+if err != nil {
+return nil, err
+}
+structStats := []Stat{}
+for _, rawStat := range rawStats {
+stat, err := ipt.ParseStat(rawStat)
+if err != nil {
+return nil, err
+}
+structStats = append(structStats, stat)
+}
+return structStats, nil
+}
 func (ipt *IPTables) executeList(args []string) ([]string, error) {
 var stdout bytes.Buffer
 if err := ipt.runWithOutput(args, &stdout); err != nil {
@@ -276,17 +407,6 @@ func (ipt *IPTables) executeList(args []string) ([]string, error) {
 rules = rules[:len(rules)-1]
 }
-// nftables mode doesn't return an error code when listing a non-existent
-// chain. Patch that up.
-if len(rules) == 0 && ipt.mode == "nf_tables" {
-v := 1
-return nil, &Error{
-cmd: exec.Cmd{Args: args},
-msg: "iptables: No chain/target/match by that name.",
-exitStatus: &v,
-}
-}
 for i, rule := range rules {
 rules[i] = filterRuleOutput(rule)
 }
@@ -300,18 +420,13 @@ func (ipt *IPTables) NewChain(table, chain string) error {
 return ipt.run("-t", table, "-N", chain)
 }
+const existsErr = 1
 // ClearChain flushed (deletes all rules) in the specified table/chain.
 // If the chain does not exist, a new one will be created
 func (ipt *IPTables) ClearChain(table, chain string) error {
 err := ipt.NewChain(table, chain)
-// the exit code for "this table already exists" is different for
-// different iptables modes
-existsErr := 1
-if ipt.mode == "nf_tables" {
-existsErr = 4
-}
 eerr, eok := err.(*Error)
 switch {
 case err == nil:
@@ -335,6 +450,26 @@ func (ipt *IPTables) DeleteChain(table, chain string) error {
 return ipt.run("-t", table, "-X", chain)
 }
+func (ipt *IPTables) ClearAndDeleteChain(table, chain string) error {
+exists, err := ipt.ChainExists(table, chain)
+if err != nil || !exists {
+return err
+}
+err = ipt.run("-t", table, "-F", chain)
+if err == nil {
+err = ipt.run("-t", table, "-X", chain)
+}
+return err
+}
+func (ipt *IPTables) ClearAll() error {
+return ipt.run("-F")
+}
+func (ipt *IPTables) DeleteAll() error {
+return ipt.run("-X")
+}
 // ChangePolicy changes policy on chain to target
 func (ipt *IPTables) ChangePolicy(table, chain, target string) error {
 return ipt.run("-t", table, "-P", chain, target)
@@ -362,6 +497,9 @@ func (ipt *IPTables) runWithOutput(args []string, stdout io.Writer) error {
 args = append([]string{ipt.path}, args...)
 if ipt.hasWait {
 args = append(args, "--wait")
+if ipt.timeout != 0 && ipt.waitSupportSecond {
+args = append(args, strconv.Itoa(ipt.timeout))
+}
 } else {
 fmu, err := newXtablesFileLock()
 if err != nil {
@@ -369,6 +507,7 @@ func (ipt *IPTables) runWithOutput(args []string, stdout io.Writer) error {
 }
 ul, err := fmu.tryLock()
 if err != nil {
+syscall.Close(fmu.fd)
 return err
 }
 defer ul.Unlock()
@@ -404,8 +543,8 @@ func getIptablesCommand(proto Protocol) string {
 }
 // Checks if iptables has the "-C" and "--wait" flag
-func getIptablesCommandSupport(v1 int, v2 int, v3 int) (bool, bool, bool) {
-return iptablesHasCheckCommand(v1, v2, v3), iptablesHasWaitCommand(v1, v2, v3), iptablesHasRandomFully(v1, v2, v3)
+func getIptablesCommandSupport(v1 int, v2 int, v3 int) (bool, bool, bool, bool) {
+return iptablesHasCheckCommand(v1, v2, v3), iptablesHasWaitCommand(v1, v2, v3), iptablesWaitSupportSecond(v1, v2, v3), iptablesHasRandomFully(v1, v2, v3)
 }
 // getIptablesVersion returns the first three components of the iptables version
@@ -480,6 +619,17 @@ func iptablesHasWaitCommand(v1 int, v2 int, v3 int) bool {
 return false
 }
+//Checks if an iptablse version is after 1.6.0, when --wait support second
+func iptablesWaitSupportSecond(v1 int, v2 int, v3 int) bool {
+if v1 > 1 {
+return true
+}
+if v1 == 1 && v2 >= 6 {
+return true
+}
+return false
+}
 // Checks if an iptables version is after 1.6.2, when --random-fully was added
 func iptablesHasRandomFully(v1 int, v2 int, v3 int) bool {
 if v1 > 1 {

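A usage sketch against the new go-iptables surface shown above (functional options, ChainExists, and structured stats). Illustrative only: the chain name is hypothetical and the calls require iptables and sufficient privileges.

    package main

    import (
        "fmt"

        "github.com/coreos/go-iptables/iptables"
    )

    func main() {
        // An IPv6 handle that waits at most 5 seconds for the xtables lock.
        ipt, err := iptables.New(iptables.IPFamily(iptables.ProtocolIPv6), iptables.Timeout(5))
        if err != nil {
            panic(err)
        }

        // ChainExists probes cheaply by listing only rule 1 of the chain.
        exists, err := ipt.ChainExists("filter", "EXAMPLE-CHAIN")
        if err != nil {
            panic(err)
        }
        fmt.Println("chain exists:", exists)

        // StructuredStats combines Stats and ParseStat into typed entries.
        stats, err := ipt.StructuredStats("filter", "FORWARD")
        if err != nil {
            panic(err)
        }
        for _, s := range stats {
            fmt.Printf("%s: %d packets, %d bytes\n", s.Target, s.Packets, s.Bytes)
        }
    }
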
vendor/github.com/evanphx/json-patch/.gitignore (generated, vendored, new file, 6 lines)

@@ -0,0 +1,6 @@
# editor and IDE paraphernalia
.idea
.vscode
# macOS paraphernalia
.DS_Store


@@ -1,19 +0,0 @@
language: go
go:
- 1.14
- 1.13
install:
- if ! go get code.google.com/p/go.tools/cmd/cover; then go get golang.org/x/tools/cmd/cover; fi
- go get github.com/jessevdk/go-flags
script:
- go get
- go test -cover ./...
- cd ./v5
- go get
- go test -cover ./...
notifications:
email: false


@@ -39,6 +39,25 @@ go get -u github.com/evanphx/json-patch/v5
 which limits the total size increase in bytes caused by "copy" operations in a
 patch. It defaults to 0, which means there is no limit.
+These global variables control the behavior of `jsonpatch.Apply`.
+An alternative to `jsonpatch.Apply` is `jsonpatch.ApplyWithOptions` whose behavior
+is controlled by an `options` parameter of type `*jsonpatch.ApplyOptions`.
+Structure `jsonpatch.ApplyOptions` includes the configuration options above
+and adds two new options: `AllowMissingPathOnRemove` and `EnsurePathExistsOnAdd`.
+When `AllowMissingPathOnRemove` is set to `true`, `jsonpatch.ApplyWithOptions` will ignore
+`remove` operations whose `path` points to a non-existent location in the JSON document.
+`AllowMissingPathOnRemove` defaults to `false` which will lead to `jsonpatch.ApplyWithOptions`
+returning an error when hitting a missing `path` on `remove`.
+When `EnsurePathExistsOnAdd` is set to `true`, `jsonpatch.ApplyWithOptions` will make sure
+that `add` operations produce all the `path` elements that are missing from the target object.
+Use `jsonpatch.NewApplyOptions` to create an instance of `jsonpatch.ApplyOptions`
+whose values are populated from the global configuration variables.
 ## Create and apply a merge patch
 Given both an original JSON document and a modified JSON document, you can create
 a [Merge Patch](https://tools.ietf.org/html/rfc7396) document.

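A sketch of the ApplyWithOptions flow that the README addition describes, assuming the /v5 module where ApplyOptions lives.

    package main

    import (
        "fmt"

        jsonpatch "github.com/evanphx/json-patch/v5"
    )

    func main() {
        doc := []byte(`{"name": "kilo"}`)
        patch, err := jsonpatch.DecodePatch([]byte(`[
            {"op": "remove", "path": "/missing"},
            {"op": "add", "path": "/metadata/labels/app", "value": "kilo"}
        ]`))
        if err != nil {
            panic(err)
        }

        opts := jsonpatch.NewApplyOptions()
        opts.AllowMissingPathOnRemove = true // tolerate removes on absent paths
        opts.EnsurePathExistsOnAdd = true    // create /metadata/labels on the way

        out, err := patch.ApplyWithOptions(doc, opts)
        if err != nil {
            panic(err)
        }
        fmt.Println(string(out))
    }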

@@ -38,7 +38,10 @@ func mergeDocs(doc, patch *partialDoc, mergeMerge bool) {
 cur, ok := (*doc)[k]
 if !ok || cur == nil {
+if !mergeMerge {
 pruneNulls(v)
+}
 (*doc)[k] = v
 } else {
 (*doc)[k] = merge(cur, v, mergeMerge)
@@ -79,8 +82,8 @@ func pruneAryNulls(ary *partialArray) *partialArray {
 for _, v := range *ary {
 if v != nil {
 pruneNulls(v)
-newAry = append(newAry, v)
 }
+newAry = append(newAry, v)
 }
 *ary = newAry
@@ -88,8 +91,8 @@ func pruneAryNulls(ary *partialArray) *partialArray {
 return ary
 }
-var errBadJSONDoc = fmt.Errorf("Invalid JSON Document")
-var errBadJSONPatch = fmt.Errorf("Invalid JSON Patch")
+var ErrBadJSONDoc = fmt.Errorf("Invalid JSON Document")
+var ErrBadJSONPatch = fmt.Errorf("Invalid JSON Patch")
 var errBadMergeTypes = fmt.Errorf("Mismatched JSON Documents")
 // MergeMergePatches merges two merge patches together, such that
@@ -114,19 +117,19 @@ func doMergePatch(docData, patchData []byte, mergeMerge bool) ([]byte, error) {
 patchErr := json.Unmarshal(patchData, patch)
 if _, ok := docErr.(*json.SyntaxError); ok {
-return nil, errBadJSONDoc
+return nil, ErrBadJSONDoc
 }
 if _, ok := patchErr.(*json.SyntaxError); ok {
-return nil, errBadJSONPatch
+return nil, ErrBadJSONPatch
 }
 if docErr == nil && *doc == nil {
-return nil, errBadJSONDoc
+return nil, ErrBadJSONDoc
 }
 if patchErr == nil && *patch == nil {
-return nil, errBadJSONPatch
+return nil, ErrBadJSONPatch
 }
 if docErr != nil || patchErr != nil {
@@ -142,7 +145,7 @@ func doMergePatch(docData, patchData []byte, mergeMerge bool) ([]byte, error) {
 patchErr = json.Unmarshal(patchData, patchAry)
 if patchErr != nil {
-return nil, errBadJSONPatch
+return nil, ErrBadJSONPatch
 }
 pruneAryNulls(patchAry)
@@ -150,7 +153,7 @@ func doMergePatch(docData, patchData []byte, mergeMerge bool) ([]byte, error) {
 out, patchErr := json.Marshal(patchAry)
 if patchErr != nil {
-return nil, errBadJSONPatch
+return nil, ErrBadJSONPatch
 }
 return out, nil
@@ -207,12 +210,12 @@ func createObjectMergePatch(originalJSON, modifiedJSON []byte) ([]byte, error) {
 err := json.Unmarshal(originalJSON, &originalDoc)
 if err != nil {
-return nil, errBadJSONDoc
+return nil, ErrBadJSONDoc
 }
 err = json.Unmarshal(modifiedJSON, &modifiedDoc)
 if err != nil {
-return nil, errBadJSONDoc
+return nil, ErrBadJSONDoc
 }
 dest, err := getDiff(originalDoc, modifiedDoc)
@@ -233,17 +236,17 @@ func createArrayMergePatch(originalJSON, modifiedJSON []byte) ([]byte, error) {
 err := json.Unmarshal(originalJSON, &originalDocs)
 if err != nil {
-return nil, errBadJSONDoc
+return nil, ErrBadJSONDoc
 }
 err = json.Unmarshal(modifiedJSON, &modifiedDocs)
 if err != nil {
-return nil, errBadJSONDoc
+return nil, ErrBadJSONDoc
 }
 total := len(originalDocs)
 if len(modifiedDocs) != total {
-return nil, errBadJSONDoc
+return nil, ErrBadJSONDoc
 }
 result := []json.RawMessage{}

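Because the hunks above export the sentinel errors, callers can now distinguish malformed input from other failures; a minimal sketch with the root json-patch package.

    package main

    import (
        "fmt"

        jsonpatch "github.com/evanphx/json-patch"
    )

    func main() {
        _, err := jsonpatch.CreateMergePatch([]byte(`{"a": 1}`), []byte(`not json`))
        if err == jsonpatch.ErrBadJSONDoc {
            fmt.Println("one of the documents is not valid JSON")
        }
    }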

@@ -412,6 +412,17 @@ func (d *partialArray) set(key string, val *lazyNode) error {
 if err != nil {
 return err
 }
+if idx < 0 {
+if !SupportNegativeIndices {
+return errors.Wrapf(ErrInvalidIndex, "Unable to access invalid index: %d", idx)
+}
+if idx < -len(*d) {
+return errors.Wrapf(ErrInvalidIndex, "Unable to access invalid index: %d", idx)
+}
+idx += len(*d)
+}
 (*d)[idx] = val
 return nil
 }
@@ -462,6 +473,16 @@ func (d *partialArray) get(key string) (*lazyNode, error) {
 return nil, err
 }
+if idx < 0 {
+if !SupportNegativeIndices {
+return nil, errors.Wrapf(ErrInvalidIndex, "Unable to access invalid index: %d", idx)
+}
+if idx < -len(*d) {
+return nil, errors.Wrapf(ErrInvalidIndex, "Unable to access invalid index: %d", idx)
+}
+idx += len(*d)
+}
 if idx >= len(*d) {
 return nil, errors.Wrapf(ErrInvalidIndex, "Unable to access invalid index: %d", idx)
 }
@@ -547,6 +568,29 @@ func (p Patch) replace(doc *container, op Operation) error {
 return errors.Wrapf(err, "replace operation failed to decode path")
 }
+if path == "" {
+val := op.value()
+if val.which == eRaw {
+if !val.tryDoc() {
+if !val.tryAry() {
+return errors.Wrapf(err, "replace operation value must be object or array")
+}
+}
+}
+switch val.which {
+case eAry:
+*doc = &val.ary
+case eDoc:
+*doc = &val.doc
+case eRaw:
+return errors.Wrapf(err, "replace operation hit impossible case")
+}
+return nil
+}
 con, key := findObject(doc, path)
 if con == nil {
@@ -613,6 +657,25 @@ func (p Patch) test(doc *container, op Operation) error {
 return errors.Wrapf(err, "test operation failed to decode path")
 }
+if path == "" {
+var self lazyNode
+switch sv := (*doc).(type) {
+case *partialDoc:
+self.doc = *sv
+self.which = eDoc
+case *partialArray:
+self.ary = *sv
+self.which = eAry
+}
+if self.equal(op.value()) {
+return nil
+}
+return errors.Wrapf(ErrTestFailed, "testing value %s failed", path)
+}
 con, key := findObject(doc, path)
 if con == nil {
@@ -721,6 +784,10 @@ func (p Patch) Apply(doc []byte) ([]byte, error) {
 // ApplyIndent mutates a JSON document according to the patch, and returns the new
 // document indented.
 func (p Patch) ApplyIndent(doc []byte, indent string) ([]byte, error) {
+if len(doc) == 0 {
+return doc, nil
+}
 var pd container
 if doc[0] == '[' {
 pd = &partialArray{}

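The patch.go hunks above add negative array indices and whole-document test/replace operations; a small sketch, assuming the root json-patch package and its default SupportNegativeIndices = true.

    package main

    import (
        "fmt"

        jsonpatch "github.com/evanphx/json-patch"
    )

    func main() {
        doc := []byte(`["a", "b", "c"]`)
        patch, err := jsonpatch.DecodePatch([]byte(`[
            {"op": "replace", "path": "/-1", "value": "z"},
            {"op": "test", "path": "", "value": ["a", "b", "z"]}
        ]`))
        if err != nil {
            panic(err)
        }
        out, err := patch.Apply(doc)
        if err != nil {
            panic(err)
        }
        fmt.Println(string(out)) // ["a","b","z"]
    }
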
Some files were not shown because too many files have changed in this diff.