docs: update multi-cluster-services examples

Lucas Servén Marín 2019-05-10 02:42:20 +02:00
parent 35390054ba
commit bbbfc0548f
GPG Key ID: 586FEAF680DA74AD
2 changed files with 35 additions and 19 deletions


@@ -117,10 +117,14 @@ This allows workloads running in one cluster to access services running in another
For example, if `cluster1` is running a Kubernetes Service that we need to access from Pods running in `cluster2`, we could do the following:
```shell
# Register the nodes in cluster1 as peers of cluster2.
for n in $(kubectl --kubeconfig $KUBECONFIG1 get no -o name | cut -d'/' -f2); do
    kgctl --kubeconfig $KUBECONFIG1 showconf node $n --as-peer -o yaml --allowed-ips $SERVICECIDR1 | kubectl --kubeconfig $KUBECONFIG2 apply -f -
done
# Register the nodes in cluster2 as peers of cluster1.
for n in $(kubectl --kubeconfig $KUBECONFIG2 get no -o name | cut -d'/' -f2); do
    kgctl --kubeconfig $KUBECONFIG2 showconf node $n --as-peer -o yaml --allowed-ips $SERVICECIDR2 | kubectl --kubeconfig $KUBECONFIG1 apply -f -
done
# Create a Service in cluster2 to mirror the Service in cluster1.
cat <<'EOF' | kubectl --kubeconfig $KUBECONFIG2 apply -f -
apiVersion: v1


@@ -5,32 +5,43 @@ This enables clusters to provide services to other clusters over a secure connection
For example, a cluster on AWS with access to GPUs could run a machine learning service that could be consumed by workloads running in another location, e.g. an on-prem cluster without GPUs.
Unlike services exposed via Ingresses or NodePort Services, multi-cluster services can remain private and internal to the clusters.
*Note*: in order for connected clusters to be fully routable, the allowed IPs that they declare must be non-overlapping, i.e. the Kilo, pod, and service CIDRs.
## Getting Started
Consider two clusters, `cluster1` with:
* kubeconfig: `KUBECONFIG1`
* service CIDR: `$SERVICECIDR1`
and `cluster2` with:
* kubeconfig: `KUBECONFIG2`
* service CIDR: `$SERVICECIDR2`
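For concreteness, these variables might be exported along the following lines before running the commands below; the kubeconfig paths and CIDR values here are illustrative assumptions and must be replaced with the real, non-overlapping values of your clusters:
```shell
# Illustrative values only; substitute your own kubeconfig paths and service CIDRs.
export KUBECONFIG1=$HOME/.kube/cluster1.yaml
export SERVICECIDR1=10.43.0.0/16  # service CIDR of cluster1; must not overlap with cluster2's CIDRs
export KUBECONFIG2=$HOME/.kube/cluster2.yaml
export SERVICECIDR2=10.96.0.0/16  # service CIDR of cluster2
```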
In order to give `cluster2` access to a service running on `cluster1`, start by peering the nodes:
```shell
# Register the nodes in cluster1 as peers of cluster2.
for n in $(kubectl --kubeconfig $KUBECONFIG1 get no -o name | cut -d'/' -f2); do
    # Specify the service CIDR as an extra IP range that should be routable.
    kgctl --kubeconfig $KUBECONFIG1 showconf node $n --as-peer -o yaml --allowed-ips $SERVICECIDR1 | kubectl --kubeconfig $KUBECONFIG2 apply -f -
done
# Register the nodes in cluster2 as peers of cluster1.
for n in $(kubectl --kubeconfig $KUBECONFIG2 get no -o name | cut -d'/' -f2); do
    # Specify the service CIDR as an extra IP range that should be routable.
    kgctl --kubeconfig $KUBECONFIG2 showconf node $n --as-peer -o yaml --allowed-ips $SERVICECIDR2 | kubectl --kubeconfig $KUBECONFIG1 apply -f -
done
```
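To confirm that the loops registered everything, the resulting Peer resources can be listed in each cluster. This assumes a standard Kilo installation in which the Peer custom resource is registered as `peers.kilo.squat.ai`:
```shell
# List the Peer resources created by the loops above; expect one entry per remote node.
kubectl --kubeconfig $KUBECONFIG2 get peers.kilo.squat.ai
kubectl --kubeconfig $KUBECONFIG1 get peers.kilo.squat.ai
```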
Now, Pods on `cluster1` can ping, cURL, or otherwise make requests against Pods and Services in `cluster2`, and vice versa.
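A quick way to exercise that connectivity is to launch a throwaway Pod in one cluster and cURL a cluster IP from the other; the Pod name, image, and port here are illustrative assumptions, and `$CLUSTERIP_FROM_CLUSTER1` stands for the cluster IP of an HTTP Service running in `cluster1`:
```shell
# Run a temporary Pod in cluster2 and request a Service that lives in cluster1.
kubectl --kubeconfig $KUBECONFIG2 run curl-test --rm -it --restart=Never \
    --image=curlimages/curl -- curl -s http://$CLUSTERIP_FROM_CLUSTER1
```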
## Mirroring Services
At this point, Kilo has created a fully routable network between the two clusters.
However, as it stands the external Services can only be accessed by using their clusterIPs directly.
For example, a Pod in `cluster2` would need to use the URL `http://$CLUSTERIP_FROM_CLUSTER1` to make a request against a Service running in `cluster1`.
In other words, the Services are not yet Kubernetes-native.
We can easily change that by creating a Kubernetes Service in `cluster2` to mirror the Service in `cluster1`:
```shell
cat <<'EOF' | kubectl --kubeconfig $KUBECONFIG2 apply -f -
@@ -48,10 +59,11 @@ metadata:
  name: important-service
subsets:
  - addresses:
      - ip: $CLUSTERIP_FROM_CLUSTER1 # The cluster IP of the important service on cluster1.
    ports:
      - port: 80
EOF
```
Now, `important-service` can be used on `cluster2` just like any other Kubernetes Service.
That means that a Pod in `cluster2` could directly use the Kubernetes DNS name for the Service when making HTTP requests, for example: `http://important-service.default.svc.cluster.local`.
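The hunk above shows only a fragment of the manifest. A minimal sketch of what a complete mirror might look like is a selector-less Service plus a hand-managed Endpoints object of the same name that points at the cluster IP from `cluster1`; the port and the unquoted heredoc (so that the shell expands `$CLUSTERIP_FROM_CLUSTER1`) are assumptions for illustration:
```shell
cat <<EOF | kubectl --kubeconfig $KUBECONFIG2 apply -f -
apiVersion: v1
kind: Service
metadata:
  name: important-service
spec:
  ports:
    - port: 80  # Must match the port exposed by the Service in cluster1.
---
apiVersion: v1
kind: Endpoints
metadata:
  name: important-service
subsets:
  - addresses:
      - ip: $CLUSTERIP_FROM_CLUSTER1 # The cluster IP of the important service on cluster1.
    ports:
      - port: 80
EOF
```
Because the Service has no selector, Kubernetes does not manage its endpoints automatically; instead it associates the hand-written Endpoints object with the Service by name, so traffic to `important-service` in `cluster2` is forwarded to the cluster IP in `cluster1` over the WireGuard mesh.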