Install the Router in Kubernetes
ziti-router
Host an OpenZiti router in Kubernetes
Add the OpenZiti Charts Repo to Helm
helm repo add openziti https://docs.openziti.io/helm-charts/
Minimal Installation
After adding the charts repo to Helm, you may install the chart in the same cluster where the controller is running, using the cluster-internal service as the control plane endpoint. The default values used in this minimal approach are suitable for a Kubernetes distribution like K3S or Minikube that configures pass-through TLS for Service resources of type LoadBalancer.
# get a router enrollment token from the controller's management API
ziti edge create edge-router router1 \
--role-attributes default --tunneler-enabled --jwt-output-file /tmp/router1.jwt
# subscribe to the openziti Helm repo
helm repo add openziti https://openziti.github.io/helm-charts/
# install the router chart
helm install \
--namespace ziti-router --create-namespace --generate-name \
openziti/ziti-router \
--set-file enrollmentJwt=/tmp/router1.jwt \
--set advertisedHost=ziti-router.example.com \
--set ctrl.endpoint=ziti-controller-ctrl.ziti-controller.svc:6262
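The ctrl.endpoint value shown here assumes the controller chart release is named ziti-controller and installed in a namespace of the same name, which yields the cluster-internal service ziti-controller-ctrl on port 6262. If your controller release differs, you can discover the correct service name and port before installing:
# list the controller's cluster services to find the ctrl plane endpoint
kubectl get services --namespace ziti-controller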
You must supply some values when you install the chart:
Key | Type | Default | Description |
---|---|---|---|
enrollmentJwt | string | nil | the router enrollment token from the Ziti management API |
advertisedHost | string | nil | the DNS name that edge clients will resolve to reach this router's edge listener |
ctrl.endpoint | string | nil | the DNS name:port of the router control plane endpoint provided by the Ziti controller |
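Once the release is installed and enrollment succeeds, you can confirm the router is running and registered. This is a quick check, assuming the release landed in the ziti-router namespace and you are still logged in to the controller with the ziti CLI:
# wait for the router deployment to become available
kubectl wait deployments --namespace ziti-router --all --for condition=Available --timeout 240s
# confirm the router shows as online in the controller
ziti edge list edge-routers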
Managed Kubernetes Installation
Managed Kubernetes providers typically configure server TLS for a Service of type LoadBalancer. Ziti needs pass-through TLS because edge clients authenticate to the router with client certificates. We'll accomplish this by changing the Service type to ClusterIP and creating Ingress resources with pass-through TLS for each cluster service.
This example demonstrates creating TLS pass-through Ingress resources for use with ingress-nginx.
Ensure you have the ingress-nginx chart installed with controller.extraArgs.enable-ssl-passthrough=true. You can verify this feature is enabled by running kubectl describe pods {ingress-nginx-controller pod} and checking the args for --enable-ssl-passthrough=true.
If not enabled, then you must patch the ingress-nginx deployment to enable the SSL passthrough option.
kubectl patch deployment "ingress-nginx-controller" \
--namespace ingress-nginx \
--type json \
--patch '[{"op": "add",
"path": "/spec/template/spec/containers/0/args/-",
"value":"--enable-ssl-passthrough"
}]'
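After patching, you can confirm the argument is present on the running controller pod. This check assumes the standard ingress-nginx pod labels; adjust the selector if your installation uses different ones:
# print the controller pod's args and look for the passthrough flag
kubectl get pods --namespace ingress-nginx \
  --selector app.kubernetes.io/component=controller \
  --output jsonpath='{.items[0].spec.containers[0].args}' | grep ssl-passthrough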
# subscribe to ingress-nginx
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx/
# install ingress-nginx
helm install \
--namespace ingress-nginx --create-namespace --generate-name \
ingress-nginx/ingress-nginx \
--set controller.extraArgs.enable-ssl-passthrough=true
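Your advertised DNS names, e.g., ziti-router.example.com, must resolve to the address of the ingress-nginx controller's LoadBalancer service. You can find that address in the EXTERNAL-IP column:
# find the external IP the advertised DNS names should resolve to
kubectl get services --namespace ingress-nginx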
Create a Helm chart values file for this router chart.
# /tmp/router-values.yml
ctrl:
  endpoint: ziti-controller-ctrl.ziti-controller.svc:6262
advertisedHost: ziti-router.example.com
edge:
  advertisedPort: 443
  service:
    type: ClusterIP
  ingress:
    enabled: true
    ingressClassName: nginx
    annotations:
      kubernetes.io/ingress.allow-http: "false"
      nginx.ingress.kubernetes.io/ssl-passthrough: "true"
Now upgrade your router chart release with the values file.
# will attempt enrollment again if it failed initially
helm upgrade \
--namespace ziti-router ziti-router-123456789 \
openziti/ziti-router \
--set-file enrollmentJwt=/tmp/router1.jwt \
--values /tmp/router-values.yml
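After the upgrade completes, a pass-through Ingress for the edge listener should exist with the advertised host you chose. A quick check, assuming the ziti-router namespace:
# confirm the edge Ingress was created with the expected host
kubectl get ingress --namespace ziti-router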
Router Transport Links
The minimal installation guided you to install a router in the same cluster as the controller, and the managed Kubernetes upgrade guided you to expose the router's edge listener as a pass-through TLS Ingress. Building on those concepts, let's expand your mesh of Ziti routers. For this you will need to configure router link listeners, i.e. router-to-router links. This is accomplished in this chart by setting some additional values.
Merge the following with your router values.
linkListeners:
  transport:
    advertisedHost: router1-transport.example.com
    advertisedPort: 443
    service:
      enabled: true
      type: ClusterIP
    ingress:
      enabled: true
      ingressClassName: nginx
      annotations:
        kubernetes.io/ingress.allow-http: "false"
        nginx.ingress.kubernetes.io/ssl-passthrough: "true"
Notice that we've chosen a distinct DNS name for this new ingress. This allows us to have any number of 443/tcp virtual servers on the same IP address. You may find it convenient to delegate a DNS zone with a wildcard record resolving to your Nginx LoadBalancer IP.
Now upgrade your router chart release with the merged values file.
helm upgrade \
--namespace ziti-router ziti-router-123456789 \
openziti/ziti-router \
--set-file enrollmentJwt=/tmp/router1.jwt \
--values /tmp/router-values.yml
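As additional routers with link listeners enroll and come online, they will dial this listener to form fabric links. You can verify the mesh from the controller, assuming you are logged in with the ziti CLI:
# list fabric links to confirm routers are forming transport links
ziti fabric list links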
Proxy tunnel mode
The OpenZiti router supports a proxy tunnel mode. If you need to deploy Kubernetes services alongside your Ziti router to make the proxied ports available as ClusterIP, NodePort, or LoadBalancer services within your cluster, this Helm chart can deploy those services for you. In some cases a single Kubernetes service exposing all of the ports assigned to your OpenZiti services is not enough; you may want some proxy ports on one Kubernetes service and others on another, for example, to expose one proxied service as a ClusterIP service and another as a LoadBalancer service.
Here's an example router values snippet to merge with your other values:
tunnel:
  mode: proxy
  proxyServices:
    # this will be bound on the "default" proxy Kubernetes service, see below
    - zitiService: my-ziti-service.svc
      containerPort: 10443
      advertisedPort: 10443
    # this will be bound on the additional proxy Kubernetes service named "myservice" below
    - zitiService: my-other-service.svc
      k8sService: myservice
      containerPort: 10022
      advertisedPort: 10022
  proxyDefaultK8sService:
    enabled: true
    type: ClusterIP
  proxyAdditionalK8sServices:
    - name: myservice
      type: LoadBalancer
      annotations:
        metallb.universe.tf/loadBalancerIPs: 192.168.1.100
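After upgrading with these values, the chart should create the default proxy service and the additional LoadBalancer service. You can confirm them and note their cluster or external IPs, assuming the ziti-router namespace:
# list the proxy services created for the router's tunnel component
kubectl get services --namespace ziti-router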
Values Reference
Key | Type | Default | Description |
---|---|---|---|
additionalVolumes | list | [] | additional volumes to mount to ziti-router container |
advertisedHost | string | nil | common advertised host for the transport and edge listeners; can also be specified separately via edge.advertisedHost and linkListeners.transport.advertisedHost |
affinity | object | {} | deployment template spec affinity |
configFile | string | "ziti-router.yaml" | filename of router config YAML |
configMountDir | string | "/etc/ziti/config" | writeable mountpoint where read-only config file is projected to allow router to write ./endpoints statefile in same dir |
csr.sans.dns | list | [] | additional DNS SANs |
csr.sans.ip | list | [] | additional IP SANs |
ctrl.endpoint | string | nil | required control plane endpoint |
dnsConfig | object | {} | allows overriding DNS options when dnsPolicy is set to None |
dnsPolicy | string | "ClusterFirstWithHostNet" | |
edge.advertisedHost | string | nil | DNS name that edge clients will use to reach this router's edge listener |
edge.advertisedPort | int | 443 | cluster service, node port, load balancer, and ingress port |
edge.containerPort | int | 3022 | cluster service target port on the container |
edge.enabled | bool | true | enable the edge listener in the router config |
edge.ingress.annotations | string | nil | ingress annotations, e.g., to configure ingress-nginx |
edge.ingress.enabled | bool | false | create an ingress for the cluster service |
edge.service.annotations | string | nil | service annotations |
edge.service.enabled | bool | true | create a cluster service for the edge listener |
edge.service.labels | string | nil | service labels |
edge.service.type | string | "ClusterIP" | expose the service as a ClusterIP, NodePort, or LoadBalancer |
enrollJwtFile | string | "enrollment.jwt" | |
enrollmentJwt | string | nil | enrollment one time token from the controller's management API |
env | string | nil | set name to value in containers' environment |
execMountDir | string | "/usr/local/bin" | read-only mountpoint for executables (must be in image's executable search PATH) |
fabric.metrics.enabled | bool | false | configure fabric metrics in the router config |
forwarder.latencyProbeInterval | int | 10 | |
forwarder.linkDialQueueLength | int | 1000 | |
forwarder.linkDialWorkerCount | int | 32 | |
forwarder.rateLimitedQueueLength | int | 5000 | |
forwarder.rateLimitedWorkerCount | int | 64 | |
forwarder.xgressDialQueueLength | int | 1000 | |
forwarder.xgressDialWorkerCount | int | 128 | |
hostNetwork | bool | false | request host networking for the pod, i.e., tproxy ports are enabled in the host network namespace, e.g., for an egress gateway |
identityMountDir | string | "/etc/ziti/identity" | read-only mountpoint for router identity secret specified in deployment for use by router run container |
image.additionalArgs | list | [] | additional arguments can be passed directly to the container to modify ziti runtime arguments |
image.args | list | ["run","{{ .Values.configMountDir }}/{{ .Values.configFile }}"] | deployment container command args and opts |
image.command | list | ["/entrypoint.bash"] | deployment container command |
image.pullPolicy | string | "Always" | deployment image pull policy |
image.repository | string | "docker.io/openziti/ziti-router" | container image repository for deployment |
image.tag | string | nil | container image tag (default is Chart's appVersion) |
linkListeners.transport.advertisedHost | string | nil | DNS name that other routers will use to form mesh transport links with this router. Default is cluster-internal service DNS name:port. |
linkListeners.transport.advertisedPort | int | 443 | cluster service, node port, load balancer, and ingress port |
linkListeners.transport.containerPort | int | 10080 | cluster service target port on the container |
linkListeners.transport.ingress.annotations | string | nil | ingress annotations, e.g., to configure ingress-nginx |
linkListeners.transport.ingress.enabled | bool | false | create an ingress for the cluster service |
linkListeners.transport.service.annotations | string | nil | service annotations |
linkListeners.transport.service.enabled | bool | true | create a cluster service for the router transport link listener |
linkListeners.transport.service.labels | string | nil | service labels |
linkListeners.transport.service.type | string | "ClusterIP" | expose the service as a ClusterIP, NodePort, or LoadBalancer |
nodeSelector | object | {} | deployment template spec node selector |
persistence.accessMode | string | "ReadWriteOnce" | PVC access mode: ReadWriteOnce (concurrent mounts not allowed), ReadWriteMany (concurrent allowed) |
persistence.annotations | object | {} | annotations for the PVC |
persistence.enabled | bool | true | required: place a storage claim for the ctrl endpoints state file |
persistence.existingClaim | string | "" | a manually managed Persistent Volume and Claim; requires persistence.enabled: true; if defined, the PVC must be created manually before the volume will be bound |
persistence.size | string | "50Mi" | 50Mi is plenty for this state file |
persistence.storageClass | string | "" | Storage class of PV to bind. By default it looks for the default storage class. If the PV uses a different storage class, specify that here. |
persistence.volumeName | string | nil | PVC volume name |
podAnnotations | object | {} | annotations to apply to all pods deployed by this chart |
podSecurityContext | object | {"fsGroup":2171} | deployment template spec security context |
podSecurityContext.fsGroup | int | 2171 | this is the GID of "ziggy" run-as user in the container that has access to any files created by the router process in the emptyDir volume used to persist the endpoints state file |
proxy | object | {} | Explicit proxy setting in the router configuration. Router can be deployed in a site where all egress traffic is forwarded through an explicit proxy. The enrollment will also be forwarded through the proxy. |
resources | object | {} | deployment container resources |
securityContext | string | nil | deployment container security context |
tolerations | list | [] | deployment template spec tolerations |
tunnel.diverterPath | string | nil | the tproxy mode can be switched from iptables-based interception to BPF interception by passing the path of the user space BPF program; the kernel space BPF program is expected to be loaded prior to or during router deployment, e.g., by a bpfman agent, hostPath mount, etc. |
tunnel.dnsSvcIpRange | string | nil | CIDR range for the internal service fqdn to dynamic intercept IP address resolution (default: 100.64.0.0/10) |
tunnel.lanIf | string | "lo" | interface device name for setting up INPUT firewall rules if the firewall is enabled; it must be set but is not needed in containers, so it defaults to lo |
tunnel.mode | string | "none" | run mode for the router's built-in tunnel component: host, tproxy, proxy, or none |
tunnel.proxyAdditionalK8sServices | list | [] | if tunnel mode is "proxy", create a separate cluster service for each Ziti service listed in "proxyServices" whose k8sService matches this name |
tunnel.proxyDefaultK8sService | object | {"enabled":true,"type":"ClusterIP"} | if tunnel mode is "proxy", create a cluster service named {{ release }}-proxy-default listening on each "advertisedPort" defined in "proxyServices" |
tunnel.proxyServices | list | [] | list of Ziti services for which K8s services are to be created by this deployment, default is one cluster service port per Ziti service |
tunnel.resolver | string | nil | Ziti nameserver listener where OS must be configured to send DNS queries (default: udp://127.0.0.1:53) |
TODO's
- replicas - does it make sense? afaik every replica needs its own identity - how does this fit in?
- lower CA / Cert lifetime; refresh certificates on update