Flavors of Kubernetes

Kubernetes comes in many flavors, particularly with respect to network configuration. This page documents these various configurations, and more specifically the linkerd configuration required for each.

If you have examples of linkerd working on other flavors or configurations of Kubernetes, please add them here!

Google Container Engine (GKE)

GKE is Kubernetes running on Google Compute Engine. All Kubernetes DaemonSet linkerd-examples and blog posts assume this configuration with default networking, unless otherwise noted.


Minikube

Minikube allows you to run Kubernetes locally.

No external LoadBalancer IPs

Unlike GKE, Minikube does not support external IPs on service objects with type: LoadBalancer. This means you need to run slightly different commands to get the IP:PORT addresses of service objects.

For example, deploy linkerd and sample apps:

kubectl apply -f https://raw.githubusercontent.com/linkerd/linkerd-examples/master/k8s-daemonset/k8s/linkerd.yml
kubectl apply -f https://raw.githubusercontent.com/linkerd/linkerd-examples/master/k8s-daemonset/k8s/hello-world-legacy.yml

Get linkerd routing address:

OUTGOING_PORT=$(kubectl get svc l5d -o jsonpath='{.spec.ports[?(@.name=="outgoing")].nodePort}')
L5D_INGRESS_LB=http://$(minikube ip):$OUTGOING_PORT
http_proxy=$L5D_INGRESS_LB curl -s http://hello
http_proxy=$L5D_INGRESS_LB curl -s http://world

Get linkerd admin address:

ADMIN_PORT=$(kubectl get svc l5d -o jsonpath='{.spec.ports[?(@.name=="admin")].nodePort}')
open http://$(minikube ip):$ADMIN_PORT

spec.nodeName does not work

In the default hello-world example, we use spec.nodeName to get the DNS name of the node we are running on. This does not work in Minikube.

Instead, look at the legacy hello-world example, where we use metadata.name to get the POD_NAME along with a hostIP.sh script to determine the host IP.
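The mechanics can be sketched as follows; the pod name and JSON fixture below are illustrative stand-ins, since a real hostIP.sh-style script would query the Kubernetes API for the pod (for example with kubectl get pod $POD_NAME -o jsonpath='{.status.hostIP}'):

```shell
# POD_NAME is injected via the downward API (fieldRef: metadata.name);
# hard-coded here as a stand-in for the real environment variable.
POD_NAME="hello-abc12"

# Stand-in for the API server's response for this pod; a hostIP.sh-style
# script fetches the real thing and reads .status.hostIP out of it.
pod_json='{"metadata":{"name":"'"$POD_NAME"'"},"status":{"hostIP":"192.168.99.100"}}'

# Pull out the host IP (jq would be cleaner; sed avoids the dependency).
host_ip=$(printf '%s' "$pod_json" | sed -n 's/.*"hostIP":"\([^"]*\)".*/\1/p')
echo "$host_ip"   # prints 192.168.99.100
```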

This issue is tracked at linkerd/linkerd-examples#45, and there is a proposal to improve Node-local services in Kubernetes.

Kubernetes prior to 1.4

Kubernetes prior to 1.4 exhibits the same spec.nodeName issue as Minikube.


CNI

For Kubernetes clusters configured with CNI, add hostNetwork: true to the linkerd daemonset spec and to the io.l5d.k8s.localnode and io.l5d.k8s.daemonset transformer configs.
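For illustration, the DaemonSet change looks roughly like this (a fragment, not a complete spec; the hostNetwork flag on the transformers is sketched here as an assumption, so consult the cni linkerd config for the authoritative version):

```yaml
# linkerd DaemonSet spec fragment: run the pods on the host network.
spec:
  template:
    spec:
      hostNetwork: true

# Transformer config fragment (hostNetwork on the transformers is an
# assumption here; see linkerd-cni.yml for the real config):
# transformers:
# - kind: io.l5d.k8s.daemonset
#   hostNetwork: true
```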

You’ll also need to set the NODE_NAME environment variable using the downward API:

  - name: ...
    image: buoyantio/linkerd:0.9.1
    env:
    - name: NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName

Note the differences between the default linkerd config and the
cni linkerd config.

To test this example, you can boot Minikube with CNI:

minikube start --network-plugin=cni
kubectl apply -f https://raw.githubusercontent.com/linkerd/linkerd-examples/master/k8s-daemonset/k8s/linkerd-cni.yml
kubectl apply -f https://raw.githubusercontent.com/linkerd/linkerd-examples/master/k8s-daemonset/k8s/hello-world.yml


RBAC

If you’re trying out these examples in an RBAC-enabled cluster (Kubernetes 1.6 and later), you’ll need to add RBAC rules to grant linkerd and namerd the permissions they need. See the RBAC section of this README for instructions, or refer to the linkerd-rbac-beta.yml file for an example.
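For example, to apply those rules (assuming linkerd-rbac-beta.yml lives alongside the other manifests used above):

kubectl apply -f https://raw.githubusercontent.com/linkerd/linkerd-examples/master/k8s-daemonset/k8s/linkerd-rbac-beta.yml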