Setting up linkerd-tcp in K8S

Hi everyone, I am trying to set up linkerd-tcp in K8S to proxy calls to Redis. However, I couldn’t find any examples for K8S. Here is the yaml file I am using, but the linkerd-tcp containers are crashing because they can’t find the directory specified in the volumeMounts. I tried a path similar to linkerd’s, but apparently it is not the right one. What mountPath should I use? Can someone please help me? Thanks!
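
For reference, this is roughly how I am checking the failure (the pod name below is just an example from my cluster; yours will differ):

kubectl -n linkerd get pods -l app=l5d-tcp
kubectl -n linkerd describe pod l5d-tcp-1psd1
kubectl -n linkerd logs l5d-tcp-1psd1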

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: l5d-tcp-config
  namespace: linkerd
data:
  config.yaml: |-
    # the admin section defines where to serve the admin interface, and how
    # frequently to refresh metrics served
    admin:
      port: 9992
      metricsIntervalSecs: 5

    routers:
    - label: zoneredis
      # Currently, only namerd's HTTP interface is supported
      interpreter:
        kind: io.l5d.namerd.http
        baseUrl: http://localhost:4180
        namespace: redis
        periodSecs: 20
      servers:
        # Each router has one or more 'servers' listening for incoming connections.
        # By default, routers listen on localhost. You need to specify a port.
        - port: 7474
          dstName: /svc/zoneredis
          connectTimeoutMs: 500

---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    app: l5d-tcp
  name: l5d-tcp
  namespace: linkerd
spec:
  template:
    metadata:
      labels:
        app: l5d-tcp
    spec:
      volumes:
      - name: l5d-tcp-config
        configMap:
          name: "l5d-tcp-config"
      containers:
      - name: l5d-tcp
        image: linkerd/linkerd-tcp:0.0.3
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        args:
        - /io.buoyant/linkerd-tcp/config/config.yaml
        ports:
        - name: admin
          containerPort: 9992
        - name: zoneredis
          containerPort: 7474
          hostPort: 7474
        volumeMounts:
        - name: "l5d-tcp-config"
          mountPath: "/io.buoyant/linkerd-tcp/config"
          readOnly: true

---
apiVersion: v1
kind: Service
metadata:
  name: l5d-tcp
  namespace: linkerd
spec:
  selector:
    app: l5d-tcp
  type: ClusterIP
  ports:
  - name: zoneredis
    port: 7474
  - name: admin
    port: 9992

Hi @ying, I see that your configuration is based on linkerd-tcp v0.1.0. However, the linkerd-tcp image you are using is v0.0.3. Could you try updating the image to v0.1.0 and see if that gets you in the right direction?
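
If it’s easier than re-applying the manifest, you can bump the image in place with something like this (assuming the DaemonSet name and namespace from your yaml):

kubectl -n linkerd set image daemonset/l5d-tcp l5d-tcp=linkerd/linkerd-tcp:0.1.0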

Hi @dennis.ab, I updated the image to linkerd/linkerd-tcp:0.1.0, but the pods are showing “ErrImagePull”, and here is the log from the pod: “Error from server (BadRequest): container ‘l5d-tcp’ in pod ‘l5d-tcp-1psd1’ is waiting to start: trying and failing to pull image”.
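
To rule out a cluster-side pull problem, I also tried fetching the tag directly on a Docker host, and it fails the same way:

docker pull linkerd/linkerd-tcp:0.1.0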

Ahh I see, it looks like the latest image we have on Docker Hub is v0.0.3, sorry about that. I have included a config below that sets up a running linkerd-tcp and namerd with a k8s namer. Try messing around with it and see if it satisfies your use case.

---
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
  name: dtabs.l5d.io
spec:
  scope: Namespaced
  group: l5d.io
  version: v1alpha1
  names:
    kind: DTab
    plural: dtabs
    singular: dtab
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: namerd-config
  namespace: linkerd
data:
  config.yml: |-
    admin:
      ip: 0.0.0.0
      port: 9991
    namers:
    - kind: io.l5d.k8s
      experimental: true
      host: localhost
      port: 8001
    storage:
      kind: io.l5d.k8s
      host: localhost
      port: 8001
      namespace: default
    interfaces:
    - kind: io.l5d.httpController
      ip: 0.0.0.0
      port: 4180
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: l5d-tcp-config
  namespace: linkerd
data:
  config.yaml: |-
    admin:
      addr: 0.0.0.0:9992
      metricsIntervalSecs: 10
    proxies:
    - label: default
      servers:
      - kind: io.l5d.tcp
        addr: 0.0.0.0:7474
      namerd:
        url: http://namerd.linkerd.svc.cluster.local:4180
        path: /svc/default
        intervalSecs: 5
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: l5d-tcp
  namespace: linkerd
  labels:
    app: l5d-tcp
spec:
  template:
    metadata:
      labels:
        app: l5d-tcp
    spec:
      volumes:
      - name: l5d-tcp-config
        configMap:
          name: "l5d-tcp-config"
      containers:
      - name: l5d-tcp
        image: linkerd/linkerd-tcp:0.0.3
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        args:
        - /io.buoyant/linkerd-tcp/config/config.yaml
        ports:
        - name: admin
          containerPort: 9992
        - name: zoneredis
          containerPort: 7474
          hostPort: 7474
        volumeMounts:
        - name: "l5d-tcp-config"
          mountPath: "/io.buoyant/linkerd-tcp/config"
          readOnly: true

---
apiVersion: v1
kind: Service
metadata:
  name: l5d-tcp
  namespace: linkerd
spec:
  selector:
    app: l5d-tcp
  type: LoadBalancer
  ports:
  - name: zoneredis
    port: 7474
  - name: admin
    port: 9992
---
kind: ReplicationController
apiVersion: v1
metadata:
  name: namerd
  namespace: linkerd
spec:
  replicas: 1
  selector:
    app: namerd
  template:
    metadata:
      labels:
        app: namerd
    spec:
      dnsPolicy: ClusterFirst
      volumes:
      - name: namerd-config
        configMap:
          name: namerd-config
      containers:
      - name: namerd
        image: buoyantio/namerd:1.3.2
        args:
        - /io.buoyant/namerd/config/config.yml
        ports:
        - name: http
          containerPort: 4180
        - name: admin
          containerPort: 9991
        volumeMounts:
        - name: "namerd-config"
          mountPath: "/io.buoyant/namerd/config"
          readOnly: true
      - name: kubectl
        image: buoyantio/kubectl:v1.8.5
        args:
        - "proxy"
        - "-p"
        - "8001"
---
apiVersion: v1
kind: Service
metadata:
  name: namerd
  namespace: linkerd
spec:
  selector:
    app: namerd
  type: LoadBalancer
  ports:
  - name: http
    port: 4180
  - name: admin
    port: 9991
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: namerctl-script
data:
  createNs.sh: |-
    #!/bin/sh
    set -e
    if namerctl dtab get default > /dev/null 2>&1; then
      echo "default namespace already exists"
    else
      echo "
      /host       => /#/io.l5d.k8s/default/http/hello;
      /svc/*      => /host;
      " | namerctl dtab create external -
    fi
---
kind: Job
apiVersion: batch/v1
metadata:
  name: namerctl
spec:
  template:
    metadata:
      name: namerctl
    spec:
      volumes:
      - name: namerctl-script
        configMap:
          name: namerctl-script
          defaultMode: 0755
      containers:
      - name: namerctl
        image: linkerd/namerctl:0.8.6
        env:
        - name: NAMERCTL_BASE_URL
          value: http://namerd.linkerd.svc.cluster.local:4180
        command:
        - "/namerctl/createNs.sh"
        volumeMounts:
        - name: "namerctl-script"
          mountPath: "/namerctl"
          readOnly: true
      restartPolicy: OnFailure
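
Once you save this to a file and apply it, you can sanity-check each piece; the file name and pod name below are placeholders:

kubectl apply -f linkerd-tcp.yml
kubectl -n linkerd get pods
kubectl -n linkerd port-forward <l5d-tcp-pod> 9992:9992

Also note the dtab in createNs.sh routes to an example hello service. To proxy Redis you would point /host at your Redis service instead, assuming a Service named redis in the default namespace with a port named redis:

/host       => /#/io.l5d.k8s/default/redis/redis;
/svc/*      => /host;

and then test by pointing redis-cli at the l5d-tcp service on port 7474.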

Thank you @dennis.ab. This config got the DaemonSet running. However, when I port-forward to one of the linkerd-tcp pods on port 9992, browsing to localhost:9992 gives me an unreachable site. Any idea what I should do to fix it?

Hi @ying, just wanted to check in and see if you are still having issues with this. Did you ever get help on this in Slack?

Hi @dennis.ab, thanks for checking in on me. Someone on Slack told me that port 9992 is not actually an admin UI port; it is for Prometheus to scrape metrics. ${service}:9992/metrics works. Now it all works together. I am still seeing some minor issues with the dashboard for linkerd-tcp, though, so I will probably ping in Slack again.
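
For anyone who finds this thread later, this is roughly how I verified it (the pod name is from my cluster):

kubectl -n linkerd port-forward l5d-tcp-1psd1 9992:9992
curl http://localhost:9992/metrics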

Sounds good, I am going to go ahead and mark this Discourse ticket as resolved. If you run into any other issues, go ahead and create a new one.