K8s-linkerd-namerd & access outside of cluster

Hello, I am struggling with the configuration of namerd. I am using Kubernetes (version 1.8.0) and I have linkerd configured as a service mesh & ingress controller. The configuration worked well without namerd, but with it, I cannot access anything outside of the cluster.

Can you please tell me if there is something I missed?

Here is my working configuration without namerd:

linkerd-ingress-servicemesh.yaml:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: l5d-config
  namespace: l5d-system
data:
  config.yaml: |-
    admin:
      ip: 0.0.0.0
      port: 9990
      
    namers:
    - kind: io.l5d.k8s
    - kind: io.l5d.k8s
      prefix: /io.l5d.k8s.http
      transformers:
      - kind: io.l5d.k8s.daemonset
        namespace: l5d-system
        port: http-incoming
        service: l5d
        hostNetwork: true
    - kind: io.l5d.rewrite
      prefix: /portNsSvcToK8s
      pattern: "/{port}/{ns}/{svc}"
      name: "/k8s/{ns}/{port}/{svc}"

    telemetry:
    - kind: io.l5d.prometheus
    - kind: io.l5d.recentRequests
      sampleRate: 0.25
      
    usage:
      orgId: flexiana.com

    routers:
    - label: http-outgoing
      protocol: http
      servers:
      - port: 4140
        ip: 0.0.0.0
      dtab: |
        /ph  => /$/io.buoyant.rinet ;
        /svc => /ph/80 ;
        /svc => /$/io.buoyant.porthostPfx/ph ;
        /k8s => /#/io.l5d.k8s.http ;
        /portNsSvc => /#/portNsSvcToK8s ;
        /host => /portNsSvc/http/k2-system ;
        /host => /portNsSvc/http ;
        /svc => /$/io.buoyant.http.domainToPathPfx/host ;

    - label: http-incoming
      protocol: http
      servers:
      - port: 4141
        ip: 0.0.0.0
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.localnode
          hostNetwork: true
      dtab: |
        /k8s => /#/io.l5d.k8s ;
        /portNsSvc => /#/portNsSvcToK8s ;
        /host => /portNsSvc/http/k2-system ;
        /host => /portNsSvc/http ;
        /svc => /$/io.buoyant.http.domainToPathPfx/host ;

    - label: http-ingress
      protocol: http
      servers:
        - port: 80
          ip: 0.0.0.0
          clearContext: true
      identifier:
        kind: io.l5d.ingress
      dtab: /svc => /#/io.l5d.k8s

---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    app: l5d
  name: l5d
  namespace: l5d-system
spec:
  template:
    metadata:
      labels:
        app: l5d
    spec:
      hostNetwork: true
      volumes:
      - name: l5d-config
        configMap:
          name: "l5d-config"
      containers:
      - name: l5d
        image: buoyantio/linkerd:1.3.1
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        args:
        - /io.buoyant/linkerd/config/config.yaml
        ports:
        - name: http-outgoing
          containerPort: 4140
          hostPort: 4140
        - name: http-incoming
          containerPort: 4141
        - name: http-ingress
          containerPort: 80
        volumeMounts:
        - name: "l5d-config"
          mountPath: "/io.buoyant/linkerd/config"
          readOnly: true

      - name: kubectl
        image: buoyantio/kubectl:v1.6.2
        args:
        - "proxy"
        - "-p"
        - "8001"
---
apiVersion: v1
kind: Service
metadata:
  name: l5d
  namespace: l5d-system
spec:
  selector:
    app: l5d
  type: LoadBalancer
  ports:
  - name: http-outgoing
    port: 4140
  - name: http-incoming
    port: 4141
  - name: http-ingress
    port: 80

And here it is with namerd, which I cannot make work (I connected linkerd to namerd using a NodePort; see this issue for more details: https://github.com/linkerd/linkerd-examples/issues/90):

namerd.yaml:

---
kind: CustomResourceDefinition
apiVersion: apiextensions.k8s.io/v1beta1
metadata:
  name: dtabs.l5d.io
  namespace: l5d-system
spec:
  scope: Namespaced
  group: l5d.io
  version: v1alpha1
  names:
    kind: DTab
    plural: dtabs
    singular: dtab
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: namerd-config
  namespace: l5d-system
data:
  config.yml: |-
    admin:
      ip: 0.0.0.0
      port: 9991

    namers:
    - kind: io.l5d.k8s
    - kind: io.l5d.rewrite
      prefix: /portNsSvcToK8s
      pattern: "/{port}/{ns}/{svc}"
      name: "/k8s/{ns}/{port}/{svc}"

    storage:
      kind: io.l5d.k8s
      host: localhost
      port: 8001
      namespace: l5d-system

    interfaces:
    - kind: io.l5d.thriftNameInterpreter
      ip: 0.0.0.0
      port: 4100
    - kind: io.l5d.httpController
      ip: 0.0.0.0
      port: 4180
---
kind: ReplicationController
apiVersion: v1
metadata:
  name: namerd
  namespace: l5d-system
spec:
  replicas: 1
  selector:
    app: namerd
  template:
    metadata:
      labels:
        app: namerd
    spec:
      dnsPolicy: ClusterFirst
      volumes:
      - name: namerd-config
        configMap:
          name: namerd-config
      containers:
      - name: namerd
        image: buoyantio/namerd:1.3.1
        args:
        - /io.buoyant/namerd/config/config.yml
        ports:
        - name: thrift
          containerPort: 4100
        - name: http
          containerPort: 4180
        - name: admin
          containerPort: 9991
        volumeMounts:
        - name: "namerd-config"
          mountPath: "/io.buoyant/namerd/config"
          readOnly: true
      - name: kubectl
        image: buoyantio/kubectl:v1.6.2
        args:
        - "proxy"
        - "-p"
        - "8001"
---
apiVersion: v1
kind: Service
metadata:
  name: namerd
  namespace: l5d-system
spec:
  selector:
    app: namerd
  type: LoadBalancer
  ports:
  - name: thrift
    port: 4100
    nodePort: 30150
  - name: http
    port: 4180
  - name: admin
    port: 9991
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: namerctl-script
  namespace: l5d-system
data:
  createNs.sh: |-
    #!/bin/sh

    set -e

    if namerctl dtab get http-external > /dev/null 2>&1; then
      echo "http-external namespace already exists"
    else
      echo "
      /svc => /#/io.l5d.k8s ;
      " | namerctl dtab create http-external -
    fi

    if namerctl dtab get http-internal > /dev/null 2>&1; then
      echo "http-internal namespace already exists"
    else
      echo "
      /ph  => /$/io.buoyant.rinet ;
      /svc => /ph/80 ;
      /svc => /$/io.buoyant.porthostPfx/ph ;
      /k8s => /#/io.l5d.k8s ;
      /portNsSvc => /#/portNsSvcToK8s ;
      /host => /portNsSvc/http/k2-system ;
      /host => /portNsSvc/http ;
      /svc => /$/io.buoyant.http.domainToPathPfx/host ;
      " | namerctl dtab create http-internal -
    fi
---
kind: Job
apiVersion: batch/v1
metadata:
  name: namerctl
  namespace: l5d-system
spec:
  template:
    metadata:
      name: namerctl
    spec:
      volumes:
      - name: namerctl-script
        configMap:
          name: namerctl-script
          defaultMode: 0755
      containers:
      - name: namerctl
        image: linkerd/namerctl:0.8.6
        env:
        - name: NAMERCTL_BASE_URL
          value: http://namerd.l5d-system.svc.cluster.local:4180
        command:
        - "/namerctl/createNs.sh"
        volumeMounts:
        - name: "namerctl-script"
          mountPath: "/namerctl"
          readOnly: true
      restartPolicy: OnFailure

linkerd.yaml:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: l5d-config
  namespace: l5d-system
data:
  config.yaml: |-
    admin:
      ip: 0.0.0.0
      port: 9990

    telemetry:
    - kind: io.l5d.prometheus
    - kind: io.l5d.recentRequests
      sampleRate: 0.25
      
    usage:
      orgId: flexiana.com

    routers:
    - label: http-outgoing
      protocol: http
      servers:
      - port: 4140
        ip: 0.0.0.0
      interpreter:
        kind: io.l5d.namerd
        dst: /$/inet/localhost/30150
        namespace: http-internal
        transformers:
        - kind: io.l5d.k8s.daemonset
          namespace: l5d-system
          port: http-incoming
          service: l5d
          hostNetwork: true

    - label: http-incoming
      protocol: http
      servers:
      - port: 4141
        ip: 0.0.0.0
      interpreter:
        kind: io.l5d.namerd
        dst: /$/inet/localhost/30150
        namespace: http-internal
        transformers:
        - kind: io.l5d.k8s.localnode
          hostNetwork: true

    - label: http-ingress
      protocol: http
      servers:
      - port: 80
        ip: 0.0.0.0
        clearContext: true
      identifier:
        kind: io.l5d.ingress
      interpreter:
        kind: io.l5d.namerd
        dst: /$/inet/localhost/30150
        namespace: http-external
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    app: l5d
  name: l5d
  namespace: l5d-system
spec:
  template:
    metadata:
      labels:
        app: l5d
    spec:
      hostNetwork: true
      volumes:
      - name: l5d-config
        configMap:
          name: "l5d-config"
      containers:
      - name: l5d
        image: buoyantio/linkerd:1.3.1
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        args:
        - /io.buoyant/linkerd/config/config.yaml
        ports:
        - name: http-outgoing
          containerPort: 4140
          hostPort: 4140
        - name: http-incoming
          containerPort: 4141
        - name: http-ingress
          containerPort: 80
        volumeMounts:
        - name: "l5d-config"
          mountPath: "/io.buoyant/linkerd/config"
          readOnly: true

      - name: kubectl
        image: buoyantio/kubectl:v1.6.2
        args:
        - "proxy"
        - "-p"
        - "8001"
---
apiVersion: v1
kind: Service
metadata:
  name: l5d
  namespace: l5d-system
spec:
  selector:
    app: l5d
  type: LoadBalancer
  ports:
  - name: http-outgoing
    port: 4140
  - name: http-incoming
    port: 4141
  - name: http-ingress
    port: 80

Now I try to access, for example, google.com. The linkerd admin UI seems to bind the path correctly.

But from inside a pod, I get this response:

kubectl exec -ti hello-dkvm5 -- curl google.com
No hosts are available for /svc/google.com, Dtab.base=[], Dtab.local=[]. Remote Info: Not Available

I actually need to access a service outside the cluster. I also tried to access it by IP:port (which works in the configuration without namerd). I also tried adding these rules to the dtab, but no success:

/hp  => /$/inet ;
/svc => /$/io.buoyant.hostportPfx/hp ;
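
My understanding of how those rules should resolve an IP:port name (just illustrating what I expect to happen, not what I observe):

/svc/192.168.246.61:5050
  -> /$/io.buoyant.hostportPfx/hp/192.168.246.61:5050
  -> /hp/192.168.246.61/5050
  -> /$/inet/192.168.246.61/5050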

I also tried it through a service without selectors (but that doesn’t work in the configuration without namerd either):

---
kind: Service
apiVersion: v1
metadata:
  name: my-service
  namespace: k2-system
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5050
---
kind: Endpoints
apiVersion: v1
metadata:
  name: my-service
  namespace: k2-system
subsets:
  - addresses:
      - ip: 192.168.246.61
    ports:
      - port: 5050

Thank you for any help.

Zdenek

Hi @zsojma, thanks for all the detail, it’s very helpful.

This sounds like an issue with namerd’s storage/io.l5d.k8s plugin not being compatible with Kubernetes 1.8. We are actively working to fix this; follow along at https://github.com/linkerd/linkerd/issues/1661. Someone also suggested a workaround at https://github.com/linkerd/linkerd/issues/1661#issuecomment-339585782.

If you’d like to test this quickly, you can swap the io.l5d.k8s storage plugin for io.l5d.inMemory; more info at: https://linkerd.io/config/1.3.1/namerd/index.html#in-memory.
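
For reference, a minimal sketch of what that storage section could look like (the dtab content here is just a placeholder to show the shape, not a complete config):

storage:
  kind: io.l5d.inMemory
  namespaces:
    http-external: |
      /svc => /#/io.l5d.k8s ;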

Hi @siggy, thanks for the reply. I tried the io.l5d.inMemory storage to test it and unfortunately it doesn’t work either. Do you have any other suggestions, please? Thank you.

Here is my new namerd configuration:

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: namerd-config
  namespace: l5d-system
data:
  config.yml: |-
    admin:
      ip: 0.0.0.0
      port: 9991

    namers:
    - kind: io.l5d.k8s
    - kind: io.l5d.rewrite
      prefix: /portNsSvcToK8s
      pattern: "/{port}/{ns}/{svc}"
      name: "/k8s/{ns}/{port}/{svc}"

    storage:
      kind: io.l5d.inMemory
      namespaces:
        http-external: |
          /svc => /#/io.l5d.k8s ;
        http-internal: |
          /ph  => /$/io.buoyant.rinet ;
          /svc => /ph/80 ;
          /svc => /$/io.buoyant.porthostPfx/ph ;
          /k8s => /#/io.l5d.k8s ;
          /portNsSvc => /#/portNsSvcToK8s ;
          /host => /portNsSvc/http/k2-system ;
          /host => /portNsSvc/http ;
          /svc => /$/io.buoyant.http.domainToPathPfx/host ;

    interfaces:
    - kind: io.l5d.thriftNameInterpreter
      ip: 0.0.0.0
      port: 4100
    - kind: io.l5d.httpController
      ip: 0.0.0.0
      port: 4180
---
kind: ReplicationController
apiVersion: v1
metadata:
  name: namerd
  namespace: l5d-system
spec:
  replicas: 1
  selector:
    app: namerd
  template:
    metadata:
      labels:
        app: namerd
    spec:
      dnsPolicy: ClusterFirst
      volumes:
      - name: namerd-config
        configMap:
          name: namerd-config
      containers:
      - name: namerd
        image: buoyantio/namerd:1.3.1
        args:
        - /io.buoyant/namerd/config/config.yml
        ports:
        - name: thrift
          containerPort: 4100
        - name: http
          containerPort: 4180
        - name: admin
          containerPort: 9991
        volumeMounts:
        - name: "namerd-config"
          mountPath: "/io.buoyant/namerd/config"
          readOnly: true
      - name: kubectl
        image: buoyantio/kubectl:v1.6.2
        args:
        - "proxy"
        - "-p"
        - "8001"
---
apiVersion: v1
kind: Service
metadata:
  name: namerd
  namespace: l5d-system
spec:
  selector:
    app: namerd
  type: LoadBalancer
  ports:
  - name: thrift
    port: 4100
    nodePort: 30150
  - name: http
    port: 4180
  - name: admin
    port: 9991

The service mesh is working correctly (I used the hello-world.yaml example in the l5d-system namespace):

kubectl exec -ti hello-c8mfk -- curl -v -H "l5d-dtab: /svc/world => /svc/world-v1.l5d-system" hello.l5d-system
* Rebuilt URL to: hello.l5d-system/
*   Trying 192.168.246.216...
* TCP_NODELAY set
* Connected to (nil) (192.168.246.216) port 4140 (#0)
> GET http://hello.l5d-system/ HTTP/1.1
> Host: hello.l5d-system
> User-Agent: curl/7.52.1
> Accept: */*
> Proxy-Connection: Keep-Alive
> l5d-dtab: /svc/world => /svc/world-v1.l5d-system
>
< HTTP/1.1 200 OK
< Date: Sat, 28 Oct 2017 09:37:59 GMT
< Content-Length: 39
< Content-Type: text/plain; charset=utf-8
< l5d-success-class: 1.0
< Via: 1.1 linkerd, 1.1 linkerd
<
* Curl_http_done: called premature == 0
* Connection #0 to host (nil) left intact
Hello (10.38.0.35) world (10.42.0.50)!!

But the issue persists when accessing a service/address outside the cluster:

kubectl exec -ti hello-c8mfk -- curl google.com
No hosts are available for /svc/google.com, Dtab.base=[], Dtab.local=[]. Remote Info: Not Available

When I use the linkerd UI to resolve /svc/192.168.246.61:5050, I get a timeout.

Thanks!

Also, I switched back to the original linkerd configuration (the one without namerd, see my original post) and tried to resolve some addresses outside of the cluster. The linkerd UI binds the paths correctly, and what I realized is that it displays the bound path in green.

That is not the case for the configuration with namerd, as I wrote in my original post: everything is gray even when the address is resolved correctly via DNS.

Hi @siggy, the issue https://github.com/linkerd/linkerd/issues/1661 seems to be nearly closed. Can you please help? Or should I open a new issue? Thank you.

Hi @zsojma. We just noticed another issue in this setup. In the linkerd+namerd configuration, the io.l5d.k8s.daemonset transformer is applied to linkerd’s interpreter, so every request is routed through it, which means routes to the outside will not work. Note that in the original linkerd configuration, io.l5d.k8s.daemonset is applied only to the namer behind the /io.l5d.k8s.http prefix. When that route fails in the original linkerd config, the dtab instructs linkerd to fall back to /svc => /$/io.buoyant.rinet, which is what you need to route to the outside.

To fix this in the linkerd+namerd configuration, move io.l5d.k8s.daemonset into the namerd config and attach it to a new namer with a unique prefix such as /io.l5d.k8s.http or /io.l5d.k8s.ds, which you can then reference in your dtab:

- kind: io.l5d.k8s
  prefix: /io.l5d.k8s.http
  transformers:
  - kind: io.l5d.k8s.daemonset
    namespace: l5d-system
    port: http-incoming
    service: l5d
    hostNetwork: true
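
With that namer in place, the dtab in namerd would reference the new prefix instead of the plain io.l5d.k8s namer, something like:

/k8s => /#/io.l5d.k8s.http ;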

Hello @siggy, thank you for the response. I had it configured this way during my first attempt with namerd, but I abandoned it because it didn’t work. With this configuration I can access addresses outside the cluster, but I cannot access services within the cluster.

See this curl result (I used the same example command that worked in my previous comment):

$ kubectl exec -ti hello-kxz4w -- curl -v -H "l5d-dtab: /svc/world => /svc/world-v1.l5d-system" hello.l5d-system
* Rebuilt URL to: hello.l5d-system/
*   Trying 192.168.246.216...
* TCP_NODELAY set
* Connected to (nil) (192.168.246.216) port 4140 (#0)
> GET http://hello.l5d-system/ HTTP/1.1
> Host: hello.l5d-system
> User-Agent: curl/7.52.1
> Accept: */*
> Proxy-Connection: Keep-Alive
> l5d-dtab: /svc/world => /svc/world-v1.l5d-system
>
< HTTP/1.1 431 Request Header Fields Too Large
< Content-Length: 0
< l5d-success-class: 1.0
< Via: 1.0 linkerd, 1.1 linkerd, 1.1 linkerd, 1.1 linkerd, 1.1 linkerd, ... (the "1.1 linkerd" hop repeated several hundred more times)
<
* Curl_http_done: called premature == 0
* Connection #0 to host (nil) left intact

It looks like some infinite routing loop occurs; see the Via header.

In the linkerd UI, the same routing looks good.

Here is my current configuration:

namerd.yaml:

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: namerd-config
  namespace: l5d-system
data:
  config.yml: |-
    admin:
      ip: 0.0.0.0
      port: 9991

    namers:
    - kind: io.l5d.k8s
    - kind: io.l5d.k8s
      prefix: /io.l5d.k8s.http
      transformers:
      - kind: io.l5d.k8s.daemonset
        namespace: l5d-system
        port: http-incoming
        service: l5d
        hostNetwork: true
    - kind: io.l5d.rewrite
      prefix: /portNsSvcToK8s
      pattern: "/{port}/{ns}/{svc}"
      name: "/k8s/{ns}/{port}/{svc}"

    storage:
      kind: io.l5d.inMemory
      namespaces:
        http-external: |
          /svc => /#/io.l5d.k8s ;
        http-internal: |
          /ph  => /$/io.buoyant.rinet ;
          /svc => /ph/80 ;
          /svc => /$/io.buoyant.porthostPfx/ph ;
          /k8s => /#/io.l5d.k8s.http ;
          /portNsSvc => /#/portNsSvcToK8s ;
          /host => /portNsSvc/http/k2-system ;
          /host => /portNsSvc/http ;
          /svc => /$/io.buoyant.http.domainToPathPfx/host ;

    interfaces:
    - kind: io.l5d.thriftNameInterpreter
      ip: 0.0.0.0
      port: 4100
    - kind: io.l5d.httpController
      ip: 0.0.0.0
      port: 4180
---
kind: ReplicationController
apiVersion: v1
metadata:
  name: namerd
  namespace: l5d-system
spec:
  replicas: 1
  selector:
    app: namerd
  template:
    metadata:
      labels:
        app: namerd
    spec:
      dnsPolicy: ClusterFirst
      volumes:
      - name: namerd-config
        configMap:
          name: namerd-config
      containers:
      - name: namerd
        image: buoyantio/namerd:1.3.1
        args:
        - /io.buoyant/namerd/config/config.yml
        ports:
        - name: thrift
          containerPort: 4100
        - name: http
          containerPort: 4180
        - name: admin
          containerPort: 9991
        volumeMounts:
        - name: "namerd-config"
          mountPath: "/io.buoyant/namerd/config"
          readOnly: true
      - name: kubectl
        image: buoyantio/kubectl:v1.6.2
        args:
        - "proxy"
        - "-p"
        - "8001"
---
apiVersion: v1
kind: Service
metadata:
  name: namerd
  namespace: l5d-system
spec:
  selector:
    app: namerd
  type: LoadBalancer
  ports:
  - name: thrift
    port: 4100
    nodePort: 30150
  - name: http
    port: 4180
    nodePort: 30151
  - name: admin
    port: 9991

linkerd.yaml:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: l5d-config
  namespace: l5d-system
data:
  config.yaml: |-
    admin:
      ip: 0.0.0.0
      port: 9990

    telemetry:
    - kind: io.l5d.prometheus
    - kind: io.l5d.recentRequests
      sampleRate: 0.25
      
    usage:
      orgId: flexiana.com

    routers:
    - label: http-outgoing
      protocol: http
      servers:
      - port: 4140
        ip: 0.0.0.0
      interpreter:
        kind: io.l5d.namerd
        dst: /$/inet/localhost/30150
        namespace: http-internal

    - label: http-incoming
      protocol: http
      servers:
      - port: 4141
        ip: 0.0.0.0
      interpreter:
        kind: io.l5d.namerd
        dst: /$/inet/localhost/30150
        namespace: http-internal
        transformers:
        - kind: io.l5d.k8s.localnode
          hostNetwork: true

    - label: http-ingress
      protocol: http
      servers:
      - port: 80
        ip: 0.0.0.0
        clearContext: true
      identifier:
        kind: io.l5d.ingress
      interpreter:
        kind: io.l5d.namerd
        dst: /$/inet/localhost/30150
        namespace: http-external
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    app: l5d
  name: l5d
  namespace: l5d-system
spec:
  template:
    metadata:
      labels:
        app: l5d
    spec:
      hostNetwork: true
      volumes:
      - name: l5d-config
        configMap:
          name: "l5d-config"
      containers:
      - name: l5d
        image: buoyantio/linkerd:1.3.1
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        args:
        - /io.buoyant/linkerd/config/config.yaml
        ports:
        - name: http-outgoing
          containerPort: 4140
          hostPort: 4140
        - name: http-incoming
          containerPort: 4141
        - name: http-ingress
          containerPort: 80
        volumeMounts:
        - name: "l5d-config"
          mountPath: "/io.buoyant/linkerd/config"
          readOnly: true

      - name: kubectl
        image: buoyantio/kubectl:v1.6.2
        args:
        - "proxy"
        - "-p"
        - "8001"
---
apiVersion: v1
kind: Service
metadata:
  name: l5d
  namespace: l5d-system
spec:
  selector:
    app: l5d
  type: LoadBalancer
  ports:
  - name: http-outgoing
    port: 4140
  - name: http-incoming
    port: 4141
  - name: http-ingress
    port: 80

Thank you for any other suggestions!

@zsojma That last pair of http-external / http-internal dtabs is quite different from the original, working dtabs you posted in your initial linkerd-ingress-servicemesh.yaml config. I suggest configuring namerd with those original dtabs, and also moving the io.l5d.k8s.http namer into namerd as well:

http-outgoing

        /ph  => /$/io.buoyant.rinet ;
        /svc => /ph/80 ;
        /svc => /$/io.buoyant.porthostPfx/ph ;
        /k8s => /#/io.l5d.k8s.http ;
        /portNsSvc => /#/portNsSvcToK8s ;
        /host => /portNsSvc/http/k2-system ;
        /host => /portNsSvc/http ;
        /svc => /$/io.buoyant.http.domainToPathPfx/host ;

http-incoming

        /k8s => /#/io.l5d.k8s ;
        /portNsSvc => /#/portNsSvcToK8s ;
        /host => /portNsSvc/http/k2-system ;
        /host => /portNsSvc/http ;
        /svc => /$/io.buoyant.http.domainToPathPfx/host ;
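
Putting those into the io.l5d.inMemory storage you’re already using would look roughly like this (just a sketch; the namespace names are up to you, and each linkerd router’s interpreter namespace would need to point at the matching one rather than both using http-internal):

storage:
  kind: io.l5d.inMemory
  namespaces:
    http-outgoing: |
      /ph  => /$/io.buoyant.rinet ;
      /svc => /ph/80 ;
      /svc => /$/io.buoyant.porthostPfx/ph ;
      /k8s => /#/io.l5d.k8s.http ;
      /portNsSvc => /#/portNsSvcToK8s ;
      /host => /portNsSvc/http/k2-system ;
      /host => /portNsSvc/http ;
      /svc => /$/io.buoyant.http.domainToPathPfx/host ;
    http-incoming: |
      /k8s => /#/io.l5d.k8s ;
      /portNsSvc => /#/portNsSvcToK8s ;
      /host => /portNsSvc/http/k2-system ;
      /host => /portNsSvc/http ;
      /svc => /$/io.buoyant.http.domainToPathPfx/host ;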

Thank you @siggy. I used the same dtab for both the http-outgoing and http-incoming routers, and that was the mistake. I didn’t notice the difference in the /k8s rule:

/k8s => /#/io.l5d.k8s.http ;
vs.
/k8s => /#/io.l5d.k8s ;

It is working now. Thanks!