Linkerd + Namerd CNI setup help

Hey Linkerd Team,

I’m running into an issue where Linkerd + Namerd (over the gRPC mesh interface) on a CNI cluster can’t properly resolve my service-to-service communication. I’m probably doing something wrong, so some insight would be helpful.

I have my Linkerd setup working with the new experimental ConfigMap interpreter, but when I move the setup over to Namerd, dtab resolution for my service-to-service (HTTP and gRPC) communication fails. I just moved the working namers from the ConfigMap setup into Namerd, and I’m basically using servicemesh.yaml as the foundation of my setup. I’m also using the HTTP Zipkin build of Linkerd 1.2.1 so I can use Jaeger for distributed tracing.

The cluster is set up with Tectonic 1.7.3, which uses Flannel as the CNI.

Here are the roles that I’ve generated for RBAC:

---
# RBAC configs for linkerd
apiVersion: v1
kind: Namespace
metadata:
  name: service-mesh
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: linkerd
  namespace: service-mesh
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: namerd
  namespace: service-mesh
---
# grant linkerd permissions to enable service discovery
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: linkerd-endpoints-reader
  namespace: default
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources: ["endpoints", "services", "pods"] # pod access is required for the *-legacy.yml examples in this folder
    verbs: ["get", "watch", "list"]
---
# grant namerd permissions to third party resources for dtab storage
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: service-mesh
  name: namerd-dtab-storage
rules:
- apiGroups: ["l5d.io"]
  resources: ["dtabs"]
  verbs: ["update", "get", "list", "create", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: linkerd-sds-role-binding
subjects:
  - kind: ServiceAccount
    name: linkerd
    namespace: service-mesh
roleRef:
  kind: ClusterRole
  name: linkerd-endpoints-reader
  apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: namerd-storage-role-binding
subjects:
  - kind: ServiceAccount
    name: namerd
    namespace: service-mesh
roleRef:
  kind: ClusterRole
  name: namerd-dtab-storage
  apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: namerd-sds-role-binding
subjects:
  - kind: ServiceAccount
    name: namerd
    namespace: service-mesh
roleRef:
  kind: ClusterRole
  name: linkerd-endpoints-reader
  apiGroup: rbac.authorization.k8s.io
---
# grant linkerd permissions to get configmap
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: service-mesh
  name: linkerd-configmap-reader
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["l5d-dtabs", "l5d-config"]
  verbs: ["update", "get", "list", "create", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: linkerd-configmap-role-binding
subjects:
  - kind: ServiceAccount
    name: linkerd
    namespace: service-mesh
roleRef:
  kind: ClusterRole
  name: linkerd-configmap-reader
  apiGroup: rbac.authorization.k8s.io
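
Not strictly part of the setup, but a quick way to sanity-check that these bindings took effect (assuming cluster-admin access to impersonate the service accounts) is kubectl auth can-i:

kubectl auth can-i list endpoints --as=system:serviceaccount:service-mesh:linkerd
kubectl auth can-i watch services --as=system:serviceaccount:service-mesh:namerd
kubectl auth can-i create dtabs.l5d.io --as=system:serviceaccount:service-mesh:namerd

All three should print "yes" if the ClusterRoleBindings above are applied.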

Attached is my working ConfigMap-based setup:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: l5d-config
  namespace: service-mesh
data:
  config.yaml: |-
    admin:
      ip: 0.0.0.0
      port: 9990

    namers:
    - kind: io.l5d.k8s
      experimental: true
      host: localhost
      port: 8001
    - kind: io.l5d.k8s
      prefix: /io.l5d.k8s.http
      transformers:
      - kind: io.l5d.k8s.daemonset
        namespace: service-mesh
        port: http-incoming
        service: l5d
        hostNetwork: true
    - kind: io.l5d.k8s
      prefix: /io.l5d.k8s.grpc
      transformers:
      - kind: io.l5d.k8s.daemonset
        namespace: service-mesh
        port: grpc-incoming
        service: l5d
        hostNetwork: true
    - kind: io.l5d.rewrite
      prefix: /portNsSvcToK8s
      pattern: "/{port}/{ns}/{svc}"
      name: "/k8s/{ns}/{port}/{svc}"

    telemetry:
    - kind: io.l5d.prometheus # Expose Prometheus style metrics on :9990/admin/metrics/prometheus
    - kind: io.l5d.recentRequests
      sampleRate: 0.25 # Tune this sample rate before going to production
    - kind: io.zipkin.http
      host: zipkin.default.svc.cluster.local:9411 # Zipkin Jaeger collector address
      initialSampleRate: 1.0 # Set to a lower sample rate depending on your traffic volume in production

    usage:
      orgId: headspace
      enabled: false

    routers:
    - label: http-outgoing
      protocol: http
      servers:
      - port: 4140
        ip: 0.0.0.0
      interpreter:
        kind: io.l5d.k8s.configMap
        experimental: true
        name: l5d-dtabs
        filename: http-outgoing
        namespace: service-mesh
      service:
        responseClassifier:
          # All 5XX responses are considered to be failures.
          # However, GET, HEAD, OPTIONS, and TRACE requests may be retried automatically.
          kind: io.l5d.http.retryableRead5XX
      client:
        kind: io.l5d.static
        configs:
        # service-to-service tls config here
        - prefix: "/"
          failureAccrual:
            kind: io.l5d.successRateWindowed
            successRate: 0.9
            window: 40
            backoff:
              kind: jittered
              minMs: 5000
              maxMs: 300000
        - prefix: "/$/io.buoyant.rinet/443/{service}"
          tls:
            commonName: "{service}"

    - label: http-incoming
      protocol: http
      servers:
      - port: 4141
        ip: 0.0.0.0
      interpreter:
        kind: io.l5d.k8s.configMap
        experimental: true
        name: l5d-dtabs
        filename: http-incoming
        namespace: service-mesh
        transformers:
        - kind: io.l5d.k8s.localnode
          hostNetwork: true

    - label: grpc-outgoing
      protocol: h2
      experimental: true
      servers:
      - port: 4340
        ip: 0.0.0.0
      interpreter:
        kind: io.l5d.k8s.configMap
        experimental: true
        name: l5d-dtabs
        filename: grpc-outgoing
        namespace: service-mesh
      service:
        responseClassifier:
          kind: io.l5d.h2.grpc.retryableStatusCodes
          retryableStatusCodes:
          - 4 # deadline exceeded
          - 14 # unavailable
      identifier:
        kind: io.l5d.header.path
        segments: 1
      client:
        kind: io.l5d.static
        configs:
        - prefix: "/"
          failureAccrual:
            kind: io.l5d.successRateWindowed
            successRate: 0.9
            window: 40
            backoff:
              kind: jittered
              minMs: 5000
              maxMs: 300000
        - prefix: "/$/inet/{service}"
          tls:
            commonName: "{service}"

    - label: grpc-incoming
      protocol: h2
      experimental: true
      servers:
      - port: 4341
        ip: 0.0.0.0
      identifier:
        kind: io.l5d.header.path
        segments: 1
      interpreter:
        kind: io.l5d.k8s.configMap
        experimental: true
        name: l5d-dtabs
        filename: grpc-incoming
        namespace: service-mesh
        transformers:
        - kind: io.l5d.k8s.localnode
          hostNetwork: true

    - protocol: http
      label: http-ingress
      servers:
        - port: 80
          ip: 0.0.0.0
          clearContext: true
      identifier:
        kind: io.l5d.path
        segments: 1
        consume: false
      interpreter:
        kind: io.l5d.k8s.configMap
        experimental: true
        name: l5d-dtabs
        filename: http-ingress
        namespace: service-mesh
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    app: l5d
  name: l5d
  namespace: service-mesh
spec:
  template:
    metadata:
      labels:
        app: l5d
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      serviceAccountName: linkerd
      volumes:
      - name: l5d-config
        configMap:
          name: "l5d-config"
      - name: certificates
        secret:
          secretName: certificates
      containers:
      - name: l5d
        image: ethanheadspace/linkerd-zipkin-http-test:latest # use headspace http version if using jaeger or just use regular linkerd if using zipkin
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        args:
        - /io.buoyant/linkerd/config/config.yaml
        ports:
        - name: http-outgoing
          containerPort: 4140
          hostPort: 4140
        - name: http-incoming
          containerPort: 4141
          hostPort: 4141
        - name: grpc-outgoing
          containerPort: 4340
          hostPort: 4340
        - name: grpc-incoming
          containerPort: 4341
          hostPort: 4341
        - name: http-ingress
          containerPort: 80
        - name: admin
          containerPort: 9990
        volumeMounts:
        - name: "l5d-config"
          mountPath: "/io.buoyant/linkerd/config"
          readOnly: true
        - name: "certificates"
          mountPath: "/io.buoyant/linkerd/certs"
          readOnly: true
      - name: kubectl
        image: buoyantio/kubectl:v1.6.2
        args:
        - "proxy"
        - "-p"
        - "8001"
---
apiVersion: v1
kind: Service
metadata:
  name: l5d
  namespace: service-mesh
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: '/admin/metrics/prometheus'
    prometheus.io/port: '9990'
spec:
  selector:
    app: l5d
  type: LoadBalancer
  ports:
  - name: http-outgoing
    port: 4140
  - name: http-incoming
    port: 4141
  - name: grpc-outgoing
    port: 4340
  - name: grpc-incoming
    port: 4341
  - name: http-ingress
    port: 80
  - name: admin
    port: 9990

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: l5d-dtabs
  namespace: service-mesh
data:
  http-incoming: |
    /k8s       => /#/io.l5d.k8s;
    /portNsSvc => /#/portNsSvcToK8s;
    /host      => /portNsSvc/http/default;
    /host      => /portNsSvc/http;
    /svc       => /$/io.buoyant.http.domainToPathPfx/host;
    /svc       => /$/io.buoyant.porthostPfx/ph;
    /ph/*      => /host;
  http-outgoing: |
    /ph        => /$/io.buoyant.rinet;
    /svc       => /ph/80;
    /svc       => /$/io.buoyant.porthostPfx/ph;
    /k8s       => /#/io.l5d.k8s.http;
    /portNsSvc => /#/portNsSvcToK8s;
    /host      => /portNsSvc/http/default;
    /host      => /portNsSvc/http;
    /svc       => /$/io.buoyant.http.domainToPathPfx/host;
    /svc       => /$/io.buoyant.porthostPfx/ph;
    /ph/*      => /host;
  grpc-incoming: |
    /srv => /#/io.l5d.k8s/default/grpc;
    /svc => /$/io.buoyant.http.domainToPathPfx/srv;
  grpc-outgoing: |
    /hp  => /$/inet;
    /svc => /$/io.buoyant.hostportPfx/hp;
    /srv => /#/io.l5d.k8s.grpc/default/grpc;
    /svc => /$/io.buoyant.http.domainToPathPfx/srv;
  http-ingress: |
    /srv                => /#/io.l5d.k8s/default/http;
    /host               => /srv;
    /tmp                => /srv;
    /svc                => /host;
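
For reference, here is roughly how I expect the http-outgoing dtab to delegate a plain service-to-service request, using one of my services ("content" in the default namespace, Host: content; there is no port in the name, so the porthostPfx entries don't apply). This is my own reading of the dtab above, not output from the dtab explorer:

/svc/content
=> /$/io.buoyant.http.domainToPathPfx/host/content   (via /svc => .../host)
=> /host/content
=> /portNsSvc/http/default/content                   (via /host => /portNsSvc/http/default)
=> /#/portNsSvcToK8s/http/default/content
=> /k8s/default/http/content                         (rewrite: /{port}/{ns}/{svc} -> /k8s/{ns}/{port}/{svc})
=> /#/io.l5d.k8s.http/default/http/content           (bound by the k8s namer, then sent to the l5d http-incoming port by the daemonset transformer)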

Here is the non-working Linkerd + Namerd setup:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: l5d-config
  namespace: service-mesh
data:
  config.yaml: |-
    admin:
      ip: 0.0.0.0
      port: 9990

    namers:
    - kind: io.l5d.k8s
      experimental: true
      host: localhost
      port: 8001

    telemetry:
    - kind: io.l5d.prometheus # Expose Prometheus style metrics on :9990/admin/metrics/prometheus
    - kind: io.l5d.recentRequests
      sampleRate: 0.25 # Tune this sample rate before going to production
    - kind: io.zipkin.http
      host: zipkin.default.svc.cluster.local:9411 # Zipkin Jaeger collector address
      initialSampleRate: 1.0 # Set to a lower sample rate depending on your traffic volume in production

    usage:
      orgId: headspace
      enabled: false

    routers:
    - label: http-outgoing
      protocol: http
      servers:
      - port: 4140
        ip: 0.0.0.0
      interpreter:
        kind: io.l5d.mesh
        experimental: true
        dst: /#/io.l5d.k8s/service-mesh/4321/namerd
        root: /http-outgoing
      service:
        responseClassifier:
          kind: io.l5d.http.retryableRead5XX
      client:
        kind: io.l5d.static
        configs:
        - prefix: "/"
          failureAccrual:
            kind: io.l5d.successRateWindowed
            successRate: 0.9
            window: 40
            backoff:
              kind: jittered
              minMs: 5000
              maxMs: 300000
        - prefix: "/$/io.buoyant.rinet/443/{service}"
          tls:
            commonName: "{service}"

    - label: http-incoming
      protocol: http
      servers:
      - port: 4141
        ip: 0.0.0.0
      interpreter:
        kind: io.l5d.mesh
        experimental: true
        dst: /#/io.l5d.k8s/service-mesh/4321/namerd
        root: /http-incoming
        transformers:
        - kind: io.l5d.k8s.localnode
          hostNetwork: true

    - label: grpc-outgoing
      protocol: h2
      experimental: true
      servers:
      - port: 4340
        ip: 0.0.0.0
      interpreter:
        kind: io.l5d.mesh
        experimental: true
        dst: /#/io.l5d.k8s/service-mesh/4321/namerd
        root: /grpc-outgoing
      service:
        responseClassifier:
          kind: io.l5d.h2.grpc.retryableStatusCodes
          retryableStatusCodes:
          - 4 # deadline exceeded
          - 14 # unavailable
      identifier:
        kind: io.l5d.header.path
        segments: 1
      client:
        kind: io.l5d.static
        configs:
        - prefix: "/"
          failureAccrual:
            kind: io.l5d.successRateWindowed
            successRate: 0.9
            window: 40
            backoff:
              kind: jittered
              minMs: 5000
              maxMs: 300000
        - prefix: "/$/inet/{service}"
          tls:
            commonName: "{service}"

    - label: grpc-incoming
      protocol: h2
      experimental: true
      servers:
      - port: 4341
        ip: 0.0.0.0
      identifier:
        kind: io.l5d.header.path
        segments: 1
      interpreter:
        kind: io.l5d.mesh
        experimental: true
        dst: /#/io.l5d.k8s/service-mesh/4321/namerd
        root: /grpc-incoming
        transformers:
        - kind: io.l5d.k8s.localnode
          hostNetwork: true

    - protocol: http
      label: http-ingress
      servers:
        - port: 80
          ip: 0.0.0.0
          clearContext: true
      identifier:
        kind: io.l5d.path
        segments: 1
        consume: false
      interpreter:
        kind: io.l5d.mesh
        experimental: true
        dst: /#/io.l5d.k8s/service-mesh/4321/namerd
        root: /http-ingress
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    app: l5d
  name: l5d
  namespace: service-mesh
spec:
  template:
    metadata:
      labels:
        app: l5d
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      serviceAccountName: linkerd
      volumes:
      - name: l5d-config
        configMap:
          name: "l5d-config"
      - name: certificates
        secret:
          secretName: certificates
      containers:
      - name: l5d
        image: ethanheadspace/linkerd-zipkin-http-test:latest # use headspace http version if using jaeger or just use regular linkerd if using zipkin
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        args:
        - /io.buoyant/linkerd/config/config.yaml
        ports:
        - name: http-outgoing
          containerPort: 4140
          hostPort: 4140
        - name: http-incoming
          containerPort: 4141
          hostPort: 4141
        - name: grpc-outgoing
          containerPort: 4340
          hostPort: 4340
        - name: grpc-incoming
          containerPort: 4341
          hostPort: 4341
        - name: http-ingress
          containerPort: 80
        - name: admin
          containerPort: 9990
        volumeMounts:
        - name: "l5d-config"
          mountPath: "/io.buoyant/linkerd/config"
          readOnly: true
        - name: "certificates"
          mountPath: "/io.buoyant/linkerd/certs"
          readOnly: true
      - name: kubectl
        image: buoyantio/kubectl:v1.6.2
        args:
        - "proxy"
        - "-p"
        - "8001"
---
apiVersion: v1
kind: Service
metadata:
  name: l5d
  namespace: service-mesh
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: '/admin/metrics/prometheus'
    prometheus.io/port: '9990'
spec:
  selector:
    app: l5d
  type: LoadBalancer
  ports:
  - name: http-outgoing
    port: 4140
  - name: http-incoming
    port: 4141
  - name: grpc-outgoing
    port: 4340
  - name: grpc-incoming
    port: 4341
  - name: http-ingress
    port: 80
  - name: admin
    port: 9990
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: namerd-config
  namespace: service-mesh
data:
  config.yml: |-
    admin:
      ip: 0.0.0.0
      port: 9990

    namers:
    - kind: io.l5d.k8s
      experimental: true
      host: localhost
      port: 8001
    - kind: io.l5d.k8s
      prefix: /io.l5d.k8s.http
      transformers:
      - kind: io.l5d.k8s.daemonset
        namespace: service-mesh
        port: http-incoming
        service: l5d
        hostNetwork: true
    - kind: io.l5d.k8s
      prefix: /io.l5d.k8s.grpc
      transformers:
      - kind: io.l5d.k8s.daemonset
        namespace: service-mesh
        port: grpc-incoming
        service: l5d
        hostNetwork: true
    - kind: io.l5d.rewrite
      prefix: /portNsSvcToK8s
      pattern: "/{port}/{ns}/{svc}"
      name: "/k8s/{ns}/{port}/{svc}"

    storage:
      kind: io.l5d.k8s
      host: localhost
      port: 8001
      namespace: service-mesh

    interfaces:
    - kind: io.l5d.mesh
      ip: 0.0.0.0
      port: 4321
    - kind: io.l5d.httpController
      ip: 0.0.0.0
      port: 4180
---
kind: ThirdPartyResource
apiVersion: extensions/v1beta1
metadata:
  name: d-tab.l5d.io
  namespace: service-mesh
description: stores dtabs used by namerd
versions:
- name: v1alpha1
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: namerd
  namespace: service-mesh
spec:
  replicas: 1 # use 3 for production
  template:
    metadata:
      labels:
        app: namerd
    spec:
      dnsPolicy: ClusterFirst
      serviceAccountName: namerd
      volumes:
      - name: namerd-config
        configMap:
          name: namerd-config
      containers:
      - name: namerd
        image: buoyantio/namerd:1.2.1
        args:
        - /io.buoyant/namerd/config/config.yml
        ports:
        - name: grpc
          containerPort: 4321
        - name: http
          containerPort: 4180
        - name: admin
          containerPort: 9990
        volumeMounts:
        - name: "namerd-config"
          mountPath: "/io.buoyant/namerd/config"
          readOnly: true
      - name: kubectl
        image: buoyantio/kubectl:v1.6.2
        args:
        - "proxy"
        - "-p"
        - "8001"
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: namerctl-script
  namespace: service-mesh
data:
  createNs.sh: |-
    #!/bin/sh

    set -e

    if namerctl dtab get http-ingress > /dev/null 2>&1; then
      echo "http-ingress namespace already exists"
    else
      echo "
      /srv                => /#/io.l5d.k8s/default/http;
      /host               => /srv;
      /tmp                => /srv;
      /svc                => /host;
      /host/deployments   => /srv/deployer;
      " | namerctl dtab create http-ingress -
    fi

    if namerctl dtab get http-incoming > /dev/null 2>&1; then
      echo "http-incoming namespace already exists"
    else
      echo "
      /k8s       => /#/io.l5d.k8s;
      /portNsSvc => /#/portNsSvcToK8s;
      /host      => /portNsSvc/http/default;
      /host      => /portNsSvc/http;
      /svc       => /$/io.buoyant.http.domainToPathPfx/host;
      " | namerctl dtab create http-incoming -
    fi

    if namerctl dtab get http-outgoing > /dev/null 2>&1; then
      echo "http-outgoing namespace already exists"
    else
      echo "
      /ph        => /$/io.buoyant.rinet;
      /svc       => /ph/80;
      /svc       => /$/io.buoyant.porthostPfx/ph;
      /k8s       => /#/io.l5d.k8s.http;
      /portNsSvc => /#/portNsSvcToK8s;
      /host      => /portNsSvc/http/default;
      /host      => /portNsSvc/http;
      /svc       => /$/io.buoyant.http.domainToPathPfx/host;
      /svc       => /$/io.buoyant.porthostPfx/ph;
      /ph/*      => /host;
      " | namerctl dtab create http-outgoing -
    fi

    if namerctl dtab get grpc-incoming > /dev/null 2>&1; then
      echo "grpc-incoming namespace already exists"
    else
      echo "
      /srv => /#/io.l5d.k8s/default/grpc;
      /svc => /$/io.buoyant.http.domainToPathPfx/srv;
      " | namerctl dtab create grpc-incoming -
    fi

    if namerctl dtab get grpc-outgoing > /dev/null 2>&1; then
      echo "grpc-outgoing namespace already exists"
    else
      echo "
      /hp  => /$/inet;
      /svc => /$/io.buoyant.hostportPfx/hp;
      /srv => /#/io.l5d.k8s.grpc/default/grpc;
      /svc => /$/io.buoyant.http.domainToPathPfx/srv;
      " | namerctl dtab create grpc-outgoing -
    fi
---
kind: Job
apiVersion: batch/v1
metadata:
  name: namerctl
  namespace: service-mesh
spec:
  template:
    metadata:
      name: namerctl
    spec:
      serviceAccountName: namerd
      volumes:
      - name: namerctl-script
        configMap:
          name: namerctl-script
          defaultMode: 0755
      containers:
      - name: namerctl
        image: linkerd/namerctl:0.8.6
        env:
        - name: NAMERCTL_BASE_URL
          value: http://namerd.service-mesh.svc.cluster.local:4180
        command:
        - "/namerctl/createNs.sh"
        volumeMounts:
        - name: "namerctl-script"
          mountPath: "/namerctl"
          readOnly: true
      restartPolicy: OnFailure
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: '/admin/metrics/prometheus'
    prometheus.io/port: '9990'
  name: namerd
  namespace: service-mesh
spec:
  selector:
    app: namerd
  type: LoadBalancer
  ports:
  - name: grpc
    port: 4321
  - name: http
    port: 4180
  - name: admin
    port: 9990
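
To double-check that the namerctl Job actually populated namerd (a rough sanity check, assuming kubectl access to the cluster and the namerd Service above), something like:

kubectl -n service-mesh logs job/namerctl
NAMERCTL_BASE_URL=http://namerd.service-mesh.svc.cluster.local:4180 namerctl dtab get http-outgoing

should show the Job output and echo back the dtab created by the script (repeat the second command for the other namespaces).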

TL;DR
My issue is that, on this CNI cluster, Namerd resolves services correctly in its dtab explorer, while Linkerd does not resolve them in its dtab explorer.

Hi @ethan! Thanks for including the configs, they look reasonable to me.

It vaguely sounds like linkerd is unable to contact namerd, but it’d be helpful to know more about the type of resolution failures you’re seeing. Does the dtab resolver show a tree that is unresolved, or does it show nothing at all? Do either the linkerd or namerd logs have errors in them?

@esbie the dtab resolver shows a tree that is unresolved in Linkerd, while in Namerd the tree is resolved.

Here is an error from the Linkerd logs:

I 1002 22:05:54.763 UTC THREAD18 TraceId:6b01dac774a9f3a2: %/io.l5d.k8s.daemonset/service-mesh/http-incoming/l5d/#/io.l5d.k8s.http/default/http/content: name resolution is negative (local dtab: Dtab())

EDIT: There aren’t any errors showing up in Namerd. The error logs I saw earlier were left over from previous testing.

OK sure. And when you run the linkerd resolver, does namerd increment the request count for the mesh interface? By that I mean: if you go to namerd’s /admin/metrics.json endpoint, there should be some gRPC mesh request counts that increase on each request.

Also what does the tree in the linkerd resolver look like? Is it the same tree as the namerd resolver?
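
Something along these lines should work as a quick check (an untested sketch, assuming the namerd admin port from your config and that the service DNS resolves from wherever you run it):

curl -s http://namerd.service-mesh.svc.cluster.local:9990/admin/metrics.json \
  | grep -o '"[^"]*io.l5d.mesh[^"]*":[0-9.]*'

Run it before and after exercising the linkerd resolver and see whether the mesh request counters move.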

-Thanks

The Namerd request count under interface/io.l5d.mesh/requests does get incremented when I manually curl Linkerd’s http-incoming port (4141), and also when I hit my APIs through the ingress DNS. I should note that my ingress router works fine and I can reach my services through it; it’s the service-to-service communication that doesn’t work.

I get an error when trying to post two images, so here is the first one:

Namerd (dtab explorer screenshot)

Linkerd (dtab explorer screenshot)

Got it, I’m going to open a GitHub issue for further investigation; this looks like a bug.

In the interim, consider using the io.l5d.namerd.http interpreter instead. https://linkerd.io/config/1.2.1/linkerd/index.html#namerd-http
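
Roughly, the http-outgoing interpreter would become something like the following (a sketch from memory, so please double-check it against the config reference above). It points dst at the io.l5d.httpController port (4180) from your namerd-config rather than the mesh port (4321), and uses namespace instead of root:

      interpreter:
        kind: io.l5d.namerd.http
        experimental: true
        dst: /#/io.l5d.k8s/service-mesh/4180/namerd
        namespace: http-outgoing

The same pattern applies to the other routers.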

Filed at https://github.com/linkerd/linkerd/issues/1660
