Routing issue in namerd (502 bad gateway)

This is strange. I took the configs from linkerd-examples (linkerd-examples/k8s-daemonset/k8s/), i.e. the linkerd-namerd and namerd yml files, and updated them with the rewrite namer and hostNetwork: true on linkerd-namerd only. The service name isn’t resolving. :sweat:

config:
linkerd-namerd.yml (2.9 KB)
namerd.yml (3.0 KB)

The changes I made to the config:

changes in namerd (left)

changes in linkerd-namerd (left)


namerctl:

namerctl dtab update outgoing - <<EOF
/ph  => /$/io.buoyant.rinet ;                     # /ph/80/google.com -> /$/io.buoyant.rinet/80/google.com
/svc => /ph/80 ;                                  # /svc/google.com -> /ph/80/google.com
/svc => /$/io.buoyant.porthostPfx/ph ;            # /svc/google.com:80 -> /ph/80/google.com
/k8s => /#/io.l5d.k8s.http ;                      # /k8s/default/http/foo -> /#/io.l5d.k8s.http/default/http/foo
/portNsSvc => /#/portNsSvcToK8s ;                 # /portNsSvc/http/default/foo -> /k8s/default/http/foo
/host => /portNsSvc/http/default ;                # /host/foo -> /portNsSvc/http/default/foo
/host => /portNsSvc/http ;                        # /host/default/foo -> /portNsSvc/http/default/foo
/svc => /$/io.buoyant.http.domainToPathPfx/host ; # /svc/foo.default -> /host/default/foo
EOF
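
For my own sanity, tracing the outgoing rules above by hand (just following the comments on each rule), I’d expect a request for service1 to delegate roughly like this:

# /svc/service1
#   -> /host/service1                              (domainToPathPfx, no dots to reverse)
#   -> /portNsSvc/http/default/service1
#   -> /k8s/default/http/service1                  (portNsSvcToK8s)
#   -> /#/io.l5d.k8s.http/default/http/service1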


namerctl dtab create incoming - <<EOF
/k8s => /#/io.l5d.k8s ;                           # /k8s/default/http/foo -> /#/io.l5d.k8s/default/http/foo
/portNsSvc => /#/portNsSvcToK8s ;                 # /portNsSvc/http/default/foo -> /k8s/default/http/foo
/host => /portNsSvc/http/default ;                # /host/foo -> /portNsSvc/http/default/foo
/host => /portNsSvc/http ;                        # /host/default/foo -> /portNsSvc/http/default/foo
/svc => /$/io.buoyant.http.domainToPathPfx/host ; # /svc/foo.default -> /host/default/foo
EOF
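
To double-check that both dtabs actually landed in namerd, I believe they can be read back with namerctl (assuming it’s pointed at the same namerd base URL used for the updates above):

namerctl dtab get outgoing
namerctl dtab get incoming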

service1:

service1.yml (153 Bytes)
helloworld01.yml (627 Bytes)

curl:

http_proxy=$(kubectl get svc l5d -o jsonpath="{.status.loadBalancer.ingress[0].*}"):4140 curl -v http://service1
* Rebuilt URL to: http://service1/
*   Trying 52.xx.xx.xx...
* TCP_NODELAY set
* Connected to xxxxxxx.elb.amazonaws.com (52.xx.xx.xx) port 4140 (#0)
> GET http://service1/ HTTP/1.1
> Host: service1
> User-Agent: curl/7.55.0
> Accept: */*
> Proxy-Connection: Keep-Alive
> 
< HTTP/1.1 502 Bad Gateway
< l5d-err: No+hosts+are+available+for+%2Fsvc%2Fservice1%2C+Dtab.base%3D%5B%5D%2C+Dtab.local%3D%5B%5D.+Remote+Info%3A+Not+Available
< Content-Type: text/plain
< Content-Length: 97

Hi @zshaik! Some questions: can you hit service1 directly (without linkerd)? I see you’re using /#/io.l5d.k8s.http in your dtab. If you’re using linkerd-examples/k8s-daemonset/k8s/linkerd-namerd.yml, there’s no namer configured with that prefix. You’d need to add the prefix (or use /#/io.l5d.k8s). See https://github.com/linkerd/linkerd-examples/blob/master/k8s-daemonset/k8s/servicemesh.yml for a config that does this.

@marzipan Hi Risha, yes, I am able to hit service1 directly. Sorry for not mentioning the prefix; I had /io.l5d.k8s.http configured:

- kind: io.l5d.k8s
  prefix: /io.l5d.k8s.http
  transformers:
  - kind: io.l5d.k8s.daemonset
    namespace: default
    port: http-incoming
    service: l5d

updated my question. Thanks

Hey @zshaik! As far as I can tell your dtab looks fine. Some observations:

  • I see your daemonset transformer sends things to a port called http-incoming, but your linkerd-namerd.yml doesn’t define such a port (see the port sketch after this list).
  • If you’re using a CNI setup, you’ll also need to add hostNetwork: true to all of the daemonset and localnode transformers. (You’ll also need the NODE_NAME env var set; see linkerd-cni.yml for an example.)
  • Your dtabs need to include something like /svc/world => /svc/world-v1; since the world service is called world-v1 and not world in that deployment.
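
For the first point, here’s a minimal sketch of the port definition (an assumption: your incoming router listens on 4141, as in the linkerd-examples configs); the name just has to match whatever the transformer’s port field says:

        ports:
        - name: http-incoming      # must match the daemonset transformer's port value
          containerPort: 4141

Alternatively, change the transformer’s port to the name your linkerd container spec already uses.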

Worked! :dolphin: Thanks for those suggestions, Risha! @marzipan I retested after making the corrections (updated configs in the question). Summarizing all the changes to linkerd-namerd.yml below (namerd.yml is the same as the original in the GitHub examples):

- Added hostNetwork: true & dnsPolicy: ClusterFirstWithHostNet to the daemonset

  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
  containers:
  - name: l5d
    image: buoyantio/linkerd:1.2.0
    env:
    - name: NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName

- Added hostNetwork: true to all transformers in the interpreters (incoming & outgoing)


routers:
- protocol: http
  label: outgoing
  interpreter:
    kind: io.l5d.namerd
    dst: /$/inet/namerd.default.svc.cluster.local/4100
    namespace: internal
    transformers:
    - kind: io.l5d.k8s.daemonset
      namespace: default
      port: incoming
      service: l5d
      hostNetwork: true

- protocol: http
  label: incoming
  interpreter:
    kind: io.l5d.namerd
    dst: /$/inet/namerd.default.svc.cluster.local/4100
    namespace: internal
    transformers:
    - kind: io.l5d.k8s.localnode
      hostNetwork: true

great, glad it worked!

Hi @marzipan, unfortunately I am seeing an error, but only when I try to route to external endpoints (like google.com) :exploding_head:
[REOPENED THE ISSUE]

curl
http_proxy=$(kubectl get svc l5d -o jsonpath="{.status.loadBalancer.ingress[0].*}"):4140 curl -v http://www.google.com:443

< HTTP/1.1 502 Bad Gateway
< l5d-err: No+hosts+are+available+for+%2Fsvc%2Fwww.google.com%3A443%2C+Dtab.base%3D%5B%5D%2C+Dtab.local%3D%5B%5D.+Remote+Info%3A+Not+Available
< Content-Type: text/plain
< Content-Length: 107
< 
* Connection #0 to host a0bxxe39dxx4911e784cb02ba0eb7ae5-794798232.us-west-2.elb.amazonaws.com left intact
No hosts are available for /svc/www.google.com:443, Dtab.base=[], Dtab.local=[]. Remote Info: Not Available

log
%/io.l5d.k8s.localnode/ip-10-2-105-209.us-west-2.compute.internal/$/io.buoyant.rinet/443/google.com: name resolution is negative

%/io.l5d.k8s.daemonset/default/http-incoming/l5d/$/io.buoyant.rinet/443/google.com: name resolution is negative

In order to also route to external services I used the egress config, i.e. I moved the daemonset transformer (DT) from the interpreter to the namer; the namerd snippet and the resulting interpreter are below.

namerd

- kind: io.l5d.k8s
  prefix: /io.l5d.k8s.http
  transformers:
  - kind: io.l5d.k8s.daemonset
    namespace: default
    port: incoming
    service: l5d
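
The outgoing interpreter on the linkerd side then loses its transformer block; roughly (a sketch, assuming the rest of the router config is unchanged from what I posted above):

routers:
- protocol: http
  label: outgoing
  interpreter:
    kind: io.l5d.namerd
    dst: /$/inet/namerd.default.svc.cluster.local/4100
    namespace: internal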

With this config I am able to route to external endpoints like google.com, but the internal services are not resolving.

dtabs

namerctl dtab update outgoing - <<EOF
/ph  => /$/io.buoyant.rinet ;                     
/svc => /ph/80 ;                                  
/svc => /$/io.buoyant.porthostPfx/ph ;            
/k8s => /#/io.l5d.k8s.http ;                      
/portNsSvc => /#/portNsSvcToK8s ;                 
/host => /portNsSvc/http/default ;                
/host => /portNsSvc/http ;                        
/svc => /$/io.buoyant.http.domainToPathPfx/host ; 
EOF

namerctl dtab update incoming - <<EOF
/k8s => /#/io.l5d.k8s ;
/portNsSvc => /#/portNsSvcToK8s ;
/host => /portNsSvc/http/default ;
/host => /portNsSvc/http ;
/svc => /$/io.buoyant.http.domainToPathPfx/host ; 
EOF

This config mostly looks good; let’s confirm a couple of things:

  1. In your linkerd kubernetes config, the port names must match the string specified in the namerd config, specifically this incoming string:
- kind: io.l5d.k8s
  prefix: /io.l5d.k8s.http
  transformers:
  - kind: io.l5d.k8s.daemonset
    namespace: default
    port: incoming
    service: l5d

should match the port name in the linkerd kubernetes config:

        ports:
        - name: outgoing
          containerPort: 4140
          hostPort: 4140
        - name: incoming
          containerPort: 4141
  2. Just to confirm we’re looking at the right configs, can you provide:
  • complete linkerd and namerd configs
  • complete linkerd and namerd kubernetes configs

@siggy Yes, the port named incoming from the namerd config exists in the linkerd-namerd config:

    ports:
    - name: outgoing
      containerPort: 4140
      hostPort: 4140
    - name: incoming
      containerPort: 4141
    - name: external
      containerPort: 4142
    - name: admin
      containerPort: 9990

configs
namerd2.yml (3.8 KB)
linkerd-namerd2.yml (2.9 KB)

@zshaik Did you try adding hostNetwork: true to the io.l5d.k8s.daemonset transformer?
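
Concretely, something like this in the namerd config (just a sketch of the snippet you posted earlier, with the hostNetwork flag added to the transformer, mirroring what you already do on the linkerd side):

- kind: io.l5d.k8s
  prefix: /io.l5d.k8s.http
  transformers:
  - kind: io.l5d.k8s.daemonset
    namespace: default
    port: incoming
    service: l5d
    hostNetwork: true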

@siggy @marzipan solved! tyvm! :weight_lifting_man: