App is unable to route through linkerd in k8s

Setup:

I have an app that routes requests to given endpoints. I get the following response when I try to curl from inside the app's pod.

Request

kubectl exec -it my-app -- /bin/bash
curl -H "target-api: https://google.com" localhost

Response
curl: (5) Could not resolve proxy: ip-10-8-xxx-xxx.us-west-2.compute.internal

Note: routing only works when curl is given the proxy explicitly via the http_proxy environment variable.

I used the configuration below, as described here: https://linkerd.io/getting-started/k8s-daemonset

env:
- name: NODE_NAME
  valueFrom:
    fieldRef:
      fieldPath: spec.nodeName
- name: POD_IP
  valueFrom:
    fieldRef:
      fieldPath: status.podIP
- name: http_proxy
  value: $(NODE_NAME):4140

linkerd config

routers:
- protocol: http
  label: outgoing
  dtab: |
    /ph        => /$/io.buoyant.rinet ; # Lookup the name in DNS
    /svc       => /ph/80 ; # Use port 80 if unspecified
    /srv       => /$/io.buoyant.porthostPfx/ph ; # Attempt to extract the port from the hostname
    /srv       => /#/io.l5d.k8s.ds/default/http ; # Lookup the name in Kubernetes, use the linkerd daemonset pod
    /svc       => /srv ;
    /svc/world => /srv/world-v1 ;
  servers:
  - port: 4140
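
To make sense of the dtab above, here is a rough sketch of how two requests would be delegated (reading the rules bottom-up, since later entries take precedence; the exact chains may differ slightly depending on linkerd version):

```
# Host: world  (internal k8s service)
/svc/world
  => /srv/world-v1                             # via "/svc/world => /srv/world-v1"
  => /#/io.l5d.k8s.ds/default/http/world-v1    # via "/srv => /#/io.l5d.k8s.ds/default/http"

# Host: google.com  (no k8s service matches; falls through to DNS)
/svc/google.com
  => /srv/google.com                           # via "/svc => /srv" (k8s lookup fails)
  => /ph/80/google.com                         # via "/svc => /ph/80" (default to port 80)
  => /$/io.buoyant.rinet/80/google.com         # via "/ph => /$/io.buoyant.rinet" (plain DNS)
```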

Using linkerd’s internal service name (l5d) worked! Not sure if this is the standard/recommended/appropriate way to do it ¯\\_(ツ)_/¯

env:
- name: NODE_NAME
  valueFrom:
    fieldRef:
      fieldPath: spec.nodeName
- name: POD_IP
  valueFrom:
    fieldRef:
      fieldPath: status.podIP
- name: http_proxy
  value: l5d:4140

What Kubernetes environment are you running this in? Many don’t support spec.nodeName, which could cause this type of behavior.


Hi @zshaik! Using linkerd’s service VIP (l5d) is not recommended, since it means that when your app makes a request, that request is sent to a random linkerd instance instead of the one running on the local node.

Based on the curl output, it seems the node name is ip-10-8-xxx-xxx.us-west-2.compute.internal, but that name is not resolvable from the pod network. You may need to dig into how your network is configured to determine why that is.
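
If the node name turns out not to be resolvable from pods, one common workaround is to inject the node's IP address instead of its name via the Downward API's status.hostIP field, which sidesteps DNS entirely. A sketch, assuming your cluster version exposes that field (it is not available on very old Kubernetes releases):

```yaml
env:
- name: HOST_IP
  valueFrom:
    fieldRef:
      fieldPath: status.hostIP  # the node's IP address; no DNS lookup needed
- name: http_proxy
  value: $(HOST_IP):4140        # point curl and friends at the local linkerd
```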


The k8s cluster is running on AWS. Below is the node info:

System info
Machine ID: f32e0af35637b5dfcbedcb0a1de8dca1
System UUID: EC23C7FE-59DC-268E-0F09-
Boot ID: 2674466d-d9ac-48ae-a616-
Kernel Version: 3.10.0-327.36.3.el7.x86_64
OS Image: CentOS Linux 7 (Core)
Container Runtime Version: docker://1.12.3
Kubelet Version: v1.6.2
Kube-Proxy Version: v1.6.2
Operating system: linux
Architecture: amd64

Cool, thanks! It’s not spec.nodeName related then.

Did you investigate @Alex’s suggestion? Any progress on figuring out why the pods can’t talk to ip-10-8-xxx-xxx.us-west-2.compute.internal ?


Hey! We resolved it: some nodes in the cluster were not responding, and we also had to set hostNetwork: true in the daemonset. Thanks for the follow-up :hugs:
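
For anyone hitting the same issue, the hostNetwork change goes in the daemonset's pod spec, roughly like this (a sketch using the standard DaemonSet fields; adjust to your manifest):

```yaml
spec:
  template:
    spec:
      hostNetwork: true  # linkerd pods share the node's network namespace
      # With hostNetwork, pods inherit the node's DNS config by default;
      # this policy keeps cluster DNS (service names) resolvable as well.
      dnsPolicy: ClusterFirstWithHostNet
```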
