Why can't I access port 4140 on each host via eth0's IP?


#1

I deployed Linkerd on Kubernetes. When I run the hello-world example, it raises an error:

root@dfs0:~/linkerd# http_proxy=172.17.229.159:4140 curl -s http://world
world (10.244.29.24)!root@dfs0:~/linkerd# 
root@dfs0:~/linkerd# http_proxy=172.17.229.159:4140 curl -s http://hello
Get http://world: proxyconnect tcp: dial tcp 172.17.254.48:4140: getsockopt: connection refused
root@dfs0:~/linkerd#

When I run node-name-test, it succeeds.

root@dfs0:~/linkerd# kubectl logs node-name-test
Server:		10.96.0.10
Address:	10.96.0.10#53

Name:	iz2zeiztnsn7rv8a5tibr8z.acmcoder.mmttnn
Address: 172.17.254.48

root@dfs0:~/linkerd#

And I tested a few things:

  1. If I use the pod IP, it succeeds (Linkerd answers): curl http://10.244.30.9:4140
  2. If I use the eth0 IP, the connection is refused (see the note after the output below): curl http://172.17.254.53:4140
root@dfs0:~/linkerd# curl 172.17.254.53:4140
curl: (7) Failed to connect to 172.17.254.53 port 4140: Connection refused
root@dfs0:~/linkerd# curl 10.244.30.9:4140
No hosts are available for /svc/10.244.30.9:4140, Dtab.base=[/srv=>/#/io.l5d.k8s/default/http;/host=>/srv;/svc=>/host;/host/world=>/srv/world-v1], Dtab.local=[]. Remote Info: Not Availableroot@dfs0:~/linkerd# 
root@dfs0:~/linkerd#
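My guess (please correct me; the pod description in the next post shows the same port layout): the l5d DaemonSet publishes 4140 only as a hostPort, and with a CNI network plugin a hostPort is usually wired up by the chained portmap plugin as iptables DNAT rules rather than a real listener on the node. If that plugin isn't part of the CNI config, the node's eth0 IP refuses the connection while the pod IP keeps working. A rough sketch of the relevant stanza (port names are assumptions based on the standard linkerd-examples DaemonSet, not copied from my manifest):

# Fragment of the l5d container spec in the DaemonSet (assumed layout;
# compare with your own manifest). Only 4140 is declared as a hostPort.
ports:
- name: outgoing
  containerPort: 4140
  hostPort: 4140     # with CNI, this mapping relies on the portmap plugin (iptables DNAT)
- name: incoming
  containerPort: 4141
- name: admin
  containerPort: 9990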

#2

The state of one of the l5d pods is:

root@dfs0:~/linkerd# kubectl describe pod/l5d-k4wxx
Name:           l5d-k4wxx
Namespace:      default
Node:           iz2zef7rx1lvm90vsghh9qz.acmcoder.mmttnn/172.17.254.53
Start Time:     Wed, 13 Jun 2018 11:05:20 +0800
Labels:         app=l5d
                controller-revision-hash=485110773
                pod-template-generation=1
Annotations:    <none>
Status:         Running
IP:             10.244.30.9
Controlled By:  DaemonSet/l5d
Containers:
  l5d:
    Container ID:  docker://8bbca43a945a0229441ce1e1f2b7762d06ef40529ec5015d866880cf584ead9e
    Image:         docker.acmcoder.com/public/linkerd:1.3.6
    Image ID:      docker-pullable://docker.acmcoder.com/lifubang/linkerd@sha256:b7034fa1c0eccd53c0fcfe577ee08f39c11e9a8286a48e99307b6d98cdedb86d
    Ports:         4140/TCP, 4141/TCP, 9990/TCP
    Host Ports:    4140/TCP, 0/TCP, 0/TCP
    Args:
      /io.buoyant/linkerd/config/config.yaml
    State:          Running
      Started:      Wed, 13 Jun 2018 11:05:21 +0800
    Ready:          True
    Restart Count:  0
    Environment:
      POD_IP:   (v1:status.podIP)
    Mounts:
      /io.buoyant/linkerd/config from l5d-config (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-xnblb (ro)
  kubectl:
    Container ID:  docker://bd4081b7b57bf9f7054ef83460ae71793207d5dbf9d6da5282675a1b63572711
    Image:         docker.acmcoder.com/public/kubectl:v1.8.5
    Image ID:      docker-pullable://docker.acmcoder.com/public/kubectl@sha256:11bec77df802ae350c9101bfd9e961a7d67b1288f1fec98bbfbd6e800083b28b
    Port:          <none>
    Host Port:     <none>
    Args:
      proxy
      -p
      8001
    State:          Running
      Started:      Wed, 13 Jun 2018 11:05:21 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-xnblb (ro)
Conditions:
  Type           Status
  Initialized    True 
  Ready          True 
  PodScheduled   True 
Volumes:
  l5d-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      l5d-config
    Optional:  false
  default-token-xnblb:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-xnblb
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/unreachable:NoExecute
Events:
  Type    Reason                 Age   From                                              Message
  ----    ------                 ----  ----                                              -------
  Normal  SuccessfulMountVolume  27m   kubelet, iz2zef7rx1lvm90vsghh9qz.acmcoder.mmttnn  MountVolume.SetUp succeeded for volume "l5d-config"
  Normal  SuccessfulMountVolume  27m   kubelet, iz2zef7rx1lvm90vsghh9qz.acmcoder.mmttnn  MountVolume.SetUp succeeded for volume "default-token-xnblb"
  Normal  Pulled                 27m   kubelet, iz2zef7rx1lvm90vsghh9qz.acmcoder.mmttnn  Container image "docker.acmcoder.com/public/linkerd:1.3.6" already present on machine
  Normal  Created                27m   kubelet, iz2zef7rx1lvm90vsghh9qz.acmcoder.mmttnn  Created container
  Normal  Started                27m   kubelet, iz2zef7rx1lvm90vsghh9qz.acmcoder.mmttnn  Started container
  Normal  Pulled                 27m   kubelet, iz2zef7rx1lvm90vsghh9qz.acmcoder.mmttnn  Container image "docker.acmcoder.com/public/kubectl:v1.8.5" already present on machine
  Normal  Created                27m   kubelet, iz2zef7rx1lvm90vsghh9qz.acmcoder.mmttnn  Created container
  Normal  Started                27m   kubelet, iz2zef7rx1lvm90vsghh9qz.acmcoder.mmttnn  Started container

#3

Are you using CNI? If so, you’ll need to set hostNetwork: true in your Linkerd transformer configs. More info here: Flavors of Kubernetes
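Roughly, two changes are involved. This is only a sketch based on the CNI flavor in linkerd-examples; verify the transformer fields against the Flavors of Kubernetes guide and your own l5d-config before applying:

# 1) In the l5d-config ConfigMap, the Kubernetes transformers get hostNetwork: true
#    so Linkerd resolves the l5d DaemonSet and local pods by node IP:
#
#      transformers:
#      - kind: io.l5d.k8s.daemonset      # outgoing router
#        namespace: default
#        port: incoming
#        service: l5d
#        hostNetwork: true
#      ...
#      transformers:
#      - kind: io.l5d.k8s.localnode      # incoming router
#        hostNetwork: true
#
# 2) The l5d DaemonSet pod itself runs on the host network (volumes, volumeMounts,
#    and the kubectl sidecar are omitted here for brevity):
apiVersion: extensions/v1beta1   # as in the linkerd-examples manifests of that era
kind: DaemonSet
metadata:
  name: l5d
spec:
  template:
    metadata:
      labels:
        app: l5d
    spec:
      hostNetwork: true                    # bind Linkerd's ports on the node itself
      dnsPolicy: ClusterFirstWithHostNet   # keep cluster DNS resolution with hostNetwork
      containers:
      - name: l5d
        image: docker.acmcoder.com/public/linkerd:1.3.6
        args:
        - /io.buoyant/linkerd/config/config.yaml
        ports:
        - name: outgoing
          containerPort: 4140
        - name: incoming
          containerPort: 4141
        - name: admin
          containerPort: 9990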


#4

I do use CNI, and I tried the hostNetwork: true solution, but it did not fix it.
I worked around it with a NodePort service instead, and that fixed it; a rough sketch is below.
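
For anyone who hits the same thing, this is roughly what I mean. The service name and the nodePort number here are just placeholders:

apiVersion: v1
kind: Service
metadata:
  name: l5d-outgoing            # placeholder name
  namespace: default
spec:
  type: NodePort
  selector:
    app: l5d                    # matches the l5d DaemonSet pods
  ports:
  - name: outgoing
    port: 4140
    targetPort: 4140
    nodePort: 30140             # any free port in the default 30000-32767 range

With that in place, the proxy is reachable on every node's IP at the NodePort instead of 4140, e.g. http_proxy=172.17.229.159:30140 curl -s http://hello.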