Application not working after configuring it with the http_proxy environment variable

Env: Kubernetes 1.7
Networking: Canal (Flannel + Calico)

Application architecture: taken from the Istio sample application (Bookinfo).

Please see the application architecture diagram (the application with and without the http_proxy environment variable).

In this sample we will deploy a simple application that displays information about a book, similar to a single catalog entry of an online book store. Displayed on the page is a description of the book, book details (ISBN, number of pages, and so on), and a few book reviews.

The BookInfo application is broken into four separate microservices:

  • productpage. The productpage microservice calls the details and reviews microservices to populate the page.
  • details. The details microservice contains book information.
  • reviews. The reviews microservice contains book reviews. It also calls the ratings microservice.
  • ratings. The ratings microservice contains book ranking information that accompanies a book review.

There are 3 versions of the reviews microservice:

  • Version v1 doesn’t call the ratings service.
  • Version v2 calls the ratings service, and displays each rating as 1 to 5 black stars.
  • Version v3 calls the ratings service, and displays each rating as 1 to 5 red stars.

Application YAML with the http_proxy environment variable:

apiVersion: v1
kind: Service
metadata:
  name: details
  labels:
    app: details
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: details
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: details-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: details
        version: v1
    spec:
      containers:
      - name: details
        image: istio/examples-bookinfo-details-v1
        imagePullPolicy: IfNotPresent
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: POD_NAME
          value: details
        - name: http_proxy
          value: $(NODE_NAME):4140
        ports:
        - containerPort: 9080
---
################################################
# Ratings service
################################################
apiVersion: v1
kind: Service
metadata:
  name: ratings
  labels:
    app: ratings
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: ratings
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ratings-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: ratings
        version: v1
    spec:
      containers:
      - name: ratings
        image: istio/examples-bookinfo-ratings-v1
        imagePullPolicy: IfNotPresent
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: http_proxy
          value: $(NODE_NAME):4140
        - name: POD_NAME
          value: ratings
        ports:
        - containerPort: 9080
---
########################################################
# Reviews service
########################################################
apiVersion: v1
kind: Service
metadata:
  name: reviews
  labels:
    app: reviews
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: reviews
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: reviews-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: reviews
        version: v1
    spec:
      containers:
      - name: reviews
        image: istio/examples-bookinfo-reviews-v1
        imagePullPolicy: IfNotPresent
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: http_proxy
          value: $(NODE_NAME):4140
        - name: POD_NAME
          value: reviews1
        ports:
        - containerPort: 9080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: reviews-v2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: reviews
        version: v2
    spec:
      containers:
      - name: reviews
        image: istio/examples-bookinfo-reviews-v2
        imagePullPolicy: IfNotPresent
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: http_proxy
          value: $(NODE_NAME):4140
        - name: POD_NAME
          value: reviews2
        ports:
        - containerPort: 9080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: reviews-v3
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: reviews
        version: v3
    spec:
      containers:
      - name: reviews
        image: istio/examples-bookinfo-reviews-v3
        imagePullPolicy: IfNotPresent
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: http_proxy
          value: $(NODE_NAME):4140
        - name: POD_NAME
          value: reviews3
        ports:
        - containerPort: 9080
---
##################################################
# Productpage service
##################################################
apiVersion: v1
kind: Service
metadata:
  name: productpage
  labels:
    app: productpage
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: productpage
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: productpage-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: productpage
        version: v1
    spec:
      containers:
      - name: productpage
        image: istio/examples-bookinfo-productpage-v1
        imagePullPolicy: IfNotPresent
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: http_proxy
          value: $(NODE_NAME):4140
        - name: POD_NAME
          value: productpage
        ports:
        - containerPort: 9080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    k8s-app: bookinfo
  name: bookinfo
spec:
  rules:
    - host: bookinfo.13.54.45.20.nip.io
      http:
        paths:
        - path: /productpage
          backend:
            serviceName: productpage
            servicePort: 9080
        - path: /login
          backend:
            serviceName: productpage
            servicePort: 9080
        - path: /logout
          backend:
            serviceName: productpage
            servicePort: 9080

Linkerd (DaemonSet) + Zipkin YAML:

apiVersion: rbac.authorization.k8s.io/v1alpha1
kind: ClusterRoleBinding
metadata:
  name: linkerd
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: l5d-config
data:
  config.yaml: |-
    admin:
      port: 9990

    namers:
    - kind: io.l5d.k8s
      experimental: true
      host: localhost
      port: 8001

    telemetry:
    - kind: io.l5d.prometheus
    - kind: io.l5d.zipkin
      host: zipkin-collector.default.svc.cluster.local
      port: 9410
      sampleRate: 1.0
    - kind: io.l5d.recentRequests
      sampleRate: 0.25

    usage:
      orgId: linkerd-examples-daemonset-zipkin

    routers:
    - protocol: http
      label: outgoing
      dtab: |
        /srv        => /#/io.l5d.k8s/default/http;
        /host       => /srv;
        /svc        => /host;
        /host/reviews1 => /srv/reviews;
        /host/reviews2 => /srv/reviews;
        /host/reviews3 => /srv/reviews;
        /host/details => /srv/details;
        /host/ratings => /srv/ratings;
        /host/world => /srv/world-v1;
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.daemonset
          namespace: default
          port: incoming
          service: l5d
          hostNetwork: true
      servers:
      - port: 4140
        ip: 0.0.0.0
      service:
        responseClassifier:
          kind: io.l5d.http.retryableRead5XX

    - protocol: http
      label: incoming
      dtab: |
        /srv        => /#/io.l5d.k8s/default/http;
        /host       => /srv;
        /svc        => /host;
        /host/reviews1 => /srv/reviews;
        /host/reviews2 => /srv/reviews;
        /host/reviews3 => /srv/reviews;
        /host/details => /srv/details;
        /host/ratings => /srv/ratings;
        /host/world => /srv/world-v1;
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.localnode
          hostNetwork: true
      servers:
      - port: 4141
        ip: 0.0.0.0
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    app: l5d
  name: l5d
spec:
  template:
    metadata:
      labels:
        app: l5d
    spec:
      hostNetwork: true
      volumes:
      - name: l5d-config
        configMap:
          name: "l5d-config"
      containers:
      - name: l5d
        image: buoyantio/linkerd:1.1.2
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        args:
        - /io.buoyant/linkerd/config/config.yaml
        ports:
        - name: outgoing
          containerPort: 4140
          hostPort: 4140
        - name: incoming
          containerPort: 4141
        - name: admin
          containerPort: 9990
        volumeMounts:
        - name: "l5d-config"
          mountPath: "/io.buoyant/linkerd/config"
          readOnly: true

      - name: kubectl
        image: buoyantio/kubectl:v1.6.2
        args:
        - "proxy"
        - "-p"
        - "8001"
---
apiVersion: v1
kind: Service
metadata:
  name: l5d
spec:
  selector:
    app: l5d
  ports:
  - name: outgoing
    port: 4140
  - name: incoming
    port: 4141
  - name: admin
    port: 9990
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    k8s-app: linkerd-dashboard
  name: linkerd-dashboard
  namespace: default
spec:
  rules:
    - host: linkerd-dashboard.13.54.45.20.nip.io
      http:
        paths:
          - backend:
              serviceName: l5d
              servicePort: 9990

Application YAML WITHOUT the http_proxy environment variable:

######################################################
# Details service
######################################################
apiVersion: v1
kind: Service
metadata:
  name: details
  labels:
    app: details
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: details
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: details-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: details
        version: v1
    spec:
      containers:
      - name: details
        image: istio/examples-bookinfo-details-v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
####################################################
# Ratings service
####################################################
apiVersion: v1
kind: Service
metadata:
  name: ratings
  labels:
    app: ratings
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: ratings
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ratings-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: ratings
        version: v1
    spec:
      containers:
      - name: ratings
        image: istio/examples-bookinfo-ratings-v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
####################################################
# Reviews service
####################################################
apiVersion: v1
kind: Service
metadata:
  name: reviews
  labels:
    app: reviews
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: reviews
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: reviews-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: reviews
        version: v1
    spec:
      containers:
      - name: reviews
        image: istio/examples-bookinfo-reviews-v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: reviews-v2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: reviews
        version: v2
    spec:
      containers:
      - name: reviews
        image: istio/examples-bookinfo-reviews-v2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: reviews-v3
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: reviews
        version: v3
    spec:
      containers:
      - name: reviews
        image: istio/examples-bookinfo-reviews-v3
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
####################################################
# Productpage service
####################################################
apiVersion: v1
kind: Service
metadata:
  name: productpage
  labels:
    app: productpage
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: productpage
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: productpage-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: productpage
        version: v1
    spec:
      containers:
      - name: productpage
        image: istio/examples-bookinfo-productpage-v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
######################################
# Ingress resource 
######################################
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    k8s-app: bookinfo
  name: bookinfo
spec:
  rules:
    - host: bookinfo.13.54.45.20.nip.io
      http:
        paths:
        - path: /productpage
          backend:
            serviceName: productpage
            servicePort: 9080
        - path: /login
          backend:
            serviceName: productpage
            servicePort: 9080
        - path: /logout
          backend:
            serviceName: productpage
            servicePort: 9080

Without the http_proxy environment variable the application works perfectly and the output is as expected.

Please see GitHub issue 162.

But when I add the http_proxy environment variable it does not respond; the output is as follows.
As far as I understand, it is not able to connect to the "details" & "reviews" microservices and fetch the data.

After debugging, I get the following logs:

D 0719 03:24:49.799 UTC THREAD38 TraceId:58746e4545e80cbf: k8s ns default service reviews found
D 0719 03:24:49.800 UTC THREAD38 TraceId:58746e4545e80cbf: k8s ns default service reviews port http found + /
D 0719 03:24:49.800 UTC THREAD38 TraceId:58746e4545e80cbf: k8s ns default initial state: kubernetes, productpage, reviews, l5d, ratings, details, zipkin, zipkin-collector
D 0719 03:24:49.800 UTC THREAD38 TraceId:58746e4545e80cbf: k8s ns default service l5d found
D 0719 03:24:49.800 UTC THREAD38 TraceId:58746e4545e80cbf: k8s ns default service l5d port incoming found + /
D 0719 03:24:49.798 UTC THREAD38 TraceId:58746e4545e80cbf: k8s ns default initial state: kubernetes, productpage, reviews, l5d, ratings, details, zipkin, zipkin-collector
E 0719 03:25:04.035 UTC THREAD40 TraceId:261c45c27f88e062: service failure
com.twitter.finagle.NoBrokersAvailableException: No hosts are available for /svc/details:9080, Dtab.base=[/srv=>/#/io.l5d.k8s/default/http;/host=>/srv;/svc=>/host;/host/world=>/srv/world-v1;/host/details=>/srv/details;/host/reviews=>/srv/reviews;/host/ratings=>/srv/ratings], Dtab.local=[]. Remote Info: Not Available

E 0719 03:25:04.057 UTC THREAD41 TraceId:c296d05e0a396e82: service failure
com.twitter.finagle.NoBrokersAvailableException: No hosts are available for /svc/reviews:9080, Dtab.base=[/srv=>/#/io.l5d.k8s/default/http;/host=>/srv;/svc=>/host;/host/world=>/srv/world-v1;/host/details=>/srv/details;/host/reviews=>/srv/reviews;/host/ratings=>/srv/ratings], Dtab.local=[]. Remote Info: Not Available

E 0719 03:25:04.074 UTC THREAD43 TraceId:3d914cc73711c984: service failure
com.twitter.finagle.NoBrokersAvailableException: No hosts are available for /svc/reviews:9080, Dtab.base=[/srv=>/#/io.l5d.k8s/default/http;/host=>/srv;/svc=>/host;/host/world=>/srv/world-v1;/host/details=>/srv/details;/host/reviews=>/srv/reviews;/host/ratings=>/srv/ratings], Dtab.local=[]. Remote Info: Not Available

It looks like the dtab section needs to be edited, though it is not clear how the dtab should be set up for my application.

Can you please help me write the dtab section based on my application YAML file?

Thanks in advance.

The problem is that the bookinfo application makes requests that include the port number: reviews:9080 for example. The Linkerd configuration that you’re using expects there to be no port number and instead always uses the port named http in the Kubernetes service.
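
To make the mismatch concrete, here is a rough sketch of how name resolution plays out with the dtab above (illustrative, reconstructed from the error logs above):

# productpage sends the port in the Host header:
#   GET http://details:9080/...             logical name: /svc/details:9080
# the outgoing dtab then rewrites:
#   /svc/details:9080 -> /host/details:9080 -> /srv/details:9080
#   -> /#/io.l5d.k8s/default/http/details:9080
# i.e. Linkerd looks up a Kubernetes Service literally named "details:9080"
# on port "http" in the default namespace, which doesn't exist, hence
# NoBrokersAvailableException.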

There are a few different options for how to proceed:

  1. Modify the bookinfo app to omit the port number from requests
  2. Edit the Linkerd dtab to use the port number from the request
  3. Use the Istio integration for Linkerd which correctly handles these types of requests

Thanks for the response.

Option 1 - I don't want to use this because it means changing code.

Option 2 - What would the dtab look like if the port is included in the request? Please help me with that.

Option 3 - Personally I don't like injecting another container alongside each application. I think it is overhead and an anti-pattern, especially in a large-scale deployment: when applications auto-scale, the number of containers grows, which is not a good fit for a container platform. That is why I prefer the daemon approach.

Anyway, a quick question: does linkerd-inject do the same thing as Istio?

Something like this:

/porthost => /#/io.l5d.k8s/default ;
/svc => /$/io.buoyant.porthostPfx/porthost ;

Reading from bottom to top:

  • the second rule would rewrite names like /svc/reviews:9080 to /porthost/9080/reviews.
  • the first rule would rewrite names like /porthost/9080/reviews to /#/io.l5d.k8s/default/9080/reviews.
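
For reference, a sketch of how the outgoing router's dtab in the l5d-config above would look with those two rules appended (the incoming router's dtab is identical in that config, so it would presumably need the same addition):

      dtab: |
        /srv        => /#/io.l5d.k8s/default/http;
        /host       => /srv;
        /svc        => /host;
        /host/reviews1 => /srv/reviews;
        /host/reviews2 => /srv/reviews;
        /host/reviews3 => /srv/reviews;
        /host/details => /srv/details;
        /host/ratings => /srv/ratings;
        /host/world => /srv/world-v1;
        /porthost => /#/io.l5d.k8s/default ;
        /svc => /$/io.buoyant.porthostPfx/porthost ;

Since later entries take precedence, names that carry a port (e.g. /svc/details:9080) should go through the porthost rules, while names without a port should fall back to the earlier /svc => /host rule.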

Fantastic … it works! 🙂

A quick follow-up question:
if I run the Bookinfo application in a different namespace (testing), then I add to the dtab as follows

/porthost => /#/io.l5d.k8s/default ;
/porthost => /#/io.l5d.k8s/testing ;
/svc => /$/io.buoyant.porthostPfx/porthost ;

In this case the result comes back, but I am getting errors in the logs:


E 0720 05:15:09.766 UTC THREAD19 TraceId:5477b4a9d2927803: service failure
Failure(Connection refused: /10.244.1.143:9080 at remote address: /10.244.1.143:9080. Remote Info: Not Available, flags=0x09) with RemoteInfo -> Upstream Address: /172.31.22.47:39588, Upstream Client Id: Not Available, Downstream Address: /10.244.1.143:9080, Downstream Client Id: %/io.l5d.k8s.localnode/ip-172-31-22-47/#/io.l5d.k8s/testing/9080/reviews, Trace Id: 5477b4a9d2927803.87ee2254830be63f<:9c2fefc46e867d5a
Caused by: com.twitter.finagle.ConnectionFailedException: Connection refused: /10.244.1.143:9080 at remote address: /10.244.1.143:9080. Remote Info: Not Available
        at com.twitter.finagle.netty4.Netty4Transporter$$anon$2$$anon$1.operationComplete(Netty4Transporter.scala:107)
        at com.twitter.finagle.netty4.Netty4Transporter$$anon$2$$anon$1.operationComplete(Netty4Transporter.scala:94)
        at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
        at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:481)
        at io.netty.util.concurrent.DefaultPromise.access$000(DefaultPromise.java:34)
        at io.netty.util.concurrent.DefaultPromise$1.run(DefaultPromise.java:431)
        at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
        at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:462)
        at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at com.twitter.finagle.util.BlockingTimeTrackingThreadFactory$$anon$1.run(BlockingTimeTrackingThreadFactory.scala:24)
        at java.lang.Thread.run(Thread.java:748)
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: /10.244.1.143:9080
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
        at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:352)
        at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:340)
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:632)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:579)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:496)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458)
        ... 5 more
Caused by: java.net.ConnectException: Connection refused
        ... 13 more

E 0720 05:15:10.009 UTC THREAD27 TraceId:5477b4a9d2927803: service failure
Failure(Connection refused: /10.244.1.143:9080 at remote address: /10.244.1.143:9080. Remote Info: Not Available, flags=0x09) with RemoteInfo -> Upstream Address: /172.31.22.47:39592, Upstream Client Id: Not Available, Downstream Address: /10.244.1.143:9080, Downstream Client Id: %/io.l5d.k8s.localnode/ip-172-31-22-47/#/io.l5d.k8s/testing/9080/reviews, Trace Id: 5477b4a9d2927803.ad25725fb8ef9ab0<:384594548b12376e
Caused by: com.twitter.finagle.ConnectionFailedException: Connection refused: /10.244.1.143:9080 at remote address: /10.244.1.143:9080. Remote Info: Not Available
        at com.twitter.finagle.netty4.Netty4Transporter$$anon$2$$anon$1.operationComplete(Netty4Transporter.scala:107)
        at com.twitter.finagle.netty4.Netty4Transporter$$anon$2$$anon$1.operationComplete(Netty4Transporter.scala:94)
        at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
        at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:481)
        at io.netty.util.concurrent.DefaultPromise.access$000(DefaultPromise.java:34)
        at io.netty.util.concurrent.DefaultPromise$1.run(DefaultPromise.java:431)
        at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
        at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:462)
        at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at com.twitter.finagle.util.BlockingTimeTrackingThreadFactory$$anon$1.run(BlockingTimeTrackingThreadFactory.scala:24)
        at java.lang.Thread.run(Thread.java:748)
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: /10.244.1.143:9080
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
        at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:352)
        at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:340)
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:632)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:579)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:496)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458)
        ... 5 more
Caused by: java.net.ConnectException: Connection refused
        ... 13 more

D 0720 05:15:11.894 UTC THREAD29 TraceId:5477b4a9d2927803: Exception propagated to the default monitor (upstream address: /10.244.1.145:38628, downstream address: /172.31.22.47:4141, label: %/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/testing/9080/reviews).
com.twitter.finagle.CancelledRequestException: request cancelled. Remote Info: Upstream Address: /10.244.1.145:38628, Upstream Client Id: Not Available, Downstream Address: /172.31.22.47:4141, Downstream Client Id: %/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/testing/9080/reviews, Trace Id: 5477b4a9d2927803.26e4e378eb2897c7<:5477b4a9d2927803

E 0720 05:15:11.945 UTC THREAD29 TraceId:5477b4a9d2927803: service failure
com.twitter.finagle.CancelledRequestException: request cancelled. Remote Info: Upstream Address: /10.244.1.145:38628, Upstream Client Id: Not Available, Downstream Address: /172.31.22.47:4141, Downstream Client Id: %/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/testing/9080/reviews, Trace Id: 5477b4a9d2927803.26e4e378eb2897c7<:5477b4a9d2927803

D 0720 05:15:12.707 UTC THREAD26 TraceId:5477b4a9d2927803: Exception propagated to the default monitor (upstream address: /172.31.22.47:39596, downstream address: /10.244.1.144:9080, label: %/io.l5d.k8s.localnode/ip-172-31-22-47/#/io.l5d.k8s/testing/9080/reviews).
Failure(request cancelled. Remote Info: Upstream Address: /10.244.1.145:38628, Upstream Client Id: Not Available, Downstream Address: /172.31.22.47:4141, Downstream Client Id: %/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/testing/9080/reviews, Trace Id: 5477b4a9d2927803.26e4e378eb2897c7<:5477b4a9d2927803, flags=0x02) with RemoteInfo -> Upstream Address: /172.31.22.47:39596, Upstream Client Id: Not Available, Downstream Address: /10.244.1.144:9080, Downstream Client Id: %/io.l5d.k8s.localnode/ip-172-31-22-47/#/io.l5d.k8s/testing/9080/reviews, Trace Id: 5477b4a9d2927803.aa9b9d91ab0d7214<:42a99c3d91084c45
Caused by: com.twitter.finagle.CancelledRequestException: request cancelled. Remote Info: Upstream Address: /10.244.1.145:38628, Upstream Client Id: Not Available, Downstream Address: /172.31.22.47:4141, Downstream Client Id: %/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/testing/9080/reviews, Trace Id: 5477b4a9d2927803.26e4e378eb2897c7<:5477b4a9d2927803

I wanted to know how to run multiple applications in different namespaces.

For example, the hello-world app runs in the default namespace and the Bookinfo app runs in the testing namespace, in the same environment.

So what should the dtab look like?

Another question: can I trace any multi-tier application without instrumentation and send the traces to Zipkin?

Those logs show an error connecting to 10.244.1.143:9080 (a reviews pod in the testing namespace).

This means that the Linkerd on ip-172-31-22-47 received a request for the reviews service but no instance of that service was running on the node. This is a normal thing to happen as instances get moved around. Linkerd should automatically retry the request and everything should be fine.

Your dtab looks fine. Entries toward the bottom will take precedence so Linkerd will look for apps in the testing namespace first and then try the default namespace if not found in testing.
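
As a rough trace of how that resolves for a name like /svc/reviews:9080 with your dtab:

/svc/reviews:9080
  -> /porthost/9080/reviews                (via io.buoyant.porthostPfx)
  -> /#/io.l5d.k8s/testing/9080/reviews    (bottom /porthost entry, tried first)
  -> /#/io.l5d.k8s/default/9080/reviews    (fallback if not found in testing)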

You can read more about how to trace applications without instrumentation here: A Service Mesh for Kubernetes, Part VII: Distributed tracing made easy | Linkerd

Basically, I was inspired by that blog post. I have another application with two microservices (a Node.js app + MongoDB). When I add the http_proxy environment variable to its deployment config, the Linkerd logs show the services being added, but nothing reaches Zipkin when I use the application.

I am using the same environment setup. I can see traces for bookinfo and hello-world, but not for this app.

Looking for your guidance.

Today I did another round of testing.

The scenario is as follows:

  1. Linkerd is running, and my hello-world & bookinfo apps are also running; the bookinfo application works perfectly.
  2. I shut down Linkerd and notice that bookinfo stops working.
  3. I bring Linkerd back up; the bookinfo application works only partly, with one service missing (details). The errors are as follows.

D 0721 04:07:10.225 UTC THREAD26 TraceId:a469f2cb7f9ae916: k8s lookup: /testing/9080/details /testing/9080/details
I 0721 04:07:10.243 UTC THREAD26: k8s initializing testing
I 0721 04:07:10.263 UTC THREAD26: k8s initializing testing
D 0721 04:07:10.294 UTC THREAD27: k8s ns testing initial state: details, productpage, ratings, reviews
D 0721 04:07:10.295 UTC THREAD27: k8s ns testing service details found
D 0721 04:07:10.305 UTC THREAD27: k8s ns testing service details port :9080 missing
D 0721 04:07:10.309 UTC THREAD27: k8s lookup: /default/http/details:9080 /default/http/details:9080
D 0721 04:07:10.309 UTC THREAD27: k8s ns default initial state: kubernetes, world-v1, linkerd-viz, l5d, zipkin, zipkin-collector, hello
D 0721 04:07:10.309 UTC THREAD27: k8s ns default service details:9080 missing
D 0721 04:07:10.312 UTC THREAD27: k8s ns default service l5d found
D 0721 04:07:10.312 UTC THREAD27: k8s ns default service l5d port incoming found + /
D 0721 04:07:10.312 UTC THREAD27: k8s ns default initial state: kubernetes, world-v1, linkerd-viz, l5d, zipkin, zipkin-collector, hello
E 0721 04:07:10.325 UTC THREAD27 TraceId:a469f2cb7f9ae916: service failure
com.twitter.finagle.NoBrokersAvailableException: No hosts are available for /svc/details:9080, Dtab.base=[/srv=>/#/io.l5d.k8s/default/http;/host=>/srv;/svc=>/host;/host/world=>/srv/world-v1;/porthost=>/#/io.l5d.k8s/testing;/svc=>/$/io.buoyant.porthostPfx/porthost], Dtab.local=[]. Remote Info: Not Available

D 0721 04:07:10.345 UTC THREAD30 TraceId:cc7c32639bfc5419: k8s lookup: /testing/9080/reviews /testing/9080/reviews
D 0721 04:07:10.345 UTC THREAD30 TraceId:cc7c32639bfc5419: k8s ns testing initial state: details, productpage, ratings, reviews
D 0721 04:07:10.346 UTC THREAD30 TraceId:cc7c32639bfc5419: k8s ns testing service reviews found
D 0721 04:07:10.348 UTC THREAD30 TraceId:cc7c32639bfc5419: k8s ns testing service reviews port :9080 found + /
D 0721 04:07:10.349 UTC THREAD30 TraceId:cc7c32639bfc5419: k8s ns default initial state: kubernetes, world-v1, linkerd-viz, l5d, zipkin, zipkin-collector, hello
D 0721 04:07:10.349 UTC THREAD30 TraceId:cc7c32639bfc5419: k8s ns default service l5d found
D 0721 04:07:10.349 UTC THREAD30 TraceId:cc7c32639bfc5419: k8s ns default service l5d port incoming found + /
D 0721 04:07:10.412 UTC THREAD32 TraceId:cc7c32639bfc5419: k8s lookup: /testing/9080/reviews /testing/9080/reviews
D 0721 04:07:10.415 UTC THREAD32 TraceId:cc7c32639bfc5419: k8s ns testing initial state: details, productpage, ratings, reviews
D 0721 04:07:10.416 UTC THREAD32 TraceId:cc7c32639bfc5419: k8s ns testing service reviews found
D 0721 04:07:10.417 UTC THREAD32 TraceId:cc7c32639bfc5419: k8s ns testing service reviews port :9080 found + /
E 0721 04:07:18.428 UTC THREAD26 TraceId:58a682fd28f6f21b: service failure
com.twitter.finagle.NoBrokersAvailableException: No hosts are available for /svc/details:9080, Dtab.base=[/srv=>/#/io.l5d.k8s/default/http;/host=>/srv;/svc=>/host;/host/world=>/srv/world-v1;/porthost=>/#/io.l5d.k8s/testing;/svc=>/$/io.buoyant.porthostPfx/porthost], Dtab.local=[]. Remote Info: Not Available

E 0721 04:07:22.634 UTC THREAD30 TraceId:cccecb1f9520dadf: service failure
com.twitter.finagle.NoBrokersAvailableException: No hosts are available for /svc/details:9080, Dtab.base=[/srv=>/#/io.l5d.k8s/default/http;/host=>/srv;/svc=>/host;/host/world=>/srv/world-v1;/porthost=>/#/io.l5d.k8s/testing;/svc=>/$/io.buoyant.porthostPfx/porthost], Dtab.local=[]. Remote Info: Not Available

E 0721 04:07:31.814 UTC THREAD32 TraceId:9cd6323185cf9b25: service failure
com.twitter.finagle.NoBrokersAvailableException: No hosts are available for /svc/details:9080, Dtab.base=[/srv=>/#/io.l5d.k8s/default/http;/host=>/srv;/svc=>/host;/host/world=>/srv/world-v1;/porthost=>/#/io.l5d.k8s/testing;/svc=>/$/io.buoyant.porthostPfx/porthost], Dtab.local=[]. Remote Info: Not Available

E 0721 04:07:37.583 UTC THREAD26 TraceId:d068ae6099c64f48: service failure
com.twitter.finagle.NoBrokersAvailableException: No hosts are available for /svc/details:9080, Dtab.base=[/srv=>/#/io.l5d.k8s/default/http;/host=>/srv;/svc=>/host;/host/world=>/srv/world-v1;/porthost=>/#/io.l5d.k8s/testing;/svc=>/$/io.buoyant.porthostPfx/porthost], Dtab.local=[]. Remote Info: Not Available

D 0721 04:07:43.745 UTC THREAD10: UsageMessage(Some(331cc276-4931-4dbc-a952-99a5efc568b5),Some(linkerd-examples-daemonset-zipkin),Some(1.1.2),None,Some(Linux),Some(4.4.0-1022-aws),Some(2017-07-21T04:06Z),List(Router(Some(http),Some(default),List(),List(io.l5d.k8s.daemonset)), Router(Some(http),Some(default),List(),List(io.l5d.k8s.localnode))),List(io.l5d.k8s),List(Counter(Some(srv_requests),Some(12)), Counter(Some(srv_requests),Some(7))),List(Gauge(Some(jvm_mem),Some(7.8398096E7)), Gauge(Some(jvm/gc/msec),Some(7.8398128E7)), Gauge(Some(jvm/uptime),Some(79645.0)), Gauge(Some(jvm/num_cpus),Some(2.0))))
D 0721 04:07:44.783 UTC THREAD29:
E 0721 04:07:46.293 UTC THREAD31 TraceId:0c91d1cbd8f7dea3: service failure
com.twitter.finagle.NoBrokersAvailableException: No hosts are available for /svc/details:9080, Dtab.base=[/srv=>/#/io.l5d.k8s/default/http;/host=>/srv;/svc=>/host;/host/world=>/srv/world-v1;/porthost=>/#/io.l5d.k8s/testing;/svc=>/$/io.buoyant.porthostPfx/porthost], Dtab.local=[]. Remote Info: Not Available

Based on those logs it looks like the details service doesn’t have a port 9080. Try running:

kubectl -n testing describe svc/details
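
If the service is wired up correctly, the output should show the named port and at least one endpoint, roughly like this (exact formatting varies by kubectl version; addresses are placeholders):

Name:              details
Namespace:         testing
Selector:          app=details
Type:              ClusterIP
Port:              http  9080/TCP
Endpoints:         10.244.x.x:9080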

Nothing has changed in the application configuration. After undeploying and redeploying, everything is OK.

When I tried to reproduce the same thing, this time the problem was in the reviews service.

I tried a couple of times and the same thing happened: sometimes details is missing, sometimes reviews is missing, and after restarting the entire app everything is OK again.

It looks like if the Linkerd daemon dies, recovery is a challenge.

These are my observations; I would appreciate your help if you can reproduce this at your end. I think I have already posted the env, apps & Linkerd YAML.

Kubernetes 1.7 using kubeadm.
CNI: Canal (Calico + Flannel)

What is the output of

kubectl -n testing get endpoints/details endpoints/reviews
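
If both services are healthy, each should list at least one pod IP on port 9080, roughly (addresses are placeholders):

NAME      ENDPOINTS                                             AGE
details   10.244.x.x:9080                                       1d
reviews   10.244.x.x:9080,10.244.x.x:9080,10.244.x.x:9080       1d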