How to configure linkerd 2.3.2 for different end points and how to generate graph topology

Hi,

I am new to Linkerd. We are just setting up Linkerd for our application. We have installed Linkerd 2.3.2 successfully and have also gone through the booksapp example. While configuring our application in Linkerd, we are not able to generate the graph topology.
Our configured service profile is:

### ServiceProfile for demo1.demo ###
apiVersion: linkerd.io/v1alpha1
kind: ServiceProfile
metadata:
  name: demo1.demo.svc.cluster.local
  namespace: demo
spec:
  # A service profile defines a list of routes. Linkerd can aggregate metrics
  # like request volume, latency, and success rate by route.
  routes:
  - name: '/test2'

    # Each route must define a condition. All requests that match the
    # condition will be counted as belonging to that route. If a request
    # matches more than one route, the first match wins.
    condition:
      # The simplest condition is a path regular expression.
      pathRegex: '/test2'

      # This is a condition that checks the request method.
      method: POST

Here I am calling /test1 from a REST client, which internally calls /test2 and /test3. We want this route to be represented in the graph topology, but we are not able to get it.

Also, when we run the command "linkerd routes svc/demo1 -n demo", we get the output below:

ROUTE       SERVICE   SUCCESS   RPS      LATENCY_P50   LATENCY_P95   LATENCY_P99
/test2      demo1       0.00%   0.0rps           0ms           0ms           0ms
[DEFAULT]   demo1       0.00%   0.0rps           0ms           0ms           0ms

Thanks in advance!

Asawari Kengar

Hi,

I’m also facing the same problem; please help resolve this issue.

Thanks
Sharique Ansari.

Hey @AsawariKengar – thanks for posting.

Regarding this output:

It looks like your service hasn’t received any traffic in the past minute, which is why all of the RPS numbers are at 0.0. Try sending it steady traffic and re-running the command. For more help getting route stats, check out the troubleshooting section on this page:
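One way to send steady traffic is a simple loop from a machine that can reach the service; this is just a sketch, and the node IP placeholder and path are taken from the setup described later in this thread:

```shell
# Send one POST per second to demo1's NodePort so that the route stats
# window (the last minute) always contains traffic.
while true; do
  curl -s -X POST http://<node-ip>:30007/test/test1 > /dev/null
  sleep 1
done
```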

Hey @klingerf,

1. We are getting the same response even after sending steady traffic for more than 2 minutes with the command below (see the output), but we are able to see live traffic in the Linkerd dashboard.

linkerd routes svc/demo1 -n demo

OUTPUT:
ROUTE       SERVICE   SUCCESS   RPS      LATENCY_P50   LATENCY_P95   LATENCY_P99
/test2      demo1       0.00%   0.0rps           0ms           0ms           0ms
[DEFAULT]   demo1       0.00%   0.0rps           0ms           0ms           0ms

Please find the dashboard screenshot below:

2. As you suggested, we have gone through the troubleshooting section of Linkerd. There they mention running the command below to find the culprit:

linkerd tap deploy/demo1 -n demo -o wide | grep req

We get the output below:

req id=12:0 proxy=in src=10.x.x.x:50206 dst=10.x.x.x:30007 tls=not_provided_by_remote :method=POST :authority=10.x.x.x:30007 :path=/test/test1 dst_res=deploy/demo1 dst_ns=demo
req id=12:1 proxy=out src=10.x.x.x:53508 dst=10.x.x.x:30008 tls=no_authority_in_http_request :method=POST :authority=10.x.x.x:30008 :path=/test/test2 src_res=deploy/demo1 src_ns=demo
rsp id=12:1 proxy=out src=10.x.x.x:53508 dst=10.x.x.x:30008 tls=no_authority_in_http_request :status=200 latency=10652µs src_res=deploy/demo1 src_ns=demo
end id=12:1 proxy=out src=10.x.x.x:53508 dst=10.x.x.x:30008 tls=no_authority_in_http_request duration=120µs response-length=14B src_res=deploy/demo1 src_ns=demo
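One detail worth double-checking in the tap output above: the requests carry the path /test/test2, while the service profile's pathRegex is '/test2'. Assuming Linkerd matches pathRegex against the full request path (which is how the service profile documentation describes it), '/test2' would not match '/test/test2', and those requests would be counted under [DEFAULT]. A quick anchored-match sketch of that behavior:

```shell
# Path taken from the tap output above:
observed="/test/test2"

# Emulate a full-path (anchored) regex match with grep -E:
if echo "$observed" | grep -Eq '^/test2$'; then
  echo "matches route /test2"
else
  echo "falls through to [DEFAULT]"   # this branch runs
fi
```

If that is the cause, a pathRegex of '/test/test2' (or a wider pattern such as '/test/.*') should match these requests.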

3. We are also not able to see the graph topology in the dashboard.

Thanks,
Asawari

@AsawariKengar Thanks for the additional info! I think I see what’s going on.

It looks like you haven’t injected the pod that’s sending traffic to the demo1 service, and that’s causing this command to not return any stats:

linkerd routes svc/demo1 -n demo

This is actually a bug in Linkerd, and I went ahead and opened the following issue:

In the meantime, you have a few workarounds:

  • you can inject the deployment that’s making requests to the demo1 service
  • you can modify the requests to demo1 to be fully qualified (e.g. instead of sending requests to demo1:30007, try sending them to demo1.demo.svc.cluster.local:30007)

Either of those approaches should start to populate route stats. If you take the first approach, then that will also ensure that the topology graph is properly drawn (only deployments that are injected with linkerd will show up in the topology graph). I also recommend checking out the topology graph that’s displayed at http://127.0.0.1:50750/namespaces/demo in your dashboard.
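For the first workaround, injecting the client deployment usually looks something like the following; the deployment name here is a placeholder for whichever workload is sending the requests:

```shell
# Re-apply the client deployment with the linkerd-proxy sidecar added:
kubectl get deploy <client-deployment> -n demo -o yaml \
  | linkerd inject - \
  | kubectl apply -f -
```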

Hi @klingerf,

1. For the 1st point: the request flows from demo1 -> demo2 -> demo3, so we have already injected demo1, demo2, and demo3. The demo1 service is called by some other client/REST client, e.g. the Postman REST client. So how do we inject that?

2. For the 2nd point: we modified demo2:30008 to demo2.demo.svc.cluster.local:30008, but we are still facing the same issue.

For clear understanding, please find below steps which we followed till now:

Here are kubernetes details:

1.Pod details:

NAME                     READY     STATUS    RESTARTS   AGE
demo1-5d6f94c768-jgpz9   2/2       Running   0          2h
demo2-746d876d74-8vhn5   2/2       Running   0          2h
demo3-58d854ddc9-kn6p5   2/2       Running   0          4h

2.Deployment details:

NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
demo1     1         1         1            1           5d
demo2     1         1         1            1           5d
demo3     1         1         1            1           5d

3.Service details

NAME      TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)           AGE
demo1     NodePort   10.104.71.228    <none>        30007:30007/TCP   5d
demo2     NodePort   10.107.100.253   <none>        30008:30008/TCP   5d
demo3     NodePort   10.97.102.30     <none>        30009:30009/TCP   5d

4. Docker images:
REPOSITORY   TAG   IMAGE ID       CREATED      SIZE
test3        v1    0d689892f6b2   5 days ago   810MB
test2        v1    47784c89a2ea   5 days ago   810MB
test1        v1    bd23666020af   5 days ago   810MB

Linkerd Details:

1.Linkerd version

Client version: stable-2.3.2
Server version: stable-2.3.2

2. For the command "kubectl -n linkerd get deploy" we get the output below:

NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
linkerd-controller     1         1         1            1           11d
linkerd-grafana        1         1         1            1           11d
linkerd-identity       1         1         1            1           11d
linkerd-prometheus     1         1         1            1           11d
linkerd-sp-validator   1         1         1            1           11d
linkerd-web            1         1         1            1           11d

3. linkerd -n demo stat deploy
OUTPUT:
NAME    MESHED   SUCCESS   RPS      LATENCY_P50   LATENCY_P95   LATENCY_P99   TCP_CONN
demo1      1/1   100.00%   0.1rps           0ms           0ms           0ms          2
demo2      1/1   100.00%   0.1rps           0ms           0ms           0ms          2
demo3      1/1   100.00%   0.1rps           0ms           0ms           0ms          2

4. Deployment/service yaml

5. demo1.yml for reference:

apiVersion: apps/v1
kind: Deployment
metadata:
   name: demo1
   labels:
     app: demo1
spec:
   replicas: 1
   selector:
     matchLabels:
       app: demo1
   template:
     metadata:
        labels:
          app: demo1
     spec:
        containers:
        - name: demo1
          image: test1:v1
          env:
          - name: demo2
            value: http://demo2:30008
          volumeMounts:
          - name: demo1-log-dir
            mountPath: /opt/logs
          ports:
          - containerPort: 30007
        volumes:
        - name: demo1-log-dir
          hostPath:
            path: /log/g2c-logs/demo1-logs


---
apiVersion: v1
kind: Service
metadata:
  name: demo1
  labels:
    app: demo1
spec:
  ports:
  - protocol: TCP
    port: 30007
    targetPort: 30007
    nodePort: 30007
  selector:
    app: demo1
  type: NodePort

note: similarly, we have configured demo2.yml and demo3.yml

6.Service profile yaml:

### ServiceProfile for demo1.demo ###
apiVersion: linkerd.io/v1alpha1
kind: ServiceProfile
metadata:
  name: demo1.demo.svc.cluster.local
  namespace: demo
spec:
      # A service profile defines a list of routes.  Linkerd can aggregate metrics
      # like request volume, latency, and success rate by route.
      routes:
      - name: '/test2'

        # Each route must define a condition.  All requests that match the
        # condition will be counted as belonging to that route.  If a request
        # matches more than one route, the first match wins.
        condition:
          # The simplest condition is a path regular expression.
          pathRegex: '/test2'

          # This is a condition that checks the request method.
          method: POST

          # If more than one condition field is set, all of them must be satisfied.
          # This is equivalent to using the 'all' condition:
          # all:
          # - pathRegex: '/authors/\d+'
          # - method: POST

          # Conditions can be combined using 'all', 'any', and 'not'.
          # any:
          # - all:
          #   - method: POST
          #   - pathRegex: '/authors/\d+'
          # - all:
          #   - not:
          #       method: DELETE
          #   - pathRegex: /info.txt

        # A route may be marked as retryable.  This indicates that requests to this
        # route are always safe to retry and will cause the proxy to retry failed
        # requests on this route whenever possible.
        # isRetryable: true

        # A route may optionally define a list of response classes which describe
        # how responses from this route will be classified.
        responseClasses:

        # Each response class must define a condition.  All responses from this
        # route that match the condition will be classified as this response class.
        - condition:
            # The simplest condition is a HTTP status code range.
            status:
              min: 500
              max: 599

            # Specifying only one of min or max matches just that one status code.
            # status:
            #   min: 404 # This matches 404s only.

            # Conditions can be combined using 'all', 'any', and 'not'.
            # all:
            # - status:
            #     min: 500
            #     max: 599
            # - not:
            #     status:
            #       min: 503

          # The response class defines whether responses should be counted as
          # successes or failures.
          isFailure: true

        # A route can define a request timeout.  Any requests to this route that
        # exceed the timeout will be canceled.  If unspecified, the default timeout
        # is '10s' (ten seconds).
        # timeout: 250ms

      # A service profile can also define a retry budget.  This specifies the
      # maximum total number of retries that should be sent to this service as a
      # ratio of the original request volume.
      # retryBudget:
      #   The retryRatio is the maximum ratio of retries requests to original
      #   requests.  A retryRatio of 0.2 means that retries may add at most an
      #   additional 20% to the request load.
      #   retryRatio: 0.2

      #   This is an allowance of retries per second in addition to those allowed
      #   by the retryRatio.  This allows retries to be performed, when the request
      #   rate is very low.
      #   minRetriesPerSecond: 10

      #   This duration indicates for how long requests should be considered for the
      #   purposes of calculating the retryRatio.  A higher value considers a larger
      #   window and therefore allows burstier retries.
      #   ttl: 10s

note: similarly, we have configured service profiles for demo2 and demo3

7. Please find the screenshot below, where we are not able to find the demo namespace in the debug section. Also, can you please explain what an endpoint is in Linkerd?

For the "linkerd endpoints -n demo" command, we get "no endpoints found".

Please look into the above configuration and suggest anything we have missed.

Thanks in advance,
Asawari

@AsawariKengar

Thanks for sending this information.

What is the client that is making the call to the demo1 service? It sounds like it’s a client that is external to the cluster altogether. Are you using an ingress to route traffic to demo1? If so, you can inject the Linkerd proxy into that ingress as well.

This is certainly unexpected behavior. I’d like to take a look at the injected yaml files for the deployments. Can you run the following command and post the output:

kubectl get deploy demo1 demo2 demo3 -n demo -o yaml

Another troubleshooting step is to configure the linkerd-proxy container to write detailed logs by updating the deployment:

kubectl edit deploy demo2 -n demo

Under the env section, update the LINKERD2_PROXY_LOG setting by changing linkerd2_proxy=info to linkerd2_proxy=debug:

        - env:
          - name: LINKERD2_PROXY_LOG
            value: warn,linkerd2_proxy=debug
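Once the pods restart with the new log level, the proxy's debug output can be followed with something like:

```shell
# Tail the linkerd-proxy sidecar's logs for the demo2 deployment:
kubectl logs -f deploy/demo2 -c linkerd-proxy -n demo
```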

Thanks,
Charles

@cpretzer

Thanks for giving us the information.

As the output exceeds the max character count, I will reply in 2 parts.

PART 1:

As requested, below are the required outputs.

  1. kubectl get deploy demo1 -o yaml -n demo

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      annotations:
        deployment.kubernetes.io/revision: "12"
        kubectl.kubernetes.io/last-applied-configuration: |
          {"apiVersion":"extensions/v1beta1","kind":"Deployment","metadata":{"annotations":{"deployment.kubernetes.io/revision":"10"},"creationTimestamp":"2019-06-21T10:21:44Z","generation":10,"labels":{"app":"demo1"},"name":"demo1","namespace":"demo","resourceVersion":"1299084","selfLink":"/apis/extensions/v1beta1/namespaces/demo/deployments/demo1","uid":"5e5f661e-940e-11e9-8686-02001701f16d"},"spec":{"progressDeadlineSeconds":600,"replicas":1,"revisionHistoryLimit":10,"selector":{"matchLabels":{"app":"demo1"}},"strategy":{"rollingUpdate":{"maxSurge":"25%","maxUnavailable":"25%"},"type":"RollingUpdate"},"template":{"metadata":{"annotations":{"linkerd.io/created-by":"linkerd/cli stable-2.3.2","linkerd.io/identity-mode":"default","linkerd.io/proxy-version":"stable-2.3.2"},"creationTimestamp":null,"labels":{"app":"demo1","linkerd.io/control-plane-ns":"linkerd","linkerd.io/proxy-deployment":"demo1"}},"spec":{"containers":[{"env":[{"name":"DEMO2_URL","value":"demo2.demo.svc.cluster.local:30008"}],"image":"test1:v1","imagePullPolicy":"IfNotPresent","name":"demo1","ports":[{"containerPort":30007,"protocol":"TCP"}],"resources":{},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","volumeMounts":[{"mountPath":"/opt/logs","name":"demo1-log-dir"}]},{"env":[{"name":"LINKERD2_PROXY_LOG","value":"warn,linkerd2_proxy=info"},{"name":"LINKERD2_PROXY_DESTINATION_SVC_ADDR","value":"linkerd-destination.linkerd.svc.cluster.local:8086"},{"name":"LINKERD2_PROXY_CONTROL_LISTEN_ADDR","value":"0.0.0.0:4190"},{"name":"LINKERD2_PROXY_ADMIN_LISTEN_ADDR","value":"0.0.0.0:4191"},{"name":"LINKERD2_PROXY_OUTBOUND_LISTEN_ADDR","value":"127.0.0.1:4140"},{"name":"LINKERD2_PROXY_INBOUND_LISTEN_ADDR","value":"0.0.0.0:4143"},{"name":"LINKERD2_PROXY_DESTINATION_PROFILE_SUFFIXES","value":"svc.cluster.local."},{"name":"LINKERD2_PROXY_INBOUND_ACCEPT_KEEPALIVE","value":"10000ms"},{"name":"LINKERD2_PROXY_OUTBOUND_CONNECT_KEEPALIVE","value":"10000ms"},{"name":"_pod_ns","valueFrom
":{"fieldRef":{"fieldPath":"metadata.namespace"}}},{"name":"LINKERD2_PROXY_DESTINATION_CONTEXT","value":"ns:$(_pod_ns)"},{"name":"LINKERD2_PROXY_IDENTITY_DIR","value":"/var/run/linkerd/identity/end-entity"},{"name":"LINKERD2_PROXY_IDENTITY_TRUST_ANCHORS","value":"-----BEGIN CERTIFICATE-----\nMIIBgzCCASmgAwIBAgIBATAKBggqhkjOPQQDAjApMScwJQYDVQQDEx5pZGVudGl0\neS5saW5rZXJkLmNsdXN0ZXIubG9jYWwwHhcNMTkwNjE0MTQyMDMxWhcNMjAwNjEz\nMTQyMDUxWjApMScwJQYDVQQDEx5pZGVudGl0eS5saW5rZXJkLmNsdXN0ZXIubG9j\nYWwwWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAASaab6QQvfH3LLb2ZGDSS/UJhOb\nAxMV1dDaxKDr31+cp7YphN1prNj21ilztjJ0to1x5FNpcIrWRcr3mr3Gr4lOo0Iw\nQDAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUFBwMC\nMA8GA1UdEwEB/wQFMAMBAf8wCgYIKoZIzj0EAwIDSAAwRQIgKA4HQggmoSxsXBGm\nPJMHA/vslXTxLvfQ/NhlvEkIGRcCIQCC82guXHH3KROhoObkRFtSRNtqxRr3vPXd\nO4HEyA0yUg==\n-----END CERTIFICATE-----\n"},{"name":"LINKERD2_PROXY_IDENTITY_TOKEN_FILE","value":"/var/run/secrets/kubernetes.io/serviceaccount/token"},{"name":"LINKERD2_PROXY_IDENTITY_SVC_ADDR","value":"linkerd-identity.linkerd.svc.cluster.local:8080"},{"name":"_pod_sa","valueFrom":{"fieldRef":{"fieldPath":"spec.serviceAccountName"}}},{"name":"_l5d_ns","value":"linkerd"},{"name":"_l5d_trustdomain","value":"cluster.local"},{"name":"LINKERD2_PROXY_IDENTITY_LOCAL_NAME","value":"$(_pod_sa).$(_pod_ns).serviceaccount.identity.$(_l5d_ns).$(_l5d_trustdomain)"},{"name":"LINKERD2_PROXY_IDENTITY_SVC_NAME","value":"linkerd-identity.$(_l5d_ns).serviceaccount.identity.$(_l5d_ns).$(_l5d_trustdomain)"},{"name":"LINKERD2_PROXY_DESTINATION_SVC_NAME","value":"linkerd-controller.$(_l5d_ns).serviceaccount.identity.$(_l5d_ns).$(_l5d_trustdomain)"}],"image":"gcr.io/linkerd-io/proxy:stable-2.3.2","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"/metrics","port":4191},"initialDelaySeconds":10},"name":"linkerd-proxy","ports":[{"containerPort":4143,"name":"linkerd-proxy"},{"containerPort":4191,"name":"linkerd-admin"}],"readinessProbe":{"httpGet":{"path":"/ready","por
t":4191},"initialDelaySeconds":2},"resources":{},"securityContext":{"runAsUser":2102},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/linkerd/identity/end-entity","name":"linkerd-identity-end-entity"}]}],"dnsPolicy":"ClusterFirst","initContainers":[{"args":["--incoming-proxy-port","4143","--outgoing-proxy-port","4140","--proxy-uid","2102","--inbound-ports-to-ignore","4190,4191"],"image":"gcr.io/linkerd-io/proxy-init:stable-2.3.2","imagePullPolicy":"IfNotPresent","name":"linkerd-init","resources":{},"securityContext":{"capabilities":{"add":["NET_ADMIN"]},"privileged":false,"runAsNonRoot":false,"runAsUser":0},"terminationMessagePolicy":"FallbackToLogsOnError"}],"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30,"volumes":[{"hostPath":{"path":"/log/g2c-logs/demo1-logs","type":""},"name":"demo1-log-dir"},{"emptyDir":{"medium":"Memory"},"name":"linkerd-identity-end-entity"}]}}},"status":{"availableReplicas":1,"conditions":[{"lastTransitionTime":"2019-06-21T10:21:47Z","lastUpdateTime":"2019-06-21T10:21:47Z","message":"Deployment has minimum availability.","reason":"MinimumReplicasAvailable","status":"True","type":"Available"},{"lastTransitionTime":"2019-06-21T10:21:44Z","lastUpdateTime":"2019-06-27T09:52:27Z","message":"ReplicaSet \"demo1-7c6c58fdfc\" has successfully progressed.","reason":"NewReplicaSetAvailable","status":"True","type":"Progressing"}],"observedGeneration":10,"readyReplicas":1,"replicas":1,"updatedReplicas":1}}
      creationTimestamp: 2019-06-21T10:21:44Z
      generation: 12
      labels:
        app: demo1
      name: demo1
      namespace: demo
      resourceVersion: "1303131"
      selfLink: /apis/extensions/v1beta1/namespaces/demo/deployments/demo1
      uid: 5e5f661e-940e-11e9-8686-02001701f16d
    spec:
      progressDeadlineSeconds: 600
      replicas: 1
      revisionHistoryLimit: 10
      selector:
        matchLabels:
          app: demo1
      strategy:
        rollingUpdate:
          maxSurge: 25%
          maxUnavailable: 25%
        type: RollingUpdate
      template:
        metadata:
          annotations:
            linkerd.io/created-by: linkerd/cli stable-2.3.2
            linkerd.io/identity-mode: default
            linkerd.io/proxy-version: stable-2.3.2
          creationTimestamp: null
          labels:
            app: demo1
            linkerd.io/control-plane-ns: linkerd
            linkerd.io/proxy-deployment: demo1
        spec:
          containers:
          - env:
            - name: DEMO2_URL
              value: demo2.demo.svc.cluster.local:30008
            image: test1:v1
            imagePullPolicy: IfNotPresent
            name: demo1
            ports:
            - containerPort: 30007
              protocol: TCP
            resources: {}
            terminationMessagePath: /dev/termination-log
            terminationMessagePolicy: File
            volumeMounts:
            - mountPath: /opt/logs
              name: demo1-log-dir
          - env:
            - name: LINKERD2_PROXY_LOG
              value: warn,linkerd2_proxy=debug
            - name: LINKERD2_PROXY_DESTINATION_SVC_ADDR
              value: linkerd-destination.linkerd.svc.cluster.local:8086
            - name: LINKERD2_PROXY_CONTROL_LISTEN_ADDR
              value: 0.0.0.0:4190
            - name: LINKERD2_PROXY_ADMIN_LISTEN_ADDR
              value: 0.0.0.0:4191
            - name: LINKERD2_PROXY_OUTBOUND_LISTEN_ADDR
              value: 127.0.0.1:4140
            - name: LINKERD2_PROXY_INBOUND_LISTEN_ADDR
              value: 0.0.0.0:4143
            - name: LINKERD2_PROXY_DESTINATION_PROFILE_SUFFIXES
              value: svc.cluster.local.
            - name: LINKERD2_PROXY_INBOUND_ACCEPT_KEEPALIVE
              value: 10000ms
            - name: LINKERD2_PROXY_OUTBOUND_CONNECT_KEEPALIVE
              value: 10000ms
            - name: _pod_ns
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
            - name: LINKERD2_PROXY_DESTINATION_CONTEXT
              value: ns:$(_pod_ns)
            - name: LINKERD2_PROXY_IDENTITY_DIR
              value: /var/run/linkerd/identity/end-entity
            - name: LINKERD2_PROXY_IDENTITY_TRUST_ANCHORS
              value: |
                -----BEGIN CERTIFICATE-----
                MIIBgzCCASmgAwIBAgIBATAKBggqhkjOPQQDAjApMScwJQYDVQQDEx5pZGVudGl0
                eS5saW5rZXJkLmNsdXN0ZXIubG9jYWwwHhcNMTkwNjE0MTQyMDMxWhcNMjAwNjEz
                MTQyMDUxWjApMScwJQYDVQQDEx5pZGVudGl0eS5saW5rZXJkLmNsdXN0ZXIubG9j
                YWwwWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAASaab6QQvfH3LLb2ZGDSS/UJhOb
                AxMV1dDaxKDr31+cp7YphN1prNj21ilztjJ0to1x5FNpcIrWRcr3mr3Gr4lOo0Iw
                QDAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUFBwMC
                MA8GA1UdEwEB/wQFMAMBAf8wCgYIKoZIzj0EAwIDSAAwRQIgKA4HQggmoSxsXBGm
                PJMHA/vslXTxLvfQ/NhlvEkIGRcCIQCC82guXHH3KROhoObkRFtSRNtqxRr3vPXd
                O4HEyA0yUg==
                -----END CERTIFICATE-----
            - name: LINKERD2_PROXY_IDENTITY_TOKEN_FILE
              value: /var/run/secrets/kubernetes.io/serviceaccount/token
            - name: LINKERD2_PROXY_IDENTITY_SVC_ADDR
              value: linkerd-identity.linkerd.svc.cluster.local:8080
            - name: _pod_sa
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.serviceAccountName
            - name: _l5d_ns
              value: linkerd
            - name: _l5d_trustdomain
              value: cluster.local
            - name: LINKERD2_PROXY_IDENTITY_LOCAL_NAME
              value: $(_pod_sa).$(_pod_ns).serviceaccount.identity.$(_l5d_ns).$(_l5d_trustdomain)
            - name: LINKERD2_PROXY_IDENTITY_SVC_NAME
              value: linkerd-identity.$(_l5d_ns).serviceaccount.identity.$(_l5d_ns).$(_l5d_trustdomain)
            - name: LINKERD2_PROXY_DESTINATION_SVC_NAME
              value: linkerd-controller.$(_l5d_ns).serviceaccount.identity.$(_l5d_ns).$(_l5d_trustdomain)
            image: gcr.io/linkerd-io/proxy:stable-2.3.2
            imagePullPolicy: IfNotPresent
            livenessProbe:
              failureThreshold: 3
              httpGet:
                path: /metrics
                port: 4191
                scheme: HTTP
              initialDelaySeconds: 10
              periodSeconds: 10
              successThreshold: 1
              timeoutSeconds: 1
            name: linkerd-proxy
            ports:
            - containerPort: 4143
              name: linkerd-proxy
              protocol: TCP
            - containerPort: 4191
              name: linkerd-admin
              protocol: TCP
            readinessProbe:
              failureThreshold: 3
              httpGet:
                path: /ready
                port: 4191
                scheme: HTTP
              initialDelaySeconds: 2
              periodSeconds: 10
              successThreshold: 1
              timeoutSeconds: 1
            resources: {}
            securityContext:
              runAsUser: 2102
            terminationMessagePath: /dev/termination-log
            terminationMessagePolicy: FallbackToLogsOnError
            volumeMounts:
            - mountPath: /var/run/linkerd/identity/end-entity
              name: linkerd-identity-end-entity
          dnsPolicy: ClusterFirst
          initContainers:
          - args:
            - --incoming-proxy-port
            - "4143"
            - --outgoing-proxy-port
            - "4140"
            - --proxy-uid
            - "2102"
            - --inbound-ports-to-ignore
            - 4190,4191
            image: gcr.io/linkerd-io/proxy-init:stable-2.3.2
            imagePullPolicy: IfNotPresent
            name: linkerd-init
            resources: {}
            securityContext:
              capabilities:
                add:
                - NET_ADMIN
              privileged: false
              runAsNonRoot: false
              runAsUser: 0
            terminationMessagePath: /dev/termination-log
            terminationMessagePolicy: FallbackToLogsOnError
          restartPolicy: Always
          schedulerName: default-scheduler
          securityContext: {}
          terminationGracePeriodSeconds: 30
          volumes:
          - hostPath:
              path: /log/g2c-logs/demo1-logs
              type: ""
            name: demo1-log-dir
          - emptyDir:
              medium: Memory
            name: linkerd-identity-end-entity
    status:
      availableReplicas: 1
      conditions:
      - lastTransitionTime: 2019-06-21T10:21:47Z
        lastUpdateTime: 2019-06-21T10:21:47Z
        message: Deployment has minimum availability.
        reason: MinimumReplicasAvailable
        status: "True"
        type: Available
      - lastTransitionTime: 2019-06-21T10:21:44Z
        lastUpdateTime: 2019-06-27T10:44:20Z
        message: ReplicaSet "demo1-c779f5b5" has successfully progressed.
        reason: NewReplicaSetAvailable
        status: "True"
        type: Progressing
      observedGeneration: 12
      readyReplicas: 1
      replicas: 1
      updatedReplicas: 1
    
  2. kubectl get deploy demo2 -o yaml -n demo

     apiVersion: extensions/v1beta1
     kind: Deployment
     metadata:
       annotations:
         deployment.kubernetes.io/revision: "12"
         kubectl.kubernetes.io/last-applied-configuration: |
           {"apiVersion":"extensions/v1beta1","kind":"Deployment","metadata":{"annotations":{"deployment.kubernetes.io/revision":"10"},"creationTimestamp":"2019-06-21T10:21:50Z","generation":10,"labels":{"app":"demo2"},"name":"demo2","namespace":"demo","resourceVersion":"1299125","selfLink":"/apis/extensions/v1beta1/namespaces/demo/deployments/demo2","uid":"62002583-940e-11e9-8686-02001701f16d"},"spec":{"progressDeadlineSeconds":600,"replicas":1,"revisionHistoryLimit":10,"selector":{"matchLabels":{"app":"demo2"}},"strategy":{"rollingUpdate":{"maxSurge":"25%","maxUnavailable":"25%"},"type":"RollingUpdate"},"template":{"metadata":{"annotations":{"linkerd.io/created-by":"linkerd/cli stable-2.3.2","linkerd.io/identity-mode":"default","linkerd.io/proxy-version":"stable-2.3.2"},"creationTimestamp":null,"labels":{"app":"demo2","linkerd.io/control-plane-ns":"linkerd","linkerd.io/proxy-deployment":"demo2"}},"spec":{"containers":[{"env":[{"name":"DEMO3_URL","value":"demo3.demo.svc.cluster.local:30009"}],"image":"test2:v1","imagePullPolicy":"IfNotPresent","name":"demo2","ports":[{"containerPort":30008,"protocol":"TCP"}],"resources":{},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","volumeMounts":[{"mountPath":"/opt/logs","name":"demo1-log-dir"}]},{"env":[{"name":"LINKERD2_PROXY_LOG","value":"warn,linkerd2_proxy=info"},{"name":"LINKERD2_PROXY_DESTINATION_SVC_ADDR","value":"linkerd-destination.linkerd.svc.cluster.local:8086"},{"name":"LINKERD2_PROXY_CONTROL_LISTEN_ADDR","value":"0.0.0.0:4190"},{"name":"LINKERD2_PROXY_ADMIN_LISTEN_ADDR","value":"0.0.0.0:4191"},{"name":"LINKERD2_PROXY_OUTBOUND_LISTEN_ADDR","value":"127.0.0.1:4140"},{"name":"LINKERD2_PROXY_INBOUND_LISTEN_ADDR","value":"0.0.0.0:4143"},{"name":"LINKERD2_PROXY_DESTINATION_PROFILE_SUFFIXES","value":"svc.cluster.local."},{"name":"LINKERD2_PROXY_INBOUND_ACCEPT_KEEPALIVE","value":"10000ms"},{"name":"LINKERD2_PROXY_OUTBOUND_CONNECT_KEEPALIVE","value":"10000ms"},{"name":"_pod_ns","valueFro
m":{"fieldRef":{"fieldPath":"metadata.namespace"}}},{"name":"LINKERD2_PROXY_DESTINATION_CONTEXT","value":"ns:$(_pod_ns)"},{"name":"LINKERD2_PROXY_IDENTITY_DIR","value":"/var/run/linkerd/identity/end-entity"},{"name":"LINKERD2_PROXY_IDENTITY_TRUST_ANCHORS","value":"-----BEGIN CERTIFICATE-----\nMIIBgzCCASmgAwIBAgIBATAKBggqhkjOPQQDAjApMScwJQYDVQQDEx5pZGVudGl0\neS5saW5rZXJkLmNsdXN0ZXIubG9jYWwwHhcNMTkwNjE0MTQyMDMxWhcNMjAwNjEz\nMTQyMDUxWjApMScwJQYDVQQDEx5pZGVudGl0eS5saW5rZXJkLmNsdXN0ZXIubG9j\nYWwwWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAASaab6QQvfH3LLb2ZGDSS/UJhOb\nAxMV1dDaxKDr31+cp7YphN1prNj21ilztjJ0to1x5FNpcIrWRcr3mr3Gr4lOo0Iw\nQDAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUFBwMC\nMA8GA1UdEwEB/wQFMAMBAf8wCgYIKoZIzj0EAwIDSAAwRQIgKA4HQggmoSxsXBGm\nPJMHA/vslXTxLvfQ/NhlvEkIGRcCIQCC82guXHH3KROhoObkRFtSRNtqxRr3vPXd\nO4HEyA0yUg==\n-----END CERTIFICATE-----\n"},{"name":"LINKERD2_PROXY_IDENTITY_TOKEN_FILE","value":"/var/run/secrets/kubernetes.io/serviceaccount/token"},{"name":"LINKERD2_PROXY_IDENTITY_SVC_ADDR","value":"linkerd-identity.linkerd.svc.cluster.local:8080"},{"name":"_pod_sa","valueFrom":{"fieldRef":{"fieldPath":"spec.serviceAccountName"}}},{"name":"_l5d_ns","value":"linkerd"},{"name":"_l5d_trustdomain","value":"cluster.local"},{"name":"LINKERD2_PROXY_IDENTITY_LOCAL_NAME","value":"$(_pod_sa).$(_pod_ns).serviceaccount.identity.$(_l5d_ns).$(_l5d_trustdomain)"},{"name":"LINKERD2_PROXY_IDENTITY_SVC_NAME","value":"linkerd-identity.$(_l5d_ns).serviceaccount.identity.$(_l5d_ns).$(_l5d_trustdomain)"},{"name":"LINKERD2_PROXY_DESTINATION_SVC_NAME","value":"linkerd-controller.$(_l5d_ns).serviceaccount.identity.$(_l5d_ns).$(_l5d_trustdomain)"}],"image":"gcr.io/linkerd-io/proxy:stable-2.3.2","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"/metrics","port":4191},"initialDelaySeconds":10},"name":"linkerd-proxy","ports":[{"containerPort":4143,"name":"linkerd-proxy"},{"containerPort":4191,"name":"linkerd-admin"}],"readinessProbe":{"httpGet":{"path":"/ready","po
rt":4191},"initialDelaySeconds":2},"resources":{},"securityContext":{"runAsUser":2102},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/linkerd/identity/end-entity","name":"linkerd-identity-end-entity"}]}],"dnsPolicy":"ClusterFirst","initContainers":[{"args":["--incoming-proxy-port","4143","--outgoing-proxy-port","4140","--proxy-uid","2102","--inbound-ports-to-ignore","4190,4191"],"image":"gcr.io/linkerd-io/proxy-init:stable-2.3.2","imagePullPolicy":"IfNotPresent","name":"linkerd-init","resources":{},"securityContext":{"capabilities":{"add":["NET_ADMIN"]},"privileged":false,"runAsNonRoot":false,"runAsUser":0},"terminationMessagePolicy":"FallbackToLogsOnError"}],"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30,"volumes":[{"hostPath":{"path":"/log/g2c-logs/demo1-logs","type":""},"name":"demo1-log-dir"},{"emptyDir":{"medium":"Memory"},"name":"linkerd-identity-end-entity"}]}}},"status":{"availableReplicas":1,"conditions":[{"lastTransitionTime":"2019-06-21T10:21:52Z","lastUpdateTime":"2019-06-21T10:21:52Z","message":"Deployment has minimum availability.","reason":"MinimumReplicasAvailable","status":"True","type":"Available"},{"lastTransitionTime":"2019-06-21T10:21:50Z","lastUpdateTime":"2019-06-27T09:52:33Z","message":"ReplicaSet \"demo2-c8554b6b6\" has successfully progressed.","reason":"NewReplicaSetAvailable","status":"True","type":"Progressing"}],"observedGeneration":10,"readyReplicas":1,"replicas":1,"updatedReplicas":1}}
       creationTimestamp: 2019-06-21T10:21:50Z
       generation: 12
       labels:
         app: demo2
       name: demo2
       namespace: demo
       resourceVersion: "1303201"
       selfLink: /apis/extensions/v1beta1/namespaces/demo/deployments/demo2
       uid: 62002583-940e-11e9-8686-02001701f16d
     spec:
       progressDeadlineSeconds: 600
       replicas: 1
       revisionHistoryLimit: 10
       selector:
         matchLabels:
           app: demo2
       strategy:
         rollingUpdate:
           maxSurge: 25%
           maxUnavailable: 25%
         type: RollingUpdate
       template:
         metadata:
           annotations:
             linkerd.io/created-by: linkerd/cli stable-2.3.2
             linkerd.io/identity-mode: default
             linkerd.io/proxy-version: stable-2.3.2
           creationTimestamp: null
           labels:
             app: demo2
             linkerd.io/control-plane-ns: linkerd
             linkerd.io/proxy-deployment: demo2
         spec:
           containers:
           - env:
             - name: DEMO3_URL
               value: demo3.demo.svc.cluster.local:30009
             image: test2:v1
             imagePullPolicy: IfNotPresent
             name: demo2
             ports:
             - containerPort: 30008
               protocol: TCP
             resources: {}
             terminationMessagePath: /dev/termination-log
             terminationMessagePolicy: File
             volumeMounts:
             - mountPath: /opt/logs
               name: demo1-log-dir
           - env:
             - name: LINKERD2_PROXY_LOG
               value: warn,linkerd2_proxy=debug
             - name: LINKERD2_PROXY_DESTINATION_SVC_ADDR
               value: linkerd-destination.linkerd.svc.cluster.local:8086
             - name: LINKERD2_PROXY_CONTROL_LISTEN_ADDR
               value: 0.0.0.0:4190
             - name: LINKERD2_PROXY_ADMIN_LISTEN_ADDR
               value: 0.0.0.0:4191
             - name: LINKERD2_PROXY_OUTBOUND_LISTEN_ADDR
               value: 127.0.0.1:4140
             - name: LINKERD2_PROXY_INBOUND_LISTEN_ADDR
               value: 0.0.0.0:4143
             - name: LINKERD2_PROXY_DESTINATION_PROFILE_SUFFIXES
               value: svc.cluster.local.
             - name: LINKERD2_PROXY_INBOUND_ACCEPT_KEEPALIVE
               value: 10000ms
             - name: LINKERD2_PROXY_OUTBOUND_CONNECT_KEEPALIVE
               value: 10000ms
             - name: _pod_ns
               valueFrom:
                 fieldRef:
                   apiVersion: v1
                   fieldPath: metadata.namespace
             - name: LINKERD2_PROXY_DESTINATION_CONTEXT
               value: ns:$(_pod_ns)
             - name: LINKERD2_PROXY_IDENTITY_DIR
               value: /var/run/linkerd/identity/end-entity
             - name: LINKERD2_PROXY_IDENTITY_TRUST_ANCHORS
               value: |
                 -----BEGIN CERTIFICATE-----
                 MIIBgzCCASmgAwIBAgIBATAKBggqhkjOPQQDAjApMScwJQYDVQQDEx5pZGVudGl0
                 eS5saW5rZXJkLmNsdXN0ZXIubG9jYWwwHhcNMTkwNjE0MTQyMDMxWhcNMjAwNjEz
                 MTQyMDUxWjApMScwJQYDVQQDEx5pZGVudGl0eS5saW5rZXJkLmNsdXN0ZXIubG9j
                 YWwwWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAASaab6QQvfH3LLb2ZGDSS/UJhOb
                 AxMV1dDaxKDr31+cp7YphN1prNj21ilztjJ0to1x5FNpcIrWRcr3mr3Gr4lOo0Iw
                 QDAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUFBwMC
                 MA8GA1UdEwEB/wQFMAMBAf8wCgYIKoZIzj0EAwIDSAAwRQIgKA4HQggmoSxsXBGm
                 PJMHA/vslXTxLvfQ/NhlvEkIGRcCIQCC82guXHH3KROhoObkRFtSRNtqxRr3vPXd
                 O4HEyA0yUg==
                 -----END CERTIFICATE-----
             - name: LINKERD2_PROXY_IDENTITY_TOKEN_FILE
               value: /var/run/secrets/kubernetes.io/serviceaccount/token
             - name: LINKERD2_PROXY_IDENTITY_SVC_ADDR
               value: linkerd-identity.linkerd.svc.cluster.local:8080
             - name: _pod_sa
               valueFrom:
                 fieldRef:
                   apiVersion: v1
                   fieldPath: spec.serviceAccountName
             - name: _l5d_ns
               value: linkerd
             - name: _l5d_trustdomain
               value: cluster.local
             - name: LINKERD2_PROXY_IDENTITY_LOCAL_NAME
               value: $(_pod_sa).$(_pod_ns).serviceaccount.identity.$(_l5d_ns).$(_l5d_trustdomain)
             - name: LINKERD2_PROXY_IDENTITY_SVC_NAME
               value: linkerd-identity.$(_l5d_ns).serviceaccount.identity.$(_l5d_ns).$(_l5d_trustdomain)
             - name: LINKERD2_PROXY_DESTINATION_SVC_NAME
               value: linkerd-controller.$(_l5d_ns).serviceaccount.identity.$(_l5d_ns).$(_l5d_trustdomain)
             image: gcr.io/linkerd-io/proxy:stable-2.3.2
             imagePullPolicy: IfNotPresent
             livenessProbe:
               failureThreshold: 3
               httpGet:
                 path: /metrics
                 port: 4191
                 scheme: HTTP
               initialDelaySeconds: 10
               periodSeconds: 10
               successThreshold: 1
               timeoutSeconds: 1
             name: linkerd-proxy
             ports:
             - containerPort: 4143
               name: linkerd-proxy
               protocol: TCP
             - containerPort: 4191
               name: linkerd-admin
               protocol: TCP
             readinessProbe:
               failureThreshold: 3
               httpGet:
                 path: /ready
                 port: 4191
                 scheme: HTTP
               initialDelaySeconds: 2
               periodSeconds: 10
               successThreshold: 1
               timeoutSeconds: 1
             resources: {}
             securityContext:
               runAsUser: 2102
             terminationMessagePath: /dev/termination-log
             terminationMessagePolicy: FallbackToLogsOnError
             volumeMounts:
             - mountPath: /var/run/linkerd/identity/end-entity
               name: linkerd-identity-end-entity
           dnsPolicy: ClusterFirst
           initContainers:
           - args:
             - --incoming-proxy-port
             - "4143"
             - --outgoing-proxy-port
             - "4140"
             - --proxy-uid
             - "2102"
             - --inbound-ports-to-ignore
             - 4190,4191
             image: gcr.io/linkerd-io/proxy-init:stable-2.3.2
             imagePullPolicy: IfNotPresent
             name: linkerd-init
             resources: {}
             securityContext:
               capabilities:
                 add:
                 - NET_ADMIN
               privileged: false
               runAsNonRoot: false
               runAsUser: 0
             terminationMessagePath: /dev/termination-log
             terminationMessagePolicy: FallbackToLogsOnError
           restartPolicy: Always
           schedulerName: default-scheduler
           securityContext: {}
           terminationGracePeriodSeconds: 30
           volumes:
           - hostPath:
               path: /log/g2c-logs/demo1-logs
               type: ""
             name: demo1-log-dir
           - emptyDir:
               medium: Memory
             name: linkerd-identity-end-entity
     status:
       availableReplicas: 1
       conditions:
       - lastTransitionTime: 2019-06-21T10:21:52Z
         lastUpdateTime: 2019-06-21T10:21:52Z
         message: Deployment has minimum availability.
         reason: MinimumReplicasAvailable
         status: "True"
         type: Available
       - lastTransitionTime: 2019-06-21T10:21:50Z
         lastUpdateTime: 2019-06-27T10:44:44Z
         message: ReplicaSet "demo2-654f645d89" has successfully progressed.
         reason: NewReplicaSetAvailable
         status: "True"
         type: Progressing
       observedGeneration: 12
       readyReplicas: 1
       replicas: 1
       updatedReplicas: 1

PART 2

  1. kubectl get deploy demo3 -o yaml -n demo

     apiVersion: extensions/v1beta1
     kind: Deployment
     metadata:
       annotations:
         deployment.kubernetes.io/revision: "7"
         kubectl.kubernetes.io/last-applied-configuration: |
           {"apiVersion":"extensions/v1beta1","kind":"Deployment","metadata":{"annotations":{"deployment.kubernetes.io/revision":"5"},"creationTimestamp":"2019-06-21T10:22:07Z","generation":7,"labels":{"app":"demo3"},"name":"demo3","namespace":"demo","resourceVersion":"1290476","selfLink":"/apis/extensions/v1beta1/namespaces/demo/deployments/demo3","uid":"6c3639ad-940e-11e9-8686-02001701f16d"},"spec":{"progressDeadlineSeconds":600,"replicas":1,"revisionHistoryLimit":10,"selector":{"matchLabels":{"app":"demo3"}},"strategy":{"rollingUpdate":{"maxSurge":"25%","maxUnavailable":"25%"},"type":"RollingUpdate"},"template":{"metadata":{"annotations":{"linkerd.io/created-by":"linkerd/cli stable-2.3.2","linkerd.io/identity-mode":"default","linkerd.io/proxy-version":"stable-2.3.2"},"creationTimestamp":null,"labels":{"app":"demo3","linkerd.io/control-plane-ns":"linkerd","linkerd.io/proxy-deployment":"demo3"}},"spec":{"containers":[{"image":"test3:v1","imagePullPolicy":"IfNotPresent","name":"demo3","ports":[{"containerPort":30009,"protocol":"TCP"}],"resources":{},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","volumeMounts":[{"mountPath":"/opt/logs","name":"demo3-log-dir"}]},{"env":[{"name":"LINKERD2_PROXY_LOG","value":"warn,linkerd2_proxy=info"},{"name":"LINKERD2_PROXY_DESTINATION_SVC_ADDR","value":"linkerd-destination.linkerd.svc.cluster.local:8086"},{"name":"LINKERD2_PROXY_CONTROL_LISTEN_ADDR","value":"0.0.0.0:4190"},{"name":"LINKERD2_PROXY_ADMIN_LISTEN_ADDR","value":"0.0.0.0:4191"},{"name":"LINKERD2_PROXY_OUTBOUND_LISTEN_ADDR","value":"127.0.0.1:4140"},{"name":"LINKERD2_PROXY_INBOUND_LISTEN_ADDR","value":"0.0.0.0:4143"},{"name":"LINKERD2_PROXY_DESTINATION_PROFILE_SUFFIXES","value":"svc.cluster.local."},{"name":"LINKERD2_PROXY_INBOUND_ACCEPT_KEEPALIVE","value":"10000ms"},{"name":"LINKERD2_PROXY_OUTBOUND_CONNECT_KEEPALIVE","value":"10000ms"},{"name":"_pod_ns","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}},{"name":"LINKERD2_PROXY_
DESTINATION_CONTEXT","value":"ns:$(_pod_ns)"},{"name":"LINKERD2_PROXY_IDENTITY_DIR","value":"/var/run/linkerd/identity/end-entity"},{"name":"LINKERD2_PROXY_IDENTITY_TRUST_ANCHORS","value":"-----BEGIN CERTIFICATE-----\nMIIBgzCCASmgAwIBAgIBATAKBggqhkjOPQQDAjApMScwJQYDVQQDEx5pZGVudGl0\neS5saW5rZXJkLmNsdXN0ZXIubG9jYWwwHhcNMTkwNjE0MTQyMDMxWhcNMjAwNjEz\nMTQyMDUxWjApMScwJQYDVQQDEx5pZGVudGl0eS5saW5rZXJkLmNsdXN0ZXIubG9j\nYWwwWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAASaab6QQvfH3LLb2ZGDSS/UJhOb\nAxMV1dDaxKDr31+cp7YphN1prNj21ilztjJ0to1x5FNpcIrWRcr3mr3Gr4lOo0Iw\nQDAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUFBwMC\nMA8GA1UdEwEB/wQFMAMBAf8wCgYIKoZIzj0EAwIDSAAwRQIgKA4HQggmoSxsXBGm\nPJMHA/vslXTxLvfQ/NhlvEkIGRcCIQCC82guXHH3KROhoObkRFtSRNtqxRr3vPXd\nO4HEyA0yUg==\n-----END CERTIFICATE-----\n"},{"name":"LINKERD2_PROXY_IDENTITY_TOKEN_FILE","value":"/var/run/secrets/kubernetes.io/serviceaccount/token"},{"name":"LINKERD2_PROXY_IDENTITY_SVC_ADDR","value":"linkerd-identity.linkerd.svc.cluster.local:8080"},{"name":"_pod_sa","valueFrom":{"fieldRef":{"fieldPath":"spec.serviceAccountName"}}},{"name":"_l5d_ns","value":"linkerd"},{"name":"_l5d_trustdomain","value":"cluster.local"},{"name":"LINKERD2_PROXY_IDENTITY_LOCAL_NAME","value":"$(_pod_sa).$(_pod_ns).serviceaccount.identity.$(_l5d_ns).$(_l5d_trustdomain)"},{"name":"LINKERD2_PROXY_IDENTITY_SVC_NAME","value":"linkerd-identity.$(_l5d_ns).serviceaccount.identity.$(_l5d_ns).$(_l5d_trustdomain)"},{"name":"LINKERD2_PROXY_DESTINATION_SVC_NAME","value":"linkerd-controller.$(_l5d_ns).serviceaccount.identity.$(_l5d_ns).$(_l5d_trustdomain)"}],"image":"gcr.io/linkerd-io/proxy:stable-2.3.2","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"/metrics","port":4191},"initialDelaySeconds":10},"name":"linkerd-proxy","ports":[{"containerPort":4143,"name":"linkerd-proxy"},{"containerPort":4191,"name":"linkerd-admin"}],"readinessProbe":{"httpGet":{"path":"/ready","port":4191},"initialDelaySeconds":2},"resources":{},"securityContext":{"runAsU
ser":2102},"terminationMessagePolicy":"FallbackToLogsOnError","volumeMounts":[{"mountPath":"/var/run/linkerd/identity/end-entity","name":"linkerd-identity-end-entity"}]}],"dnsPolicy":"ClusterFirst","initContainers":[{"args":["--incoming-proxy-port","4143","--outgoing-proxy-port","4140","--proxy-uid","2102","--inbound-ports-to-ignore","4190,4191"],"image":"gcr.io/linkerd-io/proxy-init:stable-2.3.2","imagePullPolicy":"IfNotPresent","name":"linkerd-init","resources":{},"securityContext":{"capabilities":{"add":["NET_ADMIN"]},"privileged":false,"runAsNonRoot":false,"runAsUser":0},"terminationMessagePolicy":"FallbackToLogsOnError"}],"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30,"volumes":[{"hostPath":{"path":"/log/g2c-logs/demo3-logs","type":""},"name":"demo3-log-dir"},{"emptyDir":{"medium":"Memory"},"name":"linkerd-identity-end-entity"}]}}},"status":{"availableReplicas":1,"conditions":[{"lastTransitionTime":"2019-06-21T10:22:09Z","lastUpdateTime":"2019-06-21T10:22:09Z","message":"Deployment has minimum availability.","reason":"MinimumReplicasAvailable","status":"True","type":"Available"},{"lastTransitionTime":"2019-06-21T10:22:07Z","lastUpdateTime":"2019-06-27T07:53:07Z","message":"ReplicaSet \"demo3-75cdb78b64\" has successfully progressed.","reason":"NewReplicaSetAvailable","status":"True","type":"Progressing"}],"observedGeneration":7,"readyReplicas":1,"replicas":1,"updatedReplicas":1}}
       creationTimestamp: 2019-06-21T10:22:07Z
       generation: 9
       labels:
         app: demo3
       name: demo3
       namespace: demo
       resourceVersion: "1303177"
       selfLink: /apis/extensions/v1beta1/namespaces/demo/deployments/demo3
       uid: 6c3639ad-940e-11e9-8686-02001701f16d
     spec:
       progressDeadlineSeconds: 600
       replicas: 1
       revisionHistoryLimit: 10
       selector:
         matchLabels:
           app: demo3
       strategy:
         rollingUpdate:
           maxSurge: 25%
           maxUnavailable: 25%
         type: RollingUpdate
       template:
         metadata:
           annotations:
             linkerd.io/created-by: linkerd/cli stable-2.3.2
             linkerd.io/identity-mode: default
             linkerd.io/proxy-version: stable-2.3.2
           creationTimestamp: null
           labels:
             app: demo3
             linkerd.io/control-plane-ns: linkerd
             linkerd.io/proxy-deployment: demo3
         spec:
           containers:
           - image: test3:v1
             imagePullPolicy: IfNotPresent
             name: demo3
             ports:
             - containerPort: 30009
               protocol: TCP
             resources: {}
             terminationMessagePath: /dev/termination-log
             terminationMessagePolicy: File
             volumeMounts:
             - mountPath: /opt/logs
               name: demo3-log-dir
           - env:
             - name: LINKERD2_PROXY_LOG
               value: warn,linkerd2_proxy=debug
             - name: LINKERD2_PROXY_DESTINATION_SVC_ADDR
               value: linkerd-destination.linkerd.svc.cluster.local:8086
             - name: LINKERD2_PROXY_CONTROL_LISTEN_ADDR
               value: 0.0.0.0:4190
             - name: LINKERD2_PROXY_ADMIN_LISTEN_ADDR
               value: 0.0.0.0:4191
             - name: LINKERD2_PROXY_OUTBOUND_LISTEN_ADDR
               value: 127.0.0.1:4140
             - name: LINKERD2_PROXY_INBOUND_LISTEN_ADDR
               value: 0.0.0.0:4143
             - name: LINKERD2_PROXY_DESTINATION_PROFILE_SUFFIXES
               value: svc.cluster.local.
             - name: LINKERD2_PROXY_INBOUND_ACCEPT_KEEPALIVE
               value: 10000ms
             - name: LINKERD2_PROXY_OUTBOUND_CONNECT_KEEPALIVE
               value: 10000ms
             - name: _pod_ns
               valueFrom:
                 fieldRef:
                   apiVersion: v1
                   fieldPath: metadata.namespace
             - name: LINKERD2_PROXY_DESTINATION_CONTEXT
               value: ns:$(_pod_ns)
             - name: LINKERD2_PROXY_IDENTITY_DIR
               value: /var/run/linkerd/identity/end-entity
             - name: LINKERD2_PROXY_IDENTITY_TRUST_ANCHORS
               value: |
                 -----BEGIN CERTIFICATE-----
                 MIIBgzCCASmgAwIBAgIBATAKBggqhkjOPQQDAjApMScwJQYDVQQDEx5pZGVudGl0
                 eS5saW5rZXJkLmNsdXN0ZXIubG9jYWwwHhcNMTkwNjE0MTQyMDMxWhcNMjAwNjEz
                 MTQyMDUxWjApMScwJQYDVQQDEx5pZGVudGl0eS5saW5rZXJkLmNsdXN0ZXIubG9j
                 YWwwWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAASaab6QQvfH3LLb2ZGDSS/UJhOb
                 AxMV1dDaxKDr31+cp7YphN1prNj21ilztjJ0to1x5FNpcIrWRcr3mr3Gr4lOo0Iw
                 QDAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUFBwMC
                 MA8GA1UdEwEB/wQFMAMBAf8wCgYIKoZIzj0EAwIDSAAwRQIgKA4HQggmoSxsXBGm
                 PJMHA/vslXTxLvfQ/NhlvEkIGRcCIQCC82guXHH3KROhoObkRFtSRNtqxRr3vPXd
                 O4HEyA0yUg==
                 -----END CERTIFICATE-----
             - name: LINKERD2_PROXY_IDENTITY_TOKEN_FILE
               value: /var/run/secrets/kubernetes.io/serviceaccount/token
             - name: LINKERD2_PROXY_IDENTITY_SVC_ADDR
               value: linkerd-identity.linkerd.svc.cluster.local:8080
             - name: _pod_sa
               valueFrom:
                 fieldRef:
                   apiVersion: v1
                   fieldPath: spec.serviceAccountName
             - name: _l5d_ns
               value: linkerd
             - name: _l5d_trustdomain
               value: cluster.local
             - name: LINKERD2_PROXY_IDENTITY_LOCAL_NAME
               value: $(_pod_sa).$(_pod_ns).serviceaccount.identity.$(_l5d_ns).$(_l5d_trustdomain)
             - name: LINKERD2_PROXY_IDENTITY_SVC_NAME
               value: linkerd-identity.$(_l5d_ns).serviceaccount.identity.$(_l5d_ns).$(_l5d_trustdomain)
             - name: LINKERD2_PROXY_DESTINATION_SVC_NAME
               value: linkerd-controller.$(_l5d_ns).serviceaccount.identity.$(_l5d_ns).$(_l5d_trustdomain)
             image: gcr.io/linkerd-io/proxy:stable-2.3.2
             imagePullPolicy: IfNotPresent
             livenessProbe:
               failureThreshold: 3
               httpGet:
                 path: /metrics
                 port: 4191
                 scheme: HTTP
               initialDelaySeconds: 10
               periodSeconds: 10
               successThreshold: 1
               timeoutSeconds: 1
             name: linkerd-proxy
             ports:
             - containerPort: 4143
               name: linkerd-proxy
               protocol: TCP
             - containerPort: 4191
               name: linkerd-admin
               protocol: TCP
             readinessProbe:
               failureThreshold: 3
               httpGet:
                 path: /ready
                 port: 4191
                 scheme: HTTP
               initialDelaySeconds: 2
               periodSeconds: 10
               successThreshold: 1
               timeoutSeconds: 1
             resources: {}
             securityContext:
               runAsUser: 2102
             terminationMessagePath: /dev/termination-log
             terminationMessagePolicy: FallbackToLogsOnError
             volumeMounts:
             - mountPath: /var/run/linkerd/identity/end-entity
               name: linkerd-identity-end-entity
           dnsPolicy: ClusterFirst
           initContainers:
           - args:
             - --incoming-proxy-port
             - "4143"
             - --outgoing-proxy-port
             - "4140"
             - --proxy-uid
             - "2102"
             - --inbound-ports-to-ignore
             - 4190,4191
             image: gcr.io/linkerd-io/proxy-init:stable-2.3.2
             imagePullPolicy: IfNotPresent
             name: linkerd-init
             resources: {}
             securityContext:
               capabilities:
                 add:
                 - NET_ADMIN
               privileged: false
               runAsNonRoot: false
               runAsUser: 0
             terminationMessagePath: /dev/termination-log
             terminationMessagePolicy: FallbackToLogsOnError
           restartPolicy: Always
           schedulerName: default-scheduler
           securityContext: {}
           terminationGracePeriodSeconds: 30
           volumes:
           - hostPath:
               path: /log/g2c-logs/demo3-logs
               type: ""
             name: demo3-log-dir
           - emptyDir:
               medium: Memory
             name: linkerd-identity-end-entity
     status:
       availableReplicas: 1
       conditions:
       - lastTransitionTime: 2019-06-21T10:22:09Z
         lastUpdateTime: 2019-06-21T10:22:09Z
         message: Deployment has minimum availability.
         reason: MinimumReplicasAvailable
         status: "True"
         type: Available
       - lastTransitionTime: 2019-06-21T10:22:07Z
         lastUpdateTime: 2019-06-27T10:44:41Z
         message: ReplicaSet "demo3-75cdb78b64" has successfully progressed.
         reason: NewReplicaSetAvailable
         status: "True"
         type: Progressing
       observedGeneration: 9
       readyReplicas: 1
       replicas: 1
       updatedReplicas: 1
    

Also, as you suggested, we have updated the LINKERD2_PROXY_LOG setting from linkerd2_proxy=info to linkerd2_proxy=debug. The previous setting was:

- env:
          - name: LINKERD2_PROXY_LOG
            value: warn,linkerd2_proxy=info
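
For reference, one way to make this change without hand-editing the full manifest is kubectl set env. This is a sketch, assuming the deployment name (demo2), namespace (demo), and proxy container name (linkerd-proxy) from the YAML above:

```shell
# Set the proxy log level to debug on the demo2 deployment
# (container name "linkerd-proxy" as in the deployment spec above).
kubectl set env deploy/demo2 -n demo -c linkerd-proxy \
  LINKERD2_PROXY_LOG=warn,linkerd2_proxy=debug

# Confirm the new value on the deployment.
kubectl set env deploy/demo2 -n demo -c linkerd-proxy --list
```

Note that changing a pod template env var triggers a rolling restart of the deployment, so the proxy picks up the new log level on the replacement pods.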

I hope that’s all you need to troubleshoot the issue.

Thanks,
Sharique

Hi @cpretzer,

Any update on the issue mentioned above?

Thanks in advance,
Asawari

@AsawariKengar

Thanks for sending the yaml files for the deployments.

Can you tell me what client makes the calls to the demo1 service? It sounds like it’s a client that is external to the cluster. Are you using an ingress to route traffic to demo1? If so, you can inject the Linkerd proxy into that ingress as well.
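
If the ingress controller runs as a Deployment, injection can be done by piping its manifest through linkerd inject. A sketch; the deployment name "my-ingress" and namespace "ingress-ns" are placeholders for illustration:

```shell
# Inject the Linkerd proxy sidecar into an existing ingress deployment.
# "my-ingress" and "ingress-ns" are placeholder names -- substitute your own.
kubectl get deploy my-ingress -n ingress-ns -o yaml \
  | linkerd inject - \
  | kubectl apply -f -
```

This re-applies the deployment with the linkerd-proxy sidecar and linkerd-init containers added, the same way the demo1/demo2/demo3 deployments above were injected.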

Now that you have set the LINKERD2_PROXY_LOG to debug, can you send the generated log files for the proxy container in each of the services?
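
The debug logs can be pulled with kubectl logs against the proxy sidecar. A sketch using the deployment names from this thread (the container name linkerd-proxy comes from the specs above):

```shell
# Capture proxy logs from each meshed deployment in the demo namespace
# (container name "linkerd-proxy" per the deployment specs above).
for d in demo1 demo2 demo3; do
  kubectl logs deploy/"$d" -n demo -c linkerd-proxy > "$d-proxy.log"
done
```

With kubectl logs deploy/&lt;name&gt;, kubectl picks a pod from the deployment for you; if a deployment has multiple replicas, target specific pods instead.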

Charles