Unable to make gRPC calls via Linkerd

I am having a hard time getting intra-cluster gRPC calls to work in k8s. I have set up Linkerd as a DaemonSet using the servicemesh.yaml example pretty much as-is. What I have changed:

  • Bound the service account to the cluster-admin role, to make sure that RBAC wasn’t getting in the way of a working mesh.
  • Removed the ingress routers.
  • Changed the l5d service type to ClusterIP instead of LoadBalancer, because I don’t need external access to l5d.

Environment highlights:

  • NodeJS gRPC service (generator.studyo) listening on port 5000 without SSL.
  • The k8s Service exposes port 5000 with the port name grpc.
  • NodeJS gRPC client using an insecure channel credential
  • The client is able to connect directly to generator.studyo:5000 and perform requests.
  • Test 1: Making the client connect to NODE_NAME:4340 (the gRPC outgoing router) and setting the authority to generator.studyo doesn’t work.
    • Linkerd logs indicate "No hosts are available for /svc/generator.studyo"
    • Looks like the port is missing from the resolved name?
  • Test 2: Adding the port number to the authority (generator.studyo:5000) doesn’t work either.
    • Linkerd logs indicate marking connection to "$/inet/generator.studyo/5000" as dead
    • Why has it fallen back to $/inet?

I believe that the missing piece in my understanding of the system is how to let Linkerd know which port to connect to when resolving the generator.studyo service.

If anyone can help me figure it out, I will be forever grateful and have a smile on my face in time for the holidays. :wink:

Thanks,

Pascal

Environment and test details:

Linkerd Configuration

kind: Namespace
apiVersion: v1
metadata:
  name: linkerd
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: l5d-config
  namespace: linkerd
data:
  config.yaml: |-
    admin:
      ip: 0.0.0.0
      port: 9990

    namers:
    - kind: io.l5d.k8s
    - kind: io.l5d.k8s
      prefix: /io.l5d.k8s.http
      transformers:
      - kind: io.l5d.k8s.daemonset
        namespace: linkerd
        port: http-incoming
        service: l5d
    - kind: io.l5d.k8s
      prefix: /io.l5d.k8s.h2
      transformers:
      - kind: io.l5d.k8s.daemonset
        namespace: linkerd
        port: h2-incoming
        service: l5d
    - kind: io.l5d.k8s
      prefix: /io.l5d.k8s.grpc
      transformers:
      - kind: io.l5d.k8s.daemonset
        namespace: linkerd
        port: grpc-incoming
        service: l5d
    - kind: io.l5d.rewrite
      prefix: /portNsSvcToK8s
      pattern: "/{port}/{ns}/{svc}"
      name: "/k8s/{ns}/{port}/{svc}"

    telemetry:
    - kind: io.l5d.prometheus # Expose Prometheus style metrics on :9990/admin/metrics/prometheus
    - kind: io.l5d.recentRequests
      sampleRate: 0.25 # Tune this sample rate before going to production

    usage:
      orgId: linkerd-examples-servicemesh

    routers:
    - label: http-outgoing
      protocol: http
      servers:
      - port: 4140
        ip: 0.0.0.0
      dtab: |
        /ph  => /$/io.buoyant.rinet ;                     # /ph/80/google.com -> /$/io.buoyant.rinet/80/google.com
        /svc => /ph/80 ;                                  # /svc/google.com -> /ph/80/google.com
        /svc => /$/io.buoyant.porthostPfx/ph ;            # /svc/google.com:80 -> /ph/80/google.com
        /k8s => /#/io.l5d.k8s.http ;                      # /k8s/default/http/foo -> /#/io.l5d.k8s.http/default/http/foo
        /portNsSvc => /#/portNsSvcToK8s ;                 # /portNsSvc/http/default/foo -> /k8s/default/http/foo
        /host => /portNsSvc/http/default ;                # /host/foo -> /portNsSvc/http/default/foo
        /host => /portNsSvc/http ;                        # /host/default/foo -> /portNsSvc/http/default/foo
        /svc => /$/io.buoyant.http.domainToPathPfx/host ; # /svc/foo.default -> /host/default/foo
      client:
        kind: io.l5d.static
        configs:
        # Use HTTPS if sending to port 443
        - prefix: "/$/io.buoyant.rinet/443/{service}"
          tls:
            commonName: "{service}"

    - label: http-incoming
      protocol: http
      servers:
      - port: 4141
        ip: 0.0.0.0
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.localnode
          # hostNetwork: true # Uncomment if using host networking (eg for CNI)
      dtab: |
        /k8s => /#/io.l5d.k8s ;                           # /k8s/default/http/foo -> /#/io.l5d.k8s/default/http/foo
        /portNsSvc => /#/portNsSvcToK8s ;                 # /portNsSvc/http/default/foo -> /k8s/default/http/foo
        /host => /portNsSvc/http/default ;                # /host/foo -> /portNsSvc/http/default/foo
        /host => /portNsSvc/http ;                        # /host/default/foo -> /portNsSvc/http/default/foo
        /svc => /$/io.buoyant.http.domainToPathPfx/host ; # /svc/foo.default -> /host/default/foo

    - label: h2-outgoing
      protocol: h2
      experimental: true
      servers:
      - port: 4240
        ip: 0.0.0.0
      dtab: |
        /ph  => /$/io.buoyant.rinet ;                       # /ph/80/google.com -> /$/io.buoyant.rinet/80/google.com
        /svc => /ph/80 ;                                    # /svc/google.com -> /ph/80/google.com
        /svc => /$/io.buoyant.porthostPfx/ph ;              # /svc/google.com:80 -> /ph/80/google.com
        /k8s => /#/io.l5d.k8s.h2 ;                          # /k8s/default/h2/foo -> /#/io.l5d.k8s.h2/default/h2/foo
        /portNsSvc => /#/portNsSvcToK8s ;                   # /portNsSvc/h2/default/foo -> /k8s/default/h2/foo
        /host => /portNsSvc/h2/default ;                    # /host/foo -> /portNsSvc/h2/default/foo
        /host => /portNsSvc/h2 ;                            # /host/default/foo -> /portNsSvc/h2/default/foo
        /svc => /$/io.buoyant.http.domainToPathPfx/host ;   # /svc/foo.default -> /host/default/foo
      client:
        kind: io.l5d.static
        configs:
        # Use HTTPS if sending to port 443
        - prefix: "/$/io.buoyant.rinet/443/{service}"
          tls:
            commonName: "{service}"

    - label: h2-incoming
      protocol: h2
      experimental: true
      servers:
      - port: 4241
        ip: 0.0.0.0
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.localnode
          # hostNetwork: true # Uncomment if using host networking (eg for CNI)
      dtab: |
        /k8s => /#/io.l5d.k8s ;                             # /k8s/default/h2/foo -> /#/io.l5d.k8s/default/h2/foo
        /portNsSvc => /#/portNsSvcToK8s ;                   # /portNsSvc/h2/default/foo -> /k8s/default/h2/foo
        /host => /portNsSvc/h2/default ;                    # /host/foo -> /portNsSvc/h2/default/foo
        /host => /portNsSvc/h2 ;                            # /host/default/foo -> /portNsSvc/h2/default/foo
        /svc => /$/io.buoyant.http.domainToPathPfx/host ;   # /svc/foo.default -> /host/default/foo

    - label: grpc-outgoing
      protocol: h2
      experimental: true
      servers:
      - port: 4340
        ip: 0.0.0.0
      identifier:
        kind: io.l5d.header.path
        segments: 1
      dtab: |
        /hp  => /$/inet ;                                # /hp/linkerd.io/8888 -> /$/inet/linkerd.io/8888
        /svc => /$/io.buoyant.hostportPfx/hp ;           # /svc/linkerd.io:8888 -> /hp/linkerd.io/8888
        /srv => /#/io.l5d.k8s.grpc/default/grpc;         # /srv/service/package -> /#/io.l5d.k8s.grpc/default/grpc/service/package
        /svc => /$/io.buoyant.http.domainToPathPfx/srv ; # /svc/package.service -> /srv/service/package
      client:
        kind: io.l5d.static
        configs:
        # Always use TLS when sending to external grpc servers
        - prefix: "/$/inet/{service}"
          tls:
            commonName: "{service}"

    - label: grpc-incoming
      protocol: h2
      experimental: true
      servers:
      - port: 4341
        ip: 0.0.0.0
      identifier:
        kind: io.l5d.header.path
        segments: 1
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.localnode
          # hostNetwork: true # Uncomment if using host networking (eg for CNI)
      dtab: |
        /srv => /#/io.l5d.k8s/default/grpc ;             # /srv/service/package -> /#/io.l5d.k8s/default/grpc/service/package
        /svc => /$/io.buoyant.http.domainToPathPfx/srv ; # /svc/package.service -> /srv/service/package

### DaemonSet ###
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    app: l5d
  name: l5d
  namespace: linkerd
spec:
  template:
    metadata:
      labels:
        app: l5d
    spec:
      # hostNetwork: true # Uncomment to use host networking (eg for CNI)
      volumes:
      - name: l5d-config
        configMap:
          name: "l5d-config"
      containers:
      - name: l5d
        image: buoyantio/linkerd:1.3.3
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        args:
        - /io.buoyant/linkerd/config/config.yaml
        ports:
        - name: http-outgoing
          containerPort: 4140
          hostPort: 4140
        - name: http-incoming
          containerPort: 4141
        - name: h2-outgoing
          containerPort: 4240
          hostPort: 4240
        - name: h2-incoming
          containerPort: 4241
        - name: grpc-outgoing
          containerPort: 4340
          hostPort: 4340
        - name: grpc-incoming
          containerPort: 4341
        volumeMounts:
        - name: "l5d-config"
          mountPath: "/io.buoyant/linkerd/config"
          readOnly: true

      # Run `kubectl proxy` as a sidecar to give us authenticated access to the
      # Kubernetes API.
      - name: kubectl
        image: buoyantio/kubectl:v1.8.5
        args:
        - "proxy"
        - "-p"
        - "8001"

### Service ###
---
apiVersion: v1
kind: Service
metadata:
  name: l5d
  namespace: linkerd
spec:
  selector:
    app: l5d
  type: ClusterIP
  ports:
  - name: http-outgoing
    port: 4140
  - name: http-incoming
    port: 4141
  - name: h2-outgoing
    port: 4240
  - name: h2-incoming
    port: 4241
  - name: grpc-outgoing
    port: 4340
  - name: grpc-incoming
    port: 4341

gRPC client and service configurations

Generator Deployment

apiVersion: extensions/v1beta1
kind: Deployment
spec:
  template:
    spec:
      containers:
      - ports:
        - containerPort: 5000
          protocol: TCP
      dnsPolicy: ClusterFirst
      schedulerName: default-scheduler
      securityContext: {}

Generator Service

apiVersion: v1
kind: Service
metadata:
  name: generator
  namespace: studyo
  selfLink: /api/v1/namespaces/studyo/services/generator
spec:
  clusterIP: 10.11.254.174
  ports:
  - name: grpc
    port: 5000
    protocol: TCP
    targetPort: 5000
  sessionAffinity: None
  type: ClusterIP

Client Deployment

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: rest-api-linkerd-experiments
  namespace: studyo
  selfLink: /apis/extensions/v1beta1/namespaces/studyo/deployments/rest-api-linkerd-experiments
spec:
  template:
    metadata:
    spec:
      containers:
      - env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
      dnsPolicy: ClusterFirst
      schedulerName: default-scheduler
      securityContext: {}

Test 1

  • Don’t set http_proxy
  • gRPC client connects to NODE_NAME:4340
  • gRPC client sets Authority to service.namespace (no port)
  • Result: No hosts are available for /svc/generator.studyo

Client Code

> process.env.http_proxy
undefined
> 
> process.env.NODE_NAME
'gke-studyo-beta-default-pool-6d566286-ndzj'
>
> const grpc = require('grpc');
> const rpcm = require('@studyo/grpc-services').generator_pb;
> const rpcs = require('@studyo/grpc-services').generator_grpc_pb;
> 
> const generatorClient = new rpcs.GeneratorClient(
...     process.env.NODE_NAME + ":4340",
...     grpc.credentials.createInsecure(),
...     {
.....         "grpc.default_authority": "generator.studyo"
.....     }
... );
> 
> const request = new rpcm.StoredConfigGeneratorRequest();
> request.setConfigid("xyz");
> 
> generatorClient.getGeneratedCalendarFromStoredConfig(request, (err, response) => {console.log(err);console.log(response);})

Client Output

{ Error: Received RST_STREAM with error code 7
    at /app/node_modules/grpc/src/client.js:554:15 code: 14, metadata: Metadata { _internal_repr: {} } }

Client gRPC logging

D1218 13:20:53.930499265     884 chttp2_transport.c:1406]    perform_stream_op_locked:  SEND_INITIAL_METADATA{key=3a 73 63 68 65 6d 65 ':scheme' value=68 74 74 70 'http', key=3a 6d 65 74 68 6f 64 ':method' value=50 4f 53 54 'POST', key=3a 70 61 74 68 ':path' value=2f 73 74 75 64 79 6f 2e 73 65 72 76 69 63 65 73 2e 67 65 6e 65 72 61 74 6f 72 2e 47 65 6e 65 72 61 74 6f 72 2f 47 65 74 47 65 6e 65 72 61 74 65 64 43 61 6c 65 6e 64 61 72 46 72 6f 6d 53 74 6f 72 65 64 43 6f 6e 66 69 67 '/studyo.services.generator.Generator/GetGeneratedCalendarFromStoredConfig', key=3a 61 75 74 68 6f 72 69 74 79 ':authority' value=67 65 6e 65 72 61 74 6f 72 2e 73 74 75 64 79 6f 'generator.studyo', key=74 65 'te' value=74 72 61 69 6c 65 72 73 'trailers', key=63 6f 6e 74 65 6e 74 2d 74 79 70 65 'content-type' value=61 70 70 6c 69 63 61 74 69 6f 6e 2f 67 72 70 63 'application/grpc', key=75 73 65 72 2d 61 67 65 6e 74 'user-agent' value=67 72 70 63 2d 6e 6f 64 65 2f 31 2e 37 2e 32 20 67 72 70 63 2d 63 2f 35 2e 30 2e 30 20 28 6c 69 6e 75 78 3b 20 63 68 74 74 70 32 3b 20 67 61 6d 62 69 74 29 'grpc-node/1.7.2 grpc-c/5.0.0 (linux; chttp2; gambit)', key=67 72 70 63 2d 61 63 63 65 70 74 2d 65 6e 63 6f 64 69 6e 67 'grpc-accept-encoding' value=69 64 65 6e 74 69 74 79 2c 64 65 66 6c 61 74 65 2c 67 7a 69 70 'identity,deflate,gzip', key=61 63 63 65 70 74 2d 65 6e 63 6f 64 69 6e 67 'accept-encoding' value=69 64 65 6e 74 69 74 79 2c 67 7a 69 70 'identity,gzip'} SEND_MESSAGE:flags=0x00000000:len=26 SEND_TRAILING_METADATA{} RECV_INITIAL_METADATA RECV_MESSAGE RECV_TRAILING_METADATA COLLECT_STATS:0x28c62b8; on_complete = 0x28d1bb0
I1218 13:20:53.930521320     884 chttp2_transport.c:1385]    HTTP:0:HDR:CLI: :scheme: http
I1218 13:20:53.930531689     884 chttp2_transport.c:1385]    HTTP:0:HDR:CLI: :method: POST
I1218 13:20:53.930541676     884 chttp2_transport.c:1385]    HTTP:0:HDR:CLI: :path: /studyo.services.generator.Generator/GetGeneratedCalendarFromStoredConfig
I1218 13:20:53.930551178     884 chttp2_transport.c:1385]    HTTP:0:HDR:CLI: :authority: generator.studyo
I1218 13:20:53.930561400     884 chttp2_transport.c:1385]    HTTP:0:HDR:CLI: te: trailers
I1218 13:20:53.930570740     884 chttp2_transport.c:1385]    HTTP:0:HDR:CLI: content-type: application/grpc
I1218 13:20:53.930580596     884 chttp2_transport.c:1385]    HTTP:0:HDR:CLI: user-agent: grpc-node/1.7.2 grpc-c/5.0.0 (linux; chttp2; gambit)
I1218 13:20:53.930591287     884 chttp2_transport.c:1385]    HTTP:0:HDR:CLI: grpc-accept-encoding: identity,deflate,gzip
I1218 13:20:53.930601375     884 chttp2_transport.c:1385]    HTTP:0:HDR:CLI: accept-encoding: identity,gzip
D1218 13:20:53.930614447     884 stream_lists.c:123]         0x28bd1d0[0][cli]: add to waiting_for_concurrency
D1218 13:20:53.930627665     884 stream_lists.c:69]          0x28bd1d0[0][cli]: pop from waiting_for_concurrency
D1218 13:20:53.930637754     884 chttp2_transport.c:1181]    HTTP:CLI: Allocating new grpc_chttp2_stream 0x28d1db0 to id 1
D1218 13:20:53.930652484     884 stream_lists.c:123]         0x28bd1d0[1][cli]: add to writable
D1218 13:20:53.930662987     884 chttp2_transport.c:851]     W:0x28bd1d0 CLIENT state WRITING -> WRITING+MORE [START_NEW_STREAM]
D1218 13:20:53.930687656     884 chttp2_transport.c:1248]    complete_closure_step: t=0x28bd1d0 0x28d1bb0 refs=4 flags=0x0003 desc=op->on_complete err="No Error" write_state=WRITING+MORE
D1218 13:20:53.930706567     884 chttp2_transport.c:851]     W:0x28bd1d0 CLIENT state WRITING+MORE -> WRITING [continue writing]
D1218 13:20:53.930724729     884 stream_lists.c:69]          0x28bd1d0[1][cli]: pop from writable
D1218 13:20:53.930735800     884 writing.c:243]              W:0x28bd1d0 CLIENT[1] im-(sent,send)=(0,1) announce=5
D1218 13:20:53.930754856     884 hpack_encoder.c:437]        Encode: ':path: /studyo.services.generator.Generator/GetGeneratedCalendarFromStoredConfig', elem_interned=0 [2], k_interned=1, v_interned=0
D1218 13:20:53.930772817     884 chttp2_transport.c:1248]    complete_closure_step: t=0x28bd1d0 0x28d1bb0 refs=3 flags=0x0003 desc=send_initial_metadata_finished err="No Error" write_state=WRITING
D1218 13:20:53.930800452     884 chttp2_transport.c:1248]    complete_closure_step: t=0x28bd1d0 0x28d1bb0 refs=2 flags=0x0003 desc=send_trailing_metadata_finished err="No Error" write_state=WRITING
D1218 13:20:53.930812820     884 chttp2_transport.c:1248]    complete_closure_step: t=0x28bd1d0 0x28d1bb0 refs=1 flags=0x0003 desc=on_write_finished_cb err="No Error" write_state=WRITING
D1218 13:20:53.930827486     884 stream_lists.c:123]         0x28bd1d0[1][cli]: add to writing
D1218 13:20:53.930838044     884 chttp2_transport.c:851]     W:0x28bd1d0 CLIENT state WRITING -> WRITING [begin write in background]
D1218 13:20:53.930886171     884 chttp2_transport.c:851]     W:0x28bd1d0 CLIENT state WRITING -> IDLE [finish writing]
D1218 13:20:53.930897617     884 stream_lists.c:69]          0x28bd1d0[1][cli]: pop from writing
D1218 13:20:53.941571401     884 chttp2_transport.c:1248]    complete_closure_step: t=0x28bd1d0 0x28d1bb0 refs=0 flags=0x0003 desc=recv_trailing_metadata_finished err="No Error" write_state=IDLE

Linkerd Logs

I 1218 13:20:53.940 UTC THREAD35: no available endpoints
com.twitter.finagle.NoBrokersAvailableException: No hosts are available for /svc/generator.studyo, Dtab.base=[/hp=>/$/inet;/svc=>/$/io.buoyant.hostportPfx/hp;/srv=>/#/io.l5d.k8s.grpc/default/grpc;/svc=>/$/io.buoyant.http.domainToPathPfx/srv], Dtab.local=[]. Remote Info: Not Available

Test 2

  • Don’t set http_proxy
  • gRPC client connects to NODE_NAME:4340
  • gRPC client sets Authority to service.namespace:5000 (with port)
  • Result: marking connection to "$/inet/generator.studyo/5000" as dead

Client Code

> process.env.http_proxy
undefined
> process.env.NODE_NAME
'gke-studyo-beta-default-pool-6d566286-ndzj'
> 
> const grpc = require('grpc');
> const rpcm = require('@studyo/grpc-services').generator_pb;
> const rpcs = require('@studyo/grpc-services').generator_grpc_pb;
> 
> const generatorClient = new rpcs.GeneratorClient(
...     process.env.NODE_NAME + ":4340",
...     grpc.credentials.createInsecure(),
...     {
.....         "grpc.default_authority": "generator.studyo:5000"
.....     }
... );
> 
> const request = new rpcm.StoredConfigGeneratorRequest();
> request.setConfigid("xyz");
> 
> generatorClient.getGeneratedCalendarFromStoredConfig(request, (err, response) => {console.log(err);console.log(response);})

Client Output

{ Error: Received RST_STREAM with error code 7
    at /app/node_modules/grpc/src/client.js:554:15 code: 14, metadata: Metadata { _internal_repr: {} } }

Client gRPC logging

D1218 13:42:53.464786793     909 chttp2_transport.c:1406]    perform_stream_op_locked:  SEND_INITIAL_METADATA{key=3a 73 63 68 65 6d 65 ':scheme' value=68 74 74 70 'http', key=3a 6d 65 74 68 6f 64 ':method' value=50 4f 53 54 'POST', key=3a 70 61 74 68 ':path' value=2f 73 74 75 64 79 6f 2e 73 65 72 76 69 63 65 73 2e 67 65 6e 65 72 61 74 6f 72 2e 47 65 6e 65 72 61 74 6f 72 2f 47 65 74 47 65 6e 65 72 61 74 65 64 43 61 6c 65 6e 64 61 72 46 72 6f 6d 53 74 6f 72 65 64 43 6f 6e 66 69 67 '/studyo.services.generator.Generator/GetGeneratedCalendarFromStoredConfig', key=3a 61 75 74 68 6f 72 69 74 79 ':authority' value=67 65 6e 65 72 61 74 6f 72 2e 73 74 75 64 79 6f 3a 35 30 30 30 'generator.studyo:5000', key=74 65 'te' value=74 72 61 69 6c 65 72 73 'trailers', key=63 6f 6e 74 65 6e 74 2d 74 79 70 65 'content-type' value=61 70 70 6c 69 63 61 74 69 6f 6e 2f 67 72 70 63 'application/grpc', key=75 73 65 72 2d 61 67 65 6e 74 'user-agent' value=67 72 70 63 2d 6e 6f 64 65 2f 31 2e 37 2e 32 20 67 72 70 63 2d 63 2f 35 2e 30 2e 30 20 28 6c 69 6e 75 78 3b 20 63 68 74 74 70 32 3b 20 67 61 6d 62 69 74 29 'grpc-node/1.7.2 grpc-c/5.0.0 (linux; chttp2; gambit)', key=67 72 70 63 2d 61 63 63 65 70 74 2d 65 6e 63 6f 64 69 6e 67 'grpc-accept-encoding' value=69 64 65 6e 74 69 74 79 2c 64 65 66 6c 61 74 65 2c 67 7a 69 70 'identity,deflate,gzip', key=61 63 63 65 70 74 2d 65 6e 63 6f 64 69 6e 67 'accept-encoding' value=69 64 65 6e 74 69 74 79 2c 67 7a 69 70 'identity,gzip'} SEND_MESSAGE:flags=0x00000000:len=26 SEND_TRAILING_METADATA{} RECV_INITIAL_METADATA RECV_MESSAGE RECV_TRAILING_METADATA COLLECT_STATS:0x24eb3b8; on_complete = 0x256a4b0
I1218 13:42:53.464810986     909 chttp2_transport.c:1385]    HTTP:0:HDR:CLI: :scheme: http
I1218 13:42:53.464819653     909 chttp2_transport.c:1385]    HTTP:0:HDR:CLI: :method: POST
I1218 13:42:53.464826218     909 chttp2_transport.c:1385]    HTTP:0:HDR:CLI: :path: /studyo.services.generator.Generator/GetGeneratedCalendarFromStoredConfig
I1218 13:42:53.464835049     909 chttp2_transport.c:1385]    HTTP:0:HDR:CLI: :authority: generator.studyo:5000
I1218 13:42:53.464843423     909 chttp2_transport.c:1385]    HTTP:0:HDR:CLI: te: trailers
I1218 13:42:53.464851416     909 chttp2_transport.c:1385]    HTTP:0:HDR:CLI: content-type: application/grpc
I1218 13:42:53.464857902     909 chttp2_transport.c:1385]    HTTP:0:HDR:CLI: user-agent: grpc-node/1.7.2 grpc-c/5.0.0 (linux; chttp2; gambit)
I1218 13:42:53.464864716     909 chttp2_transport.c:1385]    HTTP:0:HDR:CLI: grpc-accept-encoding: identity,deflate,gzip
I1218 13:42:53.464873232     909 chttp2_transport.c:1385]    HTTP:0:HDR:CLI: accept-encoding: identity,gzip
D1218 13:42:53.464885009     909 stream_lists.c:123]         0x2522060[0][cli]: add to waiting_for_concurrency
D1218 13:42:53.464897430     909 stream_lists.c:69]          0x2522060[0][cli]: pop from waiting_for_concurrency
D1218 13:42:53.464907504     909 chttp2_transport.c:1181]    HTTP:CLI: Allocating new grpc_chttp2_stream 0x256a6b0 to id 1
D1218 13:42:53.464920872     909 stream_lists.c:123]         0x2522060[1][cli]: add to writable
D1218 13:42:53.464930935     909 chttp2_transport.c:851]     W:0x2522060 CLIENT state WRITING -> WRITING+MORE [START_NEW_STREAM]
D1218 13:42:53.464951253     909 chttp2_transport.c:1248]    complete_closure_step: t=0x2522060 0x256a4b0 refs=4 flags=0x0003 desc=op->on_complete err="No Error" write_state=WRITING+MORE
D1218 13:42:53.464966786     909 chttp2_transport.c:851]     W:0x2522060 CLIENT state WRITING+MORE -> WRITING [continue writing]
D1218 13:42:53.464982595     909 stream_lists.c:69]          0x2522060[1][cli]: pop from writable
D1218 13:42:53.464989379     909 writing.c:243]              W:0x2522060 CLIENT[1] im-(sent,send)=(0,1) announce=5
D1218 13:42:53.465006190     909 hpack_encoder.c:437]        Encode: ':path: /studyo.services.generator.Generator/GetGeneratedCalendarFromStoredConfig', elem_interned=0 [2], k_interned=1, v_interned=0
D1218 13:42:53.465019303     909 chttp2_transport.c:1248]    complete_closure_step: t=0x2522060 0x256a4b0 refs=3 flags=0x0003 desc=send_initial_metadata_finished err="No Error" write_state=WRITING
D1218 13:42:53.465046431     909 chttp2_transport.c:1248]    complete_closure_step: t=0x2522060 0x256a4b0 refs=2 flags=0x0003 desc=send_trailing_metadata_finished err="No Error" write_state=WRITING
D1218 13:42:53.465058106     909 chttp2_transport.c:1248]    complete_closure_step: t=0x2522060 0x256a4b0 refs=1 flags=0x0003 desc=on_write_finished_cb err="No Error" write_state=WRITING
D1218 13:42:53.465071133     909 stream_lists.c:123]         0x2522060[1][cli]: add to writing
D1218 13:42:53.465079823     909 chttp2_transport.c:851]     W:0x2522060 CLIENT state WRITING -> WRITING [begin write in background]
D1218 13:42:53.465121492     909 chttp2_transport.c:851]     W:0x2522060 CLIENT state WRITING -> IDLE [finish writing]
D1218 13:42:53.465132866     909 stream_lists.c:69]          0x2522060[1][cli]: pop from writing
D1218 13:42:53.526565074     909 chttp2_transport.c:1248]    complete_closure_step: t=0x2522060 0x256a4b0 refs=0 flags=0x0003 desc=recv_trailing_metadata_finished err="No Error" write_state=IDLE

Linkerd Logs

E 1218 13:42:15.000 UTC THREAD35: [S L:/10.8.2.128:4340 R:/10.8.2.121:47458] dispatcher failed
com.twitter.finagle.ChannelClosedException: ChannelException at remote address: /10.8.2.121:47458. Remote Info: Not Available at com.twitter.finagle.netty4.transport.ChannelTransport$$anon$1.channelInactive(ChannelTransport.scala:188) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231) at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224) at io.netty.channel.ChannelInboundHandlerAdapter.channelInactive(ChannelInboundHandlerAdapter.java:75) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231) at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224) at io.netty.channel.ChannelInboundHandlerAdapter.channelInactive(ChannelInboundHandlerAdapter.java:75) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231) at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224) at io.netty.channel.ChannelInboundHandlerAdapter.channelInactive(ChannelInboundHandlerAdapter.java:75) at com.twitter.finagle.netty4.channel.ChannelRequestStatsHandler.channelInactive(ChannelRequestStatsHandler.scala:35) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231) at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224) at io.netty.channel.ChannelInboundHandlerAdapter.channelInactive(ChannelInboundHandlerAdapter.java:75) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231) at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224) at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:377) at io.netty.handler.codec.ByteToMessageDecoder.channelInactive(ByteToMessageDecoder.java:342) at io.netty.handler.codec.http2.Http2ConnectionHandler.channelInactive(Http2ConnectionHandler.java:391) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231) at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224) at io.netty.channel.ChannelInboundHandlerAdapter.channelInactive(ChannelInboundHandlerAdapter.java:75) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231) at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224) at 
io.netty.channel.ChannelInboundHandlerAdapter.channelInactive(ChannelInboundHandlerAdapter.java:75) at com.twitter.finagle.netty4.channel.ChannelStatsHandler.channelInactive(ChannelStatsHandler.scala:131) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231) at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224) at io.netty.channel.DefaultChannelPipeline$HeadContext.channelInactive(DefaultChannelPipeline.java:1337) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231) at io.netty.channel.DefaultChannelPipeline.fireChannelInactive(DefaultChannelPipeline.java:916) at io.netty.channel.AbstractChannel$AbstractUnsafe$7.run(AbstractChannel.java:744) at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163) at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463) at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at com.twitter.finagle.util.BlockingTimeTrackingThreadFactory$$anon$1.run(BlockingTimeTrackingThreadFactory.scala:23) at java.lang.Thread.run(Thread.java:748)
io.netty.handler.codec.DecoderException: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record
W 1218 13:42:53.502 UTC THREAD34 TraceId:5090c140a8a05938: k8s ns default service studyo:5000 endpoints resource does not exist, assuming it has yet to be created
I 1218 13:42:53.519 UTC THREAD35 TraceId:5090c140a8a05938: FailureAccrualFactory marking connection to "$/inet/generator.studyo/5000" as dead. Remote Address: Inet(generator.studyo/10.11.254.174:5000,Map())
I 1218 13:42:53.523 UTC THREAD35: [S L:/10.8.2.128:4340 R:/10.8.2.121:58542 S:1] rejected; resetting remote: REFUSED
Failure(not an SSL/TLS record: 00001804000000000000040000ffff000500010008000600004000fe0300000001 at remote address: generator.studyo/10.11.254.174:5000. Remote Info: Not Available, flags=0x09) with RemoteInfo -> Upstream Address: Not Available, Upstream id: Not Available, Downstream Address: generator.studyo/10.11.254.174:5000, Downstream label: $/inet/generator.studyo/5000, Trace Id: 5090c140a8a05938.5090c140a8a05938<:5090c140a8a05938 with Service -> 0.0.0.0/4340
WARN 1218 13:42:53.528 UTC finagle/netty4-7: An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.

Victory!!

Turns out that servicemesh.yaml hard-codes the default namespace in the router dtabs:

/srv => /#/io.l5d.k8s.grpc/default/grpc;

Replacing default with the namespace my service runs in (studyo) made it work.

I believe this also explains Test 2: with a port in the authority, the /svc => /$/io.buoyant.hostportPfx/hp rule rewrites the name to /$/inet/generator.studyo/5000, and the static client config then applies TLS to anything under /$/inet/{service}, hence the "not an SSL/TLS record" errors against my plaintext server.
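
Concretely, with studyo being my namespace, the grpc-outgoing rule becomes the line below (and the grpc-incoming router’s /srv rule needs the same default-to-studyo substitution):

/srv => /#/io.l5d.k8s.grpc/studyo/grpc ;         # /srv/service/package -> /#/io.l5d.k8s.grpc/studyo/grpc/service/package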

Pascal

But I feel like the dtabs for the h2 and grpc routers in servicemesh.yaml are not as complete as those for the http routers, where the namespace is explicitly parsed by the rules:

        /portNsSvc => /#/portNsSvcToK8s ;                 # /portNsSvc/http/default/foo -> /k8s/default/http/foo
        /host => /portNsSvc/http/default ;                # /host/foo -> /portNsSvc/http/default/foo
        /host => /portNsSvc/http ;                        # /host/default/foo -> /portNsSvc/http/default/foo

In contrast, the h2/grpc routers have the default namespace hardcoded in their rules:

        /srv => /#/io.l5d.k8s.grpc/default/grpc;         # /srv/service/package -> /#/io.l5d.k8s.grpc/default/grpc/service/package

I will try to figure out a way to make those routers parse the namespace and submit a PR.
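
As a rough, untested sketch of what I have in mind (assuming the outgoing router identifies requests by a service.namespace style name, as it did in my tests, and that the k8s Service names its gRPC port grpc), the grpc-outgoing dtab could reuse the existing portNsSvcToK8s rewrite namer the same way the http routers do:

        /k8s       => /#/io.l5d.k8s.grpc ;                      # /k8s/studyo/grpc/generator -> /#/io.l5d.k8s.grpc/studyo/grpc/generator
        /portNsSvc => /#/portNsSvcToK8s ;                       # /portNsSvc/grpc/studyo/generator -> /k8s/studyo/grpc/generator
        /host      => /portNsSvc/grpc/default ;                 # /host/generator -> /portNsSvc/grpc/default/generator
        /host      => /portNsSvc/grpc ;                         # /host/studyo/generator -> /portNsSvc/grpc/studyo/generator
        /svc       => /$/io.buoyant.http.domainToPathPfx/host ; # /svc/generator.studyo -> /host/studyo/generator

The grpc-incoming router would do the same with /#/io.l5d.k8s in place of /#/io.l5d.k8s.grpc, mirroring how http-incoming differs from http-outgoing.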

Pascal

Phew, glad you figured it out! We’re definitely open to PRs for improving servicemesh.yaml.
