RST_STREAM errors with gRPC

I’m trying to use linkerd in my Kubernetes cluster with some Golang gRPC applications. Service-to-service communication works fine until I restart one of my pods, at which point I start getting “stream terminated by RST_STREAM with error code: 7” (HTTP/2 REFUSED_STREAM) whenever another service tries to talk to the service running in the restarted pod. To get things working again I have to delete all the linkerd pods and let the DaemonSet recreate them.

I’ve made sure my Golang applications shut down cleanly on SIGTERM, closing both the gRPC ClientConn and the net.Listener the gRPC server is serving on.
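
For reference, the shutdown handling looks roughly like this (a trimmed sketch, not the actual service; addresses and ports are placeholders):

// Trimmed sketch of the SIGTERM handling described above; the
// addresses and ports are placeholders, not the real services.
package main

import (
	"log"
	"net"
	"os"
	"os/signal"
	"syscall"

	"google.golang.org/grpc"
)

func main() {
	lis, err := net.Listen("tcp", ":9090")
	if err != nil {
		log.Fatal(err)
	}
	srv := grpc.NewServer()
	// ... register gRPC services on srv ...

	// Outbound ClientConn (e.g. to the node-local l5d outgoing router).
	conn, err := grpc.Dial("localhost:4340", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}

	go func() {
		sig := make(chan os.Signal, 1)
		signal.Notify(sig, syscall.SIGTERM)
		<-sig
		conn.Close()       // close the outbound gRPC ClientConn
		srv.GracefulStop() // drain in-flight RPCs, then close the listener
	}()

	if err := srv.Serve(lis); err != nil {
		log.Fatal(err)
	}
}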

Any ideas? Thanks!

Per @klingerf’s request in Slack, here’s the l5d config I’m using.

admin:
  ip: 0.0.0.0
  port: 9990

# Namers provide Linkerd with service discovery information.  To use a
# namer, you reference it in the dtab by its prefix.  We define 2 namers:
# * /io.l5d.k8s gets the address of the target app
# * /io.l5d.k8s.grpc gets the address of the grpc-incoming Linkerd router on the target app's node
namers:
- kind: io.l5d.k8s
- kind: io.l5d.k8s
  prefix: /io.l5d.k8s.grpc
  transformers:
    # The daemonset transformer replaces the address of the target app with
    # the address of the grpc-incoming router of the Linkerd daemonset pod
    # on the target app's node.
  - kind: io.l5d.k8s.daemonset
    namespace: linkerd
    port: grpc-incoming
    service: l5d

# Telemeters export metrics and tracing data about Linkerd, the services it
# connects to, and the requests it processes.
telemetry:
- kind: io.l5d.prometheus # Expose Prometheus style metrics on :9990/admin/metrics/prometheus
- kind: io.l5d.recentRequests
  sampleRate: 0.25 # Tune this sample rate before going to production
# - kind: io.l5d.zipkin # Uncomment to enable exporting of zipkin traces
#   host: zipkin-collector.default.svc.cluster.local # Zipkin collector address
#   port: 9410
#   sampleRate: 1.0 # Set to a lower sample rate depending on your traffic volume

# The usage section controls anonymized usage reporting.  You can set the
# orgId to identify your organization, or set `enabled: false` to disable
# it entirely.
usage:
  enabled: false

# Routers define how Linkerd actually handles traffic.  Each router listens
# for requests, applies routing rules to those requests, and proxies them
# to the appropriate destinations.  Each router is protocol specific.
# For each protocol we can define an outgoing router and an incoming
# router; only the gRPC pair is shown here.  The application is expected
# to send traffic to the outgoing router, which proxies it to the
# incoming router of the Linkerd running on the target service's node.
# The incoming router then proxies the request to the target application
# itself.  (A minimal client-side sketch follows this config.)
routers:
- label: grpc-outgoing
  protocol: h2
  experimental: true
  servers:
  - port: 4340
    ip: 0.0.0.0
  identifier:
    kind: io.l5d.header.path
    segments: 1
  dtab: |
    /hp  => /$/inet ;                                # /hp/linkerd.io/8888 -> /$/inet/linkerd.io/8888
    /svc => /$/io.buoyant.hostportPfx/hp ;           # /svc/linkerd.io:8888 -> /hp/linkerd.io/8888
    /srv => /#/io.l5d.k8s.grpc/l5d-test/grpc ;       # /srv/service/package -> /#/io.l5d.k8s.grpc/l5d-test/grpc/service/package
    /svc => /$/io.buoyant.http.domainToPathPfx/srv ; # /svc/package.service -> /srv/service/package
  client:
    kind: io.l5d.static
    configs:
    # Always use TLS when sending to external grpc servers
    - prefix: "/$/inet/{service}"
      tls:
        commonName: "{service}"
- label: grpc-incoming
  protocol: h2
  experimental: true
  servers:
  - port: 4341
    ip: 0.0.0.0
  identifier:
    kind: io.l5d.header.path
    segments: 1
  interpreter:
    kind: default
    transformers:
    - kind: io.l5d.k8s.localnode
  dtab: |
    /srv => /#/io.l5d.k8s/l5d-test/grpc ;            # /srv/service/package -> /#/io.l5d.k8s/l5d-test/grpc/service/package
    /svc => /$/io.buoyant.http.domainToPathPfx/srv ; # /svc/package.service -> /srv/service/package
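
For concreteness, here’s roughly how a client sends a request through the grpc-outgoing router above (a minimal sketch: the foo service and its generated package are placeholders, and localhost:4340 assumes the node’s l5d outgoing router is reachable locally, e.g. via a hostPort):

// Hypothetical client-side sketch: dial the node-local grpc-outgoing
// router instead of the target service directly.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"

	foo "example.com/foo" // placeholder for the protoc-generated package
)

func main() {
	conn, err := grpc.Dial("localhost:4340", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := foo.NewFooClient(conn)
	// gRPC encodes this call as :path = /foo/Bar, so the
	// io.l5d.header.path identifier (segments: 1) yields /svc/foo.
	// The domainToPathPfx rule rewrites that to /srv/foo, which the
	// dtab maps to /#/io.l5d.k8s.grpc/l5d-test/grpc/foo.
	if _, err := client.Bar(context.Background(), &foo.BarRequest{}); err != nil {
		log.Fatal(err)
	}
}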

To follow up on this, I jumped onto the host running the pod in question and looked at the linkerd logs. It looks like an issue with the grpc-incoming router, and more specifically with the localnode transformer: when the pod for my application gets restarted, and thus gets a new pod IP, linkerd doesn’t appear to update its internal routing table with the new pod IP. The logs show it trying to send the message to a downstream address that’s still the pod’s old IP, not its new one.

I 1013 15:06:56.049 UTC THREAD35 TraceId:12be580742016dd3: FailureAccrualFactory marking connection to "%/io.l5d.k8s.localnode/10.48.15.63/#/io.l5d.k8s/l5d-test/grpc/foo" as dead. Remote Address: Inet(/10.48.15.62:9090,Map(nodeName -> gke-ops-default-pool-abc-xyz))
I 1013 15:06:56.054 UTC THREAD35: [S L:/10.48.15.63:4341 R:/10.48.15.63:48402 S:281] rejected; resetting remote: REFUSED
Failure(connection timed out: /10.48.15.62:9090 at remote address: /10.48.15.62:9090. Remote Info: Not Available, flags=0x09) with RemoteInfo -> Upstream Address: /10.48.15.63:48402, Upstream Client Id: Not Available, Downstream Address: /10.48.15.62:9090, Downstream Client Id: %/io.l5d.k8s.localnode/10.48.15.63/#/io.l5d.k8s/l5d-test/grpc/foo, Trace Id: 12be580742016dd3.7ec8b98c1771ea2f<:1401d7e13307ac34 with Service -> 0.0.0.0/4341
Caused by: com.twitter.finagle.ConnectionFailedException: connection timed out: /10.48.15.62:9090 at remote address: /10.48.15.62:9090. Remote Info: Not Available

@activeshadow Ah, nice sleuthing. I think this is the same issue as:

We are actively working on a fix for that and should have more info later today.

@klingerf would this still apply even if I’m not using namerd?

Also, if you still have the linkerd log around, you might want to check whether you’re seeing the “too old resource version” message as well, which would confirm it’s the issue I linked to.

@klingerf yeah, I’m seeing lots of these for each of my services…

W 1013 15:21:02.004 UTC THREAD27 TraceId:94ddcd2358d0e983: k8s ns l5d service linkerd endpoints watch error Status(Some(Status),Some(v1),Some(ObjectMeta(None,None,None,None,None,None,None,None,None,None,None)),Some(Failure),Some(too old resource version: 5816361 (5859365)),Some(Gone),None,Some(410))

Yeah, this also applies to linkerd unfortunately. The issue is with the io.l5d.k8s namer, which can be used by either linkerd or namerd.

Just to close the loop here, this issue was fixed in:

https://github.com/linkerd/linkerd/pull/1674

And the fix will go out with the linkerd 1.3.1 release, which will ship later this week.

Hello. I have set up linkerd 1.3.1 with Kubernetes 1.6.6, running the linkerd DaemonSet for gRPC between Node.js microservices. I’m hitting an issue where linkerd suddenly starts returning “RST_STREAM with error code: 7”. On investigation, I’m seeing that the downstream address is the old pod IP, not the expected new one. Per the discussion above, a similar issue was fixed in the k8s namer, but this might be a different case, as I could not find “too old resource version” in my linkerd logs. It doesn’t happen every time; it happens randomly, and the only way to get things working again is to restart the linkerd DaemonSet, and hence restart all gRPC clients, which is cumbersome. I’m attaching my linkerd config file and the linkerd logs.

linkerd-config.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: linkerd-config
  # namespace: app
data:
  config.yaml: |-
    admin:
      ip: 0.0.0.0
      port: 9990
    namers:
      - kind: io.l5d.k8s
        experimental: true
        host: 127.0.0.1
        port: 8001
    routers:
    - protocol: h2
      experimental: true
      label: grpc
      client:
        loadBalancer:
          kind: roundRobin
          maxEffort: 10
      servers:
        - port: 4140
          ip: 0.0.0.0
      dtab: |
        /svc/Governance => /#/io.l5d.k8s/default/grpc/governance-service ;
      identifier:
        kind: io.l5d.header.path
        segments: 1
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    app: linkerd
  name: linkerd
spec:
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 25%
  template:
    metadata:
      labels:
        app: linkerd
    spec:
      volumes:
      - name: linkerd-config
        configMap:
          name: "linkerd-config"
      nodeSelector:
        role: agent
      containers:
      - name: linkerd
        image: buoyantio/linkerd:1.3.1
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        args:
        - /io.buoyant/linkerd/config/config.yaml
        ports:
        - name: outgoing
          containerPort: 4140
          hostPort: 4140
        - name: incoming
          containerPort: 4141
        - name: admin
          containerPort: 9990
        volumeMounts:
        - name: "linkerd-config"
          mountPath: "/io.buoyant/linkerd/config"
          readOnly: true

      - name: kubectl
        image: buoyantio/kubectl:v1.6.2
        args:
        - "proxy"
        - "-p"
        - "8001"

linkerd-logs:

Oct 26 13:08:32 linkerd-7j79x linkerd INFO INFO: HttpMuxer[/admin/per_host_metrics.json] = com.twitter.finagle.stats.HostMetricsExporter(<function1>)
Oct 26 13:08:32 linkerd-7j79x linkerd  I 1026 07:38:32.456 UTC THREAD1: linkerd 1.3.1 (rev=fba06b305b28dca17fb1ae37be14774c70db98d3) built at 20171024-164701
Oct 26 13:08:33 linkerd-7j79x linkerd  I 1026 07:38:33.365 UTC THREAD1: Finagle version 7.1.0 (rev=37212517b530319f4ba08cc7473c8cd8c4b83479) built at 20170906-132024
Oct 26 13:08:36 linkerd-7j79x linkerd  I 1026 07:38:36.304 UTC THREAD1: Tracer: com.twitter.finagle.zipkin.thrift.ScribeZipkinTracer
Oct 26 13:08:36 linkerd-7j79x linkerd  I 1026 07:38:36.374 UTC THREAD1: connecting to usageData proxy at Set(Inet(stats.buoyant.io/104.28.23.233:443,Map()))
Oct 26 13:08:36 linkerd-7j79x linkerd  I 1026 07:38:36.908 UTC THREAD1: serving http admin on /0.0.0.0:9990
Oct 26 13:08:36 linkerd-7j79x linkerd  I 1026 07:38:36.958 UTC THREAD1: serving grpc on /0.0.0.0:4140
Oct 26 13:08:37 linkerd-7j79x linkerd  I 1026 07:38:37.030 UTC THREAD1: initialized
Oct 26 13:08:44 linkerd-5nl0m linkerd  I 1026 07:38:44.845 UTC THREAD35 TraceId:c59dff8e72ea4406: FailureAccrualFactory marking connection to "#/io.l5d.k8s/default/grpc/governance-service" as dead. Remote Address: Inet(/10.244.11.26:50054,Map(nodeName -> k8s-agent-b62c4b91-2))
Oct 26 13:08:44 linkerd-5nl0m linkerd  I 1026 07:38:44.847 UTC THREAD35: [S L:/10.244.9.184:4140 R:/10.244.9.61:59240 S:45] rejected; resetting remote: REFUSED
Oct 26 13:08:44 linkerd-5nl0m linkerd  Failure(connection timed out: /10.244.11.26:50054 at remote address: /10.244.11.26:50054. Remote Info: Not Available, flags=0x09) with RemoteInfo -> Upstream Address: Not Available, Upstream id: Not Available, Downstream Address: /10.244.11.26:50054, Downstream label: #/io.l5d.k8s/default/grpc/governance-service, Trace Id: c59dff8e72ea4406.c59dff8e72ea4406<:c59dff8e72ea4406 with Service -> 0.0.0.0/4140
Oct 26 13:08:44 linkerd-5nl0m linkerd  Caused by: com.twitter.finagle.ConnectionFailedException: connection timed out: /10.244.11.26:50054 at remote address: /10.244.11.26:50054. Remote Info: Not Available
Oct 26 13:08:44 linkerd-5nl0m linkerd  	at com.twitter.finagle.netty4.ConnectionBuilder$$anon$1.operationComplete(ConnectionBuilder.scala:124)
Oct 26 13:08:44 linkerd-5nl0m linkerd  	at com.twitter.finagle.netty4.ConnectionBuilder$$anon$1.operationComplete(ConnectionBuilder.scala:104)
Oct 26 13:08:44 linkerd-5nl0m linkerd  	at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
Oct 26 13:08:44 linkerd-5nl0m linkerd  	at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:500)
Oct 26 13:08:44 linkerd-5nl0m linkerd  	at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:479)
Oct 26 13:08:44 linkerd-5nl0m linkerd  	at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:420)
Oct 26 13:08:44 linkerd-5nl0m linkerd  	at io.netty.util.concurrent.DefaultPromise.setFailure(DefaultPromise.java:113)
Oct 26 13:08:44 linkerd-5nl0m linkerd  	at io.netty.channel.DefaultChannelPromise.setFailure(DefaultChannelPromise.java:87)
Oct 26 13:08:44 linkerd-5nl0m linkerd  	at com.twitter.finagle.netty4.channel.ConnectPromiseDelayListeners$$anon$2.operationComplete(ConnectPromiseDelayListeners.scala:52)
Oct 26 13:08:44 linkerd-5nl0m linkerd  	at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
Oct 26 13:08:44 linkerd-5nl0m linkerd  	at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:500)
Oct 26 13:08:44 linkerd-5nl0m linkerd  	at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:479)
Oct 26 13:08:44 linkerd-5nl0m linkerd  	at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:420)
Oct 26 13:08:44 linkerd-5nl0m linkerd  	at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:122)
Oct 26 13:08:44 linkerd-5nl0m linkerd  	at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:269)
Oct 26 13:08:44 linkerd-5nl0m linkerd  	at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
Oct 26 13:08:44 linkerd-5nl0m linkerd  	at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:120)
Oct 26 13:08:44 linkerd-5nl0m linkerd  	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
Oct 26 13:08:44 linkerd-5nl0m linkerd  	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403)
Oct 26 13:08:44 linkerd-5nl0m linkerd  	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463)
Oct 26 13:08:44 linkerd-5nl0m linkerd  	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
Oct 26 13:08:44 linkerd-5nl0m linkerd  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
Oct 26 13:08:44 linkerd-5nl0m linkerd  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
Oct 26 13:08:44 linkerd-5nl0m linkerd  	at com.twitter.finagle.util.BlockingTimeTrackingThreadFactory$$anon$1.run(BlockingTimeTrackingThreadFactory.scala:23)
Oct 26 13:08:44 linkerd-5nl0m linkerd  	at java.lang.Thread.run(Thread.java:748)
Oct 26 13:08:44 linkerd-5nl0m linkerd  Caused by: io.netty.channel.ConnectTimeoutException: connection timed out: /10.244.11.26:50054
Oct 26 13:08:44 linkerd-5nl0m linkerd  	at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:267)
Oct 26 13:08:44 linkerd-5nl0m linkerd  	... 10 more
Oct 26 13:08:44 linkerd-5nl0m linkerd
Oct 26 13:08:48 linkerd-5nl0m linkerd  I 1026 07:38:48.660 UTC THREAD22: [S L:/10.244.9.184:4140 R:/10.244.9.61:59240 S:47] rejected; resetting remote: REFUSED
Oct 26 13:08:48 linkerd-5nl0m linkerd  Failure(connection timed out: /10.244.11.26:50054 at remote address: /10.244.11.26:50054. Remote Info: Not Available, flags=0x09) with RemoteInfo -> Upstream Address: Not Available, Upstream id: Not Available, Downstream Address: /10.244.11.26:50054, Downstream label: #/io.l5d.k8s/default/grpc/governance-service, Trace Id: 6cc778bf1fcc5eae.6cc778bf1fcc5eae<:6cc778bf1fcc5eae with Service -> 0.0.0.0/4140
Oct 26 13:08:48 linkerd-5nl0m linkerd  Caused by: com.twitter.finagle.ConnectionFailedException: connection timed out: /10.244.11.26:50054 at remote address: /10.244.11.26:50054. Remote Info: Not Available
Oct 26 13:08:48 linkerd-5nl0m linkerd  	at com.twitter.finagle.netty4.ConnectionBuilder$$anon$1.operationComplete(ConnectionBuilder.scala:124)
Oct 26 13:08:48 linkerd-5nl0m linkerd  	at com.twitter.finagle.netty4.ConnectionBuilder$$anon$1.operationComplete(ConnectionBuilder.scala:104)
Oct 26 13:08:48 linkerd-5nl0m linkerd  	at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
Oct 26 13:08:48 linkerd-5nl0m linkerd  	at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:500)
Oct 26 13:08:48 linkerd-5nl0m linkerd  	at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:479)
Oct 26 13:08:48 linkerd-5nl0m linkerd  	at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:420)
Oct 26 13:08:48 linkerd-5nl0m linkerd  	at io.netty.util.concurrent.DefaultPromise.setFailure(DefaultPromise.java:113)
Oct 26 13:08:48 linkerd-5nl0m linkerd  	at io.netty.channel.DefaultChannelPromise.setFailure(DefaultChannelPromise.java:87)
Oct 26 13:08:48 linkerd-5nl0m linkerd  	at com.twitter.finagle.netty4.channel.ConnectPromiseDelayListeners$$anon$2.operationComplete(ConnectPromiseDelayListeners.scala:52)
Oct 26 13:08:48 linkerd-5nl0m linkerd  	at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
Oct 26 13:08:48 linkerd-5nl0m linkerd  	at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:500)
Oct 26 13:08:48 linkerd-5nl0m linkerd  	at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:479)
Oct 26 13:08:48 linkerd-5nl0m linkerd  	at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:420)
Oct 26 13:08:48 linkerd-5nl0m linkerd  	at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:122)
Oct 26 13:08:48 linkerd-5nl0m linkerd  	at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:269)
Oct 26 13:08:48 linkerd-5nl0m linkerd  	at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
Oct 26 13:08:48 linkerd-5nl0m linkerd  	at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:120)
Oct 26 13:08:48 linkerd-5nl0m linkerd  	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
Oct 26 13:08:48 linkerd-5nl0m linkerd  	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403)
Oct 26 13:08:48 linkerd-5nl0m linkerd  	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463)
Oct 26 13:08:48 linkerd-5nl0m linkerd  	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
Oct 26 13:08:48 linkerd-5nl0m linkerd  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
Oct 26 13:08:48 linkerd-5nl0m linkerd  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
Oct 26 13:08:48 linkerd-5nl0m linkerd  	at com.twitter.finagle.util.BlockingTimeTrackingThreadFactory$$anon$1.run(BlockingTimeTrackingThreadFactory.scala:23)
Oct 26 13:08:48 linkerd-5nl0m linkerd  	at java.lang.Thread.run(Thread.java:748)
Oct 26 13:08:48 linkerd-5nl0m linkerd  Caused by: io.netty.channel.ConnectTimeoutException: connection timed out: /10.244.11.26:50054
Oct 26 13:08:48 linkerd-5nl0m linkerd  	... 10 more
Oct 26 13:08:48 linkerd-5nl0m linkerd
Oct 26 13:10:43 linkerd-5nl0m linkerd  E 1026 07:40:43.102 UTC THREAD27: [S L:/10.244.9.184:4140 R:/10.244.9.61:59240] dispatcher failed
Oct 26 13:10:43 linkerd-5nl0m linkerd  com.twitter.finagle.ChannelClosedException: ChannelException at remote address: /10.244.9.61:59240. Remote Info: Not Available
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at com.twitter.finagle.netty4.transport.ChannelTransport$$anon$1.channelInactive(ChannelTransport.scala:188)
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245)
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231)
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224)
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at io.netty.channel.ChannelInboundHandlerAdapter.channelInactive(ChannelInboundHandlerAdapter.java:75)
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245)
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231)
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224)
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at io.netty.channel.ChannelInboundHandlerAdapter.channelInactive(ChannelInboundHandlerAdapter.java:75)
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245)
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231)
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224)
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at io.netty.channel.ChannelInboundHandlerAdapter.channelInactive(ChannelInboundHandlerAdapter.java:75)
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at com.twitter.finagle.netty4.channel.ChannelRequestStatsHandler.channelInactive(ChannelRequestStatsHandler.scala:35)
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245)
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231)
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224)
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at io.netty.channel.ChannelInboundHandlerAdapter.channelInactive(ChannelInboundHandlerAdapter.java:75)
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245)
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231)
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224)
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:377)
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at io.netty.handler.codec.ByteToMessageDecoder.channelInactive(ByteToMessageDecoder.java:342)
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at io.netty.handler.codec.http2.Http2ConnectionHandler.channelInactive(Http2ConnectionHandler.java:391)
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245)
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231)
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224)
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at io.netty.channel.ChannelInboundHandlerAdapter.channelInactive(ChannelInboundHandlerAdapter.java:75)
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245)
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231)
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224)
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at io.netty.channel.ChannelInboundHandlerAdapter.channelInactive(ChannelInboundHandlerAdapter.java:75)
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at com.twitter.finagle.netty4.channel.ChannelStatsHandler.channelInactive(ChannelStatsHandler.scala:131)
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245)
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231)
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224)
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelInactive(DefaultChannelPipeline.java:1337)
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245)
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231)
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at io.netty.channel.DefaultChannelPipeline.fireChannelInactive(DefaultChannelPipeline.java:916)
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at io.netty.channel.AbstractChannel$AbstractUnsafe$7.run(AbstractChannel.java:744)
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403)
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463)
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at com.twitter.finagle.util.BlockingTimeTrackingThreadFactory$$anon$1.run(BlockingTimeTrackingThreadFactory.scala:23)
Oct 26 13:10:43 linkerd-5nl0m linkerd  	at java.lang.Thread.run(Thread.java:748)