Linkerd h2 router rejects requests from Go client

Hello,

I have a simple Go application that both sends and serves HTTP GET requests. When I point this app at itself, I can see that it uses the HTTP/2 protocol. However, when I try to use linkerd as https_proxy, the request fails on the linkerd side with:

E 0803 11:52:44.591 UTC THREAD24: [S L:/172.17.0.11:4240 R:/172.17.0.6:48958] dispatcher failed
java.lang.ClassCastException: Transport.cast failed. Expected type io.netty.handler.codec.http2.Http2Frame but found io.netty.handler.codec.http.DefaultHttpRequest

So my client is indeed contacting the right port (4240), but with https_proxy it ends up speaking HTTP/1.1 for some reason. I don’t see any debug output from my application with GODEBUG=http2debug=2, which means it fails to establish an HTTP/2 connection from the very beginning.

It looks like a bug in either Go or linkerd. Please help.
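
For reference, the app is essentially this (a trimmed-down sketch rather than the exact code; the port and cert paths are placeholders):

package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"net/http"
	"time"

	"golang.org/x/net/http2"
)

func main() {
	// Server side: ListenAndServeTLS negotiates HTTP/2 automatically
	// via ALPN, so nothing special is needed here.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "served over %s\n", r.Proto)
	})
	go func() {
		log.Fatal(http.ListenAndServeTLS(":8543", "certificate.pem", "key.pem", nil))
	}()
	time.Sleep(time.Second) // crude: give the listener time to start

	// Client side: a Transport with an explicit TLSClientConfig does
	// not get automatic h2, so it has to be enabled explicitly.
	tr := &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // self-signed cert in this sketch
	}
	if err := http2.ConfigureTransport(tr); err != nil {
		log.Fatal(err)
	}
	client := &http.Client{Transport: tr}

	resp, err := client.Get("https://localhost:8543/")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("client saw:", resp.Proto) // prints HTTP/2.0 when ALPN succeeds
}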

I’m using the linkerd:latest image and Go 1.8. The TLS setup for linkerd looks like this:

- protocol: h2
  experimental: true
  label: outgoing-h2
  dstPrefix: /svcs
  interpreter:
    kind: io.l5d.namerd
    dst: /$/inet/namerd.test.svc.cluster.local/4100
    namespace: internal_out
    transformers:
    - kind: io.l5d.k8s.daemonset
      namespace: test
      port: incoming-h2
      service: l5d
  servers:
  - port: 4240
    ip: 0.0.0.0
    tls:
      certPath: /certificates/certificate.pem
      keyPath: /certificates/key.pem
      caCertPath: /certificates/cacert.pem
  client:
    tls:
      commonName: l5d
      trustCerts:
      - /certificates/cacert.pem
      clientAuth:
        certPath: /certificates/certificate.pem
        keyPath: /certificates/key.pem

When I try to call my app behind linkerd with curl -vv, I get this:

* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to l5d (127.0.0.1) port 31412 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: …/mesh/certificates/cacertificate.pem
    CApath: none
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server accepted to use h2
* Proxy certificate:
*  subject: CN=l5d; C=US
*  start date: Jul 24 15:04:33 2017 GMT
*  expire date: Jun 30 15:04:33 2117 GMT
*  common name: l5d (matched)
*  issuer: C=FR; CN=l5d CA
* SSL certificate verify ok.
* Establish HTTP proxy tunnel to a200:443
> CONNECT a200:443 HTTP/1.1
> Host: a200:443
> User-Agent: curl/7.54.1
> Proxy-Connection: Keep-Alive
>
* TLSv1.2 (IN), TLS alert, Client hello (1):
* Proxy CONNECT aborted
* Connection #0 to host l5d left intact
curl: (56) Proxy CONNECT aborted

So curl manages to establish a connection with the proxy, but then the tunnel fails, and in the linkerd logs I see the same Transport.cast failed complaint.

The https_proxy environment variable typically causes the application to do CONNECT tunneling, which linkerd does not support. Instead of setting https_proxy, configure your application to send requests directly to linkerd.
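
Something along these lines should do it (a rough sketch, untested against your setup; the linkerd address and cert paths are assumptions based on your config, and a200 is taken from your curl output):

package main

import (
	"crypto/tls"
	"crypto/x509"
	"io/ioutil"
	"log"
	"net"
	"net/http"

	"golang.org/x/net/http2"
)

func main() {
	// Trust linkerd's CA (path taken from your config; adjust as needed).
	caPEM, err := ioutil.ReadFile("certificates/cacert.pem")
	if err != nil {
		log.Fatal(err)
	}
	caPool := x509.NewCertPool()
	caPool.AppendCertsFromPEM(caPEM)

	linkerdAddr := "127.0.0.1:4240" // linkerd's h2 server (assumption)

	tr := &http.Transport{
		// Always dial linkerd, no matter which host the URL names.
		DialTLS: func(network, addr string) (net.Conn, error) {
			return tls.Dial(network, linkerdAddr, &tls.Config{
				ServerName: "l5d", // must match linkerd's server cert
				RootCAs:    caPool,
				NextProtos: []string{"h2"}, // offer h2 via ALPN
				// Add Certificates here if the server demands a client cert.
			})
		},
	}
	if err := http2.ConfigureTransport(tr); err != nil {
		log.Fatal(err)
	}
	client := &http.Client{Transport: tr}

	// The URL's host ("a200") becomes the :authority header,
	// which is what linkerd routes on.
	resp, err := client.Get("https://a200/")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println(resp.Proto, resp.Status)
}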

Thanks a lot, Alex.
Now I get a “JDK provider does not support NPN_AND_ALPN protocol” exception in the linkerd logs.
And the client logs look like this:

2017/08/04 11:27:14 http2: Transport creating client conn 0xc42006c820 to 127.0.0.1:31412
2017/08/04 11:27:14 http2: Framer 0xc4201520e0: wrote SETTINGS len=18, settings: ENABLE_PUSH=0, INITIAL_WINDOW_SIZE=4194304, MAX_HEADER_LIST_SIZE=10485760
2017/08/04 11:27:14 http2: Framer 0xc4201520e0: wrote WINDOW_UPDATE len=4 (conn) incr=1073741824
2017/08/04 11:27:14 http2: Transport encoding header ":authority" = "a200"
2017/08/04 11:27:14 http2: Transport encoding header ":method" = "GET"
2017/08/04 11:27:14 http2: Transport encoding header ":path" = "/"
2017/08/04 11:27:14 http2: Transport encoding header ":scheme" = "https"
2017/08/04 11:27:14 http2: Transport encoding header "content-type" = "text/plain"
2017/08/04 11:27:14 http2: Transport encoding header "accept-encoding" = "text/plain"
2017/08/04 11:27:14 http2: Transport encoding header "user-agent" = "Go-http-client/2.0"
2017/08/04 11:27:14 http2: Framer 0xc4201520e0: wrote HEADERS flags=END_HEADERS stream=1 len=41
2017/08/04 11:27:14 http2: Framer 0xc4201520e0: wrote DATA flags=END_STREAM stream=1 len=0 data=""
2017/08/04 11:27:14 http2: Framer 0xc4201520e0: read SETTINGS len=0
2017/08/04 11:27:14 http2: Transport received SETTINGS len=0
2017/08/04 11:27:14 http2: Framer 0xc4201520e0: wrote SETTINGS flags=ACK len=0
2017/08/04 11:27:14 http2: Framer 0xc4201520e0: read SETTINGS flags=ACK len=0
2017/08/04 11:27:14 http2: Transport received SETTINGS flags=ACK len=0
2017/08/04 11:27:14 http2: Framer 0xc4201520e0: read RST_STREAM stream=1 len=4 ErrCode=REFUSED_STREAM
2017/08/04 11:27:14 http2: Transport received RST_STREAM stream=1 len=4 ErrCode=REFUSED_STREAM
2017/08/04 11:27:14 RoundTrip failure: stream error: stream ID 1; REFUSED_STREAM

Hmmmm… based on that error message, it seems like BoringSSL is not being used and linkerd is falling back to the JDK-provided SSL. I’m not sure why that would be. We’re going to see if we can reproduce this, thanks. We’ll keep you updated.

Thanks, will wait for the news.

I tried to hack your Docker image a little by adding the Jetty ALPN boot jar (8.1.11.v20170118) via -Xbootclasspath/p (for this I had to cut the shell wrapper out of the exec bundle), but it didn’t change anything.

I haven’t found any mention of BoringSSL in the bundled jar.

Hi @smartptr, I’ve been trying to reproduce this with Linkerd 1.1.2 running in Docker, proxying TLS between nghttp and nghttpd. So far I haven’t been able to reproduce it – the TLS handshake appears to negotiate ALPN successfully.

Logs from the server (nghttpd):

[ALPN] client offers:
 * h2
SSL/TLS handshake completed
The negotiated protocol: h2
[id=1] [ 15.661] send SETTINGS frame <length=6, flags=0x00, stream_id=0>
          (niv=1)
          [SETTINGS_MAX_CONCURRENT_STREAMS(0x03):100]
[id=1] [ 15.664] recv SETTINGS frame <length=6, flags=0x00, stream_id=0>
          (niv=1)
          [SETTINGS_ENABLE_PUSH(0x02):0]
[id=1] [ 15.664] recv (stream_id=3) :method: GET
[id=1] [ 15.664] recv (stream_id=3) :path: /
[id=1] [ 15.664] recv (stream_id=3) :scheme: https
[id=1] [ 15.664] recv (stream_id=3) :authority: localhost:4240
[id=1] [ 15.664] recv (stream_id=3) accept: */*
[id=1] [ 15.664] recv (stream_id=3) accept-encoding: gzip, deflate
[id=1] [ 15.664] recv (stream_id=3) user-agent: nghttp2/1.24.0
[id=1] [ 15.664] recv (stream_id=3) l5d-dst-service: /svcs/localhost:4240
[id=1] [ 15.664] recv (stream_id=3) via: h2 linkerd
[id=1] [ 15.664] recv (stream_id=3) l5d-dst-client: /$/inet/docker.for.mac.localhost/8888
[id=1] [ 15.664] recv (stream_id=3) l5d-ctx-trace: Arqkib1DZwFVmegj0eAKK1WZ6CPR4AorAAAAAAAAAAA=
[id=1] [ 15.664] recv (stream_id=3) l5d-reqid: 5599e823d1e00a2b
[id=1] [ 15.664] recv HEADERS frame <length=185, flags=0x25, stream_id=3>
          ; END_STREAM | END_HEADERS | PRIORITY
          (padlen=0, dep_stream_id=0, weight=16, exclusive=0)
          ; Open new stream
[id=1] [ 15.664] send SETTINGS frame <length=0, flags=0x01, stream_id=0>
          ; ACK
          (niv=0)
[id=1] [ 15.664] send HEADERS frame <length=69, flags=0x04, stream_id=3>
          ; END_HEADERS
          (padlen=0)
          ; First response header
          :status: 404
          server: nghttpd nghttp2/1.24.0
          date: Mon, 07 Aug 2017 17:24:26 GMT
          content-type: text/html; charset=UTF-8
          content-length: 147
[id=1] [ 15.664] send DATA frame <length=147, flags=0x01, stream_id=3>
          ; END_STREAM
[id=1] [ 15.664] stream_id=3 closed
[id=1] [ 15.665] recv SETTINGS frame <length=0, flags=0x01, stream_id=0>
          ; ACK
          (niv=0)

and the client (nghttp):

[  0.013] Connected
The negotiated protocol: h2
[  0.045] send SETTINGS frame <length=12, flags=0x00, stream_id=0>
          (niv=2)
          [SETTINGS_MAX_CONCURRENT_STREAMS(0x03):100]
          [SETTINGS_INITIAL_WINDOW_SIZE(0x04):65535]
[  0.045] send PRIORITY frame <length=5, flags=0x00, stream_id=3>
          (dep_stream_id=0, weight=201, exclusive=0)
[  0.045] send PRIORITY frame <length=5, flags=0x00, stream_id=5>
          (dep_stream_id=0, weight=101, exclusive=0)
[  0.045] send PRIORITY frame <length=5, flags=0x00, stream_id=7>
          (dep_stream_id=0, weight=1, exclusive=0)
[  0.045] send PRIORITY frame <length=5, flags=0x00, stream_id=9>
          (dep_stream_id=7, weight=1, exclusive=0)
[  0.045] send PRIORITY frame <length=5, flags=0x00, stream_id=11>
          (dep_stream_id=3, weight=1, exclusive=0)
[  0.045] send HEADERS frame <length=38, flags=0x25, stream_id=13>
          ; END_STREAM | END_HEADERS | PRIORITY
          (padlen=0, dep_stream_id=11, weight=16, exclusive=0)
          ; Open new stream
          :method: GET
          :path: /
          :scheme: https
          :authority: localhost:4240
          accept: */*
          accept-encoding: gzip, deflate
          user-agent: nghttp2/1.24.0
[  0.055] recv SETTINGS frame <length=0, flags=0x00, stream_id=0>
          (niv=0)
[  0.055] recv SETTINGS frame <length=0, flags=0x01, stream_id=0>
          ; ACK
          (niv=0)
[  0.056] send SETTINGS frame <length=0, flags=0x01, stream_id=0>
          ; ACK
          (niv=0)
[  0.080] recv (stream_id=13) :status: 404
[  0.080] recv (stream_id=13) server: nghttpd nghttp2/1.24.0
[  0.080] recv (stream_id=13) date: Mon, 07 Aug 2017 17:24:26 GMT
[  0.080] recv (stream_id=13) content-type: text/html; charset=UTF-8
[  0.080] recv (stream_id=13) content-length: 147
[  0.080] recv (stream_id=13) l5d-success-class: 1.0
[  0.080] recv (stream_id=13) via: h2 linkerd
[  0.080] recv HEADERS frame <length=100, flags=0x24, stream_id=13>
          ; END_HEADERS | PRIORITY
          (padlen=0, dep_stream_id=0, weight=16, exclusive=0)
          ; First response header
<html><head><title>404 Not Found</title></head><body><h1>404 Not Found</h1><hr><address>nghttpd nghttp2/1.24.0 at port 8888</address></body></html>[  0.082] recv DATA frame <length=147, flags=0x01, stream_id=13>
          ; END_STREAM
[  0.082] send GOAWAY frame <length=8, flags=0x00, stream_id=0>
          (last_stream_id=0, error_code=NO_ERROR(0x00), opaque_data(0)=[])

Thanks @eliza. Interesting: I see you’re on a Mac, and I have a friend for whom h2 works on a Mac as well. I’m on Ubuntu, and whatever I do I get the same issue.
I’m now testing the standalone bundle (not in Docker) with a minimal configuration, and I get the same result. Here’s the exact setup:

admin:
  port: 9990

usage:
  enabled: false

namers:
- kind: io.l5d.fs
  rootDir: services

routers:
- protocol: h2
  experimental: true
  label: outgoing-h2
  dstPrefix: /svc
  dtab: |
    /svc => /#/io.l5d.fs
  servers:
  - port: 4240
    ip: 0.0.0.0
    tls:
      certPath: certificates/certificate.pem
      keyPath: certificates/key.pem
      caCertPath: certificates/cacertificate.pem
  identifier:
    kind: io.l5d.header.token
    header: ":authority"
  client:
    tls:
      commonName: l5d
      disableValidation: true
      trustCerts:
      - certificates/cacertificate.pem
      clientAuth:
        certPath: certificates/certificate.pem
        keyPath: certificates/key.pem

File system:

$ head -v services/*
==> services/l5d:4240 <==
127.0.0.1 8543

The admin GUI shows the correct route to 127.0.0.1:8543, which is the address of the listening server.

The command line: ./linkerd-1.1.2-exec config.yaml -log.level=DEBUG

When I fire a request from Chrome to https://l5d:4240/, I get a seemingly endless (or at least very long) stream of errors in the linkerd output:

I 0808 09:21:39.839 UTC THREAD1: serving outgoing-h2 on /0.0.0.0:4240
I 0808 09:21:39.867 UTC THREAD1: initialized
D 0808 09:21:48.963 UTC THREAD29 TraceId:946640c03f7d357e: fs observing services
D 0808 09:21:48.974 UTC THREAD34 TraceId:946640c03f7d357e: fs init file services => services/l5d:4240
D 0808 09:21:48.980 UTC THREAD34 TraceId:946640c03f7d357e: fs lookup /#/io.l5d.fs l5d:4240 /
D 0808 09:21:48.981 UTC THREAD34 TraceId:946640c03f7d357e: fs lookup /#/io.l5d.fs file l5d:4240
D 0808 09:21:48.999 UTC THREAD34 TraceId:946640c03f7d357e: fs lookup /#/io.l5d.fs addr l5d:4240 15 bytes
E 0808 09:21:56.004 UTC THREAD37: [S L:/127.0.0.1:4240 R:/127.0.0.1:38320] dispatcher failed
com.twitter.finagle.ChannelClosedException: ChannelException at remote address: /127.0.0.1:38320. Remote Info: Not Available
        at com.twitter.finagle.netty4.transport.ChannelTransport$$anon$1.channelInactive(ChannelTransport.scala:186)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224)
        at io.netty.channel.ChannelInboundHandlerAdapter.channelInactive(ChannelInboundHandlerAdapter.java:75)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224)
        at io.netty.channel.ChannelInboundHandlerAdapter.channelInactive(ChannelInboundHandlerAdapter.java:75)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224)
        at io.netty.channel.ChannelInboundHandlerAdapter.channelInactive(ChannelInboundHandlerAdapter.java:75)
        at com.twitter.finagle.netty4.channel.ChannelRequestStatsHandler.channelInactive(ChannelRequestStatsHandler.scala:36)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224)
        at io.netty.channel.ChannelInboundHandlerAdapter.channelInactive(ChannelInboundHandlerAdapter.java:75)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224)
        at io.netty.channel.ChannelInboundHandlerAdapter.channelInactive(ChannelInboundHandlerAdapter.java:75)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224)
        at io.netty.channel.ChannelInboundHandlerAdapter.channelInactive(ChannelInboundHandlerAdapter.java:75)
        at com.twitter.finagle.netty4.channel.ChannelStatsHandler.channelInactive(ChannelStatsHandler.scala:115)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224)
        at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:360)
        at io.netty.handler.codec.ByteToMessageDecoder.channelInactive(ByteToMessageDecoder.java:325)
        at io.netty.handler.ssl.SslHandler.channelInactive(SslHandler.java:900)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224)
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelInactive(DefaultChannelPipeline.java:1329)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231)
        at io.netty.channel.DefaultChannelPipeline.fireChannelInactive(DefaultChannelPipeline.java:908)
        at io.netty.channel.AbstractChannel$AbstractUnsafe$7.run(AbstractChannel.java:744)
        at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
        at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:462)
        at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at com.twitter.finagle.util.BlockingTimeTrackingThreadFactory$$anon$1.run(BlockingTimeTrackingThreadFactory.scala:24)
        at java.lang.Thread.run(Thread.java:748)

D 0808 09:21:56.005 UTC THREAD37: [S L:/127.0.0.1:4240 R:/127.0.0.1:38320] go away: GoAway.InternalError
D 0808 09:21:56.005 UTC THREAD37: [S L:/127.0.0.1:4240 R:/127.0.0.1:38320] resetting all streams: Reset.Cancel
D 0808 09:21:56.020 UTC THREAD37: [S L:/127.0.0.1:4240 R:/127.0.0.1:38320] transport closed
com.twitter.finagle.ChannelClosedException: ChannelException at remote address: /127.0.0.1:38320. Remote Info: Not Available
        at com.twitter.finagle.netty4.transport.ChannelTransport$$anon$1.channelInactive(ChannelTransport.scala:186)

...
D 0808 09:21:56.034 UTC THREAD38: h2 server pipeline: installing framer: DefaultChannelPipeline{(ssl = io.netty.handler.ssl.SslHandler), (channelStats = com.twitter.finagle.netty4.channel.ChannelStatsHandler), (UnpoolHandler$#0 = com.twitter.finagle.buoyant.h2.netty4.UnpoolHandler$), (H2FrameCodec$ConnectionHandler#0 = io.netty.handler.codec.http2.H2FrameCodec$ConnectionHandler), (h2 framer = io.netty.handler.codec.http2.H2FrameCodec), (channelRequestStatsHandler = com.twitter.finagle.netty4.channel.ChannelRequestStatsHandler), (exceptionHandler = com.twitter.finagle.netty4.channel.ChannelExceptionHandler), (finagleBridge = com.twitter.finagle.netty4.channel.ServerBridge), (finagleChannelTransport = com.twitter.finagle.netty4.transport.ChannelTransport$$anon$1)}
D 0808 09:21:56.066 UTC THREAD38: [S L:/127.0.0.1:4240 R:/127.0.0.1:38322 S:1] initialized stream
D 0808 09:21:56.128 UTC THREAD38 TraceId:9543477a84c6c09b: fs observing services
D 0808 09:21:56.131 UTC THREAD31 TraceId:9543477a84c6c09b: fs init file services => services/l5d:4240
D 0808 09:21:56.132 UTC THREAD31 TraceId:9543477a84c6c09b: fs lookup /#/io.l5d.fs file l5d:4240
D 0808 09:21:56.132 UTC THREAD31 TraceId:9543477a84c6c09b: fs lookup /#/io.l5d.fs l5d:4240 /
D 0808 09:21:56.140 UTC THREAD34 TraceId:9543477a84c6c09b: fs waiting for events on services
D 0808 09:21:56.182 UTC THREAD38 TraceId:9543477a84c6c09b: fs lookup /#/io.l5d.fs addr l5d:4240 15 bytes
D 0808 09:21:56.204 UTC THREAD38 TraceId:9543477a84c6c09b: fs lookup /#/io.l5d.fs addr l5d:4240 15 bytes
D 0808 09:21:56.206 UTC THREAD38 TraceId:9543477a84c6c09b: fs lookup /#/io.l5d.fs addr l5d:4240 15 bytes
D 0808 09:21:56.213 UTC THREAD38 TraceId:9543477a84c6c09b: fs lookup /#/io.l5d.fs addr l5d:4240 15 bytes
WARN 0808 11:21:56.438 CEST finagle/netty4-1: Failed to initialize a channel. Closing: [id: 0xea268682]
java.lang.UnsupportedOperationException: JDK provider does not support NPN_AND_ALPN protocol
        at io.netty.handler.ssl.JdkSslContext.toNegotiator(JdkSslContext.java:317)
        at io.netty.handler.ssl.JdkSslClientContext.<init>(JdkSslClientContext.java:272)
        at io.netty.handler.ssl.SslContext.newClientContextInternal(SslContext.java:770)
        at io.netty.handler.ssl.SslContextBuilder.build(SslContextBuilder.java:446)
        at com.twitter.finagle.netty4.ssl.client.Netty4ClientEngineFactory.apply(Netty4ClientEngineFactory.scala:66)
        at com.twitter.finagle.netty4.ssl.client.Netty4ClientSslHandler.$anonfun$initChannel$1(Netty4ClientSslHandler.scala:114)
        at com.twitter.finagle.netty4.ssl.client.Netty4ClientSslHandler.$anonfun$initChannel$1$adapted(Netty4ClientSslHandler.scala:111)
        at scala.Option.foreach(Option.scala:257)
        at com.twitter.finagle.netty4.ssl.client.Netty4ClientSslHandler.initChannel(Netty4ClientSslHandler.scala:111)
        at io.netty.channel.ChannelInitializer.initChannel(ChannelInitializer.java:113)
        at io.netty.channel.ChannelInitializer.handlerAdded(ChannelInitializer.java:105)
        at io.netty.channel.DefaultChannelPipeline.callHandlerAdded0(DefaultChannelPipeline.java:597)
        at io.netty.channel.DefaultChannelPipeline.addFirst(DefaultChannelPipeline.java:178)
        at io.netty.channel.DefaultChannelPipeline.addFirst(DefaultChannelPipeline.java:143)
        at com.twitter.finagle.netty4.channel.AbstractNetty4ClientChannelInitializer.initChannel(AbstractNetty4ClientChannelInitializer.scala:77)
        at com.twitter.finagle.netty4.channel.RawNetty4ClientChannelInitializer.initChannel(RawNetty4ClientChannelInitializer.scala:18)

Perhaps it’s the first exception that causes the ALPN failure.

Curiouser and curiouser! I’ll see about testing this on a Linux box.

By the way, h2c works: if I remove the client part from the router configuration, the nghttpd server receives HTTP/2 without TLS.
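
Concretely, that’s the router from the config above with the whole client: block dropped (sketch, everything else unchanged):

- protocol: h2
  experimental: true
  label: outgoing-h2
  dstPrefix: /svc
  dtab: |
    /svc => /#/io.l5d.fs
  servers:
  - port: 4240
    ip: 0.0.0.0
    tls:
      certPath: certificates/certificate.pem
      keyPath: certificates/key.pem
      caCertPath: certificates/cacertificate.pem
  identifier:
    kind: io.l5d.header.token
    header: ":authority"
  # no client: section, so linkerd speaks plain-text h2 (h2c) upstream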

Okay @smartptr, I can confirm that I’ve reproduced this issue on Linux (Ubuntu 16.04 LTS). Sorry it took a couple of days.

Thanks for reporting, we’ll look into getting this fixed!

Should we track this in a GitHub issue?

Good call, @william – I’ve opened linkerd#1581. @smartptr, you’ll be able to track progress on this issue there.

Hi again @smartptr, we’ve traced this issue to an incompatibility between the disableValidation: true configuration and clientAuth. It turns out that disabling validation forces the use of the JDK SSL provider, which doesn’t support client authentication.
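
In the meantime, dropping disableValidation from the client block (and keeping trustCerts for validation) should keep linkerd on the native provider. Roughly, the client section of your config becomes:

  client:
    tls:
      commonName: l5d
      trustCerts:
      - certificates/cacertificate.pem
      clientAuth:
        certPath: certificates/certificate.pem
        keyPath: certificates/key.pem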

We’ve made these two configurations mutually exclusive in the upcoming release and closed the corresponding issue, linkerd#1581.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.