Getting NotSslRecordException when attempting to connect via TLS

I’ve configured the incoming router’s server to use TLS with a self-signed certificate, like so:

- protocol: http
  servers:
  - port: 5151
    ip: 0.0.0.0
    # accept incoming TLS traffic from remote Linkerd
    tls:
      certPath: /mnt/mesos/sandbox/certificates/certificate.pem
      keyPath: /mnt/mesos/sandbox/certificates/key.pem
  dtab: >-
    /%/io.l5d.localhost/#/io.l5d.marathon => /#/io.l5d.marathon;
    /host                                 => /$/io.buoyant.http.domainToPathPfx/domain;
    /svc                                  => /host
  label: incoming
  interpreter:
    kind: default
    transformers:
    - kind: io.l5d.localhost

When I attempt to connect, I get the following exception:

WARN 1219 12:25:47.983 UTC finagle/netty4-7: An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.handler.codec.DecoderException: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 474554202f7465737420485454502f312e310d0a582d466f727761726465642d466f723a2031302e32312e322e3135360d0a582d466f727761726465642d50726f746f3a20687474700d0a582d466f727761726465642d506f72743a20383030300d0a486f73743a2073657276696365612e6d61737465722e6c696e6b6572642d636f6d6d2d746573742d736572766963652e64636f732d73736c2d696e7465726e616c2e6465762d6672612d617070732e7a6f6f7a2e636f3a383030300d0a582d416d7a6e2d54726163652d49643a20526f6f743d312d35613339303534622d3463626539356665366633616164366334316166363465330d0a557365722d4167656e743a206375726c2f372e35302e330d0a4163636570743a202a2f2a0d0a636f6e74656e742d6c656e6774683a20300d0a6c35642d6473742d736572766963653a202f7376632f73657276696365612e6d61737465722e6c696e6b6572642d636f6d6d2d746573742d736572766963652e64636f732d73736c2d696e7465726e616c2e6465762d6672612d617070732e7a6f6f7a2e636f3a383030300d0a5669613a20312e31206c696e6b6572640d0a6c35642d6473742d636c69656e743a202f252f696f2e6c35642e706f72742f333132332f232f696f2e6c35642e6d61726174686f6e2f6c696e6b6572642d636f6d6d2d746573742d736572766963652f6d61737465722f73657276696365610d0a6c35642d6374782d74726163653a206570477049706e474342746f656f6c3657696d66356d6836695870614b5a2f6d41414141414141414141413d0d0a6c35642d72657169643a20363837613839376135613239396665360d0a0d0a
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:459)
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1342)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:934)
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:134)
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at com.twitter.finagle.util.BlockingTimeTrackingThreadFactory$$anon$1.run(BlockingTimeTrackingThreadFactory.scala:23)
	at java.lang.Thread.run(Thread.java:748)
Caused by: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 474554202f7465737420485454502f312e310d0a582d466f727761726465642d466f723a2031302e32312e322e3135360d0a582d466f727761726465642d50726f746f3a20687474700d0a582d466f727761726465642d506f72743a20383030300d0a486f73743a2073657276696365612e6d61737465722e6c696e6b6572642d636f6d6d2d746573742d736572766963652e64636f732d73736c2d696e7465726e616c2e6465762d6672612d617070732e7a6f6f7a2e636f3a383030300d0a582d416d7a6e2d54726163652d49643a20526f6f743d312d35613339303534622d3463626539356665366633616164366334316166363465330d0a557365722d4167656e743a206375726c2f372e35302e330d0a4163636570743a202a2f2a0d0a636f6e74656e742d6c656e6774683a20300d0a6c35642d6473742d736572766963653a202f7376632f73657276696365612e6d61737465722e6c696e6b6572642d636f6d6d2d746573742d736572766963652e64636f732d73736c2d696e7465726e616c2e6465762d6672612d617070732e7a6f6f7a2e636f3a383030300d0a5669613a20312e31206c696e6b6572640d0a6c35642d6473742d636c69656e743a202f252f696f2e6c35642e706f72742f333132332f232f696f2e6c35642e6d61726174686f6e2f6c696e6b6572642d636f6d6d2d746573742d736572766963652f6d61737465722f73657276696365610d0a6c35642d6374782d74726163653a206570477049706e474342746f656f6c3657696d66356d6836695870614b5a2f6d41414141414141414141413d0d0a6c35642d72657169643a20363837613839376135613239396665360d0a0d0a
	at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1106)
	at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1162)
	at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489)
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428)

I don’t know whether this is related to the fact that I’m using a self-signed certificate, or to the COMMON_NAME I used when creating the certificate.

What could be the reason for this exception? Please advise.

This exception is thrown when a server that expects TLS receives an unencrypted record. In fact, the hex payload in the exception message decodes to a plaintext HTTP request: it begins with GET /test HTTP/1.1 and carries a Via: 1.1 linkerd header, so the remote linkerd is forwarding plain HTTP to your TLS-enabled server. Do you have TLS enabled on the outgoing router in your linkerd config? Do you mind sharing your full config YAML?
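
In case it helps, here’s a minimal sketch of what client-side TLS on the outgoing router might look like, assuming a 1.x linkerd. The commonName and paths below are placeholders, and the exact keys can vary between versions, so check the client TLS section of the docs for yours:

- protocol: http
  label: outgoing
  # dtab, interpreter, etc. as in your existing outgoing router
  client:
    tls:
      # placeholder CN; it must match the CN of the certificate that
      # the incoming server presents (certificate.pem on the other side)
      commonName: servicea.example.com
      # trust the self-signed certificate (or the CA that issued it);
      # placeholder path
      trustCerts:
      - /mnt/mesos/sandbox/certificates/certificate.pem

With this in place, the outgoing router encrypts traffic before it reaches the incoming router’s TLS server, which is what the NotSslRecordException says is currently missing.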

Thank you for the reply. It turned out I had written the incorrect common_name in the configuration file.
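
For anyone who runs into the same thing: the commonName in the client TLS configuration has to match the CN the certificate was actually issued with. You can inspect a PEM certificate’s subject with openssl x509 -noout -subject -in certificate.pem.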