Struggling to make Linkerd work in Kubernetes

Hiya! I’ve been trying to implement Linkerd as a replacement for a simple Nginx proxy gateway. We are introducing gRPC, which means Nginx will no longer work for our needs.

A quick background on what I am trying to achieve: we have a few microservices running in Kubernetes, and this number is expected to grow quickly. Those microservices talk to each other, and the outside world can talk directly to some of them. Communication is over REST and gRPC. I’m working on REST first and gRPC after.

I currently have a very simple deployment of Nginx serving a default page as my test, with a service called test in front of it. The Kubernetes cluster currently has 4 nodes with Linkerd deployed as a daemonset. My deployment has 2 replicas.

After a lot of playing around I managed to get connections from the outside world going to the service. However, repeated requests have a very high failure rate: 50% or so result in a 502 Bad Gateway with ‘No hosts are available for /svc/test’.

The other thing I am trying to figure out, once I get this part working, is how to route based on namespace. Each microservice lives in its own namespace, and I want to be able to route requests to, for example, the service named prod in the user namespace. The documentation isn’t very clear on how to do this.

Here is the config I am using:

apiVersion: v1
kind: ConfigMap
metadata:
  name: test-linkerd-config
data:
  config.yaml: |-
    admin:
      port: 9990

    namers:
    - kind: io.l5d.k8s
      experimental: true
      host: localhost
      port: 8001

    telemetry:
    - kind: io.l5d.prometheus
    - kind: io.l5d.recentRequests
      sampleRate: 0.25

    usage:
      orgId: linkerd-examples-daemonset-grpc

    routers:
    - protocol: http
      label: ingress
      dtab: |
        /srv                    => /#/io.l5d.k8s/default/http ;
        /domain/world/hello/www => /srv/hello ;
        /domain/world/hello/api => /srv/api ;
        /domain/test            => /srv/test ;
        /host                   => /$/io.buoyant.http.domainToPathPfx/domain ;
        /svc                    => /host ;
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.daemonset
          namespace: default
          port: incoming
          service: l5d
      servers:
      - port: 4142

    - protocol: http
      label: outgoing
      dtab: |
        /srv        => /#/io.l5d.k8s/default/http;
        /host       => /srv;
        /svc        => /host;
        /host/world => /srv/world-v1;
        /host/test  => /srv/test;
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.daemonset
          namespace: default
          port: incoming
          service: l5d
      servers:
      - port: 4140
      service:
        responseClassifier:
          kind: io.l5d.http.retryableRead5XX

    - protocol: http
      label: incoming
      dtab: |
        /srv                    => /#/io.l5d.k8s/default/http;
        /host                   => /$/io.buoyant.http.domainToPathPfx/domain;
        /host                   => /srv;
        /host/world             => /srv/world-v1;
        /host/test              => /srv/test;
        /svc                    => /host;
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.localnode
      servers:
      - port: 4141
      service:
        responseClassifier:
          kind: io.l5d.http.retryableRead5XX


Routing to a service in a namespace:
Create a namespace called “linkerd” in your k8s cluster, and then you can use this config:

kubectl apply -f servicemesh.yml
kubectl describe svc l5d   # to make sure that your loadbalancer ingress is up
Wait 2 minutes or so for the loadbalancer to come up, then:
http_proxy=$(kubectl get svc l5d -o jsonpath="{.status.loadBalancer.ingress[0].*}"):4140 curl -v http://service2.default

service2 -> name of microservice
default -> namespace

Hi @jengo! Sorry that the documentation isn’t super clear about this. I’ll try to elaborate.

I will second @zshaik’s suggestion to use the servicemesh config if you want to be able to route to different namespaces. One thing to watch out for is that you always want to be making requests to an “outgoing” or “ingress” router. (port 4140 in @zshaik’s example above). This ensures that the request will get routed to a node where an instance of your service is running. If you make a request directly to an “incoming” router, the request could potentially fail if no instances of your service are running on that node. It’s possible that that’s what was causing the sporadic failures you were seeing.

Give that a try and let us know how it goes! Happy to answer any other questions you have.

And just to clarify: in that servicemesh config, the “outgoing” ports are 4140 for HTTP and 4340 for gRPC.
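To make sure in-cluster calls always hit the outgoing router, one common pattern is to point each pod’s `http_proxy` at the node-local Linkerd. This is only a sketch: the use of `spec.nodeName` assumes the l5d daemonset runs with hostNetwork, and the variable names are illustrative, not taken from this thread.

```yaml
# Hypothetical pod spec fragment: send the pod's HTTP traffic through the
# Linkerd daemonset instance running on the same node, on the outgoing port.
env:
- name: NODE_NAME
  valueFrom:
    fieldRef:
      fieldPath: spec.nodeName   # assumes l5d is reachable at the node name
- name: http_proxy
  value: $(NODE_NAME):4140       # 4140 = outgoing router in the config above
```

With this in place, a plain `curl http://test/` from inside the pod is proxied through the local outgoing router, which picks a healthy instance anywhere in the cluster.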

Hey @jengo, this looks like a good start! The 502s are definitely unexpected. Offhand, one possible cause that I can think of is that you might be sending your requests directly to the router running on 4141. That router is configured to only forward requests to pods that are running on the same node, and it will return a 502 if there’s no such pod for a given service on that node. That router should only be used as the second hop in a linker-to-linker request, meaning that you want to send your ingress requests to the router running on port 4142, and your service-to-service requests to the router running on port 4140.

I’ll also echo @zshaik’s suggestion about the service mesh config. That config is a lot more full-featured, and it’s really well documented, so it might be easier to start with that and modify as needed. It has support for h1, h2, gRPC, ingress, and multi-namespace routing. You can see a longer description of the supported features in the file itself, here:

Ah, sorry for the double post – @Alex beat me to it.

@Alex Oh interesting, because routing outside requests in is the primary reason I am working on this project.

We have internal connections between services, and some of those services also provide the exact same functionality to outside connections (like mobile).

So does this mean that every service has to be deployed as a daemonset? Typically some services are only deployed as 2 or 3 pods. That would be a serious issue in a 6-node cluster; it wouldn’t be able to scale.

@zshaik To clarify, let’s say I have 2 services, A and B.
When I request A, it needs to route to prod-a.a.svc.cluster.local.
When I request B, it needs to route to prod-b.b.svc.cluster.local.

How do I setup a dtab to take destination namespace into account ?

@klingerf I guess I am not fully understanding the difference between 4140, 4141, and 4142. I need to route to services both from the outside world and from inside the Kubernetes cluster, across many namespaces.

You definitely don’t need to deploy your app as a daemonset.

If you’re using the servicemesh config, services need to address one another by setting the Host header on the requests they send to Linkerd. Setting Host: prod-a.a would send the request to the prod-a service in the a namespace. Requests coming in externally through the ingress router will use the routing rules set up in your Kubernetes ingress resource.
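That Host-to-service mapping can be illustrated with a small model. To be clear, this is just an illustration of the naming convention, not Linkerd’s actual implementation, and the defaulting behavior for a missing namespace is an assumption:

```python
def host_to_k8s_name(host, default_namespace="default", port_name="http"):
    """Model how a Host header like "prod-a.a" (service.namespace) could map
    to a concrete Kubernetes name via the io.l5d.k8s namer."""
    parts = host.split(".")
    service = parts[0]
    # Fall back to a default namespace if the Host has no namespace part.
    namespace = parts[1] if len(parts) > 1 else default_namespace
    return f"/#/io.l5d.k8s/{namespace}/{port_name}/{service}"

print(host_to_k8s_name("prod-a.a"))  # → /#/io.l5d.k8s/a/http/prod-a
```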

If these routing rules don’t meet your requirements, you can write your own config but this will be a bit more complicated. It sounds like your routing requirements are a bit more complex so let me make sure I understand.

For any request, you want to take the first segment of the URI path and use that as the namespace, with prod as the service name? In that case I think you want to use the io.l5d.path identifier, which assigns service names based on the path. So, for example, a request whose path starts with /a would get assigned the service name /svc/a. Mapping this to the correct Kubernetes service is a bit tricky, because it’s not very common to specify only a namespace (and not a service name), but the way you could do it would be by defining a rewrite namer:

- kind: io.l5d.rewrite
  prefix: k8s-prod
  pattern: "/{s}"
  name: "/#/io.l5d.k8s/{s}/http/prod"

You could then use a dtab like:

/svc => /k8s-prod

This has the total effect of renaming /svc/a to /#/io.l5d.k8s/a/http/prod.
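To sanity-check that renaming, here’s a toy model of the substitution the rewrite namer performs. This is just a sketch of the prefix-plus-template idea, not the real namer’s code:

```python
import re

def rewrite(name, prefix, pattern, template):
    """Toy model of a rewrite namer: if `name` starts with `prefix`, match
    the remainder against `pattern` (where {var} captures one path segment)
    and substitute the captures into `template`. Returns None on no match."""
    if not name.startswith(prefix):
        return None
    residual = name[len(prefix):]
    # Turn a pattern like "/{s}" into a regex capturing one segment per var.
    regex = re.sub(r"\{(\w+)\}", r"(?P<\1>[^/]+)", pattern)
    m = re.fullmatch(regex, residual)
    if m is None:
        return None
    out = template
    for var, val in m.groupdict().items():
        out = out.replace("{" + var + "}", val)
    return out

# The dtab "/svc => /k8s-prod" first rewrites /svc/a to /k8s-prod/a;
# the namer then produces the concrete Kubernetes name:
print(rewrite("/k8s-prod/a", "/k8s-prod", "/{s}", "/#/io.l5d.k8s/{s}/http/prod"))
# → /#/io.l5d.k8s/a/http/prod
```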

I hope this information is helpful.

Thanks for the help with the 502s; the advice from this thread helped me solve that. I had originally used part of the servicemesh config as my starting point, but with so many services defined, I stripped it down to bare bones.

I actually have multiple gateways, one for each environment (prod, qa, etc.). I just didn’t want to complicate my question by mentioning that. Each config would have that part hardcoded.

So, the microservice name is ‘a’. When you go through the production gateway, everything will always reference prod. The path segment ‘a’ would need to rewrite to prod-a.a.svc.cluster.local.

Maybe it will be a little easier to show an example of what I am doing today with Nginx. I am trying to make this behavior work for REST; for gRPC it would be pretty much the same thing, with the service name being the name of the microservice.

location ~ ^/(?<api>[0-9A-Za-z-]+)/(?<fwd_path>.*)$ {
    resolver kube-dns.kube-system.svc.cluster.local valid=1s;

    proxy_set_header        X-Real-IP       $remote_addr;
    proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;

    proxy_pass https://prod-$api.$api.svc.cluster.local/$api/$fwd_path$is_args$args;
    proxy_redirect off;
}

I just can’t see how to do this type of rewrite using a dtab.
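For comparison, here is what that Nginx rule computes, modeled in Python (the example path is made up):

```python
import re

def nginx_route(path):
    """Model of the Nginx location block above: capture the first path
    segment as `api` and proxy to prod-<api>.<api>.svc.cluster.local,
    keeping the full original path."""
    m = re.match(r"^/(?P<api>[0-9A-Za-z-]+)/(?P<fwd_path>.*)$", path)
    if m is None:
        return None  # no second segment: the location doesn't match
    api, fwd_path = m.group("api"), m.group("fwd_path")
    return f"https://prod-{api}.{api}.svc.cluster.local/{api}/{fwd_path}"

print(nginx_route("/a/v1/users"))
# → https://prod-a.a.svc.cluster.local/a/v1/users
```

Note how the service name ‘a’ is used twice, as a substring of both the host (prod-a) and the domain (.a.), which is the part that doesn’t map onto prefix substitution.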

Since you’re building up service names by using path segments as substrings, it’s probably not possible to do this in a dtab, which is really intended for hierarchical prefix replacement. If you absolutely need to use this naming structure, I think your only option is to write a custom namer plugin that performs the rewrite you want.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.