Linkerd with the Kubernetes nginx ingress controller


#1

Hello,

We currently use the Kubernetes nginx-ingress-controller and would like to add Linkerd in a basic service mesh configuration (deployed as a daemonset). I’ve read the guide at https://buoyant.io/2016/11/18/a-service-mesh-for-kubernetes-part-v-dogfood-environments-ingress-and-edge-routing/ which explains how to configure nginx to proxy to Linkerd, and I think this is more or less what I am aiming for, except that I would like to continue using Ingress objects to configure the nginx-ingress-controller rather than create a custom nginx deployment and config.

The first thing I have tried is using the Linkerd config from the article above and pointing the backend in the Ingress definition at the Linkerd Service. This sends requests to Linkerd with the expected service hostname. For example:

rules:
    - host: my-service.platform-stage.gcp0.my-host.net
      http:
        paths:
        - path: /
          backend:
            serviceName: l5d
            servicePort: 4142

With the correct dtabs in place I can route traffic to the backend service successfully and see requests coming through the ingress and incoming routers. But I’m not sure whether what I’ve configured is entirely sound.

The main concern is that I am not forwarding requests from nginx through an outbound local linkerd (I assume this is why I see no traffic on the outbound router?). However, I think this is also the case in the article above, where nginx is configured to proxy_pass to http://l5d.default.svc.cluster.local, which resolves to the cluster VIP of the Kubernetes Service, not the node-local linkerd IP.

If I am correct, what are the implications of this? And is it a suitably reliable configuration to use?


#2

Hi @andyhume, sorry for the delay. I’ve double-checked with Risha, who wrote the article, and your setup seems fine.

> The main concern is that I am not forwarding requests from nginx through an outbound local linkerd (I assume this is why I see no traffic on the outbound router?).

That is correct, since we want nginx to just send traffic to the VIP of the k8s service, and k8s will load balance over the linkerds.

You don’t want to send traffic to the linkerd outbound router yourself; the service-to-service linkerd setup will do that for you. The inbound/outbound routers there are set up purely for service-to-service routing in the daemonset (basically we’re only adding one more router to the setup explained in https://buoyant.io/2016/10/14/a-service-mesh-for-kubernetes-part-ii-pods-are-great-until-theyre-not/). If you’re sending traffic from service to service (and not from external to a service), you should see traffic on the outbound routers too.
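
For reference, the usual daemonset pattern is for each application pod to send its outbound traffic to the linkerd on its own node, typically via the downward API. A rough sketch (assuming the outgoing router listens on 4140 as in the articles, and that the app honours http_proxy):

env:
- name: NODE_IP
  valueFrom:
    fieldRef:
      fieldPath: status.hostIP   # IP of the node this pod is scheduled on
- name: http_proxy
  value: $(NODE_IP):4140         # outbound HTTP goes via the node-local linkerd

With something like that in place, service-to-service calls flow through the node-local outgoing router, which is where that traffic shows up.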


#3

Thank you. The configuration I’ve described has been running in a staging cluster with no issues for a week or so. Having you confirm we’re not doing anything off-the-wall is a big step towards moving forward with this in production.


#4

Great to hear. Please keep us posted on your path to prod!


#5

Andy, can you share your configs? I’m thinking of a similar setup in which nginx-ingress is the outermost layer: it terminates TLS and matches on subdomains, one of which I’d like to be linkerd-managed.

Thus I’d like to have the nginx-ingress-controller handle TLS termination (with kube-cert-manager) and then forward traffic for https://api.example.com into the linkerd service, which could then route gRPC requests to various internal services.

Is this similar to your usage?


#6

You may well already have this working now, but yes, it sounds pretty similar.

Linkerd is deployed as a daemonset, with the following configuration.

apiVersion: v1
kind: ConfigMap
metadata:
  name: l5d-config
data:
  config.yaml: |-
    admin:
      ip: 0.0.0.0
      port: 9990

    telemetry:
    - kind: io.l5d.prometheus
    - kind: io.l5d.recentRequests
      sampleRate: 0.01

    routers:
    - protocol: http
      label: ingress
      interpreter:
        kind: io.l5d.namerd
        dst: /$/inet/namerd.system-namerd.svc.cluster.local/4100
        namespace: internal
        transformers:
        - kind: io.l5d.k8s.daemonset
          namespace: default
          port: incoming
          service: l5d
      servers:
      - port: 4142
        ip: 0.0.0.0

    - protocol: http
      label: outgoing
      interpreter:
        kind: io.l5d.namerd
        dst: /$/inet/namerd.system-namerd.svc.cluster.local/4100
        namespace: external
        transformers:
        - kind: io.l5d.k8s.daemonset
          namespace: default
          port: incoming
          service: l5d
      servers:
      - port: 4140
        ip: 0.0.0.0
      service:
        responseClassifier:
          kind: io.l5d.http.retryableRead5XX

    - protocol: http
      label: incoming
      interpreter:
        kind: io.l5d.namerd
        dst: /$/inet/namerd.system-namerd.svc.cluster.local/4100
        namespace: internal
        transformers:
        - kind: io.l5d.k8s.localnode
      servers:
      - port: 4141
        ip: 0.0.0.0
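
The daemonset itself just mounts that ConfigMap and exposes the router ports, roughly like this (a trimmed-down sketch based on the linkerd-examples daemonset; the image tag, labels and mount path below are illustrative):

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: l5d
  labels:
    app: l5d
spec:
  template:
    metadata:
      labels:
        app: l5d
    spec:
      volumes:
      - name: l5d-config
        configMap:
          name: l5d-config
      containers:
      - name: l5d
        image: buoyantio/linkerd:1.1.2   # illustrative version
        args:
        - /io.buoyant/linkerd/config/config.yaml
        ports:
        - name: outgoing
          containerPort: 4140
          hostPort: 4140   # so apps can reach the node-local outgoing router
        - name: incoming
          containerPort: 4141
        - name: ingress
          containerPort: 4142
        - name: admin
          containerPort: 9990
        volumeMounts:
        - name: l5d-config
          mountPath: /io.buoyant/linkerd/config
          readOnly: true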

Note that we are running namerd to store the dtabs. The internal dtab looks like this:

/srv                    => /#/io.l5d.k8s/default/http ;
/domain/net/brandwatch/service-name => /srv/service-name ;
/host                   => /$/io.buoyant.http.domainToPathPfx/domain ;
/svc                    => /host ;
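
For example, a request arriving with Host: service-name.brandwatch.net is identified as /svc/service-name.brandwatch.net and resolves roughly like this:

/svc/service-name.brandwatch.net
  -> /host/service-name.brandwatch.net                                        (/svc => /host)
  -> /$/io.buoyant.http.domainToPathPfx/domain/service-name.brandwatch.net
  -> /domain/net/brandwatch/service-name                                      (domain segments reversed)
  -> /srv/service-name
  -> /#/io.l5d.k8s/default/http/service-name

i.e. the http port of the service-name Service in the default namespace.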

And then the Ingress configuration is as follows…

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: theservice
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
    - secretName: services-certificates
      hosts:
        - service-name.brandwatch.net
  rules:
    - host: service-name.brandwatch.net
      http:
        paths:
        - path: /
          backend:
            serviceName: l5d # The linkerd service name
            servicePort: 4142 # The ingress router port
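
The l5d Service that the Ingress backend (and the daemonset transformer) refer to just exposes the router ports by name, roughly as follows (the selector is assumed to match the daemonset's pod labels):

apiVersion: v1
kind: Service
metadata:
  name: l5d
spec:
  selector:
    app: l5d   # assumed to match the linkerd daemonset pods
  ports:
  - name: ingress
    port: 4142
  - name: outgoing
    port: 4140
  - name: incoming
    port: 4141
  - name: admin
    port: 9990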

#7

Thanks for posting that, @andyhume!