Configure traffic split with Nginx ingress controller

I’m evaluating Linkerd as a service mesh for our company project, but I wasn’t able to set up a traffic split with the nginx ingress controller. Here is my setup: https://gist.github.com/kopachevsky/fe72344c0b04d606a3175f8197d3319e

First question: do I need to inject the Linkerd proxy into the ingress controller pods themselves? I’ve tried both options, but traffic always goes to the frontend-v1 service when I call through the public ingress.

If I make an internal call from a test pod, the traffic split works well.

Hi @kopachevsky, this is a good question.

The Ingress controller does need to be injected with the Linkerd proxy because the traffic split happens on the client side.
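
If it helps, the usual way to do that is to pipe the ingress controller’s deployment through linkerd inject and re-apply it. The namespace and deployment name below are assumptions; adjust them to match your install:

kubectl get deploy nginx-ingress-controller -n ingress-nginx -o yaml \
  | linkerd inject - \
  | kubectl apply -f -

# confirm the linkerd-proxy container was added alongside the controller
kubectl -n ingress-nginx get pods -o jsonpath='{.items[*].spec.containers[*].name}'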

I looked at the gist, and it looks like the definitions of service-v1.yaml and service-v2.yaml are the same. Can you check to make sure that you uploaded the right file?

I’m working on reproducing this with the files that you provided.

@kopachevsky, I updated the service-v2.yaml file with the contents below and used kubectl exec to attach to a pod from the nginx-ingress-controller deployment in my environment. From that container, I ran while true; do curl http://frontend-v1.ex:8080; sleep 1; done and saw the responses split between V1 and V2.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
  labels:
    app: frontend-v2
  name: frontend-v2
  namespace: ex
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: frontend-v2
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        linkerd.io/inject: enabled
      labels:
        app: frontend-v2
    spec:
      containers:
      - image: nginx:alpine
        imagePullPolicy: IfNotPresent
        name: nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/nginx/nginx.conf
          name: cfg
          subPath: nginx.conf
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          name: frontend-v2
        name: cfg
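
For reference, the TrafficSplit I’m testing against looks roughly like the sketch below. The exact resource in your gist may differ; the name frontend-split is just a placeholder, and depending on your Linkerd release the apiVersion may be split.smi-spec.io/v1alpha2 with integer weights instead:

apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: frontend-split
  namespace: ex
spec:
  # traffic addressed to the apex service frontend-v1 is split across both backends
  service: frontend-v1
  backends:
  - service: frontend-v1
    weight: 500m
  - service: frontend-v2
    weight: 500m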

Give this a shot and let me know if you still see unexpected behavior.

@cpretzer sorry for the mistake, I copy-pasted the same code for v1 and v2. I’ve updated the gist now and will run the same test you did, curling from the nginx controller.

@cpretzer I’ve repeated your test; from the nginx controller pod I get an even split:

bash-5.0$  while true; do curl http://frontend-v1.ex:8080; sleep 1; done
V1
V2
V2
V2
V1
V1
V2
V1
V1
V1
V2
V1

But if I do the same from the public endpoint attached to the ingress gateway:

while true; do curl http://$PUBLIC_IP; done
V1
V1
V1
V1
V1
V1
V1
V1
V1
V1
V1
V1
V1

Here is the ingress config:

kubectl describe ing hello-world-ingress -n ex
Name:             hello-world-ingress
Namespace:        ex
Address:          10.0.0.5
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *
           frontend-v1:8080 (10.0.0.35:8080)

Is this expected behaviour?

@kopachevsky, that is really unexpected.

I ran a similar test and had different results, although I specified a host header for the ingress. Here is the ingress definition and the command that I used:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;
      grpc_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port;
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
  name: hello-world-ingress
  namespace: ex
spec:
  rules:
  - host: ts.linkerd.test
    http:
      paths:
      - backend:
          serviceName: frontend-v1
          servicePort: 8080
status:
  loadBalancer:
    ingress:
    - ip: 10.0.0.98

while true; do curl -v -H "HOST: ts.linkerd.test" http://$PUBLIC_IP; sleep 1; done

Can you try with the host header?
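
You can also check that the split is being applied for requests coming through the ingress by looking at per-backend stats. The exact command depends on your Linkerd version (newer releases ship this under the viz extension):

# show traffic split stats for the ex namespace
linkerd stat ts -n ex
# or, on releases with the viz extension installed:
linkerd viz stat ts -n ex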

Thanks, I’ll try to rebuild the cluster from scratch as well. What about the status.loadBalancer config, should I use it too?

@kopachevsky, no, you can disregard that. Kubernetes won’t try to set the status from a YAML file. In other words, Kubernetes will assign its own status to the resource that it creates.

@kopachevsky did you have any luck when you added the HOST header?

It works with the host header! It did not work before I injected the Linkerd proxy into the nginx controller pod, but it clearly works now, thanks!

That’s good to hear. Please let us know if you have any additional questions. :slight_smile: