I’m using linkerd to encrypt internode communication between Pods. I can currently only get things working for a single namespace at a time.
I’m using CNI and can successfully install linkerd as a daemonset in any namespace. I can also vary the ports for admin, outgoing, and incoming and get it all to work. The problem comes when I try to run more than one linkerd at a time: the second linkerd always fails with CrashLoopBackOff.
I’ve turned on debug logging, but this doesn’t appear to help. Is there something that I’m missing?
Here’s my helm config as a configmap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ template "fullname" . }}
data:
  linkerd-tls.yaml: |-
    admin:
      port: {{ .Values.linkerd.admin }}
    namers:
    - kind: io.l5d.k8s
      experimental: true
      host: localhost
      port: 8001
    telemetry:
    - kind: io.l5d.prometheus
    - kind: io.l5d.recentRequests
      sampleRate: 0.25
    usage:
      orgId: pvue-daemonset-tls
    routers:
    - protocol: http
      label: outgoing
      dtab: |
        /srv => /#/io.l5d.k8s/{{ .Release.Namespace }}/http;
        /host => /srv;
        /svc => /host;
        /host/world => /srv/world-v1;
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.daemonset
          namespace: {{ .Release.Namespace }}
          port: incoming
          service: {{ template "fullname" . }}
          hostNetwork: true
      servers:
      - port: {{ .Values.linkerd.outgoing }}
        ip: 0.0.0.0
      client:
        tls:
          commonName: {{ .Values.credentials.common_name }}
          trustCerts:
          - /io.buoyant/linkerd/certs/{{ .Values.credentials.ca }}
      service:
        responseClassifier:
          kind: io.l5d.http.retryableRead5XX
    - protocol: http
      label: incoming
      dtab: |
        /srv => /#/io.l5d.k8s/{{ .Release.Namespace }}/http;
        /host => /srv;
        /svc => /host;
        /host/world => /srv/world-v1;
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.localnode
          hostNetwork: true
      servers:
      - port: {{ .Values.linkerd.incoming }}
        ip: 0.0.0.0
        tls:
          certPath: /io.buoyant/linkerd/certs/{{ .Values.credentials.cert }}
          keyPath: /io.buoyant/linkerd/certs/{{ .Values.credentials.key }}
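For reference, a values.yaml along these lines would satisfy the template. The field names come straight from the template above; the concrete values are only examples, matching the rendered configs shown further down:

linkerd:
  admin: 9990
  outgoing: 4140
  incoming: 4141
credentials:
  common_name: "*.clddev.pearsonvue.com"
  ca: wildcard-ca.pem
  cert: wildcard-cert.pem
  key: wildcard-key.pem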
esbie · August 29, 2017, 7:31pm · #2
Hi @leopoldodonnell!
I suspect the crashes are because both daemonsets are fighting over control of hostPort: 4140. Generally we suggest a single daemonset to do the routing over multiple namespaces…
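A minimal sketch of the kind of collision meant here (a hypothetical manifest, not the poster’s actual one): when the pod template sets hostNetwork: true, a port like the one below is bound directly on the node, so two daemonsets in different namespaces declaring the same port can only have one working pod per node; the other fails to bind and ends up in CrashLoopBackOff.

spec:
  hostNetwork: true
  containers:
  - name: linkerd
    ports:
    - name: outgoing
      containerPort: 4140
      hostPort: 4140    # only one process per node can bind this port
      protocol: TCP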
Alex · August 29, 2017, 8:55pm · #3
Or, at least, the linkerd for each namespace must listen on distinct ports.
leopoldodonnell · #4
I wish I were that dumb - I’m using 4140 in one namespace and 4130 in another. Here are the two configurations:
Namespace default:
linkerd-tls.yaml:
admin:
  port: 9980
namers:
- kind: io.l5d.k8s
  experimental: true
  host: localhost
  port: 8001
telemetry:
- kind: io.l5d.prometheus
- kind: io.l5d.recentRequests
  sampleRate: 0.25
usage:
  orgId: pvue-daemonset-tls
routers:
- protocol: http
  label: outgoing
  dtab: |
    /srv => /#/io.l5d.k8s/default/http;
    /host => /srv;
    /svc => /host;
    /host/world => /srv/world-v1;
  interpreter:
    kind: default
    transformers:
    - kind: io.l5d.k8s.daemonset
      namespace: default
      port: incoming
      service: tls-default-linkerd-tls
      hostNetwork: true
  servers:
  - port: 4130
    ip: 0.0.0.0
  client:
    tls:
      commonName: "*.clddev.pearsonvue.com"
      trustCerts:
      - /io.buoyant/linkerd/certs/wildcard-ca.pem
  service:
    responseClassifier:
      kind: io.l5d.http.retryableRead5XX
- protocol: http
  label: incoming
  dtab: |
    /srv => /#/io.l5d.k8s/default/http;
    /host => /srv;
    /svc => /host;
    /host/world => /srv/world-v1;
  interpreter:
    kind: default
    transformers:
    - kind: io.l5d.k8s.localnode
      hostNetwork: true
  servers:
  - port: 4131
    ip: 0.0.0.0
    tls:
      certPath: /io.buoyant/linkerd/certs/wildcard-cert.pem
      keyPath: /io.buoyant/linkerd/certs/wildcard-key.pem
Namespace leo:
linkerd-tls.yaml:
admin:
  port: 9990
namers:
- kind: io.l5d.k8s
  experimental: true
  host: localhost
  port: 8001
telemetry:
- kind: io.l5d.prometheus
- kind: io.l5d.recentRequests
  sampleRate: 0.25
usage:
  orgId: pvue-daemonset-tls
routers:
- protocol: http
  label: outgoing
  dtab: |
    /srv => /#/io.l5d.k8s/leo/http;
    /host => /srv;
    /svc => /host;
    /host/world => /srv/world-v1;
  interpreter:
    kind: default
    transformers:
    - kind: io.l5d.k8s.daemonset
      namespace: leo
      port: incoming
      service: tls-leo-linkerd-tls
      hostNetwork: true
  servers:
  - port: 4140
    ip: 0.0.0.0
  client:
    tls:
      commonName: "*.clddev.pearsonvue.com"
      trustCerts:
      - /io.buoyant/linkerd/certs/wildcard-ca.pem
  service:
    responseClassifier:
      kind: io.l5d.http.retryableRead5XX
- protocol: http
  label: incoming
  dtab: |
    /srv => /#/io.l5d.k8s/leo/http;
    /host => /srv;
    /svc => /host;
    /host/world => /srv/world-v1;
  interpreter:
    kind: default
    transformers:
    - kind: io.l5d.k8s.localnode
      hostNetwork: true
  servers:
  - port: 4141
    ip: 0.0.0.0
    tls:
      certPath: /io.buoyant/linkerd/certs/wildcard-cert.pem
      keyPath: /io.buoyant/linkerd/certs/wildcard-key.pem
To complete the example, here are the daemonsets:
Namespace default:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  creationTimestamp: 2017-08-29T20:49:56Z
  generation: 1
  labels:
    app: tls-default-linkerd-tls
    chart: linkerd-tls-0.1.0
  name: tls-default-linkerd-tls
  namespace: default
  resourceVersion: "827120"
  selfLink: /apis/extensions/v1beta1/namespaces/default/daemonsets/tls-default-linkerd-tls
  uid: 9d761749-8cfb-11e7-a467-0a0e6846b4d6
spec:
  selector:
    matchLabels:
      app: tls-default-linkerd-tls
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: tls-default-linkerd-tls
    spec:
      containers:
      - args:
        - -log.level=DEBUG
        - /io.buoyant/linkerd/config/linkerd-tls.yaml
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        image: buoyantio/linkerd:1.1.2
        imagePullPolicy: IfNotPresent
        name: linkerd-tls
        ports:
        - containerPort: 4130
          hostPort: 4130
          name: outgoing
          protocol: TCP
        - containerPort: 4131
          hostPort: 4131
          name: incoming
          protocol: TCP
        - containerPort: 9980
          hostPort: 9980
          name: admin
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        volumeMounts:
        - mountPath: /io.buoyant/linkerd/config
          name: linkerd-tls-config
          readOnly: true
        - mountPath: /io.buoyant/linkerd/certs
          name: certificates
          readOnly: true
      - args:
        - proxy
        - -p
        - "8001"
        image: buoyantio/kubectl:v1.4.0
        imagePullPolicy: IfNotPresent
        name: kubectl
        resources: {}
        terminationMessagePath: /dev/termination-log
      dnsPolicy: ClusterFirst
      hostNetwork: true
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          items:
          - key: linkerd-tls.yaml
            path: linkerd-tls.yaml
          name: tls-default-linkerd-tls
        name: linkerd-tls-config
      - hostPath:
          path: /etc/kubernetes/ssl
        name: certificates
status:
  currentNumberScheduled: 3
  desiredNumberScheduled: 3
  numberMisscheduled: 0
  numberReady: 0
Namespace leo:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  creationTimestamp: 2017-08-29T20:48:49Z
  generation: 1
  labels:
    app: tls-leo-linkerd-tls
    chart: linkerd-tls-0.1.0
  name: tls-leo-linkerd-tls
  namespace: leo
  resourceVersion: "826557"
  selfLink: /apis/extensions/v1beta1/namespaces/leo/daemonsets/tls-leo-linkerd-tls
  uid: 759e9ad1-8cfb-11e7-a467-0a0e6846b4d6
spec:
  selector:
    matchLabels:
      app: tls-leo-linkerd-tls
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: tls-leo-linkerd-tls
    spec:
      containers:
      - args:
        - -log.level=DEBUG
        - /io.buoyant/linkerd/config/linkerd-tls.yaml
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        image: buoyantio/linkerd:1.1.2
        imagePullPolicy: IfNotPresent
        name: linkerd-tls
        ports:
        - containerPort: 4140
          hostPort: 4140
          name: outgoing
          protocol: TCP
        - containerPort: 4141
          hostPort: 4141
          name: incoming
          protocol: TCP
        - containerPort: 9990
          hostPort: 9990
          name: admin
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        volumeMounts:
        - mountPath: /io.buoyant/linkerd/config
          name: linkerd-tls-config
          readOnly: true
        - mountPath: /io.buoyant/linkerd/certs
          name: certificates
          readOnly: true
      - args:
        - proxy
        - -p
        - "8001"
        image: buoyantio/kubectl:v1.4.0
        imagePullPolicy: IfNotPresent
        name: kubectl
        resources: {}
        terminationMessagePath: /dev/termination-log
      dnsPolicy: ClusterFirst
      hostNetwork: true
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          items:
          - key: linkerd-tls.yaml
            path: linkerd-tls.yaml
          name: tls-leo-linkerd-tls
        name: linkerd-tls-config
      - hostPath:
          path: /etc/kubernetes/ssl
        name: certificates
status:
  currentNumberScheduled: 3
  desiredNumberScheduled: 3
  numberMisscheduled: 0
  numberReady: 3
esbie · August 29, 2017, 9:34pm · #5
Oh ok, sure. And kubectl logs crashing_l5d_pod --previous doesn’t have anything interesting in it at all? Linkerd usually logs fatal exceptions.
leopoldodonnell · #6
Actually, your comment had me go and check one more thing. Note to self: daemonsets are tricksy.
It turns out that you aren’t protected by the namespace barrier with a sidecar: because the pods run with hostNetwork: true, the kubectl proxy sidecars in both daemonsets were fighting over port 8001 on each node. Updating the ‘kubectl’ container’s port (and the matching namer port) for the second namespace appears to solve the problem.
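For concreteness, a sketch of the sort of change this means; the port number 8002 is only an example, not taken from the thread. The second namespace’s kubectl proxy sidecar gets its own port, and that namespace’s io.l5d.k8s namer is pointed at the same port, since both daemonsets share the node’s network namespace via hostNetwork: true.

# In the 'leo' DaemonSet, the kubectl sidecar runs on its own proxy port:
- args:
  - proxy
  - -p
  - "8002"          # was "8001", which collided with the default namespace's sidecar
  image: buoyantio/kubectl:v1.4.0
  imagePullPolicy: IfNotPresent
  name: kubectl

# ...and the 'leo' linkerd config's namer points at that port:
namers:
- kind: io.l5d.k8s
  experimental: true
  host: localhost
  port: 8002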
What is less clear is what is going to happen when we start enforcing Network policies - oh well, that’s a problem for another day.
Consider this issue closed
esbie · August 29, 2017, 10:03pm · #7
Glad you were able to figure it out!