Help with Vagrant and Calico


#1

I have made the changes as @tfmertz mentioned, but it does not work. I'm using Vagrant for a PoC.
I have a Kubernetes cluster with one master and two minions, using the Calico network plugin.

cat linkerd.yml
# runs linkerd in a daemonset, in linker-to-linker mode
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: l5d-config
data:
  config.yaml: |-
    admin:
      port: 9990

    namers:
    - kind: io.l5d.k8s
      experimental: true
      host: localhost
      port: 8001

    telemetry:
    - kind: io.l5d.prometheus
    - kind: io.l5d.recentRequests
      sampleRate: 0.25

    usage:
      orgId: linkerd-examples-daemonset

    routers:
    - protocol: http
      label: outgoing
      dtab: |
        /srv        => /#/io.l5d.k8s/linkerd/http;
        /host       => /srv;
        /svc        => /host;
        /host/world => /srv/world-v1;
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.daemonset
          namespace: linkerd
          port: incoming
          service: l5d
      servers:
      - port: 4140
        ip: 0.0.0.0
      service:
        responseClassifier:
          kind: io.l5d.http.retryableRead5XX

    - protocol: http
      label: incoming
      dtab: |
        /srv        => /#/io.l5d.k8s/linkerd/http;
        /host       => /srv;
        /svc        => /host;
        /host/world => /srv/world-v1;
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.localnode
      servers:
      - port: 4141
        ip: 0.0.0.0
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    app: l5d
  name: l5d
spec:
  template:
    metadata:
      labels:
        app: l5d
    spec:
      volumes:
      - name: l5d-config
        configMap:
          name: "l5d-config"
      containers:
      - name: l5d
        image: buoyantio/linkerd:1.1.3
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        args:
        - /io.buoyant/linkerd/config/config.yaml
        ports:
        - name: outgoing
          containerPort: 4140
          hostPort: 4140
        - name: incoming
          containerPort: 4141
        - name: admin
          containerPort: 9990
        volumeMounts:
        - name: "l5d-config"
          mountPath: "/io.buoyant/linkerd/config"
          readOnly: true

      - name: kubectl
        image: buoyantio/kubectl:v1.4.0
        args:
        - "proxy"
        - "-p"
        - "8001"
---
apiVersion: v1
kind: Service
metadata:
  name: l5d
spec:
  selector:
    app: l5d
  type: LoadBalancer
  ports:
  - name: outgoing
    port: 4140
  - name: incoming
    port: 4141
  - name: admin
    port: 9990

K8s - Pods aren't created with hostPort config in linker-grpc.yml
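For completeness, this is roughly how I deploy it and check the pods from the master (nothing special, just illustrative commands):

kubectl apply -f linkerd.yml
kubectl get daemonset l5d
kubectl get pods -l app=l5d -o wide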
#2

Please see the instructions for CNI setups here: Flavors of Kubernetes


#3

@Alex
As per your suggestion, I have tried the YAML file mentioned below:
kubectl apply -f https://raw.githubusercontent.com/linkerd/linkerd-examples/master/k8s-daemonset/k8s/linkerd-cni.yml

But it is not coming up in the Running state.
I also don't understand which changes need to be made in this linkerd-cni.yml file.


#4

linkerd-cni.yml adds hostNetwork: true to the k8s configs.
I would look for errors in the logs of the pods that aren’t successfully started.
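For example, something along these lines (the pod name is just a placeholder) should surface why the pods aren't starting:

kubectl get pods -l app=l5d
kubectl describe pod <l5d-pod-name>   # check the Events section for scheduling / hostPort errors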


#5

cat lin-cni.yml

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: l5d-config
data:
  config.yaml: |-
    admin:
      port: 9990

    namers:
    - kind: io.l5d.k8s
      experimental: true
      host: localhost
      port: 8001

    routers:
    - protocol: http
      label: outgoing
      baseDtab: |
        /srv        => /#/io.l5d.k8s/default/http;
        /host       => /srv;
        /http/*/*   => /host;
        /host/world => /srv/world-v1;
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.daemonset
          namespace: default
          port: incoming
          service: l5d
          hostNetwork: true
      servers:
      - port: 4140
        ip: 0.0.0.0

    - protocol: http
      label: incoming
      dstPrefix: /
      identifier:
        kind: io.l5d.header
        header: l5d-dst-concrete
      interpreter:
        kind: default
        transformers:
        - kind: io.l5d.k8s.localnode
          hostNetwork: true
      servers:
      - port: 4141
        ip: 0.0.0.0
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    app: l5d
  name: l5d
spec:
  template:
    metadata:
      labels:
        app: l5d
    spec:
      hostNetwork: true
      volumes:
      - name: l5d-config
        configMap:
          name: "l5d-config"
      containers:
      - name: l5d
        image: buoyantio/linkerd:0.8.3
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        args:
        - /io.buoyant/linkerd/config/config.yaml
        ports:
        - name: incoming
          containerPort: 4141
          hostPort: 4141
        - name: outgoing
          containerPort: 4140
          hostPort: 4140
        - name: admin
          containerPort: 9990
          hostPort: 9990
        volumeMounts:
        - name: "l5d-config"
          mountPath: "/io.buoyant/linkerd/config"
          readOnly: true

      - name: kubectl
        image: buoyantio/kubectl:v1.4.0
        args:
        - "proxy"
        - "-p"
        - "8001"
---
apiVersion: v1
kind: Service
metadata:
  name: l5d
spec:
  selector:
    app: l5d
  type: LoadBalancer
  ports:
  - name: outgoing
    port: 4140
  - name: incoming
    port: 4141
  - name: admin
    port: 9990

I have added hostNetwork: true as you mentioned, but it does not work. There is nothing suspicious in the logs either.

kubectl logs l5d-vf3u8
Error from server (BadRequest): a container name must be specified for pod l5d-vf3u8, choose one of: [l5d kubectl]
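I guess I have to pick one of the two containers, something like:

kubectl logs l5d-vf3u8 -c l5d
kubectl logs l5d-vf3u8 -c kubectl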


#6

At last, the following logs were captured:

docker logs 40485b56cfbc

I 0829 18:38:15.592 THREAD10: k8s initializing default
E 0829 18:38:15.605 THREAD10: k8s failed to list endpoints
com.twitter.finagle.ChannelWriteException: java.net.ConnectException: Connection refused: localhost/127.0.0.1:8001. Remote Info: Upstream Address: Not Available, Upstream Client Id: Not Available, Downstream Address: localhost/127.0.0.1:8001, Downstream Client Id: daemonsetTransformer, Trace Id: 99da7fb8f9220090.99da7fb8f9220090<:99da7fb8f9220090
Caused by: java.net.ConnectException: Connection refused: localhost/127.0.0.1:8001
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.jboss.netty.channel.socket.nio.NioClientBoss.connect(NioClientBoss.java:152)
at org.jboss.netty.channel.socket.nio.NioClientBoss.processSelectedKeys(NioClientBoss.java:105)
at org.jboss.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:79)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.jboss.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at com.twitter.finagle.util.ProxyThreadFactory$$anonfun$newProxiedRunnable$1$$anon$1.run(ProxyThreadFactory.scala:19)
at java.lang.Thread.run(Thread.java:745)


#7

IPs of the Kubernetes stack:
cat /etc/hosts
172.17.8.101 kube-master
172.17.8.102 minion-1
172.17.8.103 minion-2


#8

Right, ok. That k8s failed to list endpoints error suggests that linkerd still doesn't have sufficient permissions to call the Kubernetes API. I've never used Calico, but in addition to adding hostNetwork: true you might try adding RBAC configurations as described here: https://buoyant.io/2017/07/24/using-linkerd-kubernetes-rbac/ or as discussed here: K8s - Pods aren't created with hostPort config in linker-grpc.yml
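A minimal sketch of what such an RBAC config could look like, granting linkerd read access to endpoints, services, and pods (the ClusterRole and binding names here are just examples; adapt them to your cluster and the guide above):

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: linkerd-endpoints-reader   # example name
rules:
- apiGroups: [""]
  resources: ["endpoints", "services", "pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: linkerd-role-binding       # example name
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  kind: ClusterRole
  name: linkerd-endpoints-reader
  apiGroup: rbac.authorization.k8s.io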


#9

This looks like Linkerd is not able to connect to the kubectl proxy container running on localhost:8001. Can you check the health and logs of that container? Can you attempt to connect to it manually?
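For example (using the pod name from your earlier output, and assuming the daemonset really is running with hostNetwork: true):

# logs of the kubectl proxy sidecar
kubectl logs l5d-vf3u8 -c kubectl

# with hostNetwork: true the proxy should listen on the node's localhost:8001,
# so from the node that pod is scheduled on:
curl http://localhost:8001/api/v1/namespaces/default/endpoints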


closed #10

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.