Using a self-signed k8s API certificate


I’m trying to set up Linkerd2 on my Kubernetes cluster, which uses self-signed certs for the K8s API. After executing linkerd check --pre I get:

kubernetes-api: can initialize the client..................................[ok]
kubernetes-api: can query the Kubernetes API...............................[FAIL] -- Get x509: certificate signed by unknown authority

Status check results are [FAIL]

How do I configure Linkerd to trust this certificate?


Hey @sborny, interesting. Are you able to access your cluster using kubectl, and if so, do you have to pass the --insecure-skip-tls-verify flag? Linkerd doesn’t support that flag, but it might be possible to add it, if you feel like opening up a feature request issue in the linkerd2 repo.
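For reference, this is the kubectl flag in question. It skips verification entirely, so it’s only useful as a diagnostic to confirm the failure is TLS-related, and as noted above, linkerd itself doesn’t accept it:

```shell
# Insecure: tells kubectl not to verify the API server's certificate.
# Useful only to confirm the error is a trust problem, not a connectivity one.
kubectl --insecure-skip-tls-verify=true get nodes
```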

Alternatively, I think it’s possible to initialize your Kubernetes cluster to recognize your CA by passing some additional flags to kube-apiserver on startup. There’s more info about that approach here:

That would be preferable to skipping verification altogether.


Thanks for the suggestion. I can access the cluster with kubectl without the --insecure-skip-tls-verify flag, probably because the flags mentioned in the linked GitHub issue are already set in my deployment.


The kube-apiserver has the following options set:

/snap/kube-apiserver/450/kube-apiserver --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota --advertise-address= --authorization-mode=AlwaysAllow --basic-auth-file=/root/cdk/basic_auth.csv --client-ca-file=/root/cdk/ca.crt --enable-aggregator-routing --etcd-cafile=/root/cdk/etcd/client-ca.pem --etcd-certfile=/root/cdk/etcd/client-cert.pem --etcd-keyfile=/root/cdk/etcd/client-key.pem --etcd-servers=,, --insecure-bind-address= --insecure-port=8080 --kubelet-certificate-authority=/root/cdk/ca.crt --kubelet-client-certificate=/root/cdk/client.crt --kubelet-client-key=/root/cdk/client.key --kubelet-preferred-address-types=[InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP] --logtostderr --min-request-timeout=300 --proxy-client-cert-file=/root/cdk/client.crt --proxy-client-key-file=/root/cdk/client.key --requestheader-allowed-names=client --requestheader-client-ca-file=/root/cdk/ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --service-account-key-file=/root/cdk/serviceaccount.key --service-cluster-ip-range= --storage-backend=etcd3 --tls-cert-file=/root/cdk/server.crt --tls-private-key-file=/root/cdk/server.key --token-auth-file=/root/cdk/known_tokens.csv --v=4


Hmm, in that case linkerd should also be able to connect to your cluster, since under the hood it’s using the same configuration code as kubectl. Linkerd loads your config file from ~/.kube/config by default. Can you verify that the cluster defined in that file includes a certificate-authority field with an absolute path to ca.crt, and that that file exists? Alternatively, you could base64-encode the contents of that file and inline it in the certificate-authority-data field.
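A quick way to inspect which form your kubeconfig uses is via kubectl’s jsonpath output. These expressions assume the relevant cluster is the first entry; adjust the index for multi-cluster configs. The /root/cdk/ca.crt path comes from the kube-apiserver flags above:

```shell
# Show the CA file path, if the kubeconfig references one:
kubectl config view --raw -o jsonpath='{.clusters[0].cluster.certificate-authority}'; echo

# Or show the first bytes of the inlined, base64-encoded CA:
kubectl config view --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | head -c 40; echo

# To inline a CA file instead of referencing it by path, encode it on one line
# and paste the output into certificate-authority-data:
base64 -w0 /root/cdk/ca.crt
```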


I checked the contents of ~/.kube/config against the ca.crt on the Kubernetes master, and the certs match.
The config file has a certificate-authority-data field which, when I base64-decode it, matches the contents of ca.crt (on the k8s master).
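For anyone wanting to repeat that check, here is a self-contained sketch of the round trip: encode a file the way certificate-authority-data stores the CA, then decode and diff. The file contents below are a stand-in, not a real certificate; against a live config you would decode the field extracted from ~/.kube/config instead.

```shell
# Stand-in for the cluster CA file (not a real cert):
printf 'stand-in certificate bytes\n' > /tmp/ca.crt

# Encode it the way certificate-authority-data inlines the CA:
base64 /tmp/ca.crt > /tmp/ca.b64

# Decode and compare against the original; diff is silent on a match:
base64 -d /tmp/ca.b64 | diff - /tmp/ca.crt && echo "certs match"
```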


Hmm, ok. Offhand I’m not sure what the issue is then. IIRC another user ran into the same issue a while back and reported that regenerating the certs fixed it. But it sounds like you have everything set up correctly, so I don’t think that applies in this case.


I am actually running into the same issue.

kubectl works fine but the linkerd command fails. Now, I don’t have a ~/.kube/config, as I keep the config in memory because I connect to many different clusters, but I would assume that since linkerd uses the same kubectl calls, it would work the same.


@abyss1 What do you mean by “in memory”? Both kubectl and linkerd require kubeconfig files for determining the proper cluster and context to use. From the kubectl help config output, the file is loaded as follows:

  1. If the --kubeconfig flag is set, then only that file is loaded. The flag may only be set once and no merging takes place.
  2. If $KUBECONFIG environment variable is set, then it is used as a list of paths (normal path delimiting rules for your system). These paths are merged. When a value is modified, it is modified in the file that defines the stanza. When a value is created, it is created in the first file that exists. If no files in the chain exist, then it creates the last file in the list.
  3. Otherwise, ${HOME}/.kube/config is used and no merging takes place.
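So if your config only lives “in memory”, writing it to a file and pointing both tools at it should make linkerd behave the same as kubectl. A sketch, with example paths (and assuming your linkerd version supports the --kubeconfig flag):

```shell
# Option 1: point both tools at the same file via the environment:
export KUBECONFIG=$HOME/clusters/dev.yaml
kubectl config current-context
linkerd check --pre

# Option 2: pass the file to linkerd explicitly:
linkerd check --pre --kubeconfig $HOME/clusters/dev.yaml
```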


I got this issue too, trying Linkerd for the first time. The AKS cluster works as usual and I can use kubectl to connect and run commands, but running linkerd check --pre gives the error below:

linkerd check
kubernetes-api: can initialize the client…[ok]
kubernetes-api: can query the Kubernetes API…[FAIL] – Get https://(clustername) context deadline exceeded


Same issue.
I’m using a Rancher2-based cluster and can communicate with it using kubectl without issue.
The cluster is functional and has been running for quite some time without problems. The issue is only with Linkerd (linkerd check and linkerd dashboard are not working).

Tried both the latest stable and the latest edge releases.


✔ can initialize the client
✘ can query the Kubernetes API
Get REDACTED/version: x509: certificate signed by unknown authority

Any pointers?


@bjornmagnusson Thanks for reporting this. Offhand I don’t know why linkerd and kubectl would be behaving differently, since they should both be using the same kubectl config loading under the hood. But clearly there’s some discrepancy. I’ve opened an issue to help track it down. If you don’t mind adding some additional details about your environment, including your Kubernetes client and server versions, that would be much appreciated.