ebtables or some similar executable not found during installation

If you see the following warnings while running kubeadm init:
[preflight] WARNING: ebtables not found in system path
[preflight] WARNING: ethtool not found in system path
Then you may be missing ebtables, ethtool or a similar executable on your Linux machine. You can install them with the following commands:
For Debian/Ubuntu users: apt install ebtables ethtool
For CentOS/Fedora users: yum install ebtables ethtool

kubeadm blocks waiting for control plane during installation

If you notice that kubeadm init hangs after printing out the following line:
[apiclient] Created API client, waiting for the control plane to become ready
This may be caused by a number of problems. The most common are:
- the default cgroup driver configuration for the kubelet differs from that used by Docker;
- control plane Docker containers are crashlooping or hanging.

For the cgroup driver problem, check the system log file (e.g. /var/log/messages) or examine the output from journalctl -u kubelet. If you see something like the following:
error: failed to run Kubelet: failed to create kubelet:
misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"
There are two common ways to fix the cgroup driver problem:
1. Install Docker again, following the installation instructions, so that its default cgroup driver matches the kubelet's.
2. Change the kubelet configuration to match the Docker cgroup driver manually.
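If you take the second route, here is a minimal sketch of the manual change on a systemd-managed host; it assumes the kubeadm default drop-in path /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and that Docker reports cgroupfs (check with docker info | grep -i cgroup):
# Sketch: point the kubelet at Docker's cgroup driver ("cgroupfs" here is an
# example value; match whatever `docker info` reports on your machine)
sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload
systemctl restart kubelet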
The kubectl describe pod or kubectl logs commands can help you diagnose errors. For example:
kubectl -n ${NAMESPACE} describe pod ${POD_NAME}
kubectl -n ${NAMESPACE} logs ${POD_NAME} -c ${CONTAINER_NAME}
You can also check the control plane containers by running docker ps and investigating each one with docker logs.

Pods in RunContainerError, CrashLoopBackOff or Error state

Right after kubeadm init there should not be any such Pods. If there are Pods in
such a state right after kubeadm init, please open an issue in the kubeadm repo.
kube-dns should be in the Pending state until you have deployed the network solution.
However, if you see Pods in the RunContainerError, CrashLoopBackOff or Error state
after deploying the network solution and nothing happens to kube-dns, it’s very
likely that the Pod Network solution that you installed is somehow broken. You
might have to grant it more RBAC privileges or use a newer version. Please file
an issue in the Pod Network providers’ issue tracker and get the issue triaged there.
kube-dns is stuck in the Pending state

This is expected and part of the design. kubeadm is network provider-agnostic, so the admin
should install the pod network solution
of choice. You have to install a Pod Network
before kube-dns may be deployed fully. Hence the Pending state before the network is set up.
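As a quick check while the network add-on comes up (the k8s-app=kube-dns label is what kubeadm applies to its DNS pods), you can watch them leave the Pending state:
kubectl -n kube-system get pods -l k8s-app=kube-dns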
HostPort services do not work

The HostPort and HostIP functionality is available depending on your Pod Network
provider. Please contact the author of the Pod Network solution to find out whether
HostPort and HostIP functionality are available.
Verified HostPort CNI providers:
- Calico
- Canal
- Flannel
For more information, read the CNI portmap documentation.
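As a rough local check, assuming the default CNI paths /etc/cni/net.d and /opt/cni/bin, you can look for the portmap plugin that implements hostPort support:
grep -l portmap /etc/cni/net.d/*.conflist   # is portmap chained in the network config?
ls /opt/cni/bin/portmap                     # is the plugin binary installed?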
If your network provider does not support the portmap CNI plugin, you may need to use the NodePort feature of
services or use HostNetwork=true.
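For example, the NodePort workaround might look like the following sketch, where the Deployment name my-nginx and port 80 are placeholders:
kubectl expose deployment my-nginx --port=80 --type=NodePort
kubectl get svc my-nginx   # note the allocated port in the 30000-32767 range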
Pods are not accessible via their Service IP

Many network add-ons do not yet enable hairpin mode, which allows pods to access themselves via their Service IP if they don't know about their podIP. This is an issue related to CNI. Please contact the network add-on provider to get timely information about whether they support hairpin mode.
If you are using VirtualBox (directly or via Vagrant), you will need to
ensure that hostname -i returns a routable IP address (i.e. one on the
second network interface, not the first one). By default, it doesn't do this
and kubelet ends up using the first non-loopback network interface, which is
usually NATed. Workaround: modify /etc/hosts, and take a look at this
Vagrantfile for an example of how this can be achieved.
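A minimal sketch of the /etc/hosts workaround, assuming the host-only interface carries 192.168.50.10 (a placeholder; substitute your VM's routable address):
echo "192.168.50.10 $(hostname)" | sudo tee -a /etc/hosts
hostname -i   # should now print the routable address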
TLS certificate errors

The following error indicates a possible certificate mismatch.
# kubectl get po
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
Verify that the $HOME/.kube/config file contains a valid certificate, and regenerate a certificate if necessary.
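One way to inspect the embedded client certificate, assuming kubeadm's default layout where the certificate is stored inline as base64 under client-certificate-data:
grep 'client-certificate-data' $HOME/.kube/config | awk '{print $2}' | base64 -d | openssl x509 -noout -issuer -enddate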
Another workaround is to overwrite the default kubeconfig for the “admin” user:
mv $HOME/.kube $HOME/.kube.bak
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
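Afterwards, a quick sanity check that kubectl picks up the new kubeconfig:
kubectl get nodes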
Default NIC when using flannel as the pod network in Vagrant

The following error might indicate that something was wrong in the pod network:
Error from server (NotFound): the server could not find the requested resource
If you’re using flannel as the pod network inside vagrant, then you will have to specify the default interface name for flannel.
Vagrant typically assigns two interfaces to all VMs. The first, for which all hosts are assigned the IP address 10.0.2.15, is for external traffic that gets NATed.
This may lead to problems with flannel. By default, flannel selects the first interface on a host. This leads to all hosts thinking they have the same public IP address. To prevent this issue, pass the --iface eth1 flag to flannel so that the second interface is chosen.
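Before applying the flag, you can confirm the interface layout; the eth0/eth1 names below reflect Vagrant's usual setup and may differ on your boxes:
ip -4 addr show eth0   # usually the NAT interface, 10.0.2.15 on every VM
ip -4 addr show eth1   # the routable host-only interface to pass via --iface eth1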
Non-public IP used for containers

In some situations kubectl logs and kubectl run commands may return the following errors in an otherwise functional cluster:
Error from server: Get https://10.19.0.41:10250/containerLogs/default/mysql-ddc65b868-glc5m/mysql: dial tcp 10.19.0.41:10250: getsockopt: no route to host
This is due to Kubernetes using an IP that can not communicate with other IPs on the seemingly same subnet, possibly by policy of the machine provider. As an example, Digital Ocean assigns a public IP to eth0 as well as a private one to be used internally as anchor for their floating IP feature, yet kubelet will pick the latter as the node’s InternalIP instead of the public one.
Use ip addr show to check for this scenario instead of ifconfig, because ifconfig will not display the offending alias IP address. Alternatively, an API endpoint specific to Digital Ocean allows you to query for the anchor IP from within the droplet:
curl http://169.254.169.254/metadata/v1/interfaces/public/0/anchor_ipv4/address
The workaround is to tell kubelet which IP to use using --node-ip. When using Digital Ocean, it can be the public one (assigned to eth0) or the private one (assigned to eth1) should you want to use the optional private network. For example:
IFACE=eth0 # change to eth1 for DO's private network
DROPLET_IP_ADDRESS=$(ip addr show dev $IFACE | awk 'match($0,/inet (([0-9]|\.)+).* scope global/,a) { print a[1]; exit }')
echo $DROPLET_IP_ADDRESS # check this, just in case
echo "Environment=\"KUBELET_EXTRA_ARGS=--node-ip=$DROPLET_IP_ADDRESS\"" >> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Please note that this assumes KUBELET_EXTRA_ARGS hasn’t already been set in the unit file.
Then restart kubelet:
systemctl daemon-reload
systemctl restart kubelet
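Once kubelet is back up, you can verify that the node advertises the intended address (NODE_NAME is a placeholder for your node's name):
kubectl get node ${NODE_NAME} -o jsonpath='{.status.addresses[?(@.type=="InternalIP")].address}'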