k8s worker node status stays NotReady

k8s-node01

[root@k8s-node01 ~] ○ curl https://k8s-master:6443 -k
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {

  },
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {

  },
  "code": 403
}[root@k8s-node01 ~] ○

Why is the kubelet on k8s-node01 connecting to localhost:6443?

9月 25 12:43:40 k8s-node01 kubelet[126424]: E0925 12:43:40.639792  126424 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s-node01&resourceVersion=0: dial tcp 127.0.0.1:6443: getsockopt: connection refused

Isn't 6443 the kube-apiserver port? Shouldn't kubelet be connecting to k8s-master:6443 instead?
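For reference, you can check which endpoint kubelet is actually configured to use. The kubeconfig path below is an assumption (kubespray-style installs usually keep it at /etc/kubernetes/kubelet.conf); adjust it for your setup:

# Show the apiserver address kubelet was told to use
# (path is an assumption; adjust for your install)
grep 'server:' /etc/kubernetes/kubelet.conf

If it prints server: https://localhost:6443, kubelet is deliberately pointed at something listening locally rather than at the master.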

Hi, please run the following two commands on k8s-node01 and send us the output. Thanks.

sudo cat /etc/nginx/nginx.conf

sudo cat /etc/kubernetes/manifests/nginx-proxy.yml

cat /etc/nginx/nginx.conf

error_log stderr notice;

worker_processes auto;
events {
  multi_accept on;
  use epoll;
  worker_connections 1024;
}

stream {
  upstream kube_apiserver {
    least_conn;
    server 192.168.123.155:6443;
  }

  server {
    listen        127.0.0.1:6443;
    proxy_pass    kube_apiserver;
    proxy_timeout 10m;
    proxy_connect_timeout 1s;
  }
}

cat /etc/kubernetes/manifests/nginx-proxy.yml

apiVersion: v1
kind: Pod
metadata:
  name: nginx-proxy
  namespace: "kube-system"
  labels:
    k8s-app: kube-nginx
spec:
  hostNetwork: true
  containers:
  - name: nginx-proxy
    image: registry.cn-hangzhou.aliyuncs.com/choerodon-tools/nginx:1.11.4-alpine
    imagePullPolicy: IfNotPresent
    resources:
      limits:
        cpu: 300m
        memory: 512M
      requests:
        cpu: 25m
        memory: 32M
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /etc/nginx
      name: etc-nginx
      readOnly: true
  volumes:
  - name: etc-nginx
    hostPath:
      path: /etc/nginx

Judging from the files above, the scripted installation itself is fine.
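This also answers the earlier question: kubelet is pointed at localhost:6443 on purpose. nginx-proxy is a static pod that kubelet launches directly from /etc/kubernetes/manifests; it listens on 127.0.0.1:6443 and forwards TCP to the real apiserver at 192.168.123.155:6443. A quick sanity check (a sketch, run on k8s-node01) is to repeat the earlier curl against localhost:

# With nginx-proxy running, this returns the same 403 "system:anonymous"
# JSON as curling k8s-master directly; "connection refused" means the
# proxy container is not running.
curl -k https://127.0.0.1:6443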

Now let's look at the nginx logs to track down the problem.

Run this on the master node:

kubectl logs nginx-proxy-k8s-node01 -n kube-system

If that returns no logs or errors out on the master node, run the docker command on the worker node instead.

Run the following on the worker node:

docker ps | grep 'nginx -g' | awk '{print $1}' | xargs docker logs

Result on the master node:

Error from server (InternalError): Internal error occurred: Authorization error (user=kube-apiserver-kubelet-client , verb=get, resource=nodes, subresource=proxy)
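That Authorization error is a separate issue: kubectl logs makes the apiserver proxy the request to the kubelet through the nodes/proxy subresource, and here the apiserver's client-certificate identity kube-apiserver-kubelet-client lacks the RBAC permission for it. A common fix, assuming RBAC is enabled (the binding name is arbitrary), is:

# Grant the apiserver's kubelet-client identity the built-in kubelet API role
# (the user must match the one shown in the error message)
kubectl create clusterrolebinding kube-apiserver-kubelet-client \
  --clusterrole=system:kubelet-api-admin \
  --user=kube-apiserver-kubelet-client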

There is no record of any containers on the worker node:

[root@k8s-node01 ~] ○ docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

Restart kubelet on the worker node and see whether any containers come up.

Very strange: after restarting kubelet, the nginx and proxy containers all came up, then stopped abnormally a moment later, and in the end even the stopped containers were deleted...

Their logs before being deleted:

2018/09/25 06:09:42 [notice] 1#1: using the "epoll" event method
2018/09/25 06:09:42 [notice] 1#1: nginx/1.11.4
2018/09/25 06:09:42 [notice] 1#1: built by gcc 5.3.0 (Alpine 5.3.0)
2018/09/25 06:09:42 [notice] 1#1: OS: Linux 3.10.0-693.5.2.el7.x86_64
2018/09/25 06:09:42 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 65536:65536
2018/09/25 06:09:42 [notice] 1#1: start worker processes
2018/09/25 06:09:42 [notice] 1#1: start worker process 7
2018/09/25 06:09:42 [notice] 1#1: start worker process 8
2018/09/25 06:09:42 [notice] 1#1: start worker process 9
2018/09/25 06:09:42 [notice] 1#1: start worker process 10

Historical logs of flannel on k8s-node01 before it stopped:

[root@k8s-node01 ~] ○ docker logs -f 08e53031eb51
I0925 06:20:54.165091       1 main.go:417] Searching for interface using 192.168.123.156
I0925 06:20:54.165898       1 main.go:488] Using interface with name eth0 and address 192.168.123.156
I0925 06:20:54.165951       1 main.go:505] Defaulting external address to interface address (192.168.123.156)
I0925 06:20:54.185586       1 kube.go:131] Waiting 10m0s for node controller to sync
I0925 06:20:54.185733       1 kube.go:294] Starting kube subnet manager
I0925 06:20:55.187003       1 kube.go:138] Node controller sync successful
I0925 06:20:55.187060       1 main.go:235] Created subnet manager: Kubernetes Subnet Manager - k8s-node01
I0925 06:20:55.187068       1 main.go:238] Installing signal handlers
I0925 06:20:55.187262       1 main.go:353] Found network config - Backend type: vxlan
I0925 06:20:55.187359       1 vxlan.go:120] VXLAN config: VNI=1 Port=0 GBP=false DirectRouting=false
I0925 06:20:55.187970       1 main.go:300] Wrote subnet file to /run/flannel/subnet.env
I0925 06:20:55.187978       1 main.go:304] Running backend.
I0925 06:20:55.187986       1 main.go:322] Waiting for all goroutines to exit
I0925 06:20:55.188016       1 vxlan_network.go:60] watching for new subnet leases
I0925 06:21:25.715804       1 main.go:337] shutdownHandler sent cancel signal...

nginx-proxy:

[root@k8s-node01 ~] ○ docker logs -f 10e9989c50d2
2018/09/25 06:22:22 [notice] 1#1: using the "epoll" event method
2018/09/25 06:22:22 [notice] 1#1: nginx/1.11.4
2018/09/25 06:22:22 [notice] 1#1: built by gcc 5.3.0 (Alpine 5.3.0)
2018/09/25 06:22:22 [notice] 1#1: OS: Linux 3.10.0-693.5.2.el7.x86_64
2018/09/25 06:22:22 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 65536:65536
2018/09/25 06:22:22 [notice] 1#1: start worker processes
2018/09/25 06:22:22 [notice] 1#1: start worker process 6
2018/09/25 06:22:22 [notice] 1#1: start worker process 7
2018/09/25 06:22:22 [notice] 1#1: start worker process 8
2018/09/25 06:22:22 [notice] 1#1: start worker process 9
2018/09/25 06:23:28 [notice] 1#1: signal 15 (SIGTERM) received, exiting
2018/09/25 06:23:28 [notice] 7#7: exiting
2018/09/25 06:23:28 [notice] 6#6: exiting
2018/09/25 06:23:28 [notice] 7#7: exit
2018/09/25 06:23:28 [notice] 6#6: exit
2018/09/25 06:23:28 [notice] 8#8: exiting
2018/09/25 06:23:28 [notice] 8#8: exit
2018/09/25 06:23:28 [notice] 9#9: exiting
2018/09/25 06:23:28 [notice] 9#9: exit

kube-proxy logs:

[root@k8s-node01 ~] ○ docker logs -f 2f31b62f315f
W0925 06:24:14.849141       1 server.go:191] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
time="2018-09-25T06:24:14Z" level=warning msg="Running modprobe ip_vs failed with message: `modprobe: ERROR: could not insert 'ip_vs': Exec format error\ninsmod /lib/modules/3.10.0-693.5.2.el7.x86_64/kernel/net/netfilter/ipvs/ip_vs.ko.xz`, error: exit status 1"
time="2018-09-25T06:24:14Z" level=error msg="Could not get ipvs family information from the kernel. It is possible that ipvs is not enabled in your kernel. Native loadbalancing will not work until this is fixed."
W0925 06:24:14.859351       1 server_others.go:268] Flag proxy-mode="" unknown, assuming iptables proxy
I0925 06:24:14.861033       1 server_others.go:122] Using iptables Proxier.
I0925 06:24:14.874697       1 server_others.go:157] Tearing down inactive rules.
E0925 06:24:14.923243       1 proxier.go:699] Failed to execute iptables-restore for nat: exit status 1 (iptables-restore: line 7 failed
)
I0925 06:24:14.927951       1 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0925 06:24:14.927996       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0925 06:24:14.928026       1 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0925 06:24:14.928042       1 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0925 06:24:14.928594       1 config.go:202] Starting service config controller
I0925 06:24:14.928609       1 controller_utils.go:1041] Waiting for caches to sync for service config controller
I0925 06:24:14.928640       1 config.go:102] Starting endpoints config controller
I0925 06:24:14.928650       1 controller_utils.go:1041] Waiting for caches to sync for endpoints config controller
I0925 06:24:15.028766       1 controller_utils.go:1048] Caches are synced for service config controller
I0925 06:24:15.028930       1 controller_utils.go:1048] Caches are synced for endpoints config controller

It looks like the kernel might have a problem as well.
Check the kubelet and docker logs for anything abnormal.
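One aside on the kube-proxy log: the ip_vs errors only mean the IPVS kernel modules could not be loaded from inside the container, and the log shows kube-proxy falling back to iptables mode ("Using iptables Proxier"), so they are unlikely to be what keeps the node NotReady. If you do want IPVS, load the modules on the host beforehand (a sketch; the module list can vary by kernel):

# Load IPVS modules on the host rather than inside the container
for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do
  modprobe $mod
done
lsmod | grep ip_vs   # verify they are loaded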

The docker logs repeatedly contain:

level=error msg="Error closing logger: invalid argument"
level=warning msg="Unknown healthcheck type 'NONE' (expected 'CMD') in container xxxxx

The kubelet log is basically full of connection-refused errors against localhost:6443.
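Since the containers are being stopped and deleted by kubelet itself rather than crashing on their own, it is also worth checking the node's reported conditions and its free disk space (a sketch; the first command runs on the master, the second on the worker):

# On the master: look for DiskPressure/MemoryPressure under Conditions,
# and for eviction events at the bottom of the output
kubectl describe node k8s-node01

# On the worker: check free space on the partitions kubelet and docker use
df -h /var/lib/kubelet /var/lib/docker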

Thanks to @vinkdong for the remote assistance! The problem is solved. The root cause of the NotReady state was that the partition kubelet lives on had less than 10G of free disk space, which tripped kubelet's hard eviction threshold, so everything on the node kept getting evicted by K8S!

The current solutions are as follows (either one will do):

  1. Lower the disk-space eviction threshold in /etc/systemd/system/kubelet.service.d/20-kubelet-override.conf, e.g. to 1Gi (if images live on the same partition, consider lowering imagefs.available as well), then run systemctl daemon-reload && systemctl restart kubelet:
--eviction-hard=memory.available<512Mi,nodefs.available<1Gi,imagefs.available<10Gi
  2. Point /var/lib/kubelet at another partition with enough free space via a symlink (see the sketch below).
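A sketch of option 2, assuming the roomier partition is mounted at /data (the path is an assumption; stop kubelet first so nothing is writing to the directory):

systemctl stop kubelet
# Move kubelet's data onto the partition with enough free space
mv /var/lib/kubelet /data/kubelet
# Keep the old path working via a symlink
ln -s /data/kubelet /var/lib/kubelet
systemctl start kubelet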