Cluster Initialization (Master)

  • Use the NIC attached to the NAT network (i.e. enp0s3), because the other hosts need to join the cluster, and VMs on the same NAT network can reach one another; with VirtualBox's Host-only mode the VMs cannot. If you picked the wrong address, just reset the cluster; this is a lab environment, so feel free to experiment.
  • (Make a note of the kubeadm join command printed in the initialization output; it is needed when joining the worker nodes.)
    If you lose it, run kubeadm token create --print-join-command on the master to print it again.

kubeadm init command reference

  IP_ADDR=$(ip addr show enp0s3 | grep -Po 'inet \K[\d.]+')   # NIC name may differ (e.g. ens160, eth0)
  echo $IP_ADDR
  kubeadm init \
    --apiserver-advertise-address=$IP_ADDR \
    --pod-network-cidr=192.168.0.0/16 \
    --ignore-preflight-errors=Swap \
    --image-repository registry.aliyuncs.com/google_containers
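The `grep -Po 'inet \K…'` extraction above can be checked offline before running it against a real NIC. A minimal sketch (the `ip addr` sample below is hypothetical; substitute the actual output of your interface):

```shell
# Hypothetical output of `ip addr show enp0s3`; replace with your NIC's real output.
SAMPLE='2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    inet 192.168.56.101/24 brd 192.168.56.255 scope global enp0s3'

# Same extraction as the init script: \K discards the matched "inet " prefix and
# keeps only the IPv4 address; head -n1 guards against a NIC with several addresses.
IP_ADDR=$(printf '%s\n' "$SAMPLE" | grep -Po 'inet \K[\d.]+' | head -n1)
echo "$IP_ADDR"
```

Note that `\K` requires grep's Perl-regex mode (`-P`), which is available in GNU grep but not in BusyBox or BSD grep.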

Explanation of the init command options:

  • --kubernetes-version
    The Kubernetes version to install. May be omitted; the default is the latest stable release.

  • --apiserver-advertise-address
    The address the master's API server advertises to the other members of the cluster.

  • --pod-network-cidr
    Specifies the address range for the Pod network. Kubernetes supports several network add-ons, and each has its own requirements for --pod-network-cidr.
    We will use the Calico network later, so 192.168.0.0/16 is specified here.
    (If you were using the flannel add-on instead, its default manifest requires 10.244.0.0/16.)

  • --image-repository
    Kubernetes pulls images from k8s.gcr.io by default, which is not reachable from mainland China. Since v1.13, kubeadm accepts the --image-repository flag to override the default; here it is set to the Aliyun mirror registry.aliyuncs.com/google_containers.

  • --kubernetes-version=v1.15.2
    Pins the version and disables version detection. The default value, stable-1, makes kubeadm download the latest version number from https://dl.k8s.io/release/stable-1.txt, which is blocked in mainland China.

Result:

  [init] Using Kubernetes version: v1.15.2
  [preflight] Running pre-flight checks
  [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
  [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
  [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
  [preflight] Pulling images required for setting up a Kubernetes cluster
  [preflight] This might take a minute or two, depending on the speed of your internet connection
  [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
  [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
  [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
  [kubelet-start] Activating the kubelet service
  [certs] Using certificateDir folder "/etc/kubernetes/pki"
  [certs] Generating "ca" certificate and key
  [certs] Generating "apiserver" certificate and key
  [certs] apiserver serving cert is signed for DNS names [localhost.localdomain kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.56.101]
  [certs] Generating "apiserver-kubelet-client" certificate and key
  [certs] Generating "front-proxy-ca" certificate and key
  [certs] Generating "front-proxy-client" certificate and key
  [certs] Generating "etcd/ca" certificate and key
  [certs] Generating "etcd/server" certificate and key
  [certs] etcd/server serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [192.168.56.101 127.0.0.1 ::1]
  [certs] Generating "etcd/peer" certificate and key
  [certs] etcd/peer serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [192.168.56.101 127.0.0.1 ::1]
  [certs] Generating "etcd/healthcheck-client" certificate and key
  [certs] Generating "apiserver-etcd-client" certificate and key
  [certs] Generating "sa" key and public key
  [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
  [kubeconfig] Writing "admin.conf" kubeconfig file
  [kubeconfig] Writing "kubelet.conf" kubeconfig file
  [kubeconfig] Writing "controller-manager.conf" kubeconfig file
  [kubeconfig] Writing "scheduler.conf" kubeconfig file
  [control-plane] Using manifest folder "/etc/kubernetes/manifests"
  [control-plane] Creating static Pod manifest for "kube-apiserver"
  [control-plane] Creating static Pod manifest for "kube-controller-manager"
  [control-plane] Creating static Pod manifest for "kube-scheduler"
  [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
  [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
  [kubelet-check] Initial timeout of 40s passed.
  [apiclient] All control plane components are healthy after 53.656957 seconds
  [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
  [kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
  [upload-certs] Skipping phase. Please see --upload-certs
  [mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the label "node-role.kubernetes.io/master=''"
  [mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
  [bootstrap-token] Using token: 6a73av.mrwjgfk5ot6yzmt5
  [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
  [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
  [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
  [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
  [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
  [addons] Applied essential addon: CoreDNS
  [addons] Applied essential addon: kube-proxy

  Your Kubernetes control-plane has initialized successfully!

  To start using your cluster, you need to run the following as a regular user:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

  You should now deploy a pod network to the cluster.
  Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

  Then you can join any number of worker nodes by running the following on each as root:

  kubeadm join 192.168.56.101:6443 --token 6a73av.mrwjgfk5ot6yzmt5 \
  --discovery-token-ca-cert-hash sha256:d50e74aa89ff5ed5e0ea999bf07ce66abe2138a4e26fccb550b4924cd8569a06
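If only the token is regenerated later, the `--discovery-token-ca-cert-hash` value can be recomputed from the cluster CA certificate. The sketch below generates a throwaway CA so the recipe can be tried anywhere; on a real master you would point it at /etc/kubernetes/pki/ca.crt instead:

```shell
# Throwaway stand-in for the cluster CA (assumption: openssl is installed).
# On the master, set CA=/etc/kubernetes/pki/ca.crt and skip the generation step.
CA=/tmp/demo-ca.crt
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key -out "$CA" \
  -subj '/CN=kubernetes' -days 1 2>/dev/null

# Standard recipe: SHA-256 digest over the DER encoding of the CA's public key.
HASH=$(openssl x509 -pubkey -in "$CA" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //')
echo "sha256:$HASH"
```

The resulting `sha256:…` string is what `kubeadm join` expects after `--discovery-token-ca-cert-hash`.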

Explanation of the initialization steps:

  1. [preflight] kubeadm runs pre-flight checks before initializing.
  2. [kubelet-start] Generates the kubelet configuration file /var/lib/kubelet/config.yaml.
  3. [certs] Generates the various tokens and certificates.
  4. [kubeconfig] Generates the kubeconfig files; the kubelet needs these to communicate with the master.
  5. [control-plane] Installs the master components, pulling their Docker images from the specified registry.
  6. [bootstrap-token] Generates the bootstrap token; record it, since kubeadm join needs it when adding nodes to the cluster.
  7. [addons] Installs the kube-proxy and CoreDNS add-ons.
  8. The Kubernetes master initialized successfully; the output explains how to give a regular user kubectl access to the cluster.
  9. Explains how to install a Pod network.
  10. Explains how to join the other nodes to the cluster.

Run the following command and wait 3-10 minutes until all pods are in the Running state:

  watch kubectl get pod -n kube-system -o wide

Check the result of the master node initialization:

  kubectl get nodes -o wide

Resetting the Cluster

If the initialization went wrong, start over from scratch:

  kubeadm reset

Remember to run it on every node.
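Note that `kubeadm reset` wipes /etc/kubernetes and stops the static control-plane pods, but it deliberately does not clean up iptables rules or the admin kubeconfig. A hedged per-node teardown sketch (assumptions: run as root, and kube-proxy is in its default iptables mode):

```shell
# Sketch of a fuller per-node teardown; reset_node is defined here but not invoked.
reset_node() {
  kubeadm reset -f                 # wipe /etc/kubernetes, stop control-plane static pods
  iptables -F                      # flush the rules kube-proxy installed ...
  iptables -t nat -F               # ... including its NAT chains
  iptables -X                      # delete the now-empty custom chains
  rm -rf "$HOME/.kube/config"      # drop stale admin credentials copied during init
}
# Run reset_node on the master and on every worker before re-initializing.
```

Skipping the iptables flush can leave stale service rules that confuse the next cluster; skipping the kubeconfig removal makes kubectl talk to the old, dead cluster.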

Last updated: 2020-04-23 11:01   Author: admin