Preparation

Disable the firewall

Disable SELinux

Disable swap (permanently)
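The three steps above can be done with the following commands. This is a sketch for RHEL/Rocky-family systems; the sed edits assume the stock file layouts.

```shell
# Disable the firewall now and on boot
systemctl disable --now firewalld

# Disable SELinux for the running system and across reboots
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

# Disable swap now, and comment out swap entries in /etc/fstab so it stays off
swapoff -a
sed -ri 's/^([^#].*\sswap\s)/#\1/' /etc/fstab
```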

Install dependencies and required packages

yum -y install wget jq psmisc vim net-tools nfs-utils socat telnet device-mapper-persistent-data lvm2 network-scripts curl conntrack ipvsadm ipset iptables sysstat libseccomp git libcgroup
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
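The modprobe commands above only load the modules until the next reboot. To make them persistent, and to set the kernel parameters kubeadm's preflight checks normally expect, something like the following can be used. The sysctl settings and the extra br_netfilter module are my addition, not part of the original steps, but they are commonly required:

```shell
# Load the IPVS/conntrack modules on every boot (br_netfilter is added
# because the bridge sysctls below depend on it)
cat > /etc/modules-load.d/k8s.conf <<'EOF'
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
br_netfilter
EOF
modprobe br_netfilter

# Kernel parameters expected by kube-proxy / CNI bridging
cat > /etc/sysctl.d/k8s.conf <<'EOF'
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sysctl --system
```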

wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.4/cri-dockerd-0.3.4-3.el8.x86_64.rpm

 rpm -ivh cri-dockerd-0.3.4-3.el8.x86_64.rpm

Modify the Docker configuration file

Edit /etc/docker/daemon.json as follows:

[root@master ~]# cat /etc/docker/daemon.json
{
  "registry-mirrors": [
    "https://nexus.ycjy.info"
  ],
  "insecure-registries": [
    "https://nexus.ycjy.info"
  ],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}
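Before restarting Docker it is worth checking that the file is still valid JSON, since a stray comma here will keep the daemon from starting. A minimal check, assuming python3 is installed:

```shell
# Exits non-zero and points at the offending line if the JSON is malformed
python3 -m json.tool /etc/docker/daemon.json && echo "daemon.json is valid JSON"
```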

Restart Docker

   systemctl daemon-reload
   systemctl restart docker
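After the restart, you can confirm that Docker picked up the systemd cgroup driver; kubelet and Docker must agree on this setting:

```shell
docker info --format '{{.CgroupDriver}}'
# expected output: systemd
```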

Modify the cri-dockerd service file

/usr/lib/systemd/system/cri-docker.service
Change line 10 (the ExecStart line) to the following:

 ExecStart=/usr/bin/cri-dockerd --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9 --container-runtime-endpoint fd://
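If you prefer not to edit the file by hand, the same change can be scripted. This is a sketch that assumes the stock unit file, where ExecStart is the only line beginning with `ExecStart=/usr/bin/cri-dockerd`:

```shell
# Rewrite the ExecStart line in place; a backup is kept as .bak
sed -i.bak 's|^ExecStart=/usr/bin/cri-dockerd.*|ExecStart=/usr/bin/cri-dockerd --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9 --container-runtime-endpoint fd://|' \
  /usr/lib/systemd/system/cri-docker.service
systemctl daemon-reload
```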

Start the cri-docker service
systemctl start cri-docker
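`systemctl start` only starts the service for the current boot. To have it survive reboots, also enable it (the rpm ships both a service and a socket unit):

```shell
systemctl enable --now cri-docker.service cri-docker.socket
systemctl is-active cri-docker   # sanity check; should report "active"
```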

Pull the component images required by Kubernetes

kubeadm config images list --kubernetes-version=v1.28.2

The listed default images are hosted on a registry that is not accessible from this environment.

Pull them from the Aliyun mirror instead:

kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --cri-socket=unix:///var/run/cri-dockerd.sock
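You can first preview what will be pulled from the mirror by rerunning the list command with the repository overridden:

```shell
kubeadm config images list --kubernetes-version=v1.28.2 \
  --image-repository registry.aliyuncs.com/google_containers
```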

Initialize the master node (run this on the master only)

kubeadm init --kubernetes-version=v1.28.2 --pod-network-cidr=10.224.0.0/16 --apiserver-advertise-address=192.168.10.49 --image-repository registry.aliyuncs.com/google_containers --cri-socket=unix:///var/run/cri-dockerd.sock

.....
.....

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.10.49:6443 --token s3hgyt.9o1dg2qm88zfnc38 \
        --discovery-token-ca-cert-hash sha256:1d86c79510692384e423026445ea197bc68d463a026093ed37361888f820e568
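The token in the join command expires (by default after 24 hours). If you need to join a node later and no longer have the output above, a fresh join command can be generated on the master at any time:

```shell
kubeadm token create --print-join-command
```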

Node configuration

Run the join command generated above on each node, but you must manually append the `--cri-socket` parameter yourself:

kubeadm join 192.168.10.49:6443 --token s3hgyt.9o1dg2qm88zfnc38 \
        --discovery-token-ca-cert-hash sha256:1d86c79510692384e423026445ea197bc68d463a026093ed37361888f820e568 --cri-socket=unix:///var/run/cri-dockerd.sock

Set up the environment so the kubectl command can be used

   mkdir -p $HOME/.kube
   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
   sudo chown $(id -u):$(id -g) $HOME/.kube/config
   echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >>/etc/profile

Running kubectl shows that the nodes are not yet in the Ready state:

[root@master ~]# kubectl get  no
NAME     STATUS     ROLES           AGE     VERSION
master   NotReady   control-plane   8m18s   v1.28.2
node1    NotReady   <none>          2m57s   v1.28.2
node2    NotReady   <none>          2m54s   v1.28.2

Inspect the node details:

[root@master ~]# kubectl describe node master
Name:               master
Roles:              control-plane
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=master
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/control-plane=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Fri, 20 Oct 2023 10:19:20 -0400
Taints:             node-role.kubernetes.io/control-plane:NoSchedule
                    node.kubernetes.io/not-ready:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  master
  AcquireTime:     <unset>
  RenewTime:       Fri, 20 Oct 2023 10:28:01 -0400
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Fri, 20 Oct 2023 10:24:46 -0400   Fri, 20 Oct 2023 10:19:16 -0400   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Fri, 20 Oct 2023 10:24:46 -0400   Fri, 20 Oct 2023 10:19:16 -0400   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Fri, 20 Oct 2023 10:24:46 -0400   Fri, 20 Oct 2023 10:19:16 -0400   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Fri, 20 Oct 2023 10:24:46 -0400   Fri, 20 Oct 2023 10:19:16 -0400   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
  InternalIP:  192.168.10.49
  Hostname:    master
Capacity:
  cpu:                2
  ephemeral-storage:  36678148Ki
  hugepages-2Mi:      0
  memory:             4644548Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  33802581141
  hugepages-2Mi:      0
  memory:             4542148Ki
  pods:               110
System Info:
  Machine ID:                 a50aeeffb6d04db491fbc789cc892800
  System UUID:                be84f15e-6df8-4b4e-b9b7-e03f016dc00b
  Boot ID:                    b572dd1c-e8cc-46dd-8f87-774aa185f9aa
  Kernel Version:             4.18.0-477.10.1.el8_8.x86_64
  OS Image:                   Rocky Linux 8.8 (Green Obsidian)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://24.0.6
  Kubelet Version:            v1.28.2
  Kube-Proxy Version:         v1.28.2
PodCIDR:                      10.224.0.0/24
PodCIDRs:                     10.224.0.0/24
Non-terminated Pods:          (5 in total)
  Namespace                   Name                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                              ------------  ----------  ---------------  -------------  ---
  kube-system                 etcd-master                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m42s
  kube-system                 kube-apiserver-master             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m40s
  kube-system                 kube-controller-manager-master    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m47s
  kube-system                 kube-proxy-wnjtf                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m29s
  kube-system                 kube-scheduler-master             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m43s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                650m (32%)  0 (0%)
  memory             100Mi (2%)  0 (0%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type     Reason                   Age              From             Message
  ----     ------                   ----             ----             -------
  Normal   Starting                 8m25s            kube-proxy
  Normal   NodeHasSufficientMemory  9m (x8 over 9m)  kubelet          Node master status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    9m (x7 over 9m)  kubelet          Node master status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     9m (x7 over 9m)  kubelet          Node master status is now: NodeHasSufficientPID
  Normal   NodeAllocatableEnforced  9m               kubelet          Updated Node Allocatable limit across pods
  Normal   Starting                 8m40s            kubelet          Starting kubelet.
  Warning  InvalidDiskCapacity      8m40s            kubelet          invalid capacity 0 on image filesystem
  Normal   NodeHasSufficientMemory  8m40s            kubelet          Node master status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    8m40s            kubelet          Node master status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     8m40s            kubelet          Node master status is now: NodeHasSufficientPID
  Normal   NodeAllocatableEnforced  8m40s            kubelet          Updated Node Allocatable limit across pods
  Normal   RegisteredNode           8m30s            node-controller  Node master event: Registered Node master in Controller

In the Conditions table, the Ready condition is False with the message
"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized". This means the CNI network plugin has not been installed yet. (The "Warning InvalidDiskCapacity ... invalid capacity 0 on image filesystem" line in the events is unrelated and can be ignored.)

Install the network plugin (Calico) on the master

[root@master ~]#  wget https://docs.tigera.io/archive/v3.25/manifests/calico.yaml

[root@master ~]#  kubectl apply -f calico.yaml

[root@master ~]# kubectl get node
NAME     STATUS   ROLES           AGE   VERSION
master   Ready    control-plane   39m   v1.28.2
node1    Ready    <none>          34m   v1.28.2
node2    Ready    <none>          34m   v1.28.2
[root@master ~]#

[root@master ~]# kubectl get po -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS      AGE
kube-system   calico-kube-controllers-658d97c59c-r4vdp   1/1     Running   0             27m
kube-system   calico-node-29sft                          1/1     Running   0             27m
kube-system   calico-node-jwfpp                          1/1     Running   0             27m
kube-system   calico-node-v27cf                          1/1     Running   0             27m
kube-system   coredns-66f779496c-4vhdx                   1/1     Running   0             39m
kube-system   coredns-66f779496c-wfr27                   1/1     Running   0             39m
kube-system   etcd-master                                1/1     Running   0             39m
kube-system   kube-apiserver-master                      1/1     Running   0             39m
kube-system   kube-controller-manager-master             1/1     Running   2 (21m ago)   40m
kube-system   kube-proxy-fq9rs                           1/1     Running   0             34m
kube-system   kube-proxy-w5btd                           1/1     Running   0             34m
kube-system   kube-proxy-wnjtf                           1/1     Running   0             39m
kube-system   kube-scheduler-master                      1/1     Running   2 (20m ago)   39m
[root@master ~]#

Author: 严锋, created 2023-10-20 22:42
Last edited by 严锋, updated 2025-06-05 17:05