Deploying a K8S Cluster with kubeadm (containerd + Calico)
# 1. Environment Planning

| HostName | IPv4 | Role |
| --- | --- | --- |
| master01 | 10.0.1.200/24 | master |
| master02 | 10.0.1.201/24 | master |
| master03 | 10.0.1.202/24 | master |
| node01 | 10.0.1.203/24 | worker (node) |
| node02 | 10.0.1.204/24 | worker (node) |
| node03 | 10.0.1.205/24 | worker (node) |

OS version: Rocky 8.8
K8S version: 1.26.9
Container runtime: containerd
Network plugin: Calico

# 2. Environment Preparation (all nodes)

## 2.1 Disable the firewall and SELinux

```
$ systemctl stop firewalld && systemctl disable firewalld
$ sed -i 's/enforcing/disabled/g' /etc/selinux/config
$ setenforce 0
```

## 2.2 Set the hostname

On the master node, set the name to master01:

```
$ hostnamectl set-hostname master01
```

On the node, set the name to node01:

```
$ hostnamectl set-hostname node01
```

## 2.3 Configure the hosts file

```
$ cat >> /etc/hosts <<EOF
10.0.1.200 master01
10.0.1.201 master02
10.0.1.202 master03
10.0.1.203 node01
10.0.1.204 node02
10.0.1.205 node03
EOF
```

## 2.4 Configure time synchronization

```
$ yum install -y chrony
$ systemctl start chronyd && systemctl enable chronyd
```

## 2.5 Enable IP forwarding and bridge filtering

```
$ cat > /etc/modules-load.d/k8s.conf << EOF
overlay
br_netfilter
EOF

# Add the bridge-filter and IP-forwarding configuration file
$ cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF

# Load the overlay and br_netfilter modules
$ modprobe overlay
$ modprobe br_netfilter

# Verify that the modules are loaded
$ lsmod | grep br_netfilter
br_netfilter           22256  0
bridge                151336  1 br_netfilter

# Apply the sysctl configuration
$ sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
```

## 2.6 Install ipset and ipvsadm

Install ipset and ipvsadm:

```
$ dnf install -y ipset ipvsadm
```

Configure module loading for ipvsadm and add the required modules:

```
$ cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
```

Make the script executable, run it, and check that the modules are loaded:

```
$ chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
```

## 2.7 Disable the SWAP partition

```
$ swapoff -a                              # temporary, takes effect immediately
$ sed -ri 's/.*swap.*/#&/' /etc/fstab     # permanent, comments out the swap entry
```

## 2.8 Configure containerd

Official documentation: https://github.com/containerd/containerd/blob/main/docs/getting-started.md

Download and extract containerd (releases: https://github.com/containerd/containerd/releases):

```
$ wget https://github.com/containerd/containerd/releases/download/v1.7.6/containerd-1.7.6-linux-amd64.tar.gz
$ tar xzvf containerd-1.7.6-linux-amd64.tar.gz -C /usr/local
```

Configure the systemd unit:

```
$ wget -O /usr/lib/systemd/system/containerd.service https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
$ systemctl daemon-reload
$ systemctl enable --now containerd
```

Install runc (releases: https://github.com/opencontainers/runc/releases):

```
$ wget https://github.com/opencontainers/runc/releases/download/v1.1.8/runc.amd64
$ install -m 755 runc.amd64 /usr/local/sbin/runc
```

Install the CNI plugins (releases: https://github.com/containernetworking/plugins/releases):

```
$ wget https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz
$ mkdir -p /opt/cni/bin
$ tar xzvf cni-plugins-linux-amd64-v1.3.0.tgz -C /opt/cni/bin
```

Configure the cgroup driver and the sandbox (pause) image:

```
$ mkdir /etc/containerd
$ /usr/local/bin/containerd config default > /etc/containerd/config.toml
$ sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
$ sed -i 's/sandbox_image = "registry.k8s.io\/pause:3.8"/sandbox_image = "registry.aliyuncs.com\/google_containers\/pause:3.9"/g' /etc/containerd/config.toml
$ systemctl restart containerd
```
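Before moving on to the K8S packages, it can save debugging time to confirm on each node that the preparation steps above actually took effect. The following is a minimal check script, not part of the original procedure; the hostname list and expected values simply mirror what section 2 configures.

```
#!/bin/bash
# precheck.sh -- quick sanity check of the section 2 prerequisites on the local node

echo "SELinux:      $(getenforce)"                       # expect Permissive or Disabled
echo "firewalld:    $(systemctl is-active firewalld)"     # expect inactive
echo "swap devices: $(swapon --show | wc -l)"             # expect 0
echo "containerd:   $(systemctl is-active containerd)"    # expect active

# Kernel modules required for bridge filtering and IPVS
for m in overlay br_netfilter ip_vs nf_conntrack; do
    lsmod | grep -qw "$m" && echo "module $m: loaded" || echo "module $m: MISSING"
done

# sysctl values written to /etc/sysctl.d/k8s.conf
for k in net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward; do
    echo "$k = $(sysctl -n $k)"                           # expect 1
done

# Every cluster member should resolve via /etc/hosts
for h in master01 master02 master03 node01 node02 node03; do
    getent hosts "$h" >/dev/null || echo "hosts entry for $h: MISSING"
done
```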
# 3. Install K8S 1.26.x

## 3.1 Configure the yum repository

```
$ cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

$ yum makecache

# List all available versions
$ yum list kubelet --showduplicates | sort -r | grep 1.26
```

## 3.2 Install kubeadm, kubectl, and kubelet

```
$ dnf install -y kubectl-1.26.9-0 kubelet-1.26.9-0 kubeadm-1.26.9-0
```

* To keep the cgroup driver used by the container runtime (containerd, configured with SystemdCgroup above) consistent with the one used by kubelet, it is recommended to modify the following file:

```
$ vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"

# or
$ sed -i 's/KUBELET_EXTRA_ARGS=/KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"/g' /etc/sysconfig/kubelet
```

* Enable kubelet at boot only; no configuration file has been generated yet, so it will start automatically after cluster initialization:

```
$ systemctl enable kubelet
```

* Point crictl at containerd:

```
$ crictl config --set runtime-endpoint=unix:///run/containerd/containerd.sock
$ crictl images
```

## 3.3 Prepare the images

```
$ kubeadm config images list --kubernetes-version=v1.26.9
registry.k8s.io/kube-apiserver:v1.26.9
registry.k8s.io/kube-controller-manager:v1.26.9
registry.k8s.io/kube-scheduler:v1.26.9
registry.k8s.io/kube-proxy:v1.26.9
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.9.3

$ cat > image_download.sh << EOF
#!/bin/bash
images_list='
registry.aliyuncs.com/google_containers/kube-apiserver:v1.26.9
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.26.9
registry.aliyuncs.com/google_containers/kube-scheduler:v1.26.9
registry.aliyuncs.com/google_containers/kube-proxy:v1.26.9
registry.aliyuncs.com/google_containers/pause:3.9
registry.aliyuncs.com/google_containers/etcd:3.5.6-0
registry.aliyuncs.com/google_containers/coredns:v1.9.3'

for i in \$images_list
do
  crictl pull \$i
done
EOF
```

## 3.4 Initialize the cluster (run on the first master, master01)

Initialize with kubeadm init:

```
$ kubeadm init --kubernetes-version=v1.26.9 \
  --pod-network-cidr=10.224.0.0/16 \
  --apiserver-advertise-address=10.0.1.200 \
  --apiserver-cert-extra-sans master02 --apiserver-cert-extra-sans 10.0.1.201 \
  --apiserver-cert-extra-sans master03 --apiserver-cert-extra-sans 10.0.1.202 \
  --control-plane-endpoint=k8s.zhoumx.cc \
  --image-repository registry.aliyuncs.com/google_containers
```

Parameters:

* `--apiserver-advertise-address`: the address the API server on this control-plane node advertises (the master node's IP)
* `--apiserver-cert-extra-sans`: optional extra Subject Alternative Names for the API server serving certificate
* `--image-repository`: the default registry k8s.gcr.io is not reachable from mainland China, so the Aliyun mirror is used instead
* `--kubernetes-version`: the K8S version, matching the packages installed above
* `--service-cidr`: the cluster-internal virtual network used as the unified access entry point for Pods (the Service network)
* `--pod-network-cidr`: the Pod network; it must match the CIDR in the CNI component's YAML deployed below
* `--control-plane-endpoint`: a shared endpoint for all control-plane nodes; it accepts an IP address or a DNS name that maps to one (optional, but kubeadm does not support converting a single control-plane cluster created without `--control-plane-endpoint` into a highly available cluster later)

On success you will see:

```
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join k8s.zhoumx.cc:6443 --token r8xstf.pvxxtgsqcrkzix5t \
        --discovery-token-ca-cert-hash sha256:1299ad3afbf3e37c0421bb6abbf2a45191accdbeca410b358546db69d2e1c293 \
        --control-plane

Then you can join any number of worker nodes by running the following on each as root:

  kubeadm join k8s.zhoumx.cc:6443 --token r8xstf.pvxxtgsqcrkzix5t \
        --discovery-token-ca-cert-hash sha256:1299ad3afbf3e37c0421bb6abbf2a45191accdbeca410b358546db69d2e1c293
```
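As a side note, the same initialization can also be expressed as a kubeadm configuration file instead of the long flag list above, which is easier to keep in version control. The sketch below is intended to be equivalent to the command in 3.4 under the same addresses and versions; the file name kubeadm-config.yaml is arbitrary.

```
$ cat > kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.0.1.200       # same as --apiserver-advertise-address
  bindPort: 6443
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.26.9
controlPlaneEndpoint: k8s.zhoumx.cc:6443
imageRepository: registry.aliyuncs.com/google_containers
apiServer:
  certSANs:                          # same as --apiserver-cert-extra-sans
    - master02
    - 10.0.1.201
    - master03
    - 10.0.1.202
networking:
  podSubnet: 10.224.0.0/16           # same as --pod-network-cidr
EOF

$ kubeadm init --config kubeadm-config.yaml
```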
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/ You can now join any number of control-plane nodes by copying certificate authorities and service account keys on each node and then running the following as root: kubeadm join k8s.zhoumx.cc:6443 --token r8xstf.pvxxtgsqcrkzix5t \ --discovery-token-ca-cert-hash sha256:1299ad3afbf3e37c0421bb6abbf2a45191accdbeca410b358546db69d2e1c293 \ --control-plane Then you can join any number of worker nodes by running the following on each as root: kubeadm join k8s.zhoumx.cc:6443 --token r8xstf.pvxxtgsqcrkzix5t \ --discovery-token-ca-cert-hash sha256:1299ad3afbf3e37c0421bb6abbf2a45191accdbeca410b358546db69d2e1c293 ``` * 将master节点相关证书拷贝至master02-03 ``` 创建目录(master02-03) $ mkdir -p /etc/kubernetes/pki/etcd 拷贝证书(master01) $ scp /etc/kubernetes/pki/ca.* master02:/etc/kubernetes/pki/ $ scp /etc/kubernetes/pki/sa.* master02:/etc/kubernetes/pki/ $ scp /etc/kubernetes/pki/front-proxy-ca.* master02:/etc/kubernetes/pki/ $ scp /etc/kubernetes/pki/etcd/ca.* master02:/etc/kubernetes/pki/etcd/ $ scp /etc/kubernetes/pki/ca.* master03:/etc/kubernetes/pki/ $ scp /etc/kubernetes/pki/sa.* master03:/etc/kubernetes/pki/ $ scp /etc/kubernetes/pki/front-proxy-ca.* master03:/etc/kubernetes/pki/ $ scp /etc/kubernetes/pki/etcd/ca.* master03:/etc/kubernetes/pki/etcd/ ``` * 在master02和master03上执行: ``` $ kubeadm join k8s.zhoumx.cc:6443 --token r8xstf.pvxxtgsqcrkzix5t \ --discovery-token-ca-cert-hash sha256:1299ad3afbf3e37c0421bb6abbf2a45191accdbeca410b358546db69d2e1c293 \ --control-plane 成功后显示: This node has joined the cluster and a new control plane instance was created: * Certificate signing request was sent to apiserver and approval was received. * The Kubelet was informed of the new secure connection details. * Control plane label and taint were applied to the new node. * The Kubernetes control plane instances scaled up. * A new etcd member was added to the local/stacked etcd cluster. To start administering your cluster from this node, you need to run the following as a regular user: mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config Run 'kubectl get nodes' to see this node join the cluster. ``` * 在node01、node02和node03上执行: ``` $ kubeadm join k8s.zhoumx.cc:6443 --token r8xstf.pvxxtgsqcrkzix5t \ --discovery-token-ca-cert-hash sha256:1299ad3afbf3e37c0421bb6abbf2a45191accdbeca410b358546db69d2e1c293 成功后显示: This node has joined the cluster: * Certificate signing request was sent to apiserver and a response was received. * The Kubelet was informed of the new secure connection details. Run 'kubectl get nodes' on the control-plane to see this node join the cluster. 
* Verify on master01:

```
$ kubectl get nodes
NAME       STATUS     ROLES           AGE     VERSION
master01   NotReady   control-plane   8m39s   v1.26.9
master02   NotReady   control-plane   5m52s   v1.26.9
master03   NotReady   control-plane   3m45s   v1.26.9
node01     NotReady   <none>          114s    v1.26.9
node02     NotReady   <none>          39s     v1.26.9
node03     NotReady   <none>          35s     v1.26.9
```

* Label the worker nodes:

```
$ kubectl label node node01 node-role.kubernetes.io/worker=worker
$ kubectl label node node02 node-role.kubernetes.io/worker=worker
$ kubectl label node node03 node-role.kubernetes.io/worker=worker

[root@master01 ~]# kubectl get nodes
NAME       STATUS     ROLES           AGE     VERSION
master01   NotReady   control-plane   10m     v1.26.9
master02   NotReady   control-plane   7m16s   v1.26.9
master03   NotReady   control-plane   5m9s    v1.26.9
node01     NotReady   worker          3m18s   v1.26.9
node02     NotReady   worker          2m3s    v1.26.9
node03     NotReady   worker          119s    v1.26.9
```

## 3.5 Deploy the network plugin

Official documentation: https://docs.tigera.io/calico/latest/operations/calicoctl/install

* Apply the Tigera operator manifest:

```
$ kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml
```

* Prepare the required custom resources:

```
$ mkdir calicodir
$ cd calicodir
$ wget https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/custom-resources.yaml

# Edit the cidr on line 13 of the file so it matches the --pod-network-cidr passed to kubeadm init:
    - blockSize: 26
      cidr: 10.224.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()

$ kubectl create -f custom-resources.yaml
```

* Confirm that all pods are running:

```
$ kubectl get pods -n calico-system
```

> Wait until every pod is in the Running state.

* Remove the taints from the control-plane nodes (this allows regular Pods to be scheduled on them):

```
$ kubectl taint nodes --all node-role.kubernetes.io/control-plane-
$ kubectl taint nodes --all node-role.kubernetes.io/master-
```

* Install the calicoctl client:

```
$ curl -L https://github.com/projectcalico/calico/releases/download/v3.26.1/calicoctl-linux-amd64 -o calicoctl
$ chmod +x ./calicoctl
$ mv calicoctl /usr/bin/
```

* List the Calico nodes that are up:

```
$ calicoctl get nodes --allow-version-mismatch
NAME
master01
```
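Before the full validation below, it is also worth double-checking that the IP pool Calico actually created matches the `--pod-network-cidr` used during kubeadm init; a mismatch here is a common cause of pods that never become Ready. A minimal check (the exact column layout may differ slightly between calicoctl versions):

```
# The CIDR column should show 10.224.0.0/16, as configured in custom-resources.yaml
$ calicoctl get ippool -o wide --allow-version-mismatch

# The pod CIDR kubeadm recorded for the cluster, for comparison
$ kubectl -n kube-system get cm kubeadm-config -o yaml | grep podSubnet
```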
# 4. Verify Cluster Availability

```
[root@master01 package]# kubectl get node
NAME       STATUS   ROLES           AGE   VERSION
master01   Ready    control-plane   64m   v1.26.9
master02   Ready    control-plane   61m   v1.26.9
master03   Ready    control-plane   59m   v1.26.9
node01     Ready    worker          57m   v1.26.9
node02     Ready    worker          56m   v1.26.9
node03     Ready    worker          56m   v1.26.9
```

```
[root@master01 package]# kubectl get pods -n kube-system
NAME                               READY   STATUS    RESTARTS         AGE
coredns-5bbd96d687-78dsn           1/1     Running   0                64m
coredns-5bbd96d687-mjrks           1/1     Running   0                64m
etcd-master01                      1/1     Running   5 (7m17s ago)    64m
etcd-master02                      1/1     Running   2 (5m45s ago)    61m
etcd-master03                      1/1     Running   0                58m
kube-apiserver-master01            1/1     Running   24 (7m17s ago)   64m
kube-apiserver-master02            1/1     Running   3 (5m45s ago)    61m
kube-apiserver-master03            1/1     Running   0                59m
kube-controller-manager-master01   1/1     Running   12 (7m17s ago)   64m
kube-controller-manager-master02   1/1     Running   2 (5m45s ago)    60m
kube-controller-manager-master03   1/1     Running   0                59m
kube-proxy-4pzd6                   1/1     Running   0                57m
kube-proxy-8v2q2                   1/1     Running   1 (5m45s ago)    61m
kube-proxy-9m8rl                   1/1     Running   1 (7m17s ago)    64m
kube-proxy-mxl6w                   1/1     Running   0                59m
kube-proxy-tntzw                   1/1     Running   0                56m
kube-proxy-zn7qg                   1/1     Running   0                56m
kube-scheduler-master01            1/1     Running   11 (7m17s ago)   64m
kube-scheduler-master02            1/1     Running   2 (5m45s ago)    61m
kube-scheduler-master03            1/1     Running   0                59m
```

```
[root@master01 package]# calicoctl get nodes --allow-version-mismatch
NAME
master01
master02
master03
node01
node02
node03
```

```
[root@master01 package]# kubectl get pods -n calico-system
NAME                                       READY   STATUS    RESTARTS        AGE
calico-kube-controllers-6597b8f988-vgngl   1/1     Running   0               45m
calico-node-2nldr                          1/1     Running   0               45m
calico-node-4g7m9                          1/1     Running   0               45m
calico-node-ppvlc                          1/1     Running   1 (7m27s ago)   45m
calico-node-v756d                          1/1     Running   0               45m
calico-node-xn458                          1/1     Running   0               45m
calico-node-z528j                          1/1     Running   1 (5m55s ago)   45m
calico-typha-ffd6bdcc7-2xzpl               1/1     Running   0               44m
calico-typha-ffd6bdcc7-sbg65               1/1     Running   0               45m
calico-typha-ffd6bdcc7-w9x45               1/1     Running   0               44m
csi-node-driver-9pkrz                      2/2     Running   0               45m
csi-node-driver-h96ld                      2/2     Running   0               45m
csi-node-driver-pqtjw                      2/2     Running   0               45m
csi-node-driver-qjlmj                      2/2     Running   0               45m
csi-node-driver-w2z6b                      2/2     Running   0               45m
csi-node-driver-xv9f2                      2/2     Running   0               45m
```
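As a final smoke test (not part of the original checklist), you can confirm that scheduling, the Calico pod network, and cluster DNS all work by running a throwaway workload. The sketch below assumes the nodes can pull the public nginx and busybox images:

```
# Run a test deployment and expose it inside the cluster
$ kubectl create deployment nginx-test --image=nginx --replicas=3
$ kubectl expose deployment nginx-test --port=80
$ kubectl get pods -o wide -l app=nginx-test     # pods should be Running and spread across nodes

# Resolve and reach the service from another pod (tests CoreDNS + Calico routing)
$ kubectl run busybox-test --rm -it --image=busybox --restart=Never -- \
      sh -c 'nslookup nginx-test && wget -qO- http://nginx-test | head -n 4'

# Clean up
$ kubectl delete deployment nginx-test && kubectl delete service nginx-test
```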
阿星
January 21, 2024, 22:09