A Step-by-Step Guide to Installing a Kubernetes 1.25 Cluster with Containerd and the Calico Network Plugin


Kubernetes deprecated its Docker (dockershim) support in v1.20 and removed it entirely in v1.24. containerd is the recommended container runtime to replace it: it is the runtime Docker itself is built on, and letting Kubernetes talk to it directly through CRI removes an extra layer of indirection. Enough background; let's get hands-on.

Kubernetes network planning:

podSubnet (Pod CIDR): 10.244.0.0/16

serviceSubnet (Service CIDR): 10.96.0.0/12

Lab environment:

Operating system: CentOS 7.9

Specs: 4 GiB RAM / 2 vCPU / 50 GB disk

Network: NAT mode

Cluster role      IP               Hostname          Installed components
Control node      192.168.5.132    pengfei-master1   apiserver, controller-manager, scheduler, etcd, kube-proxy, containerd, calico, keepalived, nginx
Worker node       192.168.5.133    pengfei-node1     kubelet, kube-proxy, containerd, calico, coredns
Worker node       192.168.5.134    pengfei-node2     kubelet, kube-proxy, containerd, calico, coredns

1. Prepare the environment for installing the Kubernetes cluster

  • Run the following steps on the control node (master1) and both worker nodes (node1 and node2) at the same time
  • Windows: in Xshell, enable sending keyboard input to all three sessions so each command is executed on all servers at once

macOS terminal:

  • Shortcut: Shift+Command+I broadcasts your input to all three sessions at once; press it again to turn broadcasting off

Note: all of the following steps must be run on the control node (pengfei-master1) and on both worker nodes (pengfei-node1 and pengfei-node2).

1.1 Disable SELinux

# Temporarily
[root@pengfei-master1 ~]# setenforce 0
# Permanently
[root@pengfei-master1 ~]# sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config

1.2 Configure /etc/hosts

# Add the following three entries to /etc/hosts on every machine
192.168.5.132   pengfei-master1
192.168.5.133   pengfei-node1
192.168.5.134   pengfei-node2

1.3 Configure passwordless SSH

With the hosts entries above in place, the machines can reach each other by hostname. Now set up key-based SSH from the control node to the workers:

[root@pengfei-master1 ~]# ssh-keygen -t rsa    # press Enter at every prompt
[root@pengfei-master1 ~]# ssh-copy-id pengfei-node1
[root@pengfei-master1 ~]# ssh-copy-id pengfei-node2

1.4 Disable the swap partition

# Temporarily
[root@pengfei-master1 ~]# swapoff -a
# Permanently: comment out the swap mount in /etc/fstab (add a # at the start of the swap line)
[root@pengfei-master1 ~]# cat /etc/fstab
#/dev/mapper/centos_172-swap swap          swap  defaults    0 0

Why disable the swap partition?

Swap is the swap partition: when a machine runs low on memory it pages out to swap, but swap is much slower than RAM. For predictable performance, Kubernetes by default refuses to run with swap enabled. kubeadm's preflight checks verify that swap is off and fail initialization if it is not. If you really want to keep swap enabled, you can pass --ignore-preflight-errors=Swap when installing.
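For reference only (not used in this tutorial), skipping the swap preflight check instead of disabling swap would look roughly like this:

# Sketch only: let kubeadm proceed even though swap is on (not recommended)
[root@pengfei-master1 ~]# kubeadm init --config=kubeadm.yaml --ignore-preflight-errors=Swap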

1.5 Tune kernel parameters

[root@pengfei-master1 ~]# modprobe br_netfilter
[root@pengfei-master1 ~]# echo "modprobe br_netfilter" >> /etc/profile
[root@pengfei-master1 ~]# cat >/etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
[root@pengfei-master1 ~]# sysctl -p /etc/sysctl.d/kubernetes.conf

# Q1: Why run modprobe br_netfilter?
# If you add the three parameters above to /etc/sysctl.d/kubernetes.conf and run
# sysctl -p /etc/sysctl.d/kubernetes.conf before the module is loaded, you get:
#   sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
#   sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
# Fix:
[root@pengfei-master1 ~]# modprobe br_netfilter

# Q2: Why enable net.bridge.bridge-nf-call-iptables?
# On CentOS, `docker info` otherwise prints these warnings after installing Docker:
#   WARNING: bridge-nf-call-iptables is disabled
#   WARNING: bridge-nf-call-ip6tables is disabled
# Fix:
[root@pengfei-master1 ~]# vim /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

# Q3: Why set net.ipv4.ip_forward = 1?
# If IP forwarding is disabled, kubeadm init fails its preflight checks with an error saying
# that /proc/sys/net/ipv4/ip_forward is not set to 1.
# net.ipv4.ip_forward controls packet forwarding. For security, Linux disables forwarding by
# default. Forwarding means that when a host has more than one network interface and receives
# a packet destined for another network, it sends the packet out through another interface
# according to its routing table - i.e. it acts as a router. A value of 0 disables IP
# forwarding; 1 enables it.
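A quick way to confirm the module and parameters took effect (a minimal check, assuming the file name used above):

[root@pengfei-master1 ~]# lsmod | grep br_netfilter                                      # module is loaded
[root@pengfei-master1 ~]# sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward  # both should print 1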

1.6 Disable the firewall

[root@pengfei-master1 ~]# systemctl stop firewalld
[root@pengfei-master1 ~]# systemctl disable firewalld

1.7 Configure the Aliyun repo

[root@pengfei-master1 ~]# yum install -y yum-utils
[root@pengfei-master1 ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

1.8 Install dependencies (ipvsadm and others)

[root@pengfei-master1 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils \
  lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim \
  ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack \
  ntpdate telnet

1.9 Configure the Kubernetes repo (China mirror)

[root@pengfei-master1 ~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF

1.10 Configure time synchronization

[root@pengfei-master1 ~]# yum install ntpdate -y
[root@pengfei-master1 ~]# ntpdate cn.pool.ntp.org
# Re-sync once an hour via cron
[root@pengfei-master1 ~]# crontab -e
0 */1 * * * /usr/sbin/ntpdate cn.pool.ntp.org
[root@pengfei-master1 ~]# service crond restart

1.11 Install containerd

[root@pengfei-master1 ~]# yum install containerd.io-1.6.6 -y

Next, generate containerd's configuration file:

[root@pengfei-master1 ~]# mkdir -p /etc/containerd
[root@pengfei-master1 ~]# containerd config default > /etc/containerd/config.toml

# Edit /etc/containerd/config.toml:
#   change  SystemdCgroup = false  to  SystemdCgroup = true
#   change  sandbox_image = "k8s.gcr.io/pause:3.6"
#       to  sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"
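If you prefer not to edit the file by hand, the same two changes can be applied with sed; this is a minimal sketch whose match strings assume the stock config.toml generated above:

# Sketch: apply the two edits non-interactively (patterns assume the default config.toml layout)
[root@pengfei-master1 ~]# sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
[root@pengfei-master1 ~]# sed -i 's#sandbox_image = "k8s.gcr.io/pause:3.6"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"#' /etc/containerd/config.toml
# Verify both edits landed
[root@pengfei-master1 ~]# grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml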

Enable containerd to start on boot and start it now:

[root@pengfei-master1 ~]# systemctl enable containerd --now

Create /etc/crictl.yaml:

[root@pengfei-master1 ~]# cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF

[root@pengfei-master1 ~]# systemctl restart containerd
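To confirm that crictl can talk to containerd, a quick sanity check (output is only indicative):

[root@pengfei-master1 ~]# crictl info | head -n 5     # should print runtime status without errors
[root@pengfei-master1 ~]# crictl images               # an empty list is fine at this point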

1.12 Install Docker

Note: we also install Docker. Docker and containerd do not conflict; Docker is installed only so that we can build images from a Dockerfile.

[root@pengfei-master1 ~]# yum install docker-ce -y
[root@pengfei-master1 ~]# systemctl enable docker --now    # start Docker and enable it on boot
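Keep in mind that images built with Docker live in Docker's own image store and are not automatically visible to containerd, which is what Kubernetes uses. One common workaround, shown here as a sketch with a hypothetical myapp:v1 image, is to export the image and import it into containerd's k8s.io namespace:

# Build with Docker, then hand the image to containerd (myapp:v1 is a placeholder name)
[root@pengfei-master1 ~]# docker build -t myapp:v1 .
[root@pengfei-master1 ~]# docker save myapp:v1 -o myapp-v1.tar
[root@pengfei-master1 ~]# ctr -n k8s.io images import myapp-v1.tar
[root@pengfei-master1 ~]# crictl images | grep myapp        # image is now visible to the kubelet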

1.13 Configure a containerd registry mirror

[root@pengfei-master1 ~]# sed -i 's#config_path = ""#config_path = "/etc/containerd/certs.d"#' /etc/containerd/config.toml
[root@pengfei-master1 ~]# mkdir /etc/containerd/certs.d/docker.io/ -p
# Each mirror gets its own [host."..."] section in hosts.toml
[root@pengfei-master1 ~]# cat >/etc/containerd/certs.d/docker.io/hosts.toml <<EOF
[host."https://dbxvt5s3.mirror.aliyuncs.com"]
  capabilities = ["pull"]

[host."https://registry.docker-cn.com"]
  capabilities = ["pull"]
EOF

# Restart containerd
[root@pengfei-master1 ~]# systemctl restart containerd
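To check that containerd can pull through the mirror, a simple smoke test (any small public image works; busybox:1.28 is reused later in this guide):

[root@pengfei-master1 ~]# crictl pull docker.io/library/busybox:1.28
[root@pengfei-master1 ~]# crictl images | grep busybox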

1.14 Configure a Docker registry mirror

[root@pengfei-master1 ~]# cat >/etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://dbxvt5s3.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com"]
}
EOF
# Note: replace https://dbxvt5s3.mirror.aliyuncs.com with your own accelerator address
# Restart Docker
[root@pengfei-master1 ~]# systemctl restart docker
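You can confirm that Docker picked up the mirrors configured above (output abridged):

[root@pengfei-master1 ~]# docker info | grep -A 5 "Registry Mirrors"
 Registry Mirrors:
  https://dbxvt5s3.mirror.aliyuncs.com/
  ...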

2. Install Kubernetes

2.1 Install the packages needed to bootstrap k8s

[root@pengfei-master1 ~]# yum install -y kubelet-1.25.0 kubeadm-1.25.0 kubectl-1.25.0
[root@pengfei-master1 ~]# systemctl enable kubelet

# What each package does:
# kubeadm: the tool used to bootstrap (initialize) the k8s cluster
# kubelet: installed on every node; it starts Pods. With a kubeadm install, the control-plane
#          and worker components themselves run as Pods, and the kubelet is what launches them.
# kubectl: the CLI used to deploy and manage applications, inspect resources, and create,
#          delete, and update components
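A quick check that the expected versions were installed:

[root@pengfei-master1 ~]# kubeadm version -o short      # expect v1.25.0
[root@pengfei-master1 ~]# kubelet --version             # expect Kubernetes v1.25.0
[root@pengfei-master1 ~]# kubectl version --client      # client version should be v1.25.0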

Point crictl at the containerd runtime endpoint:

[root@pengfei-master1 ~]# crictl config runtime-endpoint /run/containerd/containerd.sock

Note: all of the steps above must be run on the control node (pengfei-master1) and on both worker nodes (pengfei-node1 and pengfei-node2).

2.2 Initialize the Kubernetes cluster with kubeadm (control node only)

2.2.1 Generate the cluster config file with kubeadm

# Note: run only on the pengfei-master1 control node
[root@pengfei-master1 ~]# kubeadm config print init-defaults > kubeadm.yaml

2.2.2 Edit the config file

[root@pengfei-master1 ~]# vim kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.5.132   # change: the control node's IP
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock   # change: use the containerd runtime
  imagePullPolicy: IfNotPresent
  name: pengfei-master1             # change: the control node's hostname
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers   # change: use the Aliyun image repository
kind: ClusterConfiguration
kubernetesVersion: 1.25.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16          # add this line: the Pod CIDR
  serviceSubnet: 10.96.0.0/12       # change: the Service CIDR
scheduler: {}

# Append the following (when copying, include the leading ---):
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd

# mode: ipvs switches kube-proxy to ipvs mode. If you do not specify it, kube-proxy defaults
# to iptables, which scales poorly, so ipvs is recommended in production. Managed Kubernetes
# on Alibaba Cloud and Huawei Cloud also offers ipvs mode.
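Optionally, you can pre-pull the control-plane images before running kubeadm init (kubeadm itself suggests this in its output); a minimal sketch using the config file above:

# Pull all required images from the Aliyun repository configured in kubeadm.yaml
[root@pengfei-master1 ~]# kubeadm config images pull --config kubeadm.yaml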

2.2.3 Initialize the Kubernetes cluster with kubeadm

Note: run only on the pengfei-master1 control node

[root@pengfei-master1 ~]# kubeadm init --config=kubeadm.yaml --ignore-preflight-errors=SystemVerification
[init] Using Kubernetes version: v1.25.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local pengfei-master1] and IPs [10.96.0.1 192.168.5.132]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost pengfei-master1] and IPs [192.168.5.132 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost pengfei-master1] and IPs [192.168.5.132 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 7.503598 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node pengfei-master1 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node pengfei-master1 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.5.132:6443 --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:4a8dc13f8752e26705222186578c501f47afb35a3478990e0093142c449135dd

Configure kubectl's kubeconfig file. This effectively authorizes kubectl, so the kubectl command can use this credential to manage the cluster.

[root@pengfei-master1 ~]# mkdir -p $HOME/.kube
[root@pengfei-master1 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@pengfei-master1 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@pengfei-master1 ~]# kubectl get nodes
NAME              STATUS     ROLES           AGE     VERSION
pengfei-master1   NotReady   control-plane   3m37s   v1.25.0

2.2.4 Add worker nodes to the cluster

Print the join command:

[root@pengfei-master1 ~]# kubeadm token create --print-join-command
kubeadm join 192.168.5.132:6443 --token zshezb.get0w6lwd4384ts4 --discovery-token-ca-cert-hash sha256:4a8dc13f8752e26705222186578c501f47afb35a3478990e0093142c449135dd
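Note that bootstrap tokens expire (the ttl in kubeadm.yaml above is 24h), so the token printed here may differ from the one shown during kubeadm init. You can list the currently valid tokens with:

[root@pengfei-master1 ~]# kubeadm token list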

Run on pengfei-node1:

[root@pengfei-node1 ~]# kubeadm join 192.168.5.132:6443 --token 7xr5wj.u7r7gu94bbq59qx8 --discovery-token-ca-cert-hash sha256:4a8dc13f8752e26705222186578c501f47afb35a3478990e0093142c449135dd
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Run on pengfei-node2:

[root@pengfei-node2 ~]# kubeadm join 192.168.5.132:6443 --token 7xr5wj.u7r7gu94bbq59qx8 --discovery-token-ca-cert-hash sha256:4a8dc13f8752e26705222186578c501f47afb35a3478990e0093142c449135dd
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

2.2.5 Check the cluster nodes

Run on the control node:

[root@pengfei-master1 ~]# kubectl get nodes
NAME              STATUS     ROLES           AGE   VERSION
pengfei-master1   NotReady   control-plane   42m   v1.25.0
pengfei-node1     NotReady   <none>          36m   v1.25.0
pengfei-node2     NotReady   <none>          35m   v1.25.0

As you can see, all nodes are NotReady at this point because no network plugin has been installed yet.

3. Install the Calico network plugin for Kubernetes

Main components of the Calico network model

1. Felix: an agent process that runs on every host. It manages and monitors network interfaces, programs routes and ARP entries, manages and syncs ACLs, and reports status, ensuring container connectivity across hosts.

2. etcd: a distributed key-value store, effectively the cluster's database, that holds Calico network model data such as IP address assignments. It keeps the network metadata consistent so the Calico network state stays accurate.

3. BGP Client (BIRD): Calico runs one BGP client (BIRD) on every host. It distributes the routes that Felix writes into the kernel to the rest of the Calico network, ensuring traffic between workloads can be routed.

4. BGP Route Reflector: in large deployments, a full BGP mesh between all clients does not scale, because every pair of nodes needs a session (on the order of N^2 connections). To avoid this, one or more BGP Route Reflectors can take over centralized route distribution instead.

3.1 Install Calico

[root@pengfei-master1 ~]# wget https://docs.projectcalico.org/manifests/calico.yaml
[root@pengfei-master1 ~]# kubectl apply -f calico.yaml
[root@pengfei-master1 ~]# kubectl get pod -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-74677b4c5f-8ql22   1/1     Running   0          3m42s
calico-node-46bld                          1/1     Running   0          3m42s
calico-node-bc9jh                          1/1     Running   0          3m42s
calico-node-npc7r                          1/1     Running   0          3m42s
coredns-7f8cbcb969-hwpgs                   1/1     Running   0          76m
coredns-7f8cbcb969-j2pn4                   1/1     Running   0          76m
etcd-pengfei-master1                       1/1     Running   0          76m
kube-apiserver-pengfei-master1             1/1     Running   0          76m
kube-controller-manager-pengfei-master1    1/1     Running   0          76m
kube-proxy-5v4ll                           1/1     Running   0          76m
kube-proxy-r9jds                           1/1     Running   0          69m
kube-proxy-s7kd6                           1/1     Running   1          70m
kube-scheduler-pengfei-master1             1/1     Running   0          76m

3.2 Notes on the Calico manifest (calico.yaml)

1. DaemonSet configuration

......
      containers:
        # Runs calico-node container on each Kubernetes node. This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: docker.io/calico/node:v3.18.0
......
          env:
            # Use Kubernetes API as the backing datastore.
            - name: DATASTORE_TYPE
              value: "kubernetes"
            # Cluster type to identify the deployment type
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            # Auto-detect the BGP IP address.
            - name: IP
              value: "autodetect"
            # Pod CIDR
            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"
            # Enable IPIP
            - name: CALICO_IPV4POOL_IPIP
              value: "Always"

The main calico-node parameters:

CALICO_IPV4POOL_IPIP: whether to enable IPIP mode. When IPIP is enabled, Calico creates a virtual tunnel device named tunl0 on each node. An IP pool can operate in one of two modes: BGP or IPIP. Set CALICO_IPV4POOL_IPIP="Always" to use IPIP, or CALICO_IPV4POOL_IPIP="Off" to disable it and use BGP mode instead.

IP_AUTODETECTION_METHOD: how the node IP address is detected. By default the IP of the first network interface is used. On nodes with multiple NICs you can select the right one with a regular expression, e.g. "interface=eth.*" matches interfaces whose names start with eth.
            - name: IP_AUTODETECTION_METHOD
              value: "interface=ens33"

Background: Calico's IPIP mode vs. BGP mode

1) IPIP
IPIP encapsulates an IP packet inside another IP packet, i.e. an IP-in-IP tunnel. It behaves roughly like a bridge at the IP layer: an ordinary bridge works at the MAC layer and needs no IP at all, while IPIP builds a point-to-point tunnel between the routers at both ends, connecting two networks that otherwise could not reach each other.

When Calico is deployed in IPIP mode, each node gets a tunl0 device used for the tunnel encapsulation; this is a form of overlay networking. If the node is taken out of service and all Calico containers are stopped, the device remains; it can be removed with rmmod ipip.

2) BGP
BGP mode uses the physical host directly as a virtual router (vRouter) and creates no extra tunnel.

Border Gateway Protocol (BGP) is the core decentralized routing protocol of the Internet. It achieves reachability between autonomous systems (AS) by maintaining IP routing (prefix) tables, and it is a path-vector protocol. BGP does not use the metrics of traditional interior gateway protocols (IGPs); instead it makes routing decisions based on paths, network policies, and rule sets. Colloquially, a "BGP data center" merges multiple carrier uplinks (China Telecom, Unicom, Mobile, etc.) so that a single IP is reachable over all of them.

Advantage of a BGP data center: the server needs only one IP address, and the best path is chosen by backbone routers based on hop count and other metrics, consuming no resources on the server itself.

The official calico.yaml manifest enables IPIP by default, which creates the tunl0 device on each node; container traffic is encapsulated with an extra IP header before being forwarded. The CALICO_IPV4POOL_IPIP environment variable of calico-node toggles this feature: the default "Always" enables IPIP, and "Off" disables it.
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            # Auto-detect the BGP IP address.
            - name: IP
              value: "autodetect"
            # Enable IPIP
            - name: CALICO_IPV4POOL_IPIP
              value: "Always"

Summary:
Calico's BGP sessions run over TCP, so layer-3 connectivity between nodes is enough for BIRD to exchange routes with its neighbors. However, like flannel's host-gateway mode, the resulting routes are only usable when the nodes are also layer-2 adjacent. So if BGP mode produces routes but Pods on different nodes still cannot reach each other, check whether the nodes share a layer-2 network. For cross-node communication when nodes are not layer-2 adjacent, Calico's answer is IPIP mode.
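To see the effect of IPIP mode on this cluster, you can inspect the tunnel device and the routes BIRD has programmed (a quick check; exact output varies by environment):

[root@pengfei-master1 ~]# ip -d link show tunl0          # the IPIP tunnel device created by Calico
[root@pengfei-master1 ~]# ip route | grep tunl0          # per-node Pod CIDR routes via the tunnel
[root@pengfei-master1 ~]# ip route | grep bird           # routes installed by the BIRD BGP client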

3.3 Label the worker nodes so their ROLES column shows work

[root@pengfei-master1 ~]# kubectl label nodes pengfei-node1 node-role.kubernetes.io/work=work
[root@pengfei-master1 ~]# kubectl label nodes pengfei-node2 node-role.kubernetes.io/work=work

3.4 Check the cluster nodes again

After a short wait, check the nodes again; their status should now be Ready.

[root@pengfei-master1 ~]# kubectl get nodes
NAME              STATUS   ROLES           AGE   VERSION
pengfei-master1   Ready    control-plane   74m   v1.25.0
pengfei-node1     Ready    work            68m   v1.25.0
pengfei-node2     Ready    work            67m   v1.25.0

3.5 Test the Calico network

[root@pengfei-master1 ~]# kubectl run busybox --image docker.io/library/busybox:1.28 --image-pull-policy=IfNotPresent --restart=Never --rm -it -- sh
If you don't see a command prompt, try pressing enter.
/ # ping www.baidu.com     # check external connectivity
PING www.baidu.com (36.152.44.96): 56 data bytes
64 bytes from 36.152.44.96: seq=0 ttl=127 time=18.008 ms
64 bytes from 36.152.44.96: seq=1 ttl=127 time=13.028 ms
# Outbound traffic works, so the Calico network plugin is installed correctly

/ # nslookup kubernetes.default.svc.cluster.local
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default.svc.cluster.local
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

# 10.96.0.10 is the CoreDNS ClusterIP, so CoreDNS is configured correctly.
# Internal Service names are resolved through CoreDNS.

# Note: use busybox 1.28 specifically, not the latest tag; with the latest busybox image,
# nslookup fails to resolve the DNS name and IP.
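Since kube-proxy was configured with mode: ipvs in kubeadm.yaml, you can also confirm that ipvs virtual servers were created, using the ipvsadm package installed earlier (a quick check; the listing is only indicative):

[root@pengfei-master1 ~]# ipvsadm -Ln | head       # should list virtual servers in the Service CIDR, e.g. 10.96.0.1:443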

That completes the Kubernetes cluster installation.

If your cluster runs into trouble, see the follow-up article on recovering a Kubernetes cluster.