Kubernetes 1.19.0: Networking

How Kubernetes achieves cross-host communication

Pod-to-Pod communication

Prepare two virtual machines:

192.168.135.91 - etcd1

192.168.135.92 - etcd2

First install etcd on both nodes (for etcd itself, see the earlier articles in this column).

Both nodes need etcd configured; on vms91:
[root@vms91 ~]# cat /etc/etcd/etcd.conf
ETCD_DATA_DIR="/var/lib/etcd/cluster.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.135.91:2380,http://localhost:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.135.91:2379,http://localhost:2379"
ETCD_NAME="etcd-91"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.135.91:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379,http://192.168.135.91:2379"
ETCD_INITIAL_CLUSTER="etcd-91=http://192.168.135.91:2380,etcd-92=http://192.168.135.92:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
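On vms92 the file mirrors vms91's, with only the address and ETCD_NAME swapped; a sketch derived from the vms91 example above, not captured from the real machine:

```shell
# /etc/etcd/etcd.conf on vms92 (only the IP and name differ from vms91)
ETCD_DATA_DIR="/var/lib/etcd/cluster.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.135.92:2380,http://localhost:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.135.92:2379,http://localhost:2379"
ETCD_NAME="etcd-92"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.135.92:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379,http://192.168.135.92:2379"
ETCD_INITIAL_CLUSTER="etcd-91=http://192.168.135.91:2380,etcd-92=http://192.168.135.92:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
```

After starting etcd on both nodes, etcdctl cluster-health should report both members as healthy.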

Then install Docker on both nodes.

Edit the Docker configuration file; each machine must use its own IP in the --cluster-store option, then restart Docker:
[root@vms91 ~]# cat /etc/sysconfig/docker
# /etc/sysconfig/docker
# Modify these options if you want to change the way the docker daemon runs
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --cluster-store=etcd://192.168.135.91:2379'
if [ -z "${DOCKER_CERT_PATH}" ]; then
    DOCKER_CERT_PATH=/etc/docker
fi
# Do not add registries in this file anymore. Use /etc/containers/registries.conf
# instead. For more information reference the registries.conf(5) man page.
# Location used for temporary files, such as those created by
# docker load and build operations. Default is /var/lib/docker/tmp
# Can be overriden by setting the following environment variable.
# DOCKER_TMPDIR=/var/tmp
# Controls the /etc/cron.daily/docker-logrotate cron job status.
# To disable, uncomment the line below.
# LOGROTATE=false
# docker-latest daemon can be used by starting the docker-latest unitfile.
# To use docker-latest client, uncomment below lines
#DOCKERBINARY=/usr/bin/docker-latest
#DOCKERDBINARY=/usr/bin/dockerd-latest
#DOCKER_CONTAINERD_BINARY=/usr/bin/docker-containerd-latest
#DOCKER_CONTAINERD_SHIM_BINARY=/usr/bin/docker-containerd-shim-latest
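After editing the file, restart Docker and check that the daemon registered the key-value store (a quick sanity check; the exact docker info wording depends on the Docker version, this one prints a "Cluster Store" line):

```shell
# restart the daemon so the new OPTIONS take effect
systemctl restart docker

# confirm the cluster store was picked up
docker info | grep -i "cluster store"
# expected to show something like: Cluster Store: etcd://192.168.135.91:2379
```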

Create the Calico configuration file and add the following (do this on both nodes):
[root@vms91 ~]# mkdir /etc/calico
[root@vms91 ~]# vi /etc/calico/calicoctl.cfg
apiVersion: v1
kind: calicoApiConfig
metadata:
spec:
  datastoreType: "etcdv2"
  etcdEndpoints: "http://192.168.135.91:2379"

Bring up the Calico nodes

Upload the pre-downloaded calicoctl binary and calico-node image to both machines, and import the image on each.
Give calicoctl execute permission and move it into a bin directory so it can be run directly.
Run the following command on both nodes; afterwards you can see the calico-node container running:
[root@vms91 ~]# calicoctl node run --node-image=quay.io/calico/node:v2.6.12 -c /etc/calico/calicoctl.cfg
Running command to load modules: modprobe -a xt_set ip6_tables
Enabling IPv4 forwarding
Enabling IPv6 forwarding
Increasing conntrack limit
Removing old calico-node container (if running).
Running the following command to start calico-node:
docker run --net=host --privileged --name=calico-node -d --restart=always -e NODENAME=vms91 -e CALICO_NETWORKING_BACKEND=bird -e CALICO_LIBNETWORK_ENABLED=true -e ETCD_ENDPOINTS=http://192.168.135.91:2379 -v /var/log/calico:/var/log/calico -v /var/run/calico:/var/run/calico -v /lib/modules:/lib/modules -v /run:/run -v /run/docker/plugins:/run/docker/plugins -v /var/run/docker.sock:/var/run/docker.sock quay.io/calico/node:v2.6.12
Image may take a short time to download if it is not available locally.
Container started, checking progress logs.
2020-10-03 06:21:08.577 [INFO][8] startup.go 173: Early log level set to info
2020-10-03 06:21:08.577 [INFO][8] client.go 202: Loading config from environment
2020-10-03 06:21:08.578 [INFO][8] startup.go 83: Skipping datastore connection test
2020-10-03 06:21:08.593 [INFO][8] startup.go 259: Building new node resource Name="vms91"
2020-10-03 06:21:08.593 [INFO][8] startup.go 273: Initialise BGP data
2020-10-03 06:21:08.594 [INFO][8] startup.go 467: Using autodetected IPv4 address on interface ens32: 192.168.135.91/24
2020-10-03 06:21:08.594 [INFO][8] startup.go 338: Node IPv4 changed, will check for conflicts
2020-10-03 06:21:08.601 [INFO][8] etcd.go 430: Error enumerating host directories error=100: Key not found (/calico) [7]
2020-10-03 06:21:08.601 [INFO][8] startup.go 530: No AS number configured on node resource, using global value
2020-10-03 06:21:08.604 [INFO][8] etcd.go 105: Ready flag is now set
2020-10-03 06:21:08.608 [INFO][8] client.go 133: Assigned cluster GUID ClusterGUID="59666997aef64507a55ba1aa69ae14d8"
2020-10-03 06:21:08.629 [INFO][8] startup.go 419: CALICO_IPV4POOL_NAT_OUTGOING is true (defaulted) through environment variable
2020-10-03 06:21:08.629 [INFO][8] startup.go 659: Ensure default IPv4 pool is created. IPIP mode: off
2020-10-03 06:21:08.634 [INFO][8] startup.go 670: Created default IPv4 pool (192.168.0.0/16) with NAT outgoing true. IPIP mode: off
2020-10-03 06:21:08.634 [INFO][8] startup.go 419: FELIX_IPV6SUPPORT is true (defaulted) through environment variable
2020-10-03 06:21:08.634 [INFO][8] startup.go 626: IPv6 supported on this platform: true
2020-10-03 06:21:08.634 [INFO][8] startup.go 419: CALICO_IPV6POOL_NAT_OUTGOING is false (defaulted) through environment variable
2020-10-03 06:21:08.634 [INFO][8] startup.go 659: Ensure default IPv6 pool is created. IPIP mode: off
2020-10-03 06:21:08.637 [INFO][8] startup.go 670: Created default IPv6 pool (fd80:24e2:f998:72d6::/64) with NAT outgoing false. IPIP mode: off
2020-10-03 06:21:08.683 [INFO][8] startup.go 131: Using node name: vms91
2020-10-03 06:21:08.775 [INFO][13] client.go 202: Loading config from environment
Starting libnetwork service
Calico node started successfully
[root@vms91 ~]# docker ps
CONTAINER ID        IMAGE                         COMMAND             CREATED              STATUS              PORTS               NAMES
ffcd376cda40        quay.io/calico/node:v2.6.12   "start_runit"       About a minute ago   Up About a minute                       calico-node

Running calicoctl node status on each node now shows the other node as a peer.

Create a global network named calnet1 with docker network create --driver calico --ipam-driver calico-ipam calnet1; once it is created on the first node, it automatically appears on the second.

--driver calico selects Calico's libnetwork CNM driver.

--ipam-driver calico-ipam selects Calico's IPAM driver to manage IP assignment.

The calico network has global scope; etcd synchronizes calnet1 to all hosts.

Create it on the first node and it appears on the second automatically:
[root@vms91 ~]# docker network create --driver calico --ipam-driver calico-ipam calnet1
ba5794c56fb0a00de3b50b9b4ddaafa0984fa7936f9c4e9c790acdceb5a78632
[root@vms91 ~]# docker network list
NETWORK ID          NAME                DRIVER              SCOPE
cc525016a37d        bridge              bridge              local
ba5794c56fb0        calnet1             calico              global
af5df4b4da48        host                host                local
17e9381c6de0        none                null                local
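To double-check that calnet1 really uses Calico for both the network driver and IP management, inspect it (field names as printed by this Docker version):

```shell
docker network inspect calnet1
# the output should contain "Driver": "calico", "Scope": "global"
# and an IPAM section with "Driver": "calico-ipam"
```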

At this point the two nodes are set up and able to communicate.

Download the busybox image on both nodes (configure a registry mirror first, otherwise the pull will be slow).

Every container created on a host gets a corresponding virtual NIC (one end of a veth pair) created on the physical machine. Before any container exists, the host shows only its original NICs.

On vms91, create a container with docker run --name c91 --net calnet1 -itd busybox; entering the container, you will find it has gained a NIC (cali0).

Running ip a on the host again after creating the container likewise shows one more NIC than before.

In other words, the host-side and container-side interfaces are paired one-to-one, as if a network cable connected them.
Querying the physical machine's routes with route -n shows that every packet destined for the 192.168.247.64 block is forwarded to cali171e27e5a40, and cali171e27e5a40@if4 on the host is in turn connected to cali0@if5 inside the container.
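The @ifN suffixes are how that pairing can be read off: each end of a veth pair carries the interface index of its peer. A way to check it by hand (interface names here are the ones from this example; yours will differ):

```shell
# on the host: the number after @if is the peer's ifindex inside the container
ip a | grep cali
# e.g.  5: cali171e27e5a40@if4: ...   -> its peer is interface index 4 in the container

# inside the container, the same check from the other side
docker exec c91 ip a
# e.g.  4: cali0@if5: ...             -> its peer is interface index 5 on the host
```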
Create a container on vms92 in the same way; the mechanism is identical.

Looking at the routing tables of both machines again, a new route has appeared on vms91: traffic destined for the 192.168.55.128 block is handed straight to 192.168.135.92.

On vms92, the routing table shows that traffic for 192.168.55.128 is delivered to the calieb5192ae7ab interface. This is how Pod-to-Pod communication between the two machines is achieved.

A direct ping between the containers succeeds; the test passes.
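A concrete form of that test, assuming the container on vms92 was named c92 and was assigned the 192.168.55.128 address seen in the routes above:

```shell
# on vms92: confirm the container's Calico-assigned address
docker exec c92 ip a

# on vms91: ping that address from inside c91
docker exec c91 ping -c 3 192.168.55.128
```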

Note: besides Calico, you can also use flannel, and Weave Net and others are supported as well; see the official documentation if you want to explore them.

Network solutions

CNI (Container Network Interface) is a CNCF project, originally proposed by CoreOS, that defines a standard interface for container networking.

Network configuration is handled uniformly through plugins.

flannel - overlay-based; does not support network policies

calico - BGP-based; supports network policies

canal - supports network policies
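To make "configured through plugins" concrete: the kubelet reads a config file from /etc/cni/net.d/ that names which plugin binary to invoke. A hypothetical minimal example for flannel (file contents are illustrative, not taken from this setup):

```json
{
  "name": "cbr0",
  "type": "flannel",
  "delegate": {
    "hairpinMode": true,
    "isDefaultGateway": true
  }
}
```

The type field names the plugin binary that the kubelet looks up under /opt/cni/bin.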

Configuring the canal network

Download the new YAML files and re-apply them; to save space this is not demonstrated here, but you can try it yourself.

Run on the master:

kubeadm init --kubernetes-version=v1.19.0 --pod-network-cidr=10.244.0.0/16

kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/canal/rbac.yaml

kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/canal/canal.yaml
