Kubernetes 1.19.0: deployment (2)

HPA

HPA (Horizontal Pod Autoscaler): horizontal automatic scaling of pods

An HPA monitors pod CPU load and dynamically scales the number of pods in a deployment, so that when a pod is overloaded the load is spread across more replicas.

The HPA watches pod resource usage.

Use cases

Configuring the HPA

Set the minimum replica count to 1 and the maximum to 5:
[root@vms61 chap5-deploy]# kubectl apply -f web1.yaml 
deployment.apps/web1 created
[root@vms61 chap5-deploy]# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
web1-77cc489b4b-wmw45   1/1     Running   0          3s
[root@vms61 chap5-deploy]# kubectl get hpa
No resources found in chap5-deploy namespace.
[root@vms61 chap5-deploy]# kubectl autoscale deployment web1 --min=1 --max=5
horizontalpodautoscaler.autoscaling/web1 autoscaled
[root@vms61 chap5-deploy]# kubectl get hpa
NAME   REFERENCE         TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
web1   Deployment/web1   <unknown>/80%   1         5         0          8s
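For reference, kubectl autoscale is equivalent to applying an HPA manifest. A sketch of the autoscaling/v1 form is below; the field values mirror the command above, and the HPA name is assumed to match the deployment:

```yaml
# Declarative equivalent of: kubectl autoscale deployment web1 --min=1 --max=5
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web1
spec:
  minReplicas: 1
  maxReplicas: 5
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web1
  # 80 is the default target shown in the TARGETS column when
  # --cpu-percent is not passed to kubectl autoscale
  targetCPUUtilizationPercentage: 80
```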

Even if you force the deployment to 10 replicas, only 5 are left running in the end:
[root@vms61 chap5-deploy]# kubectl scale deployment web1 --replicas=10
deployment.apps/web1 scaled
[root@vms61 chap5-deploy]# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
web1-77cc489b4b-2vbwt   1/1     Running   0          3s
web1-77cc489b4b-6hrgx   1/1     Running   0          3s
web1-77cc489b4b-97z4t   1/1     Running   0          3s
web1-77cc489b4b-f4qt4   1/1     Running   0          3s
web1-77cc489b4b-kv6wp   1/1     Running   0          3s
web1-77cc489b4b-n9gwb   1/1     Running   0          3s
web1-77cc489b4b-nkzqs   1/1     Running   0          3s
web1-77cc489b4b-pmzhd   1/1     Running   0          3s
web1-77cc489b4b-wmw45   1/1     Running   0          4m25s
web1-77cc489b4b-xpz7p   1/1     Running   0          3s
[root@vms61 chap5-deploy]# kubectl get pods
NAME                    READY   STATUS        RESTARTS   AGE
web1-77cc489b4b-2vbwt   1/1     Running       0          18s
web1-77cc489b4b-6hrgx   1/1     Running       0          18s
web1-77cc489b4b-f4qt4   1/1     Running       0          18s
web1-77cc489b4b-kv6wp   0/1     Terminating   0          18s
web1-77cc489b4b-wmw45   1/1     Running       0          4m40s
web1-77cc489b4b-xpz7p   1/1     Running       0          18s
[root@vms61 chap5-deploy]# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
web1-77cc489b4b-2vbwt   1/1     Running   0          42s
web1-77cc489b4b-6hrgx   1/1     Running   0          42s
web1-77cc489b4b-f4qt4   1/1     Running   0          42s
web1-77cc489b4b-wmw45   1/1     Running   0          5m4s
web1-77cc489b4b-xpz7p   1/1     Running   0          42s

Conversely, if the replica count drops below minPods, pods are created automatically to bring it back up.

Between kubectl scale and the HPA, whichever runs last takes effect.

Fixing the current CPU usage showing as <unknown>

Edit the metrics-server deployment manifest and add the required arguments:
[root@vms61 1.8+]# pwd
/root/kubernetes-sigs-metrics-server-d1f4f6f/deploy/1.8+
[root@vms61 1.8+]# cat metrics-server-deployment.yaml 
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      containers:
      - name: metrics-server
        image: k8s.gcr.io/metrics-server-amd64:v0.3.6
        imagePullPolicy: IfNotPresent
        command:
        - /metrics-server
        - --metric-resolution=30s
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp

[root@vms61 1.8+]# kubectl apply -f metrics-server-deployment.yaml 
serviceaccount/metrics-server unchanged
deployment.apps/metrics-server configured

Edit web1.yaml and add a CPU request under resources.
Recreate the pod and the HPA, this time setting the threshold to 80%; the <unknown> target changes to the actual utilization:
[root@vms61 chap5-deploy]# cat web1.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: web1
  name: web1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: web1
        app1: web1
        app2: web2
    spec:
      volumes:
      - name: v1
        emptyDir: {}
      containers:
      - image: nginx
        imagePullPolicy: IfNotPresent
        name: nginx
        volumeMounts:
        - name: v1
          mountPath: /xx
        ports:
        - containerPort: 80
        env:
        - name: myenv1
          value: haha1
        - name: myenv2
          value: haha2
        resources: 
          requests:
            cpu: 400m
status: {}
[root@vms61 chap5-deploy]# kubectl apply -f web1.yaml 
deployment.apps/web1 created
[root@vms61 chap5-deploy]# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
web1-5c445ff8fc-zwhml   1/1     Running   0          21s
[root@vms61 chap5-deploy]# kubectl autoscale deployment web1 --max=5 --cpu-percent=80
horizontalpodautoscaler.autoscaling/web1 autoscaled
[root@vms61 chap5-deploy]# kubectl get hpa
NAME   REFERENCE         TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
web1   Deployment/web1   <unknown>/80%   1         5         0          12s
[root@vms61 chap5-deploy]# kubectl get hpa
NAME   REFERENCE         TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
web1   Deployment/web1   <unknown>/80%   1         5         1          28s
[root@vms61 chap5-deploy]# kubectl get hpa
NAME   REFERENCE         TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
web1   Deployment/web1   0%/80%    1         5         1          44s
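The TARGETS column is pod CPU usage divided by the pod's CPU request (400m in web1.yaml). A quick sketch of the arithmetic in shell, using the numbers from the output above:

```shell
# utilization% = usage(mCPU) * 100 / request(mCPU)
request=400   # resources.requests.cpu: 400m in web1.yaml
usage=0       # idle pod, per kubectl top
util=$(( usage * 100 / request ))
echo "${util}%"   # prints 0%
```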

Testing the HPA

Simulate high CPU usage in the pod with cat /dev/zero > /dev/null &:
[root@vms61 chap5-deploy]# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
web1-5c445ff8fc-zwhml   1/1     Running   0          14m
[root@vms61 chap5-deploy]# kubectl exec -it web1-5c445ff8fc-zwhml -- bash
root@web1-5c445ff8fc-zwhml:/# cat /dev/zero > /dev/null &
[1] 39
root@web1-5c445ff8fc-zwhml:/# cat /dev/zero > /dev/null &
[2] 40
root@web1-5c445ff8fc-zwhml:/# cat /dev/zero > /dev/null &
[3] 41
root@web1-5c445ff8fc-zwhml:/# cat /dev/zero > /dev/null &
[4] 42
root@web1-5c445ff8fc-zwhml:/# cat /dev/zero > /dev/null &
[5] 43
root@web1-5c445ff8fc-zwhml:/# cat /dev/zero > /dev/null &
[6] 44

The cat processes can be seen on node vms63:
[root@vms63 ~]#   ps aux | grep -v grep | grep cat
root        900  0.8  1.2 1041752 49080 ?       Ssl  14:54   0:49 /usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json --selinux-enabled --log-driver=journald --signature-verification=false --storage-driver overlay2
root      60366 37.8  0.0   2424   640 pts/1    R    16:33   0:14 cat /dev/zero
root      60367 38.0  0.0   2424   636 pts/1    R    16:33   0:14 cat /dev/zero
root      60368 32.2  0.0   2424   632 pts/1    R    16:33   0:11 cat /dev/zero
root      60369 29.1  0.0   2424   632 pts/1    R    16:33   0:10 cat /dev/zero
root      60370 31.8  0.0   2424   632 pts/1    R    16:33   0:11 cat /dev/zero
root      60373 29.3  0.0   2424   632 pts/1    R    16:33   0:10 cat /dev/zero

Driven by the high CPU usage, the HPA has now automatically scaled out to 5 pods:
[root@vms61 ~]# kubectl top pods
NAME                    CPU(cores)   MEMORY(bytes)   
web1-5c445ff8fc-zwhml   1949m        3Mi             
[root@vms61 ~]# kubectl top pods
NAME                    CPU(cores)   MEMORY(bytes)   
web1-5c445ff8fc-glxzp   0m           1Mi             
web1-5c445ff8fc-rllf6   0m           1Mi             
web1-5c445ff8fc-xph72   0m           1Mi             
web1-5c445ff8fc-z75np   0m           1Mi             
web1-5c445ff8fc-zwhml   1941m        3Mi  
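This matches the HPA scaling rule, desiredReplicas = ceil(currentReplicas * currentUtilization / target), clamped to the --max bound. A rough sketch with the numbers above (1941m usage against a 400m request):

```shell
# desired = ceil(current * utilization / target), capped at maxReplicas
current=1; usage=1941; request=400; target=80; max=5
util=$(( usage * 100 / request ))                       # ~485%
desired=$(( (current * util + target - 1) / target ))   # ceiling division -> 7
if (( desired > max )); then desired=$max; fi           # clamped to --max=5
echo "$desired"   # prints 5
```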

Why does only one pod carry a heavy load while the others sit at 0?

Because the load in this test is generated inside a single pod, rather than being balanced across pods through a svc, it cannot shift to the other replicas.

Kill the processes with killall -9 cat and both the load and the replica count will drop back automatically; this is not demonstrated here.

End of article.