Kubernetes (K8s) Part 4: Resource Controllers

Date: 2023-12-29 23:11:14

What Is a Controller

Kubernetes ships with many built-in controllers. Each one acts like a state machine: it watches the cluster and drives Pods toward their desired state, controlling their concrete state and behavior.

Controller Types

  • ReplicationController and ReplicaSet
  • Deployment
  • DaemonSet
  • StatefulSet
  • Job/CronJob
  • Horizontal Pod Autoscaling

Imperative vs. Declarative Programming

  • Imperative programming: you tell the "machine" how to do things (how); whatever you actually want (what), it simply carries out your commands.
  • Declarative programming: you tell the "machine" what you want (what) and let it figure out how to do it (how).

Declarative objects are best created with apply, imperative ones with create. Deployment is declarative; ReplicaSet is imperative. Swapping the two commands still works; the recommendation simply reflects the best fit for each style.

Pod Classification

  • Standalone Pods: when such a Pod exits or is deleted, no new replica is created automatically.
  • Controller-managed Pods: throughout the controller's lifecycle, the desired number of Pod replicas is always maintained.

ReplicationController and ReplicaSet

ReplicationController (RC) ensures that the number of replicas of a containerized application always matches the user-defined count: if a container exits abnormally, a new Pod is created automatically to replace it, and surplus containers are automatically reclaimed.

In newer Kubernetes versions, ReplicaSet is recommended as a replacement for ReplicationController. The two have no fundamental difference; ReplicaSet additionally supports set-based selectors.

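A set-based selector matches a label against a set of values instead of a single equality check. A minimal sketch (the environment and tier label keys and their values are made up for illustration):

```yaml
# Hypothetical fragment: set-based selector on a ReplicaSet.
selector:
  matchExpressions:
    # In: the label value must be one of the listed values.
    - key: environment
      operator: In
      values: [production, staging]
    # Exists: the label key must be present (no values list).
    # NotIn and DoesNotExist are also supported.
    - key: tier
      operator: Exists
```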
Write rs.yaml

apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      name: frontend
      labels:
        tier: frontend
    spec:
      containers:
      - name: myapp
        image: chinda.com/library/myapp:v1
        env:
        - name: GET_HOSTS_FROM
          value: dns
        ports:
        - containerPort: 80

Create the RS

[root@k8s-master01 ~]# kubectl create -f rs.yaml
[root@k8s-master01 ~]# kubectl get pod
NAME             READY   STATUS    RESTARTS   AGE
frontend-gzjh7   1/1     Running   0          5m58s
frontend-jnwtp   1/1     Running   0          5m58s
frontend-ktmdc   1/1     Running   0          5m58s
# Show labels
[root@k8s-master01 ~]# kubectl get pod --show-labels
NAME             READY   STATUS    RESTARTS   AGE     LABELS
frontend-gzjh7   1/1     Running   0          3m45s   tier=frontend
frontend-jnwtp   1/1     Running   0          3m45s   tier=frontend
frontend-ktmdc   1/1     Running   0          3m45s   tier=frontend
# Modify a label
[root@k8s-master01 ~]# kubectl label pod frontend-gzjh7 tier=frontend1
error: 'tier' already has a value (frontend), and --overwrite is false
# Fails because --overwrite defaults to false; retry with --overwrite=true
[root@k8s-master01 ~]# kubectl label pod frontend-gzjh7 tier=frontend1 --overwrite=true
[root@k8s-master01 ~]# kubectl get pod --show-labels
NAME             READY   STATUS    RESTARTS   AGE     LABELS
frontend-gtpc7   1/1     Running   0          21s     tier=frontend
frontend-gzjh7   1/1     Running   0          9m13s   tier=frontend1
frontend-jnwtp   1/1     Running   0          9m13s   tier=frontend
frontend-ktmdc   1/1     Running   0          9m13s   tier=frontend
# Now delete the RS: the relabeled Pod is not destroyed along with it, because it
# no longer falls under the RS's management and has become a standalone Pod.
[root@k8s-master01 ~]# kubectl delete rs --all
replicaset.extensions "frontend" deleted
[root@k8s-master01 ~]# kubectl get pod
NAME             READY   STATUS    RESTARTS   AGE
frontend-gzjh7   1/1     Running   0          11m

Relationship Between RS and Deployment

A Deployment does not manage Pods directly: it creates and owns ReplicaSets, and each ReplicaSet maintains its own set of Pods. Rolling updates work by creating a new ReplicaSet and scaling it up while the old one is scaled down.

Deployment

Deployment provides a declarative way to define Pods and ReplicaSets, replacing the older ReplicationController for easier application management.

Typical use cases:

  • Define a Deployment to create Pods and a ReplicaSet
  • Roll out updates and roll back applications
  • Scale up and down
  • Pause and resume a Deployment

Write deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      name: myapp
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: chinda.com/library/myapp:v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
      restartPolicy: Always

Create the Deployment

# --record logs the command in each revision's change-cause, making it easy to
# see what changed in every revision
[root@k8s-master01 ~]# kubectl apply -f deployment.yaml --record
deployment.apps/myapp created
# List the Pods
[root@k8s-master01 ~]# kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE     IP            NODE         NOMINATED NODE   READINESS GATES
myapp-cd7f9754d-5qbgh   1/1     Running   0          4m37s   10.244.1.29   k8s-node01   <none>           <none>
myapp-cd7f9754d-7pr52   1/1     Running   0          4m37s   10.244.2.21   k8s-node02   <none>           <none>
myapp-cd7f9754d-ccrbq   1/1     Running   0          4m37s   10.244.1.30   k8s-node01   <none>           <none>
# List the ReplicaSets
[root@k8s-master01 ~]# kubectl get rs
NAME              DESIRED   CURRENT   READY   AGE
myapp-cd7f9754d   3         3         3       5m12s
# List the Deployments
[root@k8s-master01 ~]# kubectl get deployment
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
myapp   3/3     3            3           5m43s

Scale Up

# Scale up to 10 replicas
[root@k8s-master01 ~]# kubectl scale deployment myapp --replicas=10
deployment.extensions/myapp scaled
[root@k8s-master01 ~]# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
myapp-cd7f9754d-2gdvk   1/1     Running   0          60s
myapp-cd7f9754d-5qbgh   1/1     Running   0          9m8s
myapp-cd7f9754d-66lqp   1/1     Running   0          60s
myapp-cd7f9754d-7nnfj   1/1     Running   0          60s
myapp-cd7f9754d-7pr52   1/1     Running   0          9m8s
myapp-cd7f9754d-7vt7p   1/1     Running   0          60s
myapp-cd7f9754d-8jsvj   1/1     Running   0          60s
myapp-cd7f9754d-ccrbq   1/1     Running   0          9m8s
myapp-cd7f9754d-hm89p   1/1     Running   0          60s
myapp-cd7f9754d-qtx2d   1/1     Running   0          60s
# Note: the RS name is unchanged before and after scaling
[root@k8s-master01 ~]# kubectl get rs
NAME              DESIRED   CURRENT   READY   AGE
myapp-cd7f9754d   10        10        10      9m55s

Update the Image

# Syntax: kubectl set image deployment/[deploymentName] [containerName]=[image]
[root@k8s-master01 ~]# kubectl set image deployment/myapp myapp=chinda.com/library/myapp:v2
deployment.extensions/myapp image updated
# The image change triggers creation of a new RS
[root@k8s-master01 ~]# kubectl get rs
NAME               DESIRED   CURRENT   READY   AGE
myapp-69f8478dfc   10        10        5       6s
myapp-cd7f9754d    3         3         3       13m
# The Deployment can also be edited directly with edit
[root@k8s-master01 ~]# kubectl edit deployment/myapp

Roll Back

[root@k8s-master01 ~]# kubectl rollout undo deployment/myapp
deployment.extensions/myapp rolled back


Check Rollout Status and History; Roll Back to a Specific Revision

# Status
[root@k8s-master01 ~]# kubectl rollout status deployment/myapp
deployment "myapp" successfully rolled out
# History
[root@k8s-master01 ~]# kubectl rollout history deployment/myapp
deployment.extensions/myapp
REVISION   CHANGE-CAUSE
5          kubectl apply --filename=deployment.yaml --record=true
6          kubectl apply --filename=deployment.yaml --record=true
7          kubectl apply --filename=deployment.yaml --record=true
# Roll back to a specific revision
[root@k8s-master01 ~]# kubectl rollout undo deployment/myapp --to-revision=5
deployment.extensions/myapp rolled back
# When the rollout completes successfully, the command returns exit code 0
[root@k8s-master01 ~]# kubectl rollout status deployment/myapp
deployment "myapp" successfully rolled out
[root@k8s-master01 ~]# echo $?
0
# Pause the Deployment
[root@k8s-master01 ~]# kubectl rollout pause deployment/myapp

View RS History

[root@k8s-master01 ~]# kubectl get rs
NAME               DESIRED   CURRENT   READY   AGE
myapp-69f8478dfc   10        10        10      18m
myapp-6bf96697b6   0         0         0       4m47s
myapp-cd7f9754d    0         0         0       21m

Deployment Update Strategy

During an update, a Deployment guarantees that only a bounded number of Pods are down: by default at most 25% of the desired replica count may be unavailable.

It also guarantees that only a bounded number of extra Pods are created above the desired count: by default at most 25% of the desired replica count.
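Both bounds come from the Deployment's rolling-update strategy and can be tuned explicitly under .spec.strategy; a sketch with the defaults spelled out:

```yaml
# Fragment of a Deployment spec: rolling-update bounds made explicit.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # at most 25% of desired Pods may be down (rounded down)
      maxSurge: 25%         # at most 25% extra Pods above desired (rounded up)
```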

Rollover (multiple rollouts in parallel)

Suppose you create a Deployment with 5 replicas of myapp:v1, and while only 3 of those v1 replicas exist you update the Deployment to 5 replicas of myapp:v2. In that case the Deployment immediately starts killing the 3 myapp:v1 Pods it has already created and begins creating myapp:v2 Pods. It does not wait for all 5 myapp:v1 Pods to be created before changing course.

DaemonSet Controller

A DaemonSet ensures that all (or some) Nodes run one copy of a Pod. When a Node joins the cluster, a Pod is added for it; when a Node is removed from the cluster, its Pod is garbage-collected. Deleting a DaemonSet deletes all Pods it created.

Typical DaemonSet Uses

  • Run a cluster storage daemon on every Node, e.g. glusterd or ceph.
  • Run a log collection daemon on every Node, e.g. fluentd or logstash.
  • Run a monitoring daemon on every Node, e.g. Prometheus Node Exporter, collectd, the Datadog agent, the New Relic agent, or Ganglia gmond.
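Daemons like these usually need to run on every node including the control plane, but tainted master nodes reject ordinary Pods. A hedged fragment, assuming a kubeadm-era cluster where control-plane nodes carry the node-role.kubernetes.io/master:NoSchedule taint:

```yaml
# Fragment of a DaemonSet Pod template: tolerate the master taint
# so the daemon is also scheduled onto control-plane nodes.
spec:
  template:
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
```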

Write daemonset.yaml

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemonset
  labels:
    app: daemonset
spec:
  selector:
    matchLabels:
      app: daemonset
  template:
    metadata:
      name: daemonset
      labels:
        app: daemonset
    spec:
      containers:
      - name: daemonset
        image: chinda.com/library/myapp:v1
        imagePullPolicy: IfNotPresent
      restartPolicy: Always

Create the DS

[root@k8s-master01 ~]# kubectl create -f daemonset.yaml
daemonset.apps/daemonset created
[root@k8s-master01 ~]# kubectl get ds
NAME        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset   2         2         2       2            2           <none>          11s
[root@k8s-master01 ~]# kubectl get pod -o wide
NAME              READY   STATUS    RESTARTS   AGE   IP             NODE         NOMINATED NODE   READINESS GATES
daemonset-48jrg   1/1     Running   0          63s   10.244.1.101   k8s-node01   <none>           <none>
daemonset-k6mvj   1/1     Running   0          63s   10.244.2.92    k8s-node02   <none>           <none>
[root@k8s-master01 ~]# kubectl delete pod daemonset-48jrg
pod "daemonset-48jrg" deleted
# Note: the replacement Pod is a new one, not the deleted Pod
[root@k8s-master01 ~]# kubectl get pod
NAME              READY   STATUS    RESTARTS   AGE
daemonset-k6mvj   1/1     Running   0          3m22s
daemonset-q4chg   1/1     Running   0          7s

Job Controller

A Job is responsible for batch tasks, i.e. tasks that run to completion once. It guarantees that one or more Pods of the batch task terminate successfully.

Notes

  • .spec.template has the same format as a Pod.
  • RestartPolicy only supports Never or OnFailure.
  • With a single Pod, the Job completes as soon as the Pod runs successfully, by default.
  • .spec.completions specifies how many Pods must finish successfully for the Job to complete; defaults to 1.
  • .spec.parallelism specifies how many Pods run in parallel; defaults to 1.
  • .spec.activeDeadlineSeconds caps the total time for retrying failed Pods; past this deadline, no further retries are attempted.
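The last three knobs combine naturally for a batch workload; a hedged fragment (the numbers are arbitrary examples, not recommendations):

```yaml
# Hypothetical fragment of a Job spec: require 8 successful completions,
# running 2 Pods at a time, and give up entirely after 10 minutes.
spec:
  completions: 8
  parallelism: 2
  activeDeadlineSeconds: 600
```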

Write job.yaml

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    metadata:
      name: pi
    spec:
      containers:
      - name: pi
        image: chinda.com/library/perl:v1
        command: ['perl', '-Mbignum=bpi', '-wle', 'print bpi(2000)']
      restartPolicy: Never

Create the Job

[root@k8s-master01 ~]# kubectl create -f job.yaml
job.batch/pi created
[root@k8s-master01 ~]# kubectl get pod
NAME       READY   STATUS    RESTARTS   AGE
pi-smqkb   1/1     Running   0          6s
# View the log
[root@k8s-master01 ~]# kubectl log pi-smqkb
log is DEPRECATED and will be removed in a future version. Use logs instead.
3.141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117067982148086513282306647093844609550582231725359408128481117450284102701938521105559644622948954930381964428810975665933446128475648233786783165271201909145648566923460348610454326648213393607260249141273724587006606315588174881520920962829254091715364367892590360011330530548820466521384146951941511609433057270365759591953092186117381932611793105118548074462379962749567351885752724891227938183011949129833673362440656643086021394946395224737190702179860943702770539217176293176752384674818467669405132000568127145263560827785771342757789609173637178721468440901224953430146549585371050792279689258923542019956112129021960864034418159813629774771309960518707211349999998372978049951059731732816096318595024459455346908302642522308253344685035261931188171010003137838752886587533208381420617177669147303598253490428755468731159562863882353787593751957781857780532171226806613001927876611195909216420198938095257201065485863278865936153381827968230301952035301852968995773622599413891249721775283479131515574857242454150695950829533116861727855889075098381754637464939319255060400927701671139009848824012858361603563707660104710181942955596198946767837449448255379774726847104047534646208046684259069491293313677028989152104752162056966024058038150193511253382430035587640247496473263914199272604269922796782354781636009341721641219924586315030286182974555706749838505494588586926995690927210797509302955321165344987202755960236480665499119881834797753566369807426542527862551818417574672890977772793800081647060016145249192173217214772350141441973568548161361157352552133475741849468438523323907394143334547762416862518983569485562099219222184272550254256887671790494601653466804988627232791786085784383827967976681454100953883786360950680064225125205117392984896084128488626945604241965285022210661186306744278622039194945047123713786960956364371917287467764657573962413890865832645995813390478027590
1

CronJob Controller

A CronJob is a time-based Job: it can 1) run once at a given point in time, or 2) run periodically at given times.

Typical Uses

  • Schedule a Job to run at a given point in time.
  • Create periodically running Jobs, e.g. database backups or sending email.

Field Reference

  • .spec.schedule: the schedule. Required. Specifies the run period of the task, in Cron format.

  • .spec.jobTemplate: the Job template. Required. Specifies the task to run, in the same format as a Job.

  • .spec.startingDeadlineSeconds: deadline (in seconds) for starting a Job; past this deadline the Job is considered failed. Optional. If a scheduled run is missed for any reason, the Job that missed its execution time counts as failed. If unset, there is no deadline.

  • .spec.concurrencyPolicy: concurrency policy. Optional. Specifies how concurrent executions of Jobs created by this CronJob are handled. Only one policy may be specified:

    • Allow (default): concurrent Jobs are allowed.
    • Forbid: concurrency is forbidden; if the previous run has not finished, the next one is skipped.
    • Replace: cancel the currently running Job and replace it with a new one.

    Note: the policy applies only to Jobs created by the same CronJob. When there are multiple CronJobs, Jobs created by different CronJobs are always allowed to run concurrently.

  • .spec.suspend: suspend. Optional. If set to true, all subsequent executions are suspended. It has no effect on executions that have already started. Defaults to false.

  • .spec.successfulJobsHistoryLimit and .spec.failedJobsHistoryLimit: history limits. Optional. They specify how many completed and failed Jobs are retained, and default to 3 and 1 respectively. Setting a limit to 0 means Jobs of that type are not kept after they finish.
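The hello example below uses only schedule; the optional fields slot in alongside it. A hedged sketch (the values are arbitrary examples):

```yaml
# Fragment of a CronJob spec: optional fields made explicit.
spec:
  schedule: '*/1 * * * *'
  startingDeadlineSeconds: 60   # a run that cannot start within 60s counts as failed
  concurrencyPolicy: Forbid     # skip the next run if the previous one is still going
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
```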

Write cronjob.yaml

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: '*/1 * * * *'
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure

Create the CronJob

[root@k8s-master01 ~]# kubectl apply -f cronjob.yaml
cronjob.batch/hello created
[root@k8s-master01 ~]# kubectl get job -w
NAME               COMPLETIONS   DURATION   AGE
hello-1588114140   1/1           6s         2m33s
hello-1588114200   1/1           7s         93s
hello-1588114260   1/1           4s         33s
[root@k8s-master01 ~]# kubectl get pod -w
NAME                     READY   STATUS      RESTARTS   AGE
hello-1588114140-6l2g2   0/1     Completed   0          2m47s
hello-1588114200-snrtc   0/1     Completed   0          107s
hello-1588114260-zv8dq   0/1     Completed   0          47s
[root@k8s-master01 ~]# kubectl get cronjob
NAME    SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
hello   */1 * * * *   False     0        69s             4m9s
[root@k8s-master01 ~]# kubectl log hello-1588114140-6l2g2
log is DEPRECATED and will be removed in a future version. Use logs instead.
Tue Apr 28 22:53:12 UTC 2020
Hello from the Kubernetes cluster