Version skew policy (using 1.16.0 as an example)

None of the Kubernetes components (kube-controller-manager, kube-scheduler, kubelet) may run a version newer than kube-apiserver.
These components may be up to one minor version older than kube-apiserver; for example, if kube-apiserver is 1.16.0, the other components may be 1.16.x or 1.15.x.
In an HA cluster, the kube-apiserver instances may differ from each other by at most one minor version, e.g. 1.16 and 1.15.
Ideally, all components run exactly the same version as kube-apiserver.
Therefore, when upgrading a Kubernetes cluster, kube-apiserver is the first core component to upgrade, and it can only be upgraded upward by one minor version at a time.
kubectl may be at most one minor version newer or older than kube-apiserver.
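The skew rules above can be checked mechanically. The following is a minimal sketch: the version strings are hard-coded examples, and on a real cluster they would come from `kubectl version` and the component binaries.

```shell
#!/bin/sh
# Example versions (assumptions for illustration; substitute real ones).
APISERVER_VER="1.16.0"
COMPONENT_VER="1.15.3"   # e.g. a kubelet or kube-scheduler version

# Extract the minor version (the middle field of x.y.z).
api_minor=$(echo "$APISERVER_VER"  | cut -d. -f2)
comp_minor=$(echo "$COMPONENT_VER" | cut -d. -f2)

# Components may be equal to, or at most one minor version behind, the apiserver.
skew=$((api_minor - comp_minor))
if [ "$skew" -lt 0 ] || [ "$skew" -gt 1 ]; then
  echo "unsupported skew: $COMPONENT_VER vs apiserver $APISERVER_VER"
else
  echo "skew OK: component is $skew minor version(s) behind the apiserver"
fi
```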

High-level upgrade flow

Upgrade the primary control-plane node.
Upgrade the other control-plane nodes.
Upgrade the worker (Node) nodes.

Detailed upgrade steps

Upgrade kubeadm first.
Upgrade the master components on the first (primary) control-plane node.
Upgrade kubelet and kubectl on the first control-plane node.
Upgrade the other control-plane nodes.
Upgrade the worker nodes.
Verify the cluster.

Upgrade considerations

Determine the kubeadm cluster version before upgrading.
kubeadm upgrade does not touch your workloads, only Kubernetes-internal components, but backing up the etcd database first is best practice.
After the upgrade, all containers are restarted, because their hashes have changed.
Because of the version skew policy, you can only upgrade one minor version at a time; skipping minor versions is not supported.
The cluster control plane should use static Pods and an etcd Pod, or an external etcd.
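As noted, backing up etcd before the upgrade is best practice. The following is a hedged sketch for a kubeadm stacked-etcd cluster: the endpoint, certificate paths (kubeadm defaults), and snapshot path are assumptions for illustration, and the command is only printed when `etcdctl` is not installed.

```shell
#!/bin/sh
# Snapshot path is an example; pick a location that survives the upgrade.
SNAPSHOT="/var/backups/etcd-$(date +%Y%m%d).db"

# Certificate paths below are the kubeadm defaults (an assumption here).
BACKUP_CMD="ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save $SNAPSHOT"

if command -v etcdctl >/dev/null 2>&1; then
  eval "$BACKUP_CMD"
else
  echo "etcdctl not found; would run: $BACKUP_CMD"
fi
```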

The kubeadm upgrade command in detail

Query the command-line help:

$ kubeadm upgrade -h

Upgrade your cluster smoothly to a newer version with this command.

Usage:
  kubeadm upgrade [flags]
  kubeadm upgrade [command]

Available Commands:
  apply       Upgrade your Kubernetes cluster to the specified version.
  diff        Show what differences would be applied to existing static pod manifests. See also: kubeadm upgrade apply --dry-run
  node        Upgrade commands for a node in the cluster. Currently only supports upgrading the configuration, not the kubelet itself.
  plan        Check which versions are available to upgrade to and validate whether your current cluster is upgradeable. To skip the internet check, pass in the optional [version] parameter.

Subcommands explained:

apply: upgrade the Kubernetes cluster to the specified version.
diff: show the differences between the static Pod manifests that would be applied and those currently running.
node: upgrade a node in the cluster; as of v1.16 this only supports upgrading the kubelet configuration file (/var/lib/kubelet/config.yaml), not the kubelet itself.
plan: check whether the current cluster can be upgraded, and which versions it can be upgraded to.

The node subcommand in turn supports the following subcommands and flags:

$ kubeadm upgrade node  -h
Upgrade commands for a node in the cluster. Currently only supports upgrading the configuration, not the kubelet itself.

Usage:
  kubeadm upgrade node [flags]
  kubeadm upgrade node [command]

Available Commands:
  config                     Downloads the kubelet configuration from the cluster ConfigMap kubelet-config-1.X, where X is the minor version of the kubelet.
  experimental-control-plane Upgrades the control plane instance deployed on this node. IMPORTANT. This command should be executed after executing `kubeadm upgrade apply` on another control plane instance

Flags:
  -h, --help   help for node

Global Flags:
      --log-file string   If non-empty, use this log file
      --rootfs string     [EXPERIMENTAL] The path to the 'real' host root filesystem.
      --skip-headers      If true, avoid header prefixes in the log messages
  -v, --v Level           number for the log level verbosity

Subcommands explained:

config: download the kubelet configuration file kubelet-config-1.X from the cluster ConfigMap, where X is the kubelet's minor version.
experimental-control-plane: upgrade the control-plane components deployed on this node; run it after `kubeadm upgrade apply` has been executed on the first control-plane instance.

Environment:

OS: Ubuntu 16.04
Kubernetes: one master, one worker node

Upgrading Kubernetes from 1.13.x to 1.14.x

The cluster in this environment was created with kubeadm at version 1.13.1, so this walkthrough upgrades it to 1.14.0.

Performing the upgrade
Upgrading the first control-plane node

First, work on the first control-plane node, i.e. the primary control plane:

1. Determine the cluster version before the upgrade:

root@k8s-master:~# kubectl version
Client Version: version.Info{Major:1, Minor:13, GitVersion:v1.13.1, GitCommit:eec55b9ba98609a46fee712359c7b5b365bdd920, GitTreeState:clean, BuildDate:2018-12-13T10:39:04Z, GoVersion:go1.11.2, Compiler:gc, Platform:linux/amd64}
Server Version: version.Info{Major:1, Minor:13, GitVersion:v1.13.1, GitCommit:eec55b9ba98609a46fee712359c7b5b365bdd920, GitTreeState:clean, BuildDate:2018-12-13T10:31:33Z, GoVersion:go1.11.2, Compiler:gc, Platform:linux/amd64}

2. Find the versions available to upgrade to:

apt update
apt-cache policy kubeadm
# find the latest 1.14 version in the list
# it should look like 1.14.x-00, where x is the latest patch
1.14.0-00 500
500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages

3. Upgrade kubeadm to 1.14.0 first

# replace x in 1.14.x-00 with the latest patch version
apt-mark unhold kubeadm kubelet && \
apt-get update && apt-get install -y kubeadm=1.14.0-00 && \
apt-mark hold kubeadm

When upgrading kubeadm to 1.14 as above, Ubuntu may automatically upgrade kubelet to the latest available version (1.16.0 at the time of writing), so pin kubelet to the same version at the same time:

apt-get install -y kubeadm=1.14.0-00 kubelet=1.14.0-00

If that does happen and the kubeadm and kubelet versions diverge, the subsequent cluster upgrade will fail. In that case, remove kubeadm and kubelet:

Remove:

apt-get remove kubelet kubeadm

Then reinstall the intended versions:

apt-get install -y kubeadm=1.14.0-00 kubelet=1.14.0-00
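To guard against the mismatch described above, it is worth verifying that kubeadm and kubelet agree before continuing. A minimal sketch follows; the version strings are hard-coded examples, and on a real node they would come from `kubeadm version -o short` and `kubelet --version | awk '{print $2}'`.

```shell
#!/bin/sh
# Example values (assumptions); on a real node, query the binaries instead.
KUBEADM_VER="v1.14.0"
KUBELET_VER="v1.14.0"

# Abort early if the two packages drifted apart during apt upgrades.
if [ "$KUBEADM_VER" != "$KUBELET_VER" ]; then
  echo "version mismatch: kubeadm=$KUBEADM_VER kubelet=$KUBELET_VER" >&2
  echo "reinstall matching packages before running kubeadm upgrade" >&2
  exit 1
fi
echo "kubeadm and kubelet agree on $KUBEADM_VER"
```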

Confirm that kubeadm is at the expected version:

root@k8s-master:~# kubeadm version
kubeadm version: &version.Info{Major:1, Minor:14, GitVersion:v1.14.0, GitCommit:641856db18352033a0d96dbc99153fa3b27298e5, GitTreeState:clean, BuildDate:2019-03-25T15:51:21Z, GoVersion:go1.12.1, Compiler:gc, Platform:linux/amd64}
root@k8s-master:~# 

4. Run the upgrade plan command to check whether the cluster can be upgraded, and to see the versions available:

kubeadm upgrade plan

The output:

root@k8s-master:~# kubeadm upgrade plan
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.13.1
[upgrade/versions] kubeadm version: v1.14.0

Awesome, you're up-to-date! Enjoy!

This tells you the cluster can be upgraded.

5. Upgrade the control-plane components, including etcd.

root@k8s-master:~# kubeadm upgrade apply v1.14.0
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade/version] You have chosen to change the cluster version to v1.14.0
[upgrade/versions] Cluster version: v1.13.1
[upgrade/versions] kubeadm version: v1.14.0
//輸出 y 確認(rèn)之后,開始進(jìn)行升級(jí)。
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version v1.14.0...
Static pod: kube-apiserver-k8s-master hash: b5fdfbe9ab5d3cc91000d2734dd669ca
Static pod: kube-controller-manager-k8s-master hash: 31a4d945c251e62ac94e215494184514
Static pod: kube-scheduler-k8s-master hash: fefab66bc5a8a35b1f328ff4f74a8477
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Writing new Static Pod manifests to /etc/kubernetes/tmp/kubeadm-upgraded-manifests696355120
[upgrade/staticpods] Moved new manifest to /etc/kubernetes/manifests/kube-apiserver.yaml and backed up old manifest to /etc/kubernetes/tmp/kubeadm-backup-manifests-2019-10-03-20-30-46/kube-apiserver.yaml
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-k8s-master hash: b5fdfbe9ab5d3cc91000d2734dd669ca
Static pod: kube-apiserver-k8s-master hash: b5fdfbe9ab5d3cc91000d2734dd669ca
Static pod: kube-apiserver-k8s-master hash: b5fdfbe9ab5d3cc91000d2734dd669ca
Static pod: kube-apiserver-k8s-master hash: b5fdfbe9ab5d3cc91000d2734dd669ca
Static pod: kube-apiserver-k8s-master hash: b5fdfbe9ab5d3cc91000d2734dd669ca
Static pod: kube-apiserver-k8s-master hash: b5fdfbe9ab5d3cc91000d2734dd669ca
Static pod: kube-apiserver-k8s-master hash: b5fdfbe9ab5d3cc91000d2734dd669ca
Static pod: kube-apiserver-k8s-master hash: b5fdfbe9ab5d3cc91000d2734dd669ca
Static pod: kube-apiserver-k8s-master hash: b5fdfbe9ab5d3cc91000d2734dd669ca
Static pod: kube-apiserver-k8s-master hash: b5fdfbe9ab5d3cc91000d2734dd669ca
Static pod: kube-apiserver-k8s-master hash: b5fdfbe9ab5d3cc91000d2734dd669ca
Static pod: kube-apiserver-k8s-master hash: bb799a8d323c1577bf9e10ede7914b30
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[apiclient] Found 0 Pods for label selector component=kube-apiserver
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component kube-apiserver upgraded successfully!
[upgrade/staticpods] Moved new manifest to /etc/kubernetes/manifests/kube-controller-manager.yaml and backed up old manifest to /etc/kubernetes/tmp/kubeadm-backup-manifests-2019-10-03-20-30-46/kube-controller-manager.yaml
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-k8s-master hash: 31a4d945c251e62ac94e215494184514
Static pod: kube-controller-manager-k8s-master hash: 54146492ed90bfa147f56609eee8005a
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component kube-controller-manager upgraded successfully!
[upgrade/staticpods] Moved new manifest to /etc/kubernetes/manifests/kube-scheduler.yaml and backed up old manifest to /etc/kubernetes/tmp/kubeadm-backup-manifests-2019-10-03-20-30-46/kube-scheduler.yaml
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-k8s-master hash: fefab66bc5a8a35b1f328ff4f74a8477
Static pod: kube-scheduler-k8s-master hash: 58272442e226c838b193bbba4c44091e
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component kube-scheduler upgraded successfully!
[upload-config] storing the configuration used in ConfigMap kubeadm-config in the kube-system Namespace
[kubelet] Creating a ConfigMap kubelet-config-1.14 in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Downloading configuration for the kubelet from the kubelet-config-1.14 ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file /var/lib/kubelet/config.yaml
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[certs] Generating apiserver certificate and key
[certs] apiserver serving cert is signed for dns names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.3.1.20]
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to v1.14.0. Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
root@k8s-master:~# 

The last two lines show that the cluster was upgraded successfully.

kubeadm upgrade apply performs the following steps:

Checks that the cluster is upgradeable:
the API service is reachable,
all nodes are in the Ready state,
the control plane is healthy.
Enforces the version skew policies.
Makes sure the control-plane images are available and pulled to the machine.
Upgrades the control-plane components by rewriting the manifests under /etc/kubernetes/manifests, restoring the old manifests if the upgrade fails.
Applies the new kube-dns and kube-proxy manifests and creates the related RBAC rules.
Creates a new certificate and key for the API server, backing up the old ones if they are due to expire within 180 days.

As of v1.16, kubeadm upgrade apply must be executed on the primary control-plane node.

6. After it finishes, verify the cluster version:

root@k8s-master:~# kubectl version 
Client Version: version.Info{Major:1, Minor:13, GitVersion:v1.13.1, GitCommit:eec55b9ba98609a46fee712359c7b5b365bdd920, GitTreeState:clean, BuildDate:2018-12-13T10:39:04Z, GoVersion:go1.11.2, Compiler:gc, Platform:linux/amd64}
Server Version: version.Info{Major:1, Minor:14, GitVersion:v1.14.0, GitCommit:641856db18352033a0d96dbc99153fa3b27298e5, GitTreeState:clean, BuildDate:2019-03-25T15:45:25Z, GoVersion:go1.12.1, Compiler:gc, Platform:linux/amd64}

Although kubectl is still at 1.13.1, the server-side control plane has been upgraded to 1.14.0.

The master components are running normally:

root@k8s-master:~# kubectl get componentstatuses 
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {health:true}   

At this point, the master components on the first control plane have been upgraded. A control-plane node usually also runs kubelet and kubectl, so those two need upgrading as well.

7. Upgrade the CNI plugin.

This step is optional: check whether your CNI plugin has a newer version available.

8. Upgrade kubelet and kubectl on this control-plane node

The kubelet can now be upgraded; running workload Pods are not affected during the upgrade.

8.1. Upgrade kubelet and kubectl

# replace x in 1.14.x-00 with the latest patch version
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.14.0-00 kubectl=1.14.0-00 && \
apt-mark hold kubelet kubectl

8.2. Restart kubelet:

sudo systemctl restart kubelet

9. Check that the kubectl version matches expectations.

root@k8s-master:~# kubectl version 
Client Version: version.Info{Major:1, Minor:14, GitVersion:v1.14.0, GitCommit:641856db18352033a0d96dbc99153fa3b27298e5, GitTreeState:clean, BuildDate:2019-03-25T15:53:57Z, GoVersion:go1.12.1, Compiler:gc, Platform:linux/amd64}
Server Version: version.Info{Major:1, Minor:14, GitVersion:v1.14.0, GitCommit:641856db18352033a0d96dbc99153fa3b27298e5, GitTreeState:clean, BuildDate:2019-03-25T15:45:25Z, GoVersion:go1.12.1, Compiler:gc, Platform:linux/amd64}
root@k8s-master:~# 

The first control-plane node is now fully upgraded.

Upgrading the other control-plane nodes

10. Upgrade the other control-plane nodes.

On each remaining control-plane node, follow the same steps as on the first, but run:

sudo kubeadm upgrade node experimental-control-plane

instead of:

sudo kubeadm upgrade apply

There is no need to run sudo kubeadm upgrade plan again.

kubeadm upgrade node experimental-control-plane performs the following:

Fetches the kubeadm ClusterConfiguration from the cluster.
Optionally backs up the kube-apiserver certificate.
Upgrades the static Pod manifests of the three core control-plane components.

Upgrading the worker nodes

Now upgrade the components on the worker nodes: kubeadm, kubelet, and kube-proxy.

Do this one node at a time so that cluster access is not disrupted.

1. Mark the node as unschedulable.

The node is still on the original 1.13:

root@k8s-master:~# kubectl get node
NAME         STATUS   ROLES    AGE    VERSION
k8s-master   Ready    master   292d   v1.14.0
k8s-node01   Ready    node     292d   v1.13.1

Before upgrading, cordon the node and evict all of its Pods:

kubectl drain $NODE --ignore-daemonsets

2. Upgrade kubeadm and kubelet

Install kubeadm and kubelet on each worker node in the same way, since kubeadm is used to upgrade the kubelet configuration:

# replace x in 1.14.x-00 with the latest patch version
apt-mark unhold kubeadm kubelet && \
apt-get update && apt-get install -y kubeadm=1.14.0-00 kubelet=1.14.0-00 && \
apt-mark hold kubeadm kubelet

3. Upgrade the kubelet configuration file

$ kubeadm upgrade node config --kubelet-version v1.14.0
[kubelet-start] Downloading configuration for the kubelet from the kubelet-config-1.14 ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file /var/lib/kubelet/config.yaml
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
root@k8s-master:~# 

4. Restart kubelet

$ sudo systemctl restart kubelet

5. Finally, mark the node schedulable again so it rejoins service:

kubectl uncordon $NODE

The node is now upgraded; kubelet and kube-proxy should report the expected version v1.14.0.
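The per-node steps above can be collected into one rolling loop. This sketch only builds and prints the command sequence rather than executing it, since draining real nodes is disruptive; the node names and versions are hypothetical examples.

```shell
#!/bin/sh
# Hypothetical worker nodes and target versions.
NODES="k8s-node01 k8s-node02"
PKG_VER="1.14.0-00"
K8S_VER="v1.14.0"

# Build the rolling-upgrade plan one node at a time.
PLAN=""
for NODE in $NODES; do
  PLAN="$PLAN
# ---- $NODE ----
kubectl drain $NODE --ignore-daemonsets
# (on $NODE) apt-get install -y kubeadm=$PKG_VER kubelet=$PKG_VER
# (on $NODE) kubeadm upgrade node config --kubelet-version $K8S_VER
# (on $NODE) systemctl restart kubelet
kubectl uncordon $NODE"
done

printf '%s\n' "$PLAN"
```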

Verify the cluster version

root@k8s-master:~# kubectl get node
NAME         STATUS   ROLES    AGE    VERSION
k8s-master   Ready    master   292d   v1.14.0
k8s-node01   Ready    node     292d   v1.14.0

The STATUS column should show Ready for all nodes, with updated version numbers.

That completes the entire upgrade.

Recovering from a failure state

If kubeadm upgrade fails and does not roll back (for example, because of an unexpected shutdown during execution), you can run kubeadm upgrade again. The command is idempotent and ensures the actual state eventually matches the state you declared.

To recover from a bad state without changing the version the cluster is running, run:

kubeadm upgrade apply --force

See the official upgrade documentation for more details.

Upgrading Kubernetes from 1.14.x to 1.15.x

The flow for upgrading from 1.14.0 to 1.15.0 is largely the same; only some commands differ slightly.

Upgrading the primary control-plane node

The flow is the same as for the 1.13 to 1.14.0 upgrade.

1. Find the available versions and install kubeadm at the target version v1.15.0

apt-cache policy kubeadm
apt-mark unhold kubeadm kubelet
apt-get install -y kubeadm=1.15.0-00

kubeadm is now at the expected version:

root@k8s-master:~# kubeadm version
kubeadm version: &version.Info{Major:1, Minor:15, GitVersion:v1.15.0, GitCommit:e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529, GitTreeState:clean, BuildDate:2019-06-19T16:37:41Z, GoVersion:go1.12.5, Compiler:gc, Platform:linux/amd64}

2. Run the upgrade plan

Since v1.15, kubeadm renews all certificates it manages on a node during a control-plane upgrade, so expiring certificates are rotated automatically. To opt out of automatic renewal, pass --certificate-renewal=false.

The upgrade plan:

kubeadm upgrade plan

The output looks like:

root@k8s-master:~# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.14.0
[upgrade/versions] kubeadm version: v1.15.0
I1005 20:45:04.474363   38108 version.go:248] remote version is much newer: v1.16.1; falling back to: stable-1.15
[upgrade/versions] Latest stable version: v1.15.4
[upgrade/versions] Latest version in the v1.14 series: v1.14.7

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     1 x v1.14.0   v1.14.7
            1 x v1.15.0   v1.14.7

Upgrade to the latest version in the v1.14 series:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.14.0   v1.14.7
Controller Manager   v1.14.0   v1.14.7
Scheduler            v1.14.0   v1.14.7
Kube Proxy           v1.14.0   v1.14.7
CoreDNS              1.3.1     1.3.1
Etcd                 3.3.10    3.3.10

You can now apply the upgrade by executing the following command:

    kubeadm upgrade apply v1.14.7

_____________________________________________________________________

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     1 x v1.14.0   v1.15.4
            1 x v1.15.0   v1.15.4

Upgrade to the latest stable version:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.14.0   v1.15.4
Controller Manager   v1.14.0   v1.15.4
Scheduler            v1.14.0   v1.15.4
Kube Proxy           v1.14.0   v1.15.4
CoreDNS              1.3.1     1.3.1
Etcd                 3.3.10    3.3.10

You can now apply the upgrade by executing the following command:

    kubeadm upgrade apply v1.15.4

Note: Before you can perform this upgrade, you have to update kubeadm to v1.15.4.

_____________________________________________________________________

3. Upgrade the control plane

Following the plan's instructions, upgrade the control plane:

kubeadm upgrade apply v1.15.0

Since the installed kubeadm is v1.15.0, the cluster can only be upgraded to v1.15.0.

The output:

root@k8s-master:~# kubeadm upgrade apply v1.15.0
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/version] You have chosen to change the cluster version to v1.15.0
[upgrade/versions] Cluster version: v1.14.0
[upgrade/versions] kubeadm version: v1.15.0
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
...
## pulling the images
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-scheduler.
...
## images for all components have been pulled
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
...
...
## all managed certificates are renewed automatically, as shown below
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Writing new Static Pod manifests to /etc/kubernetes/tmp/kubeadm-upgraded-manifests353124264
[upgrade/staticpods] Preparing for kube-apiserver upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
...
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to v1.15.0. Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.

4. Verify the successful upgrade.

The upgrade succeeded; query the core component versions again:

root@k8s-master:~# kubectl version
Client Version: version.Info{Major:1, Minor:14, GitVersion:v1.14.0, GitCommit:641856db18352033a0d96dbc99153fa3b27298e5, GitTreeState:clean, BuildDate:2019-03-25T15:53:57Z, GoVersion:go1.12.1, Compiler:gc, Platform:linux/amd64}
Server Version: version.Info{Major:1, Minor:15, GitVersion:v1.15.0, GitCommit:e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529, GitTreeState:clean, BuildDate:2019-06-19T16:32:14Z, GoVersion:go1.12.5, Compiler:gc, Platform:linux/amd64}

Check the node versions:

NAME         STATUS   ROLES    AGE    VERSION
k8s-master   Ready    master   295d   v1.14.0
k8s-node01   Ready    node     295d   v1.14.0

5. Upgrade kubelet and kubectl on this control-plane node

The core control-plane components are now at v1.15.0; next, upgrade kubelet and kubectl on this node. Running workload Pods are not affected during the upgrade.

# replace x in 1.15.x-00 with the latest patch version
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.15.0-00 kubectl=1.15.0-00 && \
apt-mark hold kubelet kubectl

6. Restart kubelet:

sudo systemctl restart kubelet

7. Verify that the kubelet and kubectl versions match expectations.

root@k8s-master:~# kubectl version
Client Version: version.Info{Major:1, Minor:15, GitVersion:v1.15.0, GitCommit:e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529, GitTreeState:clean, BuildDate:2019-06-19T16:40:16Z, GoVersion:go1.12.5, Compiler:gc, Platform:linux/amd64}
Server Version: version.Info{Major:1, Minor:15, GitVersion:v1.15.0, GitCommit:e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529, GitTreeState:clean, BuildDate:2019-06-19T16:32:14Z, GoVersion:go1.12.5, Compiler:gc, Platform:linux/amd64}

Check the node versions:

root@k8s-master:~# kubectl get node
NAME         STATUS   ROLES    AGE    VERSION
k8s-master   Ready    master   295d   v1.15.0
k8s-node01   Ready    node     295d   v1.14.0

Upgrading the other control planes

The command for upgrading the three components on the other control-plane nodes is different:

1. Upgrade the other control-plane components, but with the following command:

$ sudo kubeadm upgrade node

2. Then upgrade kubelet and kubectl.

# replace x in 1.15.x-00 with the latest patch version
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.15.x-00 kubectl=1.15.x-00 && \
apt-mark hold kubelet kubectl

3. Restart kubelet

$ sudo systemctl restart kubelet

Upgrading the worker nodes

Upgrading the worker nodes is the same as before, so it is abbreviated here.

Run the following on every worker node.

1. Upgrade kubeadm:

# replace x in 1.15.x-00 with the latest patch version
apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm=1.15.x-00 && \
apt-mark hold kubeadm

Check the kubeadm version:

root@k8s-node01:~# kubeadm version
kubeadm version: &version.Info{Major:1, Minor:15, GitVersion:v1.15.0, GitCommit:e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529, GitTreeState:clean, BuildDate:2019-06-19T16:37:41Z, GoVersion:go1.12.5, Compiler:gc, Platform:linux/amd64}

2. Mark the node as unschedulable:

kubectl cordon $NODE

3. Update the kubelet configuration file

$ sudo kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Skipping phase. Not a control plane node
[kubelet-start] Downloading configuration for the kubelet from the kubelet-config-1.15 ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file /var/lib/kubelet/config.yaml
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.

4. Upgrade kubelet and kubectl.

# replace x in 1.15.x-00 with the latest patch version
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.15.x-00 kubectl=1.15.x-00 && \
apt-mark hold kubelet kubectl

5. Restart kubelet

sudo systemctl restart kubelet

kube-proxy is also upgraded and restarted automatically at this point.

6. Uncordon the node

kubectl uncordon $NODE

The worker node upgrade is complete.

Verify the cluster version

root@k8s-master:~# kubectl get node
NAME         STATUS     ROLES    AGE    VERSION
k8s-master   Ready      master   295d   v1.15.0
k8s-node01   NotReady   node     295d   v1.15.0

kubeadm upgrade node in detail

In this upgrade, both the other control-plane nodes and the worker nodes were upgraded with kubeadm upgrade node.

When executed on another control-plane node, kubeadm upgrade node:

Fetches the kubeadm ClusterConfiguration from the cluster.
Optionally backs up the kube-apiserver certificate.
Upgrades the static Pod manifests of the three core control-plane components.
Upgrades the kubelet configuration on that control-plane node.

When executed on a worker node, kubeadm upgrade node:

Fetches the kubeadm ClusterConfiguration from the cluster.
Upgrades the node's kubelet configuration.

Upgrading Kubernetes from 1.15.x to 1.16.x

Upgrading from 1.15.x to 1.16.x uses exactly the same commands as the 1.14.x to 1.15.x upgrade above, so it is not repeated here.

