Take 1.16.0 as an example.
No Kubernetes component — kube-controller-manager, kube-scheduler, or the kubelet — may run a version newer than kube-apiserver.
These components may lag kube-apiserver by at most one minor version: if kube-apiserver is 1.16.0, the other components may be 1.16.x or 1.15.x.
In an HA cluster, the kube-apiserver instances may differ from each other by at most one minor version, for example 1.16 and 1.15.
Ideally, all components run exactly the same version as kube-apiserver.
When upgrading a Kubernetes cluster, therefore, the first core component to upgrade is kube-apiserver, and it can only be upgraded by one minor version at a time.
kubectl may be at most one minor version newer or older than kube-apiserver.
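
Before planning an upgrade, it helps to check the current skew directly. A minimal sketch (the version each command reports depends on your cluster):

# Compare client and server versions at a glance
kubectl version --short
# Check the kubelet version on the local node
kubelet --version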

High-level upgrade flow

Upgrade the primary control plane node.
Upgrade the other control plane nodes.
Upgrade the worker nodes.

Detailed upgrade steps

Upgrade kubeadm first.
Upgrade the master components on the first (primary) control plane node.
Upgrade kubelet and kubectl on the first control plane node.
Upgrade the other control plane nodes.
Upgrade the worker nodes.
Verify the cluster.

Upgrade considerations

Determine the kubeadm cluster version before upgrading.
kubeadm upgrade does not touch workloads, only Kubernetes-internal components, but backing up the etcd database first is best practice (see the sketch after this list).
After the upgrade, all containers are restarted, because their hashes have changed.
Because of the version compatibility rules, you can only upgrade from one minor version to the next; skipping minor versions is not supported.
The cluster control plane should use static Pods and etcd Pods, or an external etcd.
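
For the etcd backup mentioned above, a minimal sketch, assuming the default kubeadm stacked-etcd certificate paths and that etcdctl is installed on the control plane node (the snapshot path is arbitrary):

ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save /var/backups/etcd-snapshot-$(date +%F).db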

kubeadm upgrade: the cluster upgrade command in detail

Query the command-line help:

$ kubeadm upgrade -h

Upgrade your cluster smoothly to a newer version with this command.

Usage:
  kubeadm upgrade [flags]
  kubeadm upgrade [command]

Available Commands:
  apply       Upgrade your Kubernetes cluster to the specified version.
  diff        Show what differences would be applied to existing static pod manifests. See also: kubeadm upgrade apply --dry-run
  node        Upgrade commands for a node in the cluster. Currently only supports upgrading the configuration, not the kubelet itself.
  plan        Check which versions are available to upgrade to and validate whether your current cluster is upgradeable. To skip the internet check, pass in the optional [version] parameter.

Command breakdown:

apply: upgrade the Kubernetes cluster to a specified version.
diff: show the differences between the static Pod manifests that would be applied and the manifests currently running (see the example after this list).
node: upgrade a node in the cluster; as of v1.16 this only supports upgrading the kubelet configuration file (/var/lib/kubelet/config.yaml), not the kubelet itself.
plan: check whether the current cluster can be upgraded, and which versions it can be upgraded to.
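
For example, to preview what an upgrade would change without touching the cluster (the target version here is illustrative):

# Show the static Pod manifest changes an upgrade to v1.14.0 would make
kubeadm upgrade diff v1.14.0

As the help text notes, this pairs naturally with kubeadm upgrade apply --dry-run.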

The node subcommand in turn supports the following subcommands and flags:

$ kubeadm upgrade node  -h
Upgrade commands for a node in the cluster. Currently only supports upgrading the configuration, not the kubelet itself.

Usage:
  kubeadm upgrade node [flags]
  kubeadm upgrade node [command]

Available Commands:
  config                     Downloads the kubelet configuration from the cluster ConfigMap kubelet-config-1.X, where X is the minor version of the kubelet.
  experimental-control-plane Upgrades the control plane instance deployed on this node. IMPORTANT. This command should be executed after executing `kubeadm upgrade apply` on another control plane instance

Flags:
  -h, --help   help for node

Global Flags:
      --log-file string   If non-empty, use this log file
      --rootfs string     [EXPERIMENTAL] The path to the 'real' host root filesystem.
      --skip-headers      If true, avoid header prefixes in the log messages
  -v, --v Level           number for the log level verbosity

Command breakdown:

config: download the kubelet configuration file kubelet-config-1.X from the cluster ConfigMap, where X is the kubelet's minor version.
experimental-control-plane: upgrade the control plane components deployed on this node; it should be run after "kubeadm upgrade apply" has been executed on the first control plane instance.

Environment:

OS: Ubuntu 16.04
k8s: one master, one worker node

Upgrading Kubernetes from 1.13.x to 1.14.x

The cluster in this environment was created with kubeadm at version 1.13.1, so this walkthrough upgrades it to 1.14.0.

Performing the upgrade
Upgrading the first control plane node

First, on the first control plane node, i.e. the primary control plane:

1. Determine the cluster version before the upgrade:

root@k8s-master:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:39:04Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:31:33Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}

2. Find the versions available to upgrade to:

apt update
apt-cache policy kubeadm
# find the latest 1.14 version in the list
# it should look like 1.14.x-00, where x is the latest patch
1.14.0-00 500
500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages

3. Upgrade kubeadm to 1.14.0 first:

# replace x in 1.14.x-00 with the latest patch version
apt-mark unhold kubeadm kubelet && \
apt-get update && apt-get install -y kubeadm=1.14.0-00 && \
apt-mark hold kubeadm

When upgrading kubeadm to 1.14 as above, Ubuntu may automatically upgrade kubelet to the newest version available (1.16.0 at the time of writing), so upgrade kubelet explicitly at the same time:

apt-get install -y kubeadm=1.14.0-00 kubelet=1.14.0-00

If this does happen, leaving kubeadm and kubelet at mismatched versions and causing the subsequent cluster upgrade to fail, remove both packages:

apt-get remove kubelet kubeadm

Then install the expected versions again:

apt-get install -y kubeadm=1.14.0-00 kubelet=1.14.0-00

Confirm kubeadm is now at the expected version:

root@k8s-master:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:51:21Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
root@k8s-master:~# 

4. Run the upgrade plan command to check whether the cluster can be upgraded, and fetch the versions it can be upgraded to:

kubeadm upgrade plan

The output:

root@k8s-master:~# kubeadm upgrade plan
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.13.1
[upgrade/versions] kubeadm version: v1.14.0

Awesome, you're up-to-date! Enjoy!

This indicates the cluster can be upgraded.

5. Upgrade the control plane components, including etcd:

root@k8s-master:~# kubeadm upgrade apply v1.14.0
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade/version] You have chosen to change the cluster version to v1.14.0
[upgrade/versions] Cluster version: v1.13.1
[upgrade/versions] kubeadm version: v1.14.0
// After entering y at the prompt below, the upgrade begins.
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version v1.14.0...
Static pod: kube-apiserver-k8s-master hash: b5fdfbe9ab5d3cc91000d2734dd669ca
Static pod: kube-controller-manager-k8s-master hash: 31a4d945c251e62ac94e215494184514
Static pod: kube-scheduler-k8s-master hash: fefab66bc5a8a35b1f328ff4f74a8477
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Writing new Static Pod manifests to /etc/kubernetes/tmp/kubeadm-upgraded-manifests696355120
[upgrade/staticpods] Moved new manifest to /etc/kubernetes/manifests/kube-apiserver.yaml and backed up old manifest to /etc/kubernetes/tmp/kubeadm-backup-manifests-2019-10-03-20-30-46/kube-apiserver.yaml
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-k8s-master hash: b5fdfbe9ab5d3cc91000d2734dd669ca
Static pod: kube-apiserver-k8s-master hash: b5fdfbe9ab5d3cc91000d2734dd669ca
Static pod: kube-apiserver-k8s-master hash: b5fdfbe9ab5d3cc91000d2734dd669ca
Static pod: kube-apiserver-k8s-master hash: b5fdfbe9ab5d3cc91000d2734dd669ca
Static pod: kube-apiserver-k8s-master hash: b5fdfbe9ab5d3cc91000d2734dd669ca
Static pod: kube-apiserver-k8s-master hash: b5fdfbe9ab5d3cc91000d2734dd669ca
Static pod: kube-apiserver-k8s-master hash: b5fdfbe9ab5d3cc91000d2734dd669ca
Static pod: kube-apiserver-k8s-master hash: b5fdfbe9ab5d3cc91000d2734dd669ca
Static pod: kube-apiserver-k8s-master hash: b5fdfbe9ab5d3cc91000d2734dd669ca
Static pod: kube-apiserver-k8s-master hash: b5fdfbe9ab5d3cc91000d2734dd669ca
Static pod: kube-apiserver-k8s-master hash: b5fdfbe9ab5d3cc91000d2734dd669ca
Static pod: kube-apiserver-k8s-master hash: bb799a8d323c1577bf9e10ede7914b30
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[apiclient] Found 0 Pods for label selector component=kube-apiserver
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component kube-apiserver upgraded successfully!
[upgrade/staticpods] Moved new manifest to /etc/kubernetes/manifests/kube-controller-manager.yaml and backed up old manifest to /etc/kubernetes/tmp/kubeadm-backup-manifests-2019-10-03-20-30-46/kube-controller-manager.yaml
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-k8s-master hash: 31a4d945c251e62ac94e215494184514
Static pod: kube-controller-manager-k8s-master hash: 54146492ed90bfa147f56609eee8005a
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component kube-controller-manager upgraded successfully!
[upgrade/staticpods] Moved new manifest to /etc/kubernetes/manifests/kube-scheduler.yaml and backed up old manifest to /etc/kubernetes/tmp/kubeadm-backup-manifests-2019-10-03-20-30-46/kube-scheduler.yaml
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-k8s-master hash: fefab66bc5a8a35b1f328ff4f74a8477
Static pod: kube-scheduler-k8s-master hash: 58272442e226c838b193bbba4c44091e
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component kube-scheduler upgraded successfully!
[upload-config] storing the configuration used in ConfigMap kubeadm-config in the kube-system Namespace
[kubelet] Creating a ConfigMap kubelet-config-1.14 in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Downloading configuration for the kubelet from the kubelet-config-1.14 ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file /var/lib/kubelet/config.yaml
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[certs] Generating apiserver certificate and key
[certs] apiserver serving cert is signed for dns names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.3.1.20]
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to v1.14.0. Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
root@k8s-master:~# 

The last two lines show that the cluster was upgraded successfully.

kubeadm upgrade apply performs the following actions (a dry-run sketch follows the list):

Checks whether the cluster can be upgraded:
the API server is reachable,
all nodes are Ready,
the control plane is healthy.
Enforces the version skew policies.
Makes sure the control plane images are available and pulled to the machine.
Upgrades the control plane components by updating the manifest files under /etc/kubernetes/manifests, restoring the original manifests if the upgrade fails.
Applies the new kube-dns and kube-proxy manifests and creates the related RBAC rules.
Creates new certificates and keys for the API server and backs up the old ones if they are about to expire within 180 days.
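
To preview these actions without changing the cluster, kubeadm upgrade apply also accepts a dry-run flag; a quick sketch (the target version is illustrative):

# Walk through the upgrade without modifying any manifests
kubeadm upgrade apply v1.14.0 --dry-run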

As of v1.16, kubeadm upgrade apply must be executed on the primary control plane node.

6. Once it finishes, verify the cluster version:

root@k8s-master:~# kubectl version 
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:39:04Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:45:25Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}

As you can see, kubectl is still at 1.13.1 while the server-side control plane has been upgraded to 1.14.0.

The master components are running normally:

root@k8s-master:~# kubectl get componentstatuses 
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}

At this point, the master components on the first control plane node have been upgraded. The control plane node also runs kubelet and kubectl, so those two need upgrading as well.

7. Upgrade the CNI plugin.

This step is optional: check whether a newer version of your CNI plugin is available.

8. Upgrade kubelet and kubectl on this control plane node

kubelet can now be upgraded. Workload Pods are not affected while this is in progress.

8.1. Upgrade kubelet and kubectl:

# replace x in 1.14.x-00 with the latest patch version
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.14.0-00 kubectl=1.14.0-00 && \
apt-mark hold kubelet kubectl 

8.2. Restart kubelet:

sudo systemctl restart kubelet

9. Check the kubectl version; it matches the expected version:

root@k8s-master:~# kubectl version 
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:53:57Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:45:25Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
root@k8s-master:~# 

The first control plane node has now been fully upgraded.

Upgrading the other control plane nodes

10. Upgrade the other control plane nodes.

On each remaining control plane node, perform the same steps as on the first, but run:

sudo kubeadm upgrade node experimental-control-plane

instead of:

sudo kubeadm upgrade apply

There is no need to run sudo kubeadm upgrade plan again.

kubeadm upgrade node experimental-control-plane performs the following actions:

Fetches the kubeadm ClusterConfiguration from the cluster.
Optionally backs up the kube-apiserver certificate.
Upgrades the static Pod manifests of the three core control plane components.

Upgrading the worker nodes

Now upgrade the components on the worker nodes: kubeadm, kubelet, and kube-proxy.

To keep the cluster accessible, proceed one node at a time.

1. Mark the node as under maintenance.

The worker node is still on the original 1.13:

root@k8s-master:~# kubectl get node
NAME         STATUS   ROLES    AGE    VERSION
k8s-master   Ready    master   292d   v1.14.0
k8s-node01   Ready    node     292d   v1.13.1

Before upgrading, mark the node unschedulable and evict all of its Pods:

kubectl drain $NODE --ignore-daemonsets
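
Here $NODE stands for the node's name. For the example cluster above, this would look like the following sketch (--delete-local-data is only needed if Pods use emptyDir volumes):

NODE=k8s-node01
kubectl drain $NODE --ignore-daemonsets --delete-local-data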

2. Upgrade kubeadm and kubelet

Now install kubeadm and kubelet on each worker node in the same way, since kubeadm is used to upgrade the kubelet:

# replace x in 1.14.x-00 with the latest patch version
apt-mark unhold kubeadm kubelet && \
apt-get update && apt-get install -y kubeadm=1.14.0-00 kubelet=1.14.0-00 && \
apt-mark hold kubeadm kubelet

3. Upgrade the kubelet configuration file:

$ kubeadm upgrade node config --kubelet-version v1.14.0
[kubelet-start] Downloading configuration for the kubelet from the kubelet-config-1.14 ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file /var/lib/kubelet/config.yaml
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
root@k8s-master:~# 

4. Restart kubelet:

$ sudo systemctl restart kubelet

5. Finally, mark the node schedulable again to bring it back into service:

kubectl uncordon $NODE

The node is now upgraded; kubelet and kube-proxy report the expected version v1.14.0.

Verify the cluster version

root@k8s-master:~# kubectl get node
NAME         STATUS   ROLES    AGE    VERSION
k8s-master   Ready    master   292d   v1.14.0
k8s-node01   Ready    node     292d   v1.14.0

The STATUS column should show Ready for every node, and the VERSION column should show the new version.

At this point, the whole upgrade procedure is complete.

Recovering from a failed state

If kubeadm upgrade fails and cannot roll back (for example because of an unexpected shutdown during execution), you can simply run kubeadm upgrade again. The command is idempotent and ensures that the actual state eventually matches the state you declare.

To recover from a bad state, you can also run the following without changing the version your cluster is running:

kubeadm upgrade apply --force

For more information, see the official upgrade documentation.

Upgrading Kubernetes from 1.14.x to 1.15.x

The procedure for upgrading from 1.14.0 to 1.15.0 is much the same; only the upgrade commands differ slightly.

Upgrading the primary control plane node

The flow is the same as upgrading from 1.13.x to 1.14.0.

1. Find the available versions and install kubeadm at the target version v1.15.0:

apt-cache policy kubeadm
apt-mark unhold kubeadm kubelet
apt-get install -y kubeadm=1.15.0-00

kubeadm has reached the expected version:

root@k8s-master:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:37:41Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

2. Run the upgrade plan

As of v1.15, expiring certificates are renewed automatically: kubeadm renews all certificates during the control plane upgrade, i.e. the kubeadm upgrade shipped with v1.15 automatically renews the certificates it manages on this node. If you do not want automatic renewal, pass --certificate-renewal=false; a sketch follows.
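
As a sketch, inspecting certificate lifetimes and opting out of renewal might look like this (kubeadm alpha certs check-expiration is available from v1.15; the target version matches the upgrade below):

# Show when the kubeadm-managed certificates expire
kubeadm alpha certs check-expiration
# Upgrade without renewing the certificates
kubeadm upgrade apply v1.15.0 --certificate-renewal=false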

The upgrade plan:

kubeadm upgrade plan

The output looks like this:

root@k8s-master:~# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.14.0
[upgrade/versions] kubeadm version: v1.15.0
I1005 20:45:04.474363   38108 version.go:248] remote version is much newer: v1.16.1; falling back to: stable-1.15
[upgrade/versions] Latest stable version: v1.15.4
[upgrade/versions] Latest version in the v1.14 series: v1.14.7

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     1 x v1.14.0   v1.14.7
            1 x v1.15.0   v1.14.7

Upgrade to the latest version in the v1.14 series:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.14.0   v1.14.7
Controller Manager   v1.14.0   v1.14.7
Scheduler            v1.14.0   v1.14.7
Kube Proxy           v1.14.0   v1.14.7
CoreDNS              1.3.1     1.3.1
Etcd                 3.3.10    3.3.10

You can now apply the upgrade by executing the following command:

    kubeadm upgrade apply v1.14.7

_____________________________________________________________________

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     1 x v1.14.0   v1.15.4
            1 x v1.15.0   v1.15.4

Upgrade to the latest stable version:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.14.0   v1.15.4
Controller Manager   v1.14.0   v1.15.4
Scheduler            v1.14.0   v1.15.4
Kube Proxy           v1.14.0   v1.15.4
CoreDNS              1.3.1     1.3.1
Etcd                 3.3.10    3.3.10

You can now apply the upgrade by executing the following command:

    kubeadm upgrade apply v1.15.4

Note: Before you can perform this upgrade, you have to update kubeadm to v1.15.4.

_____________________________________________________________________

3. Upgrade the control plane

Following the plan's guidance, upgrade the control plane:

kubeadm upgrade apply v1.15.0

Since kubeadm itself is at v1.15.0, the cluster can only be upgraded to v1.15.0.

It prints the following:

root@k8s-master:~# kubeadm upgrade apply v1.15.0
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/version] You have chosen to change the cluster version to v1.15.0
[upgrade/versions] Cluster version: v1.14.0
[upgrade/versions] kubeadm version: v1.15.0
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
...
## Pulling the images
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-scheduler.
...
## All component images have been pulled
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
...
...
## All certificates were renewed automatically, as shown below
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Writing new Static Pod manifests to /etc/kubernetes/tmp/kubeadm-upgraded-manifests353124264
[upgrade/staticpods] Preparing for kube-apiserver upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
...
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to v1.15.0. Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.

4. The upgrade succeeded; verify.

The upgrade was successful. Query the cluster's core component versions again:

root@k8s-master:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:53:57Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:32:14Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

Check the node versions:

NAME         STATUS   ROLES    AGE    VERSION
k8s-master   Ready    master   295d   v1.14.0
k8s-node01   Ready    node     295d   v1.14.0

5. Upgrade kubelet and kubectl on this control plane node

The core control plane components are now at v1.15.0; next, upgrade kubelet and kubectl on this node. Workload Pods keep running during the upgrade.

# replace x in 1.15.x-00 with the latest patch version
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.15.0-00 kubectl=1.15.0-00 && \
apt-mark hold kubelet kubectl 

6. Restart kubelet:

sudo systemctl restart kubelet

7. Verify that the kubelet and kubectl versions match the expected version:

root@k8s-master:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:32:14Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

Check the node versions:

root@k8s-master:~# kubectl get node
NAME         STATUS   ROLES    AGE    VERSION
k8s-master   Ready    master   295d   v1.15.0
k8s-node01   Ready    node     295d   v1.14.0

Upgrading the other control plane nodes

The command for upgrading the three control plane components on the other control plane nodes is different.

1. Upgrade the other control plane components with:

$ sudo kubeadm upgrade node

2. Then upgrade kubelet and kubectl:

# replace x in 1.15.x-00 with the latest patch version
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.15.x-00 kubectl=1.15.x-00 && \
apt-mark hold kubelet kubectl

3. Restart kubelet:

$ sudo systemctl restart kubelet

Upgrading the worker nodes

Upgrading the worker nodes is the same as before, so this is abbreviated.

Run the following on all worker nodes.

1. Upgrade kubeadm:

# replace x in 1.15.x-00 with the latest patch version
apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm=1.15.x-00 && \
apt-mark hold kubeadm

Check the kubeadm version:

root@k8s-node01:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:37:41Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

2. Put the node into maintenance mode:

kubectl cordon $NODE

3. Update the kubelet configuration file:

$ sudo kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Skipping phase. Not a control plane node
[kubelet-start] Downloading configuration for the kubelet from the kubelet-config-1.15 ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file /var/lib/kubelet/config.yaml
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.

4. Upgrade kubelet and kubectl:

# replace x in 1.15.x-00 with the latest patch version
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.15.x-00 kubectl=1.15.x-00 && \
apt-mark hold kubelet kubectl

5. Restart kubelet:

sudo systemctl restart kubelet

kube-proxy is also upgraded and restarted automatically at this point; see the check below.
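
To confirm, you can check which image the kube-proxy DaemonSet now runs; a quick sketch (the expected output shown in the comment is illustrative):

kubectl -n kube-system get daemonset kube-proxy \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
# k8s.gcr.io/kube-proxy:v1.15.0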

6. Take the node out of maintenance mode:

kubectl uncordon $NODE

The worker node upgrade is complete.

Verify the cluster version

root@k8s-master:~# kubectl get node
NAME         STATUS     ROLES    AGE    VERSION
k8s-master   Ready      master   295d   v1.15.0
k8s-node01   NotReady   node     295d   v1.15.0

kubeadm upgrade node in detail

In this upgrade flow, both the other control plane nodes and the worker nodes were upgraded with kubeadm upgrade node.

When run on another control plane node, kubeadm upgrade node:

Fetches the kubeadm ClusterConfiguration from the cluster.
Optionally backs up the kube-apiserver certificate.
Upgrades the static Pod manifests of the three core control plane components.
Upgrades the kubelet configuration on that control plane node.

When run on a worker node, kubeadm upgrade node:

Fetches the kubeadm ClusterConfiguration from the cluster.
Upgrades the kubelet configuration on that worker node.

Upgrading Kubernetes from 1.15.x to 1.16.x

Upgrading from 1.15.x to 1.16.x uses exactly the same commands as upgrading from 1.14.x to 1.15.x above, so it is not repeated here.
