
Upgrading Kubernetes from v1.24 to v1.28 with kubeadm

t_ume
  • The k8s environment I supposedly upgraded just last year has gone stale again (releases come too fast...)
  • It's my home-lab k8s, so I'll upgrade it without much ceremony
  • The previous upgrade is here:

https://zenn.dev/t_ume/scraps/d56706bc054281

  • The cluster was built with kubeadm, so the upgrade is done with kubeadm upgrade.
  • It's a dev machine, so I won't bother evacuating or stopping the containers running on it.

Cluster configuration

  • OS: Ubuntu
  • Control plane: 3 nodes
  • Workers: 3 nodes
  • Start: v1.24.5
  • Goal: v1.28.x
    • Maintenance support for it ends in October 2024... so soon...
  • Upgrade the control-plane (master) nodes first, then the workers.
  • The basic flow is the same for every version:
  1. Pick the target version
  2. Upgrade kubeadm
  3. Check the upgrade plan (impact, etc.)
  4. kubeadm upgrade (one control-plane node at a time)
  5. Upgrade kubelet/kubectl and restart kubelet (one control-plane node at a time)
  6. Repeat steps 2, 4, and 5 on each worker, one node at a time
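The per-node loop above can be sketched as a dry-run script. The `upgrade_one_node` helper, the `run` echo wrapper, and the hard-coded package version are my own illustration, not part of the kubeadm procedure itself; note that `kubeadm upgrade apply` runs only on the first control-plane node, and `kubeadm upgrade node` everywhere else:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the flow above. Nothing here touches the system:
# `run` only prints the command it would execute; drop it to run for real.
set -eu

KUBEADM_VERSION='1.25.16-1.1'   # assumed target package version

run() { printf '+ %s\n' "$*"; }

# Steps 2, 4, and 5 for one node. Pass "apply vX.Y.Z" on the first
# control-plane node, "node" on the others and on the workers.
upgrade_one_node() {
  local upgrade_cmd=$1
  run apt-mark unhold kubeadm kubelet kubectl
  run apt-get update
  run apt-get install -y "kubeadm=${KUBEADM_VERSION}"
  run kubeadm upgrade "${upgrade_cmd}"
  run apt-get install -y "kubelet=${KUBEADM_VERSION}" "kubectl=${KUBEADM_VERSION}"
  run apt-mark hold kubeadm kubelet kubectl
  run systemctl daemon-reload
  run systemctl restart kubelet
}

upgrade_one_node 'apply v1.25.16'   # first control-plane node only
upgrade_one_node 'node'             # remaining control-plane and worker nodes
```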
  • The current state:
$ kubectl get node -owide
NAME   STATUS   ROLES           AGE      VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
cp11   Ready    control-plane   344d     v1.24.5   192.168.10.81   <none>        Ubuntu 20.04.5 LTS   5.4.0-169-generic   containerd://1.6.15
cp12   Ready    control-plane   343d     v1.24.5   192.168.10.82   <none>        Ubuntu 20.04.5 LTS   5.4.0-139-generic   containerd://1.6.15
cp13   Ready    control-plane   343d     v1.24.5   192.168.10.83   <none>        Ubuntu 20.04.5 LTS   5.4.0-169-generic   containerd://1.6.15
nd01   Ready    <none>          2y232d   v1.24.5   192.168.10.71   <none>        Ubuntu 20.04.6 LTS   5.4.0-169-generic   containerd://1.6.21
nd02   Ready    <none>          2y232d   v1.24.5   192.168.10.72   <none>        Ubuntu 20.04.6 LTS   5.4.0-169-generic   containerd://1.6.21
nd03   Ready    <none>          345d     v1.24.5   192.168.10.73   <none>        Ubuntu 20.04.6 LTS   5.4.0-169-generic   containerd://1.6.21

1.24 ⇒ 1.25

  • Check the latest 1.25 release
Version selection
$ apt update
...
75 packages can be upgraded. Run 'apt list --upgradable' to see them.

$ apt-cache madison kubeadm | grep 1.25
   kubeadm | 1.25.16-1.1 | https://pkgs.k8s.io/core:/stable:/v1.25/deb  Packages
...
  • Upgrade the control plane first
  • Starting with the first node
# Upgrade kubeadm
$ apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm='1.25.16-1.1' && \
apt-mark hold kubeadm

kubeadm was already not on hold.
...
1 upgraded, 0 newly installed, 0 to remove and 74 not upgraded.
...
Unpacking kubeadm (1.25.16-1.1) over (1.24.5-00) ...
dpkg: warning: unable to delete old directory '/etc/systemd/system/kubelet.service.d': Directory not empty
Setting up kubeadm (1.25.16-1.1) ...
kubeadm set on hold.

$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.16", GitCommit:"c5f43560a4f98f2af3743a59299fb79f07924373", GitTreeState:"clean", BuildDate:"2023-11-15T22:36:51Z", GoVersion:"go1.20.10", Compiler:"gc", Platform:"linux/amd64"}

# Check the upgrade plan
$ kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.24.5
[upgrade/versions] kubeadm version: v1.25.16
I1224 18:05:27.284226  218379 version.go:256] remote version is much newer: v1.29.0; falling back to: stable-1.25
[upgrade/versions] Target version: v1.25.16
[upgrade/versions] Latest version in the v1.24 series: v1.24.17

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       TARGET
kubelet     6 x v1.24.5   v1.24.17

Upgrade to the latest version in the v1.24 series:

COMPONENT                 CURRENT   TARGET
kube-apiserver            v1.24.5   v1.24.17
kube-controller-manager   v1.24.5   v1.24.17
kube-scheduler            v1.24.5   v1.24.17
kube-proxy                v1.24.5   v1.24.17
CoreDNS                   v1.8.6    v1.9.3
etcd                      3.5.3-0   3.5.6-0

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.24.17

_____________________________________________________________________

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       TARGET
kubelet     6 x v1.24.5   v1.25.16

Upgrade to the latest stable version:

COMPONENT                 CURRENT   TARGET
kube-apiserver            v1.24.5   v1.25.16
kube-controller-manager   v1.24.5   v1.25.16
kube-scheduler            v1.24.5   v1.25.16
kube-proxy                v1.24.5   v1.25.16
CoreDNS                   v1.8.6    v1.9.3
etcd                      3.5.3-0   3.5.9-0

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.25.16

_____________________________________________________________________


The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.

API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________
  • Looks fine
  • Upgrade the first node
$ kubeadm upgrade apply v1.25.16
...
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.25.16". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
  • The first node upgraded successfully
  • During the upgrade, etcd, kube-apiserver, kube-controller-manager, and kube-scheduler were recreated.
  • kube-proxy and coredns on every control-plane/worker node were recreated as well.
  • Check the state with kubectl
  • get node does not reflect the new version yet
  • Perhaps because my kubectl was already newer, kubectl version did show the new server version.
$ kubectl get node -owide
NAME   STATUS   ROLES           AGE      VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
cp11   Ready    control-plane   346d     v1.24.5   192.168.10.81   <none>        Ubuntu 20.04.5 LTS   5.4.0-169-generic   containerd://1.6.15
cp12   Ready    control-plane   345d     v1.24.5   192.168.10.82   <none>        Ubuntu 20.04.5 LTS   5.4.0-169-generic   containerd://1.6.15
cp13   Ready    control-plane   345d     v1.24.5   192.168.10.83   <none>        Ubuntu 20.04.5 LTS   5.4.0-169-generic   containerd://1.6.15
nd01   Ready    <none>          2y233d   v1.24.5   192.168.10.71   <none>        Ubuntu 20.04.6 LTS   5.4.0-169-generic   containerd://1.6.21
nd02   Ready    <none>          2y233d   v1.24.5   192.168.10.72   <none>        Ubuntu 20.04.6 LTS   5.4.0-169-generic   containerd://1.6.21
nd03   Ready    <none>          347d     v1.24.5   192.168.10.73   <none>        Ubuntu 20.04.6 LTS   5.4.0-169-generic   containerd://1.6.21

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.1", GitCommit:"e4d4e1ab7cf1bf15273ef97303551b279f0920a9", GitTreeState:"clean", BuildDate:"2022-09-14T19:49:27Z", GoVersion:"go1.19.1", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.16", GitCommit:"c5f43560a4f98f2af3743a59299fb79f07924373", GitTreeState:"clean", BuildDate:"2023-11-15T22:28:05Z", GoVersion:"go1.20.10", Compiler:"gc", Platform:"linux/amd64"}
  • Do the same on the second and third nodes
# Upgrade kubeadm on the second node
$ apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm='1.25.16-1.1' && \
apt-mark hold kubeadm

# Upgrade the second node (the command differs from the first node's)
$ kubeadm upgrade node
...
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.

# Upgrade the third node the same way
# As with the first node, the node versions reported afterwards had not been bumped yet.
  • To finish the control-plane work, upgrade kubectl and kubelet on each control-plane node.
$ apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet='1.25.16-1.1' kubectl='1.25.16-1.1' && \
apt-mark hold kubelet kubectl

kubelet was already not on hold.
kubectl was already not on hold.
...
The following packages will be upgraded:
  kubelet
The following packages will be DOWNGRADED:
  kubectl
1 upgraded, 0 newly installed, 1 downgraded, 0 to remove and 73 not upgraded.
E: Packages were downgraded and -y was used without --allow-downgrades
  • kubectl had somehow already been upgraded... did I install it manually at some point?
  • So I upgraded only kubelet
  • Reload the systemd daemon and restart kubelet
    • If the control-plane node hosts any containers outside kube-system, evacuate them first
$ systemctl daemon-reload
$ systemctl restart kubelet
  • Nodes whose kubelet was restarted started reporting the new version in kubectl get node
  • The final state:
$ kubectl get node -owide
NAME   STATUS   ROLES           AGE      VERSION    INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
cp11   Ready    control-plane   346d     v1.25.16   192.168.10.81   <none>        Ubuntu 20.04.5 LTS   5.4.0-169-generic   containerd://1.6.15
cp12   Ready    control-plane   345d     v1.25.16   192.168.10.82   <none>        Ubuntu 20.04.5 LTS   5.4.0-169-generic   containerd://1.6.15
cp13   Ready    control-plane   345d     v1.25.16   192.168.10.83   <none>        Ubuntu 20.04.5 LTS   5.4.0-169-generic   containerd://1.6.15
nd01   Ready    <none>          2y234d   v1.24.5    192.168.10.71   <none>        Ubuntu 20.04.6 LTS   5.4.0-169-generic   containerd://1.6.21
nd02   Ready    <none>          2y234d   v1.24.5    192.168.10.72   <none>        Ubuntu 20.04.6 LTS   5.4.0-169-generic   containerd://1.6.21
nd03   Ready    <none>          347d     v1.24.5    192.168.10.73   <none>        Ubuntu 20.04.6 LTS   5.4.0-169-generic   containerd://1.6.21
  • Next up are the worker nodes
  • The flow is the same as for the control plane:
  1. Upgrade the kubeadm command
  2. Upgrade the worker
  3. Upgrade kubelet/kubectl
  4. Restart kubelet
  • Run the steps below once per node
  • This is a private dev environment so I just upgrade in place, but for production or anything that needs care, drain the node (kubectl drain <node> --ignore-daemonsets) when upgrading kubelet so its containers move to other nodes first
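For clusters where the workloads do matter, the evacuation step can be sketched like this. The `drain_for_upgrade` helper is my own illustration (it prints the commands rather than executing them, since they need a live cluster), and nd01 is just an example node from this cluster:

```shell
# drain_for_upgrade NODE: print the commands that evacuate NODE before its
# kubelet upgrade and readmit it afterwards. Printing keeps this a sketch;
# run the printed commands against a real cluster.
drain_for_upgrade() {
  printf 'kubectl drain %s --ignore-daemonsets --delete-emptydir-data\n' "$1"
  printf '# ...upgrade kubelet on %s and restart it here...\n' "$1"
  printf 'kubectl uncordon %s\n' "$1"
}

drain_for_upgrade nd01
```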
# Upgrade kubeadm
$ apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm='1.25.16-1.1' && \
apt-mark hold kubeadm

# Upgrade node
# Only the kubelet config file gets rewritten
$ kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks
[preflight] Skipping prepull. Not a control plane node.
[upgrade] Skipping phase. Not a control plane node.
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.

# Upgrade kubelet/kubectl
$ apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet='1.25.16-1.1' kubectl='1.25.16-1.1' && \
apt-mark hold kubelet kubectl

# Restart kubelet
$ systemctl daemon-reload
$ systemctl restart kubelet
  • The 1.24 ⇒ 1.25 upgrade is complete
$ kubectl get node -owide
NAME   STATUS   ROLES           AGE      VERSION    INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
cp11   Ready    control-plane   346d     v1.25.16   192.168.10.81   <none>        Ubuntu 20.04.5 LTS   5.4.0-169-generic   containerd://1.6.15
cp12   Ready    control-plane   345d     v1.25.16   192.168.10.82   <none>        Ubuntu 20.04.5 LTS   5.4.0-169-generic   containerd://1.6.15
cp13   Ready    control-plane   345d     v1.25.16   192.168.10.83   <none>        Ubuntu 20.04.5 LTS   5.4.0-169-generic   containerd://1.6.15
nd01   Ready    <none>          2y234d   v1.25.16   192.168.10.71   <none>        Ubuntu 20.04.6 LTS   5.4.0-169-generic   containerd://1.6.21
nd02   Ready    <none>          2y234d   v1.25.16   192.168.10.72   <none>        Ubuntu 20.04.6 LTS   5.4.0-169-generic   containerd://1.6.21
nd03   Ready    <none>          347d     v1.25.16   192.168.10.73   <none>        Ubuntu 20.04.6 LTS   5.4.0-169-generic   containerd://1.6.21
  • Only rook/ceph and Harbor Pods were running, and both came through fine
  • The ceph Pods occasionally cycled through Create/Terminate, but the behavior looked normal
    • They eventually settled into Running
    • When a kubelet restart lagged, some Pods appeared to sit in Pending, but they ran normally after the restart.
  • The upgrade also renews the certificates, so let's check them this time too
  • Before the upgrade they looked like this
    • They were close to expiring...
$ kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Jan 14, 2024 16:12 UTC   18d             ca                      no
apiserver                  Jan 14, 2024 16:12 UTC   18d             ca                      no
apiserver-etcd-client      Jan 14, 2024 16:12 UTC   18d             etcd-ca                 no
apiserver-kubelet-client   Jan 14, 2024 16:12 UTC   18d             ca                      no
controller-manager.conf    Jan 14, 2024 16:12 UTC   18d             ca                      no
etcd-healthcheck-client    Jan 14, 2024 16:12 UTC   18d             etcd-ca                 no
etcd-peer                  Jan 14, 2024 16:12 UTC   18d             etcd-ca                 no
etcd-server                Jan 14, 2024 16:12 UTC   18d             etcd-ca                 no
front-proxy-client         Jan 14, 2024 16:12 UTC   18d             front-proxy-ca          no
scheduler.conf             Jan 14, 2024 16:12 UTC   18d             ca                      no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      May 04, 2031 16:31 UTC   7y              no
etcd-ca                 May 04, 2031 16:31 UTC   7y              no
front-proxy-ca          May 04, 2031 16:31 UTC   7y              no
  • After the upgrade:
  • The expirations were pushed out to one year from the time of the upgrade
$ kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Dec 25, 2024 16:38 UTC   364d            ca                      no
apiserver                  Dec 25, 2024 16:37 UTC   364d            ca                      no
apiserver-etcd-client      Dec 25, 2024 16:37 UTC   364d            etcd-ca                 no
apiserver-kubelet-client   Dec 25, 2024 16:37 UTC   364d            ca                      no
controller-manager.conf    Dec 25, 2024 16:37 UTC   364d            ca                      no
etcd-healthcheck-client    Dec 25, 2024 16:36 UTC   364d            etcd-ca                 no
etcd-peer                  Dec 25, 2024 16:36 UTC   364d            etcd-ca                 no
etcd-server                Dec 25, 2024 16:36 UTC   364d            etcd-ca                 no
front-proxy-client         Dec 25, 2024 16:37 UTC   364d            front-proxy-ca          no
scheduler.conf             Dec 25, 2024 16:37 UTC   364d            ca                      no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      May 04, 2031 16:31 UTC   7y              no
etcd-ca                 May 04, 2031 16:31 UTC   7y              no
front-proxy-ca          May 04, 2031 16:31 UTC   7y              no
  • On closer inspection, the rook-ceph operator was throwing errors.
  • Per the log below, the operator still requests PodDisruptionBudget from policy/v1beta1, which was removed in v1.25, so the operator itself needs upgrading.
  • I'll deal with it in a separate scrap.

https://zenn.dev/t_ume/scraps/aa3036b85c72bb

$ kubectl logs -f rook-ceph-operator-95f44b96c-gjdxw
...
2024-01-03 14:49:21.561780 I | clusterdisruption-controller: create event from ceph cluster CR
I0103 14:49:22.600197       7 request.go:655] Throttling request took 1.036381077s, request: GET:https://10.96.0.1:443/apis/cassandra.k8ssandra.io/v1alpha1?timeout=32s
2024-01-03 14:49:23.452267 E | operator: gave up to run the operator. failed to run the controller-runtime manager: no matches for kind "PodDisruptionBudget" in version "policy/v1beta1"
failed to run operator
: failed to run the controller-runtime manager: no matches for kind "PodDisruptionBudget" in version "policy/v1beta1"

1.25 ⇒ 1.26

  • Continuing on, upgrade to 1.26
  • First, point the repository file at 1.26
/etc/apt/sources.list.d/kubernetes.list
deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.26/deb/ /
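Since this one-line repository change repeats for every minor version, it can be scripted. The `bump_minor` helper below is my own convenience wrapper, not part of the official procedure:

```shell
# bump_minor FILE FROM TO: rewrite the pkgs.k8s.io minor version in FILE,
# e.g. .../core:/stable:/v1.25/... -> .../core:/stable:/v1.26/...
bump_minor() {
  sed -i "s#/core:/stable:/v$2/#/core:/stable:/v$3/#" "$1"
}

# On a real node, then verify the available packages:
#   bump_minor /etc/apt/sources.list.d/kubernetes.list 1.25 1.26
#   apt-get update && apt-cache madison kubeadm
```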
  • Check the upgrade target
$ apt-cache madison kubeadm
   kubeadm | 1.26.12-1.1 | https://pkgs.k8s.io/core:/stable:/v1.26/deb  Packages
...
  • Upgrading from 1.25.16 to 1.26.12
  • Upgrade the first control-plane node
$ apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm='1.26.12-1.1' && \
apt-mark hold kubeadm

Canceled hold on kubeadm.
...
The following packages will be upgraded:
  kubeadm
1 upgraded, 0 newly installed, 0 to remove and 75 not upgraded.
...
Setting up kubeadm (1.26.12-1.1) ...
kubeadm set on hold.

$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.12", GitCommit:"df63cd7cd818dd2262473d2170f4957c6735ba53", GitTreeState:"clean", BuildDate:"2023-12-19T13:41:12Z", GoVersion:"go1.20.12", Compiler:"gc", Platform:"linux/amd64"}

$ kubeadm upgrade plan
...
[upgrade/versions] Target version: v1.26.12
[upgrade/versions] Latest version in the v1.25 series: v1.25.16
...
Upgrade to the latest stable version:

COMPONENT                 CURRENT    TARGET
kube-apiserver            v1.25.16   v1.26.12
kube-controller-manager   v1.25.16   v1.26.12
kube-scheduler            v1.25.16   v1.26.12
kube-proxy                v1.25.16   v1.26.12
CoreDNS                   v1.9.3     v1.9.3
etcd                      3.5.9-0    3.5.9-0

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.26.12
...
API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no

$ kubeadm upgrade apply v1.26.12
...
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.26.12"
[upgrade/versions] Cluster version: v1.25.16
[upgrade/versions] kubeadm version: v1.26.12
[upgrade] Are you sure you want to proceed? [y/N]: y ★ type y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
...
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.26.12" (timeout: 5m0s)...
...
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
...
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
...
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
...
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.26.12". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
  • Upgraded cleanly
  • Second and third nodes:
$ apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm='1.26.12-1.1' && \
apt-mark hold kubeadm

$ kubeadm upgrade node
  • Upgraded cleanly, just like the first node
  • Finally, upgrade kubelet/kubectl on the three control-plane nodes
$ apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet='1.26.12-1.1' kubectl='1.26.12-1.1' && \
apt-mark hold kubelet kubectl

$ systemctl daemon-reload
$ systemctl restart kubelet
  • Upgraded cleanly
$ kubectl get no -owide
NAME   STATUS   ROLES           AGE      VERSION    INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
cp11   Ready    control-plane   354d     v1.26.12   192.168.10.81   <none>        Ubuntu 20.04.5 LTS   5.4.0-169-generic   containerd://1.6.15
cp12   Ready    control-plane   354d     v1.26.12   192.168.10.82   <none>        Ubuntu 20.04.5 LTS   5.4.0-169-generic   containerd://1.6.15
cp13   Ready    control-plane   354d     v1.26.12   192.168.10.83   <none>        Ubuntu 20.04.5 LTS   5.4.0-169-generic   containerd://1.6.15
nd01   Ready    <none>          2y242d   v1.25.16   192.168.10.71   <none>        Ubuntu 20.04.6 LTS   5.4.0-169-generic   containerd://1.6.21
nd02   Ready    <none>          2y242d   v1.25.16   192.168.10.72   <none>        Ubuntu 20.04.6 LTS   5.4.0-169-generic   containerd://1.6.21
nd03   Ready    <none>          355d     v1.25.16   192.168.10.73   <none>        Ubuntu 20.04.6 LTS   5.4.0-169-generic   containerd://1.6.21

  • Next, upgrade kubeadm/kubelet on the three workers.
  • The repository change and the upgrade flow are the same as above.
  • Upgraded without issue.
  • State after the upgrade:
$ kubectl get node -owide
NAME   STATUS   ROLES           AGE      VERSION    INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
cp11   Ready    control-plane   354d     v1.26.12   192.168.10.81   <none>        Ubuntu 20.04.5 LTS   5.4.0-169-generic   containerd://1.6.15
cp12   Ready    control-plane   354d     v1.26.12   192.168.10.82   <none>        Ubuntu 20.04.5 LTS   5.4.0-169-generic   containerd://1.6.15
cp13   Ready    control-plane   354d     v1.26.12   192.168.10.83   <none>        Ubuntu 20.04.5 LTS   5.4.0-169-generic   containerd://1.6.15
nd01   Ready    <none>          2y242d   v1.26.12   192.168.10.71   <none>        Ubuntu 20.04.6 LTS   5.4.0-169-generic   containerd://1.6.21
nd02   Ready    <none>          2y242d   v1.26.12   192.168.10.72   <none>        Ubuntu 20.04.6 LTS   5.4.0-169-generic   containerd://1.6.21
nd03   Ready    <none>          355d     v1.26.12   192.168.10.73   <none>        Ubuntu 20.04.6 LTS   5.4.0-169-generic   containerd://1.6.21

1.26 ⇒ 1.27

  • Upgrading to 1.27
  • First, point the repository file at 1.27
/etc/apt/sources.list.d/kubernetes.list
deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.27/deb/ /
  • Check the upgrade target
$ apt-cache madison kubeadm
   kubeadm | 1.27.9-1.1 | https://pkgs.k8s.io/core:/stable:/v1.27/deb  Packages
...
  • Upgrading from 1.26.12 to 1.27.9
  • The procedure is the same as before, so only the upgrade plan is shown
$ kubeadm upgrade plan
...
[upgrade/versions] Cluster version: v1.26.12
[upgrade/versions] kubeadm version: v1.27.9
...
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT        TARGET
kubelet     6 x v1.26.12   v1.27.9

Upgrade to the latest stable version:

COMPONENT                 CURRENT    TARGET
kube-apiserver            v1.26.12   v1.27.9
kube-controller-manager   v1.26.12   v1.27.9
kube-scheduler            v1.26.12   v1.27.9
kube-proxy                v1.26.12   v1.27.9
CoreDNS                   v1.9.3     v1.10.1
etcd                      3.5.9-0    3.5.9-0

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.27.9
...
API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
  • Upgraded without issue
$ kubectl get node -owide
NAME   STATUS   ROLES           AGE      VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
cp11   Ready    control-plane   354d     v1.27.9   192.168.10.81   <none>        Ubuntu 20.04.5 LTS   5.4.0-169-generic   containerd://1.6.15
cp12   Ready    control-plane   354d     v1.27.9   192.168.10.82   <none>        Ubuntu 20.04.5 LTS   5.4.0-169-generic   containerd://1.6.15
cp13   Ready    control-plane   354d     v1.27.9   192.168.10.83   <none>        Ubuntu 20.04.5 LTS   5.4.0-169-generic   containerd://1.6.15
nd01   Ready    <none>          2y242d   v1.27.9   192.168.10.71   <none>        Ubuntu 20.04.6 LTS   5.4.0-169-generic   containerd://1.6.21
nd02   Ready    <none>          2y242d   v1.27.9   192.168.10.72   <none>        Ubuntu 20.04.6 LTS   5.4.0-169-generic   containerd://1.6.21
nd03   Ready    <none>          355d     v1.27.9   192.168.10.73   <none>        Ubuntu 20.04.6 LTS   5.4.0-169-generic   containerd://1.6.21

1.27 ⇒ 1.28

  • The last upgrade of this round
  • First, point the repository file at 1.28
/etc/apt/sources.list.d/kubernetes.list
deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /
  • Check the upgrade target
$ apt-cache madison kubeadm
   kubeadm | 1.28.5-1.1 | https://pkgs.k8s.io/core:/stable:/v1.28/deb  Packages
...
  • Upgrading from 1.27.9 to 1.28.5
  • The procedure is the same as before, so only the upgrade plan is shown
$ kubeadm upgrade plan
...
[upgrade/versions] Cluster version: v1.27.9
[upgrade/versions] kubeadm version: v1.28.5
...
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       TARGET
kubelet     6 x v1.27.9   v1.28.5

Upgrade to the latest stable version:

COMPONENT                 CURRENT   TARGET
kube-apiserver            v1.27.9   v1.28.5
kube-controller-manager   v1.27.9   v1.28.5
kube-scheduler            v1.27.9   v1.28.5
kube-proxy                v1.27.9   v1.28.5
CoreDNS                   v1.10.1   v1.10.1
etcd                      3.5.9-0   3.5.9-0

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.28.5
...
API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
  • The final cluster state:
$ kubectl get no -owide
NAME   STATUS   ROLES           AGE      VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
cp11   Ready    control-plane   354d     v1.28.5   192.168.10.81   <none>        Ubuntu 20.04.5 LTS   5.4.0-169-generic   containerd://1.6.15
cp12   Ready    control-plane   354d     v1.28.5   192.168.10.82   <none>        Ubuntu 20.04.5 LTS   5.4.0-169-generic   containerd://1.6.15
cp13   Ready    control-plane   354d     v1.28.5   192.168.10.83   <none>        Ubuntu 20.04.5 LTS   5.4.0-169-generic   containerd://1.6.15
nd01   Ready    <none>          2y242d   v1.28.5   192.168.10.71   <none>        Ubuntu 20.04.6 LTS   5.4.0-169-generic   containerd://1.6.21
nd02   Ready    <none>          2y242d   v1.28.5   192.168.10.72   <none>        Ubuntu 20.04.6 LTS   5.4.0-169-generic   containerd://1.6.21
nd03   Ready    <none>          355d     v1.28.5   192.168.10.73   <none>        Ubuntu 20.04.6 LTS   5.4.0-169-generic   containerd://1.6.21
  • This year's upgrades finished without incident too
  • It's a private cluster with only a few containers running, but I'm relieved it all (mostly?) went through safely
  • And to think next year will need several upgrades again...