
Upgrading a Raspberry Pi kubeadm Cluster

Published 2021/12/29

Introduction

Kubernetes 1.23 was released recently.
That made me curious about the version of my home Raspberry Pi cluster, and when I checked, it turned out to be fairly old, so I decided to upgrade it (I built the cluster a year ago, so no surprise there).

This time I'm upgrading to the latest release, v1.23.1.

Preliminary Checks

The target k8s cluster is currently on v1.19.2.

$ kubectl get no
NAME    STATUS   ROLES    AGE    VERSION
hkm01   Ready    master   356d   v1.19.2
hkw01   Ready    <none>   356d   v1.19.2
hkw02   Ready    <none>   356d   v1.19.2

$ kubectl top node
NAME    CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
hkm01   597m         14%    2143Mi          58%
hkw01   558m         13%    2853Mi          77%
hkw02   393m         9%     2535Mi          68%

With memory this tight I was a little worried whether the drain would succeed, but nothing critical is running on the cluster, so I proceeded as-is.
All nodes were on the same version.

$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.2", GitCommit:"f5743093fd1c663cb0cbc89748f730662345d44d", GitTreeState:"clean", BuildDate:"2020-09-16T13:38:53Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/arm64"}

First, check the official cluster upgrade procedure.
https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/

Skimming through it, I found the following note:

Skipping MINOR versions when upgrading is unsupported.

The target cluster is on v1.19.2, so it has to be upgraded one minor version at a time, in the following order:
v1.19.2 → v1.20.x → v1.21.x → v1.22.x → v1.23.1

It looks a bit tedious, but I'll dutifully follow the official procedure.

Cluster Upgrade

Upgrading the kubeadm cluster from 1.19 to 1.20

Performed with reference to the following:
https://v1-20.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/

Upgrading the master node

Check the kubeadm version.

$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.2", GitCommit:"f5743093fd1c663cb0cbc89748f730662345d44d", GitTreeState:"clean", BuildDate:"2020-09-16T13:38:53Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/arm64"}

Check the available package versions to decide which version to upgrade to.

$ apt update
$ apt-cache madison kubeadm | grep 1.20
   kubeadm | 1.20.14-00 | https://apt.kubernetes.io kubernetes-xenial/main arm64 Packages
   kubeadm | 1.20.13-00 | https://apt.kubernetes.io kubernetes-xenial/main arm64 Packages
   kubeadm | 1.20.12-00 | https://apt.kubernetes.io kubernetes-xenial/main arm64 Packages
   kubeadm | 1.20.11-00 | https://apt.kubernetes.io kubernetes-xenial/main arm64 Packages
   kubeadm | 1.20.10-00 | https://apt.kubernetes.io kubernetes-xenial/main arm64 Packages
   kubeadm |  1.20.9-00 | https://apt.kubernetes.io kubernetes-xenial/main arm64 Packages
   kubeadm |  1.20.8-00 | https://apt.kubernetes.io kubernetes-xenial/main arm64 Packages
   kubeadm |  1.20.7-00 | https://apt.kubernetes.io kubernetes-xenial/main arm64 Packages
   kubeadm |  1.20.6-00 | https://apt.kubernetes.io kubernetes-xenial/main arm64 Packages
   kubeadm |  1.20.5-00 | https://apt.kubernetes.io kubernetes-xenial/main arm64 Packages
   kubeadm |  1.20.4-00 | https://apt.kubernetes.io kubernetes-xenial/main arm64 Packages
   kubeadm |  1.20.2-00 | https://apt.kubernetes.io kubernetes-xenial/main arm64 Packages
   kubeadm |  1.20.1-00 | https://apt.kubernetes.io kubernetes-xenial/main arm64 Packages
   kubeadm |  1.20.0-00 | https://apt.kubernetes.io kubernetes-xenial/main arm64 Packages

Confirm that the packages are version-held (pinned).

$ apt-mark showhold
docker-ce
docker-ce-cli
kubeadm
kubectl
kubelet
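
Since these packages are held, the install commands below use --allow-change-held-packages to replace them at a pinned version. An equivalent approach, used in more recent versions of the official docs, is to release the hold, install, and re-hold; a minimal sketch:

# Alternative to --allow-change-held-packages: unhold, install the pinned version, re-hold
$ sudo apt-mark unhold kubeadm && \
  sudo apt-get update && sudo apt-get install -y kubeadm=1.20.14-00 && \
  sudo apt-mark hold kubeadm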

Before upgrading, check the currently installed kubeadm package one more time.

$ apt list --installed | grep kubeadm
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
kubeadm/kubernetes-xenial,now 1.19.2-00 arm64 [installed,upgradable to: 1.23.1-00]

Upgrade kubeadm.

$ sudo apt-get update && sudo apt-get install -y --allow-change-held-packages kubeadm=1.20.14-00
Hit:1 https://download.docker.com/linux/ubuntu focal InRelease
Hit:3 http://ports.ubuntu.com/ubuntu-ports focal InRelease
Hit:2 https://packages.cloud.google.com/apt kubernetes-xenial InRelease
Hit:4 http://ports.ubuntu.com/ubuntu-ports focal-updates InRelease
Hit:5 http://ports.ubuntu.com/ubuntu-ports focal-backports InRelease
Hit:6 http://ports.ubuntu.com/ubuntu-ports focal-security InRelease
Reading package lists... Done
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
  adwaita-icon-theme at-spi2-core cpu-checker fontconfig fontconfig-config fonts-dejavu-core gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-x
  gtk-update-icon-cache hicolor-icon-theme humanity-icon-theme ibverbs-providers ipxe-qemu ipxe-qemu-256k-compat-efi-roms libaa1 libasyncns0 libatk-bridge2.0-0
  libatk1.0-0 libatk1.0-data libatspi2.0-0 libavahi-client3 libavahi-common-data libavahi-common3 libavc1394-0 libboost-iostreams1.71.0 libboost-thread1.71.0
  libbrlapi0.7 libcaca0 libcacard0 libcairo-gobject2 libcairo2 libcdparanoia0 libcolord2 libcups2 libdatrie1 libdv4 libepoxy0 libflac8 libfontconfig1
  libfreetype6 libgbm1 libgdk-pixbuf2.0-0 libgdk-pixbuf2.0-bin libgdk-pixbuf2.0-common libgraphite2-3 libgstreamer-plugins-base1.0-0
  libgstreamer-plugins-good1.0-0 libgtk-3-0 libgtk-3-bin libgtk-3-common libharfbuzz0b libibverbs1 libiec61883-0 libiscsi7 libjack-jackd2-0 libjbig0
  libjpeg-turbo8 libjpeg8 liblcms2-2 libmp3lame0 libmpg123-0 libnspr4 libnss3 libopus0 liborc-0.4-0 libpango-1.0-0 libpangocairo-1.0-0 libpangoft2-1.0-0
  libpixman-1-0 libpmem1 libpulse0 librados2 libraw1394-11 librbd1 librdmacm1 librest-0.7-0 librsvg2-2 librsvg2-common libsamplerate0 libshout3 libslirp0
  libsndfile1 libsoup-gnome2.4-1 libspeex1 libspice-server1 libtag1v5 libtag1v5-vanilla libthai-data libthai0 libtheora0 libtiff5 libtwolame0 libusbredirparser1
  libv4l-0 libv4lconvert0 libvirglrenderer1 libvisual-0.4-0 libvorbisenc2 libvpx6 libvte-2.91-0 libvte-2.91-common libwavpack1 libwayland-client0
  libwayland-cursor0 libwayland-egl1 libwayland-server0 libwebp6 libxcb-render0 libxcb-shm0 libxcomposite1 libxcursor1 libxdamage1 libxfixes3 libxi6
  libxinerama1 libxkbcommon0 libxrandr2 libxrender1 libxtst6 libxv1 ovmf qemu-block-extra qemu-efi-aarch64 qemu-efi-arm qemu-slof qemu-system-arm
  qemu-system-common qemu-system-data qemu-system-gui qemu-system-mips qemu-system-misc qemu-system-ppc qemu-system-s390x qemu-system-sparc qemu-system-x86
  qemu-utils seabios sharutils ubuntu-mono x11-common
Use 'sudo apt autoremove' to remove them.
The following additional packages will be installed:
  cri-tools
The following held packages will be changed:
  kubeadm
The following packages will be upgraded:
  cri-tools kubeadm
2 upgraded, 0 newly installed, 0 to remove and 143 not upgraded.
Need to get 16.8 MB of archives.
After this operation, 4110 kB of additional disk space will be used.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main arm64 cri-tools arm64 1.19.0-00 [10.2 MB]
Get:2 https://packages.cloud.google.com/apt kubernetes-xenial/main arm64 kubeadm arm64 1.20.14-00 [6562 kB]
Fetched 16.8 MB in 3s (5166 kB/s)
(Reading database ... 166698 files and directories currently installed.)
Preparing to unpack .../cri-tools_1.19.0-00_arm64.deb ...
Unpacking cri-tools (1.19.0-00) over (1.13.0-01) ...
Preparing to unpack .../kubeadm_1.20.14-00_arm64.deb ...
Unpacking kubeadm (1.20.14-00) over (1.19.2-00) ...
Setting up cri-tools (1.19.0-00) ...
Setting up kubeadm (1.20.14-00) ...

Confirm that the install succeeded and that kubeadm now reports the specified version, v1.20.14.

$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.14", GitCommit:"57a3aa3f13699cf3db9c52d228c18db94fa81876", GitTreeState:"clean", BuildDate:"2021-12-15T14:51:22Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/arm64"}

Check the cluster upgrade plan.
This command verifies that the cluster can be upgraded and shows the versions you can upgrade to. It needs root: as the first attempt below shows, running it without sudo fails with a permission error on /etc/kubernetes/admin.conf.

$ kubeadm upgrade plan
couldn't create a Kubernetes client from file "/etc/kubernetes/admin.conf": failed to load admin kubeconfig: open /etc/kubernetes/admin.conf: permission denied
To see the stack trace of this error execute with --v=5 or higher

$ sudo kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.19.6
[upgrade/versions] kubeadm version: v1.20.14
I1228 13:43:20.331441 3818849 version.go:254] remote version is much newer: v1.23.1; falling back to: stable-1.20
[upgrade/versions] Latest stable version: v1.20.14
[upgrade/versions] Latest stable version: v1.20.14
[upgrade/versions] Latest version in the v1.19 series: v1.19.16
[upgrade/versions] Latest version in the v1.19 series: v1.19.16

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
kubelet     3 x v1.19.2   v1.19.16

Upgrade to the latest version in the v1.19 series:

COMPONENT                 CURRENT    AVAILABLE
kube-apiserver            v1.19.6    v1.19.16
kube-controller-manager   v1.19.6    v1.19.16
kube-scheduler            v1.19.6    v1.19.16
kube-proxy                v1.19.6    v1.19.16
CoreDNS                   1.7.0      1.7.0
etcd                      3.4.13-0   3.4.13-0

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.19.16

_____________________________________________________________________

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
kubelet     3 x v1.19.2   v1.20.14

Upgrade to the latest stable version:

COMPONENT                 CURRENT    AVAILABLE
kube-apiserver            v1.19.6    v1.20.14
kube-controller-manager   v1.19.6    v1.20.14
kube-scheduler            v1.19.6    v1.20.14
kube-proxy                v1.19.6    v1.20.14
CoreDNS                   1.7.0      1.7.0
etcd                      3.4.13-0   3.4.13-0

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.20.14

_____________________________________________________________________


The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.

API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________

Follow the instruction in the plan output and run the apply.

You can now apply the upgrade by executing the following command:

kubeadm upgrade apply v1.20.14

$ sudo kubeadm upgrade apply v1.20.14
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.20.14"
[upgrade/versions] Cluster version: v1.19.6
[upgrade/versions] kubeadm version: v1.20.14
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.20.14"...
Static pod: kube-apiserver-hkm01 hash: 4da921c828ee81a7012c835bde9c346e
Static pod: kube-controller-manager-hkm01 hash: 3dd832312b02d2a3938b1cb5fa654401
Static pod: kube-scheduler-hkm01 hash: 092bf93ace9b8cbbba45a9126334ea19
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-hkm01 hash: 67efb2c21839ef84ea45ffb98ce3072b
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-12-28-13-47-00/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: etcd-hkm01 hash: 67efb2c21839ef84ea45ffb98ce3072b
Static pod: etcd-hkm01 hash: 67efb2c21839ef84ea45ffb98ce3072b
Static pod: etcd-hkm01 hash: 37d1db3f69039c3425f51e47bb70609b
[apiclient] Found 1 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests004872833"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-12-28-13-47-00/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-hkm01 hash: 4da921c828ee81a7012c835bde9c346e
Static pod: kube-apiserver-hkm01 hash: 850ff94c2a1834034737653ed97a98dd
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-12-28-13-47-00/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-hkm01 hash: 3dd832312b02d2a3938b1cb5fa654401
Static pod: kube-controller-manager-hkm01 hash: be6e92e47ac3ce1b586504ca208b8187
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-12-28-13-47-00/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-hkm01 hash: 092bf93ace9b8cbbba45a9126334ea19
Static pod: kube-scheduler-hkm01 hash: 0ae46508b5aeed56b7122644106323ce
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade/postupgrade] Applying label node-role.kubernetes.io/control-plane='' to Nodes with label node-role.kubernetes.io/master='' (deprecated)
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.20.14". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.

The control plane on the master node has been upgraded successfully.
This cluster uses flannel as its CNI plugin, but I'm not making any changes to it here.
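
If you want to be extra careful, you can check that the flannel DaemonSet is still fully rolled out after the control plane upgrade. The DaemonSet name below is the one that appears in this cluster's drain output; adjust it to your environment.

# Optional: confirm the CNI DaemonSet is healthy after the control plane upgrade
$ kubectl -n kube-system get ds kube-flannel-ds-arm64
$ kubectl -n kube-system get po -o wide | grep flannel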

Next, drain the master node.

$ kubectl get nodes
NAME    STATUS   ROLES                  AGE    VERSION
hkm01   Ready    control-plane,master   356d   v1.19.2
hkw01   Ready    <none>                 356d   v1.19.2
hkw02   Ready    <none>                 356d   v1.19.2

$ kubectl drain hkm01 --ignore-daemonsets
node/hkm01 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-arm64-stkc7, kube-system/kube-proxy-djxcw, metallb-system/metallb-speaker-ts9ss
evicting pod kube-system/coredns-74ff55c5b-ptgbn
pod/coredns-74ff55c5b-ptgbn evicted
node/hkm01 evicted

$ kubectl get nodes
NAME    STATUS                     ROLES                  AGE    VERSION
hkm01   Ready,SchedulingDisabled   control-plane,master   356d   v1.19.2
hkw01   Ready                      <none>                 356d   v1.19.2
hkw02   Ready                      <none>                 356d   v1.19.2

Upgrade kubelet and kubectl.

$ sudo apt-get update && sudo apt-get install -y --allow-change-held-packages kubelet=1.20.14-00 kubectl=1.20.14-00
Hit:1 https://download.docker.com/linux/ubuntu focal InRelease
Hit:3 http://ports.ubuntu.com/ubuntu-ports focal InRelease
Hit:2 https://packages.cloud.google.com/apt kubernetes-xenial InRelease
Hit:4 http://ports.ubuntu.com/ubuntu-ports focal-updates InRelease
Hit:5 http://ports.ubuntu.com/ubuntu-ports focal-backports InRelease
Hit:6 http://ports.ubuntu.com/ubuntu-ports focal-security InRelease
Reading package lists... Done
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
  adwaita-icon-theme at-spi2-core cpu-checker fontconfig fontconfig-config fonts-dejavu-core gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-x
  gtk-update-icon-cache hicolor-icon-theme humanity-icon-theme ibverbs-providers ipxe-qemu ipxe-qemu-256k-compat-efi-roms libaa1 libasyncns0 libatk-bridge2.0-0
  libatk1.0-0 libatk1.0-data libatspi2.0-0 libavahi-client3 libavahi-common-data libavahi-common3 libavc1394-0 libboost-iostreams1.71.0 libboost-thread1.71.0
  libbrlapi0.7 libcaca0 libcacard0 libcairo-gobject2 libcairo2 libcdparanoia0 libcolord2 libcups2 libdatrie1 libdv4 libepoxy0 libflac8 libfontconfig1
  libfreetype6 libgbm1 libgdk-pixbuf2.0-0 libgdk-pixbuf2.0-bin libgdk-pixbuf2.0-common libgraphite2-3 libgstreamer-plugins-base1.0-0
  libgstreamer-plugins-good1.0-0 libgtk-3-0 libgtk-3-bin libgtk-3-common libharfbuzz0b libibverbs1 libiec61883-0 libiscsi7 libjack-jackd2-0 libjbig0
  libjpeg-turbo8 libjpeg8 liblcms2-2 libmp3lame0 libmpg123-0 libnspr4 libnss3 libopus0 liborc-0.4-0 libpango-1.0-0 libpangocairo-1.0-0 libpangoft2-1.0-0
  libpixman-1-0 libpmem1 libpulse0 librados2 libraw1394-11 librbd1 librdmacm1 librest-0.7-0 librsvg2-2 librsvg2-common libsamplerate0 libshout3 libslirp0
  libsndfile1 libsoup-gnome2.4-1 libspeex1 libspice-server1 libtag1v5 libtag1v5-vanilla libthai-data libthai0 libtheora0 libtiff5 libtwolame0 libusbredirparser1
  libv4l-0 libv4lconvert0 libvirglrenderer1 libvisual-0.4-0 libvorbisenc2 libvpx6 libvte-2.91-0 libvte-2.91-common libwavpack1 libwayland-client0
  libwayland-cursor0 libwayland-egl1 libwayland-server0 libwebp6 libxcb-render0 libxcb-shm0 libxcomposite1 libxcursor1 libxdamage1 libxfixes3 libxi6
  libxinerama1 libxkbcommon0 libxrandr2 libxrender1 libxtst6 libxv1 ovmf qemu-block-extra qemu-efi-aarch64 qemu-efi-arm qemu-slof qemu-system-arm
  qemu-system-common qemu-system-data qemu-system-gui qemu-system-mips qemu-system-misc qemu-system-ppc qemu-system-s390x qemu-system-sparc qemu-system-x86
  qemu-utils seabios sharutils ubuntu-mono x11-common
Use 'sudo apt autoremove' to remove them.
The following held packages will be changed:
  kubectl kubelet
The following packages will be upgraded:
  kubectl kubelet
2 upgraded, 0 newly installed, 0 to remove and 142 not upgraded.
Need to get 23.2 MB of archives.
After this operation, 1255 kB of additional disk space will be used.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main arm64 kubectl arm64 1.20.14-00 [6753 kB]
Get:2 https://packages.cloud.google.com/apt kubernetes-xenial/main arm64 kubelet arm64 1.20.14-00 [16.5 MB]
Fetched 23.2 MB in 6s (4188 kB/s)
(Reading database ... 166698 files and directories currently installed.)
Preparing to unpack .../kubectl_1.20.14-00_arm64.deb ...
Unpacking kubectl (1.20.14-00) over (1.19.2-00) ...
Preparing to unpack .../kubelet_1.20.14-00_arm64.deb ...
Unpacking kubelet (1.20.14-00) over (1.19.2-00) ...
Setting up kubectl (1.20.14-00) ...
Setting up kubelet (1.20.14-00) ...

Check the versions. The master node now reports v1.20.14.

$ kubectl get node
NAME    STATUS                     ROLES                  AGE    VERSION
hkm01   Ready,SchedulingDisabled   control-plane,master   356d   v1.20.14
hkw01   Ready                      <none>                 356d   v1.19.2
hkw02   Ready                      <none>                 356d   v1.19.2

Restart the kubelet and make the node schedulable again.

$ sudo systemctl daemon-reload
$ sudo systemctl restart kubelet

$ kubectl uncordon hkm01
node/hkm01 uncordoned

$ kubectl get node
NAME    STATUS   ROLES                  AGE    VERSION
hkm01   Ready    control-plane,master   356d   v1.20.14
hkw01   Ready    <none>                 356d   v1.19.2
hkw02   Ready    <none>                 356d   v1.19.2
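
As an extra sanity check that is not part of the official steps, you can confirm that the control plane static pods are now running v1.20.14 images and that client and server versions match:

# Optional: check the control plane components and the kubectl/apiserver versions
$ kubectl -n kube-system get po -o wide | grep -e kube-apiserver -e kube-controller -e kube-scheduler -e etcd
$ kubectl version --short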

Everything went smoothly just by following the official procedure.

Upgrading the worker nodes

Next, upgrade the worker nodes. The steps follow the official procedure.
Check that the target version is available as a package and confirm the currently installed versions.
As with the master node, the target is v1.20.14.

kota@hkw01:~$ apt-cache madison kubeadm | grep 1.20.14
   kubeadm | 1.20.14-00 | https://apt.kubernetes.io kubernetes-xenial/main arm64 Packages
kota@hkw01:~$ apt-cache madison kubectl | grep 1.20.14
   kubectl | 1.20.14-00 | https://apt.kubernetes.io kubernetes-xenial/main arm64 Packages
kota@hkw01:~$ apt-cache madison kubelet | grep 1.20.14
   kubelet | 1.20.14-00 | https://apt.kubernetes.io kubernetes-xenial/main arm64 Packages

# Confirm that the packages are version-held (pinned).
$ apt-mark showhold
docker-ce
docker-ce-cli
kubeadm
kubectl
kubelet

$ apt list --installed | grep -e kubeadm -e kubelet -e kubectl
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
kubeadm/kubernetes-xenial,now 1.19.2-00 arm64 [installed,upgradable to: 1.23.1-00]
kubectl/kubernetes-xenial,now 1.19.2-00 arm64 [installed,upgradable to: 1.23.1-00]
kubelet/kubernetes-xenial,now 1.19.2-00 arm64 [installed,upgradable to: 1.23.1-00]

Upgrade kubeadm.

$ sudo apt-get update && sudo apt-get install -y --allow-change-held-packages kubeadm=1.20.14-00
[sudo] password for kota:
Hit:1 https://download.docker.com/linux/ubuntu focal InRelease
Hit:3 http://ports.ubuntu.com/ubuntu-ports focal InRelease
Hit:4 http://ports.ubuntu.com/ubuntu-ports focal-updates InRelease
Get:2 https://packages.cloud.google.com/apt kubernetes-xenial InRelease [9383 B]
Hit:5 http://ports.ubuntu.com/ubuntu-ports focal-backports InRelease
Hit:6 http://ports.ubuntu.com/ubuntu-ports focal-security InRelease
Fetched 9383 B in 4s (2106 B/s)
Reading package lists... Done
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  cri-tools
The following held packages will be changed:
  kubeadm
The following packages will be upgraded:
  cri-tools kubeadm
2 upgraded, 0 newly installed, 0 to remove and 115 not upgraded.
Need to get 16.8 MB of archives.
After this operation, 4110 kB of additional disk space will be used.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main arm64 cri-tools arm64 1.19.0-00 [10.2 MB]
Get:2 https://packages.cloud.google.com/apt kubernetes-xenial/main arm64 kubeadm arm64 1.20.14-00 [6562 kB]
Fetched 16.8 MB in 4s (3998 kB/s)
(Reading database ... 134366 files and directories currently installed.)
Preparing to unpack .../cri-tools_1.19.0-00_arm64.deb ...
Unpacking cri-tools (1.19.0-00) over (1.13.0-01) ...
Preparing to unpack .../kubeadm_1.20.14-00_arm64.deb ...
Unpacking kubeadm (1.20.14-00) over (1.19.2-00) ...
Setting up cri-tools (1.19.0-00) ...
Setting up kubeadm (1.20.14-00) ...

On a worker node, upgrade the local kubelet configuration with the following command.

$ sudo kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks
[preflight] Skipping prepull. Not a control plane node.
[upgrade] Skipping phase. Not a control plane node.
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.

Drain the worker node.

$ kubectl get node
NAME    STATUS   ROLES                  AGE    VERSION
hkm01   Ready    control-plane,master   356d   v1.20.14
hkw01   Ready    <none>                 356d   v1.19.2
hkw02   Ready    <none>                 356d   v1.19.2

$ kubectl get po -A -owide | grep hkw01
cassandra        k8ssandra-dc1-default-sts-0                                2/2     Running   0          55d    10.244.1.189     hkw01   <none>           <none>
cassandra        k8ssandra-dc1-stargate-7847f6bfb9-prprt                    1/1     Running   0          55d    10.244.1.188     hkw01   <none>           <none>
kube-system      kube-flannel-ds-arm64-q5h76                                1/1     Running   12         356d   192.168.13.102   hkw01   <none>           <none>
kube-system      kube-proxy-xt6f5                                           1/1     Running   0          46m    192.168.13.102   hkw01   <none>           <none>
kube-system      metrics-server-bfbf699c9-d5bh7                             0/1     Running   1          58d    10.244.1.133     hkw01   <none>           <none>
kube-system      metrics-server-dfb99b657-2mfhw                             1/1     Running   6          69d    10.244.1.120     hkw01   <none>           <none>
kube-system      nfs-subdir-external-provisioner-558764c4-gcq7j             1/1     Running   51         70d    10.244.1.119     hkw01   <none>           <none>
metallb-system   metallb-controller-864bcdbbb7-g5phc                        1/1     Running   3          71d    10.244.1.118     hkw01   <none>           <none>
metallb-system   metallb-speaker-9htcx                                      1/1     Running   3          71d    192.168.13.102   hkw01   <none>           <none>

$ kubectl drain hkw01 --ignore-daemonsets
node/hkw01 cordoned
error: unable to drain node "hkw01", aborting command...

There are pending nodes to be drained:
 hkw01
error: cannot delete Pods with local storage (use --delete-emptydir-data to override): cassandra/k8ssandra-dc1-default-sts-0, kube-system/metrics-server-bfbf699c9-d5bh7, kube-system/metrics-server-dfb99b657-2mfhw
kota@hkm01:~$ kubectl drain hkw01 --ignore-daemonsets --delete-emptydir-data
node/hkw01 already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-arm64-q5h76, kube-system/kube-proxy-xt6f5, metallb-system/metallb-speaker-9htcx
evicting pod metallb-system/metallb-controller-864bcdbbb7-g5phc
evicting pod cassandra/k8ssandra-dc1-default-sts-0
evicting pod kube-system/metrics-server-bfbf699c9-d5bh7
evicting pod cassandra/k8ssandra-dc1-stargate-7847f6bfb9-prprt
evicting pod kube-system/metrics-server-dfb99b657-2mfhw
evicting pod kube-system/nfs-subdir-external-provisioner-558764c4-gcq7j
pod/metallb-controller-864bcdbbb7-g5phc evicted
pod/metrics-server-dfb99b657-2mfhw evicted
pod/nfs-subdir-external-provisioner-558764c4-gcq7j evicted
pod/metrics-server-bfbf699c9-d5bh7 evicted
pod/k8ssandra-dc1-stargate-7847f6bfb9-prprt evicted
pod/k8ssandra-dc1-default-sts-0 evicted
node/hkw01 evicted

$ kubectl get node
NAME    STATUS                     ROLES                  AGE    VERSION
hkm01   Ready                      control-plane,master   356d   v1.20.14
hkw01   Ready,SchedulingDisabled   <none>                 356d   v1.19.2
hkw02   Ready                      <none>                 356d   v1.19.2

$ kubectl get po -A -owide | grep hkw01
kube-system      kube-flannel-ds-arm64-q5h76                                1/1     Running            12         356d    192.168.13.102   hkw01   <none>           <none>
kube-system      kube-proxy-xt6f5                                           1/1     Running            0          53m     192.168.13.102   hkw01   <none>           <none>
metallb-system   metallb-speaker-9htcx                                      1/1     Running            3          71d     192.168.13.102   hkw01   <none>           <none>

Upgrade kubelet and kubectl.

$ sudo apt-get update && sudo apt-get install -y --allow-change-held-packages kubelet=1.20.14-00 kubectl=1.20.14-00
[sudo] password for kota:
Hit:1 https://download.docker.com/linux/ubuntu focal InRelease
Hit:3 http://ports.ubuntu.com/ubuntu-ports focal InRelease
Hit:2 https://packages.cloud.google.com/apt kubernetes-xenial InRelease
Hit:4 http://ports.ubuntu.com/ubuntu-ports focal-updates InRelease
Hit:5 http://ports.ubuntu.com/ubuntu-ports focal-backports InRelease
Hit:6 http://ports.ubuntu.com/ubuntu-ports focal-security InRelease
Reading package lists... Done
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following held packages will be changed:
  kubectl kubelet
The following packages will be upgraded:
  kubectl kubelet
2 upgraded, 0 newly installed, 0 to remove and 114 not upgraded.
Need to get 23.2 MB of archives.
After this operation, 1255 kB of additional disk space will be used.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main arm64 kubectl arm64 1.20.14-00 [6753 kB]
Get:2 https://packages.cloud.google.com/apt kubernetes-xenial/main arm64 kubelet arm64 1.20.14-00 [16.5 MB]
Fetched 23.2 MB in 7s (3190 kB/s)
(Reading database ... 134366 files and directories currently installed.)
Preparing to unpack .../kubectl_1.20.14-00_arm64.deb ...
Unpacking kubectl (1.20.14-00) over (1.19.2-00) ...
Preparing to unpack .../kubelet_1.20.14-00_arm64.deb ...
Unpacking kubelet (1.20.14-00) over (1.19.2-00) ...
Setting up kubectl (1.20.14-00) ...
Setting up kubelet (1.20.14-00) ...

Restart the kubelet so the new version takes effect.

kota@hkw01:~$ sudo systemctl daemon-reload
kota@hkw01:~$ sudo systemctl restart kubelet

Make the worker node schedulable again.

$ kubectl uncordon hkw01
node/hkw01 uncordoned
kota@hkm01:~$ kubectl get node
NAME    STATUS   ROLES                  AGE    VERSION
hkm01   Ready    control-plane,master   356d   v1.20.14
hkw01   Ready    <none>                 356d   v1.20.14
hkw02   Ready    <none>                 356d   v1.19.2
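
Note that uncordoning only makes the node schedulable again; the pods evicted during the drain are not moved back automatically and stay wherever they were rescheduled. If you want to see what is currently running on the node:

# Optional: check which pods are currently scheduled on hkw01
$ kubectl get po -A -owide | grep hkw01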

Repeat the same steps to upgrade the second worker node (the procedure is identical, so I'll omit it here).

Bonus

The upgrade went through without any problems this time, but repeating the same work for every node and every hop is tedious, so I wrote a rough shell script to drive it.

curl -sL https://git.io/kubeadm-upgrade | bash -s <version>
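
The script itself isn't shown in this article. The idea is roughly the following; this is only a sketch of what such a per-node helper might look like (the script name, the --control-plane flag, and the version-argument format are my own conventions, assuming the same apt repository and held packages as above), not the actual script behind the link.

#!/usr/bin/env bash
# Rough sketch of a per-node kubeadm upgrade helper (not the actual script behind the link).
# Usage: ./upgrade-node.sh 1.20.14-00 [--control-plane]
set -euo pipefail

VERSION="${1:?usage: $0 <apt package version, e.g. 1.20.14-00>}"
ROLE="${2:-worker}"

# 1. Upgrade kubeadm to the target patch release.
sudo apt-get update
sudo apt-get install -y --allow-change-held-packages "kubeadm=${VERSION}"

# 2. Upgrade the node: the control plane node runs `upgrade apply`, workers run `upgrade node`.
if [ "${ROLE}" = "--control-plane" ]; then
  sudo kubeadm upgrade apply -y "v${VERSION%-*}"
else
  sudo kubeadm upgrade node
fi

# 3. Upgrade kubelet and kubectl, then restart the kubelet.
sudo apt-get install -y --allow-change-held-packages "kubelet=${VERSION}" "kubectl=${VERSION}"
sudo systemctl daemon-reload
sudo systemctl restart kubelet

Drain and uncordon still have to be run from a machine with kubectl access to the cluster, as in the steps above.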

Summary

Since minor versions can't be skipped, a big jump has to be made one version at a time, so it's better to keep the cluster reasonably up to date instead of letting it fall this far behind.
At some point I'd also like to start managing the cluster configuration with Ansible...

References

https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
