
Running minikube on a Raspberry Pi

https://zenn.dev/asataka/scraps/f887caae728572
Before tackling the scrap above, I'll first try running minikube on a Raspberry Pi 4. This thread collects the overall notes.

Environment

pi@pi4a:~ $ cat /etc/os-release
PRETTY_NAME="Raspbian GNU/Linux 10 (buster)"
NAME="Raspbian GNU/Linux"
VERSION_ID="10"
VERSION="10 (buster)"
VERSION_CODENAME=buster
ID=raspbian
ID_LIKE=debian
HOME_URL="http://www.raspbian.org/"
SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"
pi@pi4a:~ $ uname -a
Linux pi4a 5.10.17-v7l+ #1414 SMP Fri Apr 30 13:20:47 BST 2021 armv7l GNU/Linux
pi@pi4a:~ $ free -mh
              total        used        free      shared  buff/cache   available
Mem:          3.7Gi       613Mi       1.8Gi        10Mi       1.4Gi       3.0Gi
Swap:          99Mi          0B        99Mi

Set up kubectl and minikube on the host.

For now, install the binaries as the root user.

# The arm build has no sub-variant suffix (no v6/v7 distinction)
curl -LO https://dl.k8s.io/v1.21.0/kubernetes-client-linux-arm.tar.gz
tar xvfz kubernetes-client-linux-arm.tar.gz
mv kubernetes/client/bin/kubectl /usr/local/bin/
kubectl version # sanity check

# minikube ships as a bare binary; again, no arm sub-variant
curl -LO https://github.com/kubernetes/minikube/releases/download/v1.20.0/minikube-linux-arm
chmod +x minikube-linux-arm
cp minikube-linux-arm /usr/local/bin/minikube
minikube version # sanity check
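For reference, the 32-bit Raspberry Pi OS reports armv7l (see the uname output above), which maps to the plain "arm" build used here. A small sketch of that mapping — the helper name is mine, not from any official tool:

```shell
#!/bin/sh
# Map `uname -m` output to the arch suffix used in the kubernetes/minikube
# download URLs. Hypothetical helper; names are illustrative only.
k8s_arch_for() {
    case "$1" in
        armv6l|armv7l) echo arm ;;    # 32-bit Raspberry Pi OS
        aarch64)       echo arm64 ;;  # 64-bit OS on Pi 3/4
        x86_64)        echo amd64 ;;
        *) echo "unsupported machine: $1" >&2; return 1 ;;
    esac
}

k8s_arch_for "$(uname -m)"  # on this Pi 4 (armv7l) this prints: arm
```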

# The following turned out to be necessary while troubleshooting a failing `minikube start`

# Fixes:
# Exiting due to GUEST_MISSING_CONNTRACK:
# Sorry, Kubernetes 1.20.2 requires conntrack to be installed in root's path
apt install conntrack

# Fixes: [ERROR SystemVerification]: missing required cgroups: memory
# Check the memory cgroup status
cat /proc/cgroups
# If disabled, append cgroup_enable=memory cgroup_memory=1 to /boot/cmdline.txt
vi /boot/cmdline.txt
reboot

Once the commands are installed, set up the environment as a regular user.

mkdir ~/.kube

# Set up shell completion as described by kubectl/minikube completion -h
kubectl completion bash > ~/.kube/completion.kubectl.bash.inc
minikube completion bash > ~/.kube/completion.minikube.bash.inc

printf "
# Kubernetes tools shell completion
. '$HOME/.kube/completion.kubectl.bash.inc'
. '$HOME/.kube/completion.minikube.bash.inc'
" >> $HOME/.bashrc
exec $SHELL -l

If the setup went well, minikube start succeeds, ~/.kube/config is generated, and the cluster can be operated with kubectl.

pi@pi4a:~ $ minikube start --driver=none

pi@pi4a:~ $ kubectl get nodes
NAME   STATUS   ROLES                  AGE   VERSION
pi4a   Ready    control-plane,master   31m   v1.20.2

pi@pi4a:~ $ kubectl -n kube-system get pods
NAME                           READY   STATUS    RESTARTS   AGE
coredns-74ff55c5b-5hzkn        1/1     Running   4          118s
etcd-pi4a                      1/1     Running   0          2m2s
kube-apiserver-pi4a            1/1     Running   0          2m2s
kube-controller-manager-pi4a   1/1     Running   0          2m2s
kube-proxy-gjfxn               1/1     Running   0          118s
kube-scheduler-pi4a            1/1     Running   0          2m2s
storage-provisioner            1/1     Running   0          2m7s

Because there is no arm32 image, --driver=none is used to run the k8s containers directly on the host OS's Docker. docker ps shows the many containers that get created.

pi@pi4a:~ $ docker ps
CONTAINER ID   IMAGE                  COMMAND                  CREATED         STATUS         PORTS     NAMES
0f666a81e049   38db8dddeb72           "/usr/local/bin/kube…"   3 minutes ago   Up 3 minutes             k8s_kube-proxy_kube-proxy-4jdfb_kube-system_f8c77fab-d7e3-4146-aa99-90fd524b283f_0
4fd975d533a5   db6ed25e18aa           "/storage-provisioner"   3 minutes ago   Up 3 minutes             k8s_storage-provisioner_storage-provisioner_kube-system_2ca5b9df-56b4-497f-9029-50e446ca35df_0
:
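As visible above, kubelet names each container k8s_&lt;container&gt;_&lt;pod&gt;_&lt;namespace&gt;_&lt;uid&gt;_&lt;attempt&gt;, so a container can be mapped back to its pod from the name alone. A throwaway sketch (pod_of is a name I made up):

```shell
#!/bin/sh
# Extract the pod name (3rd underscore-separated field) from a kubelet
# container name of the form k8s_<container>_<pod>_<ns>_<uid>_<attempt>.
# Illustrative only; relies on the dockershim naming convention seen above.
pod_of() { printf '%s\n' "$1" | cut -d_ -f3; }

pod_of k8s_kube-proxy_kube-proxy-4jdfb_kube-system_f8c77fab-d7e3-4146-aa99-90fd524b283f_0
# → kube-proxy-4jdfb
```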

Trying out minikube.

minikube start raised an error partway through.

Exiting due to GUEST_MISSING_CONNTRACK: Sorry, Kubernetes 1.20.2 requires conntrack to be installed in root's path

It says conntrack is required, so install it.

sudo apt install conntrack

Running minikube start again produced a different error.

:
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: missing
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: missing

stderr:
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING Swap]: running with swap on is not supported. Please disable swap
	[WARNING FileExisting-socat]: socat not found in system path
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.6. Latest validated version: 19.03
	[WARNING SystemVerification]: missing optional cgroups: hugetlb
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR SystemVerification]: missing required cgroups: memory
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

Plenty of warnings, but the fatal one is [ERROR SystemVerification]: missing required cgroups: memory.

I referred to these pages:

https://askubuntu.com/questions/1237813/enabling-memory-cgroup-in-ubuntu-20-04
https://kuromt.hatenablog.com/entry/2019/01/03/233347

I don't know cgroups well, but their status can apparently be inspected under /proc.

pi@pi4a:~ $ cat /proc/cgroups
#subsys_name	hierarchy	num_cgroups	enabled
cpuset	2	1	1
cpu	4	85	1
cpuacct	4	85	1
blkio	9	85	1
memory	0	96	0
devices	7	85	1
freezer	6	1	1
net_cls	5	1	1
perf_event	8	1	1
net_prio	5	1	1
pids	3	93	1

The subsys_name=memory entry is disabled (its enabled column is 0).
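That check can be scripted: the 4th column of /proc/cgroups is the enabled flag. A sketch with a made-up helper name, fed the memory line from above:

```shell
#!/bin/sh
# Report whether a cgroup controller is enabled, reading /proc/cgroups-style
# input (subsys_name hierarchy num_cgroups enabled) on stdin.
# Sketch only; exit 0 = enabled, non-zero = disabled or absent.
cgroup_enabled() {
    awk -v c="$1" '$1 == c { found = 1; exit !$4 }
                   END     { if (!found) exit 1 }'
}

# The memory line from this Pi before the fix (enabled column is 0):
printf 'memory\t0\t96\t0\n' | cgroup_enabled memory && echo enabled || echo disabled
# → disabled

# On the live system you would pipe the real file instead:
#   cgroup_enabled memory < /proc/cgroups
```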

Following the references, append cgroup_enable=memory cgroup_memory=1 to /boot/cmdline.txt. The whole line ends up like this:

console=serial0,115200 console=tty1 root=PARTUUID=xxxx rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait quiet splash plymouth.ignore-serial-consoles cgroup_enable=memory cgroup_memory=1
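For a repeatable setup, the same edit can be done non-interactively. A hedged sketch (enable_memory_cgroup is an illustrative name; run it against the real /boot/cmdline.txt as root): the grep guard keeps it idempotent, and since cmdline.txt must remain a single line, the flags are appended to line 1 rather than on a new line.

```shell
#!/bin/sh
# Idempotently append the cgroup flags to a cmdline.txt-style file
# (a single-line kernel command line). Function name is illustrative.
enable_memory_cgroup() {
    cmdline="$1"  # e.g. /boot/cmdline.txt (edit as root)
    if ! grep -q 'cgroup_enable=memory' "$cmdline"; then
        # the file must stay one line, hence appending to line 1 in place
        sed -i '1 s/$/ cgroup_enable=memory cgroup_memory=1/' "$cmdline"
    fi
}
```

On the Pi this would be something like `sudo sh -c '. ./cgroup.sh; enable_memory_cgroup /boot/cmdline.txt'` followed by a reboot.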

After rebooting, the subsys_name=memory entry showed up as enabled, and this time minikube start completed successfully. ~/.kube/config is generated, and kubectl works without any further configuration.

pi@pi4a:~ $ kubectl get nodes
NAME   STATUS   ROLES                  AGE   VERSION
pi4a   Ready    control-plane,master   37s   v1.20.2

The graph (a Prometheus node_exporter chart, not reproduced here) shows minikube start succeeding around 01:27, with nothing deployed on minikube yet. CPU usage rose by about 10 points and the temperature climbed from 52 °C to 57 °C.

The processes look like this. What are the api-server and kubelet doing to push the load this high?

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
 2981 root      20   0  892576 285892  59868 S  30.8   7.3   2:20.99 kube-apiserver
 3410 root      20   0 1018328  82604  55516 S  10.6   2.1   0:52.81 kubelet
 2951 root      20   0  885072  81160  52636 S   5.0   2.1   0:28.72 kube-controller
 1103 root      20   0 1027596  80320  32788 S   4.0   2.0   1:21.72 dockerd
 2965 root      20   0  816380  34088  16252 S   2.0   0.9   0:21.28 etcd
 2963 root      20   0  830404  36768  26952 S   1.3   0.9   0:22.78 kube-scheduler
 6528 pi        20   0   10428   3036   2624 R   1.0   0.1   0:00.16 top
  491 root      20   0  980552  38388  18368 S   0.3   1.0   0:02.37 containerd
    1 root      20   0   34968   8368   6420 S   0.0   0.2   0:08.68 systemd

coredns is not coming up.

pi@pi4a:~ $ kubectl -n kube-system get pods
NAME                           READY   STATUS             RESTARTS   AGE
coredns-74ff55c5b-nhb46        0/1     CrashLoopBackOff   10         30m
etcd-pi4a                      1/1     Running            0          31m
kube-apiserver-pi4a            1/1     Running            0          31m
kube-controller-manager-pi4a   1/1     Running            0          31m
kube-proxy-fvzk5               1/1     Running            0          30m
kube-scheduler-pi4a            1/1     Running            0          31m
storage-provisioner            1/1     Running            0          31m

pi@pi4a:~ $ kubectl -n kube-system logs coredns-74ff55c5b-nhb46
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.7.0
linux/arm, go1.14.4, f59c03d
[FATAL] plugin/loop: Loop (127.0.0.1:53960 -> :53) detected for zone ".", see https://coredns.io/plugins/loop#troubleshooting. Query: "HINFO 188532551.961968404."

Adding access-control: 10.0.0.0/8 allow_snoop to the local unbound.conf made it work, so this was a problem specific to my environment; with a sane DNS setup on the host OS you would never hit it.
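Per the troubleshooting page linked in the log, the loop typically means the resolv.conf that CoreDNS forwards to points back at a loopback resolver (such as the local unbound here). A quick hedged check — has_loopback_ns is a name I made up:

```shell
#!/bin/sh
# Detect loopback nameserver entries in resolv.conf-style input on stdin;
# these are the classic trigger for the CoreDNS loop plugin. Sketch only.
has_loopback_ns() {
    grep -Eq '^[[:space:]]*nameserver[[:space:]]+(127\.|::1)'
}

printf 'nameserver 127.0.0.1\n' | has_loopback_ns && echo "loopback resolver: possible CoreDNS loop"
# → loopback resolver: possible CoreDNS loop

# On the live host you would check the real file:
#   has_loopback_ns < /etc/resolv.conf
```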

pi@pi4a:~ $ kubectl -n kube-system get pods
NAME                           READY   STATUS    RESTARTS   AGE
coredns-74ff55c5b-5hzkn        1/1     Running   4          118s
etcd-pi4a                      1/1     Running   0          2m2s
kube-apiserver-pi4a            1/1     Running   0          2m2s
kube-controller-manager-pi4a   1/1     Running   0          2m2s
kube-proxy-gjfxn               1/1     Running   0          118s
kube-scheduler-pi4a            1/1     Running   0          2m2s
storage-provisioner            1/1     Running   0          2m7s

With that, all the kube-system pods are running.

minikube start appears to be repeating attempts, so adding the right option should shorten the startup.

pi@pi4a:~ $ minikube start
😄  minikube v1.20.0 on Raspbian 10.9 (arm)
✨  Automatically selected the docker driver. Other choices: none, ssh
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
E0512 02:35:56.489619   22437 cache.go:189] Error downloading kic artifacts:  failed to download kic base image or any fallback image
🔥  Creating docker container (CPUs=2, Memory=2200MB) ...
🤦  StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for minikube container: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -d /var/lib: exit status 125
stdout:

stderr:
Unable to find image 'gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e' locally
gcr.io/k8s-minikube/kicbase@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e: Pulling from k8s-minikube/kicbase
docker: no matching manifest for linux/arm/v7 in the manifest list entries.
See 'docker run --help'.

🤷  docker "minikube" container is missing, will recreate.
🔥  Creating docker container (CPUs=2, Memory=2200MB) ...
😿  Failed to start docker container. Running "minikube delete" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for minikube container: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -d /var/lib: exit status 125
stdout:

stderr:
Unable to find image 'gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e' locally
gcr.io/k8s-minikube/kicbase@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e: Pulling from k8s-minikube/kicbase
docker: no matching manifest for linux/arm/v7 in the manifest list entries.
See 'docker run --help'.

❗  Startup with docker driver failed, trying with alternate driver none: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for minikube container: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -d /var/lib: exit status 125
stdout:

stderr:
Unable to find image 'gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e' locally
gcr.io/k8s-minikube/kicbase@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e: Pulling from k8s-minikube/kicbase
docker: no matching manifest for linux/arm/v7 in the manifest list entries.
See 'docker run --help'.

🔥  Deleting "minikube" in docker ...
🔥  Removing /home/pi/.minikube/machines/minikube ...
💀  Removed all traces of the "minikube" cluster.
👍  Starting control plane node minikube in cluster minikube
E0512 02:36:48.017822   22437 cache.go:189] Error downloading kic artifacts:  failed to download kic base image or any fallback image
🤹  Running on localhost (CPUs=4, Memory=3827MB, Disk=29645MB) ...
ℹ️  OS release is Raspbian GNU/Linux 10 (buster)
🐳  Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🤹  Configuring local host environment ...

❗  The 'none' driver is designed for experts who need to integrate with an existing VM
💡  Most users should use the newer 'docker' driver instead, which does not require root!
📘  For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/

❗  kubectl and minikube configuration will be stored in /home/pi
❗  To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:

    ▪ sudo mv /home/pi/.kube /home/pi/.minikube $HOME
    ▪ sudo chown -R $USER $HOME/.kube $HOME/.minikube

💡  This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
pi@pi4a:~ $

There is no linux/arm/v7 build of gcr.io/k8s-minikube/kicbase:v0.0.22, so minikube apparently fell back to the none driver and recreated the cluster.

Next time I create a cluster I want to try --driver=none from the start and see if it completes faster.

https://minikube.sigs.k8s.io/docs/drivers/

That said — and this is already the case now — with --driver=none everything is created locally (bare-metal), so docker ps shows the k8s containers as-is. Which is interesting in its own way.

Adding --driver=none did make minikube start finish faster.

pi@pi4a:~ $ minikube start --driver=none
😄  minikube v1.20.0 on Raspbian 10.9 (arm)
✨  Using the none driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🤹  Running on localhost (CPUs=4, Memory=3827MB, Disk=29645MB) ...
ℹ️  OS release is Raspbian GNU/Linux 10 (buster)
🐳  Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🤹  Configuring local host environment ...

❗  The 'none' driver is designed for experts who need to integrate with an existing VM
💡  Most users should use the newer 'docker' driver instead, which does not require root!
📘  For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/

❗  kubectl and minikube configuration will be stored in /home/pi
❗  To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:

    ▪ sudo mv /home/pi/.kube /home/pi/.minikube $HOME
    ▪ sudo chown -R $USER $HOME/.kube $HOME/.minikube

💡  This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
pi@pi4a:~ $

Which is better, minikube or kind?
