
Kubernetes 1.32 installation notes


Installed following the kubeadm setup guide:
https://kubernetes.io/ja/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

  1. Verify that the MAC address and product_uuid are unique on every node
    Checked with ip link. product_uuid does not exist on a Raspberry Pi, but I saw an article saying it works without one, so I skipped it for now.
  2. Check the required ports
    Confirmed that no other process is using the required ports with sudo lsof -i -P -n (see the check sketch below).
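
The kubeadm guide suggests netcat for the port check; for example, on the control plane (6443 is the API server port):

nc 127.0.0.1 6443 -v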
Install the containerd runtime
https://kubernetes.io/ja/docs/setup/production-environment/container-runtimes/#ipv4フォワーディングを有効化し-iptablesからブリッジされたトラフィックを見えるようにする
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# Kernel parameters required for this setup; the values persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply the kernel parameters without rebooting
sudo sysctl --system
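
To verify that the modules are loaded and the kernel parameters are applied (the checks suggested on the container-runtimes page):

lsmod | grep br_netfilter
lsmod | grep overlay
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward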

https://github.com/containerd/containerd/blob/main/docs/cri/config.md

https://github.com/containerd/containerd/blob/main/docs/getting-started.md

Installed with apt; runc was installed along with it.

$ sudo apt install containerd
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
  runc
The following NEW packages will be installed:
  containerd runc
0 upgraded, 2 newly installed, 0 to remove and 54 not upgraded.
Need to get 35.7 MB of archives.
After this operation, 148 MB of additional disk space will be used.
Do you want to continue? [Y/n]
Get:1 http://ports.ubuntu.com/ubuntu-ports noble-updates/main arm64 runc arm64 1.1.12-0ubuntu3.1 [7913 kB]
Get:2 http://ports.ubuntu.com/ubuntu-ports noble-updates/main arm64 containerd arm64 1.7.19+really1.7.12-0ubuntu4.2 [27.8 MB]
Fetched 35.7 MB in 8s (4345 kB/s)
Selecting previously unselected package runc.
(Reading database ... 130504 files and directories currently installed.)
Preparing to unpack .../runc_1.1.12-0ubuntu3.1_arm64.deb ...
Unpacking runc (1.1.12-0ubuntu3.1) ...
Selecting previously unselected package containerd.
Preparing to unpack .../containerd_1.7.19+really1.7.12-0ubuntu4.2_arm64.deb ...
Unpacking containerd (1.7.19+really1.7.12-0ubuntu4.2) ...
Setting up runc (1.1.12-0ubuntu3.1) ...
Setting up containerd (1.7.19+really1.7.12-0ubuntu4.2) ...
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /usr/lib/systemd/system/containerd.service.
Processing triggers for man-db (2.12.0-4build2) ...

Install the CNI plugins

sudo mkdir -p /opt/cni/bin
curl -LO https://github.com/containernetworking/plugins/releases/download/v1.6.1/cni-plugins-linux-arm64-v1.6.1.tgz
sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-arm64-v1.6.1.tgz

Create and modify the configuration file

sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml

In /etc/containerd/config.toml, configure runc to use the systemd cgroup driver:

sudo nano /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
sudo systemctl restart containerd
systemctl is-active containerd
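
To double-check that the cgroup driver change is in place (my own sanity check, not from the guide):

grep SystemdCgroup /etc/containerd/config.toml
# expected: SystemdCgroup = true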

Install kubeadm, kubelet, and kubectl

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
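
To confirm the installed and pinned versions:

kubeadm version
kubelet --version
kubectl version --client
apt-mark showhold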
sudo vi /etc/default/kubelet

Change the contents to the following:

KUBELET_EXTRA_ARGS=--cgroup-driver=systemd

sudo systemctl daemon-reload
sudo systemctl restart kubelet

Initialize the master node

sudo kubeadm init --node-name master --apiserver-advertise-address=192.168.11.253 --pod-network-cidr=10.244.0.0/16

Result:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.11.253:6443 --token xxxxxxxxx \
	--discovery-token-ca-cert-hash sha256:xxxxxxxxxx

Run on the master node as instructed:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Set up the worker node the same way, up through the kubeadm installation.

Join the master node created earlier with kubeadm join
https://kubernetes.io/ja/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#join-nodes

sudo kubeadm join 192.168.11.253:6443 --token xxx --discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxxx
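
If the token from kubeadm init has expired (the default TTL is 24 hours), a fresh join command can be printed on the master node:

sudo kubeadm token create --print-join-command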

Confirm on the master node that the new node has been added

$ kubectl get nodes
NAME             STATUS     ROLES           AGE    VERSION
master           NotReady   control-plane   22m    v1.32.0
yamahitsuji-xt   NotReady   <none>          3m1s   v1.32.0

Install the CNI
https://docs.tigera.io/calico/3.29/getting-started/kubernetes/quickstart

$ kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.29.1/manifests/tigera-operator.yaml
namespace/tigera-operator created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/tiers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/adminnetworkpolicies.policy.networking.k8s.io created
customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created

Create the custom resources with the IP range changed to match the Pod CIDR

$ curl https://raw.githubusercontent.com/projectcalico/calico/v3.29.1/manifests/custom-resources.yaml > calico-custom-resources.yaml
$ sed -i 's|192.168.0.0/16|10.244.0.0/16|' calico-custom-resources.yaml
$ kubectl create -f calico-custom-resources.yaml
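
After the sed, the Installation resource in calico-custom-resources.yaml should look roughly like this (paraphrased from the Calico manifest; only the cidr value differs from the upstream default):

apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
      - name: default-ipv4-ippool
        blockSize: 26
        cidr: 10.244.0.0/16  # must match --pod-network-cidr from kubeadm init
        encapsulation: VXLANCrossSubnet
        natOutgoing: Enabled
        nodeSelector: all()

Once the Calico pods are up, the nodes should move from NotReady to Ready in kubectl get nodes.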

Install ArgoCD

Add the helm repository

❯ helm repo add argo https://argoproj.github.io/argo-helm
"argo" has been added to your repositories

Create a namespace for ArgoCD and run helm install. If you want to enable external access later via Cloudflare Tunnel, set configs.params."server.insecure" to true in values.yaml (a sketch follows); by default the server uses a self-signed certificate, so external access through Cloudflare Tunnel would otherwise fail.
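
A minimal sketch of the relevant part of values.yaml (assuming the argo-cd chart's configs.params layout):

configs:
  params:
    # Serve the ArgoCD API server without TLS so the tunnel can terminate TLS itself
    server.insecure: true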

❯ kubectl create namespace argocd
namespace/argocd created
❯ helm install -n argocd argocd argo/argo-cd --version 7.7.11 -f values.yaml
NAME: argocd
LAST DEPLOYED: Tue Dec 24 13:25:30 2024
NAMESPACE: argocd
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
In order to access the server UI you have the following options:

1. kubectl port-forward service/argocd-server -n argocd 8080:443

    and then open the browser on http://localhost:8080 and accept the certificate

2. enable ingress in the values file `server.ingress.enabled` and either
      - Add the annotation for ssl passthrough: https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#option-1-ssl-passthrough
      - Set the `configs.params."server.insecure"` in the values file and terminate SSL at your ingress: https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#option-2-multiple-ingress-objects-and-hosts


After reaching the UI the first time you can login with username: admin and the random password generated during the installation. You can find the password by running:

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d

(You should delete the initial secret afterwards as suggested by the Getting Started Guide: https://argo-cd.readthedocs.io/en/stable/getting_started/#4-login-using-the-cli)

Check the initial password stored in the secret

❯ kubectl -n argocd get secret/argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
S4zW3mLF7oygV009

Log in with the initial password using the argocd CLI (this assumes the kubectl port-forward from the NOTES above is running):

❯ argocd login localhost:8080
WARNING: server certificate had error: tls: failed to verify certificate: x509: certificate signed by unknown authority. Proceed insecurely (y/n)? y
Username: admin
Password:
'admin:login' logged in successfully
Context 'localhost:8080' updated

Update the password

❯ argocd account update-password
*** Enter password of currently logged in user (admin):
*** Enter new password for user admin:
*** Confirm new password for user admin:
Password updated

Delete the secret that stored the initial password

❯ kubectl --namespace argocd delete secret/argocd-initial-admin-secret
secret "argocd-initial-admin-secret" deleted

Create a Deploy Key for argocd

❯ ssh-keygen -t ed25519 -C ""
Generating public/private ed25519 key pair.
Enter file in which to save the key (/Users/user_name/.ssh/id_ed25519): /Users/user_name/.ssh/id_ed25519_k8s_manifest

Register the public key under Settings > Deploy keys in the GitHub manifest repository

Register the repository and the private key from the ArgoCD UI.
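
Alternatively, the registration can also be done from the CLI; a sketch assuming a hypothetical repository URL:

❯ argocd repo add git@github.com:your-name/k8s-manifests.git --ssh-private-key-path ~/.ssh/id_ed25519_k8s_manifest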

Appendix

https://zenn.dev/kou_pg_0131/articles/argocd-getting-started
https://argo-cd.readthedocs.io/en/stable/getting_started/


Set up Cloudflare Tunnel

Create a tunnel and retrieve its token with the cloudflared CLI on the working Mac

❯ cloudflared tunnel create home-k8s
Tunnel credentials written to /Users/user_name/.cloudflared/1ea36dfa-e7e4-4e99-816e-e97bae39f789.json. cloudflared chose this file based on where your origin certificate was found. Keep this file secret. To revoke these credentials, delete the tunnel.

Created tunnel home-k8s with id 1ea36dfa-e7e4-4e99-816e-e97bae39f789
❯ cloudflared tunnel token home-k8s
eyJhIjxxxxxxxxxxxxxxxxxxxxxx=

Create a namespace in the Kubernetes cluster

apiVersion: v1
kind: Namespace
metadata:
  name: cloudflared
  labels:
    name: cloudflared
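
Apply it (the file name cloudflared/namespace.yaml is an assumption):

❯ kubectl apply -f cloudflared/namespace.yaml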

Store the token as a Secret

❯ kubectl create secret generic tunnel-token --from-literal=token='eyJhIjxxxxxxxxxxxxxxxxxxxxxx=' -n cloudflared
secret/tunnel-token created

Create the cloudflared Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: cloudflared
  name: cloudflared-deployment
  namespace: cloudflared
spec:
  replicas: 2
  selector:
    matchLabels:
      pod: cloudflared
  template:
    metadata:
      labels:
        pod: cloudflared
    spec:
      containers:
        - command:
            - cloudflared
            - tunnel
            - --no-autoupdate
            # In a k8s environment, the metrics server needs to listen outside the pod it runs on.
            # The address 0.0.0.0:2000 allows any pod in the namespace.
            - --metrics
            - 0.0.0.0:2000
            - run
          image: cloudflare/cloudflared:latest
          name: cloudflared
          livenessProbe:
            httpGet:
              # Cloudflared has a /ready endpoint which returns 200 if and only if
              # it has an active connection to the edge.
              path: /ready
              port: 2000
            failureThreshold: 1
            initialDelaySeconds: 10
            periodSeconds: 10
          env:
            - name: TUNNEL_TOKEN
              valueFrom:
                secretKeyRef:
                  name: tunnel-token
                  key: token
❯ kubectl apply -f cloudflared/cloudflared.yaml
deployment.apps/cloudflared-deployment created
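
To check that the connectors came up and registered with Cloudflare, inspect the pods and their logs:

❯ kubectl -n cloudflared get pods
❯ kubectl -n cloudflared logs deploy/cloudflared-deployment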

Configure the domain from the UI. For the URL, set the destination URL as seen from cloudflared.

Appendix

https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/deploy-tunnels/deployment-guides/kubernetes/
https://hatappi.blog/entry/2022/11/03/231455


Install cloudflared on the host

The cloudflared Deployment inside Kubernetes is used for proxying; cloudflared on the Kubernetes hosts is used for operational work such as ssh.

Run the command on the control plane as shown in the Cloudflare GUI's connector creation screen

curl -L --output cloudflared.deb https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-arm64.deb &&  sudo dpkg -i cloudflared.deb &&  sudo cloudflared service install eyJhIjoixxxxxxxxxxxx
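
cloudflared service install registers a systemd unit, so the host-side connector can be checked with:

systemctl status cloudflared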

Register the local IP range as a private network instead of a public hostname

Configure the client and Split Tunnels following the steps here:
https://note.com/ringocandy/n/n3c4a9e169f42


Notes

Upgrading the helm release

helm upgrade argocd argo/argo-cd -f argocd/values.yaml -n argocd
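
To verify the new chart revision afterwards:

❯ helm list -n argocd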

Publishing an application

https://developers.cloudflare.com/learning-paths/clientless-access/connect-private-applications/create-tunnel/

Create an application under the tunnel's Public Hostnames.
For the service, set the service DNS name as seen from cloudflared.
At this point the application is exposed on the internet and anyone can access it.

To restrict who can view the site, create a self-hosted Application under Access → Applications,
entering the public hostname created earlier. Then attach an Access policy so that only users who pass the authentication defined in the policy can access it.