MicroK8s Self-Study 2: Exposing an External IP Address to Access an Application in the Cluster
I haven't actually read through this one yet, but let's give it a go.
Typing kubectl through the lxc command every time has gotten tedious, so let's set up an alias first.
$ alias kubectl='lxc exec mk8s1 -- kubectl'
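Note the alias only lives in the current shell; to make it stick, one option (assuming bash) is to append it to ~/.bashrc:
$ echo "alias kubectl='lxc exec mk8s1 -- kubectl'" >> ~/.bashrc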
Checking the Deployment
$ kubectl apply -f https://k8s.io/examples/service/load-balancer-example.yaml
deployment.apps/hello-world created
$ kubectl get deployments hello-world
NAME READY UP-TO-DATE AVAILABLE AGE
hello-world 5/5 5 5 2m24s
$ kubectl describe deployments hello-world
Name: hello-world
Namespace: default
CreationTimestamp: Mon, 19 Apr 2021 02:24:05 +0000
Labels: app.kubernetes.io/name=load-balancer-example
Annotations: deployment.kubernetes.io/revision: 1
Selector: app.kubernetes.io/name=load-balancer-example
Replicas: 5 desired | 5 updated | 5 total | 5 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app.kubernetes.io/name=load-balancer-example
Containers:
hello-world:
Image: gcr.io/google-samples/node-hello:1.0
Port: 8080/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: hello-world-6df5659cb7 (5/5 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 3m14s deployment-controller Scaled up replica set hello-world-6df5659cb7 to 5
Checking the ReplicaSet
$ kubectl get replicasets
NAME DESIRED CURRENT READY AGE
hello-world-6df5659cb7 5 5 5 4m46s
$ kubectl describe replicasets
Name: hello-world-6df5659cb7
Namespace: default
Selector: app.kubernetes.io/name=load-balancer-example,pod-template-hash=6df5659cb7
Labels: app.kubernetes.io/name=load-balancer-example
pod-template-hash=6df5659cb7
Annotations: deployment.kubernetes.io/desired-replicas: 5
deployment.kubernetes.io/max-replicas: 7
deployment.kubernetes.io/revision: 1
Controlled By: Deployment/hello-world
Replicas: 5 current / 5 desired
Pods Status: 5 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: app.kubernetes.io/name=load-balancer-example
pod-template-hash=6df5659cb7
Containers:
hello-world:
Image: gcr.io/google-samples/node-hello:1.0
Port: 8080/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 4m52s replicaset-controller Created pod: hello-world-6df5659cb7-t4nt2
Normal SuccessfulCreate 4m52s replicaset-controller Created pod: hello-world-6df5659cb7-mgxvw
Normal SuccessfulCreate 4m52s replicaset-controller Created pod: hello-world-6df5659cb7-kfzj6
Normal SuccessfulCreate 4m52s replicaset-controller Created pod: hello-world-6df5659cb7-nkvp2
Normal SuccessfulCreate 4m52s replicaset-controller Created pod: hello-world-6df5659cb7-gqfvm
This should be all it takes to expose it, but...
the external IP stays stuck at <pending> and never changes...
Does this just not work on microk8s?
$ kubectl expose deployment hello-world --type=LoadBalancer --name=my-service
service/my-service exposed
$ kubectl get services my-service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-service LoadBalancer 10.152.183.169 <pending> 8080:30296/TCP 13s
Apparently enabling the microk8s addon called metallb fixes this. Makes sense: with no cloud provider around, nothing exists to allocate external IPs for type=LoadBalancer services, and MetalLB fills that role.
While I'm at it, the microk8s command deserves an alias too:
$ alias microk8s='lxc exec mk8s1 -- microk8s'
$ microk8s enable metallb
Enabling MetalLB
Enter each IP address range delimited by comma (e.g. '10.64.140.43-10.64.140.49,192.168.0.105-192.168.0.111'): 10.116.214.10-10.116.214.50
Applying Metallb manifest
namespace/metallb-system created
secret/memberlist created
podsecuritypolicy.policy/controller created
podsecuritypolicy.policy/speaker created
serviceaccount/controller created
serviceaccount/speaker created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
role.rbac.authorization.k8s.io/config-watcher created
role.rbac.authorization.k8s.io/pod-lister created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/config-watcher created
rolebinding.rbac.authorization.k8s.io/pod-lister created
daemonset.apps/speaker created
deployment.apps/controller created
configmap/config created
MetalLB is enabled
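To double-check the pool MetalLB was given, you can dump the ConfigMap the addon just created (the configmap/config in the output above); the range entered at the prompt should show up under an address-pool:
$ kubectl get configmap config -n metallb-system -o yaml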
$ kubectl get service,pod -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 2d22h
default service/my-service LoadBalancer 10.152.183.169 10.116.214.10 8080:30296/TCP 11m
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/calico-kube-controllers-847c8c99d-5tcwh 1/1 Running 0 2d22h
kube-system pod/calico-node-g6tfl 1/1 Running 1 2d22h
default pod/hello-world-6df5659cb7-nkvp2 1/1 Running 0 18m
default pod/hello-world-6df5659cb7-kfzj6 1/1 Running 0 18m
default pod/hello-world-6df5659cb7-gqfvm 1/1 Running 0 18m
default pod/hello-world-6df5659cb7-t4nt2 1/1 Running 0 18m
default pod/hello-world-6df5659cb7-mgxvw 1/1 Running 0 18m
metallb-system pod/speaker-5knwj 1/1 Running 0 2m20s
metallb-system pod/controller-559b68bfd8-t94sf 1/1 Running 0 2m20s
Oh nice, that worked.
Come to think of it, to make this easy to reach from the host OS I just helped myself to an lxc address range that looked unused; I'd better sort that out properly or it'll come back to bite me later.
$ lxc list
+-------+---------+-----------------------------+----------------------------------------------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+-------+---------+-----------------------------+----------------------------------------------+-----------+-----------+
| mk8s1 | RUNNING | 10.116.214.130 (eth0) | fd42:d878:2ae:7d9e:216:3eff:fe7b:6039 (eth0) | CONTAINER | 0 |
| | | 10.1.238.128 (vxlan.calico) | | | |
+-------+---------+-----------------------------+----------------------------------------------+-----------+-----------+
| mk8s2 | RUNNING | 10.116.214.158 (eth0) | fd42:d878:2ae:7d9e:216:3eff:fe19:4663 (eth0) | CONTAINER | 0 |
| | | 10.1.115.128 (vxlan.calico) | | | |
+-------+---------+-----------------------------+----------------------------------------------+-----------+-----------+
$ curl http://10.116.214.10:8080/
Hello Kubernetes!
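As an extra sanity check (not part of the tutorial), the service's endpoints should list all five pod IPs sitting behind that external address:
$ kubectl get endpoints my-service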
So does this mean that even with more nodes it keeps balancing behind a single IP?
It had better, otherwise it's no different from when I exposed a port with --type=NodePort last time.
Let's add another node to check the distribution.
That said, the microk8s join step takes ages; I wonder if there's some way to speed it up...
$ lxc launch mk8s mk8s2 && lxc exec mk8s2 -- microk8s status --wait-ready | head -n4
Creating mk8s2
Starting mk8s2
microk8s is running
high-availability: no
datastore master nodes: 127.0.0.1:19001
datastore standby nodes: none
$ microk8s add-node | tail -n1 | xargs lxc exec mk8s2 --
Contacting cluster at 10.116.214.130
Waiting for this node to finish joining the cluster. ..
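To unpack that one-liner: microk8s add-node prints a few candidate join commands, tail -n1 keeps the last of them, and xargs runs it inside mk8s2. Done by hand it would look roughly like this (the <token> is a placeholder for whatever add-node actually prints):
$ microk8s add-node
$ lxc exec mk8s2 -- microk8s join 10.116.214.130:25000/<token>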
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
mk8s1 Ready <none> 2d22h v1.20.5-34+40f5951bd9888a 10.116.214.130 <none> Ubuntu 20.04.2 LTS 5.4.0-71-generic containerd://1.3.7
mk8s2 Ready <none> 88s v1.20.5-34+40f5951bd9888a 10.116.214.251 <none> Ubuntu 20.04.2 LTS 5.4.0-71-generic containerd://1.3.7
Now I'd like to rebalance the existing pods onto the new node, but there doesn't seem to be a command for that?
For now I'll just shuffle them around by changing the replica count.
Come to think of it, microbot would be handier for checking the distribution anyway...
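As far as I know, stock Kubernetes has no dedicated rebalance command (that's what the separate descheduler project is for). Besides the scale-down/up trick below, a lighter alternative is a rolling restart, which recreates every pod and lets the scheduler spread them out:
$ kubectl rollout restart deployment hello-world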
$ kubectl scale --replicas=1 deployment hello-world
deployment.apps/hello-world scaled
$ kubectl scale --replicas=5 deployment hello-world
deployment.apps/hello-world scaled
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
hello-world-6df5659cb7-nkvp2 1/1 Running 0 45m 10.1.238.131 mk8s1 <none> <none>
hello-world-6df5659cb7-kfzj6 1/1 Terminating 0 45m 10.1.238.132 mk8s1 <none> <none>
hello-world-6df5659cb7-gqfvm 1/1 Terminating 0 45m 10.1.238.133 mk8s1 <none> <none>
hello-world-6df5659cb7-t4nt2 1/1 Terminating 0 45m 10.1.238.134 mk8s1 <none> <none>
hello-world-6df5659cb7-mgxvw 1/1 Terminating 0 45m 10.1.238.135 mk8s1 <none> <none>
hello-world-6df5659cb7-64vb9 0/1 ContainerCreating 0 3s <none> mk8s1 <none> <none>
hello-world-6df5659cb7-xk5n2 0/1 ContainerCreating 0 3s <none> mk8s2 <none> <none>
hello-world-6df5659cb7-xpdd2 0/1 ContainerCreating 0 3s <none> mk8s2 <none> <none>
hello-world-6df5659cb7-66dcr 0/1 ContainerCreating 0 3s <none> mk8s2 <none> <none>
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
hello-world-6df5659cb7-nkvp2 1/1 Running 0 47m 10.1.238.131 mk8s1 <none> <none>
hello-world-6df5659cb7-64vb9 1/1 Running 0 2m18s 10.1.238.137 mk8s1 <none> <none>
hello-world-6df5659cb7-xk5n2 1/1 Running 0 2m18s 10.1.115.129 mk8s2 <none> <none>
hello-world-6df5659cb7-xpdd2 1/1 Running 0 2m18s 10.1.115.130 mk8s2 <none> <none>
hello-world-6df5659cb7-66dcr 1/1 Running 0 2m18s 10.1.115.131 mk8s2 <none> <none>
Let's tear it all down and redo this with the microbot version.
$ kubectl delete services my-service
service "my-service" deleted
$ kubectl delete deployment hello-world
deployment.apps "hello-world" deleted
I put a microbot version together and uploaded it as a gist, so starting over from the top:
$ kubectl apply -f https://git.io/JOzGI
deployment.apps/microbot created
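For reference, the gist is essentially the load-balancer-example manifest with the image and labels swapped. Judging from the describe output below, it should look roughly like this (my reconstruction, not the gist verbatim):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: microbot
  labels:
    app.kubernetes.io/name: microbot-load-balancer-example
spec:
  replicas: 5
  selector:
    matchLabels:
      app.kubernetes.io/name: microbot-load-balancer-example
  template:
    metadata:
      labels:
        app.kubernetes.io/name: microbot-load-balancer-example
    spec:
      containers:
      - name: microbot
        image: dontrebootme/microbot:v1
        ports:
        - containerPort: 80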
$ kubectl get deployments microbot
NAME READY UP-TO-DATE AVAILABLE AGE
microbot 5/5 5 5 49s
$ kubectl describe deployments microbot
Name: microbot
Namespace: default
CreationTimestamp: Mon, 19 Apr 2021 04:21:34 +0000
Labels: app.kubernetes.io/name=microbot-load-balancer-example
Annotations: deployment.kubernetes.io/revision: 1
Selector: app.kubernetes.io/name=microbot-load-balancer-example
Replicas: 5 desired | 5 updated | 5 total | 5 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app.kubernetes.io/name=microbot-load-balancer-example
Containers:
microbot:
Image: dontrebootme/microbot:v1
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: microbot-5cd654fccc (5/5 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 64s deployment-controller Scaled up replica set microbot-5cd654fccc to 5
$ kubectl get replicasets
NAME DESIRED CURRENT READY AGE
microbot-5cd654fccc 5 5 5 90s
$ kubectl describe replicasets
Name: microbot-5cd654fccc
Namespace: default
Selector: app.kubernetes.io/name=microbot-load-balancer-example,pod-template-hash=5cd654fccc
Labels: app.kubernetes.io/name=microbot-load-balancer-example
pod-template-hash=5cd654fccc
Annotations: deployment.kubernetes.io/desired-replicas: 5
deployment.kubernetes.io/max-replicas: 7
deployment.kubernetes.io/revision: 1
Controlled By: Deployment/microbot
Replicas: 5 current / 5 desired
Pods Status: 5 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: app.kubernetes.io/name=microbot-load-balancer-example
pod-template-hash=5cd654fccc
Containers:
microbot:
Image: dontrebootme/microbot:v1
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 97s replicaset-controller Created pod: microbot-5cd654fccc-gc2xr
Normal SuccessfulCreate 97s replicaset-controller Created pod: microbot-5cd654fccc-nvgkt
Normal SuccessfulCreate 97s replicaset-controller Created pod: microbot-5cd654fccc-4rdxl
Normal SuccessfulCreate 97s replicaset-controller Created pod: microbot-5cd654fccc-t7tdn
Normal SuccessfulCreate 97s replicaset-controller Created pod: microbot-5cd654fccc-txjdc
Assign the external IP:
$ kubectl expose deployment microbot --type=LoadBalancer --name=microbot
service/microbot exposed
$ kubectl get service,pod -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 2d23h
default service/microbot LoadBalancer 10.152.183.249 10.116.214.10 80:32575/TCP 9s
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/calico-kube-controllers-847c8c99d-5tcwh 1/1 Running 0 2d23h
metallb-system pod/speaker-5knwj 1/1 Running 0 104m
metallb-system pod/controller-559b68bfd8-t94sf 1/1 Running 0 104m
kube-system pod/calico-node-7fhjp 1/1 Running 0 80m
metallb-system pod/speaker-tgwp9 1/1 Running 0 78m
kube-system pod/calico-node-hdfcp 1/1 Running 0 78m
default pod/microbot-5cd654fccc-4rdxl 1/1 Running 0 2m57s
default pod/microbot-5cd654fccc-t7tdn 1/1 Running 0 2m57s
default pod/microbot-5cd654fccc-gc2xr 1/1 Running 0 2m57s
default pod/microbot-5cd654fccc-txjdc 1/1 Running 0 2m57s
default pod/microbot-5cd654fccc-nvgkt 1/1 Running 0 2m57s
Hmm, only one pod landed on mk8s1; not great...
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
microbot-5cd654fccc-4rdxl 1/1 Running 0 8m16s 10.1.238.138 mk8s1 <none> <none>
microbot-5cd654fccc-t7tdn 1/1 Running 0 8m16s 10.1.115.132 mk8s2 <none> <none>
microbot-5cd654fccc-gc2xr 1/1 Running 0 8m16s 10.1.115.133 mk8s2 <none> <none>
microbot-5cd654fccc-txjdc 1/1 Running 0 8m16s 10.1.115.134 mk8s2 <none> <none>
microbot-5cd654fccc-nvgkt 1/1 Running 0 8m16s 10.1.115.135 mk8s2 <none> <none>
In my experience, scaling the replicas down and back up should even this out.
$ kubectl scale --replicas=2 deployment microbot
deployment.apps/microbot scaled
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
microbot-5cd654fccc-4rdxl 1/1 Running 0 9m43s 10.1.238.138 mk8s1 <none> <none>
microbot-5cd654fccc-t7tdn 1/1 Running 0 9m43s 10.1.115.132 mk8s2 <none> <none>
microbot-5cd654fccc-nvgkt 1/1 Terminating 0 9m43s 10.1.115.135 mk8s2 <none> <none>
microbot-5cd654fccc-txjdc 1/1 Terminating 0 9m43s 10.1.115.134 mk8s2 <none> <none>
microbot-5cd654fccc-gc2xr 1/1 Terminating 0 9m43s 10.1.115.133 mk8s2 <none> <none>
$ kubectl scale --replicas=5 deployment microbot
deployment.apps/microbot scaled
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
microbot-5cd654fccc-4rdxl 1/1 Running 0 10m 10.1.238.138 mk8s1 <none> <none>
microbot-5cd654fccc-t7tdn 1/1 Running 0 10m 10.1.115.132 mk8s2 <none> <none>
microbot-5cd654fccc-4tr69 1/1 Running 0 61s 10.1.115.136 mk8s2 <none> <none>
microbot-5cd654fccc-992w7 1/1 Running 0 61s 10.1.238.139 mk8s1 <none> <none>
microbot-5cd654fccc-cj9c6 1/1 Running 0 61s 10.1.238.140 mk8s1 <none> <none>
A two/three split across the nodes now. Looking good.
$ for i in $(seq 100);do w3m -dump http://10.116.214.10/ | grep hostname ; done | sort | uniq -c
20 Container hostname: microbot-5cd654fccc-4rdxl
17 Container hostname: microbot-5cd654fccc-4tr69
15 Container hostname: microbot-5cd654fccc-992w7
23 Container hostname: microbot-5cd654fccc-cj9c6
25 Container hostname: microbot-5cd654fccc-t7tdn
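If w3m isn't installed, the same check works with plain curl; the matched lines keep their HTML tags, but sort | uniq -c groups them per pod all the same:
$ for i in $(seq 100); do curl -s http://10.116.214.10/ | grep hostname; done | sort | uniq -c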
Next, let's try taking the nodes down one at a time. (I stopped mk8s1 first.)
$ lxc exec mk8s2 -- kubectl get nodes
Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
An error 🤣
Maybe this is because HA is disabled when there are only two nodes?
Let's start mk8s1 back up and check.
$ lxc start mk8s1
$ microk8s status --wait-ready | head -n4
microk8s is running
high-availability: no
datastore master nodes: 10.116.214.130:19001
datastore standby nodes: none
$ lxc exec mk8s2 -- kubectl get nodes
NAME STATUS ROLES AGE VERSION
mk8s2 Ready <none> 95m v1.20.5-34+40f5951bd9888a
mk8s1 Ready <none> 3d v1.20.5-34+40f5951bd9888a
There's only one datastore master node; with just two nodes in the cluster, HA apparently won't kick in (microk8s HA needs at least three nodes).
Alright then, let's make it a three-node cluster.
$ lxc launch mk8s mk8s3 && lxc exec mk8s3 -- microk8s status --wait-ready | head -n4
Creating mk8s3
Starting mk8s3
microk8s is running
high-availability: no
datastore master nodes: 127.0.0.1:19001
datastore standby nodes: none
$ microk8s add-node | tail -n1 | xargs lxc exec mk8s3 --
Contacting cluster at 10.116.214.130
Waiting for this node to finish joining the cluster. ..
$ microk8s status --wait-ready | head -n4
microk8s is running
high-availability: yes
datastore master nodes: 10.116.214.130:19001 10.116.214.251:19001 10.116.214.160:19001
datastore standby nodes: none
Yep, HA is enabled now.
Let's spread the pods across all three nodes.
$ kubectl scale --replicas=4 deployment microbot
deployment.apps/microbot scaled
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
microbot-5cd654fccc-t7tdn 1/1 Running 0 26m 10.1.115.132 mk8s2 <none> <none>
microbot-5cd654fccc-4tr69 1/1 Running 0 16m 10.1.115.136 mk8s2 <none> <none>
microbot-5cd654fccc-cj9c6 1/1 Running 1 16m 10.1.238.144 mk8s1 <none> <none>
microbot-5cd654fccc-4rdxl 1/1 Running 1 26m 10.1.238.145 mk8s1 <none> <none>
microbot-5cd654fccc-992w7 1/1 Terminating 1 16m 10.1.238.143 mk8s1 <none> <none>
$ kubectl scale --replicas=6 deployment microbot
deployment.apps/microbot scaled
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
microbot-5cd654fccc-t7tdn 1/1 Running 0 26m 10.1.115.132 mk8s2 <none> <none>
microbot-5cd654fccc-4tr69 1/1 Running 0 17m 10.1.115.136 mk8s2 <none> <none>
microbot-5cd654fccc-cj9c6 1/1 Running 1 17m 10.1.238.144 mk8s1 <none> <none>
microbot-5cd654fccc-4rdxl 1/1 Running 1 26m 10.1.238.145 mk8s1 <none> <none>
microbot-5cd654fccc-jwqks 1/1 Running 0 26s 10.1.217.193 mk8s3 <none> <none>
microbot-5cd654fccc-q52nl 1/1 Running 0 26s 10.1.217.194 mk8s3 <none> <none>
$ for i in $(seq 100);do w3m -dump http://10.116.214.10/ | grep hostname ; done | sort | uniq -c
11 Container hostname: microbot-5cd654fccc-4rdxl
16 Container hostname: microbot-5cd654fccc-4tr69
16 Container hostname: microbot-5cd654fccc-cj9c6
19 Container hostname: microbot-5cd654fccc-jwqks
20 Container hostname: microbot-5cd654fccc-q52nl
18 Container hostname: microbot-5cd654fccc-t7tdn
Nice, that got them distributed two per node.
I'm starting to get a feel for how replicas move:
when scaling down, pods are removed from the nodes running more of them (less headroom?), and when scaling up, new ones land on the nodes running fewer (more headroom?).
Right, back to the plan: stop one node.
$ lxc stop mk8s1
$ lxc exec mk8s2 -- kubectl get nodes
NAME STATUS ROLES AGE VERSION
mk8s3 Ready <none> 11m v1.20.5-34+40f5951bd9888a
mk8s2 Ready <none> 111m v1.20.5-34+40f5951bd9888a
mk8s1 NotReady <none> 3d v1.20.5-34+40f5951bd9888a
$ lxc exec mk8s2 -- kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
microbot-5cd654fccc-t7tdn 1/1 Running 0 35m 10.1.115.132 mk8s2 <none> <none>
microbot-5cd654fccc-4tr69 1/1 Running 0 26m 10.1.115.136 mk8s2 <none> <none>
microbot-5cd654fccc-jwqks 1/1 Running 0 9m29s 10.1.217.193 mk8s3 <none> <none>
microbot-5cd654fccc-q52nl 1/1 Running 0 9m29s 10.1.217.194 mk8s3 <none> <none>
microbot-5cd654fccc-4rdxl 1/1 Running 1 35m 10.1.238.145 mk8s1 <none> <none>
microbot-5cd654fccc-cj9c6 1/1 Running 1 26m 10.1.238.144 mk8s1 <none> <none>
$ for i in $(seq 100);do w3m -dump http://10.116.214.10/ | grep hostname ; done | sort | uniq -c
26 Container hostname: microbot-5cd654fccc-4tr69
28 Container hostname: microbot-5cd654fccc-jwqks
19 Container hostname: microbot-5cd654fccc-q52nl
27 Container hostname: microbot-5cd654fccc-t7tdn
Only four pods respond now, but every request still succeeds.
From what I saw last time, the deployment would rebalance back to six pods if I waited (by default, pods on a NotReady node get evicted after about five minutes), but this is just a load balancer check, so no need to go that far.
Let's bring the remaining nodes up and down in rotation.
First, swap 1 and 2.
$ lxc start mk8s1
$ lxc stop mk8s2
$ microk8s status | head -n4
microk8s is running
high-availability: yes
datastore master nodes: 10.116.214.130:19001 10.116.214.251:19001 10.116.214.160:19001
datastore standby nodes: none
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
mk8s2 NotReady <none> 119m v1.20.5-34+40f5951bd9888a
mk8s3 Ready <none> 20m v1.20.5-34+40f5951bd9888a
mk8s1 Ready <none> 3d v1.20.5-34+40f5951bd9888a
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
microbot-5cd654fccc-jwqks 1/1 Running 0 17m 10.1.217.193 mk8s3 <none> <none>
microbot-5cd654fccc-q52nl 1/1 Running 0 17m 10.1.217.194 mk8s3 <none> <none>
microbot-5cd654fccc-cj9c6 1/1 Running 2 34m 10.1.238.149 mk8s1 <none> <none>
microbot-5cd654fccc-4rdxl 1/1 Running 2 44m 10.1.238.148 mk8s1 <none> <none>
microbot-5cd654fccc-4tr69 1/1 Terminating 0 34m 10.1.115.136 mk8s2 <none> <none>
microbot-5cd654fccc-t7tdn 1/1 Terminating 0 44m 10.1.115.132 mk8s2 <none> <none>
microbot-5cd654fccc-7mbh9 1/1 Running 0 33s 10.1.217.196 mk8s3 <none> <none>
microbot-5cd654fccc-54kf4 1/1 Running 0 33s 10.1.217.195 mk8s3 <none> <none>
$ for i in $(seq 100);do w3m -dump http://10.116.214.10/ | grep hostname ; done | sort | uniq -c
18 Container hostname: microbot-5cd654fccc-4rdxl
19 Container hostname: microbot-5cd654fccc-54kf4
14 Container hostname: microbot-5cd654fccc-7mbh9
14 Container hostname: microbot-5cd654fccc-cj9c6
14 Container hostname: microbot-5cd654fccc-jwqks
21 Container hostname: microbot-5cd654fccc-q52nl
This time all six pods came back up cleanly.
Now swap 2 and 3.
$ lxc start mk8s2
$ lxc stop mk8s3
$ kubectl get nodes
Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
$ lxc exec mk8s2 -- kubectl get nodes
The connection to the server 127.0.0.1:16443 was refused - did you specify the right host or port?
That looks broken 🤔
My guess: back when those six pods were coming up, the cluster had dropped to a two-node configuration and HA was lost.
Anyway, starting the node I had just stopped (mk8s3) back up seems to have recovered things.
So presumably that node had ended up as the datastore master around the time HA dropped.
$ lxc start mk8s3
$ lxc exec mk8s3 -- microk8s status --wait-ready
microk8s is running
high-availability: yes
datastore master nodes: 10.116.214.130:19001 10.116.214.251:19001 10.116.214.160:19001
datastore standby nodes: none
Take two:
swap 2 and 3 again.
$ lxc stop mk8s3
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
mk8s1 Ready <none> 3d v1.20.5-34+40f5951bd9888a 10.116.214.130 <none> Ubuntu 20.04.2 LTS 5.4.0-71-generic containerd://1.3.7
mk8s2 Ready <none> 135m v1.20.5-34+40f5951bd9888a 10.116.214.251 <none> Ubuntu 20.04.2 LTS 5.4.0-71-generic containerd://1.3.7
mk8s3 NotReady <none> 35m v1.20.5-34+40f5951bd9888a 10.116.214.160 <none> Ubuntu 20.04.2 LTS 5.4.0-71-generic containerd://1.3.7
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
microbot-5cd654fccc-cj9c6 1/1 Running 2 49m 10.1.238.149 mk8s1 <none> <none>
microbot-5cd654fccc-4rdxl 1/1 Running 2 59m 10.1.238.148 mk8s1 <none> <none>
microbot-5cd654fccc-q52nl 1/1 Running 1 33m 10.1.217.197 mk8s3 <none> <none>
microbot-5cd654fccc-jwqks 1/1 Running 1 33m 10.1.217.199 mk8s3 <none> <none>
microbot-5cd654fccc-54kf4 1/1 Running 1 16m 10.1.217.198 mk8s3 <none> <none>
microbot-5cd654fccc-7mbh9 1/1 Running 1 16m 10.1.217.200 mk8s3 <none> <none>
$ for i in $(seq 100);do w3m -dump http://10.116.214.10/ | grep hostname ; done | sort | uniq -c
54 Container hostname: microbot-5cd654fccc-4rdxl
46 Container hostname: microbot-5cd654fccc-cj9c6
Hmm, it works, but only two pods are alive...
The same IP stays reachable no matter which node I kill, which is the point, but something still feels off...
I suspect this is down to the HA behavior, which is probably microk8s's territory rather than a Kubernetes problem as such...
It's also odd that the HA status still reports as enabled while one node is down,
even though that node shows as NotReady.
It seems that as long as
microk8s status --wait-ready
returns cleanly on a freshly started node, it's safe to take the other nodes down.
In other words: don't panic and kill more nodes while you can't even get a status back.
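Turning that lesson into a sketch: to cycle all three nodes safely, wait for the status to come back on each one before touching the next (a hypothetical loop, assuming the aliases from earlier):
$ for n in mk8s1 mk8s2 mk8s3; do lxc restart $n && lxc exec $n -- microk8s status --wait-ready | head -n1; done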
Turns out you can configure the DHCP range that lxd hands out.
Let's set it so it can't collide with the MetalLB pool:
$ lxc network set lxdbr0 ipv4.dhcp.ranges 10.116.214.100-10.116.214.254
$ lxc network show lxdbr0
config:
ipv4.address: 10.116.214.1/24
ipv4.dhcp.ranges: 10.116.214.100-10.116.214.254
ipv4.nat: "true"
ipv6.address: fd42:d878:2ae:7d9e::1/64
ipv6.nat: "true"
description: ""
name: lxdbr0
type: bridge
used_by:
- /1.0/instances/mk8s1
- /1.0/instances/mk8s2
- /1.0/instances/mk8s3
- /1.0/profiles/default
- /1.0/profiles/myDefault
managed: true
status: Created
locations:
- none
I didn't get why MetalLB was needed at first, but now it makes sense: on AWS and the like, an equivalent facility ships with the platform, so you never have to provide it yourself.
On-prem, where you may not have anything like that, this is what you use instead.
That said, the docs describe one elected node taking on all the traffic for a service,
so under heavy load everything piles onto that single node.
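In layer 2 mode, MetalLB should record which node it elected as the announcer as an event on the service, so you can at least see where the traffic is landing (the exact event wording may vary by version; check the Events section at the bottom):
$ kubectl describe service microbot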
If you can put a conventional load balancer in front instead, I wonder if you could get type=LoadBalancer to behave just like it does on AWS and friends?
So what this chapter boils down to is:
$ kubectl expose deployment hello-world --type=LoadBalancer --name=my-service
service/my-service exposed
meaning that in an environment with a suitable external LB, that one command is all it takes to put a deployed app on the outside.
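For the record, that expose one-liner generates a Service roughly equivalent to this manifest (a sketch inferred from the hello-world example; kubectl copies the port from the deployment):
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: load-balancer-example
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP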
Once again this got long without me doing anything major. I've got the general idea now, so let's call it a day.