
Teaching Myself MicroK8s, Part 6: Deploying Cassandra with a StatefulSet

坦々狸

The very first note in the tutorial already has me lost.
Both of them have "nodes" and both of them have "clusters"... way too confusing.

坦々狸

Changing the storage definition to
provisioner: microk8s.io/hostpath
made the PV errors go away, but now it's complaining about DNS and the pod is stuck in a restart loop...
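A minimal way to dig into that kind of restart loop (not part of the original run; pod name assumed to be cassandra-0) is to look at the pod's events and the previous container's log:

$ kubectl describe pod cassandra-0
$ kubectl logs cassandra-0 --previous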

坦々狸

Started over and the first pod came up, but the second one errors out...

坦々狸

Looks like name resolution is failing:

WARN 02:20:13 Seed provider couldn't lookup host cassandra-0.cassandra.default.svc.cluster.local
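That hostname isn't arbitrary: the tutorial's StatefulSet points the seed provider at pod 0 of the headless Service through an environment variable, roughly this excerpt from cassandra-statefulset.yaml:

        env:
          - name: CASSANDRA_SEEDS
            value: "cassandra-0.cassandra.default.svc.cluster.local"

So every pod, including the later ones, has to be able to resolve that name.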

坦々狸

Ping goes through from the first pod, though...

root@mk8s1:~# kubectl exec -it cassandra-0 ping cassandra-0.cassandra.default.svc.cluster.local
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
PING cassandra-0.cassandra.default.svc.cluster.local (10.1.238.135): 56 data bytes
64 bytes from 10.1.238.135: icmp_seq=0 ttl=64 time=0.035 ms
64 bytes from 10.1.238.135: icmp_seq=1 ttl=64 time=0.090 ms
64 bytes from 10.1.238.135: icmp_seq=2 ttl=64 time=0.038 ms
^C--- cassandra-0.cassandra.default.svc.cluster.local ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.035/0.054/0.090/0.025 ms
坦々狸

What's going on here, can the other pods not see it?
I have CoreDNS enabled, so I'd expect it to resolve...

root@mk8s1:~# kubectl run -i --tty --image busybox:1.28 dns-test --restart=Never --rm ping cassandra-0.cassandra.default.svc.cluster.local
ping: bad address 'cassandra-0.cassandra.default.svc.cluster.local'
坦々狸

"my-namespace"というネームスペース内でhostnameがfooとセットされていて、subdomainがbarとセットされているPodの場合、そのPodは"foo.bar.my-namespace.svc.cluster.local"という名前の完全修飾ドメイン名(FQDN)を持つことになります。

坦々狸

So the bar part is the Service name... and I'd forgotten to apply the Service lol
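For reference, the tutorial's cassandra-service.yaml is a headless Service (clusterIP: None), and it is this object that makes the per-pod cassandra-0.cassandra.default.svc.cluster.local records exist at all; it looks roughly like this:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: cassandra
  name: cassandra
spec:
  clusterIP: None
  ports:
  - port: 9042
  selector:
    app: cassandra

Once it's applied, the busybox check above should start resolving the name.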

坦々狸

For this run I'll just stick with mk8s1 on its own.
Let's also enable dns, storage, and metallb.

$ lxc shell mk8s1
root@mk8s1:~# microk8s status --wait-ready
microk8s is running
high-availability: no
  datastore master nodes: 10.116.214.136:19001
  datastore standby nodes: none
addons:
  enabled:
    ha-cluster           # Configure high availability on the current node
  disabled:
    ambassador           # Ambassador API Gateway and Ingress
    cilium               # SDN, fast with full network policy
    dashboard            # The Kubernetes dashboard
    dns                  # CoreDNS
    fluentd              # Elasticsearch-Fluentd-Kibana logging and monitoring
    gpu                  # Automatic enablement of Nvidia CUDA
    helm                 # Helm 2 - the package manager for Kubernetes
    helm3                # Helm 3 - Kubernetes package manager
    host-access          # Allow Pods connecting to Host services smoothly
    ingress              # Ingress controller for external access
    istio                # Core Istio service mesh services
    jaeger               # Kubernetes Jaeger operator with its simple config
    keda                 # Kubernetes-based Event Driven Autoscaling
    knative              # The Knative framework on Kubernetes.
    kubeflow             # Kubeflow for easy ML deployments
    linkerd              # Linkerd is a service mesh for Kubernetes and other frameworks
    metallb              # Loadbalancer for your Kubernetes cluster
    metrics-server       # K8s Metrics Server for API access to service metrics
    multus               # Multus CNI enables attaching multiple network interfaces to pods
    portainer            # Portainer UI for your Kubernetes cluster
    prometheus           # Prometheus operator for monitoring and logging
    rbac                 # Role-Based Access Control for authorisation
    registry             # Private image registry exposed on localhost:32000
    storage              # Storage class; allocates storage from host directory
    traefik              # traefik Ingress controller for external access
root@mk8s1:~# microk8s enable dns storage metallb
Enabling DNS
Applying manifest
serviceaccount/coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
clusterrole.rbac.authorization.k8s.io/coredns created
clusterrolebinding.rbac.authorization.k8s.io/coredns created
Restarting kubelet
Adding argument --cluster-domain to nodes.
Configuring node 10.116.214.136
Adding argument --cluster-domain to nodes.
Configuring node 10.116.214.136
Adding argument --cluster-dns to nodes.
Configuring node 10.116.214.136
Adding argument --cluster-dns to nodes.
Configuring node 10.116.214.136
Restarting nodes.
Configuring node 10.116.214.136
Restarting nodes.
Configuring node 10.116.214.136
DNS is enabled
Enabling default storage class
deployment.apps/hostpath-provisioner created
storageclass.storage.k8s.io/microk8s-hostpath created
serviceaccount/microk8s-hostpath created
clusterrole.rbac.authorization.k8s.io/microk8s-hostpath created
clusterrolebinding.rbac.authorization.k8s.io/microk8s-hostpath created
Storage will be available soon
Enabling MetalLB
Enter each IP address range delimited by comma (e.g. '10.64.140.43-10.64.140.49,192.168.0.105-192.168.0.111'): 10.116.214.2-10.116.214.99
Applying Metallb manifest
namespace/metallb-system created
secret/memberlist created
podsecuritypolicy.policy/controller created
podsecuritypolicy.policy/speaker created
serviceaccount/controller created
serviceaccount/speaker created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
role.rbac.authorization.k8s.io/config-watcher created
role.rbac.authorization.k8s.io/pod-lister created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/config-watcher created
rolebinding.rbac.authorization.k8s.io/pod-lister created
daemonset.apps/speaker created
deployment.apps/controller created
configmap/config created
MetalLB is enabled
root@mk8s1:~# kubectl get storageclass
NAME                          PROVISIONER            RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
microk8s-hostpath (default)   microk8s.io/hostpath   Delete          Immediate           false                  34s
坦々狸

Until now it was always "create the pod, then the service", but this time the Service comes first.

That's exactly why I missed it...

root@mk8s1:~# kubectl apply -f https://k8s.io/examples/application/cassandra/cassandra-service.yaml
service/cassandra created
root@mk8s1:~# kubectl get svc cassandra
NAME        TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
cassandra   ClusterIP   None         <none>        9042/TCP   12s
坦々狸

Then rewrite the minikube-specific setting so it works on this cluster:

root@mk8s1:~# curl -LO https://k8s.io/examples/application/cassandra/cassandra-statefulset.yaml
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   178  100   178    0     0   1098      0 --:--:-- --:--:-- --:--:--  1098
100  2593  100  2593    0     0   4757      0 --:--:-- --:--:-- --:--:--  4757
root@mk8s1:~# sed -i.bak -e 's|k8s.io/minikube-hostpath|microk8s.io/hostpath|' cassandra-statefulset.yaml
root@mk8s1:~# diff
diff   diff3
root@mk8s1:~# diff cassandra-statefulset.yaml{,.bak}
98c98
< provisioner: microk8s.io/hostpath
---
> provisioner: k8s.io/minikube-hostpath
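So after the sed, the StorageClass at the bottom of the file should read roughly like this (the name fast and type: pd-ssd are the tutorial's as-is; only the provisioner changed):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: microk8s.io/hostpath
parameters:
  type: pd-ssd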
坦々狸

Launch...
It took so long to get this far... 😂

root@mk8s1:~# kubectl apply -f cassandra-statefulset.yaml
statefulset.apps/cassandra created
storageclass.storage.k8s.io/fast created
root@mk8s1:~# kubectl get statefulset cassandra
NAME        READY   AGE
cassandra   3/3     3m26s
root@mk8s1:~# kubectl get pods -l="app=cassandra"
NAME          READY   STATUS    RESTARTS   AGE
cassandra-0   1/1     Running   0          5m51s
cassandra-1   1/1     Running   0          5m9s
cassandra-2   1/1     Running   0          4m2s
坦々狸

Verification
I know nothing about Cassandra, though, so I can't even tell whether this counts as verifying it.

root@mk8s1:~# kubectl exec -it cassandra-0 -- nodetool status
Datacenter: DC1-K8Demo
======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address       Load       Tokens       Owns (effective)  Host ID                               Rack
UN  10.1.238.137  104.54 KiB  32           68.7%             83e8c884-2357-4ba9-9ec7-2eb0ff76d870  Rack1-K8Demo
UN  10.1.238.139  65.85 KiB  32           65.1%             ead34072-5715-45f8-851a-3732c54fe9f8  Rack1-K8Demo
UN  10.1.238.138  108.9 KiB  32           66.3%             e303295f-2fc0-4a30-ac6a-c6e23a5c2853  Rack1-K8Demo
坦々狸

And now the obligatory scaling step.

root@mk8s1:~# kubectl scale statefulset --replicas=4 cassandra
statefulset.apps/cassandra scaled
root@mk8s1:~# kubectl get pods -l="app=cassandra"
NAME          READY   STATUS    RESTARTS   AGE
cassandra-0   1/1     Running   0          14m
cassandra-1   1/1     Running   0          14m
cassandra-2   1/1     Running   0          12m
cassandra-3   1/1     Running   0          3m27s
root@mk8s1:~# kubectl exec -it cassandra-0 -- nodetool status
Datacenter: DC1-K8Demo
======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address       Load       Tokens       Owns (effective)  Host ID                               Rack
UN  10.1.238.137  104.54 KiB  32           56.5%             83e8c884-2357-4ba9-9ec7-2eb0ff76d870  Rack1-K8Demo
UN  10.1.238.139  65.85 KiB  32           45.2%             ead34072-5715-45f8-851a-3732c54fe9f8  Rack1-K8Demo
UN  10.1.238.138  108.9 KiB  32           49.4%             e303295f-2fc0-4a30-ac6a-c6e23a5c2853  Rack1-K8Demo
UN  10.1.238.140  65.86 KiB  32           48.9%             2ce2bc97-6f59-4cd5-834e-d9647e70bef5  Rack1-K8Demo
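If you ever want to go back the other way (not done here), scaling down is the same command with a smaller replica count; note that the PVCs created from the volumeClaimTemplates are left in place and get reused on the next scale-up (and on the Cassandra side you'd normally decommission the node first, but that's outside this memo):

$ kubectl scale statefulset cassandra --replicas=3
$ kubectl get pvc    # the cassandra-data-cassandra-* claims remain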
坦々狸

In the end the only change needed was microk8s.io/hostpath.
And getting to learn that forgetting the Service definition breaks DNS resolution... a blessing in disguise, I guess?

坦々狸

I wondered what Cassandra actually is and looked it up a bit: it's a distributed KVS.
It spreads data across nodes by hashing, which spreads the load per node, so adding nodes should bring the load down. That does seem like a good fit for k8s: if load ever becomes a problem, you just scale out.
I do wonder about its fault tolerance, though.

坦々狸

It's described as masterless, but nodes seem to connect to the first one (the seed) at startup, so if the first one goes down and its restart lands on a different node where the PV gets created from scratch, wouldn't the whole thing stop working?...
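One check against that worry (not run here): each ordinal gets its own PVC from the volumeClaimTemplates (cassandra-data-cassandra-0 and so on), and a restarted cassandra-0 re-binds to the same claim rather than getting a new one; the real question is whether the hostpath volume behind it is reachable from whichever node the pod lands on.

$ kubectl get pvc
$ kubectl get pv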

坦々狸

Well, expecting that much from something built in a tutorial is probably unfair...
It works for now, so let's close this out.

This scrap was closed on 2021/04/22.