
MicroK8s Self-Study 7: Running ZooKeeper, a Distributed System Coordinator

坦々狸

First things first: what even is this "zookeeper" (animal keeper?) thing?
Let's have a look at the wiki.

Software that provides services such as centralized management of configuration information and naming.
If a query to one ZooKeeper node fails, you can ask another node instead.
Data updates are performed by a single master node only, so the data never becomes inconsistent between nodes (though it may not be the very latest).
If the master node goes down, the nodes hold a leader election among themselves and a new update node is chosen.

I don't fully get it, but it sounds like a clever thing that makes managing distributed apps convenient?

坦々狸

It says you need at least four nodes.
Good thing I happened to build a four-node setup.
Anyway, time to reset everything.

$ seq 4 | xargs -n1 -P4 -I{} lxc restore mk8s{} snap0
$ lxc start mk8s{1,2,3,4}
$ lxc shell mk8s1
root@mk8s1:~# kubectl get nodes
NAME    STATUS   ROLES    AGE   VERSION
mk8s2   Ready    <none>   29h   v1.20.5-34+40f5951bd9888a
mk8s4   Ready    <none>   29h   v1.20.5-34+40f5951bd9888a
mk8s3   Ready    <none>   29h   v1.20.5-34+40f5951bd9888a
mk8s1   Ready    <none>   29s   v1.20.5-34+40f5951bd9888a
root@mk8s1:~# microk8s enable metallb storage dns
(snip)
Enter each IP address range delimited by comma (e.g. '10.64.140.43-10.64.140.49,192.168.0.105-192.168.0.111'): 10.116.214.2-10.116.214.99
(snip)
坦々狸

An error right out of the gate

# kubectl apply -f https://k8s.io/examples/application/zookeeper/zookeeper.yaml
service/zk-hs created                 
service/zk-cs created                                                                                   
statefulset.apps/zk created                
error: unable to recognize "https://k8s.io/examples/application/zookeeper/zookeeper.yaml": no matches for kind "PodDisruptionBudget" in version "policy/v1"
坦々狸

No idea...

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1

This is the part that's failing to create, but I can't find anyone else hitting this exact problem...

坦々狸

Applying just the zk-pdb on Katacoda gives the same error too...

坦々狸

Found it by accident while looking through the help

root@mk8s1:~# kubectl api-versions |grep policy
policy/v1beta1
坦々狸

Looks like the k8s I'm running doesn't support policy/v1 yet
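
A quick way to check up front which version the cluster actually serves for a resource (just a hedged sketch; the column layout differs a bit between kubectl versions):

$ kubectl api-resources | grep -i poddisruptionbudget    # shows the group/version the API server offers for pdb
$ kubectl explain poddisruptionbudget | head -n 3        # prints KIND and VERSION as the server sees them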

root@mk8s1:~# cat PodDissruptionBudget.yaml 
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
root@mk8s1:~# kubectl apply -f PodDissruptionBudget.yaml                                                                                                                                                        
poddisruptionbudget.policy/zk-pdb created
root@mk8s1:~# kubectl get pdb
NAME     MIN AVAILABLE   MAX UNAVAILABLE   ALLOWED DISRUPTIONS   AGE
zk-pdb   N/A             1                 1                     3m50s

Huh, that worked

坦々狸

Alright, deep breath, let's go again

$ lxc exec mk8s1 -- microk8s status --wait-ready | head -n4
microk8s is running
high-availability: yes
  datastore master nodes: 10.116.214.136:19001 10.116.214.122:19001 10.116.214.107:19001
  datastore standby nodes: 10.116.214.248:19001
$ lxc exec mk8s1 -- microk8s enable metallb storage dns
Enabling MetalLB
Enter each IP address range delimited by comma (e.g. '10.64.140.43-10.64.140.49,192.168.0.105-192.168.0.111'): 10.116.214.2-10.116.214.99
(snip)
$ lxc exec mk8s1 -- kubectl get nodes
NAME    STATUS   ROLES    AGE     VERSION
mk8s1   Ready    <none>   6d20h   v1.20.5-34+40f5951bd9888a
mk8s4   Ready    <none>   47h     v1.20.5-34+40f5951bd9888a
mk8s3   Ready    <none>   47h     v1.20.5-34+40f5951bd9888a
mk8s2   Ready    <none>   47h     v1.20.5-34+40f5951bd9888a
坦々狸

Tweaking the yaml a bit:
changed policy/v1 → policy/v1beta1
and deleted the securityContext section.
securityContext switches the user the process runs as,
but with it set, ZooKeeper couldn't access the area allocated by the PV, so out it went.

root@mk8s1:~# curl -LO https://k8s.io/examples/application/zookeeper/zookeeper.yaml
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   178  100   178    0     0   1028      0 --:--:-- --:--:-- --:--:--  1034
100  2898  100  2898    0     0   5337      0 --:--:-- --:--:-- --:--:--  5337
root@mk8s1:~# sed -i.bak -e 's|policy/v1|policy/v1beta1|' zookeeper.yaml                                                                                                                                        
root@mk8s1:~# vi zookeeper.yaml
root@mk8s1:~# diff zookeeper.yaml{,.bak}
30c30
< apiVersion: policy/v1beta1
---
> apiVersion: policy/v1
122a123,125
>       securityContext:
>         runAsUser: 1000
>         fsGroup: 1000
坦々狸

Apply it

root@mk8s1:~# kubectl apply -f zookeeper.yaml 
service/zk-hs created
service/zk-cs created
poddisruptionbudget.policy/zk-pdb created
statefulset.apps/zk created
root@mk8s1:~# kubectl get pods -w -l app=zk
NAME   READY   STATUS              RESTARTS   AGE
zk-0   1/1     Running             0          102s
zk-1   1/1     Running             0          59s
zk-2   0/1     ContainerCreating   0          19s
zk-2   0/1     Running             0          31s
zk-2   1/1     Running             0          42s
坦々狸

Facilitating leader election

Hmm, I can't actually tell which one is the leader... (there's a quick sketch for that after the config dump below)
I think it's saying each server gets a unique ID, a positive integer assigned in the order it starts up.

root@mk8s1:~# for i in 0 1 2; do echo "myid zk-$i";kubectl exec zk-$i -- cat /var/lib/zookeeper/data/myid; done
myid zk-0
1
myid zk-1
2
myid zk-2
3
root@mk8s1:~# for i in 0 1 2; do kubectl exec zk-$i -- hostname -f; done
zk-0.zk-hs.default.svc.cluster.local
zk-1.zk-hs.default.svc.cluster.local
zk-2.zk-hs.default.svc.cluster.local
root@mk8s1:~# kubectl exec zk-0 -- cat /opt/zookeeper/conf/zoo.cfg
#This file was autogenerated DO NOT EDIT
clientPort=2181
dataDir=/var/lib/zookeeper/data
dataLogDir=/var/lib/zookeeper/data/log
tickTime=2000
initLimit=10
syncLimit=5
maxClientCnxns=60
minSessionTimeout=4000
maxSessionTimeout=40000
autopurge.snapRetainCount=3
autopurge.purgeInteval=12
server.1=zk-0.zk-hs.default.svc.cluster.local:2888:3888
server.2=zk-1.zk-hs.default.svc.cluster.local:2888:3888
server.3=zk-2.zk-hs.default.svc.cluster.local:2888:3888
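
By the way, about the "who is the leader" question above: each server logs its quorum role, so grepping the pod logs should be enough to tell. A hedged sketch (assumes the LEADING/FOLLOWING lines are still in the current container log, which they are right after startup):

$ for i in 0 1 2; do echo -n "zk-$i: "; kubectl logs zk-$i | grep -E 'LEADING|FOLLOWING' | tail -n1; done
# one replica should report LEADING, the other two FOLLOWING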
坦々狸

Achieving consensus

Be careful: if the servers were simply started at the same time with nothing coordinating them, the IDs wouldn't come out unique.
As long as two or more servers are alive and a leader has been elected, writes go through properly; otherwise they fail.
The leader gets elected via something called the Zab protocol.
Look at /opt/zookeeper/conf/zoo.cfg to see how the ensemble is wired together.
That's roughly what I think it's saying.

坦々狸

Sanity testing the ensemble
Basically: the quickest check is to write some data and see whether a different node can read it back.

Write:
root@mk8s1:~# kubectl exec zk-0 zkCli.sh create /hello world
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Connecting to localhost:2181
2021-04-23 01:56:19,141 [myid:] - INFO  [main:Environment@100] - Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT
2021-04-23 01:56:19,145 [myid:] - INFO  [main:Environment@100] - Client environment:host.name=zk-0.zk-hs.default.svc.cluster.local
2021-04-23 01:56:19,145 [myid:] - INFO  [main:Environment@100] - Client environment:java.version=1.8.0_131
2021-04-23 01:56:19,148 [myid:] - INFO  [main:Environment@100] - Client environment:java.vendor=Oracle Corporation
2021-04-23 01:56:19,148 [myid:] - INFO  [main:Environment@100] - Client environment:java.home=/usr/lib/jvm/java-8-openjdk-amd64/jre
2021-04-23 01:56:19,149 [myid:] - INFO  [main:Environment@100] - Client environment:java.class.path=/usr/bin/../build/classes:/usr/bin/../build/lib/*.jar:/usr/bin/../share/zookeeper/zookeeper-3.4.10.jar:/usr/bin/../share/zookeeper/slf4j-log4j12-1.6.1.jar:/usr/bin/../share/zookeeper/slf4j-api-1.6.1.jar:/usr/bin/../share/zookeeper/netty-3.10.5.Final.jar:/usr/bin/../share/zookeeper/log4j-1.2.16.jar:/usr/bin/../share/zookeeper/jline-0.9.94.jar:/usr/bin/../src/java/lib/*.jar:/usr/bin/../etc/zookeeper:
2021-04-23 01:56:19,149 [myid:] - INFO  [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
2021-04-23 01:56:19,149 [myid:] - INFO  [main:Environment@100] - Client environment:java.io.tmpdir=/tmp
2021-04-23 01:56:19,149 [myid:] - INFO  [main:Environment@100] - Client environment:java.compiler=<NA>
2021-04-23 01:56:19,149 [myid:] - INFO  [main:Environment@100] - Client environment:os.name=Linux
2021-04-23 01:56:19,150 [myid:] - INFO  [main:Environment@100] - Client environment:os.arch=amd64
2021-04-23 01:56:19,150 [myid:] - INFO  [main:Environment@100] - Client environment:os.version=5.4.0-71-generic
2021-04-23 01:56:19,150 [myid:] - INFO  [main:Environment@100] - Client environment:user.name=root
2021-04-23 01:56:19,150 [myid:] - INFO  [main:Environment@100] - Client environment:user.home=/root
2021-04-23 01:56:19,150 [myid:] - INFO  [main:Environment@100] - Client environment:user.dir=/
2021-04-23 01:56:19,152 [myid:] - INFO  [main:ZooKeeper@438] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@22d8cfe0
2021-04-23 01:56:19,184 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1032] - Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error)
2021-04-23 01:56:19,260 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@876] - Socket connection established to localhost/0:0:0:0:0:0:0:1:2181, initiating session
2021-04-23 01:56:19,297 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1299] - Session establishment complete on server localhost/0:0:0:0:0:0:0:1:2181, sessionid = 0x178fc5589a40000, negotiated timeout = 30000

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
Created /hello
Read:
root@mk8s1:~# kubectl exec zk-1 zkCli.sh get /hello
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Connecting to localhost:2181
2021-04-23 01:58:31,819 [myid:] - INFO  [main:Environment@100] - Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT
2021-04-23 01:58:31,822 [myid:] - INFO  [main:Environment@100] - Client environment:host.name=zk-1.zk-hs.default.svc.cluster.local
2021-04-23 01:58:31,823 [myid:] - INFO  [main:Environment@100] - Client environment:java.version=1.8.0_131
2021-04-23 01:58:31,826 [myid:] - INFO  [main:Environment@100] - Client environment:java.vendor=Oracle Corporation
2021-04-23 01:58:31,826 [myid:] - INFO  [main:Environment@100] - Client environment:java.home=/usr/lib/jvm/java-8-openjdk-amd64/jre
2021-04-23 01:58:31,826 [myid:] - INFO  [main:Environment@100] - Client environment:java.class.path=/usr/bin/../build/classes:/usr/bin/../build/lib/*.jar:/usr/bin/../share/zookeeper/zookeeper-3.4.10.jar:/usr/bin/../share/zookeeper/slf4j-log4j12-1.6.1.jar:/usr/bin/../share/zookeeper/slf4j-api-1.6.1.jar:/usr/bin/../share/zookeeper/netty-3.10.5.Final.jar:/usr/bin/../share/zookeeper/log4j-1.2.16.jar:/usr/bin/../share/zookeeper/jline-0.9.94.jar:/usr/bin/../src/java/lib/*.jar:/usr/bin/../etc/zookeeper:
2021-04-23 01:58:31,826 [myid:] - INFO  [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
2021-04-23 01:58:31,826 [myid:] - INFO  [main:Environment@100] - Client environment:java.io.tmpdir=/tmp
2021-04-23 01:58:31,826 [myid:] - INFO  [main:Environment@100] - Client environment:java.compiler=<NA>
2021-04-23 01:58:31,826 [myid:] - INFO  [main:Environment@100] - Client environment:os.name=Linux
2021-04-23 01:58:31,826 [myid:] - INFO  [main:Environment@100] - Client environment:os.arch=amd64
2021-04-23 01:58:31,826 [myid:] - INFO  [main:Environment@100] - Client environment:os.version=5.4.0-71-generic
2021-04-23 01:58:31,826 [myid:] - INFO  [main:Environment@100] - Client environment:user.name=root
2021-04-23 01:58:31,827 [myid:] - INFO  [main:Environment@100] - Client environment:user.home=/root
2021-04-23 01:58:31,827 [myid:] - INFO  [main:Environment@100] - Client environment:user.dir=/
2021-04-23 01:58:31,828 [myid:] - INFO  [main:ZooKeeper@438] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@22d8cfe0
2021-04-23 01:58:31,852 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1032] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2021-04-23 01:58:31,925 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@876] - Socket connection established to localhost/127.0.0.1:2181, initiating session
2021-04-23 01:58:31,944 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1299] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x278fc5589b80000, negotiated timeout = 30000

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
world
cZxid = 0x100000002
ctime = Fri Apr 23 01:56:19 UTC 2021
mZxid = 0x100000002
mtime = Fri Apr 23 01:56:19 UTC 2021
pZxid = 0x100000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 5
numChildren = 0
坦々狸

Providing durable storage
So even if you delete the StatefulSet and all the pods go away, the PVs stick around and get bound again when you re-create it, so the data survives; is that the idea?
Does the same pod get placed on the same node every time, I wonder...
And why does the storage definition look different from the yaml I'm applying...
Also, what happens if a node dies before things get recreated in the first place... (the sketch after the PVC listing below pokes at that)

root@mk8s1:~# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP             NODE    NOMINATED NODE   READINESS GATES
zk-0   1/1     Running   0          40m   10.1.115.130   mk8s2   <none>           <none>
zk-1   1/1     Running   0          39m   10.1.238.133   mk8s1   <none>           <none>
zk-2   1/1     Running   0          38m   10.1.236.131   mk8s4   <none>           <none>
root@mk8s1:~# kubectl delete statefulset zk
statefulset.apps "zk" deleted
root@mk8s1:~# kubectl get pods -w -l app=zk
NAME   READY   STATUS        RESTARTS   AGE
zk-0   1/1     Terminating   0          40m
zk-2   1/1     Terminating   0          39m
zk-1   1/1     Terminating   0          39m
zk-1   0/1     Terminating   0          40m
zk-2   0/1     Terminating   0          39m
zk-0   0/1     Terminating   0          40m
zk-1   0/1     Terminating   0          40m
zk-1   0/1     Terminating   0          40m
root@mk8s1:~# kubectl apply -f zookeeper.yaml
service/zk-hs unchanged
service/zk-cs unchanged
poddisruptionbudget.policy/zk-pdb unchanged
statefulset.apps/zk created
root@mk8s1:~# kubectl get pods -w -l app=zk -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP             NODE    NOMINATED NODE   READINESS GATES
zk-0   0/1     Running   0          14s   10.1.115.131   mk8s2   <none>           <none>
zk-0   1/1     Running   0          19s   10.1.115.131   mk8s2   <none>           <none>
zk-1   0/1     Pending   0          0s    <none>         <none>   <none>           <none>
zk-1   0/1     Pending   0          1s    <none>         mk8s1    <none>           <none>
zk-1   0/1     ContainerCreating   0          1s    <none>         mk8s1    <none>           <none>
zk-1   0/1     ContainerCreating   0          1s    <none>         mk8s1    <none>           <none>
zk-1   0/1     Running             0          6s    10.1.238.134   mk8s1    <none>           <none>
zk-1   1/1     Running             0          16s   10.1.238.134   mk8s1    <none>           <none>
zk-2   0/1     Pending             0          1s    <none>         <none>   <none>           <none>
zk-2   0/1     Pending             0          1s    <none>         mk8s4    <none>           <none>
zk-2   0/1     ContainerCreating   0          1s    <none>         mk8s4    <none>           <none>
zk-2   0/1     ContainerCreating   0          2s    <none>         mk8s4    <none>           <none>
zk-2   0/1     Running             0          6s    10.1.236.132   mk8s4    <none>           <none>
zk-2   1/1     Running             0          19s   10.1.236.132   mk8s4    <none>           <none>
root@mk8s1:~# kubectl exec zk-2 zkCli.sh get /hello | grep -A5 WATCHER
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
WATCHER::

WatchedEvent state:SyncConnected type:None path:null
world
cZxid = 0x100000002
ctime = Fri Apr 23 01:56:19 UTC 2021
mZxid = 0x100000002
mtime = Fri Apr 23 01:56:19 UTC 2021
pZxid = 0x100000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 5
numChildren = 0
root@mk8s1:~# kubectl get pvc -l app=zk
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
datadir-zk-0   Bound    pvc-3e207934-c8e7-47d8-ade0-9c3fd47c4b8b   10Gi       RWO            microk8s-hostpath   43m
datadir-zk-1   Bound    pvc-4ddff378-030c-4154-92b3-da5e4e6fe963   10Gi       RWO            microk8s-hostpath   43m
datadir-zk-2   Bound    pvc-a82b3e10-8f90-49f2-ad7c-b31b8a2ca54f   10Gi       RWO            microk8s-hostpath   42m
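
About the "does the same pod get the same volume" question above: the PVC name is fixed per ordinal (datadir-zk-0 and so on), so a recreated pod rebinds to the same PV no matter which node it lands on. A hedged way to see where microk8s-hostpath actually put the data (assumes the PV is a hostPath volume, which is how this provisioner works):

$ kubectl get pvc datadir-zk-0 -o jsonpath='{.spec.volumeName}{"\n"}'
$ kubectl get pv $(kubectl get pvc datadir-zk-0 -o jsonpath='{.spec.volumeName}') -o jsonpath='{.spec.hostPath.path}{"\n"}'

Which also hints at the answer to the "what if the node dies" question: a hostPath-backed PV lives on one node's disk, so losing that node for good means losing that replica's data (the other replicas would still hold the znodes).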
坦々狸

Ensuring a consistent configuration
Everything working nicely so far is thanks to the ZooKeeper configuration being done properly.
That configuration is defined in the manifest, so go take a look (😤 smug face!)

By the way, it's all passed as command-line arguments here, but you can also configure it through environment variables.

That's what I think it's saying.

root@mk8s1:~# kubectl get sts zk -o yaml|grep -A7 -- '- command:'                                                                                                                                               
      - command:
        - sh
        - -c
        - start-zookeeper --servers=3 --data_dir=/var/lib/zookeeper/data --data_log_dir=/var/lib/zookeeper/data/log
          --conf_dir=/opt/zookeeper/conf --client_port=2181 --election_port=3888 --server_port=2888
          --tick_time=2000 --init_limit=10 --sync_limit=5 --heap=512M --max_client_cnxns=60
          --snap_retain_count=3 --purge_interval=12 --max_session_timeout=40000 --min_session_timeout=4000
          --log_level=INFO
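
A hedged sketch of the environment-variable option mentioned above (the name ZK_REPLICAS is made up here; the image's start-zookeeper still takes flags, so the env var would just be expanded by the wrapping shell in the command):

$ kubectl patch sts zk --type='json' -p='[{"op": "add", "path": "/spec/template/spec/containers/0/env", "value": [{"name": "ZK_REPLICAS", "value": "3"}]}]'
# the command in the manifest could then read: start-zookeeper --servers=$ZK_REPLICAS ...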
坦々狸

Honestly, at this point this feels less like a k8s tutorial and more like a ZooKeeper explainer...

坦々狸

Configuring logging

Logging uses log4j; the configuration file looks like this.

root@mk8s1:~# kubectl exec zk-0 -- cat /usr/etc/zookeeper/log4j.properties                                                                                                                                      
zookeeper.root.logger=CONSOLE
zookeeper.console.threshold=INFO
log4j.rootLogger=${zookeeper.root.logger}
log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.Threshold=${zookeeper.console.threshold}
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n

Let's check the last 20 lines of the logs going to stdout and stderr.
Something like this.

root@mk8s1:~# kubectl logs zk-0 --tail 20
2021-04-23 02:32:23,990 [myid:1] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@883] - Processing ruok command from /127.0.0.1:42032
2021-04-23 02:32:23,993 [myid:1] - INFO  [Thread-316:NIOServerCnxn@1044] - Closed socket connection for client /127.0.0.1:42032 (no session established for client)
2021-04-23 02:32:31,797 [myid:1] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:42102
2021-04-23 02:32:31,797 [myid:1] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@883] - Processing ruok command from /127.0.0.1:42102
2021-04-23 02:32:31,799 [myid:1] - INFO  [Thread-317:NIOServerCnxn@1044] - Closed socket connection for client /127.0.0.1:42102 (no session established for client)
2021-04-23 02:32:33,992 [myid:1] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:42118
2021-04-23 02:32:33,992 [myid:1] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@883] - Processing ruok command from /127.0.0.1:42118
2021-04-23 02:32:33,993 [myid:1] - INFO  [Thread-318:NIOServerCnxn@1044] - Closed socket connection for client /127.0.0.1:42118 (no session established for client)
2021-04-23 02:32:41,793 [myid:1] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:42188
2021-04-23 02:32:41,793 [myid:1] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@883] - Processing ruok command from /127.0.0.1:42188
2021-04-23 02:32:41,795 [myid:1] - INFO  [Thread-319:NIOServerCnxn@1044] - Closed socket connection for client /127.0.0.1:42188 (no session established for client)
2021-04-23 02:32:43,989 [myid:1] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:42210
2021-04-23 02:32:43,989 [myid:1] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@883] - Processing ruok command from /127.0.0.1:42210
2021-04-23 02:32:43,993 [myid:1] - INFO  [Thread-320:NIOServerCnxn@1044] - Closed socket connection for client /127.0.0.1:42210 (no session established for client)
2021-04-23 02:32:51,904 [myid:1] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:42262
2021-04-23 02:32:51,906 [myid:1] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@883] - Processing ruok command from /127.0.0.1:42262
2021-04-23 02:32:51,909 [myid:1] - INFO  [Thread-321:NIOServerCnxn@1044] - Closed socket connection for client /127.0.0.1:42262 (no session established for client)
2021-04-23 02:32:53,992 [myid:1] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:42286
2021-04-23 02:32:53,993 [myid:1] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@883] - Processing ruok command from /127.0.0.1:42286
2021-04-23 02:32:53,996 [myid:1] - INFO  [Thread-322:NIOServerCnxn@1044] - Closed socket connection for client /127.0.0.1:42286 (no session established for client)

By the way, k8s can work with all sorts of logging solutions.
For cluster-level logging, the idea seems to be: find some log-aggregation setup and apply it.
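
For tutorial-scale poking around, kubectl alone can already tail all three replicas at once (a minimal sketch; --prefix needs a reasonably recent kubectl):

$ kubectl logs -l app=zk --tail=5 --prefix
# --prefix marks each line with the pod it came from; add -f to follow all three in one stream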

坦々狸

Configuring a non-privileged user

Couldn't get this one to work.

Doing this makes the ZooKeeper process run as UID and GID 1000, but
it has no write permission on /var/lib/zookeeper/data,
so it terminates abnormally.

By default, when the Pod's PersistentVolumes are mounted to the ZooKeeper server's data directory, only the root user can access it. This configuration prevents the ZooKeeper process from writing to its WAL and from storing its snapshots.

That's what the docs say, but there's no explanation of how you're actually supposed to make it accessible, so I just deleted the whole thing.
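
For the record, a workaround that often gets used (not something this tutorial spells out, just a hedged sketch): keep the runAsUser/fsGroup securityContext and add an initContainer that runs as root and chowns the volume before ZooKeeper starts. The volume name datadir and the mount path match the tutorial's manifest; the rest is an assumption:

$ kubectl patch sts zk --type='json' -p='[{"op": "add", "path": "/spec/template/spec/initContainers", "value": [{"name": "fix-perms", "image": "busybox:1.32", "command": ["sh", "-c", "chown -R 1000:1000 /var/lib/zookeeper"], "securityContext": {"runAsUser": 0}, "volumeMounts": [{"name": "datadir", "mountPath": "/var/lib/zookeeper"}]}]}]'

With that in place, the deleted securityContext block (runAsUser/fsGroup 1000) could probably go back in.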

坦々狸

Managing the ZooKeeper process

You'll want something to watch the process and restart it, but don't reach for some random external tool:
use Kubernetes itself as the application's watchdog.

I think that's the gist.

坦々狸

Updating the ensemble

We update the StatefulSet and then roll it back; the pods get updated one at a time, so nothing breaks.
Something like that?

root@mk8s1:~# kubectl patch sts zk --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/resources/requests/cpu", "value":"0.3"}]'
statefulset.apps/zk patched
root@mk8s1:~# kubectl rollout status sts/zk
waiting for statefulset rolling update to complete 0 pods at revision zk-5774bcd65c...
Waiting for 1 pods to be ready...
Waiting for 1 pods to be ready...
waiting for statefulset rolling update to complete 1 pods at revision zk-5774bcd65c...
Waiting for 1 pods to be ready...
Waiting for 1 pods to be ready...
waiting for statefulset rolling update to complete 2 pods at revision zk-5774bcd65c...
Waiting for 1 pods to be ready...
Waiting for 1 pods to be ready...
statefulset rolling update complete 3 pods at revision zk-5774bcd65c...
root@mk8s1:~# kubectl rollout history sts/zk
statefulset.apps/zk 
REVISION  CHANGE-CAUSE
1         <none>
2         <none>

root@mk8s1:~# kubectl rollout undo sts/zk
statefulset.apps/zk rolled back
root@mk8s1:~# kubectl rollout status sts/zk                                                                                                                                                                     
waiting for statefulset rolling update to complete 0 pods at revision zk-5cdb69f579...
Waiting for 1 pods to be ready...
Waiting for 1 pods to be ready...
waiting for statefulset rolling update to complete 1 pods at revision zk-5cdb69f579...
Waiting for 1 pods to be ready...
Waiting for 1 pods to be ready...
waiting for statefulset rolling update to complete 2 pods at revision zk-5cdb69f579...
Waiting for 1 pods to be ready...
Waiting for 1 pods to be ready...
statefulset rolling update complete 3 pods at revision zk-5cdb69f579...
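
A quick hedged check that the patch and the rollback really took effect, reading the CPU request straight off the StatefulSet template and a pod:

$ kubectl get sts zk -o jsonpath='{.spec.template.spec.containers[0].resources.requests.cpu}{"\n"}'
$ kubectl get pod zk-0 -o jsonpath='{.spec.containers[0].resources.requests.cpu}{"\n"}'
# after the undo, both should match the original value from zookeeper.yaml again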
坦々狸

Handling process failure

For a StatefulSet, RestartPolicy has to be Always; don't ever change it.
Anyway, let me show that even if the ZooKeeper process gets killed, k8s brings it right back up.

root@mk8s1:~# kubectl exec zk-0 -- pkill java
In the watcher terminal:
$ lxc exec mk8s1 -- kubectl get pod -w -l app=zk
NAME   READY   STATUS    RESTARTS   AGE
zk-2   1/1     Running   0          79m
zk-1   1/1     Running   0          78m
zk-0   1/1     Running   0          77m
zk-0   0/1     Error     0          77m
zk-0   0/1     Running   1          77m
zk-0   1/1     Running   1          77m
坦々狸

Testing for liveness
A process merely existing doesn't mean it's actually healthy.
A livenessProbe is configured, so the zookeeper-ready command also gets run inside the container (starting 15 seconds after boot) to check that it responds.

        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 15
          timeoutSeconds: 5

Now let's check that this actually works.
All we have to do is delete the file behind that command. Here we go.

root@mk8s1:~# kubectl exec zk-0 -- rm /usr/bin/zookeeper-ready
root@mk8s1:~# kubectl get pod -w -l app=zk
NAME   READY   STATUS    RESTARTS   AGE
zk-2   1/1     Running   0          94m
zk-1   1/1     Running   0          93m
zk-0   1/1     Running   1          92m
zk-0   0/1     Running   1          92m
zk-0   0/1     Running   2          93m
zk-0   1/1     Running   2          93m

See, it got restarted.
The event log also clearly shows that the command failed, so the container was rebuilt.

# kubectl describe pods zk-0 |grep Events -A100
Events:
  Type     Reason     Age                    From     Message
  ----     ------     ----                   ----     -------
  Normal   Pulled     17m                    kubelet  Successfully pulled image "k8s.gcr.io/kubernetes-zookeeper:1.0-3.4.10" in 1.381803033s
  Warning  Unhealthy  2m18s (x3 over 2m38s)  kubelet  Liveness probe failed: sh: 1: zookeeper-ready: not found
  Normal   Killing    2m18s                  kubelet  Container kubernetes-zookeeper failed liveness probe, will be restarted
  Warning  Unhealthy  117s (x5 over 2m37s)   kubelet  Readiness probe failed: sh: 1: zookeeper-ready: not found
  Normal   Pulling    108s (x3 over 95m)     kubelet  Pulling image "k8s.gcr.io/kubernetes-zookeeper:1.0-3.4.10"
  Normal   Pulled     108s                   kubelet  Successfully pulled image "k8s.gcr.io/kubernetes-zookeeper:1.0-3.4.10" in 539.784765ms
  Warning  Unhealthy  107s                   kubelet  Readiness probe errored: rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state
  Normal   Created    104s (x3 over 95m)     kubelet  Created container kubernetes-zookeeper
  Normal   Started    104s (x3 over 95m)     kubelet  Started container kubernetes-zookeeper
坦々狸

Testing for readiness

Next is the readinessProbe. Until this check passes, network traffic is kept away from the pod, so as far as the cluster is concerned it isn't serving yet. In this case the check itself is identical to the livenessProbe; only the probe name changes.
It's basically a check that the server is ready to serve; configuring it together with the livenessProbe is what keeps the ensemble healthy.

  readinessProbe:
    exec:
      command:
      - sh
      - -c
      - "zookeeper-ready 2181"
    initialDelaySeconds: 15
    timeoutSeconds: 5
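
Concretely, a replica that fails readiness keeps running but is pulled out of the Services' endpoints, so no traffic reaches it. A hedged way to watch that happen:

$ kubectl get endpoints zk-cs zk-hs
$ kubectl describe endpoints zk-cs     # a replica that is Running but not Ready shows up under NotReadyAddresses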
坦々狸

Tolerating node failure

A three-server ZooKeeper ensemble stops working once two or more of them are down.
So if two of them ended up on the same node, losing that node would take ZooKeeper down with it.
That would be a problem, so first let's look at the current layout.

root@mk8s1:~# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE    IP             NODE    NOMINATED NODE   READINESS GATES
zk-2   1/1     Running   0          138m   10.1.238.137   mk8s1   <none>           <none>
zk-1   1/1     Running   0          137m   10.1.115.134   mk8s2   <none>           <none>
zk-0   1/1     Running   2          136m   10.1.236.136   mk8s4   <none>           <none>

The pods are nicely spread across nodes, and it's not by accident:
it's because this setting is in place to enforce exactly that.

      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                    - zk
              topologyKey: "kubernetes.io/hostname"

It means: don't run pods whose app label is zk on the same node.
Apparently you can express all sorts of conditions, so you could also spread across power supply units, physical servers, and so on.
For real on-prem operation you'd probably end up needing fairly elaborate rules.
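
Which topologyKey values are actually usable depends on what labels the nodes carry; a quick hedged look at what this cluster could spread on (zone/rack style labels only exist if a cloud provider or an admin has set them):

$ kubectl get nodes --show-labels
$ kubectl get nodes -L kubernetes.io/hostname -L topology.kubernetes.io/zone
# kubernetes.io/hostname is always present; the zone column will just be empty on a bare LXD cluster like this one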

坦々狸

Some commands I hadn't seen before showed up, so a quick primer (command form in the sketch below):

cordon: this node is going down for maintenance, so don't schedule any more pods onto it
drain: cordon, plus evict the pods currently running on this node to somewhere else
uncordon: maintenance is done, pods may be scheduled here again
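
In command form, roughly (using one of this cluster's nodes as the example; the flags match what the drains below use):

$ kubectl cordon mk8s3                                               # mark unschedulable; existing pods keep running
$ kubectl drain mk8s3 --ignore-daemonsets --delete-emptydir-data     # cordon plus evict what is running there
$ kubectl uncordon mk8s3                                             # schedulable again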
坦々狸

Surviving maintenance

This one took me quite a while to understand...
My English is not great...

坦々狸

Surviving maintenance

It starts by saying that if your cluster has more than the four nodes used in the exercise, you should cordon the extra ones for now; mine was built just for this exercise, so nothing to do there.

First, checking the PDB:
MAX UNAVAILABLE being 1 means only one pod is allowed to be down at a time; I think that's what we're confirming here.

root@mk8s1:~# kubectl get pdb zk-pdb
NAME     MIN AVAILABLE   MAX UNAVAILABLE   ALLOWED DISRUPTIONS   AGE
zk-pdb   N/A             1                 1                     7h3m

Then check which node each pod is running on:

root@mk8s1:~# kubectl get pod -l app=zk -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP             NODE    NOMINATED NODE   READINESS GATES
zk-2   1/1     Running   0          89m   10.1.238.139   mk8s1   <none>           <none>
zk-0   1/1     Running   0          45m   10.1.236.137   mk8s4   <none>           <none>
zk-1   1/1     Running   0          40m   10.1.217.201   mk8s3   <none>           <none>

Draining mk8s4, the node where zk-0 lives: zk-0 gets moved to mk8s2 and keeps running fine, and mk8s4 becomes SchedulingDisabled so no more pods can be placed on it. I think that's what this step is confirming.

root@mk8s1:~# kubectl drain $(kubectl get pod zk-0 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data
Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
node/mk8s4 cordoned
WARNING: ignoring DaemonSet-managed Pods: metallb-system/speaker-hchpp, kube-system/calico-node-lxhxc
evicting pod kube-system/hostpath-provisioner-5c65fbdb4f-7djmb
evicting pod kube-system/calico-kube-controllers-847c8c99d-qwkqt
evicting pod default/zk-0
evicting pod metallb-system/controller-559b68bfd8-gvgzg
evicting pod kube-system/coredns-86f78bb79c-lg568
pod/controller-559b68bfd8-gvgzg evicted
pod/calico-kube-controllers-847c8c99d-qwkqt evicted
pod/coredns-86f78bb79c-lg568 evicted
pod/hostpath-provisioner-5c65fbdb4f-7djmb evicted
pod/zk-0 evicted
node/mk8s4 evicted
root@mk8s1:~# kubectl get pod -l app=zk -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP             NODE    NOMINATED NODE   READINESS GATES
zk-2   1/1     Running   0          91m   10.1.238.139   mk8s1   <none>           <none>
zk-1   1/1     Running   0          42m   10.1.217.201   mk8s3   <none>           <none>
zk-0   0/1     Running   0          15s   10.1.115.140   mk8s2   <none>           <none>
root@mk8s1:~# kubectl get nodes
NAME    STATUS                     ROLES    AGE    VERSION
mk8s1   Ready                      <none>   7d4h   v1.20.5-34+40f5951bd9888a
mk8s4   Ready,SchedulingDisabled   <none>   2d6h   v1.20.5-34+40f5951bd9888a
mk8s3   Ready                      <none>   2d6h   v1.20.5-34+40f5951bd9888a
mk8s2   Ready                      <none>   2d6h   v1.20.5-34+40f5951bd9888a

Next, draining zk-1's node: this time there's no node left to place it on, so it gets stuck in Pending.

root@mk8s1:~# kubectl drain $(kubectl get pod zk-1 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data
node/mk8s3 cordoned
WARNING: ignoring DaemonSet-managed Pods: metallb-system/speaker-jvp9c, kube-system/calico-node-5dc5b
evicting pod default/zk-1
pod/zk-1 evicted
node/mk8s3 evicted
root@mk8s1:~# kubectl get pod -l app=zk -o wide
NAME   READY   STATUS    RESTARTS   AGE     IP             NODE     NOMINATED NODE   READINESS GATES
zk-2   1/1     Running   0          102m    10.1.238.139   mk8s1    <none>           <none>
zk-0   1/1     Running   0          11m     10.1.115.140   mk8s2    <none>           <none>
zk-1   0/1     Pending   0          2m43s   <none>         <none>   <none>           <none>
root@mk8s1:~# kubectl get nodes
NAME    STATUS                     ROLES    AGE    VERSION
mk8s1   Ready                      <none>   7d4h   v1.20.5-34+40f5951bd9888a
mk8s4   Ready,SchedulingDisabled   <none>   2d7h   v1.20.5-34+40f5951bd9888a
mk8s3   Ready,SchedulingDisabled   <none>   2d7h   v1.20.5-34+40f5951bd9888a
mk8s2   Ready                      <none>   2d7h   v1.20.5-34+40f5951bd9888a

Draining zk-2's node after that never finishes, so I stop it with Ctrl+C.
The PDB requires at least two pods to stay available, so zk-2 stays Running, but the node itself still becomes SchedulingDisabled.

root@mk8s1:~# kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data                                                                            
node/mk8s1 cordoned
WARNING: ignoring DaemonSet-managed Pods: metallb-system/speaker-fnfzg, kube-system/calico-node-hgr9t
evicting pod default/zk-2
error when evicting pods/"zk-2" -n "default" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod default/zk-2
error when evicting pods/"zk-2" -n "default" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod default/zk-2
error when evicting pods/"zk-2" -n "default" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod default/zk-2
error when evicting pods/"zk-2" -n "default" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
^C
root@mk8s1:~# kubectl get nodes
NAME    STATUS                     ROLES    AGE    VERSION
mk8s1   Ready,SchedulingDisabled   <none>   7d4h   v1.20.5-34+40f5951bd9888a
mk8s4   Ready,SchedulingDisabled   <none>   2d7h   v1.20.5-34+40f5951bd9888a
mk8s3   Ready,SchedulingDisabled   <none>   2d7h   v1.20.5-34+40f5951bd9888a
mk8s2   Ready                      <none>   2d7h   v1.20.5-34+40f5951bd9888a
root@mk8s1:~# kubectl get pod -l app=zk -o wide
NAME   READY   STATUS    RESTARTS   AGE    IP             NODE     NOMINATED NODE   READINESS GATES
zk-2   1/1     Running   0          132m   10.1.238.139   mk8s1    <none>           <none>
zk-0   1/1     Running   0          41m    10.1.115.140   mk8s2    <none>           <none>
zk-1   0/1     Pending   0          32m    <none>         <none>   <none>           <none>

Reading back the test data stored in ZooKeeper in this state, every replica except the Pending zk-1 returns the value just fine.

root@mk8s1:~# kubectl exec zk-0 zkCli.sh get /hello 2>/dev/null | grep WATCHER:: -A3
WATCHER::

WatchedEvent state:SyncConnected type:None path:null
world
root@mk8s1:~# kubectl exec zk-1 zkCli.sh get /hello 2>/dev/null | grep WATCHER:: -A3
root@mk8s1:~# kubectl exec zk-2 zkCli.sh get /hello 2>/dev/null | grep WATCHER:: -A3
WATCHER::

WatchedEvent state:SyncConnected type:None path:null
world

Uncordoning the node that was cordoned first, the Pending zk-1 gets rescheduled there and comes back.

root@mk8s1:~# kubectl get nodes
NAME    STATUS                     ROLES    AGE    VERSION
mk8s1   Ready,SchedulingDisabled   <none>   7d4h   v1.20.5-34+40f5951bd9888a
mk8s3   Ready,SchedulingDisabled   <none>   2d7h   v1.20.5-34+40f5951bd9888a
mk8s2   Ready                      <none>   2d7h   v1.20.5-34+40f5951bd9888a
mk8s4   Ready                      <none>   2d7h   v1.20.5-34+40f5951bd9888a
root@mk8s1:~# kubectl get pod -l app=zk -o wide
NAME   READY   STATUS    RESTARTS   AGE    IP             NODE    NOMINATED NODE   READINESS GATES
zk-2   1/1     Running   0          143m   10.1.238.139   mk8s1   <none>           <none>
zk-0   1/1     Running   0          52m    10.1.115.140   mk8s2   <none>           <none>
zk-1   1/1     Running   0          43m    10.1.236.142   mk8s4   <none>           <none>
root@mk8s1:~# kubectl exec zk-1 zkCli.sh get /hello 2>/dev/null | grep WATCHER:: -A3
WATCHER::

WatchedEvent state:SyncConnected type:None path:null
world

Now re-draining zk-2, the one that couldn't be evicted earlier, puts it into Pending just like zk-1, and uncordoning the second cordoned node lets it recover there.

root@mk8s1:~# kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data
node/mk8s1 already cordoned
WARNING: ignoring DaemonSet-managed Pods: metallb-system/speaker-fnfzg, kube-system/calico-node-hgr9t
evicting pod default/zk-2
pod/zk-2 evicted
node/mk8s1 evicted
root@mk8s1:~# kubectl get pod -l app=zk -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP             NODE     NOMINATED NODE   READINESS GATES
zk-0   1/1     Running   0          57m   10.1.115.140   mk8s2    <none>           <none>
zk-1   1/1     Running   0          48m   10.1.236.142   mk8s4    <none>           <none>
zk-2   0/1     Pending   0          7s    <none>         <none>   <none>           <none>
root@mk8s1:~# kubectl uncordon mk8s3
node/mk8s3 uncordoned
root@mk8s1:~# kubectl get pod -l app=zk -o wide
NAME   READY   STATUS    RESTARTS   AGE     IP             NODE    NOMINATED NODE   READINESS GATES
zk-0   1/1     Running   0          60m     10.1.115.140   mk8s2   <none>           <none>
zk-1   1/1     Running   0          52m     10.1.236.142   mk8s4   <none>           <none>
zk-2   1/1     Running   0          3m31s   10.1.217.202   mk8s3   <none>           <none>
root@mk8s1:~# kubectl get nodes
NAME    STATUS                     ROLES    AGE    VERSION
mk8s1   Ready,SchedulingDisabled   <none>   7d5h   v1.20.5-34+40f5951bd9888a
mk8s4   Ready                      <none>   2d7h   v1.20.5-34+40f5951bd9888a
mk8s3   Ready                      <none>   2d7h   v1.20.5-34+40f5951bd9888a
mk8s2   Ready                      <none>   2d7h   v1.20.5-34+40f5951bd9888a
root@mk8s1:~# seq 0 2 | xargs -n1 -I{} kubectl exec zk-{} zkCli.sh get /hello 2>/dev/null | grep WATCHER:: -A3
WATCHER::

WatchedEvent state:SyncConnected type:None path:null
world
--
WATCHER::

WatchedEvent state:SyncConnected type:None path:null
world
--
WATCHER::

WatchedEvent state:SyncConnected type:None path:null
world

So the message seems to be: set up drain and PodDisruptionBudgets sensibly and you can do maintenance without taking the service down, which is handy, so provision your nodes and disks with some headroom.

坦々狸

The tutorial is done, but I'm not quite sure what I actually got out of it.
I guess for production the takeaway is: configure your probes and PodDisruptionBudgets properly.
Well, it's finished, so let's close this out...

This scrap was closed on 2021/04/26.