Setting up a cephadm environment
Environment
GCP (how many instances, and of what size, will we need?)
Ubuntu 20.04 LTS
cephadm requires the following:
- Systemd
- Podman or Docker for running containers
- Time synchronization (such as chrony or NTP)
- LVM2 for provisioning storage devices
LVM may be hard to satisfy on a free-tier instance.
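Before running bootstrap, the four requirements above can be checked with a small loop (a sketch for a stock Ubuntu 20.04 host; cephadm performs equivalent checks itself during bootstrap):

```shell
# Quick prerequisite check for cephadm.
# Assumptions: chronyc comes from the chrony package, lvcreate from lvm2.
for cmd in systemctl docker lvcreate chronyc; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "OK: $cmd"
  else
    echo "MISSING: $cmd"
  fi
done
```

Anything reported as MISSING needs to be installed before bootstrap will pass its host check.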
Docker
$ sudo snap install docker
docker 19.03.11 from Canonical✓ installed
$ sudo docker --version
Docker version 19.03.11, build dd360c7
cephadm bootstrap fails
I followed this guide as a reference, but it did not work. Let's read the error message.
$ sudo ./cephadm bootstrap --mon-ip 10.138.0.3
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chrony.service is enabled and running
Repeating the final host check...
podman|docker (/snap/bin/docker) is present
systemctl is present
lvcreate is present
Unit chrony.service is enabled and running
Host looks OK
Cluster fsid: e2944a62-57d3-11eb-903f-750eac242784
Verifying IP 10.138.0.3 port 3300 ...
Verifying IP 10.138.0.3 port 6789 ...
Mon IP 10.138.0.3 is in CIDR network 10.138.0.1
Pulling container image docker.io/ceph/ceph:v15...
Extracting ceph user uid/gid from container image...
Creating initial keys...
Creating initial monmap...
Non-zero exit code 1 from /snap/bin/docker run --rm --ipc=host --net=host --entrypoint /usr/bin/monmaptool -e CONTAINER_IMAGE=docker.io/ceph/ceph:v15 -e NODE_NAME=ceph-2 -v /tmp/ceph-tmp0su7o5sf:/tmp/monmap:z docker.io/ceph/ceph:v15 --create --clobber --fsid e2944a62-57d3-11eb-903f-750eac242784 --addv ceph-2 [v2:10.138.0.3:3300,v1:10.138.0.3:6789] /tmp/monmap
/usr/bin/monmaptool: stdout /usr/bin/monmaptool: monmap file /tmp/monmap
/usr/bin/monmaptool: stdout /usr/bin/monmaptool: set fsid to e2944a62-57d3-11eb-903f-750eac242784
/usr/bin/monmaptool: stdout /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
/usr/bin/monmaptool: stderr bufferlist::write_file(/tmp/monmap): failed to open file: (21) Is a directory
/usr/bin/monmaptool: stderr monmaptool: error writing to '/tmp/monmap': (21) Is a directory
Traceback (most recent call last):
  File "/home/tzkoba_gm/cephadm", line 6142, in <module>
    r = args.func()
  File "/home/tzkoba_gm/cephadm", line 1401, in _default_image
    return func()
  File "/home/tzkoba_gm/cephadm", line 2947, in command_bootstrap
    out = CephContainer(
  File "/home/tzkoba_gm/cephadm", line 2659, in run
    out, _, _ = call_throws(
  File "/home/tzkoba_gm/cephadm", line 1062, in call_throws
    raise RuntimeError('Failed command: %s' % ' '.join(command))
RuntimeError: Failed command: /snap/bin/docker run --rm --ipc=host --net=host --entrypoint /usr/bin/monmaptool -e CONTAINER_IMAGE=docker.io/ceph/ceph:v15 -e NODE_NAME=ceph-2 -v /tmp/ceph-tmp0su7o5sf:/tmp/monmap:z docker.io/ceph/ceph:v15 --create --clobber --fsid e2944a62-57d3-11eb-903f-750eac242784 --addv ceph-2 [v2:10.138.0.3:3300,v1:10.138.0.3:6789] /tmp/monmap
Judging from the symptoms, this looks like the known snap issue: the snap-packaged dockerd runs under snap confinement with its own private /tmp, so the temporary file cephadm creates under the host's /tmp does not exist from dockerd's point of view. Docker then creates the missing bind-mount source as an empty directory, and monmaptool fails with "Is a directory" when it tries to write the monmap file.
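The error itself is easy to reproduce without Docker (a sketch using a throwaway path): once the mount target exists as a directory, writing a file over it fails with errno 21, exactly as monmaptool did.

```shell
# Reproduce the EISDIR failure in isolation.
# /tmp/monmap-demo/monmap stands in for the directory Docker auto-created
# when the bind-mount source was missing.
mkdir -p /tmp/monmap-demo/monmap
if ! sh -c 'cat /dev/null > /tmp/monmap-demo/monmap' 2>/tmp/monmap-demo/err; then
  cat /tmp/monmap-demo/err   # the shell reports "Is a directory"
fi
rm -rf /tmp/monmap-demo
```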
Changing how Docker is installed
The error did not go away with v19.03.11 installed via snap, so I reinstalled Docker using the apt-get procedure described in the Docker documentation. The Docker version is now:
$ sudo docker --version
Docker version 20.10.2, build 2291f61
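For reference, the apt-based procedure from Docker's install guide for Ubuntu looked roughly like the following at the time (repository URL, keyring path, and package names are taken from Docker's documentation; remove the snap package first):

```shell
# Remove the snap-installed docker first
sudo snap remove docker

# Install Docker Engine from Docker's apt repository (per the official docs)
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
  | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] \
  https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" \
  | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io
```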
With this, the bootstrap completed:
$ sudo ./cephadm bootstrap --mon-ip 10.138.0.4
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chrony.service is enabled and running
Repeating the final host check...
podman|docker (/usr/bin/docker) is present
systemctl is present
lvcreate is present
Unit chrony.service is enabled and running
Host looks OK
Cluster fsid: 7137c022-57ee-11eb-8f95-42010a8a0004
Verifying IP 10.138.0.4 port 3300 ...
Verifying IP 10.138.0.4 port 6789 ...
Mon IP 10.138.0.4 is in CIDR network 10.138.0.1
Pulling container image docker.io/ceph/ceph:v15...
Extracting ceph user uid/gid from container image...
Creating initial keys...
Creating initial monmap...
Creating mon...
Waiting for mon to start...
Waiting for mon...
mon is available
Assimilating anything we can from ceph.conf...
Generating new minimal ceph.conf...
Restarting the monitor...
Setting mon public_network...
Creating mgr...
Verifying port 9283 ...
Wrote keyring to /etc/ceph/ceph.client.admin.keyring
Wrote config to /etc/ceph/ceph.conf
Waiting for mgr to start...
Waiting for mgr...
mgr not available, waiting (1/10)...
mgr not available, waiting (2/10)...
mgr not available, waiting (3/10)...
mgr not available, waiting (4/10)...
mgr not available, waiting (5/10)...
mgr is available
Enabling cephadm module...
Waiting for the mgr to restart...
Waiting for Mgr epoch 5...
Mgr epoch 5 is available
Setting orchestrator backend to cephadm...
Generating ssh key...
Wrote public SSH key to to /etc/ceph/ceph.pub
Adding key to root@localhost's authorized_keys...
Adding host instance-2...
Deploying mon service with default placement...
Deploying mgr service with default placement...
Deploying crash service with default placement...
Enabling mgr prometheus module...
Deploying prometheus service with default placement...
Deploying grafana service with default placement...
Deploying node-exporter service with default placement...
Deploying alertmanager service with default placement...
Enabling the dashboard module...
Waiting for the mgr to restart...
Waiting for Mgr epoch 13...
Mgr epoch 13 is available
Generating a dashboard self-signed certificate...
Creating initial admin user...
Fetching dashboard port number...
Ceph Dashboard is now available at:
URL: https://instance-2.us-west1-b.c.focus-skein-301806.internal:8443/
User: admin
Password: ys08w4koyk
You can access the Ceph CLI with:
sudo ./cephadm shell --fsid 7137c022-57ee-11eb-8f95-42010a8a0004 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
Please consider enabling telemetry to help improve Ceph:
ceph telemetry on
For more information see:
https://docs.ceph.com/docs/master/mgr/telemetry/
Bootstrap complete.
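Once bootstrap completes, the cluster state can be inspected through the cephadm shell (these commands assume the bootstrapped host; at this point no OSDs exist yet, so the cluster will not report HEALTH_OK):

```shell
# Open a shell in a ceph container with the admin keyring mounted,
# and run one-off commands against the new cluster
sudo ./cephadm shell -- ceph -s               # overall cluster status
sudo ./cephadm shell -- ceph orch ps          # daemons deployed by the orchestrator
sudo ./cephadm shell -- ceph orch device ls   # devices usable for OSDs
```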