Hello,
I'm new to Ceph. I'm setting up my first cluster and playing around with
it. I followed the steps in the cephadm guide (
https://docs.ceph.com/en/latest/cephadm/install/).
Here is my implementation of the instructions, on an Ubuntu Server 20.04.1
install:
sudo apt-get update && sudo apt-get dist-upgrade -y
sudo apt-get install curl git nano docker docker-compose docker.io attr ntp bash-completion -y
sudo usermod -aG docker $USER
curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
chmod +x cephadm
sudo ./cephadm add-repo --release octopus
sudo ./cephadm install
sudo mkdir -p /etc/ceph
sudo cephadm install ceph-common ceph
sudo cephadm bootstrap --mon-ip <IP>
ceph orch daemon add osd server-node1:/dev/sda
ceph orch daemon add osd server-node1:/dev/sdc
ceph orch daemon add osd server-node1:/dev/sdd
ceph orch daemon add osd server-node1:/dev/sde
ceph orch daemon add osd server-node1:/dev/sdf
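For what it's worth, this is roughly how I sanity-checked the OSDs right after adding them (standard orchestrator/status commands; the output obviously depends on the cluster):

```shell
# List the OSD daemons the orchestrator is managing
sudo ceph orch ps --daemon-type osd

# Cross-check against the OSD tree and overall cluster health
sudo ceph osd tree
sudo ceph -s
```

At that point everything showed as up and in.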
Then I log into the Ceph dashboard, change the password, and click around.
Take a break...
...HOURS Later...
My OSDs are down.
I notice this in ceph-volume.log:
[ceph_volume.main][INFO ] Running command: ceph-volume lvm deactivate 1
7a0fb8df-2d12-4e96-9def-5b2c195f6af4
So I run:
ceph-volume lvm activate --all
And my OSDs are back, and 'cephadm ls' shows them as legacy. However,
running 'cephadm adopt --style legacy --name osd.0' causes the OSD to go
down again.
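In case it helps, this is how I compared the two kinds of systemd units. As I understand it, a legacy OSD runs under ceph-osd@N, while an adopted, containerized one runs under ceph-<fsid>@osd.N; <fsid> below is a placeholder for my cluster's fsid:

```shell
# Legacy (packaged) OSD unit
sudo systemctl status ceph-osd@0

# cephadm-managed containerized unit; substitute the output of 'ceph fsid'
sudo systemctl status ceph-<fsid>@osd.0

# Recent log entries for the failing daemon
sudo journalctl -u ceph-osd@0 --since "1 hour ago" | tail -n 50
```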
What is going on?
PS: The only two other issues I see in the logs are
/usr/bin/docker:stderr Error: No such object: ceph-<ID>-osd.0
and
[ceph_volume.util.system][INFO ] /var/lib/ceph/osd/ceph-0 does not appear
to be a tmpfs mount
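For completeness, this is how I looked for the missing container from that first error (the <ID> above is my cluster's fsid, which I've elided):

```shell
# List all Ceph containers, including exited ones
sudo docker ps -a --filter name=ceph

# After 'cephadm adopt', the OSD's data dir should move under the fsid path
sudo ls -l /var/lib/ceph/*/osd.0
```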
Jie