Hello,
Thank you very much, Joshua, it worked.
I have set up three nodes with the cephadm tool, which was very easy.
But I asked myself, what if node 1 goes down?
Before cephadm, I could simply manage everything from the other nodes with the
ceph commands.
Now I'm a bit stuck, because the cephadm container is only running on one node.
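For what it's worth, on node 1 I can still see where everything runs (assuming
ceph orch ps is the right way to check that):

# on node 1, inside the cephadm shell
ceph orch ps                     # lists all daemons and which host they run on
ceph orch ps --daemon-type mgr   # just the manager daemons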
I've installed cephadm on the second node, but I'm getting "[errno 13] RADOS
permission denied (error connecting to the cluster)".
Do I need some special "cephadm" keyring from the first node? Which one? And
where to put it?
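Purely as a guess on my part (maybe the admin keyring is the missing piece?),
this is what I would try next; "node1" just stands in for the first host's name:

# on node 2: copy the cluster config and admin keyring over from node 1
scp node1:/etc/ceph/ceph.conf /etc/ceph/
scp node1:/etc/ceph/ceph.client.admin.keyring /etc/ceph/
cephadm shell -- ceph status   # should then connect instead of errno 13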
Cephadm might be an easy-to-handle solution, but for me as a beginner, the added
layer is hard to get into.
We are trying to build a new Ceph cluster (I had never worked with Ceph before),
but I might not go with Octopus and instead use Nautilus with ceph-deploy.
That's a bit easier to understand, and the documentation out there is much better.
Thanks in advance,
Simon
________________________________
From: Joshua Schmid <jschmid(a)suse.de>
Sent: Tuesday, May 5, 2020 16:39:29
To: Simon Sutter
Cc: ceph-users(a)ceph.io
Subject: Re: [ceph-users] Re: Add lvm in cephadm
On 20/05/05 08:46, Simon Sutter wrote:
Sorry, I misclicked; here is the second part:
ceph-volume --cluster ceph lvm prepare --data /dev/centos_node1/ceph
But that just gives me:
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring
/var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new
f3b442b1-68f7-456a-9991-92254e7c9c30
stderr: [errno 13] RADOS permission denied (error connecting to the cluster)
--> RuntimeError: Unable to create a new OSD id
Hey Simon,
This still works but is now encapsulated in a cephadm
command.
ceph orch daemon add osd <host>:<vg_name/lv_name>
so in your case:
ceph orch daemon add osd $host:centos_node1/ceph
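To double-check that the OSD came up, something like:

ceph orch ps --daemon-type osd   # the new osd daemon should be listed
ceph osd tree                    # and appear in the CRUSH tree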
hth
--
Joshua Schmid
Software Engineer
SUSE Enterprise Storage