Hello,
Thank you for the valuable info, and especially for the Slack link (it's not listed on
the community page).
The ceph-volume command was issued in the following manner: I logged in to my 1st VPS
(the one from which I performed the bootstrap with cephadm) and ran
sudo cephadm shell
which gets me a root shell inside the container, and then ran ceph-volume [...] etc.
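For completeness, the whole sequence was roughly this (the hostname is from my setup,
so treat it as a placeholder):

# on the bootstrap node
ssh dev0
# open a root shell inside the cephadm container
sudo cephadm shell
# then, inside the container:
ceph-volume raw prepare --bluestore --data /dev/sdb --block.db /dev/mapper/ssd0-ssd0_0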
-----
I nuked my environment to recreate the issues and paste them here, so my new VG/LV names
are different.
root@dev0:/# ceph-volume raw prepare --bluestore --data /dev/sdb --block.db /dev/mapper/ssd0-ssd0_0
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring
/var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new
0011a2a8-084b-4d79-ab8f-2503dfc2c804
stderr: 2023-01-20T11:50:23.495+0000 7fdeebd02700 -1 auth: unable to find a keyring on
/var/lib/ceph/bootstrap-osd/ceph.keyring: (2) No such file or directory
stderr: 2023-01-20T11:50:23.495+0000 7fdeebd02700 -1 AuthRegistry(0x7fdee4060d70) no
keyring found at /var/lib/ceph/bootstrap-osd/ceph.keyring, disabling cephx
stderr: 2023-01-20T11:50:23.495+0000 7fdeebd02700 -1 auth: unable to find a keyring on
/var/lib/ceph/bootstrap-osd/ceph.keyring: (2) No such file or directory
stderr: 2023-01-20T11:50:23.495+0000 7fdeebd02700 -1 AuthRegistry(0x7fdee4064440) no
keyring found at /var/lib/ceph/bootstrap-osd/ceph.keyring, disabling cephx
stderr: 2023-01-20T11:50:23.499+0000 7fdeebd02700 -1 auth: unable to find a keyring on
/var/lib/ceph/bootstrap-osd/ceph.keyring: (2) No such file or directory
stderr: 2023-01-20T11:50:23.499+0000 7fdeebd02700 -1 AuthRegistry(0x7fdeebd00ea0) no
keyring found at /var/lib/ceph/bootstrap-osd/ceph.keyring, disabling cephx
stderr: 2023-01-20T11:50:23.503+0000 7fdee929d700 -1 monclient(hunting):
handle_auth_bad_method server allowed_methods [2] but i only support [1]
stderr: 2023-01-20T11:50:23.503+0000 7fdeea29f700 -1 monclient(hunting):
handle_auth_bad_method server allowed_methods [2] but i only support [1]
stderr: 2023-01-20T11:50:23.503+0000 7fdee9a9e700 -1 monclient(hunting):
handle_auth_bad_method server allowed_methods [2] but i only support [1]
stderr: 2023-01-20T11:50:23.503+0000 7fdeebd02700 -1 monclient: authenticate NOTE: no
keyring found; disabled cephx authentication
stderr: [errno 13] RADOS permission denied (error connecting to the cluster)
--> RuntimeError: Unable to create a new OSD id
After manually running ln -s /etc/ceph/ceph.keyring /var/lib/ceph/bootstrap-osd/ I got the
credentials from ceph auth ls and added them to the keyring file, respecting its
syntax:
[client.bootstrap-osd]
key = AQA5vcdj/pClABAAt9hDro+HC73wrZysJSHyAg==
caps mon = "allow profile bootstrap-osd"
Then it worked:
root@dev0:/# ceph-volume raw prepare --bluestore --data /dev/sdb --block.db /dev/mapper/ssd0-ssd0_0
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring
/var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new
4d47af7e-cf8c-451a-8773-894854e3ce8a
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-3
Running command: /usr/bin/chown -R ceph:ceph /dev/sdb
Running command: /usr/bin/ln -s /dev/sdb /var/lib/ceph/osd/ceph-3/block
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring
/var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o
/var/lib/ceph/osd/ceph-3/activate.monmap
stderr: got monmap epoch 3
--> Creating keyring file for osd.3
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3/keyring
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3/
Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ssd0-ssd0_0
Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 3
--monmap /var/lib/ceph/osd/ceph-3/activate.monmap --keyfile - --bluestore-block-db-path
/dev/mapper/ssd0-ssd0_0 --osd-data /var/lib/ceph/osd/ceph-3/ --osd-uuid
4d47af7e-cf8c-451a-8773-894854e3ce8a --setuser ceph --setgroup ceph
stderr: 2023-01-20T11:50:57.723+0000 7f0ee2e4b3c0 -1 bluestore(/var/lib/ceph/osd/ceph-3/)
_read_fsid unparsable uuid
--> ceph-volume raw prepare successful for: /dev/sdb
So it creates OSD entries in /var/lib/ceph/osd.
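To sanity-check, inside the cephadm shell:

# the prepared OSD dir should contain the 'block' symlink to /dev/sdb,
# the keyring created above, etc. (osd id 3 in my case)
ls -l /var/lib/ceph/osd/ceph-3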
As for the mgr logs, I'm trying to figure out how to get them: I listed all containers
using podman ps on the host, then ran podman logs <container_id> for the one that has mgr
in its name. It's a lot to parse, but I found something relevant (log excerpt below, after
the command sketch):
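Roughly (the grep filter is just my guess at something useful, not a cephadm convention):

# find the mgr container on the host
podman ps --format '{{.ID}} {{.Names}}' | grep mgr
# dump its logs and filter for errors
podman logs <container_id> 2>&1 | grep -i 'ERR'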
2023-01-20T12:02:14.680+0000 7f0839777700 -1
log_channel(cephadm) log [ERR] : Failed to apply osd.all-available-devices spec
DriveGroupSpec.from_json(yaml.safe_load('''service_type: osd
service_id: all-available-devices
service_name: osd.all-available-devices
placement:
  host_pattern: '*'
spec:
  data_devices:
    all: true
  filter_logic: AND
  objectstore: bluestore
''')): cephadm exited with an error code: 1, stderr:Inferring config
/var/lib/ceph/51d65c78-9713-11ed-b841-c7c07153e51c/mon.dev0/config
Non-zero exit code 1
from /usr/bin/podman run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint
/usr/sbin/ceph-volume --privileged --group-add=disk --init -e
CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:0560b16bec6e84345f29fb6693cd2430884e6efff16a95d5bdd0bb06d7661c45
-e NODE_NAME=dev0.ddhosted.ro -e CEPH_USE_RANDOM_NONCE=1 -e
CEPH_VOLUME_OSDSPEC_AFFINITY=all-available-devices -e CEPH_VOLUME_SKIP_RESTORECON=yes -e
CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/51d65c78-9713-11ed-b841-c7c07153e51c:/var/run/ceph:z
-v /var/log/ceph/51d65c78-9713-11ed-b841-c7c07153e51c:/var/log/ceph:z -v
/var/lib/ceph/51d65c78-9713-11ed-b841-c7c07153e51c/crash:/var/lib/ceph/crash:z -v
/run/systemd/journal:/run/systemd/journal -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys
-v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v
/tmp/ceph-tmpmo5i2jom:/etc/ceph/ceph.conf:z -v
/tmp/ceph-tmph0tezmty:/var/lib/ceph/bootstrap-osd/ceph.keyring:z
quay.io/ceph/ceph@sha256:0560b16bec6e84345f29fb6693cd2430884e6efff16a95d5bdd0bb06d7661c45
lvm batch --no-auto /dev/sdb /dev/sdc --yes --no-systemd
---
However, that being said, this time around, after issuing the ceph-volume command that
'provisions' the OSDs, two podman containers spawned, one for each OSD, so it seems
to have worked. I'm a bit confused, but I will be researching this more.
I may have messed up my dev environment really badly initially, so maybe that's why it
didn't work previously.
20d41af95386  quay.io/ceph/ceph@sha256:0560b16bec6e84345f29fb6693cd2430884e6efff16a95d5bdd0bb06d7661c45  -n osd.1 -f --set...  16 minutes ago  Up 16 minutes ago  ceph-51d65c78-9713-11ed-b841-c7c07153e51c-osd-1
5f836b90b5b2  quay.io/ceph/ceph@sha256:0560b16bec6e84345f29fb6693cd2430884e6efff16a95d5bdd0bb06d7661c45  -n osd.2 -f --set...  16 minutes ago  Up 16 minutes ago  ceph-51d65c78-9713-11ed-b841-c7c07153e51c-osd-2
------- Original Message -------
On Thursday, January 19th, 2023 at 9:56 PM, Guillaume Abrioux <gabrioux(a)redhat.com>
wrote:
> Hi Seccentral,
>
> How did you run that `ceph-volume raw prepare` command exactly? If you ran it
> manually from within a separate container, the keyring issue you faced is expected.
>
> In any case, what you are trying to achieve is not supported by ceph-volume at the
> moment, but from what I've seen, it doesn't require much effort to support it. I
> could make it work with a very small change in ceph-volume.
> I created this tracker [1] and started a patch (not pushed yet, I'll update this
> thread accordingly).
> It looks like you tried multiple things in this environment; the few troubles you are
> facing regarding the usage of `--all-available-devices` would require a bit more details
> (mgr logs, for instance...).
>
> I'm personally available on IRC (OFTC, nick: guits) and Slack (ceph-storage.slack.com).
>
> Thanks,
>
> [1] https://tracker.ceph.com/issues/58515
>
> On Thu, 19 Jan 2023 at 18:28, seccentral <seccentral(a)protonmail.com> wrote:
>
>> Hi.
>> I'm new to Ceph, been toying around in a virtual environment (for now) trying
>> to understand how to manage it. I made 3 VMs in Proxmox and provisioned a bunch of
>> virtual drives to each, then bootstrapped following the Quincy-branch official documentation.
>> These are the drives:
>>
>>> /dev/sdb 128.00 GB sdb True False QEMU HARDDISK (HDD)
>>> /dev/sdc 128.00 GB sdc True False QEMU HARDDISK (HDD)
>>> /dev/sdd 32.00 GB sdd False False QEMU HARDDISK (SSD)
>>
>> This is the lvdisplay on /dev/sdd after creating two LVs:
>>
>>> db-0 dev0-db-0 -wi-a----- 16.00g
>>>
>>> db-1 dev0-db-0 -wi-a----- <16.00g
>>
>> My curiosity was to have OSDs with data=raw + block.db=lv created like this:
>>
>>> ceph-volume raw prepare --bluestore --data /dev/sdd --block.db /dev/mapper/dev0--db--0--db--0
>>
>> This required tinkering with permissions and temporarily modifying
>> /etc/ceph/ceph.keyring, because by default it wasn't allowing access; RADOS complained
>> about an unauthorized client.bootstrap-osd something, but I got it to work eventually.
>> (By the way, in a real environment, would raw be of any benefit vs. LVM everywhere?)
>> So now I have created 2 OSDs, each with the journal on the SSD and the data on
>> the HDD.
>> I repeated the steps on my other two boxes (btw, can't this be done from the
>> local box via the ceph CLI?)
>> Now I am trying (and failing) to start OSD daemons on this host. I tried apply
>> osd --all-available-devices; it tells me "Scheduled osd.all-available-devices
>> update..." but nothing happens.
>> I'm also not sure how to apply OSDs from a yaml file, since that would
>> provision them and... they're already provisioned using the ceph-volume command
>> above, right?
>>
>> I'm having trouble getting a lot of things to work; this is just one of them,
>> and even if I feel nostalgic using mailing lists, it's inefficient. Is there any
>> interactive community where I can find some people usually online and talk to them
>> in real time, like Discord/Slack etc.? I tried IRC but most are AFK.
>>
>> Thanks
>>
>> _______________________________________________
>> ceph-users mailing list -- ceph-users(a)ceph.io
>> To unsubscribe send an email to ceph-users-leave(a)ceph.io
>
> --
>
> Guillaume Abrioux
> Senior Software Engineer