On Tue, Jan 7, 2020 at 11:17 AM <miguel.castillo@centro.net> wrote:
Thanks for the reply, Jason!
We don't have SELinux running on these machines, but I did fix the ownership on that
file so the ceph user can access it properly. The rbd-mirror daemon starts up now,
but the test image still shows down+unknown. I'll continue poking at it, but if you or
anyone else can suggest more things to look at and verify, that would be greatly
appreciated!
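For reference, that ownership fix would look something like this; the keyring
path below is an assumption, use whichever file the daemon actually failed to open:

# chown ceph:ceph /etc/ceph/ceph.client.admin.keyring
# chmod 600 /etc/ceph/ceph.client.admin.keyring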
--------------------------------
# systemctl status ceph-rbd-mirror@admin
● ceph-rbd-mirror@admin.service - Ceph rbd mirror daemon
   Loaded: loaded (/lib/systemd/system/ceph-rbd-mirror@.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2020-01-07 11:03:57 EST; 6min ago
 Main PID: 917157 (rbd-mirror)
    Tasks: 58
   CGroup: /system.slice/system-ceph\x2drbd\x2dmirror.slice/ceph-rbd-mirror@admin.service
           └─917157 /usr/bin/rbd-mirror -f --cluster ceph --id admin --setuser ceph --setgroup ceph

Jan 07 11:03:57 ceph1-dc2 systemd[1]: Started Ceph rbd mirror daemon.
# rbd --cluster dc1ceph mirror pool status fs_data --verbose
health: WARNING
images: 1 total
    1 unknown

mirror_test:
  global_id:   c335017c-9b8f-49ee-9bc1-888789537c47
  state:       down+unknown
  description: status not found
  last_update:

If the rbd-mirror daemon is running on ceph1-dc2, you will need to run
the "rbd mirror pool status" command against that cluster. The upcoming
Octopus release includes improvements that replicate the mirroring
status between clusters (and let you designate RX/TX, RX-only, and
TX-only relationships), so you would only need to check it in one
location; in the meantime, the status reflects the mirroring "pull"
operation, which in your case is occurring on DC2.
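Concretely, something like this run from ceph1-dc2, assuming the local cluster
there is named "ceph" as in your systemctl output:

# rbd --cluster ceph mirror pool status fs_data --verbose

It may also be worth running "rbd --cluster ceph mirror pool info fs_data" to
confirm that the DC1 cluster is registered as a peer. Once the daemon can talk
to DC1, the image should move from down+unknown to up+replaying (or up+syncing
while the initial image sync runs).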
--
Jason