Hi,
I managed to activate the OSDs after adding the keys with:
for i in `seq 0 8`; do ceph auth get-or-create osd.$i mon 'profile osd' mgr 'profile osd' osd 'allow *'; done
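In case anyone reproduces this: the keys created above also have to end up on disk where each OSD daemon looks for them. Something like the following should export them (a sketch only, assuming the default /var/lib/ceph/osd/ceph-$i data directories and cluster name "ceph"; verify the paths on your system first):

```shell
# Write each OSD's key into its data directory keyring
# so the daemon can authenticate with the mons on start.
for i in $(seq 0 8); do
    ceph auth get osd.$i -o /var/lib/ceph/osd/ceph-$i/keyring
done
```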
# ceph osd status
+----+------+-------+-------+--------+---------+--------+---------+------------+
| id | host | used | avail | wr ops | wr data | rd ops | rd data | state |
+----+------+-------+-------+--------+---------+--------+---------+------------+
| 0 | | 0 | 0 | 0 | 0 | 0 | 0 | exists,new |
| 1 | | 0 | 0 | 0 | 0 | 0 | 0 | exists,new |
| 2 | | 0 | 0 | 0 | 0 | 0 | 0 | exists,new |
| 4 | | 0 | 0 | 0 | 0 | 0 | 0 | exists,new |
| 5 | | 0 | 0 | 0 | 0 | 0 | 0 | exists,new |
| 6 | | 0 | 0 | 0 | 0 | 0 | 0 | exists,new |
| 7 | | 0 | 0 | 0 | 0 | 0 | 0 | exists,new |
| 8 | | 0 | 0 | 0 | 0 | 0 | 0 | exists,new |
+----+------+-------+-------+--------+---------+--------+---------+------------+
However, calls to cephfs-journal-tool all require a rank, which I am unable to determine:
# cephfs-journal-tool journal export backup.bin
2020-07-20 17:13:15.420 7f3231694700 -1 NetHandler create_socket couldn't create socket (97) Address family not supported by protocol
Error (2020-07-20 17:13:15.421 7f32413fea80 -1 main: missing mandatory "--rank" argument
I understand that rank should be given in the format: cephfs-journal-tool --rank=<fs>:<rank>
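If it helps, the fs name and its active ranks can be read back from the mons with the standard CLI (output layout varies a bit between releases):

```shell
# List the filesystems the cluster knows about; the "name" field
# is the <fs> part of --rank=<fs>:<rank>.
ceph fs ls

# Show the MDS ranks per filesystem; active ranks start at 0,
# so a single-MDS fs is rank 0.
ceph fs status
```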
When I use the name of the existing fs it doesn’t work either:
# cephfs-journal-tool --rank=plexfs:0 journal export backup.bin
2020-07-21 09:28:54.552 7feffed1b700 -1 NetHandler create_socket couldn't create socket (97) Address family not supported by protocol
Error (2020-07-21 09:28:54.553 7ff00ea85a80 -1 main: Couldn't determine MDS rank.
Is that because I am misremembering the fs name, or is there something else that I am
missing?
Appreciate any help.
Thanks & regards,
Daniel
On 4 Jul 2020, at 18:05, Burkhard Linke
<Burkhard.Linke(a)computational.bio.uni-giessen.de> wrote:
Hi,
in addition you need a way to recover the mon maps (I assume the mon was on the same host). If the mon data is lost, you can try to retrieve some of the maps from the existing OSDs; see the disaster recovery section in the Ceph documentation.
If you cannot restore the mons, recovering the OSDs will be more or less useless.
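The map retrieval from the OSDs that Burkhard mentions is the documented mon-store rebuild. Roughly (a sketch only; the monstore path is arbitrary and the keyring path is the usual default, so check the current disaster-recovery docs before running this against real data):

```shell
# Collect cluster maps from every OSD's store into a fresh mon DB.
# The OSD daemons must be stopped while ceph-objectstore-tool runs.
ms=/tmp/monstore
mkdir -p $ms
for osd in /var/lib/ceph/osd/ceph-*; do
    ceph-objectstore-tool --data-path $osd --op update-mon-db --mon-store-path $ms
done

# Rebuild the monitor store from the collected maps.
ceph-monstore-tool $ms rebuild -- --keyring /etc/ceph/ceph.client.admin.keyring
```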
Regards,
Burkhard
On 04.07.20 10:05, Eugen Block wrote:
Hi,
it should work with ceph-volume after you have re-created the OS:
ceph-volume lvm activate --all
We had that case just recently in a Nautilus cluster and it worked perfectly.
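For a sanity check before (or after) activating, ceph-volume can report what it detects on the disks (standard ceph-volume subcommand; output varies by release):

```shell
# Inventory the LVM-based OSDs ceph-volume can see on this host,
# including their fsid, osd id, and devices.
ceph-volume lvm list
```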
Regards,
Eugen
Quoting Daniel Da Cunha <daniel(a)ddc.im>:
Hello,
As a hobby, I have been using Ceph Nautilus on a single server with 8 OSDs. As part of the setup, I set the CRUSH map's failure domain to the OSD level: step chooseleaf firstn 0 type osd.
Sadly, I didn’t take the necessary precautions for my boot disk and the OS failed. I have backups of /etc/ceph/ but I am not able to recover the OS.
Can you think of a way for me to recreate the OS and adopt the 8 OSDs without losing the data?
Thanks & regards,
Daniel
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io