On Fri, Jul 3, 2020 at 5:58 PM Alexander E. Patrakov
<patrakov(a)gmail.com> wrote:
On Sat, Jul 4, 2020 at 1:37 AM Rodrigo Severo - Fábrica
<rodrigo(a)fabricadeideias.com> wrote:
Hi,
Just rebooted one of my OSD servers after upgrading Ceph from 14.2.9 to
14.2.10, and its OSDs won't come up.
I find the following messages on my log:
Jul 3 17:24:03 osdserver1-df ceph-osd[1272]: 2020-07-03 17:24:03.036 7fcc497f1c00 -1 auth: unable to find a keyring on /var/lib/ceph/osd/ceph-6/keyring: (2) No such file or directory
Jul 3 17:24:03 osdserver1-df ceph-osd[1272]: 2020-07-03 17:24:03.036 7fcc497f1c00 -1 AuthRegistry(0x55e2ff810140) no keyring found at /var/lib/ceph/osd/ceph-6/keyring, disabling cephx
My /var/lib/ceph/osd/ceph-6 directory is empty.
I see that on my other servers these /var/lib/ceph/osd/ceph-? directories
are tmpfs mounts, but I can't figure out what is responsible for mounting
them, as there are no entries for them in /etc/fstab.
How can I fix this OSD server?
Hi, it is not possible to figure this out based on just the
information that you provided. E.g., how was the OSD initially
provisioned? Was it with "ceph-volume lvm"?
I used ceph-deploy. Each OSD is an LVM volume on a separate disk.
In any case, the output of the following commands (please run as
root) would help with debugging:
lsblk
lvs -a -o name,lv_tags
The output of the above commands:
root@osdserver1-df:~# lsblk
NAME                               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                                  8:0    0 223.6G  0 disk
└─sda1                               8:1    0 223.6G  0 part
  ├─vg-root                        253:2    0    50G  0 lvm  /
  ├─vg-swap_1                      253:3    0     8G  0 lvm  [SWAP]
  ├─vg-home                        253:4    0    10G  0 lvm  /home
  ├─vg-opt                         253:5    0    10G  0 lvm  /opt
  ├─vg-var                         253:6    0    30G  0 lvm  /var
  └─vg-tmp                         253:7    0    10G  0 lvm  /tmp
sdb                                  8:16   0   3.7T  0 disk
└─sdb1                               8:17   0   3.7T  0 part
  └─vg_ceph_slow_8-lv_ceph_slow_8  253:1    0   3.7T  0 lvm
sdc                                  8:32   0   1.8T  0 disk
└─sdc1                               8:33   0   1.8T  0 part
  └─ceph--bfcfde03--3a62--41ed--a037--d8aaf030a6d8-osd--block--445c4224--5087--4a28--a1ee--f23c5768207c 253:0 0 1.8T 0 lvm
root@osdserver1-df:~# lvs -a -o name,lv_tags
  LV                                             LV Tags
  osd-block-445c4224-5087-4a28-a1ee-f23c5768207c ceph.block_device=/dev/ceph-bfcfde03-3a62-41ed-a037-d8aaf030a6d8/osd-block-445c4224-5087-4a28-a1ee-f23c5768207c,ceph.block_uuid=NzwNAF-zVwO-t1sh-OMZa-yFs1-QLWs-GZUvUT,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e348b63c-d239-4a15-a2ce-32f29a00431c,ceph.cluster_name=ceph,ceph.crush_device_class=None,ceph.encrypted=0,ceph.osd_fsid=445c4224-5087-4a28-a1ee-f23c5768207c,ceph.osd_id=6,ceph.type=block,ceph.vdo=0
  home
  opt
  root
  swap_1
  tmp
  var
  lv_ceph_slow_8                                 ceph.block_device=/dev/vg_ceph_slow_8/lv_ceph_slow_8,ceph.block_uuid=yrS8R7-N9rN-r0Jx-JoOU-Rapz-uoKJ-2FI1p4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e348b63c-d239-4a15-a2ce-32f29a00431c,ceph.cluster_name=ceph,ceph.crush_device_class=None,ceph.encrypted=0,ceph.osd_fsid=45b3c8ed-3852-4f1c-84b3-7d8a7cb129cb,ceph.osd_id=8,ceph.type=block,ceph.vdo=0
root@osdserver1-df:~#
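[Editor's note: the lv_tags column above is a comma-separated list of key=value pairs, and it carries the metadata (osd_id, osd_fsid, cluster_fsid, block device) that activation uses to rebuild the empty tmpfs directory. A minimal sketch of pulling the interesting keys out of such a string with standard tools; the tags value below is a shortened sample of the real one above:]

```shell
# Shortened sample of the lv_tags string printed by lvs above.
tags="ceph.cluster_name=ceph,ceph.osd_fsid=445c4224-5087-4a28-a1ee-f23c5768207c,ceph.osd_id=6,ceph.type=block"

# Split on commas, then strip the key prefix to get each value.
osd_id=$(printf '%s\n' "$tags" | tr ',' '\n' | sed -n 's/^ceph\.osd_id=//p')
osd_fsid=$(printf '%s\n' "$tags" | tr ',' '\n' | sed -n 's/^ceph\.osd_fsid=//p')

echo "osd_id=$osd_id osd_fsid=$osd_fsid"
```

The same two values also appear in the LV name itself (osd-block-<osd_fsid>), which is how the on-disk LVM state stays self-describing even when /var/lib/ceph/osd is empty.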
Any ideas on how to further debug this issue?
Regards,
Rodrigo Severo
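[Editor's note: since the LVM tags for osd.6 are intact, the usual fix in this situation is re-activation: "ceph-volume lvm activate" mounts a fresh tmpfs at /var/lib/ceph/osd/ceph-<id> and repopulates it (keyring, fsid, block symlink) from those tags, which is also why no fstab entry exists. A sketch, assuming the OSD was provisioned via "ceph-volume lvm" (which recent ceph-deploy uses under the hood); the commands are composed as strings and printed rather than executed, since they need the live Ceph host:]

```shell
# osd_id and osd_fsid as reported by the lv_tags output earlier in the thread.
osd_id=6
osd_fsid="445c4224-5087-4a28-a1ee-f23c5768207c"

# Re-activate this one OSD from its LVM tags (or "ceph-volume lvm activate --all"
# to activate every tagged OSD found on the host), then start the daemon.
activate_cmd="ceph-volume lvm activate $osd_id $osd_fsid"
start_cmd="systemctl start ceph-osd@$osd_id"

echo "$activate_cmd"
echo "$start_cmd"
```

If activation fails, "ceph-volume lvm list" shows what ceph-volume can discover from the LVM tags and is a good next debugging step.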