Hmm, I'm getting a bit confused. Could you also send the output of "ceph osd pool ls detail"?
Did you look at the disk/controller cache settings?
I think you should start a deep-scrub with "ceph pg deep-scrub 3.b" and record
the output of "ceph -w | grep '3\.b'" (note the single quotes).
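As an aside, the single quotes matter because they keep the shell from stripping the backslash, so grep sees the literal pattern 3\.b. A minimal demonstration (using printf to fake two log lines, not real ceph output):

```shell
# Fake two cluster-log lines (not real ceph output) to show the pattern.
# '3\.b' matches the literal PG id "3.b"; without the backslash the dot
# would match any character, so a line mentioning "3xb" would match too.
printf '3.b deep-scrub starts\n3xb unrelated line\n' | grep '3\.b'
```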
The error messages you included in one of your first e-mails cover only 1 of the 3 scrub errors (3 lines per error). We need to find all 3 errors.
Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
________________________________________
From: Sagara Wijetunga <sagarawmw(a)yahoo.com>
Sent: 02 November 2020 14:25:08
To: ceph-users(a)ceph.io; Frank Schilder
Subject: Re: [ceph-users] Re: How to recover from active+clean+inconsistent+failed_repair?
Hi Frank
> the primary OSD is probably not listed as a peer. Can you post the complete output of
> - ceph pg 3.b query
> - ceph pg dump
> - ceph osd df tree
> in a pastebin?
Yes, the Primary OSD is 0.
I have attached above as .txt files. Please let me know if you still cannot read them.
Regards
Sagara