Hi Frank
Found the issue and fixed it. One replica of the object was a 0-byte file. I removed it, and a deep scrub of the PG fixed the issue.
# find /var/lib/ceph/osd/ -type f -name "1000023675e*"
/var/lib/ceph/osd/ceph-2/current/3.b_head/DIR_B/DIR_A/DIR_E/1000023675e.00000000__head_AE97EEAB__3
# ls -l /var/lib/ceph/osd/ceph-2/current/3.b_head/DIR_B/DIR_A/DIR_E/1000023675e.00000000__head_AE97EEAB__3
-rw-r--r-- 1 ceph ceph 0 Oct 31 19:18 /var/lib/ceph/osd/ceph-2/current/3.b_head/DIR_B/DIR_A/DIR_E/1000023675e.00000000__head_AE97EEAB__3
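For anyone hitting the same symptom, the 0-byte check above can be sketched as a small shell exercise. This is a minimal sketch using a temporary directory as a stand-in for the real filestore OSD data dir (the object filename is copied from the output above; on a real node you would work under /var/lib/ceph/osd/ and stop the OSD before touching files):

```shell
# Stand-in for /var/lib/ceph/osd/ceph-2/current/3.b_head/... on a real node
osd_dir=$(mktemp -d)

# Create a fake 0-byte replica with the object's on-disk name
touch "$osd_dir/1000023675e.00000000__head_AE97EEAB__3"

# -size 0 limits the search to empty files, which flags the bad copy
find "$osd_dir" -type f -name "1000023675e*" -size 0

# Remove the bad replica; on a real cluster, stop the OSD first and
# deep-scrub the PG afterwards (ceph pg deep-scrub 3.b)
rm "$osd_dir"/1000023675e*

# Clean up the stand-in directory
rm -r "$osd_dir"
```

The `-size 0` predicate is what distinguishes the bad replica from healthy copies of the same object on other OSDs.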
Once again, many thanks for your help.
Best regards
Sagara
Hi Sagara,
good to hear. Are you using filestore? I completely missed that. The bluestore tool would have
been useless :)
My suspicion is a lost write from cache due to power loss.
Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
________________________________________
From: Sagara Wijetunga <sagarawmw(a)yahoo.com>
Sent: 03 November 2020 16:06:16
To: ceph-users(a)ceph.io; Frank Schilder
Subject: Re: [ceph-users] Re: How to recover from active+clean+inconsistent+failed_repair?
[SOLVED]