When I suggested this to the senior admin here I was told that was a bad
idea because it would negatively impact performance.
Is that true? I thought all a repair would do is take the authoritative
copies from the other two OSDs and rebuild the damaged replica on the OSD
with the errors.
The underlying disks don't appear to have any catastrophic errors based
on smartctl and other tools.
On Tue, Feb 28, 2023 at 12:21 PM Janne Johansson <icepic.dz(a)gmail.com>
wrote:
Den tis 28 feb. 2023 kl 18:13 skrev Dave Ingram
<dave(a)adaptable.sh>:
There are also several
scrub errors. In short, it's a complete wreck.
health: HEALTH_ERR
3 scrub errors
Possible data damage: 3 pgs inconsistent
[root@ceph-admin davei]# ceph health detail
HEALTH_ERR 3 scrub errors; Possible data damage: 3 pgs inconsistent
OSD_SCRUB_ERRORS 3 scrub errors
PG_DAMAGED Possible data damage: 3 pgs inconsistent
pg 2.8a is active+clean+inconsistent, acting [13,152,127]
pg 2.ce is active+clean+inconsistent, acting [145,13,152]
pg 2.e8 is active+clean+inconsistent, acting [150,162,42]
You can ask the cluster to repair those three,
"ceph pg repair 2.8a"
"ceph pg repair 2.ce"
"ceph pg repair 2.e8"
and they should start fixing themselves.
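When there are more than a handful of inconsistent PGs, typing the repair
commands by hand gets tedious. A minimal sketch of automating it: parse the
`ceph health detail` output for inconsistent PG lines and emit one
`ceph pg repair` command per PG. The sample output below is inlined from
this thread for illustration; on a real cluster you would pipe the actual
`ceph health detail` output instead, and review the list before running the
generated commands.

```shell
# Sample `ceph health detail` output, inlined so the sketch is self-contained.
ceph_health_detail() {
  cat <<'EOF'
HEALTH_ERR 3 scrub errors; Possible data damage: 3 pgs inconsistent
OSD_SCRUB_ERRORS 3 scrub errors
PG_DAMAGED Possible data damage: 3 pgs inconsistent
    pg 2.8a is active+clean+inconsistent, acting [13,152,127]
    pg 2.ce is active+clean+inconsistent, acting [145,13,152]
    pg 2.e8 is active+clean+inconsistent, acting [150,162,42]
EOF
}

# Field 2 of each "pg <id> is ...inconsistent" line is the PG id;
# print the repair command for each one rather than running it blindly.
ceph_health_detail | awk '/inconsistent, acting/ {print "ceph pg repair " $2}'
```

This only prints the commands (`ceph pg repair 2.8a` and so on); piping the
result into `sh` would actually issue the repairs, which you may want to do
one PG at a time.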
--
May the most significant bit of your life be positive.