Hi Mehmet,
In our case "ceph pg repair" fixed the issues (read_error). I think the
read_error was only temporary, due to low available RAM.
You might want to check your actual issue with "ceph pg <pgid> query".
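For example (a minimal sketch; the PG id 2.5 below is hypothetical,
substitute one of your inconsistent PGs):

  # show which PGs are inconsistent and on which OSDs they live
  ceph health detail
  # dump the detailed state of the affected PG
  ceph pg 2.5 query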
Kind regards,
Caspar Smit
Systemengineer
SuperNAS
Dorsvlegelstraat 13
1445 PA Purmerend
t: (+31) 299 410 414
e: casparsmit(a)supernas.eu
On Tue, 25 Feb 2020 at 18:45, Mehmet <ceph(a)elchaka.de> wrote:
Hello Caspar,
did you ever find an answer to this?
My guess is that with "ceph pg repair" the copy from the primary OSD will
overwrite the 2nd and 3rd copies - in case it is readable. But what
happens when it is not readable?
It would be nice to know if there is a way to tell Ceph to repair a PG
using the copy from OSD X.
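One workaround that gets suggested for this (a sketch only, nothing here
is verified against your cluster: osd.7, PG 2.5, the data path and the
object name are all hypothetical placeholders) is to remove the bad
replica by hand, so that a subsequent repair has to use a good copy:

  # stop the OSD holding the corrupt copy
  systemctl stop ceph-osd@7
  # delete the damaged object replica from that OSD's object store
  # (filestore setups may also need --journal-path)
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-7 \
      --pgid 2.5 '<object-name>' remove
  # restart the OSD, then repair; the object is recovered from a good copy
  systemctl start ceph-osd@7
  ceph pg repair 2.5

Obviously double-check which shard actually holds the bad copy before
removing anything.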
best regards,
Mehmet
On 04.12.2019 at 13:47, Caspar Smit wrote:
Hi all,
I tried to dig through the mailing list archives but couldn't find a
clear answer to the following situation:
Ceph encountered a scrub error, resulting in HEALTH_ERR.
Two PGs are active+clean+inconsistent. When investigating them I see a
"read_error" on the primary OSD. Both PGs are replicated with 3 copies.
I'm on Luminous 12.2.5 on this installation. Is it safe to just run
"ceph pg repair" on those PGs, or will that overwrite the two good
copies with the bad one from the primary?
If the latter is true, what is the correct way to resolve this?
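For reference, this is how the per-shard errors show up (a sketch; the
PG id 2.5 is hypothetical):

  # list inconsistent objects and the per-shard errors for this PG
  rados list-inconsistent-obj 2.5 --format=json-pretty

The JSON output reports errors per OSD shard, so a read_error that
appears only under the primary's shard points at the bad replica.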
Kind regards,
Caspar Smit