Hello there,
Thank you for your response.
There is no error at syslog, dmesg, or SMART.
# ceph health detail
HEALTH_WARN Too many repaired reads on 2 OSDs
OSD_TOO_MANY_REPAIRS Too many repaired reads on 2 OSDs
osd.29 had 38 reads repaired
osd.16 had 17 reads repaired
How can I clear this warning?
My Ceph version is 14.2.9 (clear_shards_repaired is not supported).
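Since 14.2.9 predates `clear_shards_repaired`, one workaround (a sketch, not a verified procedure; the threshold value of 50 and the systemd unit names are assumptions for a typical package-based deployment) is to raise the warning threshold and restart the affected OSD daemons, which should reset the repaired-reads counters. Only do this after ruling out failing media, since the warning exists to flag exactly that:

```shell
# Raise the threshold for OSD_TOO_MANY_REPAIRS (default is 10).
# 50 is an arbitrary example value.
ceph config set osd mon_osd_warn_num_repaired 50

# Restart the flagged OSDs; unit names assume a standard
# systemd/ceph-osd@ deployment.
systemctl restart ceph-osd@16
systemctl restart ceph-osd@29

# Confirm the warning has cleared.
ceph health detail
```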
/dev/sdh1 on /var/lib/ceph/osd/ceph-16 type xfs (rw,relatime,attr2,inode64,noquota)
# dmesg | grep sdh
[ 12.990728] sd 5:2:3:0: [sdh] 19531825152 512-byte logical blocks: (10.0 TB/9.09 TiB)
[ 12.990728] sd 5:2:3:0: [sdh] Write Protect is off
[ 12.990728] sd 5:2:3:0: [sdh] Mode Sense: 1f 00 00 08
[ 12.990728] sd 5:2:3:0: [sdh] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 13.016616] sdh: sdh1 sdh2
[ 13.017780] sd 5:2:3:0: [sdh] Attached SCSI disk
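A clean dmesg does not rule out media problems, and the `sd 5:2:3:0` address suggests the drive may sit behind a RAID HBA, in which case plain SMART queries can miss the physical disk. A hedged check (the `megaraid,3` device id is an assumption to illustrate the syntax; the correct id depends on the controller):

```shell
# Look at the raw error/reallocation counters, not just overall health;
# attribute names vary by vendor.
smartctl -x /dev/sdh | grep -iE 'reallocated|pending|uncorrect|crc'

# If the disk is behind a MegaRAID-style controller, address it via the
# controller instead (device id 3 is a placeholder):
smartctl -a -d megaraid,3 /dev/sdh
```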
# ceph tell osd.29 bench
{
"bytes_written": 1073741824,
"blocksize": 4194304,
"elapsed_sec": 6.464404,
"bytes_per_sec": 166100668.21318716,
"iops": 39.60148530320815
}
# ceph tell osd.16 bench
{
"bytes_written": 1073741824,
"blocksize": 4194304,
"elapsed_sec": 9.6168945000000008,
"bytes_per_sec": 111651617.26584397,
"iops": 26.619819942914003
}
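As a sanity check, the bench output is internally consistent: `iops` is just `bytes_per_sec / blocksize`, and `bytes_per_sec` is `bytes_written / elapsed_sec`. A small sketch that recomputes the osd.16 figures from the JSON above:

```python
import json

# Output pasted from `ceph tell osd.16 bench` above.
bench = json.loads("""{
    "bytes_written": 1073741824,
    "blocksize": 4194304,
    "elapsed_sec": 9.6168945000000008,
    "bytes_per_sec": 111651617.26584397,
    "iops": 26.619819942914003
}""")

throughput = bench["bytes_written"] / bench["elapsed_sec"]  # bytes/s
iops = throughput / bench["blocksize"]                      # 4 MiB writes/s

print(f"{throughput / 1e6:.1f} MB/s, {iops:.1f} IOPS")

# Both recomputed values match what the OSD reported.
assert abs(throughput - bench["bytes_per_sec"]) < 1.0
assert abs(iops - bench["iops"]) < 0.01
```

Roughly 112 MB/s sequential-write throughput on a 10 TB spinner is plausible, which supports the observation that the drives do not look obviously broken from benchmarks alone.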
Thank you
On 26 Mar 2021, at 16:04, Anthony D'Atri
<anthony.datri(a)gmail.com> wrote:
Did you look at syslog, dmesg, or SMART? Most likely the drives are failing.
On Mar 25, 2021, at 9:55 PM,
jinguk.kwon(a)ungleich.ch wrote:
Hello there,
Thank you in advance.
My ceph is ceph version 14.2.9
I have a repair issue too.
ceph health detail
HEALTH_WARN Too many repaired reads on 2 OSDs
OSD_TOO_MANY_REPAIRS Too many repaired reads on 2 OSDs
osd.29 had 38 reads repaired
osd.16 had 17 reads repaired
~# ceph tell osd.16 bench
{
"bytes_written": 1073741824,
"blocksize": 4194304,
"elapsed_sec": 7.1486738159999996,
"bytes_per_sec": 150201541.10217974,
"iops": 35.81083800844663
}
~# ceph tell osd.29 bench
{
"bytes_written": 1073741824,
"blocksize": 4194304,
"elapsed_sec": 6.9244327500000002,
"bytes_per_sec": 155065672.9246161,
"iops": 36.970537406114602
}
But it looks like those OSDs are OK. How can I clear this warning?
Best regards
JG
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io