---- On Fri, 06 Sep 2019 02:11:06 +0800 solarflow99@gmail.com wrote ----
no, I mean ceph sees it as a failure and marks it out for a while

On Thu, Sep 5, 2019 at 11:00 AM Ashley Merrick <singapore@amerrick.co.uk> wrote:
Is your HDD actually failing and vanishing from the OS and then coming back shortly?
Or do you just mean your OSD is crashing and then restarting itself shortly later?
---- On Fri, 06 Sep 2019 01:55:25 +0800 solarflow99@gmail.com wrote ----
One of the things I've come to notice is that when HDD drives fail, they often recover after a short time and get added back to the cluster. This causes the data to rebalance back and forth, and if I set the noout flag I get a health warning. Is there a better way to avoid this?
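A sketch of the knobs involved, assuming a Luminous-or-later cluster (the option and subcommand names are from the stock Ceph CLI; the OSD id osd.12 and the 1800-second value are only illustrative):

```shell
# Give a flapping disk more time to come back before data is rebalanced:
# mon_osd_down_out_interval is the delay in seconds (default 600) before
# a "down" OSD is automatically marked "out".
ceph config set mon mon_osd_down_out_interval 1800

# Instead of the cluster-wide noout flag (which raises a HEALTH_WARN for
# the whole cluster), noout can be applied to individual OSDs:
ceph osd add-noout osd.12     # keep only osd.12 from being marked out
ceph osd rm-noout osd.12      # clear the flag once the disk is dealt with
```

Raising the interval only delays the automatic out-marking; a disk that is genuinely dying should still be replaced rather than left flapping.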
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-leave@ceph.io