For Ceph, this is fortunately not a major issue. Drives failing is
considered entirely normal, and Ceph will automatically rebuild your data
from redundancy onto a new replacement drive. If you're able to predict the
imminent failure of a drive, adding a new drive/OSD will automatically
start moving data onto that drive immediately, thus reducing the time
period with decreased redundancy. If you're running with very tight levels
of redundancy, you're better off creating a new OSD on the replacement drive
before destroying the old OSD on the failing drive. But if you're running
with anything near the recommended/default levels of redundancy, it doesn't
really matter in which order you do it.
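For reference, a minimal command sketch of that "create the new OSD first" approach, assuming the failing OSD is osd.12 and the replacement drive shows up as /dev/sdX (both placeholders, adapt to your cluster and deployment tooling):

  # create the new OSD on the replacement drive; backfill onto it starts right away
  ceph-volume lvm create --data /dev/sdX

  # mark the failing OSD out so its remaining PGs drain off it
  ceph osd out 12

  # watch recovery, and check that the old OSD can be removed without data loss
  ceph -s
  ceph osd safe-to-destroy osd.12

  # once it is safe, stop the daemon and remove the old OSD
  systemctl stop ceph-osd@12
  ceph osd purge 12 --yes-i-really-mean-it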
Best regards,
Simon Kepp,
Kepp Technologies.
On Tue, Dec 8, 2020 at 8:59 PM Konstantin Shalygin <k0ste(a)k0ste.ru> wrote:
Destroy this OSD, replace disk, deploy OSD.
k
Sent from my iPhone
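(For reference, a rough sketch of the destroy / replace / deploy sequence above, assuming the failing OSD is osd.12 and the new disk shows up as /dev/sdX; both are placeholders:

  ceph osd out 12
  systemctl stop ceph-osd@12
  ceph osd destroy 12 --yes-i-really-mean-it
  # physically swap the disk, then redeploy reusing the same OSD id
  ceph-volume lvm create --osd-id 12 --data /dev/sdX

With "destroy" the OSD id and its place in the CRUSH map are kept, so the replacement slots back into the same position in the cluster.)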
On 8 Dec 2020, at 15:13, huxiaoyu(a)horebdata.cn wrote:
Hi, dear cephers,
On one Ceph node I have a failing disk, whose SMART information signals an
impending failure but which is still available for reads and writes. I am
setting up a new disk on the same node to replace it.
What is the best procedure to migrate (or copy) the data from the failing
OSD to the new one?
Is there any standard method to copy the OSD from one disk to another?
best regards,
samuel
huxiaoyu(a)horebdata.cn
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io