Hello Torkil,
It would help if you provided the whole "ceph osd df tree" and "ceph pg ls" outputs.
On Sat, Mar 23, 2024 at 4:26 PM Torkil Svensgaard <torkil(a)drcmr.dk> wrote:
Hi
We have this after adding some hosts and changing the crush failure domain
to datacenter:
pgs: 1338512379/3162732055 objects misplaced (42.321%)
5970 active+remapped+backfill_wait
4853 active+clean
11 active+remapped+backfilling
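For reference, a failure domain change like this typically boils down to creating a crush rule with datacenter as the failure domain and pointing the pools at it, roughly as below. The rule name, device class and pool name are placeholders, and this assumes replicated pools; erasure-coded pools go via the erasure code profile instead.

  ceph osd crush rule create-replicated replicated_dc default datacenter hdd
  ceph osd pool set <poolname> crush_rule replicated_dc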
We have 3 datacenters, each with 6 hosts, and ~400 HDD OSDs with DB/WAL on
NVMe. We are using mClock with the high_recovery_ops profile.
What is the bottleneck here? I would have expected a huge number of
simultaneous backfills. Backfill reservation logjam?
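For reference, the knobs in play can be checked, and if needed bumped, roughly like this. osd.0 is just an example daemon, the values are illustrative, and osd_mclock_override_recovery_settings assumes a recent Quincy or Reef release:

  # what an OSD is actually running with
  ceph config show osd.0 osd_mclock_profile
  ceph config show osd.0 osd_max_backfills
  # with mClock, custom backfill/recovery limits are only honoured after enabling the override
  ceph config set osd osd_mclock_override_recovery_settings true
  ceph config set osd osd_max_backfills 3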
Best regards,
Torkil
--
Torkil Svensgaard
Systems Administrator
Danish Research Centre for Magnetic Resonance DRCMR, Section 714
Copenhagen University Hospital Amager and Hvidovre
Kettegaard Allé 30, 2650 Hvidovre, Denmark
--
Alexander E. Patrakov