To speed up the sync, you can increase the number of sync threads by raising rgw_num_async_rados_threads.
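For example, something like the following (a sketch, not verified on this cluster; 64 is an arbitrary illustration value and assumes your radosgw daemons read the client.rgw config section):

    # raise the number of async RADOS threads RGW uses for sync work
    ceph config set client.rgw rgw_num_async_rados_threads 64
    # then restart the radosgw daemons so the new value takes effect

Setting it under [client.rgw.<name>] in ceph.conf should work too.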
On Thu, Jul 23, 2020 at 7:12 PM Casey Bodley <cbodley(a)redhat.com> wrote:
radosgws need to be restarted after the 'sync init' commands before they'll start syncing again.
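Roughly like this, assuming dc02 is the source zone as in the status output below (a sketch; the systemd unit name varies by deployment):

    # on the zone that needs to re-sync, reset the data sync state
    radosgw-admin data sync init --source-zone=dc02
    # then restart each radosgw in that zone so syncing resumes
    systemctl restart ceph-radosgw@rgw.$(hostname -s)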
On Wed, Jul 22, 2020 at 7:16 AM Nghia Viet Tran
<Nghia.Viet.Tran(a)mgm-tp.com> wrote:
Hi everyone,
Our Ceph cluster has been stuck in syncing status for a long time after
executing the radosgw-admin data sync init command. Here is the output of
radosgw-admin sync status:
-----
          realm dcd64504-c445-4810-9b83-851875443bcd (storage)
      zonegroup 313a345a-4886-4cb3-8d06-0fe3919d591a (mastergroup)
           zone 76fc5fe2-9f89-4419-b611-ab275000b358 (dc01)
  metadata sync no sync (zone is master)
      data sync source: cc4e8e55-988a-430e-b1df-4d88f0c81f4f (dc02)
                        syncing
                        full sync: 117/128 shards
                        full sync: 3 buckets to sync
                        incremental sync: 11/128 shards
                        data is behind on 117 shards
                        behind shards: [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127]
-----
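For reference, the progress of an individual lagging shard can be inspected with something like the command below (a sketch; shard 0 is taken from the behind-shards list above):

    radosgw-admin data sync status --source-zone=dc02 --shard-id=0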
Can anyone give me a hint to let the synchronization job finish?
Many thanks!
--
Nghia Viet Tran (Mr)
mgm technology partners Vietnam Co. Ltd
7 Phan Châu Trinh
Đà Nẵng, Vietnam
+84 935905659
nghia.viet.tran@mgm-tp.com
www.mgm-tp.com<https://www.mgm-tp.com/en/>
Visit us on LinkedIn<https://www.linkedin.com/company/mgm-technology-partners-vietnam-co-ltd>
and Facebook<https://www.facebook.com/mgmTechnologyPartnersVietnam>!
Innovation Implemented.
General Director: Frank Müller
Registered office: 7 Pasteur, Hải Châu 1, Hải Châu, Đà Nẵng
MST/Tax 0401703955
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io