Christian;
Do the second site's RGW instance(s) have access to the first site's OSDs? Is the
reverse true?
It's been a while since I set up the multi-site sync between our clusters, but I seem
to remember that, while metadata is exchanged RGW1<-->RGW2, data is exchanged
OSD1<-->RGW2.
Anyone else on the list, PLEASE correct me if I'm wrong.
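Either way, it may be worth confirming that the endpoints each zone advertises are actually reachable from the other site's RGW hosts, and checking for recorded sync errors. A rough sketch of what I would run (the URL below is just a placeholder for whatever your zonegroup lists):
# radosgw-admin zonegroup get          (the "endpoints" of each zone must be reachable from the peer site's RGWs)
# curl -i http://first-site-rgw.example.com:8080          (placeholder URL; the point is only that it answers at all)
# radosgw-admin sync error list          (any errors the sync threads have logged so far)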
Thank you,
Dominic L. Hilsbos, MBA
Vice President – Information Technology
Perform Air International Inc.
DHilsbos@PerformAir.com
www.PerformAir.com
-----Original Message-----
From: Christian Rohmann [mailto:christian.rohmann@inovex.de]
Sent: Friday, June 25, 2021 9:25 AM
To: ceph-users@ceph.io
Subject: [ceph-users] rgw multisite sync not syncing data, error:
RGW-SYNC:data:init_data_sync_status: ERROR: failed to read remote data log shards
Hey ceph-users,
I set up multisite sync between two freshly installed Octopus clusters.
In the first cluster I created a bucket with some data just to test the
replication of actual data later.
I then followed the instructions on
https://docs.ceph.com/en/octopus/radosgw/multisite/#migrating-a-single-site…
to add a second zone.
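For reference, the second zone was added with roughly the commands from that guide (an abbreviated sketch, not a verbatim transcript; access key, secret and endpoint URLs are placeholders):
# radosgw-admin realm pull --url=http://<first-site-rgw> --access-key=<system-user-key> --secret=<system-user-secret>
# radosgw-admin zone create --rgw-zonegroup=obst-fra --rgw-zone=obst-az1 --endpoints=http://<second-site-rgw> --access-key=<system-user-key> --secret=<system-user-secret>
# radosgw-admin period update --commit
followed by a restart of the radosgw daemons on the second site.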
Things went well: both zones now happily reach each other and the API endpoints are talking.
The metadata is also already in sync - both sides are happy and I can see that bucket listings and users are "in sync":
# radosgw-admin sync status
          realm 13d1b8cb-dc76-4aed-8578-2ce5d3d010e8 (obst)
      zonegroup 17a06c15-2665-484e-8c61-cbbb806e11d2 (obst-fra)
           zone 6d2c1275-527e-432f-a57a-9614930deb61 (obst-rgn)
  metadata sync no sync (zone is master)
      data sync source: c07447eb-f93a-4d8f-bf7a-e52fade399f3 (obst-az1)
                        init
                        full sync: 128/128 shards
                        full sync: 0 buckets to sync
                        incremental sync: 0/128 shards
                        data is behind on 128 shards
                        behind shards: [0...127]
and on the other side ...
# radosgw-admin sync status
          realm 13d1b8cb-dc76-4aed-8578-2ce5d3d010e8 (obst)
      zonegroup 17a06c15-2665-484e-8c61-cbbb806e11d2 (obst-fra)
           zone c07447eb-f93a-4d8f-bf7a-e52fade399f3 (obst-az1)
  metadata sync syncing
                full sync: 0/64 shards
                incremental sync: 64/64 shards
                metadata is caught up with master
      data sync source: 6d2c1275-527e-432f-a57a-9614930deb61 (obst-rgn)
                        init
                        full sync: 128/128 shards
                        full sync: 0 buckets to sync
                        incremental sync: 0/128 shards
                        data is behind on 128 shards
                        behind shards: [0...127]
Also, newly created buckets (read: their metadata) are synced.
What is apparently not working is the sync of actual data.
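The per-source data sync state can also be queried directly; I believe this is the relevant subcommand:
# radosgw-admin data sync status --source-zone obst-rgn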
Upon startup the radosgw on the second site shows:
2021-06-25T16:15:06.445+0000 7fe71eff5700 1 RGW-SYNC:meta: start
2021-06-25T16:15:06.445+0000 7fe71eff5700 1 RGW-SYNC:meta: realm epoch=2 period id=f4553d7c-5cc5-4759-9253-9a22b051e736
2021-06-25T16:15:11.525+0000 7fe71dff3700 0 RGW-SYNC:data:sync:init_data_sync_status: ERROR: failed to read remote data log shards
Also, when issuing
# radosgw-admin data sync init --source-zone obst-rgn
it throws
2021-06-25T16:20:29.167+0000 7f87c2aec080 0 RGW-SYNC:data:init_data_sync_status: ERROR: failed to read remote data log shards
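My next idea is to re-run that command with verbose logging to see which remote request actually fails, something along the lines of (assuming the usual debug overrides also apply to radosgw-admin):
# radosgw-admin data sync init --source-zone obst-rgn --debug-rgw=20 --debug-ms=1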
Does anybody have any hints on where to look for what could be broken here?
Thanks a bunch,
Regards
Christian