Debugging a bit further, all sites show many stale bucket instances which can't be removed
due to the multisite limitation ☹ in Octopus 15.2.7.
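For reference, the stale instances show up with the reshard stale-instances listing
(roughly, from memory):

radosgw-admin reshard stale-instances list

and as far as I understand, the matching "radosgw-admin reshard stale-instances rm" is
refused on a multisite deployment, which is the limitation I mean.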
-----Original Message-----
From: Szabo, Istvan (Agoda) <Istvan.Szabo(a)agoda.com>
Sent: Monday, January 25, 2021 11:51 AM
To: ceph-users(a)ceph.io
Subject: [ceph-users] Re: Multisite bucket data inconsistency
Hmm,
Looks like attached screenshots are not allowed, so in text: in HKG we have 19 million
objects, in ash we have 32 million.
-----Original Message-----
From: Szabo, Istvan (Agoda) <Istvan.Szabo(a)agoda.com>
Sent: Monday, January 25, 2021 11:44 AM
To: ceph-users(a)ceph.io
Subject: [ceph-users] Multisite bucket data inconsistency
Hi,
We have bucket sync enabled and it seems to be inconsistent ☹
This is the master zone's sync status for that specific bucket:
          realm 5fd28798-9195-44ac-b48d-ef3e95caee48 (realm)
      zonegroup 31a5ea05-c87a-436d-9ca0-ccfcbad481e3 (data)
           zone 9213182a-14ba-48ad-bde9-289a1c0c0de8 (hkg)
  metadata sync no sync (zone is master)
      data sync source: 61c9d940-fde4-4bed-9389-edc8d7741817 (sin)
                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is caught up with source
                source: f20ddd64-924b-4f78-8d2d-dd6c65f98ba9 (ash)
                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is behind on 126 shards
                        behind shards:
                          [0,1,2,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127]
                        oldest incremental change not applied: 2021-01-25T11:32:57.726042+0700 [62]
                        104 shards are recovering
                        recovering shards:
                          [0,2,3,4,5,7,8,9,10,11,12,13,15,16,17,18,19,20,21,22,24,25,26,27,28,29,31,32,33,36,37,38,39,40,42,43,44,45,47,50,51,52,53,54,55,57,58,61,63,65,66,67,68,69,70,71,72,73,74,75,76,78,80,81,82,83,84,85,87,88,90,92,93,95,96,97,98,99,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,123,124,125,126,127]
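For reference, I believe the per-bucket view of this can be checked with something along
these lines, using the bucket from the policy further down:

radosgw-admin bucket sync status --bucket=seo..prerender --source-zone=ash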
This is the secondary zone where the data has been uploaded:
          realm 5fd28798-9195-44ac-b48d-ef3e95caee48 (realm)
      zonegroup 31a5ea05-c87a-436d-9ca0-ccfcbad481e3 (data)
           zone f20ddd64-924b-4f78-8d2d-dd6c65f98ba9 (ash)
  metadata sync syncing
                full sync: 0/64 shards
                incremental sync: 64/64 shards
                metadata is caught up with master
      data sync source: 61c9d940-fde4-4bed-9389-edc8d7741817 (sin)
                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is caught up with source
                source: 9213182a-14ba-48ad-bde9-289a1c0c0de8 (hkg)
                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is behind on 125 shards
                        behind shards:
                          [0,1,2,3,4,5,6,8,9,10,11,12,13,14,15,16,17,18,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127]
                        oldest incremental change not applied: 2021-01-25T11:29:32.450031+0700 [61]
                        126 shards are recovering
                        recovering shards:
                          [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,104,105,106,107,108,109,110,111,112,114,115,116,117,118,119,120,121,122,123,124,125,126,127]
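I have not gone through the sync error log yet; I assume that would be something like the
following, run on both zones:

radosgw-admin sync error list

Is that the right next step here?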
The pipes are already there:
"id": "seo-2",
"data_flow": {
"symmetrical": [
{
"id": "seo-2-flow",
"zones": [
"9213182a-14ba-48ad-bde9-289a1c0c0de8",
"f20ddd64-924b-4f78-8d2d-dd6c65f98ba9"
]
}
]
},
"pipes": [
{
"id": "seo-2-hkg-ash-pipe",
"source": {
"bucket": "seo..prerender",
"zones": [
"9213182a-14ba-48ad-bde9-289a1c0c0de8"
]
},
"dest": {
"bucket": "seo..prerender",
"zones": [
"f20ddd64-924b-4f78-8d2d-dd6c65f98ba9"
]
},
"params": {
"source": {
"filter": {
"tags": []
}
},
"dest": {},
"priority": 0,
"mode": "system",
"user": ""
}
},
{
"id": "seo-2-ash-hkg-pipe",
"source": {
"bucket": "seo..prerender",
"zones": [
"f20ddd64-924b-4f78-8d2d-dd6c65f98ba9"
]
},
"dest": {
"bucket": "seo..prerender",
"zones": [
"9213182a-14ba-48ad-bde9-289a1c0c0de8"
]
},
"params": {
"source": {
"filter": {
"tags": []
}
},
"dest": {},
"priority": 0,
"mode": "system",
"user": ""
}
}
],
"status": "enabled"
}
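As a sanity check, I believe the pipes the bucket actually resolves to can be shown with
something like:

radosgw-admin sync info --bucket=seo..prerender

which, if I understand the bucket-level sync policy correctly, should list the sources and
dests derived from the group above.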
Any ideas on how to troubleshoot this?
Thank you
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io