It is RGW, but the index is on a different pool. We're not seeing any keys/s being
reported in recovery. We've definitely had OSDs flap multiple times.
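For anyone else chasing this: besides watching the keys/s rate in `ceph -s`, you can compare the per-PG `num_keys_recovered` counter from `ceph pg dump --format json` across two samples to see whether omap keys are actually moving. A rough sketch, not a polished tool; it assumes the `stat_sum` field names as they appear in Quincy, and the top-level JSON layout varies a bit between releases, hence the `pg_map` fallback:

```python
import json

def pgs_with_omap_recovery(dump_json: str):
    """Given `ceph pg dump --format json` output, return (pgid, keys, objects)
    for PGs whose counters show omap (key) recovery activity.

    Assumptions: stat_sum field names as in Quincy; some releases nest
    pg_stats under "pg_map" and some do not, so we check both layouts.
    """
    data = json.loads(dump_json)
    stats = data.get("pg_map", data).get("pg_stats", [])
    result = []
    for pg in stats:
        s = pg.get("stat_sum", {})
        if s.get("num_keys_recovered", 0) > 0:
            result.append((pg["pgid"],
                           s["num_keys_recovered"],
                           s.get("num_objects_recovered", 0)))
    return result
```

Running it twice a few minutes apart and diffing the counts tells you which PGs are recovering keys even when the misplaced column reads zero.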
David
On Wed, Apr 24, 2024, at 16:48, Anthony D'Atri wrote:
> Do you see *keys* aka omap traffic? Especially if you have RGW set up?
>
>> On Apr 24, 2024, at 15:37, David Orman <ormandj(a)corenode.com> wrote:
>>
>> Did you ever figure out what was happening here?
>>
>> David
>>
>> On Mon, May 29, 2023, at 07:16, Hector Martin wrote:
>>> On 29/05/2023 20.55, Anthony D'Atri wrote:
>>>> Check the uptime for the OSDs in question
>>>
>>> I restarted all my OSDs within the past 10 days or so. Maybe OSD
>>> restarts are somehow breaking these stats?
>>>
>>>>
>>>>> On May 29, 2023, at 6:44 AM, Hector Martin <marcan(a)marcan.st> wrote:
>>>>>
>>>>> Hi,
>>>>>
>>>>> I'm watching a cluster finish a bunch of backfilling, and I noticed
>>>>> that quite often PGs end up with zero misplaced objects, even though
>>>>> they are still backfilling.
>>>>>
>>>>> Right now the cluster is down to 6 backfilling PGs:
>>>>>
>>>>>   data:
>>>>>     volumes: 1/1 healthy
>>>>>     pools:   6 pools, 268 pgs
>>>>>     objects: 18.79M objects, 29 TiB
>>>>>     usage:   49 TiB used, 25 TiB / 75 TiB avail
>>>>>     pgs:     262 active+clean
>>>>>              6   active+remapped+backfilling
>>>>>
>>>>> But there are no misplaced objects, and the misplaced column in
>>>>> `ceph pg dump` is zero for all PGs.
>>>>>
>>>>> If I do a `ceph pg dump_json`, I can see `num_objects_recovered`
>>>>> increasing for these PGs... but the misplaced count is still 0.
>>>>>
>>>>> Is there something else that would cause recoveries/backfills other
>>>>> than misplaced objects? Or perhaps there is a bug somewhere causing the
>>>>> misplaced object count to be misreported as 0 sometimes?
>>>>>
>>>>> # ceph -v
>>>>> ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)
>>>>>
>>>>> - Hector
>>>>> _______________________________________________
>>>>> ceph-users mailing list -- ceph-users(a)ceph.io
>>>>> To unsubscribe send an email to ceph-users-leave(a)ceph.io
>>>>
>>>>
>>>
>>> - Hector