Hi Matthias,
thanks for the info
All PGs are active+clean.
With some help, we think it is related to this issue:
https://tracker.ceph.com/issues/48946
osdmap_first_committed is trimming again, so the committed range is much smaller now.
I'm not sure it is fully fixed, so we are watching it closely.
Currently:
ceph report |grep "osdmap_.*_committed"
report 3319096257
"osdmap_first_committed": 303919,
"osdmap_last_committed": 304671,
Thanks Joe
>>> Matthias Grandl <matthias.grandl(a)croit.io> 5/6/2021 10:39 PM >>>
Hi Joe,
are all PGs active+clean? If not, you will only get osdmap pruning, which
will try to keep only every 10th osdmap.
https://docs.ceph.com/en/latest/dev/mon-osdmap-prune/
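For a quick check of both conditions (the config queries assume the options live in the mon config database; the option names are the ones from the linked doc):

ceph pg stat
ceph config get mon mon_osdmap_full_prune_enabled
ceph config get mon mon_osdmap_full_prune_min
ceph config get mon mon_osdmap_full_prune_interval

If ceph pg stat reports anything other than active+clean, the mons will only prune old osdmaps rather than trim them.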
If you have remapped PGs and urgently need to get rid of osdmaps, you can
try the upmap-remapped script to get to a pseudo-clean state.
https://github.com/HeinleinSupport/cern-ceph-scripts/blob/master/tools/upma…
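Rough sketch of how that script is typically used (review the generated commands before applying anything; see the script's own comments for the exact workflow):

# upmap requires clients that understand it
ceph osd set-require-min-compat-client luminous
# first print the ceph osd pg-upmap-items commands it would run
./upmap-remapped.py
# then, if they look sane, apply them
./upmap-remapped.py | sh

That maps the remapped PGs back to their current OSDs via upmap entries, so the cluster reports active+clean and the mons can start trimming osdmaps again.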
Matthias Grandl
Head of UX
croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263
Web: https://croit.io
On Fri, May 7, 2021, 02:16 Joe Comeau <Joe.Comeau(a)hli.ubc.ca> wrote:
Our Nautilus cluster is not trimming osdmaps.
ceph 14.2.16
ceph report |grep "osdmap_.*_committed"
report 1175349142
"osdmap_first_committed": 285562,
"osdmap_last_committed": 304247,
We've set osd_map_cache_size = 20000,
but the gap is slowly growing towards that value as well.
osdmap_first_committed is not changing, for some strange reason.
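(To confirm the setting actually took effect, it can be read back like this; the daemon form assumes it is run on a host where osd.0 lives.)

ceph daemon osd.0 config get osd_map_cache_size
ceph config get osd osd_map_cache_size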
The cluster has been around since Firefly or Jewel and has been upgraded along the way.
I have seen a few others with this problem, but no solution to it.
Any suggestions?
Thanks Joe
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io