Ok thanks, very clear. I am indeed also within this range.
-----Original Message-----
Subject: Re: [ceph-users] Re: Massive Mon DB Size with noout on 14.2.11
The important metric is the difference between these two values:
# ceph report | grep osdmap | grep committed
report 3324953770
"osdmap_first_committed": 3441952,
"osdmap_last_committed": 3442452,
The mon stores osdmaps on disk and trims the older versions whenever
the PGs are clean. Trimming brings osdmap_first_committed closer to
osdmap_last_committed.
In a cluster with no PGs backfilling or recovering, the mon should trim
that difference to be within 500-750 epochs.
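If you want to keep an eye on that, something like the sketch below
(assuming jq; 750 is simply the upper end of the range above) could be
dropped into cron or your monitoring:

#!/bin/sh
# Sketch: warn when the mon is holding more osdmap epochs than expected,
# which usually means trimming is blocked by unclean PGs.
gap=$(ceph report 2>/dev/null | jq '.osdmap_last_committed - .osdmap_first_committed')
if [ "$gap" -gt 750 ]; then
    echo "WARNING: mon holds $gap osdmap epochs, trimming may be blocked"
fi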
If there are any PGs backfilling or recovering, then the mon will not
trim beyond the osdmap epoch when the pools were clean.
So if you are accumulating gigabytes of data in the mon dir, it suggests
that you have unclean PGs/Pools.
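If that is the case, you can confirm it by looking for stuck PGs and by
checking the mon store size directly (the path below assumes the
default mon data dir, with the mon id being the short hostname):

# ceph pg dump_stuck unclean
# du -sh /var/lib/ceph/mon/ceph-$(hostname -s)/store.db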
Cheers, dan
On Fri, Oct 2, 2020 at 4:14 PM Marc Roos <M.Roos(a)f1-outsourcing.eu>
wrote:
Does this also count if your cluster is not healthy because of errors
like '2 pool(s) have no replicas configured'? I sometimes use these
pools for testing; they are empty.