Hi,
I upgraded from Mimic to Nautilus a while ago and enabled the pg_autoscaler.
When the pg_autoscaler was activated, I got a HEALTH_WARN:
POOL_TARGET_SIZE_BYTES_OVERCOMMITTED 1 subtrees have overcommitted pool target_size_bytes
    Pools ['cephfs_data_reduced', 'cephfs_data', 'cephfs_metadata'] overcommit available storage by 1.460x due to target_size_bytes 0 on pools []
POOL_TARGET_SIZE_RATIO_OVERCOMMITTED 1 subtrees have overcommitted pool target_size_ratio
    Pools ['cephfs_data_reduced', 'cephfs_data', 'cephfs_metadata'] overcommit available storage by 1.460x due to target_size_ratio 0.000 on pools []
Both target_size_bytes and target_size_ratio are set to 0 on all the pools, so I started
to wonder why these warnings appear.
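To be clear about which settings I mean: as far as I understand, these are the per-pool options that are adjusted like this on Nautilus (shown for one pool as an example; all three pools are in the same state):

ceph osd pool set cephfs_data target_size_bytes 0
ceph osd pool set cephfs_data target_size_ratio 0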
My autoscale-status looks like this:
POOL                 SIZE    TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE
cephfs_metadata      16708M               4.0   34465G        0.0019                1.0   8                   warn
cephfs_data_reduced  15506G               2.0   34465G        0.8998                1.0   375                 warn
cephfs_data          6451G                3.0   34465G        0.5616                1.0   250                 warn
So the ratios add up to 1.4633.
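For reference, each RATIO appears to be SIZE * RATE / RAW CAPACITY (that is my reading of the columns, so correct me if it's wrong), which is how I arrive at that sum:

cephfs_metadata:     16708M * 4.0 / 34465G ≈ 0.0019
cephfs_data_reduced: 15506G * 2.0 / 34465G ≈ 0.8998
cephfs_data:          6451G * 3.0 / 34465G ≈ 0.5616
                                       sum ≈ 1.4633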
Isn't a combined ratio of 1.0 across all pools supposed to mean the cluster is full?
I also enabled the Dashboard and saw that the PG Status showed "645% clean" PGs.
This cluster was originally installed with Jewel, so could it be some legacy setting
or the like that is causing this?