Hi,
I'm running a small Nautilus cluster (14.2.2) which was recently
upgraded from Mimic (13.2.6). After the upgrade I enabled the
pg_autoscaler, which changed the pg count on most of the pools. All of
the remapping has completed, but the cluster is still reporting
HEALTH_WARN. I have adjusted the target ratios so that their sum is
< 1.0, but this didn't help. What else can I look at?
Thanks,
James
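
For reference, the ratios were set with commands along these lines
(the values match the TARGET RATIO column in the autoscale-status
output below; they sum to 0.92):

# ceph osd pool set vms1 target_size_ratio 0.2
# ceph osd pool set vms2 target_size_ratio 0.02
# ceph osd pool set vms3 target_size_ratio 0.6
# ceph osd pool set vms4 target_size_ratio 0.1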
# ceph -s
  cluster:
    id:     ...
    health: HEALTH_WARN
            1 subtrees have overcommitted pool target_size_bytes
            1 subtrees have overcommitted pool target_size_ratio

  services:
    mon: 3 daemons, quorum ceph-00,ceph-01,ceph-02 (age 3d)
    mgr: ceph-01(active, since 6d), standbys: ceph-02, ceph-00
    osd: 32 osds: 32 up (since 2d), 32 in (since 2d)
    rgw: 1 daemon active (rgw-00)

  data:
    pools:   14 pools, 1512 pgs
    objects: 4.17M objects, 16 TiB
    usage:   47 TiB used, 69 TiB / 116 TiB avail
    pgs:     1510 active+clean
             2    active+clean+scrubbing+deep
# ceph osd pool autoscale-status (this might wrap horribly...):
POOL                        SIZE  TARGET SIZE  RATE  RAW CAPACITY   RATIO  TARGET RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE
loc.rgw.buckets.index          0                3.0        116.1T  0.0000                 1.0       4              on
vms1                       5318G                3.0        116.1T  0.1341        0.2000   1.0     256              on
vms2                       3419G                3.0        116.1T  0.0862        0.0200   1.0      64              on
.rgw.root                  3648k                3.0        116.1T  0.0000                 1.0       4              on
default.rgw.meta          384.0k                3.0        116.1T  0.0000                 1.0       4              on
lov.rgw.log               384.0k                3.0        116.1T  0.0000                 1.0       4              on
vms3                      35799G                3.0        116.1T  0.9028        0.6000   1.0    1024              on
default.rgw.control            0                3.0        116.1T  0.0000                 1.0       4              on
loc.rgw.meta              768.5k                3.0        116.1T  0.0000                 1.0       4              on
vms4                       2306G                3.0        116.1T  0.0582        0.1000   1.0     128              on
loc.rgw.buckets.non-ec    200.4k                3.0        116.1T  0.0000                 1.0       4              on
loc.rgw.buckets.data      56390M                3.0        116.1T  0.0014                 1.0       4              on
loc.rgw.control                0                3.0        116.1T  0.0000                 1.0       4              on
default.rgw.log                0                3.0        116.1T  0.0000                 1.0       4              on
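
For what it's worth, none of the pools show a TARGET SIZE above, so
I'm not sure what is triggering the target_size_bytes warning. This is
roughly what I've been running to dig into it (I'm not certain
target_size_bytes shows up in 'ls detail' on this release):

# ceph health detail
# ceph osd pool ls detail | grep -i target_size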