Hello Users,
I've also got a cluster that was recently upgraded from v18.2.0 to v18.2.1,
where the autoscaler appears to be working fine. Not sure if I'm missing
something here.
ceph> version
ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)
ceph> status
  cluster:
    id:     80468512-289d-11ee-a043-314ee4b0dffc
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum cs418 (age 6d)
    mgr: cs418.qihtft(active, since 6d), standbys: cs418.wksgkz
    mds: 1/1 daemons up, 1 standby
    osd: 25 osds: 25 up (since 6d), 25 in (since 5M)

  data:
    volumes: 1/1 healthy
    pools:   5 pools, 1201 pgs
    objects: 212.12k objects, 814 GiB
    usage:   1.6 TiB used, 33 TiB / 35 TiB avail
    pgs:     1201 active+clean

  io:
    client: 39 KiB/s rd, 979 KiB/s wr, 50 op/s rd, 158 op/s wr
ceph> osd pool autoscale-status
POOL                      SIZE    TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE  BULK
.mgr                      14148k               2.0   35909G        0.0000                                 1.0        1              on         False
.nfs                      13024                2.0   35909G        0.0000                                 1.0       32              on         False
cephfs.cloudstack.meta    24290k               2.0   35909G        0.0000                                 4.0       16              on         False
cephfs.cloudstack.data    14431M               2.0   35909G        0.0008                                 1.0      128              off        False
cloudstack                771.0G               2.0   35909G        0.0429                                 1.0     1024              on         True
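(If I'm reading the columns right, RATIO is SIZE x RATE / RAW CAPACITY;
e.g. for the cloudstack pool, 771.0G x 2.0 / 35909G ≈ 0.0429, which matches
the output above.)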
Thanks,
Jayanth
On Mon, Dec 25, 2023 at 9:27 PM Jayanth Reddy <jayanthreddy5666(a)gmail.com>
wrote:
Hello Users,
I deployed a new cluster with v18.2.1 but noticed that pg_num and pgp_num
always remained 1 for the pools with autoscale turned on. Below are the
environment details and the relevant output:
ceph> version
ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)
ceph> status
  cluster:
    id:     273c8410-a333-11ee-b3c2-9791c3098e2b
    health: HEALTH_WARN
            clock skew detected on mon.ec-rgw-s3

  services:
    mon: 3 daemons, quorum ec-rgw-s1,ec-rgw-s2,ec-rgw-s3 (age 48m)
    mgr: ec-rgw-s1.icpgxx(active, since 69m), standbys: ec-rgw-s2.quzjfv
    osd: 3 osds: 3 up (since 29m), 3 in (since 49m)
    rgw: 1 daemon active (1 hosts, 1 zones)

  data:
    pools:   8 pools, 23 pgs
    objects: 1.85k objects, 6.0 GiB
    usage:   4.8 GiB used, 595 GiB / 600 GiB avail
    pgs:     23 active+clean
ceph> osd pool get noautoscale
noautoscale is off
ceph> osd pool autoscale-status
ceph> osd pool autoscale-status
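For what it's worth, since autoscale-status prints nothing at all here, the
next thing I plan to check is the pg_autoscaler module itself (it runs inside
the mgr as an always-on module). Roughly, using the standard ceph CLI, with
nothing cluster-specific:

ceph mgr module ls               # pg_autoscaler should be listed among the always-on modules
ceph health detail               # look for any MGR_MODULE_ERROR mentioning pg_autoscaler
ceph mgr fail                    # fail over to the standby mgr, then re-run:
ceph osd pool autoscale-status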
ceph> osd pool ls detail
pool 1 '.mgr' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 21 flags hashpspool stripe_width 0 pg_num_max 32 pg_num_min 1 application mgr read_balance_score 3.00
pool 2 'default.rgw.buckets.data' erasure profile ec-21 size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 16 pgp_num 16 autoscale_mode off last_change 111 lfor 0/0/55 flags hashpspool stripe_width 8192 compression_algorithm lz4 compression_mode force application rgw
pool 3 '.rgw.root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 34 flags hashpspool stripe_width 0 application rgw read_balance_score 3.00
pool 4 'default.rgw.log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 37 flags hashpspool stripe_width 0 application rgw read_balance_score 3.00
pool 5 'default.rgw.control' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 39 flags hashpspool stripe_width 0 application rgw read_balance_score 3.00
pool 6 'default.rgw.meta' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 41 flags hashpspool stripe_width 0 pg_autoscale_bias 4 application rgw read_balance_score 3.00
pool 7 'default.rgw.buckets.index' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 44 flags hashpspool stripe_width 0 pg_autoscale_bias 4 application rgw read_balance_score 3.00
pool 8 'default.rgw.buckets.non-ec' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 47 flags hashpspool stripe_width 0 application rgw read_balance_score 3.00
I'd manually changed pg_num for pool ID 2, which is why its autoscale_mode
shows off above.
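For reference, the change was along these lines (standard ceph CLI; 16 is
just the value I picked for this small cluster):

ceph osd pool set default.rgw.buckets.data pg_autoscale_mode off
ceph osd pool set default.rgw.buckets.data pg_num 16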
Is this by any chance due to PR [1]?
[1]
https://github.com/ceph/ceph/pull/53658
Thanks,
Jayanth