pg_num and pgp_num need to be the same, no?
3.5.1. Set the Number of PGs
To set the number of placement groups in a pool, you must specify the
number of placement groups at the time you create the pool. See Create a
Pool for details. Once you set placement groups for a pool, you can
increase the number of placement groups (but you cannot decrease the
number of placement groups). To increase the number of placement groups,
execute the following:
ceph osd pool set {pool-name} pg_num {pg_num}
Once you increase the number of placement groups, you must also increase
the number of placement groups for placement (pgp_num) before your
cluster will rebalance. The pgp_num should be equal to the pg_num. To
increase the number of placement groups for placement, execute the
following:
ceph osd pool set {pool-name} pgp_num {pgp_num}
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/4/html/s…
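For example, a minimal sketch assuming a hypothetical pool named 'mypool' that you want to grow to 1024 PGs (substitute your own pool name and target):

ceph osd pool set mypool pg_num 1024   # raise pg_num first
ceph osd pool set mypool pgp_num 1024  # then raise pgp_num to match
ceph osd pool get mypool pg_num        # show current pg_num
ceph osd pool get mypool pgp_num       # show current pgp_num

The two get commands just print the current values so you can confirm that pg_num and pgp_num match.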
-----Original Message-----
To: norman
Cc: ceph-users
Subject: [ceph-users] Re: pool pgp_num not updated
Hi everyone,
I'm seeing a similar issue here. Any ideas on this?
Mac Wynkoop,
On Sun, Sep 6, 2020 at 11:09 PM norman <norman.kern(a)gmx.com> wrote:
Hi guys,
When I updated the pg_num of a pool, I found it did not work (no
rebalancing happened). Does anyone know the reason? The pools' info:
pool 21 'openstack-volumes-rs' replicated size 3 min_size 2 crush_rule
21 object_hash rjenkins pg_num 1024 pgp_num 512 pgp_num_target 1024
autoscale_mode warn last_change 85103 lfor 82044/82044/82044 flags
hashpspool,nodelete,selfmanaged_snaps stripe_width 0 application rbd
removed_snaps
[1~1e6,1e8~300,4e9~18,502~3f,542~11,554~1a,56f~1d7]
pool 22 'openstack-vms-rs' replicated size 3 min_size 2 crush_rule 22
object_hash rjenkins pg_num 512 pgp_num 512 pg_num_target 256
pgp_num_target 256 autoscale_mode warn last_change 84769 lfor
0/0/55294 flags hashpspool,nodelete,selfmanaged_snaps stripe_width 0
application rbd
The pgp_num_target is set, but pgp_num is not updated.
I had scaled out new OSDs and the cluster was still backfilling before I
set the value. Could that be the reason?
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io