Hello Mark,
Ceph itself does this incrementally. Just set the final value you want to
end up with, and wait for Ceph to get there on its own.
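
For example, assuming you are on Nautilus or newer and taking "data" as a
placeholder pool name, a minimal sketch would be:

  # set the final target in one step; Ceph splits the PGs gradually
  ceph osd pool set data pg_num 512

Ceph then raises pg_num/pgp_num in small steps on its own, throttled so that
the fraction of misplaced objects stays below target_max_misplaced_ratio
(default 0.05). You can watch the progress with:

  ceph osd pool get data pgp_num
  ceph -s

If you want the rebalancing to be even gentler during busy hours,
osd_max_backfills and osd_recovery_max_active are the usual knobs for
limiting the impact.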
--
Martin Verges
Managing director
Mobile: +49 174 9335695
E-Mail: martin.verges@croit.io
Chat: https://t.me/MartinVerges

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263
Web: https://croit.io
YouTube: https://goo.gl/PGE1Bx
On Sun, Feb 21, 2021 at 11:34 PM Mark Johnson <markj@iovox.com> wrote:
>
> Hi,
>
> Probably a basic/stupid question, but I'm asking anyway. Through lack of
> knowledge and experience at the time, when we set up our pools, the pool
> that holds the majority of our data was created with a PG/PGP num of 64. As
> the amount of data has grown, this has started causing issues with the
> balance of data across OSDs. I want to increase the PG count to at least
> 512, or maybe 1024 - obviously, I want to do this incrementally. However,
> rather than going from 64 to 128, then 256, etc., I'm considering doing
> this in much smaller increments over a longer period of time, so that it
> will hopefully do the majority of the data movement during the quieter time
> of day. So, I may start by going in increments of 4 until I get up to 128,
> and then go in jumps of 8, and so on.
>
> My question is, will I still end up with the same net result going in
> increments of 4 until I hit 128 as I would if I were to go straight to 128
> in one hit? What I mean by that is, once I reach 128, would I have the
> exact same level of data balance across PGs as I would if I went straight
> to 128? Are there any drawbacks to going up in small increments over a long
> period of time? I know that I'll have uneven PG sizes until I get to that
> power of 2, but that should be OK as long as the end result is the desired
> result. I suspect I may have a greater amount of data moving around overall
> doing it this way, but given that my goal is to reduce the amount of
> intensive data movement during higher-traffic times, that's not a huge
> concern in the grand scheme of things.
>
> Thanks in advance,
> Mark
> _______________________________________________
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-leave@ceph.io