Hi Nathan
Thanks for the reply.
root@ceph1 16:30 [~]: ceph osd pool autoscale-status
POOL      SIZE   TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE
ec82pool  2886T               1.25  4732T         0.7625                                 1.0   16384   4096        warn
We increased pg_num to 16384 back in August in anticipation of adding 200 more
OSDs to the system over the next 6-12 months, which is why the autoscaler now
wants to take us back down to 4096. Maybe we should have gone for 8192?
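If the right answer is to tell the autoscaler about the planned growth rather
than ignore the warning, my understanding is we could either set a target
ratio on the pool or walk pg_num back down, along the lines of (the 0.9 is
just a placeholder value, not something we've settled on):

  ceph osd pool set ec82pool target_size_ratio 0.9

or

  ceph osd pool set ec82pool pg_num 8192

Happy to be corrected if there's a better approach.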
Cheers
Toby
On 11/23/20 6:02 PM, Nathan Fish wrote:
What does "ceph osd pool autoscale-status"
report?
On Mon, Nov 23, 2020 at 12:59 PM Toby Darling <toby(a)mrc-lmb.cam.ac.uk> wrote:
>
> Hi
>
> We're having problems getting our erasure-coded ec82pool to balance via upmap.
> All 554 daemons report "ceph version 14.2.11 (f7fdb2f52131f54b891a2ec99d8205561242cdaf)
> nautilus (stable)".
>
> The pool consists of 20 nodes in 10 racks, each rack containing a pair of
> nodes: one with 45 x 8TB drives and one with 10 x 16TB drives.
>
> https://pastebin.com/YLwu8VVi
>
> The problem is the 8TB drives are roughly 62-74% full, while the 16TB
> drives are 84-87% full.
>
> https://pastebin.com/j7Dx883i
>
> Neither osdmaptool nor reweight-by-utilization is able to improve the
> distribution.
>
> There's an osdmap at ftp://ftp.mrc-lmb.cam.ac.uk/pub/toby/osdmap.2135441.
>
> Any thoughts/pointers much appreciated.
>
> Cheers
> Toby
> --
> Toby Darling, Scientific Computing (2N249)
> MRC Laboratory of Molecular Biology
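(On the osdmaptool attempts mentioned above: the sort of invocation we've been
experimenting with looks roughly like the following, where the output file
name is arbitrary and the deviation/max values are only examples, not tuned
settings:

  osdmaptool osdmap.2135441 --upmap upmaps.sh --upmap-pool ec82pool --upmap-deviation 1 --upmap-max 100

and then reviewing the generated pg-upmap-items commands in upmaps.sh before
applying them.)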
--
Toby Darling, Scientific Computing (2N249)
MRC Laboratory of Molecular Biology
https://www.mrc-lmb.cam.ac.uk/scicomp/