> So perhaps we'll need to change the OSD to allow for 500 or 1000 PGs
We had a support case last year where we were forced to raise the
per-OSD PG limit to >4000 for a few days, with more than 4k active PGs
on that single OSD. You can do that; however, it is quite uncommon.
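
For reference, a minimal sketch of what such a temporary override can
look like on a release with the centralized config database (Mimic or
later); option names and defaults vary between versions, so treat this
as an illustration rather than a recipe:

    # show the current PG count per OSD (the PGS column)
    ceph osd df tree

    # temporarily raise the per-OSD PG limit cluster-wide
    # (the default is 250 on recent releases)
    ceph config set global mon_max_pg_per_osd 4500

    # drop the override again once the extra PGs have drained
    ceph config rm global mon_max_pg_per_osd

    # and if per-OSD targets ever grow to 500 or 1000 PGs, as quoted
    # above, pool pg_num values have to scale up with them:
    ceph osd pool set <pool> pg_num <target>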
--
Martin Verges
Managing director
Mobile: +49 174 9335695
E-Mail: martin.verges@croit.io
Chat: https://t.me/MartinVerges
croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263
Web: https://croit.io
YouTube: https://goo.gl/PGE1Bx
On Sat, 13 Mar 2021 at 14:29, Dan van der Ster <dan@vanderster.com> wrote:
>
> On Fri, Mar 12, 2021 at 6:35 PM Robert Sander
> <r.sander@heinlein-support.de> wrote:
> >
> > On 12.03.21 at 18:30, huxiaoyu@horebdata.cn wrote:
> >
> > > Any other aspects on the limits of bigger capacity hard disk drives?
> >
> > Recovery will take longer, increasing the risk of another failure
> > within the same window.
> >
>
> Another limitation is that OSDs should store roughly 100 PGs each
> regardless of their size, so on bigger drives those PGs will each
> need to store many more objects, and therefore recovery, scrubbing,
> removal, listing, etc. will all take longer and longer.
>
> So perhaps we'll need to change the OSD to allow for 500 or 1000 PGs
> per OSD eventually (meaning that the PG count per cluster needs to
> scale up too!)
>
> Cheers, Dan
>
> > Regards
> > --
> > Robert Sander
> > Heinlein Support GmbH
> > Schwedter Str. 8/9b, 10119 Berlin
> >
> >
> > http://www.heinlein-support.de
> >
> > Tel: 030 / 405051-43
> > Fax: 030 / 405051-19
> >
> > Mandatory information per §35a GmbHG:
> > HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
> > Managing director: Peer Heinlein -- Registered office: Berlin
> >
> > _______________________________________________
> > ceph-users mailing list -- ceph-users@ceph.io
> > To unsubscribe send an email to ceph-users-leave@ceph.io
> _______________________________________________
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-leave@ceph.io