On Tue, Oct 1, 2019 at 6:12 PM Darrell Enns <darrelle(a)knowledge.ca> wrote:
> The standard advice is “1GB RAM per 1TB of OSD”. Does this actually still hold with large
> OSDs on bluestore?
No
> Can it be reasonably reduced with tuning?
Yes
> From the docs, it looks like bluestore should target the “osd_memory_target” value by
> default. This is a fixed value (4GB by default), which does not depend on OSD size. So
> shouldn’t the advice really be “4GB per OSD”, rather than “1GB per TB”? Would it also be
> reasonable to reduce osd_memory_target for further RAM savings?
Yes
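
For example, a minimal sketch for capping each OSD at 2 GB (the value is in bytes;
the runtime command assumes a Nautilus-style centralized config):

    # in ceph.conf on the OSD hosts:
    [osd]
    osd_memory_target = 2147483648

    # or at runtime, for all OSDs at once:
    ceph config set osd osd_memory_target 2147483648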
> For example, suppose we have 90 12TB OSD drives:
Please don't put 90 drives in one node, that's not a good idea in
99.9% of the use cases.
> “1GB per TB” rule: 1080GB RAM
> “4GB per OSD” rule: 360GB RAM
> “2GB per OSD” (osd_memory_target reduced to 2GB): 180GB RAM
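
Your arithmetic checks out, for the record:

    90 OSDs x 12 TB x 1 GB/TB = 1080 GB
    90 OSDs x 4 GB            =  360 GB
    90 OSDs x 2 GB            =  180 GB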
> Those are some massively different RAM values. Perhaps the old advice was for filestore?
> Or is there something to consider beyond the bluestore memory target? What about when
> using very dense nodes (for example, 60 12TB OSDs on a single node)?
Keep in mind that it's only a target value; the OSD will use more memory than that
during recovery if you set a low value.
We usually set a target of 3 GB per OSD and recommend 4 GB of RAM per OSD.
RAM saving trick: use fewer PGs than recommended.
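
Each PG keeps its own metadata and pglog in OSD memory, so fewer PGs means a smaller
baseline footprint. A sketch (pool name and PG counts are placeholders, not a
recommendation for your cluster):

    # create a pool with fewer PGs than the usual sizing guidance would suggest:
    ceph osd pool create mypool 64 64

    # or shrink an existing pool (Nautilus and later support pg_num decreases):
    ceph osd pool set mypool pg_num 64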
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at
https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90