Hi,
Quoting Frank Schilder (frans(a)dtu.dk):
> I recently upgraded from 13.2.2 to 13.2.8 and observe two changes
> that I struggle with:
> - from the release notes: "The bluestore_cache_* options are no
>   longer needed. They are replaced by osd_memory_target, defaulting
>   to 4GB."
> - the default for bluestore_allocator has changed from stupid to
>   bitmap
> These two seem to conflict with each other, or at least I seem unable
> to achieve what I want.
> I have a number of OSDs for which I would like to increase the cache
> size. In the past I used bluestore_cache_size=8G and it worked like a
> charm. I have now changed that to osd_memory_target=8G without any
> effect: the usage stays at 4G and the virtual size is about 5G, while
> I would expect both to be close to 8G. The read cache for these OSDs
> usually fills up within a few hours, and the cluster has been running
> with the new config for a few days now, to no avail.
How do you check the memory usage? We have osd_memory_target=11G and
the OSDs consume exactly that amount of RAM (ps aux | grep osd). We are
running 13.2.8. ceph daemon osd.$id dump_mempools reports ~4 GiB, so
the daemons obviously use more RAM than just what the mempools account
for.
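
For reference, this is roughly how we compare the two (osd.0 is just a
stand-in for any OSD id; run this on the OSD host, where the admin
socket lives):

  # resident set size per OSD process (RSS column, in KiB):
  ps -o pid,rss,vsz,cmd -C ceph-osd

  # what the mempools account for; expect this to be well below the
  # process RSS, since heap/allocator overhead is not counted here:
  ceph daemon osd.0 dump_mempools

  # the target the daemon is actually running with:
  ceph daemon osd.0 config get osd_memory_target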
> The documentation of osd_memory_target refers to tcmalloc a lot. Does
> this conflict with bluestore_allocator=bitmap? If so, how do I tune
> the cache sizes (say, if tcmalloc is not used; and how do I check
> that?)? Are the bluestore_cache_* options indeed obsolete, as the
> release notes quoted above suggest, or is that not the case?
AFAIK these are not related: osd_memory_target sizes the in-memory
caches (the autotuning relies on tcmalloc heap statistics), while
bluestore_allocator/bluefs_allocator only control how BlueStore
allocates space on disk. We use "bluefs_allocator": "bitmap" and
"bluestore_allocator": "bitmap".
Gr. Stefan
--
| BIT BV   https://www.bit.nl/   Kamer van Koophandel 09090351
| GPG: 0xD14839C6                +31 318 648 688 / info(a)bit.nl