Linux will automatically use all available memory for the buffer cache,
freeing buffers when it needs memory for other things. This is why
MemAvailable is more useful than MemFree: the former reflects how much memory
could actually be used, counting free memory, the reclaimable buffer cache,
and anything else that can be freed up. If you'd like to learn more about the
buffer cache and how Linux manages it, there are plenty of resources a search away.
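If you want to see the reclaim behaviour for yourself, a quick check along
these lines (harmless apart from temporarily cold caches) should show MemFree
jump up while MemAvailable barely moves:

# grep -E 'MemFree|MemAvailable|Buffers|Cached' /proc/meminfo
# sync && echo 1 > /proc/sys/vm/drop_caches    # ask the kernel to drop clean page cache/buffers
# grep -E 'MemFree|MemAvailable|Buffers|Cached' /proc/meminfo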
My guess is that you're using a Ceph release that has bluefs_buffered_io
set to true by default, which will cause the OSDs to use the buffer cache
for some of their IO. What you're seeing is normal behaviour in this case.
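If you want to confirm that, something along these lines should do it (osd.0
is just an example daemon; exact syntax can vary a little between releases):

# ceph config get osd.0 bluefs_buffered_io
# ceph daemon osd.0 config get bluefs_buffered_io    # same thing, via the admin socket on the OSD host
# ceph config set osd bluefs_buffered_io false       # only if you actually want the OSDs to stop using the page cache

Whether turning it off helps or hurts is very workload-dependent, so I
wouldn't change it just to make the buff/cache number smaller.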
Josh
On Sat., Mar. 27, 2021, 8:59 p.m. Tony Liu, <tonyliu0592(a)hotmail.com> wrote:
I don't see any problems yet. All OSDs are working fine.
It's just that 1.8GB of free memory concerns me.
I know 256GB of memory for 10 OSDs (16TB HDD) is a lot; I am planning to
reduce it or increase osd_memory_target (if that's what you meant) to
boost performance. But before doing that, I'd like to understand what's
taking so much buff/cache and whether there is any option to control it.
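For reference, this is roughly what I plan to check on my side to compare
what the OSDs themselves account for against their target (osd.0 as an
example):

# ceph config get osd.0 osd_memory_target
# ceph daemon osd.0 dump_mempools    # per-pool breakdown of the OSD's own memory use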
Thanks!
Tony
________________________________________
From: Anthony D'Atri <anthony.datri(a)gmail.com>
Sent: March 27, 2021 07:27 PM
To: ceph-users
Subject: [ceph-users] Re: memory consumption by osd
Depending on your kernel version, MemFree can be misleading. Attend to
the value of MemAvailable instead.
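Both come straight out of /proc/meminfo on any kernel new enough to report
MemAvailable (3.14 and later), e.g.:

# grep -E '^(MemFree|MemAvailable)' /proc/meminfo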
Your OSDs all look to be well below the target; I wouldn't think you have
any problems. In fact, 256GB for just 10 OSDs is an embarrassment of
riches. What type of drives are you using, and what's the cluster used
for? If anything, I might advise *raising* the target.
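If you do raise it, it's a single config change; e.g. to go from the 4GiB
default to 8GiB per OSD (the value is in bytes, adjust to taste):

# ceph config set osd osd_memory_target 8589934592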
You might check tcmalloc usage
https://ceph-devel.vger.kernel.narkive.com/tYp0KkIT/ceph-daemon-memory-util…
but I doubt this is an issue for you.
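If you're curious, the heap commands give a quick view of the tcmalloc side
(osd.0 as an example):

# ceph tell osd.0 heap stats      # tcmalloc heap usage summary
# ceph tell osd.0 heap release    # ask tcmalloc to hand free pages back to the OS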
What's taking that much buffer?
# free -h
              total        used        free      shared  buff/cache   available
Mem:          251Gi        31Gi       1.8Gi       1.6Gi       217Gi       215Gi
# cat /proc/meminfo
MemTotal:       263454780 kB
MemFree:          2212484 kB
MemAvailable:   226842848 kB
Buffers:        219061308 kB
Cached:           2066532 kB
SwapCached:           928 kB
Active:         142272648 kB
Inactive:       109641772 kB
......
Thanks!
Tony
________________________________________
From: Tony Liu <tonyliu0592(a)hotmail.com>
Sent: March 27, 2021 01:25 PM
To: ceph-users
Subject: [ceph-users] memory consumption by osd
Hi,
Here is a snippet from top on a node with 10 OSDs.
===========================
MiB Mem : 257280.1 total,   2070.1 free,  31881.7 used, 223328.3 buff/cache
MiB Swap: 128000.0 total, 126754.7 free,   1245.3 used. 221608.0 avail Mem

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
  30492 167       20   0 4483384   2.9g  16696 S   6.0   1.2 707:05.25 ceph-osd
  35396 167       20   0 4444952   2.8g  16468 S   5.0   1.1 815:58.52 ceph-osd
  33488 167       20   0 4161872   2.8g  16580 S   4.7   1.1 496:07.94 ceph-osd
  36371 167       20   0 4387792   3.0g  16748 S   4.3   1.2 762:37.64 ceph-osd
  39185 167       20   0 5108244   3.1g  16576 S   4.0   1.2 998:06.73 ceph-osd
  38729 167       20   0 4748292   2.8g  16580 S   3.3   1.1 895:03.67 ceph-osd
  34439 167       20   0 4492312   2.8g  16796 S   2.0   1.1 921:55.50 ceph-osd
  31473 167       20   0 4314500   2.9g  16684 S   1.3   1.2 680:48.09 ceph-osd
  32495 167       20   0 4294196   2.8g  16552 S   1.0   1.1 545:14.53 ceph-osd
  37230 167       20   0 4586020   2.7g  16620 S   1.0   1.1 844:12.23 ceph-osd
===========================
Does it look OK with 2GB free?
I can't tell what is using that 220GB of buffer/cache.
Is it used by the OSDs? Is it controlled by configuration, or scaled
automatically based on physical memory? Any clarification would be helpful.
Thanks!
Tony
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io