To clarify: to keep the PG log from taking too much memory, I already lowered
osd_max_pg_log_entries from the default 10000 to 1000.
I checked the PG log sizes; they are all under 1100.
ceph pg dump -f json | jq '.pg_map.pg_stats[]' | grep ondisk_log_size
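The same check can be collapsed into a single jq expression (a sketch, assuming the pg_stats layout returned by `ceph pg dump -f json` above):

```shell
# Print the largest ondisk_log_size across all PGs (expect <= ~1100 here)
ceph pg dump -f json | jq '[.pg_map.pg_stats[].ondisk_log_size] | max'
```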
I also checked each OSD; the mempool total is only a few hundred MB per OSD.
ceph daemon osd.<id> dump_mempools
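To put a number on that, the mempool total can be pulled out directly (a sketch; assumes the `mempool.total.bytes` layout of dump_mempools output, and osd.0 is just an example id):

```shell
# Total bytes tracked by all OSD mempools, converted to MiB
ceph daemon osd.0 dump_mempools | jq '.mempool.total.bytes / 1048576'
```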
And osd_memory_target stays at its default of 4GB.
What's taking that much buffer?
# free -h
              total        used        free      shared  buff/cache   available
Mem:          251Gi        31Gi       1.8Gi       1.6Gi       217Gi       215Gi
# cat /proc/meminfo
MemTotal: 263454780 kB
MemFree: 2212484 kB
MemAvailable: 226842848 kB
Buffers: 219061308 kB
Cached: 2066532 kB
SwapCached: 928 kB
Active: 142272648 kB
Inactive: 109641772 kB
......
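Note that almost all of the 217Gi buff/cache is in Buffers rather than Cached, i.e. page cache for block devices rather than for files. A quick way to see that split (just reads /proc/meminfo):

```shell
# Compare block-device page cache (Buffers) with file page cache (Cached)
awk '/^(Buffers|Cached):/ {printf "%s %.1f GiB\n", $1, $2/1024/1024}' /proc/meminfo
```

Both are reclaimable page cache, which is why MemAvailable still reports ~215Gi.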
Thanks!
Tony
________________________________________
From: Tony Liu <tonyliu0592(a)hotmail.com>
Sent: March 27, 2021 01:25 PM
To: ceph-users
Subject: [ceph-users] memory consumption by osd
Hi,
Here is a snippet from top on a node with 10 OSDs.
===========================
MiB Mem : 257280.1 total, 2070.1 free, 31881.7 used, 223328.3 buff/cache
MiB Swap: 128000.0 total, 126754.7 free, 1245.3 used. 221608.0 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
30492 167 20 0 4483384 2.9g 16696 S 6.0 1.2 707:05.25 ceph-osd
35396 167 20 0 4444952 2.8g 16468 S 5.0 1.1 815:58.52 ceph-osd
33488 167 20 0 4161872 2.8g 16580 S 4.7 1.1 496:07.94 ceph-osd
36371 167 20 0 4387792 3.0g 16748 S 4.3 1.2 762:37.64 ceph-osd
39185 167 20 0 5108244 3.1g 16576 S 4.0 1.2 998:06.73 ceph-osd
38729 167 20 0 4748292 2.8g 16580 S 3.3 1.1 895:03.67 ceph-osd
34439 167 20 0 4492312 2.8g 16796 S 2.0 1.1 921:55.50 ceph-osd
31473 167 20 0 4314500 2.9g 16684 S 1.3 1.2 680:48.09 ceph-osd
32495 167 20 0 4294196 2.8g 16552 S 1.0 1.1 545:14.53 ceph-osd
37230 167 20 0 4586020 2.7g 16620 S 1.0 1.1 844:12.23 ceph-osd
===========================
Does it look OK with only 2GB free?
I can't tell what that 220GB of buff/cache is being used for.
Is it used by the OSDs? Is it controlled by configuration, or does it
auto-scale based on physical memory? Any clarification would be helpful.
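One way I can double-check how much the OSD processes themselves hold is to sum their resident sets (a sketch; assumes GNU ps):

```shell
# Sum RSS (KiB) of all ceph-osd processes and print the total in GiB
ps -C ceph-osd -o rss= | awk '{sum+=$1} END {printf "%.1f GiB\n", sum/1024/1024}'
```

With the ~2.8g RES per OSD shown above, that should land near 28-29 GiB, consistent with the 31GB "used" -- so the 220GB is outside the OSD processes.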
Thanks!
Tony
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io