Dear Stefan,
Is it possible that there is a mistake in the documentation, or a bug? Out of curiosity, I
restarted one of these OSDs and its memory usage is going up:
ceph 881203 15.4 4.0 6201580 5344764 ? Sl 09:18 6:38 /usr/bin/ceph-osd
--cluster ceph -f -i 243 --setuser ceph --setgroup disk
The documentation of osd_memory_target says "Can update at runtime: true", but
it seems that a restart is required for the setting to take effect, so it can *not* actually be
updated at runtime (in the sense of a change taking effect without a restart).
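For completeness, this is how I checked whether the running daemon picks up a new value
(a sketch only; osd.243 is just the OSD I tested with, and "ceph daemon" must be run on the
host the OSD lives on):

[root@ceph-04 ~]# ceph config set osd.243 osd_memory_target 8589934592
[root@ceph-04 ~]# ceph daemon osd.243 config show | grep osd_memory_target

If the value reported by the second command only changes after a restart, then the setting
is effectively not updatable at runtime.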
In addition to that, I would like to set different default memory targets for
different device classes. Unfortunately, there do not seem to be per-device-class
default options like memory_target_[device class]. Is there a good way to set different
targets per class without bloating "ceph config dump" unnecessarily?
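I am wondering whether config masks are meant for this (a sketch only, assuming a release
whose config database supports device-class masks, i.e. Mimic or later; the sizes are
made-up examples):

ceph config set osd/class:hdd osd_memory_target 4294967296
ceph config set osd/class:ssd osd_memory_target 8589934592

That would add only one line per device class to "ceph config dump" instead of one per
OSD, if it works the way I hope.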
Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
________________________________________
From: Frank Schilder <frans@dtu.dk>
Sent: 05 February 2020 09:09:22
To: Stefan Kooman
Cc: ceph-users
Subject: [ceph-users] Re: osd_memory_target ignored
Hi Stefan,
it's all at the defaults, it seems:
[root@gnosis ~]# ceph config get osd.243 bluestore_cache_size
0
[root@gnosis ~]# ceph config get osd.243 bluestore_cache_size_ssd
3221225472
I explicitly removed the old settings with commands like
ceph config rm osd.243 bluestore_cache_size
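In case more overrides were lurking, I removed the whole family in one go (a sketch; the
option list is my guess at which per-OSD overrides had been set):

for opt in bluestore_cache_size bluestore_cache_size_ssd bluestore_cache_size_hdd; do
    ceph config rm osd.243 "$opt"
done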
Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
________________________________________
From: Stefan Kooman <stefan@bit.nl>
Sent: 04 February 2020 21:14:28
To: Frank Schilder
Cc: ceph-users
Subject: Re: [ceph-users] osd_memory_target ignored
Quoting Frank Schilder (frans@dtu.dk):
Dear Stefan,
I check the total allocation with top. ps -aux gives:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
ceph 784155 15.8 3.1 6014276 4215008 ? Sl Jan31 932:13 /usr/bin/ceph-osd
--cluster ceph -f -i 243 ...
ceph 784732 16.6 3.0 6058736 4082504 ? Sl Jan31 976:59 /usr/bin/ceph-osd
--cluster ceph -f -i 247 ...
ceph 785812 17.1 3.0 5989576 3959996 ? Sl Jan31 1008:46 /usr/bin/ceph-osd
--cluster ceph -f -i 254 ...
ceph 786352 14.9 3.1 5955520 4132840 ? Sl Jan31 874:37 /usr/bin/ceph-osd
--cluster ceph -f -i 256 ...
These should have 8 GB resident by now, but they stay at or just below 4 GB. The other
options are set as follows:
[root@ceph-04 ~]# ceph config get osd.243 bluefs_allocator
bitmap
[root@ceph-04 ~]# ceph config get osd.243 bluestore_allocator
bitmap
[root@ceph-04 ~]# ceph config get osd.243 osd_memory_target
8589934592
What does "bluestore_cache_size" read? Our OSDs report "0".
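It might also be worth looking at where the memory actually goes. Something like this
dumps the allocator pools of the running daemon (a sketch; run it on the host the OSD
lives on, osd.243 taken from your output):

ceph daemon osd.243 dump_mempools

The bluestore cache pools in that output should grow towards osd_memory_target when the
autotuner is working.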
Gr. Stefan
--
| BIT BV
https://www.bit.nl/ Kamer van Koophandel 09090351
| GPG: 0xD14839C6 +31 318 648 688 / info@bit.nl
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-leave@ceph.io