I will bring this up for discussion in the Ceph Leadership Team meeting twenty-seven hours from the time of this email.

Zac

------- Original Message -------
On Tuesday, October 17th, 2023 at 3:20 AM, Zakhar Kirpichenko <zakhar@gmail.com> wrote:

Dear Zac,

This is a kind reminder that we're still waiting for some clarification regarding this behavior of Ceph monitors.

Best regards,
Zakhar

On Wed, 11 Oct 2023 at 22:21, Zakhar Kirpichenko <zakhar@gmail.com> wrote:
Dear Zac,

FYI, together with other members of ceph-users, we have established that monitors indeed write to disk at very high rates, hundreds of gigabytes per day even in healthy clusters: https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/XGCI2LFW5RH3GUOQFJ542ISCSZH3FRX2/

This behavior is not documented: there is no way to tell whether it is expected or "normal", why it happens, what these "compaction" events are, or how they can be dealt with.

I would appreciate some insightful feedback. Moreover, the Ceph hardware recommendations likely need to be updated to state that the system disks on nodes running Ceph monitors must be able to accommodate such a large volume of writes.

/Z

On Wed, 11 Oct 2023 at 12:57, Zakhar Kirpichenko <zakhar@gmail.com> wrote:
Many thanks!

/Z

On Wed, 11 Oct 2023 at 12:51, Zac Dover <zac.dover@proton.me> wrote:
Zakhar,

I will take this matter to the Leadership Team. I will write to you within the week.

Zac Dover
Head of Upstream Documentation
Ceph Foundation

------- Original Message -------
On Wednesday, October 11th, 2023 at 7:10 PM, Zakhar Kirpichenko <zakhar@gmail.com> wrote:

Hi!

I'm not sure this question is a good fit for this mailing list, but the subject appears to be undocumented and I'm not getting any response from ceph-users.

Monitors in our 16.2.14 cluster quite often run "manual compaction" tasks, usually more than once per minute. During each compaction the monitor process writes approximately 500-600 MB of data to disk over a short period of time. These writes add up to tens of gigabytes per hour and hundreds of gigabytes per day, reducing the endurance of the system disks.
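For reference, a quick back-of-the-envelope check of those figures, taking the lower bounds of the observations above (~500 MB per compaction, one compaction per minute):

```python
# Rough write-volume estimate from the observed compaction behavior.
# Assumes ~500 MB per compaction and one compaction per minute, the
# lower bounds of what we see on our monitors.
mb_per_compaction = 500
compactions_per_hour = 60  # "more than once per minute"

gb_per_hour = mb_per_compaction * compactions_per_hour / 1000
gb_per_day = gb_per_hour * 24

print(gb_per_hour)  # 30.0  -> "tens of gigabytes per hour"
print(gb_per_day)   # 720.0 -> "hundreds of gigabytes per day"
```

So even at the low end this works out to roughly 30 GB/hour and over 700 GB/day per monitor.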

Monitor rocksdb and compaction options are default:

"mon_compact_on_bootstrap": "false",
"mon_compact_on_start": "false",
"mon_compact_on_trim": "true",
"mon_rocksdb_options": "write_buffer_size=33554432,compression=kNoCompression,level_compaction_dynamic_level_bytes=true",

Where can I find some documentation regarding rocksdb usage by monitors? How can I ascertain whether this is expected behavior, and whether this is something I can adjust?

I would appreciate your advice and/or direction towards the relevant documentation.

Best regards,
Zakhar