We test it fairly regularly on our development test nodes. Basically what this option does is
cache data in the BlueStore buffer cache on write; by default we only cache things when
they are first read. The advantage of enabling it is that data is immediately in the
cache once it's written, which helps if you expect to read it back soon. The downside is that
you might end up filling the cache with a bunch of cold data and forcing other things out.
It can also make the caches spend more time evicting data, which can be slow when running
Ceph on fast NVMe drives (this is less of an issue in Octopus, though, as we redesigned the
way BlueStore's caches work to avoid using a single thread for cache trimming).
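As a sketch, the option can be set either through the monitor config store or in ceph.conf; the OSD id below is a placeholder, so substitute one of your own:

```shell
# Set the option cluster-wide for all OSDs via the config store
# (available in Nautilus and later):
ceph config set osd bluestore_default_buffered_write true

# Older ceph.conf equivalent -- under [osd] (or [global]):
#   [osd]
#   bluestore_default_buffered_write = true

# Confirm the running value on a specific OSD (osd.0 is a placeholder):
ceph daemon osd.0 config get bluestore_default_buffered_write
```

Note the option only affects writes made after it takes effect; existing OSDs read it at startup unless changed through the config store.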
One way to see if it's working is to issue a write workload to a freshly started
cluster. With bluestore_default_buffered_write enabled, you should see BlueStore's
buffer cache filling quickly with data; with it off, the cache should only start
filling once you begin reading data back.
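A rough way to watch the buffer cache while the workload runs is via the OSD admin socket; osd.0 and the exact counter names here are assumptions based on typical BlueStore mempool/perf output, so check what your version actually reports:

```shell
# Watch BlueStore's data cache mempool grow during the write workload
# (bluestore_cache_data tracks cached object data; osd.0 is a placeholder):
watch -n 2 'ceph daemon osd.0 dump_mempools | grep -A 2 bluestore_cache_data'

# Buffer-cache related perf counters are also exposed here:
ceph daemon osd.0 perf dump | grep -i buffer
```

With the option enabled you'd expect the data mempool bytes to climb almost immediately under a pure write workload; with it disabled they should stay near zero until reads start.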
----- Original Message -----
From: "Adam Koczarski"
To: "ceph-users" <ceph-users(a)ceph.com>
Sent: Tuesday, January 14, 2020 12:23:41 PM
Subject: [ceph-users] bluestore_default_buffered_write = true
Has anyone ever tried using this feature? I've added it to the [global]
section of the ceph.conf on my POC cluster but I'm not sure how to tell if
it's actually working. I did find a reference to this feature via Google and
they had it in their [OSD] section?? I've tried that too...