My plan is to use at least 500GB of NVMe per HDD OSD. I have not started on that yet, but
there are threads where other people share their experience. Beyond roughly 300GB per OSD,
the WAL/DB apparently cannot really use the extra capacity, so with dm-cache or the like
you would additionally start holding hot data in the cache.
Ideally, I can split a 4TB or even an 8TB NVMe over 6 OSDs.
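A minimal sketch of how such a split might be carved out with LVM (the capacity, VG/LV
names, and data device below are assumptions for illustration, not from this thread):

```shell
#!/bin/sh
# Sketch: split one NVMe evenly into per-OSD DB/WAL logical volumes.
nvme_gb=4000                  # assumed usable NVMe capacity, in GB
osds=6                        # HDD OSDs sharing this NVMe
db_gb=$(( nvme_gb / osds ))   # per-OSD DB/WAL LV size (integer GB)
echo "${db_gb}"               # → 666

# Hypothetical provisioning, one LV per OSD (commands shown, not run here):
# vgcreate vg_nvme /dev/nvme0n1
# lvcreate -L "${db_gb}G" -n db-osd0 vg_nvme
# ceph-volume lvm create --data /dev/sdb --block.db vg_nvme/db-osd0
```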
Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
________________________________________
From: Anthony D'Atri <anthony.datri(a)gmail.com>
Sent: 14 November 2020 10:57:57
To: Frank Schilder
Subject: Re: [ceph-users] Re: which of cpu frequency and number of threads servers osd
better?
Good day.
My plan for the future is to use dm-cache for LVM OSDs
instead of a WAL/DB device.
Do you have any insights into the benefits of that approach instead of WAL/DB, and of
dm-cache vs bcache vs dm-writecache vs … ? And any advice on sizing the cache device and
handling failures? Presumably the DB will be active enough that it will persist in the
cache, so the cache should be sized at a minimum to hold 2 copies of the DB to accommodate
compaction?
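A back-of-envelope check of that rule of thumb (a sketch; the per-OSD DB size and OSD
count are hypothetical figures, not from this thread):

```shell
#!/bin/sh
# Sketch: minimum cache-device size if the rule is 2 DB copies per OSD
# (the extra copy leaves headroom for RocksDB compaction).
db_gb=30                               # assumed resident DB size per OSD
osds=6                                 # OSDs sharing the cache device
min_cache_gb=$(( 2 * db_gb * osds ))   # 2 copies per OSD's DB
echo "${min_cache_gb}"                 # → 360
```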
I have an existing RGW cluster on HDDs that utilizes a cache tier; the high water mark is
set fairly low so that it doesn’t fill up, something that apparently happened last
Christmas. I’ve been wanting to get a feel for OSD cache as an alternative to deprecated
and fussy cache tiering, as well as something like a Varnish cache on RGW load balancers
to short-circuit small requests.
— Anthony