Hi Victor,
that's true for Ceph releases prior to Octopus. The latter has some
improvements in this area.
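For background on where those size steps come from: BlueFS only keeps a whole
RocksDB level on the fast device if the entire level fits there, and with the
default level sizing that works out to roughly 3 GB, 30 GB or 300 GB of usable
DB space. A minimal sketch of the arithmetic (not Ceph code; it assumes the
default RocksDB options max_bytes_for_level_base = 256 MB and
max_bytes_for_level_multiplier = 10, and ignores WAL and compaction overhead):

```python
# Why only ~3 / 30 / 300 GB of a BlueStore DB partition is effectively used
# before Octopus: a level stays on the fast device only if the whole level
# (plus all smaller levels) fits on it.
LEVEL_BASE = 256 * 1024**2   # max_bytes_for_level_base (default 256 MB)
MULTIPLIER = 10              # max_bytes_for_level_multiplier (default 10)

def usable_db_bytes(partition_bytes, max_levels=6):
    used = 0
    level_size = LEVEL_BASE
    for _ in range(max_levels):
        if used + level_size > partition_bytes:
            break            # this level spills over to the slow device
        used += level_size
        level_size *= MULTIPLIER
    return used

for gib in (3, 30, 145, 300):
    part = gib * 1024**3
    print(f"{gib:>4} GiB partition -> ~{usable_db_bytes(part) / 1024**3:.1f} GiB usable")
```

Under those assumptions a 145 GB partition still only holds levels up to
roughly 28 GB of DB data; the remainder is touched only temporarily during
compaction, if at all.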
There is a pending backport PR to fix this in Nautilus as well:
https://github.com/ceph/ceph/pull/33889
AFAIR this topic has been discussed on this mailing list multiple times.
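If you want to see what your OSDs actually report, the bluefs counters in
"ceph daemon osd.<id> perf dump" show allocated vs. used DB space and any
spillover to the main device. A rough sketch (it assumes you run it on the OSD
host with access to the admin socket, and that the counters db_total_bytes /
db_used_bytes / slow_used_bytes are present; counter names can vary between
releases):

```python
# Query an OSD's bluefs stats over the admin socket and report how much of
# the DB partition is actually in use, plus any spillover to the slow device.
import json
import subprocess
import sys

def bluefs_usage(osd_id: int) -> None:
    out = subprocess.check_output(
        ["ceph", "daemon", f"osd.{osd_id}", "perf", "dump"])
    bluefs = json.loads(out)["bluefs"]
    gib = 1024 ** 3
    print(f"osd.{osd_id}:")
    print(f"  DB used/total : {bluefs['db_used_bytes'] / gib:.1f} / "
          f"{bluefs['db_total_bytes'] / gib:.1f} GiB")
    # DB data that did not fit on the fast device and spilled to the main one
    print(f"  slow used     : {bluefs.get('slow_used_bytes', 0) / gib:.1f} GiB")

if __name__ == "__main__":
    bluefs_usage(int(sys.argv[1]) if len(sys.argv) > 1 else 0)
```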
Thanks,
Igor
On 3/27/2020 10:56 PM, victorhooi(a)yahoo.com wrote:
> Hi,
>
> I'm using Intel Optane disks to provide WAL/DB capacity for my Ceph cluster
> (which is part of Proxmox - for VM hosting).
>
> I've read that WAL/DB partitions only use either 3 GB, 30 GB, or 300 GB - due to
> the way that RocksDB works.
>
> Is this true?
>
> My current partition for WAL/DB is 145 GB - does this mean that 115 GB of that will be
> permanently wasted?
>
> Is this behaviour documented somewhere, or is there some background, so I can
> understand a bit more about how it works?
>
> Thanks,
> Victor
> _______________________________________________
> ceph-users mailing list -- ceph-users(a)ceph.io
> To unsubscribe send an email to ceph-users-leave(a)ceph.io