WAL is 1 GB (you can allocate 2 GB to be safe); the DB should always be 30 GB. And neither of
these depends on the size of the data partition :-)
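A minimal sketch of how those fixed sizes could be pinned for newly created OSDs (values are
in bytes; on Proxmox the shared config is typically /etc/pve/ceph.conf — whether pveceph
honours these settings on your version is worth verifying):

[osd]
# ~30 GiB DB and ~2 GiB WAL for every new OSD, regardless of the data device size
bluestore_block_db_size  = 32212254720
bluestore_block_wal_size = 2147483648

These only affect OSDs created after the change; existing OSDs keep the DB/WAL sizes they
were built with.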
On 14 March 2020 at 22:50:37 GMT+03:00, Victor Hooi <victorhooi(a)yahoo.com> wrote:
Hi,
I'm building a 4-node Proxmox cluster, with Ceph for the VM disk
storage.
On each node, I have:
- 1 x 512GB M.2 SSD (for Proxmox/boot volume)
- 1 x 960GB Intel Optane 905P (for Ceph WAL/DB)
- 6 x 1.92TB Intel S4610 SATA SSD (for Ceph OSD)
I'm using the Proxmox "pveceph" command to set up the OSDs.
By default this seems to pick 10% of the OSD size for the DB volume, and 1% of the OSD size
for the WAL volume.
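(As an aside: if your pveceph version supports it, the sizes can be given explicitly rather
than taking those defaults — I believe the relevant options are -db_size and -wal_size, in
GiB, but confirm with "pveceph help osd create" first:

# pveceph osd create /dev/sde -db_dev /dev/nvme0n1 -db_size 30 -wal_size 2
)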
This means after four drives, I ran out of space:
# pveceph osd create /dev/sde -db_dev /dev/nvme0n1
create OSD on /dev/sde (bluestore)
creating block.db on '/dev/nvme0n1'
Rounding up size to full physical extent 178.85 GiB
lvcreate 'ceph-861ebf6d-8fee-4313-8de6-4e797dc436ee/osd-db-da591d0f-8a05-42fa-bc62-a093bf98aded' error:
Volume group "ceph-861ebf6d-8fee-4313-8de6-4e797dc436ee" has
insufficient free space (45784 extents): 45786 required.
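(For context, the numbers in that error line up with the 10% default on a 960 GB device —
rough arithmetic, assuming 4 MiB LVM extents:

10% of 1.92 TB ≈ 178.85 GiB, rounded up to 45786 extents per DB volume
4 DB volumes already use 4 x 45786 = 183144 extents
that leaves 45784 free extents in the ~894 GiB volume group — two short of a fifth DB volume,
so the 10% default cannot fit DB volumes for all six OSDs on one 960 GB device.)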
Anyway, I assume that means I need to tune my DB and WAL volumes down from the defaults.
What advice do you have in terms of making the best use of the available space between WAL
and DB?
What is the impact of having the WAL and DB smaller than 1% and 10% of the OSD size,
respectively?
Thanks,
Victor