Sure, ceph-bluestore-tool can be used for that. Unfortunately it lacks
the LVM tag manipulation required to properly set up a DB/WAL volume
for Ceph, which means the LVM tags have to be updated manually if
ceph-bluestore-tool alone is used.
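A rough sketch of that manual procedure, assuming an existing OSD mounted at /var/lib/ceph/osd/ceph-0 and a freshly created LV vg_nvme/db-osd0 for the new DB device (all VG/LV names and paths here are examples, not taken from your setup):

```shell
# Stop the OSD first so BlueStore is not in use
systemctl stop ceph-osd@0

# Attach the new LV as a dedicated DB device; ceph-bluestore-tool does the
# BlueFS migration but leaves the LVM tags untouched
ceph-bluestore-tool bluefs-bdev-new-db \
    --path /var/lib/ceph/osd/ceph-0 \
    --dev-target /dev/vg_nvme/db-osd0

# Update the LVM tags by hand so ceph-volume can still activate the OSD;
# tag names follow the ceph.* convention ceph-volume uses, values are examples
lvchange --addtag ceph.db_device=/dev/vg_nvme/db-osd0 vg_hdd/osd-block-0
lvchange --addtag ceph.db_uuid=<LV-UUID-of-db-osd0> vg_hdd/osd-block-0

systemctl start ceph-osd@0
```

The exact set of tags ceph-volume expects can be cross-checked against an OSD that was deployed with a DB device from the start, e.g. via `ceph-volume lvm list` or `lvs -o lv_tags`.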
Additionally there is a pending PR to implement DB/WAL
manipulation in ceph-volume (which in turn relies on ceph-bluestore-tool
to perform the lower-level operations). Hence one should either wait until
it's merged and backported, or do such a backport (actually Python code
only) on their own.
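As to the "is it as simple as lvextend" question below: if the existing DB LV is merely being grown in place (rather than moved to a new device), it mostly is; BlueFS just needs to be told about the new size afterwards. A minimal sketch, with example names and sizes:

```shell
# Grow the DB logical volume (VG/LV names and size are examples)
lvextend -L +356G /dev/vg_nvme/db-osd0

# With the OSD stopped, let BlueFS pick up the enlarged device
systemctl stop ceph-osd@0
ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-0
systemctl start ceph-osd@0
```

No LVM tag changes are needed in this case, since the DB device itself does not change, only its size.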
On 3/23/2021 2:37 PM, Dave Hall wrote:
> Based on other discussions in this list I have concluded that I need to add
> NVMe to my OSD nodes and expand the NVMe (DB/WAL) for each OSD. Is there a
> way to do this without destroying and rebuilding each OSD (after
> safe removal from the cluster, of course)? Is there a way to use
> ceph-bluestore-tool for this? Is it as simple as lvextend?
> Why more NVMe? Frequent DB spillovers, and the recommendation that the
> NVMe should be 40GB for every TB of HDD. When I did my initial setup I
> thought that 124GB of NVMe for a 12TB HDD would be sufficient, but by the
> above metric it should be more like 480GB of NVMe.
> Dave Hall
> Binghamton University
> ceph-users mailing list -- ceph-users(a)ceph.io
> To unsubscribe send an email to ceph-users-leave(a)ceph.io