I don't think you need a bucket under the host for the two LVs; as long as the rule keeps choosing distinct hosts, it's unnecessary.
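If the goal is just two OSDs per NVMe, ceph-volume can carve the device up without any CRUSH changes. A minimal sketch (the device path is an example):

    ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1

Both OSDs then simply sit under the host bucket, and the stock replicated rule still picks its leaves at host granularity, so replicas never share a node. Roughly, from a decompiled map:

    rule replicated_rule {
        id 0
        type replicated
        step take default
        step chooseleaf firstn 0 type host
        step emit
    }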
September 23, 2020 6:45 AM, "George Shuklin" <george.shuklin(a)gmail.com> wrote:
> On 23/09/2020 10:54, Marc Roos wrote:
>
>> Depends on your expected load, no? I have already read here numerous
>> times that OSDs cannot keep up with NVMes, which is why people put two
>> OSDs on a single NVMe. So on a busy node, you would probably run out
>> of cores? (But better verify this with someone that has an NVMe
>> cluster ;))
>
> Did you? I have just started thinking about this idea too, as some
> devices can deliver about twice the performance of a single ceph-osd.
>
> How did they do it?
>
> I have an idea to create a new bucket type under host and put two LVs
> from each ceph-osd VG into that new bucket. The rules stay the same
> (different hosts), so redundancy won't be affected, but doubling the
> number of ceph-osd daemons can squeeze a bit more IOPS out of the
> backend devices at the expense of doubling RocksDB size (reducing
> payload size) and using more cores.
>
> And I really want to hear all the bad things about this setup before
> trying it.
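For what it's worth, if you do try the intermediate bucket, the rough sequence would be something like this (a sketch only, untested; inserting a type means renumbering every type id in the map, so rehearse on a lab cluster first):

    # export and decompile the current CRUSH map
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt

    # in crushmap.txt, insert the new type between osd and host and
    # shift the remaining ids up ("nvmedev" is a made-up name):
    #   type 0 osd
    #   type 1 nvmedev
    #   type 2 host
    #   ...

    # recompile and inject it back
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new

After that you would create one bucket per device with "ceph osd crush add-bucket" and relocate the OSDs under it ("ceph osd crush move", or "ceph osd crush set" with a weight). As long as the rule keeps "step chooseleaf firstn 0 type host", the extra level only changes the bookkeeping, not the redundancy.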