The nvme devices show up as ssd, so I have to manually reclassify them on my cluster.
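For reference, a minimal sketch of the manual reclassification (assuming
osd.0 is one of the affected OSDs; the existing class has to be removed
before a new one can be set):

    ceph osd crush rm-device-class osd.0
    ceph osd crush set-device-class nvme osd.0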
On Nov 21, 2019, at 4:10 PM, Kyle Bader <kyle.bader(a)gmail.com> wrote:
We see the ssd device class on rook-ceph-built clusters on m5 instances
(devices appear as nvme)
> On Thu, Nov 21, 2019 at 2:48 PM Mark Nelson <mnelson(a)redhat.com> wrote:
>
>
>> On 11/21/19 4:46 PM, Mark Nelson wrote:
>> On 11/21/19 4:25 PM, Sage Weil wrote:
>>> Adding dev(a)ceph.io
>>>
>>> On Thu, 21 Nov 2019, Muhammad Ahmad wrote:
>>>> While trying to research how CRUSH maps are used/modified, I stumbled
>>>> upon these device classes.
>>>>
>>>> https://ceph.io/community/new-luminous-crush-device-classes/
>>>>
>>>> I wanted to highlight that having nvme as a separate class will
>>>> eventually break and should be removed.
>>>>
>>>> There is already a push within the industry to consolidate future
>>>> command sets and NVMe will likely be it. In other words, NVMe HDDs are
>>>> not too far off. In fact, the recent October OCP F2F discussed this
>>>> topic in detail.
>>>>
>>>> If the classification is based on performance, then command set
>>>> (SATA/SAS/NVMe) is probably not the right criterion.
>>> I opened a PR that does this:
>>>
>>>
>>> https://github.com/ceph/ceph/pull/31796
>>>
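As background for why the class name matters: CRUSH rules reference a
device class by name, so removing or renaming a class affects any rule
built on it. A minimal sketch (the rule name "fast" and pool name
"mypool" are made up for illustration):

    # Create a replicated rule that only selects OSDs of class ssd,
    # then point a pool at it.
    ceph osd crush rule create-replicated fast default host ssd
    ceph osd pool set mypool crush_rule fast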
>>> I can't remember seeing 'nvme' as a device class on any real cluster;
>>> the exception is my basement one, and I think the only reason it ended
>>> up that way was because I deployed bluestore *very* early on (with
>>> ceph-disk) and the is_nvme() detection helper doesn't work with LVM.
>>> That's my theory at least... can anybody with bluestore on NVMe
>>> devices confirm? Does anybody see class 'nvme' devices in their
>>> cluster?
>>>
>>> Thanks!
>>> sage
>>>
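A hypothetical sketch of why name-based detection misses LVM-backed
devices (illustrative only, not the actual Ceph helper): the check sees
the device-mapper node rather than the underlying NVMe drive.

    # /dev/nvme0n1 matches, but an LVM logical volume on NVMe surfaces
    # as /dev/dm-*, so the check fails and the OSD falls back to the
    # ssd/hdd classes.
    case "$(basename "$DEV")" in
        nvme*) echo nvme ;;
        *)     echo other ;;
    esac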
>>
>> Here's what we've got on the new performance nodes with Intel NVMe
>> drives:
>>
>>
>> ID CLASS WEIGHT   TYPE NAME
>> -1       64.00000 root default
>> -3       64.00000     rack localrack
>> -2        8.00000         host o03
>>  0   ssd  1.00000             osd.0
>>  1   ssd  1.00000             osd.1
>>  2   ssd  1.00000             osd.2
>>  3   ssd  1.00000             osd.3
>>  4   ssd  1.00000             osd.4
>>  5   ssd  1.00000             osd.5
>>  6   ssd  1.00000             osd.6
>>  7   ssd  1.00000             osd.7
>>
>>
>> Mark
>>
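For anyone who wants to check their own cluster, the classes in use
(and which OSDs, if any, carry the 'nvme' class) can be listed
directly:

    ceph osd crush class ls
    ceph osd crush class ls-osd nvme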
>
> I should probably clarify that this cluster was built with cbt (the
> Ceph Benchmarking Tool), though!
>
>
> Mark