Hello.
On Nov 22, 2019, at 01:25, Sage Weil
<sage(a)newdream.net> wrote:
Adding dev(a)ceph.io
Does anybody see class 'nvme' devices in their cluster?
Thanks!
sage
This is my production Luminous cluster:
[root@r1flash1 ~]# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 130.98889 root default
-3 21.83148 host r1flash1
0 nvme 1.81929 osd.0 up 1.00000 1.00000
1 nvme 1.81929 osd.1 up 1.00000 1.00000
2 nvme 1.81929 osd.2 up 1.00000 1.00000
3 nvme 1.81929 osd.3 up 1.00000 1.00000
4 nvme 1.81929 osd.4 up 1.00000 1.00000
5 nvme 1.81929 osd.5 up 1.00000 1.00000
6 nvme 1.81929 osd.6 up 1.00000 1.00000
7 nvme 1.81929 osd.7 up 1.00000 1.00000
8 nvme 1.81929 osd.8 up 1.00000 1.00000
9 nvme 1.81929 osd.9 up 1.00000 1.00000
10 nvme 1.81929 osd.10 up 1.00000 1.00000
11 nvme 1.81929 osd.11 up 1.00000 1.00000
…
6 nodes, 6 Intel NVMe drives per server and 2 OSD per drive.
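For anyone wondering where that class column comes from: the OSD assigns it automatically on first start. A rough sketch of the decision logic (my simplification for illustration, not Ceph's actual code, which inspects the underlying block device):

```shell
#!/bin/sh
# Simplified sketch of how an OSD picks its CRUSH device class at
# first start. NOT Ceph's actual code -- it only mirrors the
# hdd/ssd/nvme outcome for illustration.
guess_device_class() {
    dev=$1          # kernel device name, e.g. nvme0n1 or sda
    rotational=$2   # value of /sys/block/$dev/queue/rotational
    case "$dev" in
        nvme*) echo nvme ;;                  # NVMe namespace -> nvme
        *)  if [ "$rotational" = "0" ]; then
                echo ssd                     # non-rotational -> ssd
            else
                echo hdd                     # spinning disk  -> hdd
            fi ;;
    esac
}

guess_device_class nvme0n1 0   # -> nvme
guess_device_class sda 0       # -> ssd
guess_device_class sdb 1       # -> hdd
```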
The OSDs were created with a custom script: no LVM at all, and no ceph-disk or
ceph-volume. The part of the script that creates an OSD:
<cut>
#
ID=$(echo "{\"cephx_secret\": \"$OSD_SECRET\"}" | \
    ceph osd new $UUID -i - \
    -n client.bootstrap-osd -k /var/lib/ceph/bootstrap-osd/ceph.keyring)
sudo -u ceph mkdir /var/lib/ceph/osd/ceph-$ID
ceph-authtool --create-keyring /var/lib/ceph/osd/ceph-$ID/keyring \
    --name osd.$ID --add-key $OSD_SECRET
echo bluestore > /var/lib/ceph/osd/ceph-$ID/type
ln -s /dev/disk/by-partuuid/$PARTUUID /var/lib/ceph/osd/ceph-$ID/block
ln -s /dev/disk/by-partuuid/$PARTUUID_DB /var/lib/ceph/osd/ceph-$ID/block.db
chown ceph:ceph /var/lib/ceph/osd/ceph-$ID
chown ceph:ceph /var/lib/ceph/osd/ceph-$ID/*
chmod 600 /var/lib/ceph/osd/ceph-$ID/keyring
chmod 600 /var/lib/ceph/osd/ceph-$ID/type
ceph-osd -i $ID --mkfs --osd-uuid $UUID
chown ceph:ceph /var/lib/ceph/osd/ceph-$ID/*
<cut>
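If the auto-detected class ever comes out wrong for raw-device OSDs created this way, it can be overridden with the standard CRUSH commands (osd.0 here is just an example id); the existing class has to be removed before a new one can be set:

```
ceph osd crush rm-device-class osd.0
ceph osd crush set-device-class nvme osd.0
ceph osd crush class ls        # list the classes now in use
```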
We avoided LVM to maximize IO performance and minimize latency, and we use the
script because ceph-volume does not support raw devices yet.
—
Mike, runs!