Hi,
I have an existing Luminous installation: 3 nodes, each with 8x 4TB HDDs and
1x 200GB SSD that was used as the journal device under FileStore. When I
deployed Luminous via ceph-deploy, I forgot to prepare the OSDs with the WAL
and DB on the separate SSD. The cluster is now in production, and I would
like to reconfigure it so the SSD holds the WAL, and possibly the DB as well,
but I am hesitant because a mistake could cause problems. How should I go
about this reconfiguration without downtime, or with minimal downtime if
some is unavoidable?

Here is my `ceph osd tree`:
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 101.86957 root default
-3 29.10559 host ceph01
0 hdd 3.63820 osd.0 up 1.00000 1.00000
1 hdd 3.63820 osd.1 up 1.00000 1.00000
2 hdd 3.63820 osd.2 up 1.00000 1.00000
3 hdd 3.63820 osd.3 up 1.00000 1.00000
4 hdd 3.63820 osd.4 up 1.00000 1.00000
5 hdd 3.63820 osd.5 up 1.00000 1.00000
6 hdd 3.63820 osd.6 up 1.00000 1.00000
7 hdd 3.63820 osd.7 up 1.00000 1.00000
-5 29.10559 host ceph02
8 hdd 3.63820 osd.8 up 1.00000 1.00000
9 hdd 3.63820 osd.9 up 1.00000 1.00000
10 hdd 3.63820 osd.10 up 1.00000 1.00000
11 hdd 3.63820 osd.11 up 1.00000 1.00000
12 hdd 3.63820 osd.12 up 1.00000 1.00000
13 hdd 3.63820 osd.13 up 1.00000 1.00000
14 hdd 3.63820 osd.14 up 1.00000 1.00000
15 hdd 3.63820 osd.15 up 1.00000 1.00000
-7 29.10559 host ceph03
16 hdd 3.63820 osd.16 up 1.00000 1.00000
17 hdd 3.63820 osd.17 up 1.00000 1.00000
18 hdd 3.63820 osd.18 up 1.00000 1.00000
19 hdd 3.63820 osd.19 up 1.00000 1.00000
20 hdd 3.63820 osd.20 up 1.00000 1.00000
21 hdd 3.63820 osd.21 up 1.00000 1.00000
22 hdd 3.63820 osd.22 up 1.00000 1.00000
23 hdd 3.63820 osd.23 up 1.00000 1.00000
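For reference, this is the per-OSD rolling rebuild I have been considering,
one OSD at a time. The device paths (/dev/sdb for the HDD, /dev/sdk1 for an
SSD partition) are just examples, not my actual devices, and I have not
tested this yet:

```shell
# Rolling rebuild sketch for osd.0 (example paths only).

# 1. Drain the OSD and wait for recovery to complete.
ceph osd out 0
while ! ceph health | grep -q HEALTH_OK; do sleep 60; done

# 2. Stop the daemon and remove the OSD from the cluster.
systemctl stop ceph-osd@0
ceph osd purge 0 --yes-i-really-mean-it

# 3. Wipe the HDD and re-create the OSD with block.db on the SSD partition.
ceph-volume lvm zap /dev/sdb --destroy
ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/sdk1

# 4. Wait for backfill to finish before starting on the next OSD.
```

My understanding is that with --block.db on the SSD, the WAL lands there too
unless a separate --block.wal is given, but I would appreciate confirmation.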
- Vlad