On 23/08/2019 at 17:01, Anthony D'Atri wrote:
>> Is it better to put all WAL on one SSD and all DBs on the other one? Or put the WAL and DB of the first 5 OSDs on the first SSD and the other
>> 5 on the second one?
> Think about what happens when an SSD dies.
My plan is to use erasure coding 7+5 for both the RGW and CephFS pools, with the failure domain set to host. I don't mind losing one server, or
half of one (which is what a failed SSD would cost me if I put WAL+DB for 5 OSDs on the same SSD).
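For reference, the pools I have in mind would be created roughly along these lines (pool name and PG counts are only placeholders):

    ceph osd erasure-code-profile set ec-7-5 k=7 m=5 crush-failure-domain=host
    ceph osd pool create cephfs_data 128 128 erasure ec-7-5
    # overwrites must be enabled to use an EC pool as a CephFS data pool
    ceph osd pool set cephfs_data allow_ec_overwrites true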
I don't have much experience with BlueStore; with Filestore we split the journals between the 2 SSDs to get better performance. I can configure
hardware RAID 1 on these 2 SSDs if that is relevant and does not hurt performance in the end. In my experience EC already gives much lower
performance, so if I can avoid a setup that degrades it further, that would be better.
I am trying to configure ceph-ansible (stable-4.0) to deploy a new cluster in Nautilus.
What would be the best OSD scenario config in this case:
10x 6TB disk OSDs (data)
2x 480GB SSDs, previously used for journals, which can be used for WAL and/or DB
Is this good:
- data: data-lv1
- data: data-lv2
Is it better to put all WAL on one SSD and all DBs on the other one? Or put the WAL and DB of the first 5 OSDs on the first SSD and the other
5 on the second one?
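If I understand the lvm scenario in stable-4.0 correctly, the "split by 5" layout would look something like this in group_vars/osds.yml (the
VG/LV names here are made up; my understanding is that when only a db is given, BlueStore keeps the WAL on the same device):

    lvm_volumes:
      # OSDs 1-5: data on its own HDD, DB (WAL co-located) on the first SSD
      - data: data-lv1
        data_vg: vg-hdd1
        db: db-lv1
        db_vg: vg-ssd1
      - data: data-lv2
        data_vg: vg-hdd2
        db: db-lv2
        db_vg: vg-ssd1
      # ... and OSDs 6-10 would point db_vg at vg-ssd2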
Is it possible to let the playbook configure LVM for each disk in a mixed case like this? It looks like I must configure LVM myself before
running the playbook, but I am not sure whether I missed something.
Can wal_vg and db_vg be identical (one VG per SSD, shared by multiple OSDs)?
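In case the playbook does not create them, my understanding is that I could pre-provision the LVs by hand along these lines (device names and
sizes are just examples; one data HDD and one SSD shown):

    # one VG per data disk, one LV taking the whole disk
    vgcreate vg-hdd1 /dev/sda
    lvcreate -l 100%FREE -n data-lv1 vg-hdd1
    # one VG per SSD, one DB LV per OSD that uses it (5 per 480G SSD here)
    vgcreate vg-ssd1 /dev/sdk
    lvcreate -L 90G -n db-lv1 vg-ssd1
    lvcreate -L 90G -n db-lv2 vg-ssd1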
Thanks for your help.