Hello,
Just a note:
With 7+5 you will need 13 hosts to be able to access your data in case one goes down.
As far as I know, EC 7+5 implies "erasure size 12 min_size 8", so I need at least 8 servers up to have access to my data:
k=7, m=5,
size = k+m = 12
min_size = k+1 = 8
Am I wrong?
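For reference, a quick sketch of how such a profile could be created and verified (the profile and pool names below are just examples):

  # create an EC profile with k=7 data chunks and m=5 coding chunks,
  # spread across hosts
  ceph osd erasure-code-profile set ec-7-5 k=7 m=5 crush-failure-domain=host

  # create a pool with that profile and check the resulting values
  ceph osd pool create ecpool 128 128 erasure ec-7-5
  ceph osd pool get ecpool size      # expect 12 (k+m)
  ceph osd pool get ecpool min_size  # expect 8 (k+1)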
It is expected that upcoming versions will allow access to the data with only k chunks available (i.e., min_size = k).
I think it is still possible to set min_size = k in Nautilus but it is not recommended.
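If someone really wanted to run with min_size = k anyway (again, not recommended), it should be settable on the pool directly, e.g. with the example pool above:

  # allow I/O with only k=7 chunks available; any further failure
  # during recovery then risks unavailability
  ceph osd pool set ecpool min_size 7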
Best,
Yoann
-----Original Message-----
From: Yoann Moulin <yoann.moulin(a)epfl.ch>
Sent: Tuesday, 3 September 2019 11:28
To: ceph-users(a)ceph.io
Subject: [ceph-users] Best osd scenario + ansible config?
Hello,
I am deploying a new Nautilus cluster and I would like to know what the best OSD scenario config would be in this case:
10x 6TB disk OSDs (data)
2x 480GB SSDs, previously used for journals, that can be used for WAL and/or DB
Is it better to put all WALs on one SSD and all DBs on the other one? Or to put the WAL and DB of the first 5 OSDs on the first SSD and those of the other 5 on the second one?
A more general question: what is the impact on an OSD if we lose the WAL? The DB? Both?
I plan to use EC 7+5 on 12 servers and I am OK if I lose one server temporarily. I have spare servers and I can easily add another one to this cluster.
To deploy this cluster, I use ceph-ansible (stable-4.0). I am not sure how to configure the playbook to use SSDs and disks with LVM.
https://github.com/ceph/ceph-ansible/blob/master/docs/source/osds/scenarios.rst
Is this good?
osd_objectstore: bluestore
lvm_volumes:
  - data: data-lv1
    data_vg: data-vg1
    db: db-lv1
    db_vg: db-vg1
    wal: wal-lv1
    wal_vg: wal-vg1
  - data: data-lv2
    data_vg: data-vg2
    db: db-lv2
    db_vg: db-vg2
    wal: wal-lv2
    wal_vg: wal-vg2
Is it possible to let the playbook configure LVM for each disk in a mixed case? It looks like I must configure LVM before running the playbook, but I am not sure if I missed something.
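For what it's worth, a minimal sketch of how the data LVs referenced above could be pre-created by hand (the device names /dev/sdb and /dev/sdc are only placeholders):

  # data LV on the first spinning disk, using the whole device
  pvcreate /dev/sdb
  vgcreate data-vg1 /dev/sdb
  lvcreate -l 100%FREE -n data-lv1 data-vg1

  # same thing for the second disk
  pvcreate /dev/sdc
  vgcreate data-vg2 /dev/sdc
  lvcreate -l 100%FREE -n data-lv2 data-vg2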
Can wal_vg and db_vg be identical (one VG per SSD shared with multiple OSDs)?
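If a single shared VG per SSD is supported (an assumption worth checking against the ceph-ansible docs), the DB and WAL LVs for several OSDs could be carved out of one VG, e.g.:

  # one shared VG on the SSD (placeholder device name)
  pvcreate /dev/sdk
  vgcreate ssd-vg1 /dev/sdk
  lvcreate -L 60G -n db-lv1 ssd-vg1
  lvcreate -L 2G -n wal-lv1 ssd-vg1
  lvcreate -L 60G -n db-lv2 ssd-vg1
  lvcreate -L 2G -n wal-lv2 ssd-vg1

with db_vg and wal_vg both pointing at ssd-vg1 in lvm_volumes.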
Thanks for your help.
Best regards,
--
Yoann Moulin
EPFL IC-IT