If you have decent CPU and RAM on the OSD nodes, you can try erasure coding. Even just
a 4+2 profile keeps the cost per TB lower than 2x replication (4+2 stores 1.5 bytes of
raw data per byte written, versus 2) and is much safer (it tolerates two failures, the
same protection as 3x replication). We use that on our biggest production SSD pool.
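A minimal sketch of that setup (profile and pool names are just examples, and note
that k=4, m=2 needs at least six failure domains, e.g. six hosts, so each shard can
land on a separate node):

    # 4 data chunks + 2 coding chunks, one shard per host
    ceph osd erasure-code-profile set ec-4-2 k=4 m=2 crush-failure-domain=host
    # create a pool against that profile (128 PGs is only a placeholder)
    ceph osd pool create ecpool 128 128 erasure ec-4-2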
________________________________
From: Wesley Peng <weslepeng@gmail.com>
Sent: Sunday, 25 August 2019 9:11 PM
To: Wido den Hollander <wido@42on.com>
Cc: ceph-users@ceph.io <ceph-users@ceph.io>
Subject: [ceph-users] Re: ceph's replicas question
Ok thanks.
Wido den Hollander <wido@42on.com> wrote on Sunday, 25 August 2019 at 4:47 AM:
On 24 Aug 2019, at 16:36, Darren Soothill <darren.soothill@suse.com> wrote:
So, can you do it?
Yes, you can.
Should you do it is the bigger question.
So my first question would be: what type of drives are you using? Enterprise-class
drives with a low failure rate?
It doesn’t matter. In my experience, with 2x replication you will lose data at some
point.
As a consultant I have just seen too many cases of data loss with 2x.
Please, don’t do it.
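If cost is the driver, keeping the defaults is still cheap insurance; for a
replicated pool that means something like (the pool name is just an example):

    ceph osd pool set mypool size 3       # three copies of every object
    ceph osd pool set mypool min_size 2   # block I/O rather than run on a single copy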
Then you have to ask yourself: are you feeling lucky?
If you do a scrub and one drive returns one value and the other drive returns a
different value, which one is correct?
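To make that concrete: when a scrub flags an inconsistency you can inspect and
repair it, but with only two copies there is no third replica to break the tie
(the PG id 2.5 below is hypothetical):

    ceph health detail                                     # lists which PGs are inconsistent
    rados list-inconsistent-obj 2.5 --format=json-pretty   # see what each replica returned
    ceph pg repair 2.5                                     # repair must trust one copy; two replicas give no majority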
What happens if you have a drive failure combined with any other error? A node
failure? Another disk failure? A disk read error? Any of these could mean data loss.
How important is the data you are storing, and do you have a backup of it? You will
need that backup at some point.
Darren
Sent from my iPhone
On 24 Aug 2019, at 14:01, Wesley Peng <weslepeng@gmail.com> wrote:
Hi,
We have all-SSD disks as Ceph's backend storage.
Considering the cost factor, can we set up the cluster to keep only two replicas
per object?
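For reference, the setting in question is per pool; the pool name below is just an
example:

    ceph osd pool get mypool size    # check the current replica count
    ceph osd pool set mypool size 2  # the change we are considering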
thanks & regards
Wesley
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-leave@ceph.io