Anthony,
I recently found a reference in the Ceph docs that indicated something
like 40GB per TB for WAL+DB space. For a 12TB HDD that comes out to
480GB. If this is no longer the guideline I'd be glad to save a couple
of dollars.
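
For reference, the arithmetic is just that old roughly-4% rule of thumb.
A minimal sketch (the 40 GB-per-TB figure is the one I remembered from
the docs, not necessarily a current recommendation):

def db_size_gb(hdd_tb: float, gb_per_tb: float = 40.0) -> float:
    """Suggested WAL+DB (block.db) size in GB for an HDD of hdd_tb
    terabytes, per the old ~40 GB-per-TB (~4%) guideline."""
    return hdd_tb * gb_per_tb

for tb in (12, 14):
    print(f"{tb} TB HDD -> ~{db_size_gb(tb):.0f} GB of WAL+DB")
# 12 TB HDD -> ~480 GB of WAL+DB
# 14 TB HDD -> ~560 GB of WAL+DB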
-Dave
--
Dave Hall
Binghamton University
kdhall(a)binghamton.edu
On Thu, Jun 3, 2021 at 6:10 PM Anthony D'Atri <anthony.datri(a)gmail.com>
wrote:
Agreed. I think, oh, maybe 15-20 years ago there was often a wider
difference between SAS and SATA drives, but with modern queuing etc. my
sense is that there is less of an advantage. I suspect that seek and
rotational latency dwarf interface differences wrt performance. The HBA
may be a bigger bottleneck (and way more trouble).
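
To put rough numbers on it (back-of-envelope only; the seek and RPM
figures below are assumed typical values, not measurements):

# Assumed typical figures for a 7200 rpm enterprise HDD, 4 KiB random read.
RPM = 7200
avg_rotational_ms = (60_000 / RPM) / 2   # half a revolution ~= 4.2 ms
avg_seek_ms = 8.0                        # assumed typical average seek

io_bytes = 4096
sata_MBps = 600     # SATA 6 Gb/s after 8b/10b encoding
sas_MBps = 1200     # SAS 12 Gb/s

sata_xfer_us = io_bytes / (sata_MBps * 1e6) * 1e6
sas_xfer_us = io_bytes / (sas_MBps * 1e6) * 1e6
mech_ms = avg_seek_ms + avg_rotational_ms

print(f"mechanical: ~{mech_ms:.1f} ms; SATA transfer: ~{sata_xfer_us:.1f} us; "
      f"SAS transfer: ~{sas_xfer_us:.1f} us")
# mechanical: ~12.2 ms; SATA transfer: ~6.8 us; SAS transfer: ~3.4 us

The interface difference is three orders of magnitude below the
mechanical latency.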
500 GB NVMe seems like a lot per HDD. Are you using that as WAL+DB with
RGW, or as dmcache or something?
Depending on your constraints, QLC flash might be more competitive than
you think ;)
— aad
I suspect the behavior of the controller and the behavior of the drive
firmware will end up mattering more than SAS vs SATA. As always, it's
best if you can test it first before committing to buying a pile of
them. Historically, though, I have seen SATA drives that have performed
well as far as HDDs go.
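
Something like this is the kind of quick sanity check I have in mind (a
sketch assuming fio is installed; /dev/sdX is a placeholder, so point it
at the right device):

import subprocess

def quick_randread_probe(device: str, runtime_s: int = 30) -> None:
    """Short 4 KiB random-read latency test against the raw device
    (read-only, so safe for a drive with data on it)."""
    subprocess.run([
        "fio",
        "--name=randread-probe",
        f"--filename={device}",
        "--rw=randread",
        "--bs=4k",
        "--iodepth=32",
        "--ioengine=libaio",
        "--direct=1",
        "--time_based",
        f"--runtime={runtime_s}",
        "--group_reporting",
    ], check=True)

quick_randread_probe("/dev/sdX")  # placeholder device name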
Mark
On 6/3/21 4:25 PM, Dave Hall wrote:
> Hello,
>
> We're planning another batch of OSD nodes for our cluster. Our prior
> nodes have been 8 x 12TB SAS drives plus 500GB NVMe per HDD. Due to
> market circumstances and the shortage of drives, those 12TB SAS drives
> are in short supply.
>
> Our integrator has offered an option of 8 x 14TB SATA drives (still
> Enterprise). For Ceph, will the switch to SATA carry a performance
> difference that I should be concerned about?
>
> Thanks.
> -Dave
> --
> Dave Hall
> Binghamton University
> kdhall(a)binghamton.edu
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io