I would recommend the Intel S4510 series, which has power loss protection (PLP).
If you do not need PLP on the data devices, the lower-cost Samsung 870 EVO and
Crucial MX500 should also be OK, provided the separate DB/WAL goes on an
enterprise SSD with PLP.
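If it helps, a minimal sketch of that layout, assuming ceph-volume lvm create
with --data and --block.db (the device paths are placeholders, not a
recommendation for your hardware):

#!/usr/bin/env python3
# Sketch only: one OSD whose data sits on a consumer SATA SSD while the
# RocksDB/WAL (block.db) lives on a partition of an enterprise SSD with PLP.
# Device paths are illustrative placeholders.
import subprocess

DATA_DEV = "/dev/sdb"        # e.g. 870 EVO / MX500 class data device
DB_DEV = "/dev/nvme0n1p1"    # enterprise PLP SSD partition for block.db

subprocess.run(
    ["ceph-volume", "lvm", "create",
     "--data", DATA_DEV,
     "--block.db", DB_DEV],
    check=True,
)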
Samuel
huxiaoyu(a)horebdata.cn
From: by morphin
Date: 2021-05-30 02:48
To: Anthony D'Atri
CC: Ceph Users
Subject: [ceph-users] Re: SSD recommendations for RBD and VM's
Hello Anthony.
I use QEMU and I don't need much capacity.
I have 1000 VMs and they're usually clones of the same RBD image. The
image is 30GB.
Right now I have 7TB of stored data; rep x3 = ~21TB raw. It's mostly read
intensive. Usage is stable and does not grow.
So I need I/O more than capacity. That's why I'm looking at 256-512GB
SSDs.
I think 480-512GB is the sweet spot for $/GB right now, so 60 pcs of 512GB
will be enough. Actually 120 pcs of 256GB would be better, but the price
goes up.
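For what it's worth, a quick back-of-the-envelope check of those two options
(a minimal Python sketch using only the numbers above; the 85% figure is
Ceph's default nearfull warning ratio):

# 7 TB of user data, replication size = 3
stored_tb = 7
raw_needed_tb = stored_tb * 3          # ~21 TB of raw capacity used

for count, size_gb in [(60, 512), (120, 256)]:
    raw_tb = count * size_gb / 1000    # drives are marketed in decimal GB
    fill = raw_needed_tb / raw_tb
    print(f"{count} x {size_gb} GB: {raw_tb:.1f} TB raw, "
          f"{fill:.0%} full (nearfull warning at 85%)")

Both options come out around 68% full, so either stays well under the
nearfull threshold; the 120-drive option just spreads the same raw capacity
(and the I/O) over twice as many OSDs.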
I have Dell R720/R740 servers and I use SATA Intel DC S3700 for the journal.
I have 40 pcs of 100GB; I'm going to make them OSDs as well.
After 7 years the DC S3700 still rocks. Not even one of them has died.
The SSD must be low price with a high TBW life span. The rest is not important.
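Since low price and high TBW are the hard requirements, here is a minimal
sketch of how TBW relates to DWPD over the warranty period (the DWPD figures
below are placeholders for illustration, not datasheet values):

def tbw(dwpd, capacity_gb, warranty_years=5):
    """Terabytes written = drive writes per day * capacity * warranty days."""
    return dwpd * (capacity_gb / 1000) * 365 * warranty_years

# Placeholder endurance classes -- check the real datasheets:
print(tbw(0.3, 512))   # ~0.3 DWPD consumer 512 GB          -> ~280 TBW
print(tbw(1.0, 480))   # ~1 DWPD read-intensive DC 480 GB   -> ~876 TBW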
Anthony D'Atri <anthony.datri(a)gmail.com> wrote on Sun, 30 May 2021 at 02:26:
The choice depends on scale, your choice of chassis / form factor, budget, workload and
needs.
The sizes you list seem awfully small. Tell us more about your use-case. OpenStack?
Proxmox? QEMU? VMware? Converged? Dedicated?
—aad
> On May 29, 2021, at 2:10 PM, by morphin <morphinwithyou(a)gmail.com> wrote:
>
> Hello.
>
> I have a virtualization env and I'm looking for new SSDs to replace HDDs.
> What are the best performance/price SSDs on the market right now?
> I'm looking at 1TB, 512GB, 480GB, 256GB, 240GB.
>
> Is there an SSD recommendation list for Ceph?
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io