Welcome to Ceph!
I think better questions to start with are: what are your objectives in your study? Is it
just seeing Ceph run with many disks, or are you trying to see how much performance you
can get out of it with distributed disks? What is your budget? Do you want to try different
combinations of storage devices to learn how they differ in performance, or do you just
want to jump to the fastest thing out there?
One often doesn’t need a bunch of machines to determine that Ceph is a really versatile
and robust solution. I pretty regularly deploy Ceph on a single node using Kubernetes and
Rook. Some would ask, “why would one ever do that, just use direct storage!” The answer is
that when I want to expand a cluster later, I am willing to trade some initial performance
overhead for letting Ceph distribute data at that point. And the overhead is far lower than
one might think when there is no network bottleneck to deal with. I do use direct storage
on LVM for distributed workloads such as Kafka that already abstract the storage a service
instance depends on. It doesn’t make much sense in my mind for Kafka or Cassandra to use
Ceph, because those services can already afford to lose nodes.
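To make the single-node idea concrete, here is a minimal sketch of what a one-node Rook CephCluster can look like. This is illustrative only: the image tag and paths are examples, not something from this thread, and a real deployment also needs the Rook operator and CRDs installed first.

```yaml
# Hypothetical single-node CephCluster spec for a Rook lab (illustrative).
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: quay.io/ceph/ceph:v17   # example image tag; pick a current release
  dataDirHostPath: /var/lib/rook
  mon:
    count: 1                       # a single mon is fine for a one-node lab
    allowMultiplePerNode: true     # required when mons outnumber nodes
  mgr:
    count: 1
  storage:
    useAllNodes: true
    useAllDevices: true            # let Rook claim every empty disk it finds
```

With only one node there is no failure-domain separation, so this is strictly a learning setup, but it exercises the same OSD, mon, and mgr machinery as a larger cluster.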
In other words, Ceph is virtualized storage. You have likely come to it because your
workloads need to be able to come up anywhere on your network and reach that storage. How
do you see those workloads exercising the capabilities of Ceph? That’s where your
interesting use cases come from, and they can help you decide which lab platform is best
to get started with.
Hope that helps, Brian
On Sep 29, 2020, at 12:44 AM, Ignacio Ocampo
<nafiux(a)gmail.com> wrote:
Hi All :),
I would like to get your feedback about the components below to build a PoC OSD Node (I
will build 3 of these).
SSD for OS.
NVMe for cache.
HDD for storage.
The Supermicro motherboard has two 10Gb NICs, and I will use ECC memory.
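As a sketch of how the parts list above could map onto OSDs in Rook, the HDDs can serve as BlueStore data devices while the NVMe holds their metadata (DB/WAL). The node and device names below are hypothetical placeholders, not taken from this build.

```yaml
# Hypothetical Rook storage section for the layout above (illustrative):
# HDDs as OSD data devices, NVMe shared as the BlueStore metadata device.
storage:
  useAllNodes: false
  nodes:
    - name: "osd-node-1"           # placeholder node name
      devices:
        - name: "sda"              # HDD for bulk data
          config:
            metadataDevice: "nvme0n1"   # NVMe backs the BlueStore DB/WAL
        - name: "sdb"              # second HDD, sharing the same NVMe
          config:
            metadataDevice: "nvme0n1"
```

Putting the DB/WAL on NVMe is the usual way "NVMe for cache" is realized with BlueStore; a separate OS SSD stays outside this spec entirely.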
<image.png>
Thanks for your feedback!
--
Ignacio Ocampo
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io