Hi Brian,
Here is more context about what I want to accomplish: I've migrated a number
of services from AWS to a local server, but having everything on a single
server is not safe. Rather than investing in RAID, I would like to start
setting up a small Ceph cluster so I have redundancy and a robust way to
recover if any component fails.
Also, in the mid-term, I do have plans to deploy a small OpenStack Cluster.
Because of that, I would like to set up a first small Ceph cluster that can
scale as my needs grow. The idea is to have 3 OSD nodes with identical
specifications and to add HDDs as needed, up to 5 HDDs per OSD node, starting
with 1 HDD per node.
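To sanity-check the sizing, here is a rough back-of-the-envelope calculation
in Python. The 4 TB per-drive size, the replicated pool size of 3 (Ceph's
default), and the ~85% fill ratio are just my own assumptions for
illustration, not fixed parts of the plan:

# Rough usable-capacity estimate for a 3-node cluster growing from 1 to 5
# HDDs per node. Assumptions (not from the plan above): 4 TB per HDD,
# replicated pools with size=3, and stopping around 85% full to stay clear
# of nearfull warnings.

def usable_tb(nodes: int, hdds_per_node: int, tb_per_hdd: float,
              replicas: int = 3, fill_ratio: float = 0.85) -> float:
    raw_tb = nodes * hdds_per_node * tb_per_hdd
    return raw_tb / replicas * fill_ratio

if __name__ == "__main__":
    for hdds in (1, 3, 5):
        print(f"{hdds} HDD(s) per node: ~{usable_tb(3, hdds, 4.0):.1f} TB usable")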
Thanks!
On Thu, Oct 1, 2020 at 11:35 AM Brian Topping <brian.topping(a)gmail.com>
wrote:
Welcome to Ceph!
I think better questions to start with are “what are your objectives in
your study?” Is it just seeing Ceph run with many disks, or are you trying
to see how much performance you can get out of it with distributed disk?
What is your budget? Do you want to try different combinations of storage
devices to learn how they differ in performance or do you just want to jump
to the fastest things out there?
One often doesn’t need a bunch of machines to determine that Ceph is a
really versatile and robust solution. I pretty regularly deploy Ceph on a
single node using Kubernetes and Rook. Some would ask “why would one ever
do that, just use direct storage!”. The answer is that when I want to expand
a cluster, I am willing to accept some initial performance overhead in
exchange for letting Ceph distribute the data at a later date. And the
overhead is far lower than one might think when there is no network
bottleneck to deal with. I do use direct storage on LVM for distributed
workloads such as Kafka that abstract the storage a service instance depends
on. It doesn't make much sense in my mind for Kafka or Cassandra to use Ceph,
because those services can already afford to lose nodes.
In other words, Ceph is virtualized storage. You have likely come to it
because your workloads need to be able to come up anywhere on your network
and reach that storage. How do you see those workloads exercising the
capabilities of Ceph? That's where your interesting use cases come from, and
they can help you decide which lab platform is best to get started with.
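As a concrete illustration of "come up anywhere and reach that storage", here
is a minimal Python sketch using the python3-rados bindings. It assumes the
usual /etc/ceph/ceph.conf and admin keyring on the client, and a pool named
"testpool" that you created beforehand, so treat those names as placeholders:

import rados

# Connect with the cluster config and admin keyring (default paths assumed).
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                      conf=dict(keyring="/etc/ceph/ceph.client.admin.keyring"))
cluster.connect()

# Write an object into a pre-created pool and read it back. Any host that
# can reach the monitors and OSDs can run this, which is the point.
ioctx = cluster.open_ioctx("testpool")
ioctx.write_full("hello", b"written from wherever this workload came up")
print(ioctx.read("hello"))
ioctx.close()
cluster.shutdown()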
Hope that helps, Brian
On Sep 29, 2020, at 12:44 AM, Ignacio Ocampo <nafiux(a)gmail.com> wrote:
Hi All :),
I would like to get your feedback on the components below for building a
PoC OSD node (I will build 3 of these):
SSD for OS.
NVMe for cache.
HDD for storage.
The Supermicro motherboard has two 10Gb network cards, and I will use ECC memory.
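My understanding is that "NVMe for cache" here means using the NVMe as the
BlueStore DB/WAL device for the HDD-backed OSDs. A rough Python sketch of how
I picture creating the OSDs with ceph-volume (the device paths are
placeholders, not the actual hardware):

import subprocess

# Placeholder device names; adapt to the actual hardware in each node.
HDDS = ["/dev/sdb"]        # starts with one data HDD, will grow to five
NVME_DB = "/dev/nvme0n1"   # NVMe that will hold the BlueStore DB/WAL

# ceph-volume splits the NVMe into DB volumes and creates one BlueStore
# OSD per HDD, with data on the HDD and RocksDB/WAL on the NVMe.
subprocess.run(
    ["ceph-volume", "lvm", "batch", "--bluestore",
     *HDDS,
     "--db-devices", NVME_DB],
    check=True,
)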
<image.png>
Thanks for your feedback!
--
Ignacio Ocampo