Hi Ignacio, apologies I missed your responses here.
I would agree with Martin about buying used hardware as cheaply as possible, but I also
understand the desire to have hardware you can promote into future OpenStack usage.
Regarding networking, I started to use SFP+ cables like
https://amzn.to/36sHZo1. These consume
less energy, so they run far cooler, and they can easily be swapped out for fiber
or 10GBase-T modules if/when I need them.
https://amzn.to/34s6Ohj is the switch I
am currently using; I will be moving to a SONiC-based EdgeCore when I get some time to
fix the crashes I'm having in its software. If the motherboards you found are built with
10GBase-T, it may be cheaper now to just stick with that in your switch, but long term
you'll probably save money going with SFP+, so it's worth considering.
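Whichever way you go, it's worth confirming the links actually negotiated 10G once
everything is cabled up. A quick sketch of the kind of check I mean, assuming Linux hosts
that expose the negotiated speed under /sys/class/net (the threshold and output format are
just illustrative):

#!/usr/bin/env python3
# Print the negotiated link speed of each network interface (Linux sysfs only).
from pathlib import Path

def link_speeds():
    # Yield (interface, speed in Mb/s) for every link that reports a usable speed.
    for iface in sorted(Path("/sys/class/net").iterdir()):
        try:
            speed = int((iface / "speed").read_text().strip())
        except (OSError, ValueError):
            continue  # no speed file, or the link is down
        if speed > 0:  # virtual interfaces often report -1
            yield iface.name, speed

if __name__ == "__main__":
    for name, speed in link_speeds():
        note = "" if speed >= 10000 else "  <-- below 10G"
        print(f"{name}: {speed} Mb/s{note}")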
Regarding the MikroTik switch, I bought it on price since I thought I was just getting a
switch. It turns out the thing is a full-fledged router on the level of a Cisco Catalyst.
It has a very complete and mature CLI configuration language, as well as a web UI that is
a bit overwhelming at first but very useful once you get the hang of it. I thought it was
something I'd be putting back on eBay pretty quickly, but I ended up growing quite fond of
it.
Regarding bottlenecks, there's always going to be a bottleneck; parts are never perfectly
balanced. Spindles still have good characteristics for certain workloads, and I'd say it's
doubtful you'd exceed their limits with a three-node experimentation cluster. The question
I'd ask is whether you can use the spindles in a long-term solution for things like log
storage and backups. If the answer is yes, then having them now will allow you to get a
better feel for the differences. That's a lot easier to discover firsthand than it is to
describe. (In the old days, this was like people comparing statistics about stereo
equipment; today people don't care about all that, they care about whether they can find
the song they want, and headphones are more than good enough…)
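As a concrete example of what "spindles for logs and backups" can look like later on: OSDs
report a device class (hdd/ssd/nvme), and you can pin a pool to a class with its own CRUSH
rule. Here is a rough sketch, assuming your HDD OSDs already report the hdd class; the rule
name, pool name, and PG counts are placeholders, and none of this is needed for the first
PoC — it just shows the spindles stay useful.

#!/usr/bin/env python3
# Sketch: create an HDD-only CRUSH rule and point a backup pool at it.
# Assumes a running cluster whose HDD OSDs report the 'hdd' device class;
# the rule name, pool name, and PG counts below are made up for the example.
import subprocess

commands = [
    # Replicated rule that only selects OSDs with device class 'hdd',
    # spreading replicas across hosts under the 'default' root.
    ["ceph", "osd", "crush", "rule", "create-replicated",
     "backups-hdd", "default", "host", "hdd"],
    # A pool for logs/backups that uses the rule above.
    ["ceph", "osd", "pool", "create", "backups", "64", "64",
     "replicated", "backups-hdd"],
]

for cmd in commands:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)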
Brian
On Oct 2, 2020, at 2:03 AM, Ignacio Ocampo
<nafiux(a)gmail.com> wrote:
What about the network cards? The motherboard I'm looking at has 2 x 10GbE; with that
and the CPU frequency, I think the bottleneck will be the HDD. Is that overkill? Thanks!
Ignacio Ocampo
> On 2 Oct 2020, at 0:38, Martin Verges <martin.verges(a)croit.io> wrote:
>
>
> For private projects, you can look for small 1U servers with up to four 3.5" disk
> slots and an E3-1230 v3/4/5 CPU. They can be bought used for 250-350€, and then you
> just plug in a disk.
> They are also good for SATA SSDs and work quite well. You can mix both drive types in
> the same system as well.
>
> --
> Martin Verges
> Managing director
>
> Mobile: +49 174 9335695
> E-Mail: martin.verges(a)croit.io
> Chat: https://t.me/MartinVerges
>
> croit GmbH, Freseniusstr. 31h, 81247 Munich
> CEO: Martin Verges - VAT-ID: DE310638492
> Com. register: Amtsgericht Munich HRB 231263
>
> Web: https://croit.io
> YouTube: https://goo.gl/PGE1Bx
>
>
> On Fri, 2 Oct 2020 at 08:32, Ignacio Ocampo <nafiux(a)gmail.com> wrote:
> Hi Brian,
>
> Here is more context about what I want to accomplish: I've migrated a bunch of
> services from AWS to a local server, but having everything on a single
> server is not safe, and instead of investing in RAID, I would like to start
> setting up a small Ceph cluster to have redundancy and a robust mechanism
> in case any component fails.
>
> Also, in the mid-term, I do have plans to deploy a small OpenStack cluster.
>
> Because of that, I would like to set up a first small Ceph cluster that
> can scale as my needs grow. The idea is to have 3 OSD nodes with the same
> characteristics, starting with 1 HDD per node and adding HDDs as needed,
> up to 5 HDDs per OSD node.
>
> Thanks!
>
> On Thu, Oct 1, 2020 at 11:35 AM Brian Topping <brian.topping(a)gmail.com> wrote:
>
> > Welcome to Ceph!
> >
> > I think better questions to start with are “what are your objectives in
> > your study?” Is it just seeing Ceph run with many disks, or are you trying
> > to see how much performance you can get out of it with distributed disk?
> > What is your budget? Do you want to try different combinations of storage
> > devices to learn how they differ in performance or do you just want to jump
> > to the fastest things out there?
> >
> > One often doesn’t need a bunch of machines to determine that Ceph is a
> > really versatile and robust solution. I pretty regularly deploy Ceph on a
> > single node using Kubernetes and Rook. Some would ask “why would one ever
> > do that, just use direct storage!" The answer is that when I want to expand a
> > cluster later, I am willing to have traded some initial performance overhead for
> > letting Ceph distribute the data at that point. And the overhead is far lower
> > than one might think when there’s not a network bottleneck to deal with. I
> > do use direct storage on LVM when I have distributed workloads, such as
> > Kafka, that themselves abstract the storage a service instance depends on. It doesn't
> > make much sense in my mind for Kafka or Cassandra to use Ceph, because I can
> > afford to lose nodes running those services.
> >
> > In other words, Ceph is virtualized storage. You have likely come to it
> > because your workloads need to be able to come up anywhere on your network
> > and reach that storage. How do you see those workloads exercising the
> > capabilities of Ceph? That’s where your interesting use cases come from,
> > and they can help you decide what the best lab platform is to get started with.
> >
> > Hope that helps, Brian
> >
> > On Sep 29, 2020, at 12:44 AM, Ignacio Ocampo <nafiux(a)gmail.com> wrote:
> >
> > Hi All :),
> >
> > I would like to get your feedback on the components below for building a
> > PoC OSD node (I will build 3 of these).
> >
> > SSD for OS.
> > NVMe for cache.
> > HDD for storage.
> >
> > The Supermicro motherboard has 2 x 10GbE NICs, and I will use ECC memory.
> >
> > <image.png>
> >
> > Thanks for your feedback!
> >
> > --
> > Ignacio Ocampo
> >
>
> --
> Ignacio Ocampo
> _______________________________________________
> ceph-users mailing list -- ceph-users(a)ceph.io
> To unsubscribe send an email to ceph-users-leave(a)ceph.io