Hi Hermann,
Yes, I asked the same question a while ago and received very valuable
advice. We ended up purchasing two refurbished 40G Arista switches, for
very little money compared to new 10G switches.
Ours are these:
https://emxcore.com/shop/category/product/arista-dcs-7050qx-32s/
The complete thread on the subject, including many more recommendations,
is here:
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/5DH57H4VO27…
Best,
MJ
On 5/19/21 2:10 PM, Max Vernimmen wrote:
> Hermann,
>
> I think there was a discussion on recommended switches not too long ago.
> You should be able to find it in the mailing list archives.
> I think network latency is usually a very minor factor compared to Ceph's
> dependency on CPU and disk latency, so for a simple cluster I wouldn't
> worry about it too much.
> I have found that fs.com's DAC cables get stuck a lot, so I don't use them
> anymore. I usually buy Dell or Mellanox cables.
> Regarding network cards, I've found the Intel cards to be not that great,
> due to bugs with LACP bonds, the embedded LLDP agent getting in the way,
> and other issues. So I'm using Mellanox cards instead, but Broadcom
> should also work.
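>
> If you do end up with Intel cards anyway: for the X710 family (i40e
> driver), the workaround usually mentioned for the embedded LLDP problem
> is disabling the NIC firmware's LLDP agent via a driver private flag.
> A minimal sketch, assuming a hypothetical interface name enp1s0f0
> (adjust to your system):
>
>     # list the driver's private flags to see what it exposes
>     ethtool --show-priv-flags enp1s0f0
>     # disable the firmware LLDP agent so LLDP frames reach the host
>     ethtool --set-priv-flags enp1s0f0 disable-fw-lldp on
>
> Whether this flag is available depends on the driver and firmware
> version, so treat it as a starting point, not a guaranteed fix.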
>
> Hope it helps!
>
> Best regards,
>
>
> Max
>
> On Wed, May 19, 2021 at 1:48 PM <ceph-users-request@ceph.io> wrote:
>
>> ---------- Forwarded message ----------
>> From: Hermann Himmelbauer <hermann@qwer.tk>
>> To: ceph-users@ceph.com
>> Cc:
>> Bcc:
>> Date: Wed, 19 May 2021 11:22:26 +0200
>> Subject: [ceph-users] Suitable 10G Switches for ceph storage - any
>> recommendations?
>> Dear Ceph users,
>> I am currently constructing a small hyperconverged Proxmox cluster with
>> Ceph as storage. So far I have always had 3 nodes, which I linked
>> directly together via 2 bonded 10G network interfaces for the Ceph
>> storage, so I never needed any switches.
>>
>> This new cluster has more nodes, so I am considering using a 10G switch
>> for the storage network. As I have no experience with such a setup, I
>> wonder whether there are any specific issues I should keep in mind
>> (latency, ...)?
>>
>> As the whole cluster should not be too expensive, I am currently
>> thinking of the following solution:
>>
>> 2x CRS317-1G-16S+RM switches:
>>
>> https://mikrotik.com/product/crs317_1g_16s_rm#fndtn-testresults
>>
>> SFP+ Cables like these:
>>
>> https://www.fs.com/de/products/48883.html
>>
>> A network interface with two SFP+ ports for each node, e.g.:
>>
>>
>> https://ark.intel.com/content/www/de/de/ark/products/39776/intel-ethernet-c…
>>
>> Connect one port to each switch and set up a master/slave
>> (active-backup) configuration so that the switches are redundant.
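>>
>> A minimal sketch of what that could look like on a Proxmox node, as an
>> active-backup bond in /etc/network/interfaces (the interface names and
>> the address are hypothetical, adjust for your hardware):
>>
>>     auto bond0
>>     iface bond0 inet static
>>         address 10.10.10.11/24
>>         bond-slaves enp1s0f0 enp1s0f1
>>         bond-mode active-backup
>>         bond-miimon 100
>>         bond-primary enp1s0f0
>>
>> Note that active-backup uses only one link at a time and needs no
>> switch-side support, so the two switches don't have to be stacked. If
>> you wanted to aggregate bandwidth across both switches with LACP
>> instead, the switches would need MLAG support.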
>>
>> What do you think of this setup - or is there any information /
>> recommendation for an optimized setup of a 10G storage network?
>>
>> Best Regards,
>> Hermann
>>
>> --
>> hermann@qwer.tk
>> PGP/GPG: 299893C7 (on keyservers)
>>
>>
> _______________________________________________
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-leave@ceph.io
>