Ok, so 100G seems to be the better choice. I will probably go with some of these:
https://www.fs.com/products/75808.html
From: "Paul Emmerich" <paul.emmerich(a)croit.io>
To: "EDH" <mriosfer(a)easydatahost.com>
Cc: "adamb" <adamb(a)medent.com>, "ceph-users"
<ceph-users(a)ceph.io>
Sent: Friday, January 31, 2020 8:49:29 AM
Subject: Re: [ceph-users] Re: Micron SSD/Basic Config
On Fri, Jan 31, 2020 at 2:06 PM EDH - Manuel Rios
<mriosfer(a)easydatahost.com> wrote:
Hmm, change 40Gbps to 100Gbps networking.
40Gbps technology is just a bond of 4x 10G links, with some latency due to link aggregation.
100Gbps and 25Gbps have less latency and good performance. In Ceph, 50% of the latency
comes from network commits and the other 50% from disk commits.
40G ethernet is not the same as a 4x 10G bond. A bond load-balances on a
per-packet (or, more usually, per-flow) basis; a 40G link uses all four
lanes even for a single packet.
100G, on the other hand, is "just" 4x 25G.
I also wouldn't agree that network and disk latency is a 50/50 split
in Ceph, unless you have NVRAM disks or something similar.
Even for the network, processing and queuing in the network stack
dominate over the serialization-delay difference between 40G and 100G:
serializing 4 KB takes about 320 ns at 100G versus about 800 ns at 40G.
I don't have figures for processing times on 40/100G ethernet, but 10G
fiber is around 300 ns and 10GBASE-T around 2300 ns.
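Those serialization numbers are easy to check back-of-the-envelope; a minimal sketch (the helper name is mine, not from any Ceph tooling, and it ignores ethernet framing/preamble overhead):

```python
def serialization_delay_ns(payload_bytes: int, link_gbps: float) -> float:
    """Time to clock payload_bytes onto the wire at link_gbps.

    bits / (Gbit/s) comes out directly in nanoseconds:
    (bits) / (1e9 bit/s) = seconds * 1e-9, i.e. ns = bits / gbps.
    Framing overhead (preamble, IPG, FCS) is ignored for this estimate.
    """
    return (payload_bytes * 8) / link_gbps


# 4 KiB page at 100G vs 40G -- matches the ~320 ns / ~800 ns figures above
print(serialization_delay_ns(4096, 100))  # 327.68 ns
print(serialization_delay_ns(4096, 40))   # 819.2 ns
```

The takeaway is that the ~500 ns saved on a 4 KiB write is small next to the microseconds spent in NIC and kernel processing, which is why the 40G-to-100G jump alone doesn't move Ceph latency much.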
Paul