Latency on the client side is not an issue. It just combines with the other
latencies in the stack; the more the client lags, the easier it is for the
cluster.
What I'm talking about here is slightly different. When you want to establish
baseline performance for the osd daemon (disregarding block device and network
latencies), a sudden order-of-magnitude delay on syscalls causes a
disproportionate skew in the results.
This does not relate to production in any way, only to ceph-osd
benchmarks.
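For the curious, here's roughly how I'd measure that baseline syscall cost (a minimal sketch, not the actual tool used; the iteration count and /dev/zero target are arbitrary choices, and the Python loop itself adds some overhead on top of the raw syscall):

```python
# Rough sketch: time a cheap read(2)-class syscall in a tight loop to
# estimate per-call overhead on bare metal vs. inside a VM.
import os
import time

def syscall_latency_ns(iterations=100_000):
    fd = os.open("/dev/zero", os.O_RDONLY)
    try:
        start = time.perf_counter_ns()
        for _ in range(iterations):
            os.pread(fd, 1, 0)  # one syscall per iteration
        elapsed = time.perf_counter_ns() - start
    finally:
        os.close(fd)
    # Average ns per call; includes Python interpreter overhead,
    # so treat it as an upper bound on the raw syscall time.
    return elapsed / iterations

print(f"~{syscall_latency_ns():.0f} ns per pread on /dev/zero")
```

Run the same thing on the host and inside the guest and the order-of-magnitude gap shows up immediately.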
On Thu, Sep 10, 2020, 23:21 <vitalif(a)yourcmc.ru> wrote:
> Yeah, of course... but RBD is primarily used for KVM VMs, so the results
> from a VM are the thing that real clients see. So they do mean something...
> :)
I know. I tested fio before testing Ceph with fio. With the null ioengine,
fio can handle up to 14M IOPS (on my dusty lab's R220). On blk_null it
gets down to 2.4-2.8M IOPS, and on brd it drops to a sad 700k IOPS.
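For reference, a null-engine run like that looks something along these lines as a fio job file (the block size, depth and runtime here are illustrative, not the exact parameters I used; with ioengine=null no I/O is actually issued, so you're measuring fio's own overhead):

```ini
; illustrative fio job: measures fio's internal IOPS ceiling
[nulltest]
ioengine=null
size=100G
rw=randread
bs=4k
iodepth=128
time_based=1
runtime=10
```

Swap ioengine for libaio and point it at /dev/nullb0 or a brd device to get the other two numbers above.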
BTW, never run synthetic high-performance benchmarks on KVM. My old server
with 'makelinuxfastagain' fixes makes one IO request in 3.4us, and on a KVM VM
it becomes 24us. Some guy said he got about 8.5us on VMware. That's all on a
purely software stack, without any hypervisor IO.
24us sounds like a small number, but if your synthetic benchmark makes 200k
IOPS, that's only 5us per request. You can't make 200k on a VM with a 24us
syscall time.
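The arithmetic is just the reciprocal: at N IOPS (single queue) each request gets 1/N seconds of budget.

```python
# Per-request time budget implied by an IOPS figure (serial, queue depth 1).
def budget_us(iops: int) -> float:
    return 1_000_000 / iops  # microseconds per request

print(budget_us(200_000))  # 5.0 us -- less than the 24us syscall alone
```

So with a 24us floor per syscall, a single-threaded synthetic load inside the VM tops out around 40k IOPS no matter what the storage underneath can do.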