Hello,
we did some local testing a few days ago on a new installation of a small
cluster. Our iSCSI implementation showed a 20-30% performance drop
compared to krbd.
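If anyone wants to reproduce such a comparison, one simple approach is to
run the same fio job once against a krbd-mapped device and once against
the iSCSI-attached LUN. This is only a rough sketch; the pool, image, and
device names below are placeholders:

    # map the RBD image with the kernel client (device path will vary)
    rbd map mypool/myimage

    # sequential write test against the krbd device
    fio --name=krbd-write --filename=/dev/rbd0 --rw=write --bs=4M \
        --ioengine=libaio --iodepth=32 --direct=1 --runtime=60 --time_based

    # repeat with identical parameters against the iSCSI LUN, e.g. /dev/sdX
    fio --name=iscsi-write --filename=/dev/sdX --rw=write --bs=4M \
        --ioengine=libaio --iodepth=32 --direct=1 --runtime=60 --time_based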
--
Martin Verges
Managing director
Mobile: +49 174 9335695
E-Mail: martin.verges(a)croit.io
croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263
On Thu, Oct 24, 2019 at 7:37 PM Drew Weaver <drew.weaver(a)thenap.com> wrote:
I was told by someone at Red Hat that iSCSI performance is still several
orders of magnitude behind using the native client/driver.
Thanks,
-Drew
-----Original Message-----
From: Nathan Fish <lordcirth(a)gmail.com>
Sent: Thursday, October 24, 2019 1:27 PM
To: Ryan <rswagoner(a)gmail.com>
Cc: ceph-users <ceph-users(a)ceph.com>
Subject: [ceph-users] Re: iSCSI write performance
Are you using Erasure Coding or replication? What is your crush rule?
What SSDs and CPUs? Does each OSD use 100% of a core or more when writing?
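Most of that can be pulled with something like the following (pool names
in the output will be your own):

    # replication size / EC profile and crush rule per pool
    ceph osd pool ls detail

    # dump the crush rules referenced above
    ceph osd crush rule dump

    # per-OSD commit/apply latency while the write test is running
    ceph osd perf

    # CPU usage of the OSD daemons on a host
    top -p "$(pgrep -d',' ceph-osd)"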
On Thu, Oct 24, 2019 at 1:22 PM Ryan <rswagoner(a)gmail.com> wrote:
I'm in the process of testing the iSCSI target feature of Ceph. The
cluster is running Ceph 14.2.4 and ceph-iscsi 3.3. It consists of 5 hosts
with 12 SSD OSDs per host. Some basic testing moving VMs to a Ceph-backed
datastore shows only 60 MB/s transfers. However, moving them back off
the datastore is fast, at 200-300 MB/s.

What should I be looking at to track down the write performance issue?
For comparison, with our Nimble Storage arrays I see 200-300 MB/s in
both directions.
Thanks,
Ryan
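A useful first step for narrowing this down is to benchmark the pool
directly with a raw RADOS write test, which takes the iSCSI gateway out
of the path entirely. The pool name below is a placeholder:

    # 30-second write benchmark directly against the pool
    rados bench -p rbd 30 write --no-cleanup

    # remove the benchmark objects afterwards
    rados -p rbd cleanup

If the raw numbers are fast, the bottleneck is likely the gateway or the
initiator settings rather than the cluster itself.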
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io