The drives are Samsung 860 EVO 2TB SSDs. The Dell R740xd servers have dual Intel Xeon Gold 6130 CPUs and dual SAS controllers with 6 SSDs per controller. top shows each OSD daemon using around 20-25% of a core. I am using erasure coding with crush-failure-domain=host, k=3, m=2.
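
For reference, the EC profile and data pool were created roughly along these lines (profile/pool names and PG counts here are illustrative, not necessarily exactly what I used; image metadata sits in a replicated pool while the data goes to the EC pool):

    # EC profile: 3 data + 2 coding chunks, spread across hosts
    ceph osd erasure-code-profile set ec32 k=3 m=2 crush-failure-domain=host
    # EC data pool for RBD; overwrites must be enabled for RBD/iSCSI use
    ceph osd pool create rbd-data 128 128 erasure ec32
    ceph osd pool set rbd-data allow_ec_overwrites true
    ceph osd pool application enable rbd-data rbd
    # image metadata lives in the replicated 'rbd' pool, data goes to the EC pool
    rbd create rbd/vmstore01 --size 2T --data-pool rbd-data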

On Thu, Oct 24, 2019 at 1:37 PM Drew Weaver <drew.weaver@thenap.com> wrote:
I was told by someone at Red Hat that iSCSI performance is still several orders of magnitude behind using the native client/driver.
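
One way to quantify that gap on your own cluster (just a sketch; the pool, image, and device names below are placeholders) is to run the same fio job once through librbd and once against the LUN as mapped on the iSCSI initiator:

    # native path: librbd via fio's rbd engine (use a scratch image, this writes data)
    fio --name=rbd-write --ioengine=rbd --clientname=admin --pool=rbd \
        --rbdname=testimg --rw=write --bs=1M --iodepth=32 --size=10G

    # iSCSI path: the same job against the mapped LUN (destructive, use a test LUN)
    fio --name=iscsi-write --filename=/dev/sdX --ioengine=libaio --direct=1 \
        --rw=write --bs=1M --iodepth=32 --size=10G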

Thanks,
-Drew


-----Original Message-----
From: Nathan Fish <lordcirth@gmail.com>
Sent: Thursday, October 24, 2019 1:27 PM
To: Ryan <rswagoner@gmail.com>
Cc: ceph-users <ceph-users@ceph.com>
Subject: [ceph-users] Re: iSCSI write performance

Are you using erasure coding or replication? What is your CRUSH rule?
What SSDs and CPUs? Does each OSD use 100% of a core or more when writing?
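
A quick way to check the per-OSD CPU usage, assuming the daemons show up as ceph-osd processes, is something like:

    # live view of just the OSD daemons
    top -p $(pgrep -d, ceph-osd)
    # or a one-shot snapshot sorted by CPU
    ps -C ceph-osd -o pid,%cpu,%mem,cmd --sort=-%cpu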

On Thu, Oct 24, 2019 at 1:22 PM Ryan <rswagoner@gmail.com> wrote:
>
> I'm in the process of testing the iSCSI target feature of Ceph. The cluster is running Ceph 14.2.4 and ceph-iscsi 3.3. It consists of 5 hosts with 12 SSD OSDs per host. Some basic testing, moving VMs to a Ceph-backed datastore, shows only 60 MB/s transfers. However, moving them back off the datastore is fast, at 200-300 MB/s.
>
> What should I be looking at to track down the write performance issue? In comparison, with the Nimble Storage arrays I see 200-300 MB/s in both directions.
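>
> For a baseline that takes iSCSI out of the picture, would a raw RADOS write test against the data pool be the right first step? Something like (pool name is a placeholder):
>
>     # 30-second 4 MB write test, then clean up the benchmark objects
>     rados bench -p rbd-data 30 write -b 4M -t 16 --no-cleanup
>     rados -p rbd-data cleanup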
>
> Thanks,
> Ryan
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-leave@ceph.io