Are you using erasure coding or replication? What is your CRUSH rule?
Which SSDs and CPUs are you using? Does each OSD use 100% of a core or
more while writing?
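For reference, these are the kinds of commands that answer those questions (a sketch, assuming a replicated pool named "rbd" backs the datastore; substitute your actual pool name, and run the last command on an OSD host during a write workload):

```shell
# Pool name "rbd" is an assumption -- use "ceph osd pool ls" to find yours.
ceph osd pool get rbd size                    # replica count (replicated pools)
ceph osd pool get rbd erasure_code_profile    # fails unless the pool is erasure coded
ceph osd crush rule dump                      # show the CRUSH rule(s) in use
top -b -n 1 | grep ceph-osd                   # per-OSD CPU usage on this host
```

These commands require a running cluster, so treat the output-free listing above as a checklist rather than something to run verbatim.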
On Thu, Oct 24, 2019 at 1:22 PM Ryan <rswagoner(a)gmail.com> wrote:
>
> I'm in the process of testing the iSCSI target feature of Ceph. The cluster is
running Ceph 14.2.4 and ceph-iscsi 3.3. It consists of 5 hosts with 12 SSD OSDs per host.
Basic testing, moving VMs to a Ceph-backed datastore, shows transfers of only 60 MB/s.
However, moving them back off the datastore is fast, at 200-300 MB/s.
>
> What should I be looking at to track down the write performance issue? For comparison,
the Nimble Storage arrays sustain 200-300 MB/s in both directions.
>
> Thanks,
> Ryan
> _______________________________________________
> ceph-users mailing list -- ceph-users(a)ceph.io
> To unsubscribe send an email to ceph-users-leave(a)ceph.io