Yes, with someone I did some consulting for. Veeam seems to be one of the prevalent uses for ceph-iscsi, though I'd try to use the native RBD client instead if possible.
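On Windows the native path would be the Ceph for Windows client (RBD mapped through the WNBD driver) rather than the iSCSI gateway; from a Linux host, the librbd Python bindings (python3-rbd) exercise the same gateway-free path. A minimal sketch, with placeholder pool/image names:

    import rados
    import rbd

    # Talks to the cluster directly via librados/librbd,
    # no iSCSI gateway in the data path.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('rbd')            # placeholder pool name
        with rbd.Image(ioctx, 'test-image') as img:  # placeholder image name
            print('image size: %d bytes' % img.size())
            img.write(b'\0' * (4 * 1024 * 1024), 0)  # one 4 MiB write, straight to RBD
        ioctx.close()
    finally:
        cluster.shutdown()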
Veeam appears to store really tiny blocks by default, so there's a lot of protocol overhead. I understand that Veeam can be configured to use "large blocks", which can make a distinct difference.
On Jun 23, 2023, at 09:33, Work Ceph <work.ceph.user.mailing(a)gmail.com> wrote:
Great question!
Yes, some of the slowness was observed in a Veeam setup. Have you experienced that before?
On Fri, Jun 23, 2023 at 10:32 AM Anthony D'Atri <aad(a)dreamsnake.net> wrote:
Are you using Veeam by chance?
On Jun 22, 2023, at 21:18, Work Ceph <work.ceph.user.mailing(a)gmail.com> wrote:
Hello guys,
We have a Ceph cluster that runs just fine with Ceph Octopus; we use RBD for some workloads, RadosGW (via S3) for others, and iSCSI for some Windows clients.
We started noticing some unexpected performance issues with iSCSI. Specifically, an image on an SSD pool reaches about 100 MB/s of write speed over iSCSI, while the same image reaches 600+ MB/s when mounted and consumed directly via RBD. Is that much performance degradation expected? We would expect some overhead, but not as much as this.
Also, we have a question regarding Intel Turbo Boost. Should we disable it? Is it possible that the root cause of the slowness in the iSCSI gateway is the Intel Turbo Boost feature, which reduces the clock of some cores?
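For what it's worth, we can at least confirm the current turbo state on the gateway nodes before changing anything; a small sketch assuming the intel_pstate cpufreq driver (the sysfs knob differs with other drivers):

    from pathlib import Path

    # intel_pstate exposes no_turbo: "1" means Turbo Boost is disabled, "0" enabled.
    knob = Path('/sys/devices/system/cpu/intel_pstate/no_turbo')
    if knob.exists():
        state = knob.read_text().strip()
        print('Turbo Boost is', 'disabled' if state == '1' else 'enabled')
    else:
        print('no intel_pstate no_turbo knob; host may use a different cpufreq driver')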
Any feedback is much appreciated.