Thanks, Mark.
I’m interested as well, wanting to provide block service to bare-metal hosts; iSCSI seems
to be the classic way to do that.
I know there’s some work on MS Windows RBD code, but I’m uncertain whether it’s
production-worthy, and whether RBD namespaces are mature enough to provide adequate
tenant isolation.
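For what it's worth, here is a minimal sketch of what per-tenant isolation with RBD namespaces
can look like through the rados/rbd Python bindings; the pool name "rbd", the namespace
"tenant-a", and the image name/size are placeholders I made up, not anything from a real
deployment:

# Rough sketch: one RBD namespace per tenant, via the Python bindings.
# Pool "rbd" and namespace "tenant-a" are made-up placeholder names.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('rbd')             # pool name is an assumption
    r = rbd.RBD()
    if not r.namespace_exists(ioctx, 'tenant-a'):
        r.namespace_create(ioctx, 'tenant-a')     # one namespace per tenant
    ioctx.set_namespace('tenant-a')               # scope subsequent image ops to the tenant
    r.create(ioctx, 'vol0', 10 * 1024**3)         # 10 GiB image inside the namespace
    print(r.list(ioctx))                          # lists only images in 'tenant-a'
finally:
    cluster.shutdown()

To actually enforce the isolation you would also restrict each client key to its namespace with
a cephx cap (something like osd 'profile rbd pool=rbd namespace=tenant-a'); I'd treat that
syntax as an assumption and double-check it against the docs for your release.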
Thoughts anyone?
I don't have super recent results, but we do have some test data from last year looking at
kernel rbd, rbd-nbd, rbd+tcmu, fuse, etc.:
https://docs.google.com/spreadsheets/d/1oJZ036QDbJQgv2gXts1oKKhMOKXrOI2XLTk…
Generally speaking, going through the tcmu layer was slower than kernel rbd or librbd
directly (sometimes by quite a bit!). There was also more client-side CPU usage per unit
of performance (which makes sense, since there's additional work being done). You
may be able to get some of that performance back with more clients, as I do remember there
being some issues with iodepth and tcmu. The only setup I remember being slower at
the time, though, was rbd-fuse, which I don't think is even really maintained.
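If anyone wants to poke at the iodepth angle themselves, below is a rough sketch (not the
harness behind the spreadsheet above) that drives fio against a kernel-rbd mapped device and
an iSCSI-attached LUN at a few queue depths; the device paths /dev/rbd0 and
/dev/mapper/mpatha are placeholders for whatever rbd map / multipath gives you:

# Rough comparison sketch, not the harness used for the linked results.
# The device paths are placeholders; point them at your own devices.
import json
import subprocess

DEVICES = {
    'krbd':  '/dev/rbd0',             # device from `rbd map` (assumption)
    'iscsi': '/dev/mapper/mpatha',    # multipath device for the iSCSI LUN (assumption)
}

def fio_iops(device, iodepth):
    """Run a short 4k random-read fio job and return the IOPS it reports."""
    cmd = [
        'fio', '--name=probe', '--filename=' + device,
        '--ioengine=libaio', '--direct=1', '--rw=randread',
        '--bs=4k', '--iodepth=' + str(iodepth),
        '--runtime=30', '--time_based', '--readonly',
        '--output-format=json',
    ]
    out = subprocess.run(cmd, capture_output=True, check=True, text=True).stdout
    return json.loads(out)['jobs'][0]['read']['iops']

for name, dev in DEVICES.items():
    for depth in (1, 4, 16, 64):
        print('%-6s iodepth=%-3d iops=%.0f' % (name, depth, fio_iops(dev, depth)))

The point is just to see whether the gap between the two paths shrinks as queue depth (or the
number of clients) goes up, which is where I remember the tcmu/iodepth issues showing up.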
Mark
On 10/5/20 4:43 PM, DHilsbos(a)performair.com wrote:
All;
I've finally gotten around to setting up iSCSI gateways on my primary production
cluster, and performance is terrible.
We're talking 1/4 to 1/3 the performance of our current solution.
I see no evidence of network congestion on any involved network link. I see no evidence of
CPU or memory being a problem on any involved server (MON / OSD / gateway / client).
What can I look at to tune this, preferably on the iSCSI gateways?
Thank you,
Dominic L. Hilsbos, MBA
Director - Information Technology
Perform Air International, Inc.
DHilsbos(a)PerformAir.com
www.PerformAir.com
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io