Thanks for your valuable answer about the write cache!
For the object gateway, the performance figure was obtained with `swift-bench -t 64`, which
uses 64 concurrent threads. Would the radosgw and HTTP overhead really be so
significant (94.5 MB/s down to 26 MB/s for cluster1) even when multiple threads are
used? Thanks in advance!
On Wed, Feb 5, 2020, 11:33 PM Janne Johansson <icepic.dz(a)gmail.com> wrote:
Den ons 5 feb. 2020 kl 16:19 skrev quexian da
<daquexian566(a)gmail.com>:
Thanks for your valuable answer!
Is the write cache specific to ceph? Could you please provide some links
to the documentation about the write cache? Thanks!
It is all the possible caches used by Ceph, by the device driver, the
filesystem (in filestore+xfs), the controllers (emulated or real), and the
hard disk electronics, i.e. anything between the benchmark software and the
spinning disk's write head (or not-so-spinning, on SSDs).
Do you have any idea about the slow OSS speed? Is it normal that the
write performance of the object gateway is slower than that of the RADOS cluster?
Thanks in advance!
The object gateway (be it Swift or S3) goes over something that looks like
HTTP, so it will almost certainly have longer turnaround times and hence
slower speeds for single streams.
You may make up for part of that overhead by running many streams in
parallel and counting the sum of the transfers, but it is no big surprise
that individual writes get slower when they have to pass through an external box
(the radosgw) over HTTPS instead of being written directly to the storage.
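To see why per-request turnaround time hurts single streams more than aggregate throughput, here is a rough back-of-envelope model. All numbers (object size, raw backend speed, the per-request gateway overhead, the backend cap) are illustrative assumptions, not measurements from either cluster:

```python
# Toy model: each stream issues requests serially, and every request pays a
# fixed turnaround overhead (HTTP + gateway latency) on top of the raw
# transfer time. Aggregate throughput is capped by the backend's capacity.
# All figures below are made up for illustration.

def aggregate_mb_s(threads, object_mb, raw_mb_s, overhead_s, cap_mb_s):
    """Aggregate throughput (MB/s) for `threads` serial request streams."""
    transfer_s = object_mb / raw_mb_s          # time to move the bytes
    per_request_s = transfer_s + overhead_s    # plus fixed turnaround cost
    per_stream_mb_s = object_mb / per_request_s
    return min(threads * per_stream_mb_s, cap_mb_s)

# Writing directly to the storage: almost no extra per-request overhead.
direct = aggregate_mb_s(threads=1, object_mb=4, raw_mb_s=100,
                        overhead_s=0.002, cap_mb_s=100)
# Via a gateway: say 10 ms of HTTP + radosgw latency per request.
gateway1 = aggregate_mb_s(threads=1, object_mb=4, raw_mb_s=100,
                          overhead_s=0.010, cap_mb_s=100)   # 80.0 MB/s
# Many concurrent streams hide much of that per-request latency.
gateway64 = aggregate_mb_s(threads=64, object_mb=4, raw_mb_s=100,
                           overhead_s=0.010, cap_mb_s=100)

print(direct, gateway1, gateway64)
```

In this simplified model the overhead fully disappears at high concurrency, so a drop like 94.5 MB/s to 26 MB/s at 64 threads would point at costs this sketch ignores: gateway CPU, TLS, request parsing, or contention inside radosgw itself rather than pure round-trip latency.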
--
May the most significant bit of your life be positive.