The read IOPS in "normal" operation were around 1 with
bluefs_buffered_io=false, and around 2 with true. So this is slightly
higher, but far from being a problem.
While snapshot trimming the difference is enormous:
with false: around 200
with true: around 10
Scrubbing read IOPS do not appear to be affected; they are around 100.
I'm using librados to access my objects, so I don't know whether this
would be any different with rgw.
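For anyone wanting to reproduce the comparison: a minimal sketch of how
the option can be toggled and checked at runtime via the ceph CLI
(assuming a Nautilus-or-later cluster with the centralized config
database; OSDs may need a restart for the change to fully take effect):

```shell
# Set bluefs_buffered_io for all OSDs in the centralized config database
ceph config set osd bluefs_buffered_io true

# Verify the value a running OSD actually uses (osd.0 as an example)
ceph tell osd.0 config get bluefs_buffered_io

# Watch disk read IOPS on the OSD host while snapshot trimming runs,
# e.g. with iostat from sysstat (1-second interval)
iostat -x 1
```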
On Fri, 7 Aug 2020 08:08:40 -0500
Mark Nelson <mnelson(a)redhat.com> wrote:
It's quite possible that the issue is really about rocksdb living on
top of bluefs with bluefs_buffered_io and rgw causing a ton of OMAP
traffic. rgw is the only case so far where the issue has shown up,
but it was significant enough that we didn't feel like we could leave
bluefs_buffered_io enabled. In your case with a 14GB target per OSD,
do you still see significantly increased disk reads with