Manuel, thank you for your input.
This is actually huge, and the problem is exactly that.
On a side note, I have observed lower memory utilisation on the OSD
nodes since the update, and high throughput on the block.db devices
(100+ MB/s) that was not there before. Logically that meant some
operations that were previously done in memory were now hitting the
block device directly. I was digging through possible causes, but your
time-saving message arrived first.
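(For the record, this is roughly how I was watching it; the device
name is only an example from my setup, yours will differ:

  iostat -xm 5 sdb   # extended stats in MB/s, 5-second interval, one block.db device

The read/write MB/s columns on the db devices were far higher than
before the update.)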
Thank you!
Thu, 6 Aug 2020 at 14:56, Manuel Lausch <manuel.lausch(a)1und1.de>:
Hi,
I found the reason for this behavior change.
With 14.2.10 the default value of "bluefs_buffered_io" was changed from
true to false.
https://tracker.ceph.com/issues/44818
Configuring this back to true seems to solve my problems.
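For anyone wanting to try the same, setting it via the monitor config
database should work (a sketch; I have not checked whether the OSDs
pick this up at runtime or need a restart to apply it):

  ceph config set osd bluefs_buffered_io true
  ceph config get osd bluefs_buffered_io    # should now report true

Alternatively it can go into ceph.conf under the [osd] section as
bluefs_buffered_io = true.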
Regards
Manuel
On Wed, 5 Aug 2020 13:30:45 +0200
Manuel Lausch <manuel.lausch(a)1und1.de> wrote:
Hello Vladimir,
I just tested this on a single-node test cluster with 60 HDDs (3 of
them with BlueStore without a separate WAL and DB).
With 14.2.10, I see a lot of read IOPS on the BlueStore OSDs while
snaptrimming. With 14.2.9 this was not an issue.
I wonder if this would explain the huge number of slow ops on my big
test cluster (44 nodes, 1056 OSDs) while snaptrimming. I cannot test a
downgrade there because no packages of older releases are available
for CentOS 8.
Regards
Manuel