FWIW, when using rbd-mirror to migrate volumes between SATA SSD clusters, I found that these helped:
rbd_mirror_journal_max_fetch_bytes:
  section: "client"
  value: "33554432"
rbd_journal_max_payload_bytes:
  section: "client"
  value: "8388608"
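For anyone not driving this through a YAML-style override: assuming a release recent enough to have the centralized config database (Mimic or later), the equivalent should be something like

  ceph config set client rbd_journal_max_payload_bytes 8388608       # on the primary cluster
  ceph config set client rbd_mirror_journal_max_fetch_bytes 33554432 # on the cluster running rbd-mirror

(the per-cluster placement is just my understanding of which side each option affects; see below).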
Indeed, that's a good tweak. rbd_journal_max_payload_bytes applies to the
primary-side librbd client for the mirrored image, and pays off for I/O
workloads that routinely issue large (bigger than the 16KiB default),
sequential writes. The low default for rbd_mirror_journal_max_fetch_bytes,
on the other hand, was a compromise chosen to reduce the potential memory
footprint of the rbd-mirror daemon.
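To make the payload-bytes side concrete (my arithmetic, based on my understanding that librbd splits each write into journal entries capped at rbd_journal_max_payload_bytes): a single 4MiB sequential write becomes 256 journal entries at the default, but a single entry with the tweak, which is where the win for large-write workloads comes from.

  4 MiB / 16 KiB = 256 entries per write (default)
  4 MiB / 8 MiB  -> 1 entry per write (tweaked)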
Direct advice from you last year ;)
To extrapolate for those who haven't done much with rbd-mirror, or who find this thread in
the future: these settings worked well for me when migrating at most 2 active volumes at
once, volumes whose client activity I had no insight into. YMMV.
Setting these specific values when mirroring an entire pool could well be
doubleplusungood.
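Back-of-the-envelope, and purely an assumption on my part about worst-case
buffering (one max-size fetch in flight per image journal):

  33554432 bytes x 500 mirrored images ~= 16 GiB

of potential fetch-buffer space in rbd-mirror for a 500-image pool, which is
rather more than the same math at the stock default. Hence the caution above.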
— aad