Hi again!
I've found an open issue in the ZFS tracker with some useful information:
https://github.com/openzfs/zfs/issues/3324
For example, zfs_max_recordsize=4M helps a lot, since it lines up with
the default 4M RBD object size.
I was getting 30-50 MiB/s with a 50% read/write fio test; now I can
reach 200 MiB/s. That's good, but still not as good as I expected.
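For reference, the fio job I'm running is roughly like the one below;
the directory and the exact job parameters here are illustrative, not
my precise setup:

fio --name=zfs-mix --directory=/testpool --rw=randrw --rwmixread=50 \
    --bs=1M --size=8G --ioengine=libaio --iodepth=16 --numjobs=4 \
    --runtime=60 --time_based --group_reporting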
The RBD disk easily reaches 100% utilization and stays there even
after the write operation is done.
At the beginning I had 100+ pending async_write operations; with
these ZFS tunables that's mostly zero now, and utilization decreases
quickly.
options zfs zfs_vdev_max_active=40000
options zfs zfs_vdev_sync_read_max_active=100
options zfs zfs_vdev_sync_read_min_active=100
options zfs zfs_vdev_sync_write_max_active=100
options zfs zfs_vdev_sync_write_min_active=100
options zfs zfs_vdev_async_read_max_active=20000
options zfs zfs_vdev_async_read_min_active=10
options zfs zfs_vdev_async_write_max_active=20000
options zfs zfs_vdev_async_write_min_active=10
options zfs zfs_max_recordsize=4194304
options zfs zfs_vdev_aggregation_limit=4194304
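These go in /etc/modprobe.d/zfs.conf and need a module reload (or a
reboot) to take effect. Also, as far as I understand, zfs_max_recordsize
only raises the allowed ceiling; the dataset recordsize still has to be
set explicitly (pool name is just my example):

zfs set recordsize=4M testpool
zfs get recordsize testpool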
These tunings wrecked 4K performance, though. It's just terrible.
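By 4K speed I mean something like this (again, parameters are
illustrative):

fio --name=zfs-4k --directory=/testpool --rw=randrw --rwmixread=50 \
    --bs=4k --size=2G --ioengine=libaio --iodepth=32 --numjobs=4 \
    --runtime=60 --time_based --group_reporting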
I think it's possible to reach 500 MiB/s with more tuning on both
sides, RBD + ZFS. I've started with ZFS. Do you have any advice for
RBD tunings?
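On my side, since the pool sits on a mapped krbd device, I've been
looking at block-layer knobs like these (pool/image/device names are
examples, and queue_depth needs a reasonably recent kernel):

rbd map -o queue_depth=256 rbd_pool/zfs_image
echo 4096 > /sys/block/rbd0/queue/read_ahead_kb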
What do you think about a ZFS raid0 (striped) pool across 2 or 4 RBD disks?
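Something like this, with each device being a separately mapped RBD
image (names are examples):

zpool create testpool /dev/rbd0 /dev/rbd1 /dev/rbd2 /dev/rbd3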
mhnx <morphinwithyou(a)gmail.com> wrote on Sat, 5 Jun 2021 at 16:30:
>
> Hello.
>
> I'm using RBD disks for ZFS, and on top of ZFS I use NFS.
> I see 10-50 MB/s at most. When I use the RBD disk in a VM environment
> it can reach 1 GB/s RW, so there is nothing wrong with RBD or the
> cluster. I'm also sure about ZFS+NFS performance when ZFS is on a
> real device. But when they work together I can't get even 10% of that
> performance. I think something is wrong; it shouldn't be that bad.
>
> Ceph version: Nautilus 14.2.16
> RBD Data = EC Pool
> RBD Metadata = SSD replicated
> zpool create testpool /dev/rbd0
> exportfs *:/testpool
>
> Is there anyone who has tried it before?