> >
> >> Say I think my cephfs is slow when I rsync to it, slower than it
> >> used to be. First of all, I do not get why it reads so much data.
> >> I assume the file attributes need to come from the mds server, so
> >> the rsync backup should mostly cause writes, not?
> >>
> >
> >Are you running one or multiple MDS? I've seen cases where the
> >synchronization between the different MDSs slows down rsync.
>
> One.
>
> >The problem is that rsync creates and renames files a lot. When
> >doing this with small files it can be very heavy for the MDS.
> >
>
> Strange thing is that I did not have performance problems with
> Luminous; after upgrading to Nautilus and enabling snapshots on a
> different tree of the cephfs, rsync is taking 10 hours more.
> There is also another possibility: degrading performance on the
> source. However, it is impossible for me to verify this.
> I have increased the mds_cache_memory_limit from 8GB to 16GB, to see
> what that brings.
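For reference, a minimal sketch of how that change can be applied at
runtime, assuming a Nautilus cluster with the centralized config store
("ceph config"); the guard makes the snippet safe to run on a machine
without the ceph CLI, and 16 GiB in bytes works out to 17179869184:

```shell
# 16 GiB expressed in bytes (16 * 2^30):
LIMIT=$((16 * 1024 * 1024 * 1024))
echo "$LIMIT"    # prints 17179869184

# Apply and verify only if the ceph CLI is present (assumption:
# Nautilus-style centralized configuration via "ceph config"):
if command -v ceph >/dev/null 2>&1; then
  ceph config set mds mds_cache_memory_limit "$LIMIT"
  ceph config get mds mds_cache_memory_limit
fi
```

Note the value is bytes of cache, not an RSS cap; the MDS process will
use somewhat more memory than this limit.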
How many snapshots are there?
22 still. It is this kworker again; I filed bug
https://tracker.ceph.com/issues/44100?next_issue_id=44099
It looks like when I unmount and then mount again, the problem is
temporarily gone.
> >
> >> I think it started being slow after enabling snapshots on the
> >> file system.
> >>
> >> - how can I determine if mds_cache_memory_limit = 8000000000 is
> >>   still correct?
> >>
> >> - how can I test the mds performance from the command line, so I
> >>   can experiment with cpu power configurations, and see if this
> >>   brings a significant change?
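One way to approach the cache question from the command line is to
compare the MDS's actual cache usage against the configured limit via
the admin socket. A sketch, assuming admin-socket access on the MDS
host; the daemon name "mds.a" is illustrative, substitute your own:

```shell
# Inspect MDS cache usage vs. the configured limit (run on the host
# where the MDS daemon runs; "mds.a" is an illustrative daemon name).
if command -v ceph >/dev/null 2>&1; then
  ceph daemon mds.a cache status                        # current cache usage
  ceph daemon mds.a config get mds_cache_memory_limit  # configured limit
  ceph daemon mds.a perf dump mds_mem                  # inode/dentry counts, rss
fi
```

If the cache sits well below the limit during the rsync, raising the
limit further is unlikely to help.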
> >> _______________________________________________
> >> ceph-users mailing list -- ceph-users(a)ceph.io
> >> To unsubscribe send an email to ceph-users-leave(a)ceph.io