Thanks Samy, I will give this a try.
It would be helpful if there were some value that shows cache misses or
similar, so you have a more precise idea of how much you need to increase
the cache. I have now added a couple of GB; I'll see if it is being used
and whether it speeds things up.
PS. I have been looking at the mds with 'ceph daemonperf mds.a'
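To see whether the new limit is in effect and how full the cache is, I
also check the admin socket (I believe 'cache status' is available on
recent releases):

# ceph daemon mds.a config get mds_cache_memory_limit
# ceph daemon mds.a cache status

The first confirms the running daemon picked up the new limit; the
second reports the cache pool's current items and bytes, so you can see
how close it gets to the limit.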
-----Original Message-----
From: Samy Ascha [mailto:samy@xel.nl]
Sent: 11 February 2020 17:10
To: Marc Roos
Cc: ceph-users
Subject: Re: [ceph-users] cephfs slow, howto investigate and tune mds
configuration?
Say I think my cephfs is slow when I rsync to it, slower than it used
to be. First of all, I do not get why it reads so much data. I assume
the file attributes come from the mds server, so the rsync backup
should mostly cause writes, shouldn't it?
I think it started being slow after enabling snapshots on the file
system.
- how can I determine if mds_cache_memory_limit = 8000000000 is still
correct?
- how can I test the mds performance from the command line, so I can
experiment with CPU power configurations and see if this brings a
significant change?
Hi,
Incidentally, I was checking this in my CephFS cluster too, and I have
used this to monitor cache usage:
# while sleep 1; do \
    ceph daemon mds.your-mds perf dump | jq '.mds_mem.rss'; \
    ceph daemon mds.your-mds dump_mempools | jq -c '.mempool.by_pool.mds_co'; \
  done
You will need `jq` for this example, or you can filter the JSON however
you prefer.
This prints, once per second, the MDS's total resident memory
(mds_mem.rss) and the cache's item count and byte usage (mds_co).
I found this somewhere on the net. Basically, you can pull whatever you
need from those JSON stats to check whether your cache is being used
and is big enough, etc.
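As a rough follow-up sketch (still assuming your daemon is mds.your-mds
and jq is installed), you could compare the cache's byte usage against
the configured limit directly:

# used=$(ceph daemon mds.your-mds dump_mempools | jq '.mempool.by_pool.mds_co.bytes')
# limit=$(ceph daemon mds.your-mds config get mds_cache_memory_limit | jq -r '.mds_cache_memory_limit')
# echo "mds_co bytes: $used / limit: $limit"

If the usage stays close to the limit while clients are busy, a bigger
mds_cache_memory_limit may help; if it never gets near it, the cache
size is probably not your bottleneck.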
I'm no expert, and also still learning how to best monitor my CephFS
performance. This did give me some insight, though.
It's not a lot I have to offer, but since I got help on the list
recently, I thought I might as well share what small bits I can ;)
Samy