hello
many thanks for the time you are taking to help me with this.
I restarted one of the backups and now the space usage of the cephfs
metadata pool has gone from 17 GB to 70 GB, but the hints you gave me
do not seem to help here.
cephfs-metadata/mds1_openfiles.0 mtime 2021-04-06 18:27:08.000000, size 0
cephfs-metadata/mds0_openfiles.0 mtime 2021-04-06 18:27:10.000000, size 0
cephfs-metadata/mds2_openfiles.1 mtime 2021-04-06 06:31:00.000000, size 0
cephfs-metadata/mds0_openfiles.1 mtime 2021-04-06 06:31:02.000000, size 0
cephfs-metadata/mds2_openfiles.0 mtime 2021-04-06 18:27:08.000000, size 0
Hmm. As I am writing this email the metadata pool has gone from 70 GB
to 39 GB while the backup is still running.
I don't really get what is going on here ...
oau
On Tuesday, 6 April 2021 at 15:08 +0200, Burkhard Linke wrote:
Hi,
On 4/6/21 2:20 PM, Olivier AUDRY wrote:
hello
the backup has now been running for 3 hours and the cephfs metadata
pool has gone from 20 GB to 479 GB...
POOL             ID  STORED   OBJECTS  USED     %USED  MAX AVAIL
cephfs-metadata  12  479 GiB  642.26k  1.4 TiB  18.79    2.0 TiB
cephfs-data0     13  2.9 TiB    9.23M  9.4 TiB  10.67     26 TiB
Is that normal behaviour?
The MDS maintains a list of open files in the metadata pool. If your
backup is scanning a lot of files, and caps are not reclaimed by the
MDS, this list will become large.
The corresponding objects are called 'mds<rank>_openfiles.<chunk>',
e.g. mds0_openfiles.0, mds0_openfiles.1 etc. You can check the size of
these objects with the rados command.
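A minimal sketch of how to do that, assuming your metadata pool is
named cephfs-metadata (as in your output; adjust the pool name if
yours differs):

  # list the open-files objects kept by the MDS ranks
  rados -p cephfs-metadata ls | grep openfiles

  # print mtime and size for one of them
  rados -p cephfs-metadata stat mds0_openfiles.0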
If this is the reason for the large pool, I would recommend
restricting the number of caps per client, otherwise you might run
into out-of-memory problems if the MDS is restarted during the backup.
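One way to do that (a sketch only; the exact option name, default and
a sensible value depend on your Ceph release) is the
mds_max_caps_per_client setting, for example:

  # limit the number of caps a single client may hold
  # (recent releases default to 1048576)
  ceph config set mds mds_max_caps_per_client 500000

The backup client will then have its oldest caps recalled once it hits
that limit instead of accumulating them for the whole run.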
Regards,
Burkhard
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-leave@ceph.io