I think there is something wrong with the cephfs_data pool.
I created a new pool "cephfs_data2" and copied the data from "cephfs_data" to "cephfs_data2" with this command:

$ rados cppool cephfs_data cephfs_data2

$ ceph df detail
RAW STORAGE:
    CLASS     SIZE        AVAIL       USED        RAW USED     %RAW USED
    hdd       7.8 TiB     7.4 TiB     390 GiB      407 GiB          5.11
    TOTAL     7.8 TiB     7.4 TiB     390 GiB      407 GiB          5.11

POOLS:
    POOL                          ID     STORED      OBJECTS     USED        %USED     MAX AVAIL     QUOTA OBJECTS     QUOTA BYTES     DIRTY      USED COMPR     UNDER COMPR
    cephfs_data                    6      30 GiB       2.52M      61 GiB      1.02       2.9 TiB     N/A               N/A              2.52M            0 B             0 B
    cephfs_data2                  20      30 GiB      11.06k      61 GiB      1.02       2.9 TiB     N/A               N/A             11.06k            0 B             0 B
    cephfs_metadata                7     9.8 MiB         379      20 MiB         0       2.9 TiB     N/A               N/A                379            0 B             0 B

In the new pool the stored amount is also 30 GiB, but the object count and the dirty count are significantly smaller.

I think the "cephfs_data" pool contains something like "orphaned" objects. But how can I clean up that pool?
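
Before deleting anything I want to understand what the extra objects actually
are. Here is a rough sketch of how I am inspecting them (purely diagnostic,
not a cleanup procedure; the inode 10000000abc below is only a made-up
example):

# List every object in the old pool across all RADOS namespaces and count
# how many objects each CephFS inode owns. Data objects are named
# <inode-hex>.<block-hex>, so the part before the dot identifies the file.
$ rados -p cephfs_data ls --all | awk '{print $NF}' | cut -d. -f1 | sort | uniq -c | sort -rn | head

# For a suspicious inode, dump the "parent" backtrace xattr of its first
# object to see which path it claims to belong to (add -N <namespace> if
# the object lives in a RADOS namespace).
$ rados -p cephfs_data getxattr 10000000abc.00000000 parent > /tmp/backtrace.bin
$ ceph-dencoder type inode_backtrace_t import /tmp/backtrace.bin decode dump_json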


On 14.01.20 at 11:15, Florian Pritz wrote:
Hi,

When we tried putting some load on our test cephfs setup by restoring a
backup in artifactory, we eventually ran out of space (around 95% used
in `df` = 3.5TB) which caused artifactory to abort the restore and clean
up. However, while a simple `find` no longer shows the files, `df` still
claims that we have around 2.1TB of data on the cephfs. `df -i` also
shows 2.4M used inodes. Running `du -sh` on a top-level mountpoint reports
31G used, which matches the data that is actually still present and expected
to be there.

Consequently, we also get the following warning:

MANY_OBJECTS_PER_PG 1 pools have many more objects per pg than average
    pool cephfs_data objects per pg (38711) is more than 231.802 times cluster average (167)

We are running ceph 14.2.5.

We have snapshots enabled on cephfs, but there are currently no active
snapshots listed by `ceph daemon mds.$hostname dump snaps --server` (see
below). I can't say for sure if we created snapshots during the backup
restore.

{
    "last_snap": 39,
    "last_created": 38,
    "last_destroyed": 39,
    "pending_noop": [],
    "snaps": [],
    "need_to_purge": {},
    "pending_update": [],
    "pending_destroy": []
}

We only have a single CephFS.

We use the pool_namespace xattr for our various directory trees on the
cephfs.
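
For reference, the layout xattrs can be inspected and set from a client mount
like this (the mount point and namespace name below are only examples):

$ getfattr -n ceph.dir.layout.pool_namespace /mnt/cephfs/some-tree
$ setfattr -n ceph.dir.layout.pool_namespace -v example-ns /mnt/cephfs/some-tree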

`ceph df` shows:

POOL         ID STORED   OBJECTS   USED    %USED     MAX AVAIL
cephfs_data  6  2.1 TiB  2.48M     2.1 TiB 24.97       3.1 TiB

`ceph daemon mds.$hostname perf dump | grep stray` shows:

"num_strays": 0,
"num_strays_delayed": 0,
"num_strays_enqueuing": 0,
"strays_created": 5097138,
"strays_enqueued": 5097138,
"strays_reintegrated": 0,
"strays_migrated": 0,
`rados -p cephfs_data df` shows:

POOL_NAME      USED OBJECTS CLONES  COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED   RD_OPS      RD   WR_OPS     WR USED COMPR UNDER COMPR
cephfs_data 2.1 TiB 2477540      0 4955080                  0       0        0 10699626 6.9 TiB 86911076 35 TiB        0 B         0 B

total_objects    29718
total_used       329 GiB
total_avail      7.5 TiB
total_space      7.8 TiB

When I combine the usage and the free space shown by `df`, we would exceed
our cluster size. Our test cluster currently has 7.8TB of total space with a
replication size of 2 for all pools. With 2.1TB "used" on the cephfs
according to `df` plus 3.1TB shown as "free", I get 5.2TB of total size.
This would mean >10TB of raw data once replication is accounted for, which
clearly can't fit on a cluster with only 7.8TB of capacity.
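
Spelled out, the numbers reported by `df` do not add up:

  (2.1 TiB used + 3.1 TiB MAX AVAIL) * 2 replicas = 10.4 TiB raw
  total cluster capacity                          =  7.8 TiB raw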

Do you have any ideas why we see so many objects and so much reported
usage? Is there any way to fix this without recreating the cephfs?

Florian


_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-leave@ceph.io