We have an inexplicable situation.
We have a Ceph cluster on 14.2.4: about 14 nodes with 12 disks (4 TB) in each node.
But the command ceph df gives us the following report:
# ceph df
RAW STORAGE:
    CLASS     SIZE        AVAIL       USED        RAW USED     %RAW USED
    hdd       611 TiB     155 TiB     455 TiB     456 TiB          74.60
    TOTAL     611 TiB     155 TiB     455 TiB     456 TiB          74.60

POOLS:
    POOL            ID     STORED      OBJECTS     USED        %USED     MAX AVAIL
    poolera01       20     238 TiB      84.59M     362 TiB     76.39        75 TiB
    poolera01md     21      17 GiB      58.33k      17 GiB      0.01        37 TiB
    poolera02dt     25      71 TiB      24.56M      92 TiB     45.02        90 TiB
    poolera02md     26     749 MiB      42.23k     1.2 GiB         0        37 TiB
poolera01 - erasure-coded pool, 4+2
poolera02dt - erasure-coded pool, 8+2
poolera01md and poolera02md - both replicated, size 3
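As a sanity check on the table above, the USED column roughly matches STORED times the EC overhead factor (k+m)/k for each data pool (a quick sketch using the figures from the ceph df output; the small remaining gap is presumably allocation overhead):

```python
def ec_used(stored_tib, k, m):
    """Raw usage implied by an erasure-coded pool: stored data times (k + m) / k."""
    return stored_tib * (k + m) / k

# Figures taken from the ceph df output above:
print(ec_used(238, 4, 2))  # poolera01, EC 4+2 -> 357.0 TiB (ceph df reports 362 TiB USED)
print(ec_used(71, 8, 2))   # poolera02dt, EC 8+2 -> 88.75 TiB (ceph df reports 92 TiB USED)
```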
We are using two CephFS filesystems.
It seems that our pools can only utilize about 111 TiB more raw capacity. But we have 155 TiB AVAIL!
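That ~111 TiB estimate comes from converting MAX AVAIL back to raw capacity (a rough sketch, assuming MAX AVAIL is reported in usable, i.e. post-overhead, terms, which is how ceph df presents it):

```python
def raw_avail(max_avail_tib, k, m):
    """Raw capacity implied by a pool's MAX AVAIL, for an EC profile k+m."""
    return max_avail_tib * (k + m) / k

# MAX AVAIL figures from the ceph df output above:
print(raw_avail(75, 4, 2))  # poolera01:   75 TiB -> 112.5 TiB raw
print(raw_avail(90, 8, 2))  # poolera02dt: 90 TiB -> 112.5 TiB raw
# Both imply only ~112 TiB of raw headroom, well short of the 155 TiB AVAIL.
```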
We have the balancer enabled with mode upmap:
# ceph balancer status
{
    "active": true,
    "plans": [],
    "mode": "upmap"
}
Can someone explain why ceph df shows less MAX AVAIL than should be possible given the 155 TiB AVAIL?