Multiple people have posted to this mailing list with this exact problem,
and presumably others have hit it as well, yet the developers apparently
don't consider it worth even a warning in the documentation. For all the
good that Ceph does, this issue is treated with oddly little urgency.
In short: Ceph effectively doesn't support anything besides a 4k block
size for EC pools.
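
The knob behind this is BlueStore's allocation unit. If you want to see
what your cluster is set to, something like the following should work on
recent releases (standard ceph CLI; osd.0 is just a placeholder, and note
the value is frozen at OSD creation time, so changing the config only
affects OSDs built afterwards):

    # cluster-wide configured default for new HDD OSDs
    ceph config get osd bluestore_min_alloc_size_hdd

    # what one running OSD has configured (via its admin socket)
    ceph daemon osd.0 config get bluestore_min_alloc_size_hdd

On spinning-disk OSDs created under the old defaults this is 64 KiB,
which is where the padding described below comes from.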
On Thu, Sep 10, 2020 at 4:44 AM Frank Schilder <frans(a)dtu.dk> wrote:
We might have the same problem. EC 6+2 on a pool for RBD images on
spindles. Please see the earlier thread "mimic: much more raw used than
reported". In our case, this seems to be a problem exclusively for RBD
workloads and here, in particular, Windows VMs. I see no amplification at
all on our ceph fs pool.
Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
________________________________________
From: norman <norman.kern(a)gmx.com>
Sent: 10 September 2020 08:34:42
To: ceph-users(a)ceph.io
Subject: [ceph-users] Re: The confusing output of ceph df command
Has anyone else met the same problem? We chose EC instead of replica to
save space, but now it's worse than replica...
On 9/9/2020 7:30 AM, norman kern wrote:
Hi,
I have changed most of the pools in my cluster from 3-replica to EC 4+2.
When I use the ceph df command to show the used capacity of the cluster:
RAW STORAGE:
    CLASS        SIZE       AVAIL      USED      RAW USED   %RAW USED
    hdd          1.8 PiB    788 TiB    1.0 PiB   1.0 PiB    57.22
    ssd          7.9 TiB    4.6 TiB    181 GiB   3.2 TiB    41.15
    ssd-cache    5.2 TiB    5.2 TiB    67 GiB    73 GiB     1.36
    TOTAL        1.8 PiB    798 TiB    1.0 PiB   1.0 PiB    56.99
POOLS:
    POOL                              ID    STORED     OBJECTS    USED       %USED    MAX AVAIL
    default-oss.rgw.control            1    0 B        8          0 B        0        1.3 TiB
    default-oss.rgw.meta               2    22 KiB     97         3.9 MiB    0        1.3 TiB
    default-oss.rgw.log                3    525 KiB    223        621 KiB    0        1.3 TiB
    default-oss.rgw.buckets.index      4    33 MiB     34         33 MiB     0        1.3 TiB
    default-oss.rgw.buckets.non-ec     5    1.6 MiB    48         3.8 MiB    0        1.3 TiB
    .rgw.root                          6    3.8 KiB    16         720 KiB    0        1.3 TiB
    default-oss.rgw.buckets.data       7    274 GiB    185.39k    450 GiB    0.14     212 TiB
    default-fs-metadata                8    488 GiB    153.10M    490 GiB    10.65    1.3 TiB
    default-fs-data0                   9    374 TiB    1.48G      939 TiB    74.71    212 TiB
    ...
For the 3-replica pools, USED = 3 * STORED, which is exactly right. But
for the EC 4+2 pool (default-fs-data0), USED is nowhere near
1.5 * STORED. Why...? :(
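
The usual explanation is BlueStore's allocation unit: each EC shard is
rounded up to bluestore_min_alloc_size, which defaulted to 64 KiB on HDD
OSDs in these releases. A rough back-of-the-envelope model, assuming that
64 KiB default and using the average object size from the numbers above
(real per-object sizes vary, so this is only illustrative):

    import math

    def ec_used_estimate(stored_bytes, objects, k, m, min_alloc):
        """Rough BlueStore usage for an EC k+m pool: each object is
        striped into k data shards, and each shard (plus each of the
        m parity shards) is rounded up to the min_alloc_size unit."""
        avg_obj = stored_bytes / objects
        shard = avg_obj / k                               # payload per shard
        alloc = math.ceil(shard / min_alloc) * min_alloc  # rounded up
        return objects * alloc * (k + m)

    TiB = 2 ** 40
    # default-fs-data0: 374 TiB stored in 1.48G objects, EC 4+2,
    # assumed 64 KiB bluestore_min_alloc_size_hdd
    used = ec_used_estimate(374 * TiB, 1.48e9, 4, 2, 64 * 1024)
    print(f"{used / TiB:.0f} TiB")  # ~1059 TiB vs the ideal 561 TiB (1.5x)

That lands in the same ballpark as the 939 TiB USED reported above, i.e.
an effective ~2.5x instead of the expected 1.5x. The padding hits hardest
when the per-shard payload (STORED/objects/k) is small relative to
min_alloc_size, which is exactly the case here with ~270 KiB average
objects split across 4 data shards.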
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io
P: 516.938.4100 x
E: steven.pine(a)webair.com