109 at 6/8 is 81.75, yes; the rest is some BlueStore overhead, I guess.
Istvan Szabo
Senior Infrastructure Engineer
---------------------------------------------------
Agoda Services Co., Ltd.
e: istvan.szabo(a)agoda.com
---------------------------------------------------
-----Original Message-----
From: Simon Sutter <ssutter(a)hosttech.ch>
Sent: Thursday, February 25, 2021 5:55 PM
To: ceph-users(a)ceph.io
Subject: [ceph-users] Erasure coded calculation
Hello everyone!
I'm trying to calculate the theoretical usable storage of a ceph cluster with erasure
coded pools.
I have 8 nodes and the profile for all data pools will be k=6 m=2.
If every node has 6 x 1 TB, wouldn't the calculation be like this:
RAW capacity: 8 nodes x 6 disks x 1 TB = 48 TB
Loss to m=2: 48 TB / 8 nodes x 2 (m) = 12 TB
EC capacity: 48 TB - 12 TB = 36 TB
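That arithmetic can be sketched in a few lines of Python, assuming the usable fraction of an erasure-coded pool is simply k/(k+m) of raw capacity (a simplification that ignores BlueStore and metadata overhead):

```python
# EC usable capacity as the data-chunk fraction of raw capacity.
# Simplified model: ignores BlueStore overhead, metadata pools,
# and the full_ratio safety margin.
def ec_usable(raw_tb: float, k: int, m: int) -> float:
    return raw_tb * k / (k + m)

print(ec_usable(48, 6, 2))  # 48 TB raw, k=6/m=2 -> 36.0 TB
```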
At the moment I have one cluster with 8 nodes and different disks than the sample (but
every node has the same amount of disks and the same sized disks).
The output of ceph df detail is:
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd 109 TiB 103 TiB 5.8 TiB 5.9 TiB 5.41
TOTAL 109 TiB 103 TiB 5.8 TiB 5.9 TiB 5.41
--- POOLS ---
POOL ID PGS STORED OBJECTS %USED MAX AVAIL
device_health_metrics 1 1 51 MiB 48 0 30 TiB
rep_data_fs 2 32 14 KiB 3.41k 0 30 TiB
rep_meta_fs 3 32 227 MiB 1.72k 0 30 TiB
ec_bkp1 4 32 4.2 TiB 1.10M 6.11 67 TiB
So ec_bkp1 uses 4.2 TiB and there are 67 TiB of usable storage free.
This means the total EC usable storage would be 71.2 TiB.
But calculating from the 109 TiB raw storage, shouldn't it be 81.75 TiB?
Are those 10 TiB just overhead (that would be a lot of overhead), or is my calculation
incorrect?
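For comparison, a small sketch applying the same k/(k+m) fraction to the 109 TiB cluster; the 81.75 TiB figure and the roughly 10 TiB gap to the observed numbers fall out directly (the gap would then be overhead rather than a calculation error):

```python
# k/(k+m) fraction of the reported 109 TiB raw capacity,
# compared against the numbers shown by `ceph df detail`.
raw_tib, k, m = 109, 6, 2
theoretical = raw_tib * k / (k + m)
observed = 4.2 + 67          # STORED in ec_bkp1 + MAX AVAIL
print(theoretical)           # 81.75
print(theoretical - observed)  # roughly 10.5 TiB unaccounted for
```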
And what if I want to expand the cluster in the first example above by three nodes with
6 x 2 TB each, i.e. not the same disk size as the others?
Will the calculation with the same EC profile still work the same way?
RAW capacity: 8 nodes x 6 disks x 1 TB + 3 nodes x 6 disks x 2 TB = 84 TB
Loss to m=2: 84 TB / 11 nodes x 2 (m) = 15.27 TB
EC capacity: 84 TB - 15.27 TB = 68.72 TB
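One caveat on the expansion math: dividing raw capacity by the node count only coincides with the chunk-fraction model when the node count equals k+m, as it happens to with 8 nodes. If the k/(k+m) fraction is the right model (a sketch, not authoritative), 11 mixed nodes would give:

```python
# Parity overhead is m/(k+m) of whatever is stored, independent of
# the node count, so the usable fraction stays k/(k+m) of raw.
def ec_usable(raw_tb: float, k: int, m: int) -> float:
    return raw_tb * k / (k + m)

raw = 8 * 6 * 1 + 3 * 6 * 2   # 48 TB + 36 TB = 84 TB
print(ec_usable(raw, 6, 2))   # 63.0 TB, not the ~68.7 TB above
```

With host as the failure domain and unevenly sized hosts, CRUSH may also be unable to fill the larger nodes completely, so the real figure could be lower still.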
Thanks in advance,
Simon