Hi Matthew,
The results of the commands are:
ceph df detail
--- RAW STORAGE ---
CLASS  SIZE     AVAIL   USED     RAW USED  %RAW USED
hdd    190 TiB  61 TiB  129 TiB  129 TiB   67.70
TOTAL  190 TiB  61 TiB  129 TiB  129 TiB   67.70

--- POOLS ---
POOL                   ID  PGS  STORED  (DATA)  (OMAP)   OBJECTS  USED     (DATA)   (OMAP)  %USED  MAX AVAIL  QUOTA OBJECTS  QUOTA BYTES  DIRTY   USED COMPR  UNDER COMPR
device_health_metrics   1    1  31 MiB  0 B     31 MiB   73       92 MiB   0 B      92 MiB  0      9.8 TiB    N/A            N/A          73      0 B         0 B
libvirt-pool            3  512  43 TiB  43 TiB  9.1 MiB  11.18M   128 TiB  128 TiB  27 MiB  81.21  9.8 TiB    N/A            N/A          11.18M  0 B         0 B
ceph osd pool ls detail
pool 1 'device_health_metrics' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 32288 flags hashpspool stripe_width 0 pg_num_min 1 application mgr_devicehealth
pool 3 'libvirt-pool' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 512 pgp_num 512 autoscale_mode on last_change 31905 lfor 0/28367/30002 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd
ceph balancer status
{
    "active": true,
    "last_optimize_duration": "0:00:00.003773",
    "last_optimize_started": "Sun Mar 14 06:28:22 2021",
    "mode": "upmap",
    "optimize_result": "Unable to find further optimization, or pool(s) pg_num is decreasing, or distribution is already perfect",
    "plans": []
}
We are using VMs with RBD disks, in this case 6 x 10T disks. Given that
the pool is replicated 3, it is going to run out of space. We are in the
process of upgrading the 3T disks to 8T, but if I understand things
correctly that still won't be enough space, so we may need to consider
changing to replication 2. Is that possible?
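As a rough sanity check (a sketch only, using the totals from the `ceph df detail` output above, and ignoring Ceph's nearfull/full ratios and per-OSD imbalance, so real usable space is somewhat lower):

```python
# Back-of-envelope capacity check using the figures from `ceph df detail`.
# Ignores nearfull/full ratios and per-OSD imbalance.
raw_tib = 190          # total raw capacity reported by `ceph df detail`
needed_tib = 6 * 10    # six 10 TiB RBD images

usable_at_3 = raw_tib / 3   # client-usable space with size=3 (~63 TiB)
usable_at_2 = raw_tib / 2   # client-usable space with size=2 (95 TiB)

print(f"need {needed_tib} TiB; size=3 gives ~{usable_at_3:.0f} TiB, "
      f"size=2 gives ~{usable_at_2:.0f} TiB")
```

Dropping a replicated pool to size 2 is mechanically possible (`ceph osd pool set libvirt-pool size 2`), but it halves redundancy: with only two copies, losing one OSD while the other is recovering can mean data loss, which is why size=2 is generally discouraged for RBD workloads.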
many thanks
Darrin
On 13/3/21 3:07 pm, Matthew H wrote:
Hey Darrin,
Can you provide the output of the following commands?
ceph df detail
ceph osd pool ls detail
ceph balancer status
Thanks so much,
------------------------------------------------------------------------
*From:* Darrin Hodges <darrin(a)catalyst-au.net>
*Sent:* Wednesday, March 10, 2021 8:41 PM
*To:* ceph-users(a)ceph.io <ceph-users(a)ceph.io>
*Subject:* [ceph-users] Some confusion around PG, OSD and balancing issue
Hi all,
Just looking for clarification around the relationship between PGs, OSDs
and balancing on a Ceph (Octopus) cluster. We have PG autoscaling on and
the balancer mode set to upmap. There are 2 pools: one is the default
device-health metrics pool with 1 PG, the other is the pool we use for
everything, with 512 PGs. There are 60 OSDs split across 4 hosts. The
OSD usage ranges between 39% and 68%:
* current cluster score 0.048908
ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP     META     AVAIL     %USE   VAR   PGS  STATUS
 0  hdd    2.74599  1.00000   2.7 TiB  1.3 TiB  1.3 TiB  2.3 MiB  2.3 GiB  1.5 TiB   47.08  0.86   19  up
 1  hdd    2.74550  1.00000   2.7 TiB  1.4 TiB  1.3 TiB  3.2 MiB  2.3 GiB  1.4 TiB   49.46  0.90   20  up
 2  hdd    2.74550  1.00000   2.7 TiB  1.4 TiB  1.4 TiB  2.8 MiB  2.4 GiB  1.3 TiB   51.81  0.95   21  up
 3  hdd    2.74550  1.00000   2.7 TiB  1.4 TiB  1.4 TiB  589 KiB  2.4 GiB  1.3 TiB   51.73  0.95   21  up
 4  hdd    2.74550  1.00000   2.7 TiB  1.6 TiB  1.6 TiB  2.2 MiB  2.7 GiB  1.1 TiB   59.05  1.08   24  up
 5  hdd    9.00000  1.00000   9.1 TiB  5.2 TiB  5.2 TiB  6.9 MiB  8.2 GiB  3.9 TiB   56.75  1.04   77  up
 6  hdd    2.74550  1.00000   2.7 TiB  1.5 TiB  1.5 TiB  1.9 MiB  2.8 GiB  1.3 TiB   54.21  0.99   22  up
 7  hdd    2.74550  1.00000   2.7 TiB  1.4 TiB  1.3 TiB  1.1 MiB  2.7 GiB  1.4 TiB   49.32  0.90   20  up
 8  hdd    2.74550  1.00000   2.7 TiB  1.4 TiB  1.3 TiB  1.1 MiB  2.3 GiB  1.4 TiB   49.36  0.90   20  up
 9  hdd    2.74550  1.00000   2.7 TiB  1.6 TiB  1.6 TiB  3.3 MiB  3.0 GiB  1.1 TiB   59.21  1.08   24  up
10  hdd    2.74550  1.00000   2.7 TiB  1.8 TiB  1.8 TiB  2.2 MiB  3.4 GiB  941 GiB   66.53  1.22   27  up
11  hdd    2.74550  1.00000   2.7 TiB  1.1 TiB  1.1 TiB  3.9 MiB  2.0 GiB  1.7 TiB   39.69  0.73   16  up
12  hdd    2.74550  1.00000   2.7 TiB  1.2 TiB  1.2 TiB  1.2 MiB  2.1 GiB  1.5 TiB   44.69  0.82   18  up
13  hdd    2.74550  1.00000   2.7 TiB  1.1 TiB  1.1 TiB  2.4 MiB  1.9 GiB  1.7 TiB   39.59  0.72   16  up
14  hdd    2.74550  1.00000   2.7 TiB  1.8 TiB  1.8 TiB  2.1 MiB  3.1 GiB  945 GiB   66.37  1.21   27  up
15  hdd    2.74599  1.00000   2.7 TiB  1.6 TiB  1.5 TiB  1.7 MiB  2.6 GiB  1.2 TiB   56.68  1.04   23  up
16  hdd    2.74599  1.00000   2.7 TiB  1.3 TiB  1.3 TiB  3.3 MiB  2.3 GiB  1.5 TiB   46.76  0.86   19  up
17  hdd    2.74599  0.95001   2.7 TiB  1.8 TiB  1.8 TiB  2.8 MiB  3.1 GiB  953 GiB   66.12  1.21   26  up
18  hdd    2.74599  0.95001   2.7 TiB  1.8 TiB  1.7 TiB  1.5 MiB  3.0 GiB  1010 GiB  64.08  1.17   26  up
19  hdd    2.74599  1.00000   2.7 TiB  1.8 TiB  1.7 TiB  3.0 MiB  2.9 GiB  1016 GiB  63.88  1.17   26  up
20  hdd    9.00000  1.00000   9.1 TiB  4.7 TiB  4.7 TiB  28 MiB   7.5 GiB  4.4 TiB   51.47  0.94   70  up
21  hdd    2.74599  1.00000   2.7 TiB  1.2 TiB  1.2 TiB  2.8 MiB  2.2 GiB  1.5 TiB   44.43  0.81   18  up
22  hdd    2.74599  1.00000   2.7 TiB  1.4 TiB  1.4 TiB  653 KiB  2.4 GiB  1.3 TiB   51.89  0.95   21  up
23  hdd    2.74599  1.00000   2.7 TiB  1.6 TiB  1.6 TiB  2.3 MiB  2.7 GiB  1.1 TiB   59.26  1.08   24  up
24  hdd    2.74599  1.00000   2.7 TiB  1.6 TiB  1.5 TiB  3.1 MiB  3.0 GiB  1.2 TiB   56.67  1.04   23  up
25  hdd    2.74599  0.90002   2.7 TiB  1.7 TiB  1.7 TiB  5.3 MiB  4.0 GiB  1.1 TiB   61.47  1.12   25  up
26  hdd    2.74599  1.00000   2.7 TiB  1.4 TiB  1.4 TiB  1.5 MiB  2.4 GiB  1.3 TiB   51.82  0.95   21  up
27  hdd    2.74599  1.00000   2.7 TiB  1.3 TiB  1.3 TiB  1.1 MiB  2.2 GiB  1.5 TiB   46.93  0.86   19  up
28  hdd    2.74599  1.00000   2.7 TiB  1.3 TiB  1.3 TiB  2.6 MiB  2.2 GiB  1.5 TiB   46.88  0.86   19  up
29  hdd    2.74599  1.00000   2.7 TiB  1.4 TiB  1.3 TiB  557 KiB  2.6 GiB  1.4 TiB   49.37  0.90   20  up
45  hdd    2.74599  1.00000   2.7 TiB  1.6 TiB  1.6 TiB  2.3 MiB  2.7 GiB  1.1 TiB   59.18  1.08   24  up
46  hdd    2.74599  1.00000   2.7 TiB  1.4 TiB  1.4 TiB  28 MiB   2.5 GiB  1.3 TiB   51.76  0.95   22  up
47  hdd    2.74599  1.00000   2.7 TiB  1.6 TiB  1.6 TiB  977 KiB  3.1 GiB  1.1 TiB   59.07  1.08   24  up
48  hdd    2.74599  1.00000   2.7 TiB  1.3 TiB  1.3 TiB  625 KiB  2.6 GiB  1.5 TiB   46.86  0.86   19  up
49  hdd    2.74599  1.00000   2.7 TiB  1.7 TiB  1.7 TiB  1.9 MiB  3.2 GiB  1.1 TiB   61.68  1.13   25  up
50  hdd    2.74599  1.00000   2.7 TiB  1.7 TiB  1.7 TiB  1.0 MiB  2.7 GiB  1.1 TiB   61.53  1.13   25  up
51  hdd    2.74599  1.00000   2.7 TiB  1.2 TiB  1.1 TiB  2.5 MiB  2.3 GiB  1.6 TiB   41.88  0.77   17  up
52  hdd    2.74599  0.95001   2.7 TiB  1.8 TiB  1.8 TiB  1.8 MiB  3.8 GiB  942 GiB   66.49  1.22   27  up
53  hdd    9.00000  1.00000   9.1 TiB  4.3 TiB  4.3 TiB  6.2 MiB  6.8 GiB  4.8 TiB   47.46  0.87   64  up
54  hdd    2.74599  0.95001   2.7 TiB  1.8 TiB  1.7 TiB  1.3 MiB  2.9 GiB  1008 GiB  64.13  1.17   26  up
55  hdd    2.74599  1.00000   2.7 TiB  1.8 TiB  1.7 TiB  1.5 MiB  3.0 GiB  1014 GiB  63.95  1.17   26  up
56  hdd    2.74599  1.00000   2.7 TiB  1.8 TiB  1.7 TiB  1.3 MiB  2.8 GiB  1013 GiB  63.99  1.17   26  up
57  hdd    2.74599  1.00000   2.7 TiB  1.6 TiB  1.5 TiB  2.9 MiB  2.6 GiB  1.2 TiB   56.67  1.04   23  up
58  hdd    2.74599  1.00000   2.7 TiB  1.5 TiB  1.5 TiB  1.2 MiB  2.5 GiB  1.3 TiB   54.41  1.00   22  up
59  hdd    2.74599  0.95001   2.7 TiB  1.8 TiB  1.7 TiB  1.3 MiB  2.9 GiB  1015 GiB  63.92  1.17   26  up
30  hdd    2.74599  1.00000   2.7 TiB  1.9 TiB  1.9 TiB  1.6 MiB  3.1 GiB  878 GiB   68.79  1.26   28  up
31  hdd    2.74599  1.00000   2.7 TiB  1.3 TiB  1.3 TiB  2.3 MiB  2.2 GiB  1.5 TiB   47.03  0.86   19  up
32  hdd    2.74599  1.00000   2.7 TiB  1.8 TiB  1.8 TiB  3.2 MiB  3.3 GiB  944 GiB   66.45  1.22   27  up
33  hdd    2.74599  1.00000   2.7 TiB  1.1 TiB  1.1 TiB  3.1 MiB  1.9 GiB  1.7 TiB   39.88  0.73   16  up
34  hdd    2.74599  1.00000   2.7 TiB  1.9 TiB  1.9 TiB  28 MiB   3.0 GiB  874 GiB   68.93  1.26   29  up
35  hdd    2.74599  1.00000   2.7 TiB  1.3 TiB  1.3 TiB  1.3 MiB  2.2 GiB  1.5 TiB   47.01  0.86   19  up
36  hdd    2.74599  1.00000   2.7 TiB  1.8 TiB  1.8 TiB  1.6 MiB  3.0 GiB  947 GiB   66.31  1.21   27  up
37  hdd    2.74599  1.00000   2.7 TiB  1.6 TiB  1.6 TiB  1.8 MiB  2.9 GiB  1.1 TiB   58.96  1.08   24  up
38  hdd    2.74599  1.00000   2.7 TiB  1.9 TiB  1.9 TiB  2.8 MiB  3.4 GiB  877 GiB   68.82  1.26   28  up
39  hdd    2.74599  0.85004   2.7 TiB  1.6 TiB  1.6 TiB  2.4 MiB  3.5 GiB  1.1 TiB   59.20  1.08   24  up
40  hdd    2.74599  1.00000   2.7 TiB  1.4 TiB  1.3 TiB  1.5 MiB  2.3 GiB  1.4 TiB   49.35  0.90   20  up
41  hdd    2.74599  1.00000   2.7 TiB  1.4 TiB  1.4 TiB  2.3 MiB  2.4 GiB  1.3 TiB   51.73  0.95   21  up
42  hdd    9.00000  1.00000   9.1 TiB  4.5 TiB  4.5 TiB  2.1 MiB  7.2 GiB  4.6 TiB   49.76  0.91   67  up
43  hdd    2.74599  0.90002   2.7 TiB  1.6 TiB  1.6 TiB  2.9 MiB  3.2 GiB  1.1 TiB   59.17  1.08   24  up
44  hdd    2.74599  1.00000   2.7 TiB  1.2 TiB  1.2 TiB  4.0 MiB  2.2 GiB  1.5 TiB   44.34  0.81   18  up
54 of the OSDs are 3 TiB and four are 8 TiB - can this cause imbalance?
Or do I need to increase the PG count from 512? Or is this as good as it
gets?
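For what it's worth, mixed disk sizes alone shouldn't cause imbalance: CRUSH targets a PG count proportional to each OSD's weight. A quick sketch of the expected per-OSD PG counts, using the weights and PG totals from the listing above (treating the cluster as 56 OSDs of weight ~2.746 plus the four of weight 9.0, as read off the table):

```python
# Expected PGs per OSD is proportional to CRUSH weight.
# 513 PGs total (512 + 1), each replicated 3 times, across 60 OSDs.
pg_replicas = 513 * 3
total_weight = 56 * 2.746 + 4 * 9.0   # weights read from `ceph osd df`

per_weight_unit = pg_replicas / total_weight
small_osd_pgs = per_weight_unit * 2.746   # expected PGs on a small OSD
large_osd_pgs = per_weight_unit * 9.0     # expected PGs on a large OSD

print(round(small_osd_pgs), round(large_osd_pgs))  # ~22 and ~73
```

That roughly matches the listing (small OSDs carry 16-29 PGs, large ones 64-77). With only ~22 PGs expected per small OSD, a random swing of a few PGs moves %USE by several points, which is why a higher pg_num (more, smaller PGs per OSD) typically tightens the distribution.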
many thanks
Darrin
--
CONFIDENTIALITY NOTICE: This email is intended for the named
recipients only. It may contain privileged, confidential or copyright
information. If you are not the named recipients, any use, reliance
upon, disclosure or copying of this email or any attachments is
unauthorised. If you have received this email in error, please reply
via email or telephone +61 2 8004 5928.
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io