Hi Nathan,
Attached is the crushmap output.
Let me know if you find anything odd.
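In case it's useful, this is the usual way to dump and decompile the CRUSH map
(a sketch; run it from a node with the admin keyring, e.g. inside "cephadm shell",
and it assumes crushtool is available there; file names are arbitrary):

    # grab the compiled CRUSH map from the cluster
    ceph osd getcrushmap -o crushmap.bin
    # decompile it into readable text
    crushtool -d crushmap.bin -o crushmap.txt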
On Sat, Oct 24, 2020 at 6:47 PM Nathan Fish <lordcirth(a)gmail.com> wrote:
Can you post your crush map? Perhaps some OSDs are in
the wrong place.
On Sat, Oct 24, 2020 at 8:51 AM Amudhan P <amudhan83(a)gmail.com> wrote:
Hi,
I have created a test Ceph cluster with Ceph Octopus using cephadm.
The cluster's total RAW disk capacity is 262 TB, but it is only allowing 132 TB
to be used.
I have not set a quota for any of the pools. What could be the issue?
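In case it helps narrow this down, replication size, quotas, and per-pool usage
can be listed with the following (commands only, output omitted):

    ceph osd pool ls detail    # replicated size / min_size and any quotas per pool
    ceph df detail             # raw capacity vs per-pool STORED / MAX AVAIL

(If the pools use 2x replication, 262 TB raw would work out to roughly 131 TB of
usable space, which is in the same range as the figure above.)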
Output from ceph -s:
  cluster:
    id:     f8bc7682-0d11-11eb-a332-0cc47a5ec98a
    health: HEALTH_WARN
            clock skew detected on mon.strg-node3, mon.strg-node2
            2 backfillfull osd(s)
            4 pool(s) backfillfull
            1 pools have too few placement groups

  services:
    mon: 3 daemons, quorum strg-node1,strg-node3,strg-node2 (age 7m)
    mgr: strg-node3.jtacbn(active, since 7m), standbys: strg-node1.gtlvyv
    mds: cephfs-strg:1 {0=cephfs-strg.strg-node1.lhmeea=up:active} 1 up:standby
    osd: 48 osds: 48 up (since 7m), 48 in (since 5d)

  task status:
    scrub status:
        mds.cephfs-strg.strg-node1.lhmeea: idle

  data:
    pools:   4 pools, 289 pgs
    objects: 17.29M objects, 66 TiB
    usage:   132 TiB used, 130 TiB / 262 TiB avail
    pgs:     288 active+clean
             1   active+clean+scrubbing+deep
The mounted volume shows:
node1:/ 67T 66T 910G 99% /mnt/cephfs
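Also, since two OSDs are backfillfull, per-OSD utilisation can be checked with
the following (commands only; no output attached):

    ceph osd df tree    # per-OSD size, use %, and variance, grouped by the CRUSH hierarchy
    ceph osd tree       # where each OSD sits in the CRUSH map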
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io