All;
We're setting up our second cluster, on Ceph 14.2.4 (Nautilus), and we've run into a
weird issue: all of our OSDs are created with a size of 0 B. The weights are appropriate
for the sizes of the underlying drives, but ceph -s shows this:
  cluster:
    id:     <id>
    health: HEALTH_WARN
            Reduced data availability: 256 pgs inactive
            too few PGs per OSD (28 < min 30)

  services:
    mon: 3 daemons, quorum s700041,s700042,s700043 (age 4d)
    mgr: s700041(active, since 3d), standbys: s700042, s700043
    osd: 9 osds: 9 up (since 21m), 9 in (since 44m)

  data:
    pools:   1 pools, 256 pgs
    objects: 0 objects, 0 B
--> usage:   0 B used, 0 B / 0 B avail <-- (emphasis added)
    pgs:     100.000% pgs unknown
             256 unknown
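
For anyone wanting to reproduce the check, the per-OSD sizes and weights can be
cross-checked with the standard CLI (osd.0 below is just an example ID):

  ceph osd df tree      # per-OSD size, raw use, and CRUSH weight
  ceph osd crush tree   # CRUSH weights alone, to compare against drive sizes
  ceph osd metadata 0   # what a single OSD reports about its device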
Thoughts?
I have ceph-volume.log, and the log from one of the OSD daemons, though it looks like
the auth keys get printed to ceph-volume.log.
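Assuming those are the usual base64 cephx secrets (they start with "AQ"), something
like this sed should scrub them before the log gets posted:

  # redact anything that looks like a cephx key before sharing the log
  sed -E 's|AQ[A-Za-z0-9+/=]+|<key redacted>|g' ceph-volume.log > ceph-volume.redacted.log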
Thank you,
Dominic L. Hilsbos, MBA
Director - Information Technology
Perform Air International Inc.
DHilsbos@PerformAir.com
www.PerformAir.com