I don’t quite understand why that zap would not work, but here’s where I’d start.
1. cephadm check-host
* Run this on each of your hosts to make sure cephadm, podman and all other
prerequisites are installed and recognized
2. ceph orch ls
* This should show at least a mon, mgr, and osd spec deployed
3. ceph orch ls osd --export
* This will show the OSD placement service specifications that orchestrator uses to
identify devices to deploy as OSDs
4. ceph orch host ls
* This will list the hosts that have been added to orchestrator’s inventory, and
what labels are applied which correlate to the service placement labels
5. ceph log last cephadm
* This will show you what orchestrator has been trying to do, and how it may be
failing
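If it helps, the checks above can be gathered in one pass. A rough sketch, to be run from a node with the admin keyring (the host list is a placeholder for your own host names, and the ssh step assumes passwordless root access as cephadm normally sets up):

```shell
# Placeholder host list -- substitute your actual nodes.
HOSTS="host1 host2 host3 host4"

for h in $HOSTS; do
    echo "== cephadm check-host on $h =="
    ssh "$h" cephadm check-host     # verifies podman, systemd, chrony, etc.
done

ceph orch ls                        # deployed service specs (mon, mgr, osd, ...)
ceph orch ls osd --export           # OSD placement service specifications
ceph orch host ls                   # hosts and labels in orchestrator's inventory
ceph log last cephadm               # what orchestrator has been trying to do
```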
Also, it’s never un-helpful to have a look at “ceph -s” and “ceph health detail”,
particularly for anyone trying to help you without access to your systems.
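As for the zap failing with “Device path '/dev/sdb' not found on host”: two common causes are a stale device inventory in the orchestrator, and leftover partition/LVM signatures from the previous OS install hiding the disk from ceph-volume. A sketch of what I’d try on the host itself (destructive to /dev/sdb, so double-check the device name first):

```shell
# On the affected host, as root:
cephadm ceph-volume inventory       # what ceph-volume itself sees on this host

# If the disk shows up there but not in "ceph orch device ls",
# clear any leftover metadata from the old install:
wipefs --all /dev/sdb               # remove filesystem/RAID/LVM signatures
sgdisk --zap-all /dev/sdb           # wipe GPT and MBR partition tables

# Then force the orchestrator to rescan:
ceph orch device ls --refresh
```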
Best of luck,
Josh Beaman
From: Patrick Begou <Patrick.Begou(a)univ-grenoble-alpes.fr>
Date: Friday, May 12, 2023 at 10:45 AM
To: ceph-users <ceph-users(a)ceph.io>
Subject: [EXTERNAL] [ceph-users] [Pacific] ceph orch device ls do not returns any HDD
Hi everyone
I'm new to Ceph; a four-day training session in France with Octopus on
VMs convinced me to build my first cluster.
At the moment I have 4 identical old nodes for testing, each with 3 HDDs
and 2 network interfaces, running Alma Linux 8 (el8). I tried to replay
the training session but it failed, breaking the web interface because
podman 4.2 is not compatible with Octopus.
So I tried to deploy Pacific with the cephadm tool on my first node
(mostha1), which also lets me test an upgrade later.
dnf -y install
https://urldefense.com/v3/__https://download.ceph.com/rpm-16.2.13/el8/noarc…
monip=$(getent ahostsv4 mostha1 |head -n 1| awk '{ print $1 }')
cephadm bootstrap --mon-ip $monip --initial-dashboard-password xxxxx \
--initial-dashboard-user admceph \
--allow-fqdn-hostname --cluster-network 10.1.0.0/16
This was successful.
But running "ceph orch device ls" does not show any HDD, even though I
have /dev/sda (used by the OS), /dev/sdb and /dev/sdc.
The web interface shows a raw capacity which is an aggregate of the
sizes of the 3 HDDs for the node.
I've also tried to zap /dev/sdb, but cephadm does not see it:
[ceph: root@mostha1 /]# ceph orch device zap
mostha1.legi.grenoble-inp.fr /dev/sdb --force
Error EINVAL: Device path '/dev/sdb' not found on host
'mostha1.legi.grenoble-inp.fr'
On my first attempt with Octopus, I was able to list the available HDDs
with this command line. Before moving to Pacific, the OS on this node
was reinstalled from scratch.
Any advice for a Ceph beginner?
Thanks
Patrick
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io