Hello,
I have searched and found some references to how to set up an Inventory
file to accommodate OSD nodes with different hardware configurations. I
tried a few things with the INI form of the Inventory file, but I ended
up using ansible-inventory to convert it to YAML. Still, it's not
clear to me what the correct way is to differentiate between different
subgroups of hosts.
Example: I'm currently adding some new OSD nodes to a functioning
cluster. Although both the old nodes and the new nodes have the same
size and number of HDDs, they have different NVMe drives. Also, for
some reason the HDDs are enumerated starting from /dev/sdc on the old
nodes, but from /dev/sda on the new ones. Also, the new nodes will have
a different block_db_size.
Is there a straightforward way, other than duplicating the devices and
block_db_size values multiple times, to define specific settings once
for specific groups of hosts?
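In case a sketch helps: one common approach is to make the two hardware generations child groups of the osds group and attach the differing values as group-level vars, so devices and block_db_size are each written once per generation. The hostnames and sizes below are made up for illustration; verify the variable names against your ceph-ansible release:

```yaml
osds:
  children:
    osds_old:
      hosts:
        osd-old-[01:03]:
      vars:
        # Old nodes: HDDs enumerate starting at /dev/sdc
        devices:
          - /dev/sdc
          - /dev/sdd
        block_db_size: 64424509440
    osds_new:
      hosts:
        osd-new-[01:03]:
      vars:
        # New nodes: HDDs enumerate starting at /dev/sda, different DB size
        devices:
          - /dev/sda
          - /dev/sdb
        block_db_size: 32212254720
```

Anything not set at the group level still falls through to group_vars/osds.yml, so only the values that actually differ need to live in the child groups.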
Thanks.
-Dave
--
Dave Hall
Binghamton University
Hello,
This might be a more general Ansible question, but I will have other
Ceph-Ansible questions, so I thought I'd start here. If this is a
basic Ansible question, I'd be grateful for a link to a useful tutorial.
Question: If I understand Ansible correctly, at the far end of it all
ansible-playbook issues commands to hosts via SSH. I'm looking for a way
to have those commands dumped to a log file rather than actually issued
to the target hosts, especially the commands that would actually cause a
change on the host.
In other words, while the standard output during a Ceph ansible-playbook
run is interesting and useful, there's a lot of intermediate
information. OTOH, in the end a ceph-volume command is issued for an
OSD node. I'd like to be able to look at it and see if it looks right.
If not, I need to go back to my inventory file or my group-vars and make
an adjustment or fix an error.
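For what it's worth, a sketch of the generic Ansible knobs for this; note that check mode only works for modules that support it, and I can't promise ceph-ansible's playbooks behave fully in check mode, so treat this as an assumption to verify:

```shell
# Dry run: report what each task would change without executing it,
# and show diffs of any files that would be modified
ansible-playbook -i hosts site.yml --check --diff

# Capture the full run, including module arguments at high verbosity,
# into a log file for later inspection
ANSIBLE_LOG_PATH=/tmp/ceph-ansible.log ansible-playbook -i hosts site.yml -vvv
```

At -vvv verbosity the log includes the arguments passed to each module invocation, which is usually enough to see the ceph-volume command that would be (or was) issued on an OSD node.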
Thanks.
-Dave
--
Dave Hall
Binghamton University
I have a Ceph cluster with three nodes;
OS: Ubuntu 20.04.
Each of the three Ceph nodes has four disks to be used as Ceph OSDs:
sdb 480G SSD
sdc 4T HDD
sde 4T HDD
sdd 4T HDD
I want to use ceph-ansible to deploy the Ceph cluster and connect it to
OpenStack, but I have some issues with osds.yml.
I want: OpenStack data -------direct access---------> Ceph SSD ------> Ceph HDD.
What is the best way to configure osds.yml for this? Thanks.
links:
https://github.com/ceph/ceph-ansible/issues/2435
https://github.com/ceph/ceph-ansible/blob/master/group_vars/osds.yml.sample
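One possible sketch for group_vars/osds.yml, assuming the intent is BlueStore OSDs on the three HDDs with their block.db placed on the shared SSD (device names taken from the listing above; the osd_scenario variable only exists on older ceph-ansible branches, so check the osds.yml.sample linked for your release):

```yaml
osd_objectstore: bluestore
osd_scenario: non-collocated   # older branches only; newer ones drive ceph-volume lvm directly
devices:
  - /dev/sdc
  - /dev/sde
  - /dev/sdd
dedicated_devices:             # block.db for each device above lands on the SSD
  - /dev/sdb
  - /dev/sdb
  - /dev/sdb
```

Note this puts RocksDB/WAL on the SSD to speed up the HDD OSDs; it is not a full cache tier in front of the HDDs, which would be configured in Ceph itself rather than in osds.yml.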