All;
The overall issue has been resolved.
There were two major causes:
1. The keyring(s) were misplaced (they were not within /etc/ceph/).
2. The 'openstack-cinder-volume' service was not started/enabled.
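For anyone hitting the same symptom, here is a rough sketch of the check that would have caught the first cause (standard library only; the filenames follow the rbd_user / backup_ceph_user names in the cinder.conf below and are an assumption, so match them to your cluster):

```python
import os

def missing_keyrings(paths):
    """Return the keyring paths that do not exist on disk."""
    return [p for p in paths if not os.path.isfile(p)]

# Client names follow rbd_user / backup_ceph_user from the cinder.conf below;
# the exact filenames are an assumption -- match them to your cluster.
keyrings = [
    "/etc/ceph/ceph.client.cinder.keyring",
    "/etc/ceph/ceph.client.cinder-backup.keyring",
]

for path in missing_keyrings(keyrings):
    print(f"MISSING: {path}")
```

The second cause was addressed simply by enabling the service on the storage node, e.g. 'systemctl enable --now openstack-cinder-volume'.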
Thank you,
Stephen Self
IT Manager
sself(a)performair.com
463 South Hamilton Court
Gilbert, Arizona 85233
Phone: (480) 610-3500
Fax: (480) 610-3501
www.performair.com
-----Original Message-----
From: SSelf(a)performair.com [mailto:SSelf@performair.com]
Sent: Thursday, January 7, 2021 2:21 PM
To: ceph-users(a)ceph.io; openstack-discuss(a)lists.openstack.org
Subject: [ceph-users] [cinder] Cinder & Ceph Integration Error: No Valid Backend
All;
We're having problems with our OpenStack/Ceph integration. The versions we're
using are Ussuri and Nautilus.
When we try to create a volume, the volume is created, but its status is stuck at
'ERROR'.
This appears to be the most relevant line from the Cinder scheduler.log:
2021-01-07 14:00:38.473 140686 ERROR cinder.scheduler.flows.create_volume
[req-f86556b5-cb2e-4b2d-b556-ed07e632289d 824c26c133b34d8b8e84a7acabbe6f91
a983323b5ffc47e18660794cd9344869 - default default] Failed to run task
cinder.scheduler.flows.create_volume.ScheduleCreateVolumeTask;volume:create: No valid
backend was found. No weighed backends available: cinder.exception.NoValidBackend: No
valid backend was found. No weighed backends available
Here is the 'cinder.conf' from our Controller Node:
[DEFAULT]
# define own IP address
my_ip = 10.0.80.40
log_dir = /var/log/cinder
state_path = /var/lib/cinder
auth_strategy = keystone
enabled_backends = ceph
glance_api_version = 2
debug = true
# RabbitMQ connection info
transport_url = rabbit://openstack:<password>@10.0.80.40:5672
enable_v3_api = True
# MariaDB connection info
[database]
connection = mysql+pymysql://cinder:<password>@10.0.80.40/cinder
# Keystone auth info
[keystone_authtoken]
www_authenticate_uri = http://10.0.80.40:5000
auth_url = http://10.0.80.40:5000
memcached_servers = 10.0.80.40:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = <password>
[oslo_concurrency]
lock_path = $state_path/tmp
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = rbd_os_volumes
rbd_ceph_conf = /etc/ceph/463/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
rbd_user = cinder
rbd_exclusive_cinder_pool = true
backup_driver = cinder.backup.drivers.ceph
backup_ceph_conf = /etc/ceph/300/ceph.conf
backup_ceph_user = cinder-backup
backup_ceph_chunk_size = 134217728
backup_ceph_pool = rbd_os_backups
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true
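As a quick self-check (not necessarily the cause here, but a common source of "No valid backend"): every name listed in enabled_backends must have a matching section in cinder.conf. A minimal standard-library sketch, with the sample trimmed from the conf above:

```python
import configparser

def unconfigured_backends(conf_text):
    """Backends named in enabled_backends that have no matching section."""
    cfg = configparser.ConfigParser()
    cfg.read_string(conf_text)
    names = cfg["DEFAULT"].get("enabled_backends", "")
    backends = [b.strip() for b in names.split(",") if b.strip()]
    return [b for b in backends if not cfg.has_section(b)]

# Trimmed from the cinder.conf above:
sample = """
[DEFAULT]
enabled_backends = ceph
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
"""
print(unconfigured_backends(sample))  # an empty list means every backend has a section
```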
Does anyone have any ideas as to what is going wrong?
Thank you,
Stephen Self
IT Manager
Perform Air International
sself(a)performair.com
www.performair.com
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io