Which OpenStack version is this? With cephadm your Ceph version is at
least Octopus, but it might be an older OpenStack version, so the
backend can't parse the newer mon-mgr target and expects only mon.
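A quick way to verify is to compare the Ceph client bits inside the
Manila container with the cluster itself; a sketch, assuming kolla's
default container name manila_share and that ceph-common is present
in the image:

--------------------------
# On the OpenStack node (substitute podman for docker if needed):
docker exec manila_share ceph --version

# On a Ceph node, for comparison:
ceph versions
--------------------------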
Quoting Haitham Abdulaziz <H14m_(a)hotmail.com>:
I deployed kolla-ansible and cephadm on virtual machines (KVM).
My Ceph cluster runs on 3 VMs, each with 12 vCPUs and 24 GB of RAM;
I used cephadm to deploy Ceph.
ceph -s :
--------------------------
  cluster:
    id:     a0e5ad36-a54c-11ed-9aea-5254008c2a3e
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph0,ceph1,ceph2 (age 6h)
    mgr: ceph0.dzutak(active, since 24h), standbys: ceph1.aizuyc
    mds: 3/3 daemons up, 6 standby
    osd: 9 osds: 9 up (since 24h), 9 in (since 24h)

  data:
    volumes: 3/3 healthy
    pools:   9 pools, 257 pgs
    objects: 70 objects, 7.3 KiB
    usage:   76 MiB used, 780 GiB / 780 GiB avail
    pgs:     257 active+clean
--------------------------
My OpenStack deployment is AIO on a single node. Now I want to link
them together, so I started with Manila and native CephFS, thinking
it's the easiest, following this doc:
https://docs.openstack.org/manila/latest/admin/cephfs_driver.html#authorizi…
I created the user:
--------------------------
client.manila
key: AQC7ot9jfiDsIxAA57fb7S6bVMnr5IadsnukHQ==
caps: [mgr] allow rw
caps: [mon] allow r
caps: [osd] allow rw pool=ganesha_rados_store
--------------------------
and created a file system called manila.
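(For reference, a sketch of the CLI steps from the linked doc that
would produce a setup like the above; the extra osd cap on
ganesha_rados_store in the keyring is only needed for the NFS-Ganesha
flavor, not the native driver:)

--------------------------
# The native driver only needs mon r and mgr rw caps:
ceph auth get-or-create client.manila mon 'allow r' mgr 'allow rw'
# Create the file system (cephadm clusters support fs volumes):
ceph fs volume create manila
--------------------------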
My ceph.conf:
--------------------------
[global]
fsid = a0e5ad36-a54c-11ed-9aea-5254008c2a3e
mon_host = [v2:192.168.122.25:3300/0,v1:192.168.122.25:6789/0] [v2:192.168.122.115:3300/0,v1:192.168.122.115:6789/0] [v2:192.168.122.14:3300/0,v1:192.168.122.14:6789/0]
--------------------------
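(For context, a minimal native-CephFS backend section in manila.conf,
roughly following the linked doc; the backend name cephfsnative1 is
taken from the log below, and the file system name manila is assumed:)

--------------------------
[DEFAULT]
enabled_share_backends = cephfsnative1
enabled_share_protocols = CEPHFS

[cephfsnative1]
driver_handles_share_servers = False
share_backend_name = CEPHFSNATIVE1
share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
cephfs_conf_path = /etc/ceph/ceph.conf
cephfs_auth_id = manila
cephfs_filesystem_name = manila
--------------------------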
I moved the files to the OpenStack node and tried to connect them
together, but it didn't go well. Viewing the logs shows:
--------------------------
<AIO@cephfsnative1: manila.exception.ShareBackendException:
json_command failed - prefix=fs volume ls, argdict={'format':
'json'} - exception message: Bad target type 'mon-mgr'.
--------------------------
Where should I start to fix this issue?
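(A sanity check to start with: run the same command the driver issues,
as the manila client, from the OpenStack node; if the CLI succeeds but
the driver still fails, the mismatch is likely in the Ceph Python
libraries the backend loads. Keyring path assumed:)

--------------------------
# Assumes /etc/ceph/ceph.client.manila.keyring is present on the node:
ceph --id manila fs volume ls
--------------------------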