On Mon, 30 Mar 2020, Ml Ml wrote:
Hello List,
is this a bug?
root@ceph02:~# ceph cephadm generate-key
Error EINVAL: Traceback (most recent call last):
File "/usr/share/ceph/mgr/cephadm/module.py", line 1413, in _generate_key
with open(path, 'r') as f:
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmp4ejhr7wh/key'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/share/ceph/mgr/mgr_module.py", line 1153, in _handle_command
return self.handle_command(inbuf, cmd)
File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 110, in handle_command
return dispatch[cmd['prefix']].call(self, cmd, inbuf)
File "/usr/share/ceph/mgr/mgr_module.py", line 308, in call
return self.func(mgr, **kwargs)
File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 72, in <lambda>
wrapper_copy = lambda *l_args, **l_kwargs: wrapper(*l_args, **l_kwargs)
File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 63, in wrapper
return func(*args, **kwargs)
File "/usr/share/ceph/mgr/cephadm/module.py", line 1418, in _generate_key
os.unlink(path)
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmp4ejhr7wh/key'
Huh.. yeah, looks like it. My guess is there's a missing openssl
dependency. Turn up debugging (debug_mgr=20) and see if there is
anything helpful in the log?
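For what it's worth, the "During handling of the above exception, another exception occurred" pair in the traceback shows the real error being masked: the key file in the temp dir was never created (presumably the key-generation step failed), and the cleanup then calls os.unlink() on that same missing path, raising a second FileNotFoundError. A minimal sketch of the pattern and a defensive cleanup — names here are illustrative, not the actual cephadm module code:

```python
import contextlib
import os
import tempfile

def generate_key_sketch(run_keygen):
    # Illustrative stand-in for cephadm's _generate_key: run an external
    # key generator, read back the key it wrote, then clean up.
    tmp_dir = tempfile.mkdtemp()
    path = os.path.join(tmp_dir, 'key')
    try:
        run_keygen(path)  # may silently fail to create the file
        with open(path, 'r') as f:
            return f.read()
    finally:
        # Suppress FileNotFoundError during cleanup so a failed keygen
        # surfaces its own exception instead of being masked by a second
        # one from os.unlink(), as happens in the traceback above.
        with contextlib.suppress(FileNotFoundError):
            os.unlink(path)
        os.rmdir(tmp_dir)
```

With a guard like this, the original exception (why the key was never written) would reach the log instead of the secondary unlink failure.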
Note that this is a moot point if you run ceph-mgr in a container. If
you're converting to cephadm, you can adopt the mgr daemon (cephadm adopt
--style legacy --name mgr.whatever) and then retry the same command.
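Concretely, the two suggestions above look something like this; `mgr.ceph02` is a hypothetical daemon id, substitute the name of your actual mgr:

```shell
# Raise mgr log verbosity so the failing key-generation step shows up:
ceph config set mgr debug_mgr 20

# If converting a legacy deployment to cephadm, adopt the running mgr
# (daemon id "ceph02" is a guess from the hostname; use your own):
cephadm adopt --style legacy --name mgr.ceph02
```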
sage
>
> root@ceph02:~# dpkg -l |grep ceph
> ii  ceph-base                      15.2.0-1~bpo10+1  amd64  common ceph daemon libraries and management tools
> ii  ceph-common                    15.2.0-1~bpo10+1  amd64  common utilities to mount and interact with a ceph storage cluster
> ii  ceph-deploy                    2.0.1             all    Ceph-deploy is an easy to use configuration tool
> ii  ceph-mds                       15.2.0-1~bpo10+1  amd64  metadata server for the ceph distributed file system
> ii  ceph-mgr                       15.2.0-1~bpo10+1  amd64  manager for the ceph distributed storage system
> ii  ceph-mgr-cephadm               15.2.0-1~bpo10+1  all    cephadm orchestrator module for ceph-mgr
> ii  ceph-mgr-dashboard             15.2.0-1~bpo10+1  all    dashboard module for ceph-mgr
> ii  ceph-mgr-diskprediction-cloud  15.2.0-1~bpo10+1  all    diskprediction-cloud module for ceph-mgr
> ii  ceph-mgr-diskprediction-local  15.2.0-1~bpo10+1  all    diskprediction-local module for ceph-mgr
> ii  ceph-mgr-k8sevents             15.2.0-1~bpo10+1  all    kubernetes events module for ceph-mgr
> ii  ceph-mgr-modules-core          15.2.0-1~bpo10+1  all    ceph manager modules which are always enabled
> ii  ceph-mgr-rook                  15.2.0-1~bpo10+1  all    rook module for ceph-mgr
> ii  ceph-mon                       15.2.0-1~bpo10+1  amd64  monitor server for the ceph storage system
> ii  ceph-osd                       15.2.0-1~bpo10+1  amd64  OSD server for the ceph storage system
> ii  cephadm                        15.2.0-1~bpo10+1  amd64  cephadm utility to bootstrap ceph daemons with systemd and containers
> ii  libcephfs1                     10.2.11-2         amd64  Ceph distributed file system client library
> ii  libcephfs2                     15.2.0-1~bpo10+1  amd64  Ceph distributed file system client library
> ii  python-ceph-argparse           14.2.8-1          all    Python 2 utility libraries for Ceph CLI
> ii  python3-ceph-argparse          15.2.0-1~bpo10+1  all    Python 3 utility libraries for Ceph CLI
> ii  python3-ceph-common            15.2.0-1~bpo10+1  all    Python 3 utility libraries for Ceph
> ii  python3-cephfs                 15.2.0-1~bpo10+1  amd64  Python 3 libraries for the Ceph libcephfs library
> root@ceph02:~# cat /etc/debian_version
> 10.3
>
> Thanks,
> Michael
> _______________________________________________
> ceph-users mailing list -- ceph-users(a)ceph.io
> To unsubscribe send an email to ceph-users-leave(a)ceph.io