Not entirely sure about this... but after a bunch of cluster teardowns and rebuilds, I got rbds mapped.
It seems to me the biggest difference is that recently I was sticking to the web GUI
to create the pools
(and yes, I did check the enable-application = rbd checkbox!),
but this last time I went back to the old command line, including
rbd pool init testpool
So maybe there's something the command line is doing that the GUI SHOULD be
doing, but isn't.
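For reference, a minimal sketch of the command-line sequence I mean (the pool name "testpool" is just an example; these are the stock Ceph commands, and on recent releases the autoscaler picks the PG count):

```shell
# Create the pool, tag it for RBD use, and initialize it.
# "testpool" is an example name.
ceph osd pool create testpool
ceph osd pool application enable testpool rbd
rbd pool init testpool
```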
----- Original Message -----
From: "Philip Brown" <pbrown(a)medata.com>
To: "ceph-users" <ceph-users(a)ceph.io>
Sent: Tuesday, December 22, 2020 4:43:32 PM
Subject: after octopus cluster reinstall, rbd map fails with timeout
More banging on my prototype cluster, and ran into an odd problem.
It used to be that when I created an rbd device and then tried to map it, it would initially fail,
saying I had to disable some features.
Then I would just run the suggested disable line -- usually
rbd feature disable poolname/rbdname object-map fast-diff deep-flatten
-- and then I could map it fine.
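As a sketch, that old workflow looked something like this (pool and image names are placeholders; object-map, fast-diff, and deep-flatten are the image features the kernel rbd client typically can't handle):

```shell
# Create an image, then strip the features the kernel rbd client
# does not support before mapping. Names are placeholders.
rbd create testpool/zfs02 --size 10G
rbd feature disable testpool/zfs02 object-map fast-diff deep-flatten
rbd map testpool/zfs02
```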
But now, after the latest cluster recreation, when I try to map I just get:
# rbd map testpool/zfs02
rbd: sysfs write failed
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (110) Connection timed out
and there are no errors in the dmesg output.
If I try to disable those features anyway, I get:
librbd::Operations: one or more requested features are already disabled (22) Invalid argument
nothing in /var/log/ceph/cephadm.log either
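For completeness, a few standard commands that can help narrow down a map timeout like this (a sketch only; pool and image names as above):

```shell
# Sanity checks when "rbd map" times out. Names are examples.
rbd info testpool/zfs02                  # does the image exist, and which features are set?
ceph osd pool application get testpool   # is the pool tagged with the rbd application?
ceph -s                                  # are the monitors reachable and the cluster healthy?
```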
Any suggestions?
--
Philip Brown | Sr. Linux System Administrator | Medata, Inc.
5 Peters Canyon Rd Suite 250
Irvine CA 92606
Office 714.918.1310 | Fax 714.918.1325
pbrown(a)medata.com |
www.medata.com