In Nautilus 14.2.13, we have converted to the minimized ceph.conf file and are now storing our config in the monitors. Previously, radosgw got its keyring from /etc/ceph/ceph.client.radosgw.keyring. Now that ceph.conf has been minimized, the keyring setting was not carried over into the monitor k/v store under the radosgw client configuration options, and radosgw looks for the keyring in /var/lib/ceph/radosgw/ceph-radosgw.gw01/keyring instead.
Attempting to put the keyring location into the monitor configuration with "ceph config set ..." fails; it says the keyring value cannot be stored in the monitor config settings. Is there a way to store an alternate keyring location without putting it in ceph.conf? It's easy enough to work around the issue with a symlink or by copying the keyring to the new location, but I was wondering whether there is a way to specify the keyring location in the monitor k/v store.
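For reference, the commands involved look roughly like this (the gw01 instance name comes from our setup, and the exact client entity name is my assumption):

  # this is what gets rejected; the mons won't store the keyring option
  ceph config set client.radosgw.gw01 keyring /etc/ceph/ceph.client.radosgw.keyring

  # current workaround: link the existing keyring into the expected location
  mkdir -p /var/lib/ceph/radosgw/ceph-radosgw.gw01
  ln -s /etc/ceph/ceph.client.radosgw.keyring /var/lib/ceph/radosgw/ceph-radosgw.gw01/keyring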
thanks,
Wyllys Ingersoll
Hi,
We noticed that if one sets an osd crush weight using the command
ceph osd crush set $id $weight host=$host
it updates the osd's weight in the $host bucket, but does not update it in
the "class" bucket (${host}~hdd or ${host}~ssd), and as a result the old
weight is still used until one runs `ceph osd crush reweight-all` or makes
some other change that triggers a crushmap recalculation.
The same behaviour occurs with the `ceph osd crush reweight-subtree <name> <weight>`
command.
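To illustrate, the sequence looks roughly like this (the osd id, weight, and hostname are made up):

  # set a new crush weight for osd.12 under host node1
  ceph osd crush set osd.12 3.5 host=node1

  # the node1 bucket shows the new weight, but the shadow class
  # bucket (node1~hdd) still shows the old one
  ceph osd crush tree --show-shadow

  # only after a recalculation do the class buckets catch up
  ceph osd crush reweight-all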
Is this expected behavior, or should I report a bug to the tracker?
I would consider this OK if I knew a way to set a weight for a
"class" (xyz~hdd or xyz~ssd) bucket directly. When I try to use
${host}~ssd, the command complains about the invalid character '~' in
the bucket name.
--
Mykola Golub
I'm working on an F_SETLEASE implementation for kcephfs, and am hitting a
deadlock of sorts, due to a truncate triggering a cap revoke at an
inopportune time.
The issue is that truncates to a smaller size are always done via a
synchronous call to the MDS, whereas a truncate to a larger size is not if
Fx caps are held. That synchronous call causes the MDS to issue the client
a cap revoke for caps that the lease holds references on (Frw, in
particular).
The client code has been this way since its inception, and I haven't been
able to locate any rationale for it. Some questions about this:
1) Why doesn't the client ever buffer a truncate to a smaller size? It
seems like something that could be done without a synchronous MDS call
if we hold Fx caps.
2) The client setattr implementations set inode_drop values in the
MetaRequest, but as far as I can tell, those values end up being ignored
by the MDS. What purpose does inode_drop actually serve? Is this field
vestigial?
Thanks,
--
Jeff Layton <jlayton(a)redhat.com>
Hi Folks,
The weekly performance meeting is starting now! This week we have a
number of topics, including pg autoscaling, onode memory usage,
io_uring, and continued discussion of rocksdb and pglog.
Hope to see you there!
Etherpad:
https://pad.ceph.com/p/performance_weekly
Bluejeans:
https://bluejeans.com/908675367
Thanks,
Mark
This is the 13th backport release in the Nautilus series. This release fixes a
regression introduced in v14.2.12 and includes a few ceph-volume & RGW fixes.
We recommend users update to this release.
Notable Changes
---------------
* Fixed a regression that caused breakage in clusters that refer to ceph-mon
hosts using DNS names instead of IP addresses in the `mon_host` parameter in
`ceph.conf` (issue#47951); see the example below
* ceph-volume: the ``lvm batch`` subcommand received a major rewrite
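For illustration, a `mon_host` setting of the affected form looks roughly like
this (the hostnames are made up):

  [global]
  mon_host = mon1.example.com, mon2.example.com, mon3.example.com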
Changelog
---------
* ceph-volume: major batch refactor (pr#37522, Jan Fajerski)
* mgr/dashboard: Proper format iSCSI target portals (pr#37060, Volker Theile)
* rpm: move python-enum34 into rhel 7 conditional (pr#37747, Nathan Cutler)
* mon/MonMap: fix unconditional failure for init_with_hosts (pr#37816, Nathan Cutler, Patrick Donnelly)
* rgw: allow rgw-orphan-list to note when rados objects are in namespace (pr#37799, J. Eric Ivancich)
* rgw: fix setting of namespace in ordered and unordered bucket listing (pr#37798, J. Eric Ivancich)
--
Abhishek
Hi,
See this issue: https://tracker.ceph.com/issues/47951
PR for Nautilus: https://github.com/ceph/ceph/pull/37816
This breaks a lot of Nautilus deployments I know of and might cause problems
for many other users if they upgrade to .12.
I would say this fix is important enough to warrant a quick .13 release.
What do others say?
Wido