quincy 17.2.7: released!
* major 'dashboard v3' changes causing issues?
https://github.com/ceph/ceph/pull/54250 did not merge for 17.2.7
* planning a retrospective, with members of the dashboard team present,
  to discuss what kinds of changes should go into minor releases
reef 18.2.1:
* most PRs already tested/merged
* possibly start validation next week?
I am looking to create a new pool backed by a particular set of drives:
larger NVMe SSDs (Intel SSDPF2NV153TZ, 15 TB drives). In particular, I am
wondering about the best way to move these devices away from their current
pool and direct them to be used by the new pool. In this case, the
documentation suggests I would want to assign them to a new device class
and use a placement rule that targets that device class in the new pool.
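Roughly, the end state I have in mind looks like the sketch below. The
device-class name ('big_nvme'), rule name, pool name, and PG counts are
just placeholders I picked for illustration, not something from the docs:

    # Hypothetical end state: a CRUSH rule that targets only the new
    # device class, and a new pool created on that rule.
    ceph osd crush rule create-replicated big_nvme_rule default host big_nvme
    ceph osd pool create new_pool 128 128 replicated big_nvme_rule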
Currently the Ceph cluster has two device classes, 'hdd' and 'ssd', and the
larger 15 TB drives were automatically assigned to the 'ssd' device class,
which is in use by a different pool. The 'ssd' device class is used in a
placement rule targeting that class.
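If it helps, this is how I have been inspecting the current layout (output
omitted; the rule name is whatever `ceph osd crush rule ls` reports):

    # List the device classes known to the cluster
    ceph osd crush class ls
    # List the OSDs currently assigned to the 'ssd' class
    ceph osd crush class ls-osd ssd
    # Show the CRUSH rules, to see which one targets the 'ssd' class
    ceph osd crush rule ls
    ceph osd crush rule dump <rule-name>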
The documentation describes that I could set a device class for an OSD with
a command like:
`ceph osd crush set-device-class CLASS OSD_ID [OSD_ID ...]`
Class names can be arbitrary strings like 'big_nvme'. Before setting a new
device class on an OSD that already has an assigned device class, I should
first remove the existing class with `ceph osd crush rm-device-class osd.XX`.
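Put together, my reading of the docs is that the per-OSD sequence would be
something like the following (the OSD id and the 'big_nvme' class name are
placeholders; I have not run this yet):

    # Clear the automatically assigned class from one OSD...
    ceph osd crush rm-device-class osd.12
    # ...then assign it to the new class
    ceph osd crush set-device-class big_nvme osd.12
    # Confirm the change
    ceph osd crush class ls-osd big_nvme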
Can I proceed to directly remove these OSDs from their current device class
and assign them to a new device class? Should they be moved one by one? And
what is the safest way to protect the data in the existing pool that they
are currently mapped to?
Thanks,
Matt
--
Matt Larson, PhD
Madison, WI 53705 U.S.A.
In Ceph version 17.2.5, we observe a notable performance drop when executing list operations on non-existent objects ("negative" lists), especially for large buckets under heavy concurrent updates.
For context, initiating concurrent PUT/DELETE/LIST operations on 4 buckets, each containing ~10 million objects, initially yields satisfactory latencies. However, as updates continue to accumulate, the efficiency of the negative list operation diminishes notably. I suspect that over time, with many object insertions and deletions, the bucket index becomes fragmented. While this affects all operations, negative lookups appear to be particularly hampered, possibly because they necessitate a comprehensive scan of the fragmented index.
With the LIST arguments set to $prefix=xxx and $max-keys=2, are there any recommendations for tuning the Ceph configuration to improve the performance of negative list operations?
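For instance, would manual resharding of the bucket index, or a RocksDB compaction on the OSDs backing the index pool, be the right direction? Something along these lines (the bucket name, shard count, and OSD id are placeholders):

    # Check the current index shard count and per-shard object counts
    radosgw-admin bucket stats --bucket=mybucket
    radosgw-admin bucket limit check
    # Queue and run a manual reshard to spread the index over more shards
    radosgw-admin reshard add --bucket=mybucket --num-shards=211
    radosgw-admin reshard process
    # Compact the RocksDB instance on an OSD holding the index pool
    ceph tell osd.<id> compact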