Hi,
is there any way to fix this without a reboot?
[128632.995249] block nbd0: Possible stuck request 00000000b14a04af: control (read@2097152,4096B). Runtime 9540 seconds
[128663.718993] block nbd0: Possible stuck request 00000000b14a04af: control (read@2097152,4096B). Runtime 9570 seconds
[128694.434774] block nbd0: Possible stuck request 00000000b14a04af: control (read@2097152,4096B). Runtime 9600 seconds
[128725.154515] block nbd0: Possible stuck request 00000000b14a04af: control (read@2097152,4096B). Runtime 9630 seconds
# ceph -v
ceph version 12.2.13 (584a20eb0237c657dc0567da126be145106aa47e) luminous (stable)
# rbd-nbd list-mapped
#
# uname -r
5.4.52-050452-generic
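Would forcing a detach be an option instead? A rough, untested sketch (the device name is taken from the kernel log above):

```shell
# Untested sketch: try to detach the stuck nbd device without a reboot.
# /dev/nbd0 is the device from the kernel log above.
rbd-nbd unmap /dev/nbd0
# If the unmap itself hangs, maybe a raw nbd disconnect helps?
sudo nbd-client -d /dev/nbd0
```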
Thanks,
--
Herbert
iSCSI Targets not available
Please consult the documentation on how to configure and enable the iSCSI Targets management functionality.
Available information:
There are no gateways defined
Any idea how to enable it? Thanks so much.
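Would defining a gateway via gwcli be the right direction? A rough sketch of what I mean (hostname, IP, and IQN are placeholders, and this assumes the ceph-iscsi / gwcli packages are installed):

```shell
# Placeholder names throughout; assumes ceph-iscsi / gwcli is installed.
gwcli
# then, inside the interactive shell:
#   /> cd /iscsi-targets
#   /iscsi-targets> create iqn.2003-01.com.example.iscsi-gw:iscsi-igw
#   /iscsi-targets> cd iqn.2003-01.com.example.iscsi-gw:iscsi-igw/gateways
#   ...> create gw1.example.com 192.168.1.10
```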
Hi all,
I have a question about the garbage collector within RGWs. We run Nautilus 14.2.8 and we have 32 garbage objects in the gc pool with a total of 39 GB of garbage that needs to be processed.
When we run,
radosgw-admin gc process --include-all
objects are processed but most of them won't be deleted. This can be checked by adding --debug-rgw=5 to the command and stat'ing the objects that are reported as processed. The monitoring also doesn't show that a huge number of objects are deleted by the gc. So, I assume that it doesn't actually delete the objects. It might be due to a renewed time stamp? (not sure about this) Has anybody had similar issues with removing a large amount of garbage, and is there a way to make the gc delete the objects?
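Concretely, the check looks like this (the data pool name is the default in our setup; the object name placeholder stays a placeholder):

```shell
# Run gc with debug output and capture what it claims to process.
radosgw-admin gc process --include-all --debug-rgw=5 2> gc.log
# Pick an object name mentioned in gc.log and stat it in the data pool;
# if the stat still succeeds, the object was not actually deleted.
rados -p default.rgw.buckets.data stat '<object-name-from-gc.log>'
```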
Most of the objects within the gc list are __multipart__ objects. Are they processed differently than single-part objects? E.g. are all the multiparts collected before the deletion actually happens, or how is this implemented? The garbage is still increasing and the gc cannot process it, which scares us a bit. Also, we cannot bypass the gc because the bucket is still in use.
I also thought about reinitializing the GC in order to get an up-to-date list of garbage. (some entries shown by `radosgw-admin gc list --include-all` are over a month old) Is there a way to make this happen, and how safe is it?
I thought about exporting the omap objects from the gc pool (as a backup) and then deleting the objects within the pool (or renaming the pool).
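Roughly like this (untested; the pool name, the gc.0..gc.31 shard object names, and the 'gc' namespace are assumptions based on the defaults):

```shell
# Untested sketch: back up the omap entries of the 32 gc shard objects
# before touching anything. Pool and namespace names are the usual defaults.
for i in $(seq 0 31); do
  rados -p default.rgw.log --namespace gc listomapvals "gc.$i" > "gc.$i.omap.bak"
done
```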
I appreciate any input and thank you in advance.
Regards,
Michael
Hi,
Has anyone installed ceph-deploy on RHEL7 with the RADOS Gateway?
I see there are no ceph-deploy RPMs available for RHEL7 on
download.ceph.com for the Nautilus, Luminous, or Octopus versions.
Is ceph-deploy still the way to go for RHEL7?
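One possible workaround, since ceph-deploy is a Python tool published on PyPI (untested on RHEL7):

```shell
# May work where no RPM exists; ceph-deploy is installable via pip.
pip install ceph-deploy
ceph-deploy --version
```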
Hi all,
I have a cluster providing object storage.
The cluster worked well until someone started saving Flink checkpoints in
the 'flink' bucket. I checked its behavior and found that Flink frequently
saves the current checkpoint data and deletes the former ones. I
suppose this makes the bucket index grow a large omap object. I have been
getting the '1 large omap objects' warning these days, and after checking
the cluster status and logs, all of those large omap object warnings
point to exactly the same index object. The warning message:
*cluster [WRN] Large omap object found. Object:
17:2f908b17:::.dir.313c8244-fe4d-4d46-bf9b-0e33e46be041.166289.1:head PG:
17.e8d109f4 (17.74) Key count: 568681 Size (bytes): 149443581*
I did 'bilog trim' and 'pg deep-scrub' and the cluster became healthy again.
However, I cannot do this all the time. Is there a way to solve this issue
permanently?
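For reference, what I did so far and what I am considering (the bucket name is mine from the warning above; the shard count is just a guess, not yet tried):

```shell
# What I did so far (names taken from the warning above):
radosgw-admin bilog trim --bucket=flink
ceph pg deep-scrub 17.74
# What I am considering as a permanent fix (untested here):
# reshard the bucket index so no single shard grows that large.
radosgw-admin reshard add --bucket=flink --num-shards=16
radosgw-admin reshard process
```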
Thanks