Hi,
there is MDS damage on our cluster (version 17.2.5):
[
  {
    "damage_type": "backtrace",
    "id": 2287166658,
    "ino": 3298564401782,
    "path": "/hpc/home/euliz/.Xauthority"
  }
]
The recursive repair does not fix it:
...ceph tell mds.0 scrub start /hpc/home/euliz force,repair,recursive
mds log:
2023-02-10T07:01:34.012+0100 7f46df3ea700 0 mds.0.cache failed to open ino 0x30001c26a76 err -116/0
2023-02-10T07:01:34.012+0100 7f46df3ea700 0 mds.0.cache open_remote_dentry_finish bad remote dentry [dentry #0x1/hpc/home/euliz/.Xauthority [568,head] auth REMOTE(reg) (dversion lock) pv=0 v=4425667830 ino=(nil) state=1073741824 | ptrwaiter=1 0x5560eb33a780]
Any clue how to fix this, or how to remove the file from the namespace? It is
not important...
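For reference, a minimal sketch of commands for inspecting the damage table and retrying the repair on the file itself; the id below is the one from the damage listing above, and note that "damage rm" only clears the record, it does not repair anything:
  # list current damage entries and their ids
  ceph tell mds.0 damage ls
  # retry the repair scrub on the file itself rather than the parent directory
  ceph tell mds.0 scrub start /hpc/home/euliz/.Xauthority repair,force
  # if the file is expendable: delete it from a client, re-run the scrub,
  # then clear the stale damage entry by id
  ceph tell mds.0 damage rm 2287166658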
Thanks,
Andrej
--
_____________________________________________________________
prof. dr. Andrej Filipcic, E-mail: Andrej.Filipcic@ijs.si
Department of Experimental High Energy Physics - F9
Jozef Stefan Institute, Jamova 39, P.O. Box 3000
SI-1001 Ljubljana, Slovenia
Tel.: +386-1-477-3674 Fax: +386-1-477-3166
-------------------------------------------------------------
Hi,
we use RGW as our backup storage, and it basically holds only compressed
RBD snapshots.
I would love to move these out of the replicated pool into an EC pool.
I've read that I can set a default placement target for a user
(https://docs.ceph.com/en/octopus/radosgw/placement/). What happens to
the existing user data?
How do I move the existing data to the new pool?
Does it somehow interfere with ongoing data uploads (it is one internal
user with 800 buckets that constantly get new data while old data is removed)?
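A minimal sketch of the placement commands, assuming a plain default zonegroup/zone, hypothetical names ("ec-placement" and the pool names), and assuming user modify accepts --placement-id the same way user create does; as far as I know, a user's default placement only affects buckets created afterwards:
  # define the new placement target and map it to pools
  radosgw-admin zonegroup placement add --rgw-zonegroup default --placement-id ec-placement
  radosgw-admin zone placement add --rgw-zone default --placement-id ec-placement \
      --data-pool default.rgw.buckets.ec-data --index-pool default.rgw.buckets.index
  radosgw-admin period update --commit   # when running with a realm
  # make it the default for the backup user (new buckets only)
  radosgw-admin user modify --uid backup-user --placement-id ec-placement
Existing objects do not move on their own; they would have to be copied at the S3 level into buckets created under the new placement, or, since old data is removed continuously anyway, the replicated pool could simply be left to drain.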
Cheers
Boris
PS: Can't wait to see some of you at Cephalocon :)
Hi,
I have one bucket that showed up with a large omap warning, but the number
of objects in the bucket does not align with the number of omap keys. The
bucket has been resharded to get rid of the "large omap objects" warning.
I've counted all the omap keys of the bucket and it came up with 33,383,622
(rados -p INDEXPOOL listomapkeys INDEXOBJECT | wc -l)
I've checked the number of actual rados objects and it came up with
17,095,877
(rados -p DATAPOOL ls | grep BUCKETMARKER | wc -l)
I've checked the bucket index and it came up with 16,738,482
(radosgw-admin bi list --bucket BUCKET | grep -F '"idx":' | wc -l)
I have tried to fix it with
radosgw-admin bucket check --check-objects --fix --bucket BUCKET
but this did not change anything.
Is this a known bug, or might there be something else going on? How can I
investigate further?
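Two things that may be worth checking (a sketch, names are placeholders): versioned buckets legitimately carry extra index entries per object (an OLH key plus per-instance keys), and older or incomplete reshards can leave stale index objects behind whose omap keys still get counted:
  # what RGW itself thinks the object count and shard layout are
  radosgw-admin bucket stats --bucket BUCKET
  # look for leftovers from earlier reshards
  radosgw-admin reshard status --bucket BUCKET
  radosgw-admin reshard stale-instances list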
Cheers
Boris
Hi,
I'm on Ceph version 16.2.10, and I found there are a bunch of bootstrap
keyrings (i.e., client.bootstrap-<mds|mgr|osd|rbd|rbd-mirror|rgw>) located
at /var/lib/ceph/bootstrap-<mds|mgr|osd|rbd|rbd-mirror|rgw>/ceph.keyring
after bootstrap. Are they still in use after bootstrapping? Is it safe to
remove them from the host, or even to delete them from the Ceph monitors?
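For context, a sketch of how to inspect and re-fetch them: provisioning tools rely on these keys (ceph-volume, for instance, uses client.bootstrap-osd when preparing new OSDs), so the on-disk copy can always be regenerated from the monitors, but deleting the auth entries from the cluster itself would break future provisioning:
  # the bootstrap credentials also live in the cluster's auth database
  ceph auth ls | grep bootstrap
  # re-export a deleted on-disk keyring at any time
  ceph auth get client.bootstrap-osd -o /var/lib/ceph/bootstrap-osd/ceph.keyring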
Thanks,
Zhongzhou Cai
The RATIO for cephfs.application-acc.data shouldn't be over 1.0; I believe this is what triggered the error.
All weekend I was thinking about this issue but couldn't find an option to correct it.
Minutes after posting, though, I found a blog post about the autoscaler (https://ceph.io/en/news/blog/2022/autoscaler_tuning) which mentions the option to set the rate. Shouldn't this option be set to 2 when using a stretched cluster, and not 4?
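For reference, the rate the autoscaler uses can be inspected directly; it is derived from the pool's replication factor (or EC overhead), so comparing it with the pool's size setting shows where the 4 comes from (pool name taken from the message above):
  # RATE and RATIO per pool, as the autoscaler sees them
  ceph osd pool autoscale-status
  # the replication factor the rate is derived from
  ceph osd pool get cephfs.application-acc.data size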
Hi list,
A little bit of background: we provide S3 buckets using RGW (running
quincy), but users are not allowed to manage their buckets, just read and
write objects in them. Buckets are created by an admin user, and read/write
permissions are given to end users using S3 bucket policies. We set the
users' quotas to 0 for everything to forbid them from creating buckets. This is
not really scalable and a bit annoying for the users.
So we are trying to find a solution to allow users to create their own
buckets, but with a limited set of APIs available (no policy changes, for
example).
The Ceph docs say that policies cannot yet be applied to users, groups, or
roles. Is there any other way to achieve this?
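For concreteness, a sketch of the bucket-policy setup described above, with hypothetical bucket and user names; the admin-owned bucket gets a policy that grants only object-level actions, so end users cannot change the policy itself:
  # policy.json -- object read/write/list only, no s3:PutBucketPolicy etc.
  {
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"AWS": ["arn:aws:iam:::user/enduser"]},
      "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": ["arn:aws:s3:::projectbucket", "arn:aws:s3:::projectbucket/*"]
    }]
  }
  # applied by the admin/owner, e.g.:
  s3cmd setpolicy policy.json s3://projectbucket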
Any feedback will be appreciated.
Thanks!
Gauvain
Hi!
Thank you 😊
Your message was very helpful!
The main reason why "ceph df" went to 100% USAGE was that the crush rule said this:
"min_size": 2
"max_size": 2
And the new "size" was 3, so the rule did not want to work with the pool.
After creating a new rule and assigning the pools to it, things seem to work (in our test cluster):
rule newrule {
    ruleset 4
    type replicated
    min_size 3
    max_size 3
    step take default
    step choose firstn 2 type room
    step chooseleaf firstn 2 type host
    step emit
}
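In case it helps anyone reproducing this, a sketch of one way to add such a rule by editing the CRUSH map and pointing a pool at it (POOLNAME is a placeholder):
  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt
  # ...add the rule above to crushmap.txt...
  crushtool -c crushmap.txt -o crushmap-new.bin
  ceph osd setcrushmap -i crushmap-new.bin
  # assign the pool to the new rule
  ceph osd pool set POOLNAME crush_rule newrule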
Thanks again!
Further testing now 😊