Hello,
Lately I’ve been playing with Lua scripting on top of RGW.
I would like to implement request blocking based on the bucket name: when there is a dot in a bucket name, return an error code and a message saying that the name is invalid.
Here is the code I was able to come up with.
-- reject any request whose URI contains a dot
if string.find(Request.HTTP.URI, '%.') then
  Request.Response.HTTPStatusCode = 400
  Request.Response.HTTPStatus = "InvalidBucketName"
  Request.Response.Message = "Dots in bucket name are not allowed."
end
This works fine, but the request for creating a bucket is still processed and the bucket gets created. I thought about a dirty workaround of setting Request.Bucket.Name to a bucket that already exists, but it seems that this field is not writable in Quincy.
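As a side note, I realize that matching the whole URI also flags dots in
object keys, so I'm thinking of limiting the check to the first path
segment (assuming path-style requests, where that segment is the bucket
name), roughly like this:
-- look only at the first path segment of the URI (assumed to be the bucket name)
local bucket = string.match(Request.HTTP.URI, '^/([^/]+)')
if bucket and string.find(bucket, '.', 1, true) then
  Request.Response.HTTPStatusCode = 400
  Request.Response.HTTPStatus = "InvalidBucketName"
  Request.Response.Message = "Dots in bucket name are not allowed."
end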
Is there a way to block the request from processing?
Any help is much appreciated.
Kind regards,
Ondrej
How do you guys back up CephFS (if at all)?
I'm building 2 ceph clusters, a primary one and a backup one, and I'm
looking into CephFS as the primary store for research files. CephFS
mirroring seems like a very fast and efficient way to copy data to the
backup location, and it has the benefit of the files on the backup
location being fully in a ready-to-use state instead of some binary
proprietary archive.
But I am wondering how to do 'ransomware protection' in this setup. I
can't believe I'm the only one that wants to secure my data ;)
I'm reading up on snapshots and mirroring, and that's great to protect
from user error. I could schedule snapshots on the primary cluster,
and they would automatically get synced to the backup cluster.
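For reference, the kind of schedule I have in mind would be set up with the
snap_schedule mgr module, roughly like this (path and retention values are
just illustrative):
ceph mgr module enable snap_schedule
ceph fs snap-schedule add /research 1h
ceph fs snap-schedule retention add /research 24h14d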
But a user can still delete all snapshots on the source side, right?
And you need to create a ceph user on the backup cluster, and import
that on the primary cluster. That means that if a hacker has those
credentials, he could also delete the data on the backup cluster? Or
is there some 'append-only' mode for immutability?
Another option I'm looking into is restic. Restic looks like a cool
tool, but it does not support S3 object locks yet. See the discussion
here [1]. I should be able to get immutability working with the
restic-rest backend, according to the developer. But I have my worries
that running restic to sync up an 800TB filesystem with millions of
files will be... worrisome ;) Anyone using restic in production?
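For context, the restic setup I have in mind would roughly be: run
rest-server in append-only mode on the backup side and push backups to it,
something like this (hostname and paths are made up):
rest-server --path /srv/restic-repo --append-only
restic -r rest:https://backup.example.org:8000/ init
restic -r rest:https://backup.example.org:8000/ backup /mnt/cephfs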
Thanks again for your input!
Angelo.
[1] https://github.com/restic/restic/issues/3195
Is there a way to enable the LUKS encryption format on a snapshot that was created from an unencrypted image, without losing data? I've seen in https://docs.ceph.com/en/quincy/rbd/rbd-encryption/ that "Any data written to the image prior to its format may become unreadable, though it may still occupy storage resources." and have observed that to be the case when running `encryption format` on an image that already has data in it. However, is there any way to take a snapshot of an unencrypted image and enable encryption on the snapshot (or even on a new image cloned from the snapshot)?
Hello,
we are running a 3-node ceph cluster with version 17.2.6.
For CephFS snapshots we have configured the following snap schedule with
retention:
/PATH 2h 72h15d6m
But we observed that at most 50 snapshots are preserved. When a new
snapshot is created, the oldest one (the 51st) is deleted.
Is there a limit on the maximum number of CephFS snapshots, or is this maybe a bug?
I have found the setting "mds_max_snaps_per_dir", which is 100 by default,
but I think this is not related to my problem?
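For reference, the schedule status and that MDS setting can be checked with
(path as above):
ceph fs snap-schedule status /PATH
ceph config get mds mds_max_snaps_per_dir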
Thanks,
Tobias
Hi,
I followed the steps to repair the journal and MDS that I found here on the list.
I hit a bug that prevented my MDS from starting, so I took the long way
of reading back the data.
Everything went fine and I can even mount one of my CephFS now. That's a
big relief.
But when I start a scrub, I just get return code -116 and no scrub is
initiated. I didn't find that code in the docs. Can you help me?
[ceph: root@ceph06 /]# ceph tell mds.mds01.ceph06.huavsw scrub start
recursive
2023-04-29T10:46:36.926+0000 7ff676ff5700 0 client.79389355
ms_handle_reset on v2:192.168.23.66:6800/1133836262
2023-04-29T10:46:36.953+0000 7ff676ff5700 0 client.79389361
ms_handle_reset on v2:192.168.23.66:6800/1133836262
{
"return_code": -116
}
(I get the same error, no matter what kind of scrub I start)
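In case it helps with decoding: I assume the return code is a negative errno
value, so it can be looked up with e.g.
python3 -c "import errno, os; print(errno.errorcode[116], os.strerror(116))"
which on Linux prints ESTALE ("Stale file handle"), but I couldn't find what
that means for a scrub.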
--
http://www.widhalm.or.at
GnuPG : 6265BAE6 , A84CB603
Threema: H7AV7D33
Telegram, Signal: widhalmt(a)widhalm.or.at
Hi,
The cluster is with Pacific and deployed by cephadm on container.
The case is to import OSDs after host OS reinstallation.
All OSDs are SSDs with DB/WAL and data together.
Did some research, but was not able to find a working solution.
Wondering if anyone has experience with this?
What needs to be done before host OS reinstallation and what's after?
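For example, I wonder whether 'ceph cephadm osd activate <host>' (which, as
far as I know, re-activates existing OSDs on a host) would be enough after
the reinstall, or whether something else has to be prepared beforehand.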
Thanks!
Tony
Hello everyone,
I've started playing with Lua scripting and would like to ask if anyone knows about a way to drop or close a user request in the prerequest context.
I would like to block creating buckets with dots in the name, but the use-case could be blocking certain operations, etc.
I was able to come up with something like this:
if string.find(Request.HTTP.URI, '%.') then
  Request.Response.HTTPStatusCode = 400
  Request.Response.HTTPStatus = "InvalidBucketName"
  Request.Response.Message = "Dots are not allowed."
end
This works fine, but the bucket is still created, which is something I don't want. As a dirty workaround, I've thought about changing the bucket name here to an already existing bucket, but Request.Bucket.Name = "taken" doesn't seem to work, as the log gives me the error "attempt to index a nil value (field 'Bucket')".
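For completeness, adding a nil guard avoids the Lua error, though it
obviously doesn't solve the underlying problem (sketch only; "taken" is just
a placeholder for an already existing bucket):
-- only touch the field when it exists in this context
if Request.Bucket and Request.Bucket.Name then
  Request.Bucket.Name = "taken"
end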
Any help is much appreciated.
Dear Ceph folks,
I would like to ask for your advice on the following topic: we have a 6-node Ceph cluster (for RGW usage only) running on Luminous 12.2.12 and will now add 10 new nodes. Our plan is to phase out the old 6 nodes and run the RGW Ceph cluster with the new 10 nodes on Nautilus.
I can think of two ways to achieve the above goal. The first method would be: 1) Upgrade the current 6-node cluster from Luminous 12.2.12 to Nautilus 14.2.22; 2) Expand the cluster with the 10 new nodes, and then rebalance; 3) After the rebalance completes, remove the 6 old nodes from the cluster.
The second method would skip upgrading the old 6 nodes from Luminous to Nautilus, because those 6 nodes will be phased out anyway; but then we would have to deal with a hybrid cluster with 6 nodes on Luminous 12.2.12 and 10 nodes on Nautilus, and after rebalancing we could remove the 6 old nodes from the cluster.
Any suggestions, advice, or best practice would be highly appreciated.
best regards,
Samuel
huxiaoyu(a)horebdata.cn