Hello everyone,
we are facing a problem with the topic operations used to send
notifications, in particular when using the AMQP protocol.
We are using Ceph version 18.2.1. We created a topic, passing all the
required information as attributes, including the push-endpoint (in our
case a RabbitMQ endpoint used to collect the notification messages). We
then configured all the buckets in our Ceph cluster so that
notifications are sent when changes occur.
The problem concerns the list_topics operation in particular: we
noticed that any authenticated user can retrieve the full list of
created topics and, with them, all their attributes, including the
push-endpoint and therefore the username, password, IP and port
(visible, for example, with boto3.set_stream_logger()). This is not
acceptable for us, since we do not want users to see these
implementation details.
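For illustration, this is roughly how we reproduce the exposure; the
endpoint URL and credentials below are placeholders, not our real
values:

import boto3

# Verbose wire logging: the raw responses include the topic attributes.
boto3.set_stream_logger('')

# Any authenticated RGW user can point an SNS-compatible client at the
# gateway with ordinary, non-admin credentials.
sns = boto3.client(
    'sns',
    endpoint_url='http://rgw.example.com:8000',  # placeholder RGW endpoint
    region_name='default',
    aws_access_key_id='REGULAR_USER_KEY',
    aws_secret_access_key='REGULAR_USER_SECRET',
)

for topic in sns.list_topics()['Topics']:
    attrs = sns.get_topic_attributes(TopicArn=topic['TopicArn'])['Attributes']
    # The attributes carry the push-endpoint, i.e. amqp://user:password@host:port
    print(topic['TopicArn'], attrs)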
Is there a way to solve this problem? Any help would be appreciated.
Thanks and best regards.
GM.
Hi,
I am currently working on Ceph object storage and would like to ask how we can calculate the ingress and egress traffic for buckets/tenants via the API.
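For context, the closest thing we have in mind is reading the per-bucket
bytes_sent / bytes_received counters from the RGW usage log, roughly as
sketched below (this assumes rgw_enable_usage_log is on; the uid and the
exact field names are illustrative), but we would prefer a proper API
route if one exists (the Admin Ops API exposes similar data under
/admin/usage):

import json
import subprocess

# Dump the usage log for one user (requires the usage log to be enabled).
raw = subprocess.run(
    ["radosgw-admin", "usage", "show", "--uid", "someuser"],  # placeholder uid
    capture_output=True, check=True, text=True,
).stdout
usage = json.loads(raw)

# Sum egress (bytes_sent) and ingress (bytes_received) per bucket.
for entry in usage.get("entries", []):
    for bucket in entry.get("buckets", []):
        sent = sum(c.get("bytes_sent", 0) for c in bucket.get("categories", []))
        recv = sum(c.get("bytes_received", 0) for c in bucket.get("categories", []))
        print(bucket.get("bucket"), "egress:", sent, "ingress:", recv)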
Hi Team,
I'm currently working with Ceph object storage and would like to understand how to set bucket/object permissions to private or public.
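For context, what we have in mind is something along the lines of the
usual S3-style canned ACLs via boto3 (the endpoint, credentials and
bucket/object names below are placeholders), but we are unsure whether
this is the recommended approach on RGW or whether bucket policies
should be used instead:

import boto3

s3 = boto3.client(
    's3',
    endpoint_url='http://rgw.example.com:8000',  # placeholder RGW endpoint
    aws_access_key_id='OWNER_KEY',
    aws_secret_access_key='OWNER_SECRET',
)

# Canned ACLs: 'private' (owner only) or 'public-read' (anonymous read).
s3.put_bucket_acl(Bucket='mybucket', ACL='public-read')
s3.put_object_acl(Bucket='mybucket', Key='some/object', ACL='private')

# Inspect what is currently set.
print(s3.get_bucket_acl(Bucket='mybucket'))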
It works for me on 17.2.6 as well. Could you be more specific about
what doesn't work for you? Running that command only removes the
cluster configs etc. on the host it is executed on; it does not
orchestrate a removal across all hosts. I'm not sure if you're aware of
that.
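For the archives, that means repeating the command on every host that
carried daemons for the fsid in question, e.g. roughly (host names and
fsid below are placeholders, and this assumes SSH access to each host):

import subprocess

FSID = "2851404a-d09a-11ee-9aaa-fa163e2de51a"   # placeholder fsid
HOSTS = ["node1", "node2", "node3"]             # placeholder host list

# cephadm rm-cluster only cleans up the host it runs on, so loop over
# all hosts that ever held daemons for this cluster.
for host in HOSTS:
    subprocess.run(
        ["ssh", host, "cephadm", "rm-cluster",
         "--force", "--zap-osds", "--fsid", FSID],
        check=True,
    )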
Quoting Vahideh Alinouri <vahideh.alinouri(a)gmail.com>:
> The version that has been installed is 17.2.5. But this method does not
> work at all.
>
> On Fri, Feb 23, 2024, 10:23 AM Eugen Block <eblock(a)nde.ag> wrote:
>
>> Which ceph version is this? In a small Reef test cluster this works as
>> expected:
>>
>> # cephadm rm-cluster --fsid 2851404a-d09a-11ee-9aaa-fa163e2de51a
>> --zap-osds --force
>> Using recent ceph image
>>
>> registry.cloud.hh.nde.ag/ebl/ceph-upstream@sha256:057e08bf8d2d20742173a571bc28b65674b055bebe5f4c6cd488c1a6fd51f685
>> Zapping /dev/sdb...
>> Zapping /dev/sdc...
>> Zapping /dev/sdd...
>>
>> and lsblk shows empty drives.
>>
>> Quoting Vahideh Alinouri <vahideh.alinouri(a)gmail.com>:
>>
>> > Hi Guys,
>> >
>> > I faced an issue. When I wanted to purge, the cluster was not purged
>> > using the below command:
>> >
>> > ceph mgr module disable cephadm
>> > cephadm rm-cluster --force --zap-osds --fsid <fsid>
>> >
>> > The OSDs will remain. There should be some cleanup methods for the
>> > whole cluster, not just MON nodes. Is there anything related to this?
>> >
>> > Regards
Hi Guys,
I faced an issue: when I tried to purge the cluster, it was not fully
purged using the commands below:
ceph mgr module disable cephadm
cephadm rm-cluster --force --zap-osds --fsid <fsid>
The OSDs remain untouched. There should be a cleanup method for the
whole cluster, not just the MON nodes. Is there anything related to this?
Regards
Hello Ceph users,
we've had an incident with CephFS recently which resulted in the MDSs crashing for one of the filesystems. At first it was a journal corruption, which was easily recoverable with no real damage, but since then the MDSs fail to start due to a crash related to snapshots.
-10> 2024-02-18T20:03:51.656+0000 7f08725bfb38 1 mds.0.234010 handle_mds_map state change up:rejoin --> up:active
-9> 2024-02-18T20:03:51.656+0000 7f08725bfb38 1 mds.0.234010 recovery_done -- successful recovery!
-8> 2024-02-18T20:03:51.656+0000 7f087271fb38 10 monclient: get_auth_request con 0x7f0871836380 auth_method 0
-7> 2024-02-18T20:03:51.656+0000 7f08726d5b38 10 monclient: get_auth_request con 0x7f08718353c0 auth_method 0
-6> 2024-02-18T20:03:51.656+0000 7f08726fab38 10 monclient: get_auth_request con 0x7f0871895280 auth_method 0
-5> 2024-02-18T20:03:51.656+0000 7f08725bfb38 1 mds.0.234010 active_start
-4> 2024-02-18T20:03:51.656+0000 7f08725bfb38 1 mds.0.cache dump_cache to cachedump.234013.mds0
-3> 2024-02-18T20:03:51.736+0000 7f08725bfb38 1 mds.0.234010 cluster recovered.
-2> 2024-02-18T20:03:51.736+0000 7f08725bfb38 4 mds.0.234010 set_osd_epoch_barrier: epoch=75012
-1> 2024-02-18T20:03:51.736+0000 7f08722f9b38 -1 /home/buildozer/aports/community/ceph18/src/ceph-18.2.1/src/mds/MDCache.cc: In function 'void MDCache::journal_cow_dentry(MutationImpl*, EMetaBlob*, CDentry*, snapid_t, CInode**, CDentry::linkage_t*)' thread 7f08722f9b38 time 2024-02-18T20:03:51.747600+0000
/home/buildozer/aports/community/ceph18/src/ceph-18.2.1/src/mds/MDCache.cc: 1638: FAILED ceph_assert(follows >= realm->get_newest_seq())
ceph version 18.2.1 (e3fce6809130d78ac0058fc87e537ecd926cd213) reef (stable)
0> 2024-02-18T20:03:51.736+0000 7f08722f9b38 -1 *** Caught signal (Aborted) **
in thread 7f08722f9b38 thread_name:MR_Finisher
ceph version 18.2.1 (e3fce6809130d78ac0058fc87e537ecd926cd213) reef (stable)
Unfortunately, we also attempted to restore the root, which made the entire filesystem tree stray.
1. Is there a way to re-link a directory as the root (or dentry of root) manually?
2. Could snapshots of objects be purged completely manually?
Thanks in advance.
--
Alex D.
RedXen System & Infrastructure Administration
https://redxen.eu/
Hello guys,
We are running Ceph Octopus on Ubuntu 18.04, and we are noticing spikes of
IO utilization for the bstore_kv_sync thread during operations such as adding a
new pool or increasing/reducing the number of PGs in a pool.
Curiously, the IO utilization (reported with iotop) is 99.99%, but the
reported R/W speeds are low. The devices where we are seeing these
issues are all SSD-backed systems, although we are not using high-end
SSDs.
Have you guys seen such behavior?
Also, do you guys have any clue why the IO utilization would be so
high when such a small amount of data is being read from and written to
the OSDs/disks?