Hello,
I am running Ceph Nautilus 14.2.8.
I had to remove two pools (old CephFS data and metadata pools with 1024 PGs).
The removal of the pools seems to take an incredibly long time to free the
space (the data pool I deleted was more than 100 TB, and in 36 h I got
back only 10 TB). In the meantime, the cluster is extremely slow (an rbd
extract takes ~1 h 30 min for a 32 GB image and writing 10 MB in CephFS
takes half a minute!), which makes the cluster almost unusable.
It seems that the removal of deleted PGs is done by deep scrubs, according
to https://medium.com/opsops/a-very-slow-pool-removal-7089e4ac8301
Also, it has been reported that this could be a regression in Nautilus:
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/W4M5XQRDBLXFGJGDYZALG6TQ4QBVGGAJ/#W4M5XQRDBLXFGJGDYZALG6TQ4QBVGGAJ
But I couldn't find a fix, or a way to speed up (or throttle) the process
and bring the cluster back to decent responsiveness.
Is there a way?
Thanks
F.
Hi everyone,
There are two types of QoS in Ceph: one based on the token bucket algorithm, the other based on mClock.
Which one can I use in a Nautilus production environment? Thank you.
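P.S. If this refers to the OSD op scheduler: as far as I know it is selected with the osd_op_queue option, wpq is the Nautilus default, and the mClock variants are still considered experimental there. A sketch of checking and changing it (osd.0 is just an example; changing it requires an OSD restart):

ceph daemon osd.0 config get osd_op_queue    # on the OSD's host: current scheduler
ceph config set osd osd_op_queue wpq         # cluster-wide setting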
Hi Marc,
None of the CephFS issues are show-stoppers, but we're waiting for them
to land in nautilus anyway:
* https://tracker.ceph.com/issues/45090
* https://tracker.ceph.com/issues/45261
* https://tracker.ceph.com/issues/45835
* https://tracker.ceph.com/issues/45875
Cheers, Dan
On Thu, Jun 25, 2020 at 11:38 PM Marc Roos <M.Roos(a)f1-outsourcing.eu> wrote:
>
>
> Top! Good to see such pros on the team.
>
> What bugs is Dan waiting for to be fixed in cephfs before he upgrades
> from luminous to nautilus?
>
>
>
> -----Original Message-----
> To: ceph-users(a)ceph.io
> Subject: [ceph-users] Ceph Tech Talk: Solving the Bug of the Year
>
> Hi everyone,
>
> Thanks again to everyone who was able to join us for discussion, and to
> Dan for providing some great content. You can find the full recording
> for the latest Ceph Tech Talk here:
>
> https://www.youtube.com/watch?v=_4HUR00oCGo
>
> We're looking for a talk for August 27th. If you're available and have
> content to share, let me know!
>
Hi everyone,
We are currently transitioning from a temporary machine to our production hardware. Since we're starting with under 200 TB of raw storage, we are currently on only 1–2 physical machines per cluster, eventually in 3 zones. The temporary machine is undersized even for that, with an older single 6-core CPU and spinning disks only. As of now, that "cluster-of-one" is running on Nautilus and has 3 buckets with 98K, 1.1M and 1.4M objects respectively, for a total of 9.1 TB. As we're expecting these to grow to around 5M objects each, and they will be in a multisite configuration, I went with 50 shards per bucket.
Listing "directories" via S3 is somewhat slow (sometimes to the point of read timeouts) but mostly bearable. After the new production setup (dual 8-core/16-thread Xeon Silvers, 2 x SATA SSDs for RGW index pool, on Octopus, with enough free memory to easily fit all bucket indexes multiple times) synced successfully, listings via S3 always time out on the RGW on that machine/zone.
As soon as I trigger a single listing via S3 (even on the 98K-object bucket), reads go up to a sustained 300–500 MB/s and 20–50K IOPS on the bucket index pool for several hours. The RGW debug log is flooded with lines like this:
{"log":"debug 2020-06-08T19:31:08.315+0000 7f83d704c700 1 RGWRados::Bucket::List::list_objects_ordered INFO ordered bucket listing requires read #1\n","stream":"stdout","time":"2020-06-08T19:31:08.317198682Z"}
I get that sharded RGW indexes (and listing objects in S3 buckets in general) are not very efficient, but after getting somewhat decent results on slower hardware and an older Ceph version, I wasn't expecting the nominally much better setup to be orders of magnitude slower.
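For reference, the shard layout can be inspected like this (the bucket name is a placeholder):

radosgw-admin bucket stats --bucket=mybucket    # shows num_shards and object counts
radosgw-admin bucket limit check                # objects per shard versus the target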
Any help or pointers would be greatly appreciated.
Thank you,
Stefan
Do you mean unfound instead of undersized? There is an as yet
unreproducible bug:
https://tracker.ceph.com/issues/44286
(Please follow this bug if it affects you! I've experienced it and am
leery of doing any drive swaps or upgrades until it is fixed.)
Chad.
Hi everyone,
Thanks again to everyone who was able to join us for discussion, and to
Dan for providing some great content. You can find the full recording
for the latest Ceph Tech Talk here:
https://www.youtube.com/watch?v=_4HUR00oCGo
We're looking for a talk for August 27th. If you're available and have
content to share, let me know!
--
Mike Perez
He/Him
Ceph Community Manager
Red Hat Los Angeles <https://www.redhat.com>
thingee(a)redhat.com
M: 1-951-572-2633 IM: IRC Freenode/OFTC: thingee
494C 5D25 2968 D361 65FB 3829 94BC D781 ADA8 8AEA
@Thingee <https://twitter.com/thingee>
Hi,
Our Ceph cluster's health is fine, but when I looked at "ceph orch
ps", one of the images is in an error state, as shown below.
node-exporter.ceph102  ceph102  error  7m ago  13m  <unknown>  prom/node-exporter  <unknown>  <unknown>
How can we debug and locate the problem with ceph commands? Also, where
can I find the error log: inside the Docker container or on the host?
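I am guessing the right direction is something like the following, assuming a cephadm deployment (the daemon name is taken from the output above), but I am not sure:

ceph log last cephadm                        # recent orchestrator/cephadm events
cephadm ls                                   # run on the host: lists daemon states there
cephadm logs --name node-exporter.ceph102    # run on the host: journalctl for that daemon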
Regards.
Hello,
I want to unsubscribe from this mailing list. Please help me.
On 2020/6/25 22:42, ceph-users-request(a)ceph.io wrote:
> Send ceph-users mailing list submissions to
> ceph-users(a)ceph.io
>
> To subscribe or unsubscribe via email, send a message with subject or
> body 'help' to
> ceph-users-request(a)ceph.io
>
> You can reach the person managing the list at
> ceph-users-owner(a)ceph.io
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of ceph-users digest..."
>
> Today's Topics:
>
> 1. Re: Removing pool in nautilus is incredibly slow
> (Francois Legrand)
> 2. Re: Bench on specific OSD (Marc Roos)
> 3. Re: Removing pool in nautilus is incredibly slow
> (Wout van Heeswijk)
> 4. Lifecycle message on logs (Marcelo Miziara)
> 5. Re: Feedback of the used configuration (Simon Sutter)
> 6. Re: Removing pool in nautilus is incredibly slow
> (Francois Legrand)
> 7. Re: Removing pool in nautilus is incredibly slow (Eugen Block)
>
>
> ----------------------------------------------------------------------
>
> Date: Thu, 25 Jun 2020 07:57:48 -0000
> From: "Francois Legrand" <fleg(a)lpnhe.in2p3.fr>
> Subject: [ceph-users] Re: Removing pool in nautilus is incredibly slow
> To: ceph-users(a)ceph.io
> Message-ID: <159307186843.20.7354239610800797635@mailman-web>
> Content-Type: text/plain; charset="utf-8"
>
> Does anyone have an idea?
> F.
>
> ------------------------------
>
> Date: Thu, 25 Jun 2020 11:36:24 +0200
> From: "Marc Roos" <M.Roos(a)f1-outsourcing.eu>
> Subject: [ceph-users] Re: Bench on specific OSD
> To: ceph-users <ceph-users(a)ceph.io>, seenafallah
> <seenafallah(a)gmail.com>
> Message-ID: <"H0000071001729d1.1593077784.sx.f1-outsourcing.eu*"@MHS>
> Content-Type: text/plain; charset="US-ASCII"
>
>
>
> What is wrong with just doing multiple tests and grouping the OSDs by
> host in your charts?
>
>
> -----Original Message-----
> To: ceph-users
> Subject: [ceph-users] Bench on specific OSD
>
> Hi all.
>
> Is there any way to completely health-check one OSD host or instance?
> For example, run rados bench just on that OSD, or do some checks on the
> disk and the front and back networks?
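> The closest I have found is the per-OSD bench (osd.0 as a stand-in), but
> I am not sure it covers the disk plus both networks:
>
> ceph tell osd.0 bench    # write benchmark on that single OSD
> ceph osd perf            # per-OSD commit/apply latency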
>
> Thanks.
> _______________________________________________
> ceph-users mailing list -- ceph-users(a)ceph.io
> To unsubscribe send an email to ceph-users-leave(a)ceph.io
>
>
> ------------------------------
>
> Date: Thu, 25 Jun 2020 14:26:59 +0200
> From: Wout van Heeswijk <wout(a)42on.com>
> Subject: [ceph-users] Re: Removing pool in nautilus is incredibly slow
> To: fleg(a)lpnhe.in2p3.fr
> Cc: ceph-users(a)ceph.io
> Message-ID: <38a5d557-7797-a68a-1b67-4c8f0e1ecf4c(a)42on.com>
> Content-Type: text/plain; charset=utf-8; format=flowed
>
> Hi Francois,
>
> Have you already looked at the option "osd_delete_sleep"? It will not
> speed up the process, but it will give you some control over your cluster
> performance.
>
> Something like:
>
> ceph tell osd.\* injectargs '--osd_delete_sleep 1'
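> Or, to persist it via the config database (the value is in seconds and
> just an example):
>
> ceph config set osd osd_delete_sleep 1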
>
> kind regards,
>
> Wout
> 42on
>
> On 25-06-2020 09:57, Francois Legrand wrote:
>> Does anyone have an idea?
>> F.
>> _______________________________________________
>> ceph-users mailing list -- ceph-users(a)ceph.io
>> To unsubscribe send an email to ceph-users-leave(a)ceph.io
>
> ------------------------------
>
> Date: Thu, 25 Jun 2020 09:53:09 -0300
> From: Marcelo Miziara <raxidex(a)gmail.com>
> Subject: [ceph-users] Lifecycle message on logs
> To: ceph-users(a)ceph.io
> Message-ID:
> <CAGxsqB296eczKux19OKObkimXg3PZ_UDEWVTTqEnrAf=Agm0+A(a)mail.gmail.com>
> Content-Type: text/plain; charset="UTF-8"
>
> Hello... it's the first time I've needed to use lifecycle policies. I
> created a bucket and set it to expire in one day with s3cmd:
> s3cmd expire --expiry-days=1 s3://bucket
>
> The rgw_lifecycle_work_time is set to the default value (00:00-06:00). But
> I noticed a lot of messages in the rgw logs like:
> 2020-06-16 00:00:00.311369 7fe2cac87700 0 RGWLC::process() failed to get obj entry lc.8
> 2020-06-16 00:00:00.311623 7fe2c8c83700 0 RGWLC::process() failed to get obj entry lc.16
> 2020-06-16 00:00:00.311862 7fe2c6c7f700 0 RGWLC::process() failed to get obj entry lc.4
> 2020-06-16 00:00:00.319424 7fe2cac87700 0 RGWLC::process() failed to get obj entry lc.10
> 2020-06-16 00:00:00.319647 7fe2c8c83700 0 RGWLC::process() failed to get obj entry lc.18
> 2020-06-16 00:00:00.320682 7fe2c6c7f700 0 RGWLC::process() failed to get obj entry lc.16
> 2020-06-16 00:00:00.327770 7fe2cac87700 0 RGWLC::process() failed to get obj entry lc.6
> 2020-06-16 00:00:00.328941 7fe2c8c83700 0 RGWLC::process() failed to get obj entry lc.17
> 2020-06-16 00:00:00.332463 7fe2c6c7f700 0 RGWLC::process() failed to get obj entry lc.20
> 2020-06-16 00:00:00.336788 7fe2cac87700 0 RGWLC::process() failed to get obj entry lc.1
> 2020-06-16 00:00:00.336924 7fe2c8c83700 0 RGWLC::process() failed to get obj entry lc.24
> 2020-06-16 00:00:00.340915 7fe2c6c7f700 0 RGWLC::process() failed to get obj entry lc.2
>
> The object was deleted, but these messages keep appearing.
> Is it safe to ignore them?
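> In case it is relevant, I assume the lifecycle state can also be listed
> and triggered manually:
>
> radosgw-admin lc list       # per-bucket lifecycle status
> radosgw-admin lc process    # run lifecycle processing immediately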
>
> For the record, I'm using Red Hat Luminous 12.2.12.
>
> Thanks, Marcelo.
>
> ------------------------------
>
>
> Date: Thu, 25 Jun 2020 16:22:12 +0200
> From: Francois Legrand <fleg(a)lpnhe.in2p3.fr>
> Subject: [ceph-users] Re: Removing pool in nautilus is incredibly slow
> To: Wout van Heeswijk <wout(a)42on.com>
> Cc: ceph-users(a)ceph.io
> Message-ID: <db34e791-1172-8b94-09e7-4ab5790d3162(a)lpnhe.in2p3.fr>
> Content-Type: text/plain; charset=utf-8; format=flowed
>
> Thanks for the hint.
> I tried but it doesn't seem to change anything...
> Moreover, as the OSDs seem quite loaded, I regularly had some OSDs
> marked down, which triggered new peering and thus more load!
> I set the nodown flag, but I still have some OSDs reported (wrongly) as
> down (and back up within the minute), which generates peering and
> remapping. I don't really understand the effect of the nodown parameter!
> Is there a way to tell Ceph not to peer immediately after an OSD is
> reported down (say, wait for 60 s)?
> I am thinking about restarting all OSDs (or maybe the whole cluster) to
> change osd_op_queue_cut_off to high and osd_op_thread_timeout to
> something higher than 15 (but I don't think it will really improve the
> situation).
> F.
>
>
> On 25/06/2020 at 14:26, Wout van Heeswijk wrote:
>> Hi Francois,
>>
>> Have you already looked at the option "osd_delete_sleep"? It will not
>> speed up the process but I will give you some control over your
>> cluster performance.
>>
>> Something like:
>>
>> ceph tell osd.\* injectargs '--osd_delete_sleep 1'
>> kind regards,
>>
>> Wout
>> 42on
>> On 25-06-2020 09:57, Francois Legrand wrote:
>>> Does anyone have an idea?
>>> F.
>>> _______________________________________________
>>> ceph-users mailing list -- ceph-users(a)ceph.io
>>> To unsubscribe send an email to ceph-users-leave(a)ceph.io
>
> ------------------------------
>
> Date: Thu, 25 Jun 2020 14:42:57 +0000
> From: Eugen Block <eblock(a)nde.ag>
> Subject: [ceph-users] Re: Removing pool in nautilus is incredibly slow
> To: ceph-users(a)ceph.io
> Message-ID:
> <20200625144257.Horde.I8JPcddeor47WdOehQKQPNY(a)webmail.nde.ag>
> Content-Type: text/plain; charset=utf-8; format=flowed; DelSp=Yes
>
> I'm not sure if your OSDs have their RocksDB on faster devices; if not,
> this sounds a lot like RocksDB fragmentation [1], leading to very high
> load on the OSDs and occasionally crashing OSDs. If you don't plan to
> delete this much data at once on a regular basis you could sit this one
> out, but one solution is to re-create the OSDs with RocksDB/WAL on
> faster devices.
>
>
> [1] https://www.mail-archive.com/ceph-users@ceph.io/msg03160.html
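> A quick way to check whether an OSD has a dedicated DB device (osd.0 as
> a stand-in):
>
> ceph osd metadata 0 | grep -E 'bluefs|bdev'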
>
>
> Zitat von Francois Legrand <fleg(a)lpnhe.in2p3.fr>:
>
>> Thanks for the hint.
>> I tried but it doesn't seem to change anything...
>> Moreover, as the OSDs seem quite loaded, I regularly had some OSDs
>> marked down, which triggered new peering and thus more load!
>> I set the nodown flag, but I still have some OSDs reported (wrongly)
>> as down (and back up within the minute), which generates peering and
>> remapping. I don't really understand the effect of the nodown
>> parameter!
>> Is there a way to tell Ceph not to peer immediately after an OSD is
>> reported down (say, wait for 60 s)?
>> I am thinking about restarting all OSDs (or maybe the whole cluster)
>> to change osd_op_queue_cut_off to high and
>> osd_op_thread_timeout to something higher than 15 (but I don't think
>> it will really improve the situation).
>> F.
>>
>>
>> On 25/06/2020 at 14:26, Wout van Heeswijk wrote:
>>> Hi Francois,
>>>
>>> Have you already looked at the option "osd_delete_sleep"? It will
>>> not speed up the process but I will give you some control over your
>>> cluster performance.
>>>
>>> Something like:
>>>
>>> ceph tell osd.\* injectargs '--osd_delete_sleep 1'
>>> kind regards,
>>>
>>> Wout
>>> 42on
>>> On 25-06-2020 09:57, Francois Legrand wrote:
>>>> Does anyone have an idea?
>>>> F.
>>>> _______________________________________________
>>>> ceph-users mailing list -- ceph-users(a)ceph.io
>>>> To unsubscribe send an email to ceph-users-leave(a)ceph.io
>> _______________________________________________
>> ceph-users mailing list -- ceph-users(a)ceph.io
>> To unsubscribe send an email to ceph-users-leave(a)ceph.io
>
>
> ------------------------------
>
> Subject: Digest Footer
>
> _______________________________________________
> ceph-users mailing list -- ceph-users(a)ceph.io
> To unsubscribe send an email to ceph-users-leave(a)ceph.io
>
>
> ------------------------------
>
> End of ceph-users Digest, Vol 89, Issue 112
> *******************************************
Hello Paul,
Thanks for the answer.
I took a look at the subvolumes, but they are a bit odd in my opinion.
If I create one with a subvolume group, the folder structure will look like this:
/cephfs/volumes/group-name/subvolume-name/random-uuid/
And I have to issue two commands, first to create the group and then the subvolume, but why so complicated?
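For reference, the two commands; a sketch with "cephfs" as the volume name and the placeholder names from above:

ceph fs subvolumegroup create cephfs group-name
ceph fs subvolume create cephfs subvolume-name --group_name group-name
ceph fs subvolume getpath cephfs subvolume-name --group_name group-name    # prints the .../random-uuid path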
Wouldn't it be easier to just allow subvolumes anywhere inside the CephFS?
I can see the intended use for groups, but if I want to publish a pool in some different directory, that's not possible (except via setfattr).
Without first creating subvolume groups, the orchestrator creates subvolumes in the /cephfs/volumes/_nogroup/subvolume-name/random-uuid/ folder.
And the more important question: why is there a new folder with a random UUID inside the subvolume?
I am trying to understand the reasoning the devs had when they designed this, but it is something I have to explain to the devs on our team, and at the moment I can't.
It is indeed easier to deploy, but it comes with much less flexibility.
Maybe something to file in the tracker?
Thanks in advance,
Simon
From: Paul Emmerich [mailto:paul.emmerich@croit.io]
Sent: Wednesday, 24 June 2020 17:35
To: Simon Sutter <ssutter(a)hosttech.ch>
Cc: ceph-users(a)ceph.io
Subject: Re: [ceph-users] Feedback of the used configuration
Have a look at cephfs subvolumes: https://docs.ceph.com/docs/master/cephfs/fs-volumes/#fs-subvolumes
Internally they are just directories with a quota, a pool placement layout and a namespace, plus some mgr magic to make that easier than doing it all by hand.
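Doing it by hand would look roughly like this (values illustrative):

setfattr -n ceph.quota.max_bytes -v 100000000000 /cephfs/volumes/group-name/subvolume-name
setfattr -n ceph.dir.layout.pool -v some_ec_pool /cephfs/volumes/group-name/subvolume-name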
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
On Wed, Jun 24, 2020 at 4:38 PM Simon Sutter <ssutter(a)hosttech.ch> wrote:
Hello,
After two months of the "Ceph trial-and-error game", I finally managed to get an Octopus cluster up and running.
The unconventional thing about it: it's just for hot backups, no virtual machines on there.
All the nodes are without any caching SSDs, just plain HDDs.
At the moment there are eight of them with a total of 50 TB. We are planning to go up to 25 nodes with bigger disks, so we will end up at 300-400 TB.
I decided to go with CephFS because I don't have any experience with things like S3, and I need to read the same file system from more than one client.
I made one CephFS with a replicated pool.
To it I added erasure-coded pools to save some storage.
I attached those pools with the setfattr command, like this:
setfattr -n ceph.dir.layout.pool -v ec_data_server1 /cephfs/nfs/server1
Some of our servers cannot use CephFS (old kernels, special OSes), so I have to use NFS.
This is set up with the included NFS-Ganesha.
The /cephfs/nfs folder is exported, and clients can mount folders below it.
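For reference, the export block in ganesha.conf looks roughly like this (the id and paths are illustrative; the CephFS FSAL is named CEPH):

EXPORT {
    Export_Id = 1;
    Path = "/nfs";
    Pseudo = "/nfs";
    Access_Type = RW;
    FSAL {
        Name = CEPH;
    }
}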
There are two final questions:
- Was it right to "mount" pools with setfattr this way, or should I have used multiple CephFS filesystems?
At first I was thinking about using multiple filesystems, but there are warnings everywhere. The deeper I got in, the more it seemed I would have been fine with multiple filesystems.
- Is there a way I don't know of that would be easier?
I still don't know much about REST, S3, RBD, etc., so there may be a better way.
Other remarks are welcome.
Thanks in advance,
Simon
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io