Hi,
With v15.2.8, after zapping a device on an OSD node, it's still not available.
The reason given is "locked, LVM detected". If I reboot the whole OSD node,
the device becomes available again. There must be something not being
cleaned up. Any clues?
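For reference, this is the kind of manual cleanup that usually avoids the reboot: a stale device-mapper entry left behind by LVM keeps the disk "locked". Device and mapping names below are placeholders, not taken from this cluster:

```shell
# Run on the OSD node; /dev/sdX and the ceph--<vg>--<lv> name are placeholders.
lsblk /dev/sdX                   # check whether a ceph LV is still layered on the disk
dmsetup ls | grep ceph           # stale device-mapper entries hold the "locked" state
dmsetup remove ceph--<vg>--<lv>  # drop the stale mapping reported above
ceph-volume lvm zap /dev/sdX     # re-run the zap once the mapping is gone
```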
Thanks!
Tony
I am running a simple workload (just 1 client and 1 file) of random writes on CephFS and I noticed that approximately 3% of the operations (well spread over time) show latencies higher than the other 97% (100 ms vs. 10 ms). Is there any reason for this to happen?
- I'm using fio with O_DIRECT to bypass the page cache, so operations are expected to complete only after writing to the disk.
- My WAL is also disabled, so there is no reason for Ceph to be doing deferred writes.
- I performed the same workload on GlusterFS and the latencies were uniform over time.
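For anyone wanting to reproduce this, a fio invocation along these lines matches the setup described (the mount path is an assumption); write_lat_log records per-I/O completion latency so the slow 3% tail can be inspected directly:

```shell
# Hypothetical reproduction: O_DIRECT 4k random writes on a CephFS mount,
# logging each I/O's completion latency to cephfs-lat_clat.*.log
fio --name=cephfs-randwrite --filename=/mnt/cephfs/fio-test \
    --rw=randwrite --bs=4k --direct=1 --ioengine=libaio --iodepth=1 \
    --size=1g --runtime=60 --time_based --write_lat_log=cephfs-lat
```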
Hi,
Quoting the page https://docs.ceph.com/en/latest/architecture/
> location query over a chatty session. The CRUSH algorithm allows a
> client to compute where objects should be stored, and enables the
> client to contact the primary OSD to store or retrieve the objects.
So clients contact the *primary* OSD to store/retrieve objects.
Why do clients not contact secondary (or even tertiary) OSDs to read
data? Would that not (potentially) result in greatly improved performance?
I'm sure there are good reasons for the current behaviour, but since you
have multiple copies of the same data, it seems logical to try to use
the nearest copy?
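For what it's worth, the client-side CRUSH computation the docs describe can be observed from the CLI (pool/object names below are placeholders); the mapping is calculated, not fetched from a lookup table, and the "p" entry marks the primary OSD that clients contact:

```shell
# Show where CRUSH places an object; no OSD is contacted for this.
ceph osd map mypool myobject
# The output includes the up/acting sets and the primary, e.g. "p3".
```

As far as I know, librados does expose flags to balance or localize reads across replicas for replicated pools, but reading from the primary is the default because it guarantees read-after-write consistency: the primary is the serialization point for writes.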
Curious :-)
MJ
Hi all,
I'm trying to understand whether CephFS is a good fit for the following scenario. In some OLD benchmarks, GlusterFS significantly beat CephFS when heavy file I/O was required. But... those were OLD benchmarks, so I'd like your thoughts on the matter.
What I need to perform are the following two steps:
Re-organize files (Step one)
I need to take a large directory structure (assumed to reside on CephFS) and "re-arrange" it via a copy or link mechanism. I want to make a full copy of the directory structure, but with simple disk-span chunking, so that all the files in the original end up in a set of folders where each folder is no larger than a fixed size. This is like what we did back in the days when we needed to write data in CDROM-sized chunks. There is a set of tools in the genisoimage package (dssplit and dirsplit) that will do this; Folder Axe was the MS Windows equivalent.
Presumably, this would put a large random-read and random-write load on the cluster. Since the data can be large (hundreds of GB, maybe up to 1 TB, with tens to hundreds of thousands of small files), I would need this to be well optimized. One mechanism that might be available is hard or soft links, so that no actual copying is done (I don't know whether CephFS/POSIX supports this). The linking approach would probably put a large strain on the MDS servers but not so much on the storage.
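(CephFS does support POSIX hard links, for what it's worth.) The chunking step could be sketched in plain shell along these lines; everything here, including the function name and the CD-sized default, is an illustration rather than a tested tool:

```shell
#!/usr/bin/env bash
# Sketch of disk-span chunking via hard links: no file data is copied,
# only directory entries are created, so the load lands on the MDS.
# chunk_tree SRC OUT MAX links every file under SRC into OUT/chunk-N
# folders, starting a new folder once MAX bytes would be exceeded.
chunk_tree() {
  src=$1 out=$2 max=$3
  chunk=0 used=0
  mkdir -p "$out/chunk-$chunk"
  while IFS= read -r -d '' f; do
    sz=$(stat -c %s "$f")
    # start a new folder when this file would overflow the byte budget
    if [ $((used + sz)) -gt "$max" ] && [ "$used" -gt 0 ]; then
      chunk=$((chunk + 1))
      used=0
      mkdir -p "$out/chunk-$chunk"
    fi
    ln "$f" "$out/chunk-$chunk/${f##*/}"   # hard link, no data copied
    used=$((used + sz))
  done < <(find "$src" -type f -print0 | sort -z)
}
```

A real version would need to handle basename collisions and files larger than the budget, but it shows why the linking approach stresses metadata rather than storage.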
Write to media (Step two)
I need to stream the chunked folders to a set of media devices (think tape drive) that can ingest at high speed (about 200 megabytes per second... yes, bytes). I'd like to make sure we can feed the ingest at the maximum rate, if possible. Whether we can write the folder chunks one at a time or in parallel (to multiple tape drives) remains to be seen. Presumably, this would put a large random-read load on the cluster. Once the media has been written successfully, the chunked copy can be deleted.
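The streaming step is commonly done with a large RAM buffer between the filesystem and the drive, so bursty CephFS reads don't starve the tape. A hedged sketch, with the chunk path and device name as assumptions:

```shell
# Stream one chunk folder to tape; mbuffer's RAM buffer absorbs read
# jitter from CephFS while the drive drains at its 200 MB/s line rate.
tar -C out/chunk-0 -cf - . \
  | mbuffer -m 4G -s 1M -P 90 -o /dev/nst0
```

The -P 90 option makes mbuffer wait until the buffer is 90% full before it starts writing, which helps keep the drive streaming instead of shoe-shining.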
Notes:
Currently, I'm planning for all access to be done via Linux servers. I'm eagerly watching the native Windows CephFS beta.
The server performing the chunking job will be the only reader/writer of the data.
The server performing the streaming job will also be the only reader/writer of the data.
If we can support parallel, then there may be 2-3 chunking servers and 2-3 streaming servers operating concurrently.
There are only a few systems in play... NOT hundreds of concurrent clients accessing the data.
One might assume that we could keep the raw data on cheaper disk and then "reconstruct" the copy on flash. In this scenario, we can stream from flash.
I'd definitely appreciate your feedback on whether CephFS would be a good fit.
Thanks in advance for your thoughts!
- Steve
Hi all,
we plan to add a kernel-client mount to a server in our DMZ. I can't find information on how to allow a Ceph client to access a Ceph cluster through a firewall. Does somebody have a link or sample configs for both iptables on the host itself and a transparent firewall between host and cluster?
Note: performance is not an issue here.
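In case a concrete starting point helps: a CephFS kernel client needs to reach the monitors (3300 for msgr2, 6789 for legacy msgr1) plus the 6800-7300 range used by OSDs, MDSs and MGRs. A sketch for the client host itself, with the cluster subnet as an assumption:

```shell
# Assumed cluster network; adjust to your environment.
CLUSTER=192.0.2.0/24
# Outbound from the DMZ host to the Ceph daemons:
iptables -A OUTPUT -d "$CLUSTER" -p tcp --dport 3300      -j ACCEPT  # mon, msgr2
iptables -A OUTPUT -d "$CLUSTER" -p tcp --dport 6789      -j ACCEPT  # mon, msgr1
iptables -A OUTPUT -d "$CLUSTER" -p tcp --dport 6800:7300 -j ACCEPT  # osd/mds/mgr
# Replies back in:
iptables -A INPUT  -s "$CLUSTER" -p tcp -m state --state ESTABLISHED -j ACCEPT
```

A transparent firewall between host and cluster would need the same ports permitted in its FORWARD chain. These are the Ceph default ports; check ms_bind_port_min/ms_bind_port_max if the cluster overrides them.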
Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
Hi Tom,
This is great! I will look into the PR.
Regarding the tests, the unit tests for amqp are actually here [1].
However, they are testing against a mock amqp library [2] and not a real
broker, so, I don't think it is critical to cover SSL there.
The disabled tests you pointed out are the integration tests;
currently they are disabled because we don't have the infrastructure in
teuthology (our test automation framework) to run them. To run them
locally, you would need to set "skip_push_tests" to "False".
Note that the amqp tests there run the rabbitmq server locally. If
you want to test against an existing broker running on a different machine,
you should comment out the "init_rabbitmq()" call and set that broker's
endpoint in the topic. As part of the tests we also create the
consumer for the notification (so we can validate the results); there [3]
we also assume that the broker runs locally, so if that is not the case it
will have to be changed as well.
Information on how to run them locally is here [4]. Let me know if you need
help with that.
Yuval
[1] https://github.com/ceph/ceph/blob/master/src/test/rgw/test_rgw_amqp.cc
[2] https://github.com/ceph/ceph/blob/master/src/test/rgw/amqp_mock.cc
[3]
https://github.com/ceph/ceph/blob/master/src/test/rgw/rgw_multi/tests_ps.py…
[4] https://github.com/ceph/ceph/blob/master/src/test/rgw/test_multi.md
On Wed, Feb 10, 2021 at 11:03 AM Schoonjans, Tom (RFI,RAL,-) <
Tom.Schoonjans(a)rfi.ac.uk> wrote:
> Hi Yuval,
>
>
> I opened a PR <https://github.com/ceph/ceph/pull/39392> on Github to add
> SSL support for AMQP connections. I haven’t been able to test it though as
> the AMQP unit tests all look disabled
> <https://github.com/ceph/ceph/blob/fe69a2abf44ad38e9c3523d24cb173dbb2ccd026/…>.
> Could you provide some guidance please?
>
> Many thanks,
>
> Tom
>
>
>
> Dr Tom Schoonjans
>
> Research Software Engineer - HPC and Cloud
>
> Rosalind Franklin Institute
> Harwell Science & Innovation Campus
> Didcot
> Oxfordshire
> OX11 0FA
> United Kingdom
>
> https://www.rfi.ac.uk
>
> The Rosalind Franklin Institute is a registered charity in England and
> Wales, No. 1179810 Company Limited by Guarantee Registered in England
> and Wales, No.11266143. Funded by UK Research and Innovation through
> the Engineering and Physical Sciences Research Council.
>
> On 29 Jan 2021, at 09:14, Yuval Lifshitz <ylifshit(a)redhat.com> wrote:
>
>
>
> On Fri, Jan 29, 2021 at 9:18 AM Schoonjans, Tom (RFI,RAL,-) <
> Tom.Schoonjans(a)rfi.ac.uk> wrote:
>
>> Hi Yuval,
>>
>>
>> What do I need to do if I want to switch to using a different exchange on
>> the RabbitMQ endpoint? Or change the amqp-ack-level option that was used?
>> Would you expect the same problem again? Will the existing connections to
>> the RabbitMQ server be cleanly terminated?
>>
>> I think that changing the ack level would take effect on the next publish
> (as this is not a feature of the connection, but the calling code), but to
> change the exchange (or any other parameter of the connection itself), or
> even creating a new topic to the same endpoint with a different exchange,
> you would need a restart :-(
> (tracking this here: https://tracker.ceph.com/issues/46127)
>
>
>> I tried the topic example
>> <https://www.rabbitmq.com/tutorials/tutorial-five-python.html> from the
>> RabbitMQ tutorial and I actually got the same behaviour as with Ceph:
>> messages sent before the consumer queue is attached are lost. From what I
>> understand this is a *feature* of this type of exchange. See also this
>> <https://stackoverflow.com/questions/6148381/rabbitmq-persistent-message-wit…> stackoverflow
>> post.
>>
>>
> This is interesting, and looks like a better model than what we
> currently have.
> We should declare our own fanout "gateway" exchange, connected to an "eat
> all" queue. Users may then connect their consumers directly to it, or via
> a topic exchange they declare. That would actually fix 2 issues:
> - durability of messages before clients are connected
> - exchange name configuration issues
>
>
>> Best,
>>
>> Tom
>>
>>
>>
>> On 28 Jan 2021, at 18:16, Yuval Lifshitz <ylifshit(a)redhat.com> wrote:
>>
>>
>>
>> On Thu, Jan 28, 2021 at 7:34 PM Schoonjans, Tom (RFI,RAL,-) <
>> Tom.Schoonjans(a)rfi.ac.uk> wrote:
>>
>>> Hi Yuval,
>>>
>>>
>>> Together with Tom Byrne I ran some more tests today while keeping an eye
>>> on the logs as well.
>>>
>>> We immediately noticed that the nodes were logging errors when uploading
>>> files like:
>>>
>>> 2021-01-28 16:10:45.825 7f56ff5cf700 1 ====== starting new request req=0x7f56ff5c87f0 =====
>>> 2021-01-28 16:10:45.828 7f5721e14700 1 AMQP connect: exchange mismatch
>>> 2021-01-28 16:10:45.828 7f5721e14700 1 ERROR: failed to create push endpoint: amqp://<username>:<password>@<my.rabbitmq.server>:5672 due to: pubsub endpoint configuration error: AMQP: failed to create connection to: amqp://<username>:<password>@<my.rabbitmq.server>:5672
>>> 2021-01-28 16:10:45.828 7f571ee0e700 1 ====== req done req=0x7f571ee077f0 op status=0 http_status=200 latency=0.0569997s ======
>>>
>>>
>>> Which resulted in no connections being established to the RabbitMQ
>>> server.
>>>
>>> Tom then restarted the Ceph services on one gateway node, which led to
>>> events being sent to RabbitMQ without blocking, but only when this particular
>>> node was picked by the boto3 upload request in the round-robin DNS.
>>>
>>> Restarting the Ceph service on all nodes fixed the problem and I got a
>>> nice steady stream of events to my consumer Python script!
>>>
>>>
>> We should fix that; no restart should be needed if one of the connection
>> parameters was wrong.
>>
>>
>>
>>> I did notice that any events that were sent while my consumer script was
>>> not running are lost, as they are not picked up after I restart the script.
>>> Any thoughts on this?
>>>
>>>
>> this is strange. in our code [1] we don't require immediate transfer of
>> messages.
>> how is the exchange declared?
>> can you check if this is happening when you send messages from a python
>> producer as well?
>>
>> [1] https://github.com/ceph/ceph/blob/master/src/rgw/rgw_amqp.cc#L575
>>
>>
>>
>>> Many thanks!!
>>>
>>> Best,
>>>
>>> Tom
>>>
>>>
>>>
>>>
>>> On 27 Jan 2021, at 16:21, Yuval Lifshitz <ylifshit(a)redhat.com> wrote:
>>>
>>>
>>> On Wed, Jan 27, 2021 at 5:34 PM Schoonjans, Tom (RFI,RAL,-) <
>>> Tom.Schoonjans(a)rfi.ac.uk> wrote:
>>>
>>>> Looks like there’s already a ticket open for AMQP SSL support:
>>>> https://tracker.ceph.com/issues/42902 (you opened it ;-))
>>>>
>>>> I will give a try myself if I have some time, but don’t hold your
>>>> breath with lockdown and home schooling. Also I am not much of a C++ coder.
>>>>
>>>> I need to go over the logs with Tom Byrne to see why it is not working
>>>> properly. And perhaps I will be able to come up with a fix then.
>>>>
>>>> However this is what I have run into so far today:
>>>>
>>>> 1. After configuring a bucket with a topic using the non-SSL port, I
>>>> tried a couple of uploads to this bucket. They all hung, which seemed
>>>> like something was very wrong, so I Ctrl-C’ed every time. After some time I
>>>> figured out from the RabbitMQ admin UI that Ceph was indeed connecting to
>>>> it, and since the connections remained, I killed them from the UI.
>>>>
>>>
>>> Sending the notification to the rabbitmq server is synchronous with the
>>> upload to the bucket, so if the server is slow or not acking the
>>> notification, the upload request will hang. Note that the upload itself is
>>> done first, but the reply to the client does not happen until the rabbitmq
>>> server acks.
>>>
>>> would be great if you can share the radosgw logs.
>>> maybe the issue is related to the user/password method we use? we use:
>>> AMQP_SASL_METHOD_PLAIN
>>>
>>> one possible workaround would be to set "amqp-ack-level" to "none". in
>>> this case the radosgw does not wait for an ack
>>>
>>> in "pacific" you could use "persistent topics" where the notifications
>>> are sent asynchronously to the endpoint.
>>>
>>> 2. I then wrote a python script with Pika to consume the events, hoping
>>>> that would stop the blocking. I had some minor success with this. Usually
>>>> the first three or four uploaded files would generate events that I could
>>>> consume with my script.
>>>>
>>>
>>> the radosgw is waiting for an ack from the broker, not the end consumer,
>>> so this should not have mattered...
>>> did you actually see any notifications delivered to the consumer?
>>>
>>>
>>>> However, the rest would block for ever. I repeated this a couple of
>>>> times but always the same result. I noticed that after I stopped uploading,
>>>> removed the bucket and the topic, the connection from Ceph in the RabbitMQ
>>>> UI remained. I killed it but it came back seconds later from another port
>>>> on the Ceph cluster. I ended up playing whack-a-mole with this until no
>>>> more connections would be established from Ceph to RabbitMQ. I probably
>>>> killed a 100 or so of them.
>>>>
>>>
>>> Once you remove the bucket, no new notifications can be sent. If you
>>> create the bucket again you may see notifications again (this is fixed in
>>> "pacific").
>>> Either way, even if the connection to the rabbitmq server is still
>>> open, no new notifications should be sent over it. Just having the
>>> connection open should not be an issue, but it would be nice to fix that as well:
>>> https://tracker.ceph.com/issues/49033
>>>
>>> 3. After this I couldn’t get any events sent anymore. There is no more
>>>> blocking when uploading, files get written but nothing else happens. No
>>>> connections are made anymore from Ceph to RabbitMQ.
>>>>
>>>> Hope this helps…
>>>>
>>>
>>> yes, this is very helpful!
>>>
>>>
>>>> Best,
>>>>
>>>> Tom
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> On 27 Jan 2021, at 13:04, Yuval Lifshitz <ylifshit(a)redhat.com> wrote:
>>>>
>>>>
>>>>
>>>> On Wed, Jan 27, 2021 at 11:33 AM Schoonjans, Tom (RFI,RAL,-) <
>>>> Tom.Schoonjans(a)rfi.ac.uk> wrote:
>>>>
>>>>> Hi Yuval,
>>>>>
>>>>>
>>>>> Switching to non-SSL connections to RabbitMQ allowed us to get things
>>>>> working, although currently it’s not very reliable.
>>>>>
>>>>
>>>> can you please add more about that? what reliability issues did you see?
>>>>
>>>>
>>>>> I will open a new ticket over this if we can’t fix things ourselves.
>>>>>
>>>>>
>>>> This would be great. We have ssl support for the kafka and http endpoints,
>>>> so, if you decide to give it a try, you can look at them as examples.
>>>> Let me know if you have questions or need help.
>>>>
>>>>
>>>>
>>>>> I will open an issue on the tracker as soon as my account request has
>>>>> been approved :-)
>>>>>
>>>>> Best,
>>>>>
>>>>> Tom
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On 26 Jan 2021, at 20:02, Yuval Lifshitz <ylifshit(a)redhat.com> wrote:
>>>>>
>>>>>
>>>>>
>>>>> On Tue, Jan 26, 2021 at 9:48 PM Schoonjans, Tom (RFI,RAL,-) <
>>>>> Tom.Schoonjans(a)rfi.ac.uk> wrote:
>>>>>
>>>>>> Hi Yuval,
>>>>>>
>>>>>>
>>>>>> I worked on this earlier today with Tom Byrne and I think I may be
>>>>>> able to provide some more information.
>>>>>>
>>>>>> I set up the RabbitMQ server myself, and created the exchange with
>>>>>> type ’topic’ before configuring the bucket.
>>>>>>
>>>>>> Not sure if this matters, but the RabbitMQ endpoint is reached over
>>>>>> SSL, using certificates generated with Letsencrypt.
>>>>>>
>>>>>>
>>>>> It actually does: we don't support amqp over ssl.
>>>>> Feel free to open a tracker for that - as we should probably support
>>>>> it!
>>>>> But note that it would probably be backported only to versions later
>>>>> than nautilus.
>>>>>
>>>>>
>>>>>
>>>>>> Many thanks,
>>>>>>
>>>>>> Tom
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On 26 Jan 2021, at 19:37, Yuval Lifshitz <ylifshit(a)redhat.com> wrote:
>>>>>>
>>>>>> Hi Tom,
>>>>>> Did you create the exchange in rabbitmq? The RGW does not create it
>>>>>> and assumes it already exists.
>>>>>> Could you increase the log level in RGW and see if there are more log
>>>>>> messages that have "AMQP" in them?
>>>>>>
>>>>>> Thanks,
>>>>>>
>>>>>> Yuval
>>>>>>
>>>>>> On Tue, Jan 26, 2021 at 7:33 PM Byrne, Thomas (STFC,RAL,SC) <
>>>>>> tom.byrne(a)stfc.ac.uk> wrote:
>>>>>>
>>>>>>> Hi all,
>>>>>>>
>>>>>>> We've been trying to get RGW Bucket notifications working with a
>>>>>>> RabbitMQ endpoint on our Nautilus 14.2.15 cluster. The gateway host can
>>>>>>> communicate with the rabbitMQ server just fine, but when RGW tries to send
>>>>>>> a message to the endpoint, the message never appears in the queue, and we
>>>>>>> get this error from in the RGW logs:
>>>>>>>
>>>>>>> 2021-01-26 16:28:17.271 7f0468b1f700 1 push to endpoint AMQP(0.9.1)
>>>>>>> Endpoint
>>>>>>> URI: amqp://user:pass@host:5671
>>>>>>> Topic: ceph-topic-test
>>>>>>> Exchange: ceph-test
>>>>>>> Ack Level: broker failed, with error: -4098
>>>>>>>
>>>>>>> We've confirmed the URI is correct, and that the gateway host can
>>>>>>> send messages to the RabbitMQ via a standalone script (using the same
>>>>>>> information as in the URI). Does anyone have any hints about how to dig
>>>>>>> into this?
>>>>>>>
>>>>>>> Cheers,
>>>>>>> Tom
>>>>>>>
>>>>>>> _______________________________________________
>>>>>>> ceph-users mailing list -- ceph-users(a)ceph.io
>>>>>>> To unsubscribe send an email to ceph-users-leave(a)ceph.io
>>>>>>
>>>>>>
>>
>
Hi,
I'd like to know how the DB device is expected to be handled by "orch osd rm".
What I see is that the DB device on SSD is untouched when the OSD on HDD is removed
or replaced. "orch device zap" removes the PV, VG and LV of the data device,
but doesn't touch the DB LV on the SSD.
To remove an OSD permanently, do I need to manually clean up the DB LV on the SSD?
To replace an OSD, is the old DB LV going to be reused for the new OSD,
or will a new DB LV be created?
I am asking because, to replace an OSD, when the OSD was removed,
I manually removed the DB LV on the SSD. Now, when I try to add the new OSD, --dry-run
doesn't show a DB device.
```
# cat osd-spec.yaml
service_type: osd
service_id: osd-spec
placement:
  hosts:
    - ceph-osd-1
spec:
  #objectstore: bluestore
  #block_db_size: 32212254720
  #block_db_size: 64424509440
  data_devices:
    rotational: 1
  db_devices:
    #rotational: 0
    size: ":500GB"
  #unmanaged: true
# ceph orch apply osd -i osd-spec.yaml --dry-run
+---------+----------+------------+----------+----+-----+
|SERVICE |NAME |HOST |DATA |DB |WAL |
+---------+----------+------------+----------+----+-----+
|osd |osd-spec |ceph-osd-1 |/dev/sdd |- |- |
+---------+----------+------------+----------+----+-----+
```
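For context, the manual DB-LV cleanup described above usually looks something like this on the OSD host (VG/LV names are placeholders, not from this cluster):

```shell
# Locate the orphaned DB LV left behind after "orch osd rm";
# ceph-volume tags its LVs, so lv_tags helps identify the right one.
lvs -o lv_name,vg_name,lv_size,lv_tags
# Destroy it so the SSD space is offered to the next OSD:
ceph-volume lvm zap --destroy <vg_name>/<lv_name>
```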
Any clues?
Thanks!
Tony
I'm happy to announce another release of the go-ceph API
bindings. This is a regular release following our every-two-months release
cadence.
https://github.com/ceph/go-ceph/releases/tag/v0.8.0
Changes in the release are detailed in the link above.
The bindings aim to play a similar role to the "pybind" python bindings in the
ceph tree but for the Go language. These API bindings require the use of cgo.
There are already a few consumers of this library in the wild, including the
ceph-csi project.
Specific questions, comments, bugs etc are best directed at our github issues
tracker.
--
John Mulligan
phlogistonjohn(a)asynchrono.us
jmulligan(a)redhat.com
_______________________________________________
Dev mailing list -- dev(a)ceph.io
To unsubscribe send an email to dev-leave(a)ceph.io
Hi all,
We are testing our S3 Ceph endpoints and we are not satisfied with the
speed. Our results are around 120-150 MB/s, depending
on smaller/bigger files. This is good for a 1 Gbps connection, but not for
10GE or more.
We've tried the most recent versions of the AWS CLI, s3cmd, s4cmd, s3fs,
etc. Of course we are using multipart upload/download, which is
a precondition for parallel upload/download. We also tried multi-threaded
(25 or more threads) transfer in s4cmd, but still don't get proper
results.
As proof of concept that high speed can be achieved, we have written a
small bash script which uses multipart & parallel transfer and can
saturate at least 10GE without problems.
I would like to ask whether you know of a suitable program and parameters
with which we can saturate n x 10GE if needed?
We are using the latest Nautilus.
The S3 gateways have much more compute power and internet bandwidth than
is used right now.
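One thing worth ruling out first (values below are examples, not a recommendation): the AWS CLI caps S3 transfers at 10 concurrent requests with 8 MB parts by default, which can plateau well below 10GE regardless of the gateway's capacity:

```shell
# Raise the AWS CLI's built-in S3 transfer concurrency and part size.
aws configure set default.s3.max_concurrent_requests 100
aws configure set default.s3.multipart_chunksize 64MB
# Then transfer against the RGW endpoint (URL is a placeholder):
aws s3 cp ./bigfile s3://mybucket/ --endpoint-url https://s3.example.com
```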
Thank you
Regards
Michal Strnad
Hi,
We have an issue in our cluster (octopus 15.2.7) where we’re unable to remove orphaned objects from a pool, despite the fact that these objects can be listed with “rados ls”.
Here is an example of an orphaned object which we can list (not sure why multiple objects are returned with the same name…related to the issue, perhaps?)
rados ls -p default.rgw.buckets.data | grep -i 5a5c812a-3d31-xxxx-xxxx-xxxxxxxxxxxx.4811659.83__shadow_anon_backup_xxxx_xx_xx_090109_7812500.bak.vLHmbxS4DAnRMDVjBYG-5X6iSmepDD6
5a5c812a-3d31-xxxx-xxxx-xxxxxxxxxxxx.4811659.83__shadow_anon_backup_xxxx_xx_xx_090109_7812500.bak.vLHmbxS4DAnRMDVjBYG-5X6iSmepDD6
5a5c812a-3d31-xxxx-xxxx-xxxxxxxxxxxx.4811659.83__shadow_anon_backup_xxxx_xx_xx_090109_7812500.bak.vLHmbxS4DAnRMDVjBYG-5X6iSmepDD6
5a5c812a-3d31-xxxx-xxxx-xxxxxxxxxxxx.4811659.83__shadow_anon_backup_xxxx_xx_xx_090109_7812500.bak.vLHmbxS4DAnRMDVjBYG-5X6iSmepDD6
5a5c812a-3d31-xxxx-xxxx-xxxxxxxxxxxx.4811659.83__shadow_anon_backup_xxxx_xx_xx_090109_7812500.bak.vLHmbxS4DAnRMDVjBYG-5X6iSmepDD6
5a5c812a-3d31-xxxx-xxxx-xxxxxxxxxxxx.4811659.83__shadow_anon_backup_xxxx_xx_xx_090109_7812500.bak.vLHmbxS4DAnRMDVjBYG-5X6iSmepDD6
5a5c812a-3d31-xxxx-xxxx-xxxxxxxxxxxx.4811659.83__shadow_anon_backup_xxxx_xx_xx_090109_7812500.bak.vLHmbxS4DAnRMDVjBYG-5X6iSmepDD6
5a5c812a-3d31-xxxx-xxxx-xxxxxxxxxxxx.4811659.83__shadow_anon_backup_xxxx_xx_xx_090109_7812500.bak.vLHmbxS4DAnRMDVjBYG-5X6iSmepDD6
5a5c812a-3d31-xxxx-xxxx-xxxxxxxxxxxx.4811659.83__shadow_anon_backup_xxxx_xx_xx_090109_7812500.bak.vLHmbxS4DAnRMDVjBYG-5X6iSmepDD6
5a5c812a-3d31-xxxx-xxxx-xxxxxxxxxxxx.4811659.83__shadow_anon_backup_xxxx_xx_xx_090109_7812500.bak.vLHmbxS4DAnRMDVjBYG-5X6iSmepDD6
5a5c812a-3d31-xxxx-xxxx-xxxxxxxxxxxx.4811659.83__shadow_anon_backup_xxxx_xx_xx_090109_7812500.bak.vLHmbxS4DAnRMDVjBYG-5X6iSmepDD6
And the error message when we try to stat / rm the object:
rados stat -p default.rgw.buckets.data 5a5c812a-3d31-xxxx-xxxx-xxxxxxxxxxxx.4811659.83__shadow_anon_backup_xxxx_xx_xx_090109_7812500.bak.vLHmbxS4DAnRMDVjBYG-5X6iSmepDD6
error stat-ing default.rgw.buckets.data/5a5c812a-3d31-xxxx-xxxx-xxxxxxxxxxxx.4811659.83__shadow_anon_backup_xxxx_xx_xx_090109_7812500.bak.vLHmbxS4DAnRMDVjBYG-5X6iSmepDD6: (2) No such file or directory
rados -p default.rgw.buckets.data rm 5a5c812a-3d31-xxxx-xxxx-xxxxxxxxxxxx.4811659.83__shadow_anon_backup_xxxx_xx_xx_090109_7812500.bak.vLHmbxS4DAnRMDVjBYG-5X6iSmepDD6
error removing default.rgw.buckets.data>5a5c812a-3d31-xxxx-xxxx-xxxxxxxxxxxx.4811659.83__shadow_anon_backup_xxxx_xx_xx_090109_7812500.bak.vLHmbxS4DAnRMDVjBYG-5X6iSmepDD6: (2) No such file or directory
The bucket with id "5a5c812a-3d31-xxxx-xxxx-xxxxxxxxxxxx.4811659.83” was deleted from radosgw a few months ago, but we still have approximately 450,000 objects with this bucket id that are orphaned:
cat orphan-list-202101191211.out | grep -i 5a5c812a-3d31-xxxx-xxxx-xxxxxxxxxxxx.4811659.83 | wc -l
448683
I can also see from our metrics that prior to deletion there was about 10TB of compressed data stored in this bucket, and this has not been reclaimed in the pool usage after the bucket was deleted.
Anyone have any suggestions on how we can remove these objects and reclaim the space?
We’re not using snapshots or cache tiers in our environment.
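One avenue that may be worth ruling out (a guess, not a diagnosis): objects living outside the default RADOS namespace are invisible to a plain "rados stat/rm", which targets the default namespace only. Listing with namespaces shown would confirm or exclude that:

```shell
# List across all namespaces; output is prefixed with the namespace.
rados ls -p default.rgw.buckets.data --all | grep 4811659.83
# If a namespace shows up, operate inside it (placeholders below):
rados -p default.rgw.buckets.data -N "<namespace>" rm "<object-name>"
```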
Thanks,
James.