Hi,
When writing a RADOS/RBD client I noticed that the sequence:
r = rbd_aio_readv(ri->ri_image, iov, iovcnt, offset, comp);
r = rbd_aio_wait_for_complete(comp);
nbytes = rbd_aio_get_return_value(comp);
returns the number of bytes read in nbytes.
But if I do the same with `rbd_aio_write(v)`, I only ever receive
zero as the return value...
I would have expected that rbd_aio_get_return_value would more
or less function like aio_return(2):
DESCRIPTION
The aio_return() system call returns the final status of the asynchronous
I/O request associated with the structure pointed to by iocb.
RETURN VALUES
If the asynchronous I/O request has completed, the status is returned as
described in read(2), write(2), or fsync(2). Otherwise, aio_return()
returns -1 and sets errno to indicate the error condition.
So I was expecting the number of bytes written?
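For reference, here is a minimal sketch of the write path I am testing
(assume `image` is an already-open rbd_image_t; buf, len and offset are
placeholders, and error handling is trimmed):

#include <sys/types.h>
#include <rbd/librbd.h>

/* Sketch only: mirrors the read sequence above, but with rbd_aio_write(). */
static ssize_t write_and_check(rbd_image_t image, uint64_t offset,
                               const char *buf, size_t len)
{
    rbd_completion_t comp;
    int r = rbd_aio_create_completion(NULL, NULL, &comp);
    if (r < 0)
        return r;

    r = rbd_aio_write(image, offset, len, buf, comp);
    if (r < 0) {
        rbd_aio_release(comp);
        return r;
    }

    rbd_aio_wait_for_complete(comp);
    /* For reads this is the byte count; for writes I only ever see 0. */
    ssize_t ret = rbd_aio_get_return_value(comp);
    rbd_aio_release(comp);
    return ret;
}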
--WjW
Hi,
I have created an S3 bucket backed by Ceph, and I am listing all the
objects in the bucket through a Java S3 client via the S3 object gateway.
The listing always fails, sometimes after listing 1k+ blobs and sometimes
after 2k+ blobs, and I am not able to figure out how to debug this issue.
This is the exception I am getting:
com.amazonaws.services.s3.model.AmazonS3Exception: null (Service:
Amazon S3; Status Code: 500; Error Code: UnknownError; Request ID:
tx00000000000000000e7df-005e626049-1146-rook-ceph-store; S3 Extended
Request ID: 1146-rook-ceph-store-rook-ceph-store), S3 Extended Request
ID: 1146-rook-ceph-store-rook-ceph-store
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1799)
I tried with Boto as well and it is the same error there.
I have checked the S3 gateway pod logs but I couldn't find anything
relevant there, so please let me know how I can debug this issue or the
possible reasons for it.
I have attached the Java code I am using for reference.
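If it helps, I can also bump the gateway log verbosity and reproduce the
failure; I was planning to add something like the following to the RGW
section of ceph.conf (the section name below is only a placeholder, not
the actual daemon name in my Rook deployment):

# placeholder section name; use the actual rgw daemon name
[client.rgw.my-store]
    debug rgw = 20
    debug ms = 1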
Thanks & Regards,
Rajiv
UIPath
Hi,
I have created an S3 bucket backed by Ceph, and when I upload a blob whose
name is longer than 220 characters the upload fails; anything shorter than
that works.
I have installed Ceph on CentOS and the file system type is XFS.
Is there any configuration through which I can overcome the 256 character
limitation on object name length?
Thanks & Regards,
Rajiv
Docubetter Meeting -- 2020 Mar 11
There is a general documentation meeting called the "DocuBetter Meeting",
and it is held every two weeks. The next DocuBetter Meeting will be on
March 11, 2020 at 0830 PST, and will run for thirty minutes. Everyone with
a documentation-related request or complaint is invited. The meeting will
be held here: https://bluejeans.com/908675367
Send documentation-related requests and complaints to me by replying to
this email and CCing me at zac.dover(a)gmail.com.
This message will be sent to dev(a)ceph.io every Monday morning, North
American time.
The next DocuBetter meeting is scheduled for:
11 Mar 2020 0830 PST
11 Mar 2020 1630 UTC
12 Mar 2020 0230 AEST
Etherpad: https://pad.ceph.com/p/Ceph_Documentation
Meeting: https://bluejeans.com/908675367
Thanks, everyone.
Zac Dover
Hi everyone,
Starting today, please direct your pull requests at the 'octopus' branch
if they need to go into the octopus release.
Until v15.2.0 is created, we will periodically merge octopus back into
master so that we don't have to do a bunch of cherry-picks.
Feel free to merge post-octopus work directly into master.
Thanks!
sage
Hey all,
I wanted to get some input on how to divvy up the new baremetal builders
for our CI (I decided to name them braggi).
Friday, just as a litmus test, I set up 5 with CentOS 7, 5 with CentOS
8, and 10 with Bionic.
I'm SUPER happy to report that CentOS 7 and 8 builds (packaging AND
containers!) went from between 2 - 2.5 hours to UNDER 1 HOUR! Bionic
builds went from 1.5 - 2.5 hours to 40-50min!
So our current setup is:
- We have a few mira running ceph-volume tests
- We have 8 irvingi that each host 2 VMs (the 16 slave-{ubuntu,centos}##
builders)
- We have a few VMs I created in RHV to do CentOS 8 builds as a stopgap
when CentOS 8 came out (there were no cloud images at the time)
- When none of the aforementioned builders are available, an ephemeral
OpenStack instance is spun up, which is usually a bit slower and always less
reliable than the slave-* builders
My proposal is:
- 3 braggi with CentOS 7 (default, notcmalloc)
- 6 braggi with CentOS 8 (default, notcmalloc)
- 10 braggi with Bionic (default, notcmalloc, crimson)
- 3 braggi with OpenSUSE
As a reminder, the Bionic slaves build packages for Xenial and Bionic
using pbuilder, so we need more of them.
Of course we can always shuffle around a bit whenever we see a
particular distro waiting on a builder more than others.
Then we can take the irvingi (which would eliminate the slave-*
builders) and use 4-6 of them for smaller, less resource-intensive jobs
(maybe make check, ceph-dev-setup, kernel, nfs-ganesha, etc.).
The other 2-4 irvingi could go to the ceph-ansible and ceph-container
teams on 2.jenkins.ceph.com.
The ultimate goal here is to rely less (ideally not at all) on OVH to
provide ephemeral Jenkins slaves, so some shuffling around of OSes is
inevitable to get to that point.
irvingi: https://wiki.sepia.ceph.com/doku.php?id=hardware:irvingi
braggi: https://wiki.sepia.ceph.com/doku.php?id=hardware:braggi
--
David Galloway
Systems Administrator, RDU
Ceph Engineering
IRC: dgalloway
+dev(a)ceph.io
On Sun, Mar 8, 2020 at 5:16 PM Abhinav Singh
<singhabhinav9051571833(a)gmail.com> wrote:
>
> I am trying to implement Jaeger tracing in RGW, and I need some advice
> on which functions I should actually trace to get a good picture of the
> actual performance of clusters.
>
> So far I have been able to deduce the following:
> 1. I think we need to add tracing where rgw communicates with librados
> (particularly in librgw, where the communication actually happens). The
> HTTP request and response should not be considered for tracing, because
> they depend on the client's internet speed.
> 2. In librgw, functions like this one here
> <https://github.com/ceph/ceph/blob/0360bea127397a41eb282a1eef9af4ff4477b9d4/…>
> and its corresponding overloads, and also this function here
> <https://github.com/ceph/ceph/blob/0360bea127397a41eb282a1eef9af4ff4477b9d4/…>
> and its corresponding overloaded functions.
> 3. I see that pools are ultimately used to enter the CRUSH algorithm when
> writing data, so I think the creation of pools should also be taken into
> account while tracing (creation of a pool should be the main span, and these
> functions
> <https://github.com/ceph/ceph/blob/0360bea127397a41eb282a1eef9af4ff4477b9d4/…>
> should be its child spans).
>
>
> Bucket functionality like that of this
> <https://github.com/ceph/ceph/blob/0360bea127397a41eb282a1eef9af4ff4477b9d4/…>
> does not require tracing because it consists of HTTP requests.
>
> Any kind of guidance will be of great help.
>
> Thank You.
> _______________________________________________
> ceph-users mailing list -- ceph-users(a)ceph.io
> To unsubscribe send an email to ceph-users-leave(a)ceph.io
>
--
Cheers,
Brad
Hello all,
I don't know how many of you folks are aware, but early last year,
Datto (full disclosure, my current employer, though I'm sending this
email pretty much on my own) released a tool called "zfs2ceph" as an
open source project[1]. This project was the result of a week-long
internal hackathon (SUSE folks may be familiar with this concept from
their own "HackWeek" program[2]) that Datto held internally in
December 2018. I was a member of that team, helping with research,
setting up infra, and making demos for it.
Anyway, I'm bringing it up here because I'd had some individual
conversations with folks who suggested that I raise it on the mailing list
and talk about some of the motivations and what I'd like to see from Ceph
on this in the future.
The main motivation here was to provide a seamless mechanism to
transfer ZFS based datasets with the full chain of historical
snapshots onto Ceph storage with as much fidelity as possible to allow
a storage migration without requiring 2x-4x system resources. Datto is
in the disaster recovery business, so working backups with full
history are extremely valuable to Datto, its partners, and their
customers. That's why the traditional path of just syncing the current
state and letting the old stuff die off is not workable. At the scale
of having literally thousands of servers with each server having
hundreds of terabytes of ZFS storage (adding up in aggregate to
hundreds of petabytes of data), there's no feasible way to consider
alternative storage options without having a way to transfer datasets
from ZFS to Ceph so that we can cut over servers to being Ceph nodes
with minimal downtime and near zero new server purchasing requirements
(there's obviously a little bit of extra hardware needed to "seed" a
Ceph cluster, but that's fine).
The current zfs2ceph implementation handles zvol sends and transforms
them into rbd v1 import streams. I don't recall exactly why we
didn't use v2, but I think there were some gaps that made it
unusable for our case back then (we were using Ceph
Luminous). I'm unsure whether that has improved since, though it wouldn't
surprise me if it has. However, zvols aren't enough for us. Most of
our ZFS datasets are in the ZFS filesystem form, not the zvol block
device form. Unfortunately, there is no import equivalent for CephFS,
which blocked an implementation of this capability[3]. I had filed a
request about it on the issue tracker, but it was rejected on the
basis that something was already being worked on[4]. However, I haven't seen
anything exactly like what I need land in CephFS yet.
The code is pretty simple, and I think it would be easy enough for it
to be incorporated into Ceph itself. However, there's a greater
question here. Is there interest from the Ceph developer community in
developing and supporting strategies to migrate from legacy data
stores to Ceph with as much fidelity as reasonably possible?
Personally, I hope so. My hope is that this post generates some
interesting conversation about how to make this a better supported
capability within Ceph for block and filesystem data. :)
Best regards,
Neal
[1]: https://github.com/datto/zfs2ceph
[2]: https://hackweek.suse.com/
[3]: https://github.com/datto/zfs2ceph/issues/1
[4]: https://tracker.ceph.com/issues/40390
--
真実はいつも一つ!/ Always, there's only one truth!
Hi
I wonder what the "--setuser-match-path=" argument is for, and what the ceph daemon will do when it is set.
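For context, the only place I have seen it so far is a ceph.conf fragment
roughly like this one (copied, if I remember correctly, from one of the
upgrade guides, so it is only an example, not something I have verified):

[global]
    setuser match path = /var/lib/ceph/$type/$cluster-$id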
Thanks!
Martin, Chen
IOTG, Software Engineer
021-61164330