hey Gal and Eric,
in today's standup, we discussed the version of our apache arrow
submodule. it's currently pinned at 6.0.1, which was tagged in nov.
2021. the centos 9 builds are using the system package
libarrow-devel-9.0.0. arrow's upstream recently tagged an 11.0.0
release.
as far as i know, there still aren't any system packages for ubuntu,
so we're likely to be stuck with the submodule for quite a while. how
do you guys want to handle these updates? is it worth trying to update
before the reef release?
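for reference, i'd expect the bump itself to be mechanical, something
like the following (assuming the submodule lives at src/arrow and
arrow keeps its usual tag naming):

    cd src/arrow
    git fetch origin --tags
    git checkout apache-arrow-11.0.0   # upstream's tag naming convention
    cd ../..
    git add src/arrow
    git commit -m 'arrow: update submodule to 11.0.0'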
hi Ernesto and lists,
> [1] https://github.com/ceph/ceph/pull/47501
are we planning to backport this to quincy so we can support centos 9
there? enabling that upgrade path on centos 9 was one of the
conditions for dropping centos 8 support in reef, which i'm still keen
to do.
if not, can we find another resolution to
https://tracker.ceph.com/issues/58832? as i understand it, all of
those python packages exist in centos 8. do we know why they were
dropped for centos 9? have we looked into making those available in
epel? (cc Ken and Kaleb)
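for reference, a quick way to check what's actually available on
centos 9 (the package name below is just an example, not necessarily
one of the packages from the tracker issue):

    # on a centos stream 9 host
    dnf install -y epel-release
    dnf repoquery --info python3-routes   # illustrative package name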
On Fri, Sep 2, 2022 at 12:01 PM Ernesto Puerta <epuertat(a)redhat.com> wrote:
>
> Hi Kevin,
>
>>
>> Isn't this one of the reasons containers were pushed, so that the packaging isn't as big a deal?
>
>
> Yes, but the Ceph community has a strong commitment to provide distro packages for those users who are not interested in moving to containers.
>
>> Is it the continued push to support lots of distros without using containers that is the problem?
>
>
> If not a problem, it definitely makes it more challenging. Compiled components often sort this out by statically linking deps whose packages are not widely available in distros. The approach we're proposing here would be the closest equivalent to static linking for interpreted code (bundling).
>
> Thanks for sharing your questions!
>
> Kind regards,
> Ernesto
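for concreteness, the bundling Ernesto describes might look roughly
like this for python deps (the file names and paths are illustrative):

    # build wheels for all deps, then install them into a directory
    # that ships inside the package itself
    pip wheel -r requirements.txt -w vendor/
    pip install --no-index --find-links vendor/ --target bundled/ -r requirements.txt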
Hello,
There are so many ways to build Ceph from source that I'm pretty
confused, so I need some help.
I want to build Ceph regularly from "main"/"master" and create Debian
packages from it.
I somehow have a solution that works (roughly the sketch below, using
the scripts shipped in the ceph tree), but what's the best practice
for doing this right now?
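Roughly this, on a Debian/Ubuntu build host:

    git clone --recurse-submodules https://github.com/ceph/ceph.git
    cd ceph
    ./install-deps.sh   # install the build dependencies
    ./make-debs.sh      # build the .debs (output under /tmp/release by default, I think)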
On top of that, I didn't find ANY way to build a "crimson-osd" package
from the latest sources, even after spending hours on it; the closest
I got was building just the binary, as sketched below. What's the
correct way to do this?
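What I tried (assuming WITH_SEASTAR is still the cmake switch that
enables crimson):

    ./install-deps.sh
    ./do_cmake.sh -DWITH_SEASTAR=ON
    cd build
    ninja crimson-osd   # builds the binary; I found no deb packaging rule for it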
Thanks!
Sascha
Details of this release are summarized here:
https://tracker.ceph.com/issues/59542#note-1
Release Notes - TBD
Seeking approvals for:
smoke - Radek, Laura
rados - Radek, Laura
rook - Sébastien Han
cephadm - Adam K
dashboard - Ernesto
rgw - Casey
rbd - Ilya
krbd - Ilya
fs - Venky, Patrick
upgrade/octopus-x (pacific) - Laura (looks the same as in 16.2.8)
upgrade/pacific-p2p - Laura
powercycle - Brad (SELinux denials)
ceph-volume - Guillaume, Adam K
Thx
YuriW
We want to do the next urgent point release for pacific 16.2.13 ASAP.
The tip of the current pacific branch will be used as a base for this
release and we will build it later today.
Dev leads - if you have any outstanding PRs that must be included, pls
merge them now.
Thx
YuriW
On Wed, Apr 12, 2023 at 12:32 PM Marc <Marc(a)f1-outsourcing.eu> wrote:
> >
> > We are excited to share with you the latest statistics from our Ceph
> > public telemetry dashboards <https://telemetry-public.ceph.com/> .
>
> :)
>
> > One of the things telemetry helps us to understand is version adoption
> > rate. See, for example, the trend of Quincy <https://telemetry-
> > public.ceph.com/d/ZFYuv1qWz/telemetry?viewPanel=28&orgId=1&var-
> > display=Minor&var-major=17&var-minor=All&var-daemons=All> deployments
> > in the community.
> >
>
> What is the 'weird' drop on the 5th of February?
>
> It is due to issues we had with the lab; the service was a bit unstable.
> > Ceph telemetry is on an opt-in basis. You can opt-in with:
> > `ceph telemetry on`
> > Learn more here
> > <https://docs.ceph.com/en/latest/mgr/telemetry/#enabling-telemetry> .
> >
> > Help us cross the exabyte mark by opting-in today!
> > Learn more about the latest developments around Telemetry
> > <https://sched.co/1JKZ2> at the upcoming Cephalocon.
> >
>
> :)
Hi,
The cluster is on Pacific and was deployed by cephadm with containers.
The use case is importing OSDs after a host OS reinstallation.
All OSDs are SSDs with DB/WAL and data colocated.
I did some research, but wasn't able to find a complete working solution.
Wondering if anyone has experience with this?
What needs to be done before the host OS reinstallation, and what's
needed after?
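What I have so far, a sketch (assuming `ceph cephadm osd activate` is
available in our Pacific release; paths are illustrative):

    # before the reinstall: record the OSD layout and back up the
    # host's ceph config and keyrings
    ceph osd tree
    cp -a /etc/ceph /root/ceph-backup/

    # after the reinstall: install a container runtime and cephadm,
    # restore /etc/ceph, confirm the host is still known to the
    # orchestrator (ceph orch host ls), then activate the existing OSDs:
    ceph cephadm osd activate <host>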
Thanks!
Tony
This came up again in the dev summit at cephalocon, so I figure it's
worth reviving this thread.
First, I'll try to recap the situation (Ilya, feel free to correct me
here). My understanding of the issue is that rbd has features (most
notably encryption) that depend on the librados SPARSE_READ operation
accurately reflecting which ranges have been written or trimmed at a
4k granularity. This appears to work correctly on replicated pools on
bluestore, but erasure coded pools always return the full object
contents up to the object size, including regions the client has not
written to.
I don't think this was originally a guarantee of the interface. I
think the original guarantee was simply that SPARSE_READ would return
any non-zero regions, not that it was guaranteed not to return
unwritten or trimmed regions. The OSD does not track this state above
the ObjectStore layer -- SPARSE_READ and MAPEXT both rely directly on
ObjectStore::fiemap. MAPEXT actually returns -ENOTSUPP on erasure
coded pools.
Adam: the observed behavior is that fiemap on bluestore does
accurately reflect the client's written extents at a 4k granularity.
Is that reliable, or is it a property of only some bluestore
configurations?
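For reference, a quick way to check what a given OSD is configured
with (assuming the fiemap granularity tracks bluestore_min_alloc_size,
which is my assumption rather than something I've verified):

    # query a running OSD over its admin socket; note that the value is
    # baked in at mkfs time, so the current config may not match OSDs
    # created on older releases
    ceph daemon osd.0 config get bluestore_min_alloc_size_ssd
    ceph daemon osd.0 config get bluestore_min_alloc_size_hdd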
As it appears desirable that we actually guarantee this, we probably
want to do two things:
1) codify this guarantee in the ObjectStore interface (4k in all
cases?), and ensure that all configurations satisfy it going forward
(including seastore)
2) update the ec implementation to track allocation at the granularity
of an EC stripe. HashInfo is the natural place to put the
information, probably? We'll also need to implement ZERO. Radek: I
know you're looking into EC for crimson, perhaps you can evaluate how
much work would be required here?
-Sam
On Mon, May 2, 2022 at 5:21 PM Sam Just <sjust(a)redhat.com> wrote:
>
> I don't think fiemap was ever intended as anything more than an
> optimization to permit a user to avoid transferring unnecessary
> zeroes. SeaStore will probably not track sparseness at more than a 4k
> granularity. I don't think the EC implementation is clever about
> sparse reads/writes at all since that information would probably need
> to be duplicated above the objectstore in the object_info.
> -Sam
>
> On Mon, May 2, 2022 at 7:47 AM Jeff Layton <jlayton(a)redhat.com> wrote:
> >
> > On Mon, 2022-05-02 at 16:41 +0200, Ilya Dryomov wrote:
> > > On Mon, May 2, 2022 at 4:22 PM Jeff Layton <jlayton(a)redhat.com> wrote:
> > > >
> > > > (sorry for the resend, but the first message got rejected by the list because it was from an unsubscribed address)
> > > >
> > > > On Mon, 2022-05-02 at 14:05 +0200, Ilya Dryomov wrote:
> > > > > Hi Sam,
> > > > >
> > > > > I wanted to clarify ObjectStore::fiemap API and sparse-read OSD op
> > > > > guarantees as this came up in Jeff's fscrypt work and just recently in
> > > > > RBD as well.
> > > > >
> > > > > In fscrypt for kcephfs, Jeff has opted to use sparse-read to ensure
> > > > > that file holes (which must contain all zeroes logically) don't get
> > > > > "decrypted" into seemingly random junk. (Unlike ecryptfs, fscrypt
> > > > > framework doesn't attempt to protect the information about existence
> > > > > and location of holes in files, so logical holes generally correspond
> > > > > to physical holes.)
> > > > >
> > > >
> > > > The fscrypt client infrastructure generally prevents you from reading a
> > > > file when you don't have the key, but you could always analyze the
> > > > backing device and determine where the holes are. The situation with
> > > > cephfs is analogous.
> > >
> > > Yup.
> > >
> > > >
> > > > I imagine this is the same with ecryptfs though. I don't believe it
> > > > fills in the holes when you do a write past the EOF either. Were you
> > > > thinking of LUKS? That operates at the device level, so finding holes
> > > > there is a much different matter.
> > >
> > > I'm pretty sure ecryptfs always fills holes by encrypting logical zeroes and
> > > writing the resulting ciphertext out to the backing filesystem. Quoting the
> > > FAQ:
> > >
> > > eCryptfs does not currently support sparse files. Sequences of encrypted
> > > extents with all 0's could be interpreted as sparse regions in eCryptfs
> > > without too much implementation complexity. However, this would open up
> > > a possible attack vector, since the fact that certain segments of data are
> > > all 0's could betray strategic information that the user does not
> > > necessarily want to reveal to an attacker. For instance, if the attacker
> > > knows that a certain database file with patient medical data keeps
> > > information about viral infections in one region of the file and
> > > information about diabetes in another section of the file, then the very
> > > fact that the segment for viral infection data is populated with data at
> > > all would reveal that the patient has a viral infection.
> > >
> >
> > I stand corrected then! That tends to be pretty horrible for performance
> > though. Prepare to wait for a while if you do create a file and then
> > start writing at the 2G offset.
> >
> > In principle, we could also have the client fill in holes instead. It
> > may be worthwhile to have a mode where it does that. That might also
> > give us a way to support this on non-bluestore pools, if it's not
> > feasible to allow for sparseness there.
> > --
> > Jeff Layton <jlayton(a)redhat.com>
> >