hey Gal and Eric,
in today's standup, we discussed the version of our apache arrow
submodule. it's currently pinned at 6.0.1, which was tagged in nov.
2021. the centos9 builds are using the system package
libarrow-devel-9.0.0. arrow's upstream recently tagged an 11.0.0
release.
as far as i know, there still aren't any system packages for ubuntu,
so we're likely to be stuck with the submodule for quite a while. how
do you guys want to handle these updates? is it worth trying to update
before the reef release?
hi Ernesto and lists,
> [1] https://github.com/ceph/ceph/pull/47501
are we planning to backport this to quincy so we can support centos 9
there? enabling that upgrade path on centos 9 was one of the
conditions for dropping centos 8 support in reef, which i'm still keen
to do
if not, can we find another resolution to
https://tracker.ceph.com/issues/58832? as i understand it, all of
those python packages exist in centos 8. do we know why they were
dropped for centos 9? have we looked into making those available in
epel? (cc Ken and Kaleb)
On Fri, Sep 2, 2022 at 12:01 PM Ernesto Puerta <epuertat(a)redhat.com> wrote:
>
> Hi Kevin,
>
>>
>> Isn't this one of the reasons containers were pushed, so that the packaging isn't as big a deal?
>
>
> Yes, but the Ceph community has a strong commitment to provide distro packages for those users who are not interested in moving to containers.
>
>> Is it the continued push to support lots of distros without using containers that is the problem?
>
>
> If not a problem, it definitely makes it more challenging. Compiled components often sort this out by statically linking deps whose packages are not widely available in distros. The approach we're proposing here would be the closest equivalent to static linking for interpreted code (bundling).
>
> Thanks for sharing your questions!
>
> Kind regards,
> Ernesto
Hi everyone,
This is the second release candidate for Reef.
The Reef release comes with a new RocksDB version (7.9.2) [0], which
incorporates several performance improvements and features. Our
internal testing doesn't show any side effects from the new version,
but we are very eager to hear community feedback on it. This is the
first release with the ability to tune RocksDB settings per column
family [1], which allows more granular tuning to be applied to
different kinds of data stored in RocksDB. A new set of settings is
used in Reef to optimize performance for most kinds of workloads, with
a slight penalty in some cases that is outweighed by large improvements
in use cases such as RGW, in terms of compactions and write
amplification. We would highly encourage community members to give
these a try against their performance benchmarks and use cases. The
detailed list of RocksDB and BlueStore changes can be found at
https://pad.ceph.com/p/reef-rc-relnotes.
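For anyone unfamiliar with what per-column-family tuning looks like at the
RocksDB level, here is a minimal sketch against the plain RocksDB C++ API.
This is an illustration only, not BlueStore's actual wiring; the
column-family names and option values below are made up:

    #include <rocksdb/db.h>
    #include <rocksdb/options.h>
    #include <cassert>
    #include <vector>

    int main() {
      rocksdb::DBOptions db_opts;
      db_opts.create_if_missing = true;
      db_opts.create_missing_column_families = true;

      // Each column family carries its own options, so differently shaped
      // data (e.g. omap entries vs. object metadata) can get different
      // memtable and compaction settings within a single database.
      rocksdb::ColumnFamilyOptions meta_opts;
      meta_opts.write_buffer_size = 64 << 20;  // illustrative value
      rocksdb::ColumnFamilyOptions omap_opts;
      omap_opts.write_buffer_size = 16 << 20;  // illustrative value

      std::vector<rocksdb::ColumnFamilyDescriptor> cfs = {
        {rocksdb::kDefaultColumnFamilyName, meta_opts},
        {"omap", omap_opts},
      };
      std::vector<rocksdb::ColumnFamilyHandle*> handles;
      rocksdb::DB* db = nullptr;
      rocksdb::Status s =
          rocksdb::DB::Open(db_opts, "/tmp/cf-demo", cfs, &handles, &db);
      assert(s.ok());

      // ... reads and writes go through the per-CF handles ...

      for (auto* h : handles) db->DestroyColumnFamilyHandle(h);
      delete db;
      return 0;
    }

BlueStore uses the same mechanism for the column families backing its
different kinds of metadata.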
If any of our community members would like to help us with performance
investigations or regression testing of the Reef release candidate,
please feel free to provide feedback via email or in
https://pad.ceph.com/p/reef_scale_testing. For more active
discussions, please use the #ceph-at-scale slack channel in
ceph-storage.slack.com.
This RC has gone through only partial testing due to issues we are
experiencing in the sepia lab.
Please try it out and report any issues you encounter. Happy testing!
Thanks,
YuriW
Hello,
There are so many ways to build Ceph from source that I'm pretty confused,
so I need some help.
I want to build Ceph regularly from "main/master" and create Debian packages
out of it.
I somehow have a solution that works, but what's the current best practice
for doing this?
On top of that, I couldn't find ANY way to build a "crimson-osd" package
from the latest sources, even after spending hours on it. What's the correct
way to do this?
Thanks!
Sascha
Hi,
I am planning to support a read-localize feature for RGW servers, similar
to what RBD volumes support. From reading the code, it looks like we need
to pass "librados::OPERATION_LOCALIZE_READS" before sending the request to
RADOS. I have created a tracker issue for this feature,
https://tracker.ceph.com/issues/61701; this will be a per-server config
option. I'm not aware of any other technical hurdles in implementing this
feature. Please share your thoughts on the same.
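For concreteness, here is a minimal, untested sketch of the read path I
have in mind, using the public librados C++ API (the helper name is mine;
it assumes the IoCtx::operate() overload that takes a flags argument):

    #include <rados/librados.hpp>
    #include <string>

    // Sketch: issue a read with OPERATION_LOCALIZE_READS set, which lets
    // the client read from the nearest replica instead of always going to
    // the primary OSD.
    int localized_read(librados::IoCtx& ioctx, const std::string& oid,
                       librados::bufferlist* out)
    {
      librados::ObjectReadOperation op;
      int rval = 0;
      op.read(0, 0, out, &rval);  // off=0, len=0: read the whole object
      return ioctx.operate(oid, &op, nullptr,
                           librados::OPERATION_LOCALIZE_READS);
    }

The RGW change would essentially set that flag conditionally, based on the
proposed per-server config option.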
Thanks and regards,
Jiffin
Reef RC linking failure on Alpine Linux. Do we worry about that?
1. https://tracker.ceph.com/issues/61718
2. Nice to fix, but not a requirement
3. If there are patches available, we should accept them, but probably
don't put too much work into it currently
debian bullseye build failure on reef rc:
1. https://tracker.ceph.com/issues/61845
2. want to fix before final release
clean-up in AuthMonitor – CephFS and core are fine. Is any other component
interested?
1. https://github.com/ceph/ceph/pull/52008#issuecomment-1606581139
2. Already tested for rados and cephfs
3. No other components requesting testing
Reef rc v18.1.2 in progress
1. Next rc build in progress
2. build issue to be looked at
   1. Failure on a jammy arm build, not a platform we test on meaningfully
   2. Think this is an infrastructure issue
   3. Generally, this shouldn't be a release blocker
   4. Priority of arm builds might rise in the future though
   5. investigate today; if we can't figure it out quickly, publish rc with
a known issue
   6. NOTE: Rook expects arm builds to be present
3. Would like to release next rc later this week if things work out
4. Would also like to upgrade the LRC
CDS agenda https://pad.ceph.com/p/cds-squid
1. leads should add topics
2. plan is for this to happen week of July 17th
mempool monitoring in teuthology tests
https://github.com/ceph/ceph/pull/51853
1. Just an FYI
2. the ceph task will now have the ability to dump mempool stats
3. might add a bit of delay to how long tests take
4. expected to be merged soon
5. will follow up in performance meeting
iSCSI packages old/not signed -- want to fix before final release
1. https://tracker.ceph.com/issues/57388
2. since the containerization of ceph, tcmu-runner is pulled from our
build system
3. This tcmu-runner package is not signed
4. ceph-iscsi package is signed, but outdated (seems to be because this
is the newest one that is signed and pushed to download.ceph.com)
5. someone with access to tools to sign the packages would have to help
fix this
6. been like this for a long time and nobody noticed
7. only ceph-iscsi package, not tcmu-runner, is distributed through
download.ceph.com
8. getting updated ceph-iscsi package on download.ceph.com should be
done before reef release
9. tcmu-runner inside the container being unsigned is not as big of a
deal (was this way in quincy/pacific as well)
Hi Michal,
Thank you for volunteering to help test the Reef release!
On Tue, Jun 27, 2023 at 6:44 AM Michal Strnad <michal.strnad(a)cesnet.cz>
wrote:
> Hi everyone,
>
> We read that you are looking for ceph users who would be willing to help
> with performance testing of a new version of Ceph called Reef. We would
> like to volunteer and offer our assistance :-).
>
> Currently, we are setting up a large cluster consisting of fifty storage
> nodes, each with 24 rotational disks and 8 NVMe drives, some of which
> are designated for Bluestore and others for data purposes. Each of these
> machines is equipped with an AMD EPYC 7282 16-Core processor, ~314GB of
> memory, and a 2x25Gbps network connection. The network on each of these
> machines is used for both public and cluster communication, and if
> necessary, we can prioritize one over the other through QoS adjustments
> within the VLAN. However, we haven't had the need to do so thus far.
>
> Furthermore, we have sixteen application servers for monitors, MGR,
> metadata servers, and radosgw gateways. Each of these application
> servers is equipped with an AMD EPYC 7502 32-Core processor, ~250GB of
> memory, and a 2x25Gbps network connection.
>
> Both the storage and application servers are connected to two Nexus 9000
> switches with connectivity reaching several 100Gbps towards the internet.
>
> The mentioned cluster will be operational within a few weeks, with Ceph
> already installed and ready to undergo performance testing. Once this is
> ready, it will be possible to start testing the Reef version. We
> anticipate having approximately 2-3 weeks for testing. Are you
> interested in the performance results? To achieve better results, it
> would be beneficial to coordinate these tests in some way, so that we
> don't repeat what others have already tried. Could you please guide us
> on what specific aspects we should focus on, which parameters to test,
> and how to properly conduct the tests?
>
We are particularly interested to see the performance impact of the new
RocksDB version we'll be shipping with Reef. I am adding Mark to this email
to provide guidance on performance tests.
> After an agreement, it will be possible to arrange some form of access
> to the machines, for example, by meeting via video conference and
> fine-tuning them together. Alternatively, we can also work on it through
> email, IRC, Slack, or any other suitable means.
>
We are coordinating community efforts around such testing in the
#ceph-at-scale slack channel in ceph-storage.slack.com. I sent you an
invite.
Thanks,
Neha
> Kind regards,
> Michal Strnad
>
>
> On 6/13/23 22:27, Neha Ojha wrote:
> > Hi everyone,
> >
> > This is the first release candidate for Reef.
> >
> > The Reef release comes with a new RocksDB version (7.9.2) [0], which
> > incorporates several performance improvements and features. Our internal
> > testing doesn't show any side effects from the new version, but we are
> > very eager to hear community feedback on it. This is the first release to
> > have the ability to tune RocksDB settings per column family [1], which
> > allows for more granular tuning to be applied to different kinds of data
> > stored in RocksDB. A new set of settings has been used in Reef to
> > optimize performance for most kinds of workloads with a slight penalty in
> > some cases, outweighed by large improvements in use cases such as RGW, in
> > terms of compactions and write amplification. We would highly encourage
> > community members to give these a try against their performance
> > benchmarks and use cases. The detailed list of changes in terms of
> > RocksDB and BlueStore can be found in
> > https://pad.ceph.com/p/reef-rc-relnotes.
> >
> > If any of our community members would like to help us with performance
> > investigations or regression testing of the Reef release candidate,
> > please feel free to provide feedback via email or in
> > https://pad.ceph.com/p/reef_scale_testing. For more active discussions,
> > please use the #ceph-at-scale slack channel in ceph-storage.slack.com.
> >
> > Overall things are looking pretty good based on our testing. Please try
> > it out and report any issues you encounter. Happy testing!
> >
> > Thanks,
> > Neha
> >
> > Get the release from
> >
> > * Git at git://github.com/ceph/ceph.git
> > * Tarball at https://download.ceph.com/tarballs/ceph-18.1.0.tar.gz
> > * Containers at https://quay.io/repository/ceph/ceph
> > * For packages, see https://docs.ceph.com/en/latest/install/get-packages/
> > * Release git sha1: c2214eb5df9fa034cc571d81a32a5414d60f0405
> >
> > [0] https://github.com/ceph/ceph/pull/49006
> > [1] https://github.com/ceph/ceph/pull/51821
>
Hi everyone,
The June CDM is coming up this week *Wednesday, June 7th @ 15:00 UTC.* See
more meeting details below.
Please add any topics you'd like to discuss to the agenda:
https://tracker.ceph.com/projects/ceph/wiki/CDM_07-JUN-2023
See you there,
Laura Flores
Meeting link:
https://meet.jit.si/ceph-dev-monthly
Time conversions:
UTC: Wednesday, June 7, 15:00 UTC
Mountain View, CA, US: Wednesday, June 7, 8:00 PDT
Phoenix, AZ, US: Wednesday, June 7, 8:00 MST
Denver, CO, US: Wednesday, June 7, 9:00 MDT
Huntsville, AL, US: Wednesday, June 7, 10:00 CDT
Raleigh, NC, US: Wednesday, June 7, 11:00 EDT
London, England: Wednesday, June 7, 16:00 BST
Paris, France: Wednesday, June 7, 17:00 CEST
Helsinki, Finland: Wednesday, June 7, 18:00 EEST
Tel Aviv, Israel: Wednesday, June 7, 18:00 IDT
Pune, India: Wednesday, June 7, 20:30 IST
Brisbane, Australia: Thursday, June 8, 1:00 AEST
Singapore, Asia: Wednesday, June 7, 23:00 +08
Auckland, New Zealand: Thursday, June 8, 3:00 NZST
--
Laura Flores
She/Her/Hers
Software Engineer, Ceph Storage <https://ceph.io>
Chicago, IL
lflores(a)ibm.com | lflores(a)redhat.com
M: +17087388804