Hello-
I've been looking at Ceph's Jaeger tracing support, and was wondering why it uses port 6799 instead of the default 6831.
This doesn't match the documented directions for running Jaeger with vstart.sh, and the port is also hard-coded in common/tracer.cc. Is there any reason why Ceph doesn't use the standard port, and/or why it couldn't be made configurable?
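To make the mismatch concrete, here's a minimal sketch (in Python with the OpenTelemetry SDK, not Ceph's actual C++ code) of the kind of override every external client needs just to land spans on the same agent Ceph talks to; the agent itself also has to be started listening on 6799 rather than its default:

    # Hypothetical illustration, not Ceph code: point an OpenTelemetry
    # Jaeger exporter at Ceph's non-default agent port. Requires the
    # opentelemetry-sdk and opentelemetry-exporter-jaeger-thrift packages.
    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor
    from opentelemetry.exporter.jaeger.thrift import JaegerExporter

    exporter = JaegerExporter(
        agent_host_name="localhost",
        agent_port=6799,  # Ceph's hard-coded port; Jaeger's default is 6831
    )
    provider = TracerProvider()
    provider.add_span_processor(BatchSpanProcessor(exporter))
    trace.set_tracer_provider(provider)

    # Any span emitted now goes to the agent on 6799 (matching Ceph)
    # instead of 6831 (matching the Jaeger documentation).
    with trace.get_tracer("port-check").start_as_current_span("test"):
        pass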
Thanks!
-Mike Neufeld
Hi Ceph Developers,
The February CDM is coming up next week on *Wednesday, February 1st @ 16:00
UTC*. See more meeting details below.
Please add any topics you'd like to discuss to the agenda:
https://tracker.ceph.com/projects/ceph/wiki/CDM_01-FEB-2023
See you there,
Laura Flores
Meeting link:
https://bluejeans.com/908675367
Time conversions:
UTC: Wednesday, February 1, 16:00 UTC
Mountain View, CA, US: Wednesday, February 1, 8:00 PST
Phoenix, AZ, US: Wednesday, February 1, 9:00 MST
Denver, CO, US: Wednesday, February 1, 9:00 MST
Huntsville, AL, US: Wednesday, February 1, 10:00 CST
Raleigh, NC, US: Wednesday, February 1, 11:00 EST
London, England: Wednesday, February 1, 16:00 GMT
Paris, France: Wednesday, February 1, 17:00 CET
Helsinki, Finland: Wednesday, February 1, 18:00 EET
Tel Aviv, Israel: Wednesday, February 1, 18:00 IST
Pune, India: Wednesday, February 1, 21:30 IST
Brisbane, Australia: Thursday, February 2, 2:00 AEST
Singapore, Asia: Thursday, February 2, 0:00 +08
Auckland, New Zealand: Thursday, February 2, 5:00 NZDT
--
Laura Flores
She/Her/Hers
Software Engineer, Ceph Storage
Red Hat Inc. <https://www.redhat.com>
Chicago, IL
lflores(a)redhat.com
M: +17087388804
this stabilization branch has merged to main, along with the cls_fifo
fixes in https://github.com/ceph/ceph/pull/48632. thanks to everyone
who helped curate and test these branches!
we'll continue multisite testing on main, and track any new bugs at
https://tracker.ceph.com/projects/rgw/issues/new as normal
there are still a lot of failures in the multisite functional tests,
and i'd really like to see these cleaned up before the reef release.
it's important to have reliable functional tests so we can validate
future changes and their backports
On Wed, Nov 23, 2022 at 12:17 PM Casey Bodley <cbodley(a)redhat.com> wrote:
>
> On Tue, Nov 22, 2022 at 12:34 PM Abhijeet Agrawal (BLOOMBERG/ 120
> PARK) <aagrawal159(a)bloomberg.net> wrote:
> >
> > Thanks Casey,
> >
> > Bloomberg will work off of this branch and put it through our test suite next week.
> > Please let us know if there is a tracker we could update with the findings, or if we should post updates on individual PRs or the feature branch you mentioned?
>
> it would be great to summarize test results with comments in
> https://github.com/ceph/ceph/pull/48898. you can open issues under
> https://tracker.ceph.com/projects/rgw to track test failures in more
> detail - please just make sure the subject lines start with 'reef
> multisite:' so we know they're issues with this feature branch and not
> main
>
> >
> > Regards,
> > Abhijeet
> >
> > From: cbodley(a)redhat.com At: 11/15/22 15:41:27 UTC-5:00
> > To: dev(a)ceph.io
> > Subject: rgw multisite stabilization branch for reef
> >
> > the rgw team has made a lot of progress on multisite stabilization,
> > but a lot of related commits haven't made it to main yet. i've opened
> > https://github.com/ceph/ceph/pull/48898 to track these commits so we
> > have a stable baseline for upstream testing and validation
> >
Hi Folks,
The weekly performance meeting will be starting in approximately 15
minutes at 8AM PST. If Adam is able to make it today, we can talk a bit
about his new shared blob and blob melding code and results. We also
had a question about RocksDB's fast-to-fit mode from the users meeting
that we might discuss. Please feel free to add your own topics as well!
Etherpad:
https://pad.ceph.com/p/performance_weekly
Bluejeans:
https://bluejeans.com/908675367
Mark
Today's meeting focused mostly on upstream infrastructure and
developer services:
- marc.info now archives the dev(a)ceph.io mailing list
- Some mailing list admin functionality is still not working after the restoration.
- Build/container failures are continuing.
- Which services the development team can offload to third parties and
which to keep in-house. These include the mailing lists, testing
infrastructure services, etherpad, quay images, git mirror, website, and
telemetry. We also discussed defragmenting developer chat; for now we're
going to try focusing on Slack, but consensus is unclear.
- The next pacific minor release is blocked by testing hiccups,
notably upgrade suites failing due to missing packages.
- Reef will be packaged for CentOS Stream 9 / Ubuntu 22.04. There are some
teuthology changes required [1].
- Dev freeze for reef will likely be moved to late February or March.
Because testing of many new features/fixes has been delayed, added time
will be necessary to deliver a stable release. The actual release will
not occur until May or June.
[1] https://tracker.ceph.com/issues/58491
--
Patrick Donnelly, Ph.D.
He / Him / His
Red Hat Partner Engineer
IBM, Inc.
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
are we planning to support centos stream 9 and ubuntu 22.04 for Reef?
shaman has been building packages for these distros for a while now,
but they're still not enabled as supported distros in teuthology. i
opened https://github.com/ceph/ceph/pull/49443 to enable testing
against them, but the results in
https://pulpito.ceph.com/cbodley-2022-12-14_21:18:54-rgw-wip-cbodley-testin…
were unsuccessful:
- all of the centos stream 9 jobs failed due to teuthology's use of the
'sudo lsb_release -is' command, since that distro no longer ships lsb_release
- all of the ubuntu 22.04 jobs died with ansible errors
we'll need to work through these issues if we expect to support these distros
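for reference, the usual replacement for lsb_release is to read
/etc/os-release, which centos stream 9 still ships. a rough sketch of the
idea in python (a hypothetical helper, not teuthology's actual code):

    # rough sketch, not teuthology code: read the distro ID from
    # /etc/os-release instead of shelling out to 'sudo lsb_release -is',
    # since centos stream 9 no longer ships the lsb_release tool
    def distro_id(path="/etc/os-release"):
        """Return the distro ID, e.g. 'centos' or 'ubuntu'."""
        with open(path) as f:
            for line in f:
                if line.startswith("ID="):
                    return line.split("=", 1)[1].strip().strip('"')
        return None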
Hi everyone,
From November into January, we experienced a series of outages with the
Ceph Community Infrastructure and its services:
- Mailing lists
  - https://lists.ceph.io
- Sepia (testing infrastructure)
  - https://wiki.sepia.ceph.com
  - https://pulpito.ceph.com
  - https://chacra.ceph.com
  - https://shaman.ceph.com
- VPN to access testing services
- Etherpad
  - https://pad.ceph.com
- Images
  - https://quay.ceph.io
- Git mirror
  - https://git.ceph.com
- https://ceph.io (website)
- Telemetry <https://telemetry-public.ceph.com/>
These services are now mostly restored, but we did experience some data
loss, notably in our mailing lists. We have restored them from backups, but
subscription changes after July 2021 need to be repeated. If you subscribed
or unsubscribed since then, please check your settings with the appropriate
list at https://lists.ceph.io. If your posts to our mailing lists are now
needing approval, that is also an indication that you need to re-subscribe
to the appropriate lists.
Keep an eye out for emails with subject lines such as “Your message to
ceph-users(a)ceph.io awaits moderator approval”.
When the community infrastructure was first created in late 2014, the VM
cluster management software selected by the team came with the benefit of
being widely entrenched and familiar to the lab administrators but didn't
support Ceph as a storage backend at the time. As services grew, we relied
more and more on its legacy storage solution, which was never migrated to
Ceph. Over the last few months, this legacy storage solution had several
instances of silent data corruption, rendering the VMs unbootable, taking
down various services, and requiring restoration from backups in many cases.
We are moving these services to a more reliable, mostly container-based,
infrastructure backed by Ceph, and planning for longer-term improvements to
monitoring, backups, deployment, and other pieces of the project
infrastructure.
This event highlights the need to better support the infrastructure. A
handful of contributors have stepped up to restore these services, but we
need an invested team focused on this work.
If you or your company is looking for a great way to contribute to the Ceph
community, this could be your opportunity. Please contact council(a)ceph.io
if you can provide time to contribute to the Ceph Community Infrastructure
and would like to join the team. You can also join the upstream #sepia
slack channel to participate in these discussions using this link:
https://join.slack.com/t/ceph-storage/shared_invite/zt-1n1eh6po5-PF9sokUSoo…
Unfortunately, these events have slowed down our upstream development and
releases. We are currently working on publishing the next Pacific point
release. The development freeze and release deadline for the Reef release
will likely be pushed out, with more discussion to follow in the Ceph
Leadership Team meetings.
- The Ceph Leadership Team
Hi everyone,
This month's Ceph User + Dev Monthly meetup is on January 19, 15:00-16:00
UTC. There are some topics on the agenda regarding RGW backports; please
feel free to add other topics to
https://pad.ceph.com/p/ceph-user-dev-monthly-minutes.
Hope to see you there!
Thanks,
Neha
Hi Igor,
On Thu, Dec 22, 2022 at 8:33 PM Satoru Takeuchi <satoru.takeuchi(a)gmail.com> wrote:
>
> Hi Igor,
>
>>> @Satoru - I'm curious if you can try a custom patch with a potential fix as you're able to consistently reproduce the issue?
>>
>>
>> Thank you for your effort. I'll try it. I'll also ask rook users who hit this problem to try this fix.
>
>
> Here is the progress of the experiment.
>
> I've tried to reproduce this problem in Ceph v17.2.5 to verify that it still exists in the latest version.
> However, it hasn't happened yet.
> So I plan to continue the experiment with some modifications in my environment (e.g. NVMe SSD -> HDD, Ceph v17.2.5 -> v16.2.10).
>
> Once I succeed in hitting this problem, I'll run my reproducer on the problematic version with your patch.
My reproducer ran for several weeks without hitting this problem. I'd
like to stop this work for now because I need the test environment for
another purpose. Perhaps the problem has already been fixed in the
latest Ceph for some reason.
Best,
Satoru