Hi,
Other than getting all the objects in the pool and filtering by image ID,
is there an easier way to get the number of allocated objects for
an RBD image?
What I really want to know is the actual usage of an image.
An allocated object may be only partially used, but that's fine;
it doesn't need to be 100% accurate. Multiplying the object count
by the object size should be sufficient.
"rbd export" exports the actually used data, but determining the usage
by exporting the whole image seems like overkill. This brings up another
question: is there any way to know the export size before running it?
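Concretely, the filtering approach I have in mind looks like this (pool
"rbd" and image "img1" are placeholders; assumes jq is available and the
default 4 MiB object size):

```shell
# Count the image's RADOS objects via its block_name_prefix, then
# approximate usage as object count times object size.
PREFIX=$(rbd info rbd/img1 --format json | jq -r '.block_name_prefix')
COUNT=$(rados -p rbd ls | grep -c "^${PREFIX}\.")
echo "approx usage: $(( COUNT * 4 * 1024 * 1024 )) bytes"
```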
Thanks!
Tony
On 22-11-2023 15:54, Stefan Kooman wrote:
> Hi,
>
> In an IPv6-only deployment the ceph-exporter daemons are not listening on
> IPv6 address(es). This can be fixed by editing the unit.run file of the
> ceph-exporter by changing "--addrs=0.0.0.0" to "--addrs=::".
>
> Is this configurable? So that cephadm deploys ceph-exporter with proper
> unit.run arguments?
Related issue: https://tracker.ceph.com/issues/62220
A different fix was chosen there, as opposed to
https://github.com/ceph/ceph/pull/54285/. Maybe it would be better to remove
the IPv4/IPv6 distinction and make the code IP-family agnostic (i.e. go for
the fix in 54285)?
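In the meantime the unit.run edit can be scripted; a sketch (the path and
service name are illustrative of cephadm's layout, adjust fsid/host, and
note that cephadm may regenerate the file on redeploy):

```shell
# Rebind ceph-exporter to the IPv6 wildcard and restart the daemon.
sed -i 's/--addrs=0\.0\.0\.0/--addrs=::/' \
    /var/lib/ceph/<fsid>/ceph-exporter.<host>/unit.run
systemctl restart ceph-<fsid>@ceph-exporter.<host>.service
```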
Gr. Stefan
Matt recently raised the issue of Ceph assertions in production code,
which reminded me of Sage's 2016 PR
https://github.com/ceph/ceph/pull/9969 that added a
ceph_assert_always(). The idea was to eventually make ceph_assert()
conditional on NDEBUG to match the behavior of libc's assert(),
leaving ceph_assert_always() as the explicitly unconditional case.
I would love to see this finally happen, but there are some potential risks:
* ceph_assert()s with side effects won't behave as expected in release
builds. assert() documents this same issue at
https://www.man7.org/linux/man-pages/man3/assert.3.html#BUGS. If we
can at least identify these cases, we can switch them to
ceph_assert_always().
* In teuthology, we test the same release builds that we ship to
users. That means teuthology won't catch the code paths that trigger
debug assertions. If those lead to crashes, they could be much harder
to debug without the assertions and backtraces.
* Conversely, merging pull requests after a successful teuthology run
may introduce new assertion failures in debug builds. It could be annoying
for developers to track down and fix new assertions after pulling the
latest main or stable release branch.
* Unused-variable warnings will appear in release builds where ceph_assert()
was the only reference. At least the compiler will catch all of these for
us, and [[maybe_unused]] annotations can clear them up.
In general, do folks agree that this is a change worth making? If so,
what can we do to mitigate the risks?
If not, how should we handle the use of ceph_assert() vs. raw assert()
in new code? Should there be some guidance in CodingStyle?
As a half-measure, we could introduce a new ceph_assert_debug() as an
alternative to raw assert(), then convert some existing uses of
ceph_assert() on a case-by-case basis.
Hello.
We are evaluating Ceph storage to test whether we can run a service that
uploads and stores more than 40 billion files.
So I'd like to check the points below.
1) The maximum number of Rados gateway objects that can be stored in one cluster using the bucket index
2) The maximum number of Rados gateway objects that can be stored in one bucket
Although we have referred to the limitations on the number of Rados gateway objects mentioned in existing documents, it seems to be theoretically unlimited.
If you have operated services or products with file counts at this level, we would appreciate it if you could share your experience.
Below are related documents and related settings values.
> Related documents
- https://documentation.suse.com/ses/5.5/html/ses-all/cha-ceph-gw.html
- https://www.ibm.com/docs/en/storage-ceph/6?topic=resharding-limitations-buc…
- https://docs.ceph.com/en/latest/dev/radosgw/bucket_index/
> Related config
- rgw_dynamic_resharding: true
- rgw_max_objs_per_shard: 100000
- rgw_max_dynamic_shards: 65521
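As a rough sanity check, the settings above imply a per-bucket ceiling (my
arithmetic from the config values, not an official limit):

```shell
# objects-per-shard cap times the maximum dynamic shard count
echo $(( 100000 * 65521 ))   # 6552100000, roughly 6.55 billion objects per bucket
```

If that holds, a 40-billion-object workload would presumably need to be
spread across several buckets.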
Everything Open (auspiced by Linux Australia) is happening again in
2024. The CFP closes at the end of this weekend (November 19):
https://2024.everythingopen.au/programme/proposals/
More details below.
-------- Forwarded Message --------
Date: Sun, 15 Oct 2023 09:16:31 +1000
From: Everything Open <contact(a)everythingopen.au>
To: eo-announce(a)lists.linux.org.au, announce(a)lists.linux.org.au
Subject: [Announce] Everything Open 2024: Call for Sessions Now Open
User-Agent: Roundcube Webmail/1.1.1
Submit your session proposals today - the Everything Open 2024 Call for
Sessions is now open.
## Call for Sessions
We invite you to submit a session proposal on a topic you are familiar
with via our proposals portal at
https://2024.everythingopen.au/programme/proposals/.
The Call for Sessions will remain open until 11:59pm on Sunday 19
November 2023 anywhere on earth (AoE).
There will be multiple streams catering for a wide range of interest
areas across the many facets of open technology, including Linux, open
source software, open hardware, standards, formats and documentation,
and our communities.
In keeping with the conference’s aim to be inclusive to all community
members, presentations can be aimed at any level, ranging from technical
deep-dives through to beginner and intermediate level presentations for
those who are newer to the subject.
There will be two types of sessions at Everything Open: talks and
tutorials. Talks will nominally be 45 minutes long on a single topic
presented in lecture format. We will also have a few short talk slots of
25 minutes available, which are perfect for people new to presenting at
a conference. Tutorials are interactive and hands-on in nature,
presented in classroom format.
Each accepted session will receive one Professional level ticket to
attend the conference.
The Session Selection Committee is looking forward to reading your
submissions. We would also like to thank them for coming together and
volunteering their time to help put this conference together.
## Sponsor Early
As usual, we have a range of sponsorship opportunities available, for
the conference overall as well as the ability to contribute towards
specific parts of the event.
We encourage you to sponsor the conference early, to get the maximum
promotion during the lead up to the event.
If you or your organisation is interested in sponsoring Everything Open,
please get in touch via https://2024.everythingopen.au/sponsors/prospectus/.
----
Read this online at
https://2024.everythingopen.au/news/call-for-sessions-open/
_______________________________________________
announce mailing list
announce(a)lists.linux.org.au
http://lists.linux.org.au/mailman/listinfo/announce
----- End forwarded message -----
Hi Yuri,
I've just backported to reef several fixes that I introduced over the last
few months for the Rook orchestrator. Most of them are fixes for dashboard
issues/crashes that only happen in Rook environments. The PR [1] contains all
the changes and was merged into reef this morning. We really
need these changes to be part of the next reef release, as the upcoming Rook
stable version will be based on it.
Could you please include those changes in the upcoming reef 18.2.1 release?
[1] https://github.com/ceph/ceph/pull/54224
Thanks a lot,
Redouane.
On Mon, Nov 13, 2023 at 6:03 PM Yuri Weinstein <yweinste(a)redhat.com> wrote:
> ---------- Forwarded message ---------
> From: Venky Shankar <vshankar(a)redhat.com>
> Date: Thu, Nov 9, 2023 at 11:52 PM
> Subject: Re: [ceph-users] Re: reef 18.2.1 QE Validation status
> To: Yuri Weinstein <yweinste(a)redhat.com>
> Cc: dev <dev(a)ceph.io>, ceph-users <ceph-users(a)ceph.io>
>
>
> Hi Yuri,
>
> On Fri, Nov 10, 2023 at 4:55 AM Yuri Weinstein <yweinste(a)redhat.com>
> wrote:
> >
> > I've updated all approvals and merged PRs in the tracker and it looks
> > like we are ready for gibba, LRC upgrades pending approval/update from
> > Venky.
>
> The smoke test failure is caused by missing (kclient) patches in
> Ubuntu 20.04 that certain parts of the fs suite (via smoke tests) rely
> on. More details here
>
> https://tracker.ceph.com/issues/63488#note-8
>
> The kclient tests in smoke pass with other distros and the fs suite
> tests have been reviewed and look good. Run details are here
>
> https://tracker.ceph.com/projects/cephfs/wiki/Reef#07-Nov-2023
>
> The smoke failure is noted as a known issue for now. Consider this run
> as "fs approved".
>
> >
> > On Thu, Nov 9, 2023 at 1:31 PM Radoslaw Zarzynski <rzarzyns(a)redhat.com>
> wrote:
> > >
> > > rados approved!
> > >
> > > Details are here:
> https://tracker.ceph.com/projects/rados/wiki/REEF#1821-Review.
> > >
> > > On Mon, Nov 6, 2023 at 10:33 PM Yuri Weinstein <yweinste(a)redhat.com>
> wrote:
> > > >
> > > > Details of this release are summarized here:
> > > >
> > > > https://tracker.ceph.com/issues/63443#note-1
> > > >
> > > > Seeking approvals/reviews for:
> > > >
> > > > smoke - Laura, Radek, Prashant, Venky (POOL_APP_NOT_ENABLE failures)
> > > > rados - Neha, Radek, Travis, Ernesto, Adam King
> > > > rgw - Casey
> > > > fs - Venky
> > > > orch - Adam King
> > > > rbd - Ilya
> > > > krbd - Ilya
> > > > upgrade/quincy-x (reef) - Laura PTL
> > > > powercycle - Brad
> > > > perf-basic - Laura, Prashant (POOL_APP_NOT_ENABLE failures)
> > > >
> > > > Please reply to this email with approval and/or trackers of known
> > > > issues/PRs to address them.
> > > >
> > > > TIA
> > > > YuriW
> > > > _______________________________________________
> > > > Dev mailing list -- dev(a)ceph.io
> > > > To unsubscribe send an email to dev-leave(a)ceph.io
> > > >
> > >
> > _______________________________________________
> > ceph-users mailing list -- ceph-users(a)ceph.io
> > To unsubscribe send an email to ceph-users-leave(a)ceph.io
>
>
>
> --
> Cheers,
> Venky
>
>
Hi Cephers,
These are the topics discussed today:
- 18.2.1
- Almost ready, packages built/signed
- Plan to release on Monday
- Last minute PR for Rook
- Lab update to be finished by tomorrow
- Finalize CDM APAC time
- Review
https://doodle.com/meeting/participate/id/aM9XGZ3a/vote?reauth=true
- New time: 9.30 Pacific Time
- Laura and Neha will sync up about changing the time
- Squid dev freeze date
- Proposal: end of January
- Docs question: https://tracker.ceph.com/issues/11385: Can a member of
the community just raise a PR attempting to standardize commands, without
coordinating with a team?
- Ballpark date for Pacific EOL? Docs still have the '2023-10-01'
estimate
- Discussion regarding EOL being at time of release or at some point
shortly after, to account for regressions
- Conversation regarding clarity of messaging being important re:
expectations for post-release fixes
- Concerns about the stability of minor releases
- Tentatively this year
- Distro status update: still working to remove centos8/rhel8/ubuntu20
from main. https://github.com/ceph/ceph/pull/53901 is stalled on container
work in teuthology and the need to rebuild containers with a centos9 base
- Discuss the quincy/dashboard-v3 backports? Was tabled from 11/1
[postponed to Nov 22]
- Docs (Zac): CQ January 2024
https://pad.ceph.com/p/ceph_quarterly_2024_01
- Unittestability of dencoding: ask for review --
https://trello.com/c/R0h47dq2/870-unittestability-of-dencoding#comment-6553…
- User + Dev meeting tomorrow
- Need RGW representatives and people with EC profile knowledge
Kind Regards,
Ernesto
Hi,

Is it possible to decrease the large size differences between PGs? I have a 5 PB cluster, and the difference between the smallest and biggest PGs is around 25 GB.

Thanks,
Svoboda Miroslav
Hi Ceph users and developers,
You are invited to join us at the User + Dev meeting this week Thursday,
November 16th at 10:00 AM EST! See below for more meeting details.
The focus topic, "Operational Reliability and Flexibility in Ceph
Upgrades", will be presented by Christian Theune. His presentation will
highlight some issues encountered in a long-running cluster when upgrading
to stable releases, including migration between EC profiles, challenges
related to RGW zone replication, and low-level bugs that need more
attention from developers.
The last part of the meeting will be dedicated to open discussion. Feel
free to add questions for the speakers or additional topics under the "Open
Discussion" section on the agenda:
https://pad.ceph.com/p/ceph-user-dev-monthly-minutes
If you have an idea for a focus topic you'd like to present at a future
meeting, you are welcome to submit it to this Google Form:
https://docs.google.com/forms/d/e/1FAIpQLSdboBhxVoBZoaHm8xSmeBoemuXoV_rmh4v…
Any Ceph user or developer is eligible to submit!
Thanks,
Laura Flores
Meeting link: https://meet.jit.si/ceph-user-dev-monthly
Time conversions:
UTC: Thursday, November 16, 15:00 UTC
Mountain View, CA, US: Thursday, November 16, 7:00 PST
Phoenix, AZ, US: Thursday, November 16, 8:00 MST
Denver, CO, US: Thursday, November 16, 8:00 MST
Huntsville, AL, US: Thursday, November 16, 9:00 CST
Raleigh, NC, US: Thursday, November 16, 10:00 EST
London, England: Thursday, November 16, 15:00 GMT
Paris, France: Thursday, November 16, 16:00 CET
Helsinki, Finland: Thursday, November 16, 17:00 EET
Tel Aviv, Israel: Thursday, November 16, 17:00 IST
Pune, India: Thursday, November 16, 20:30 IST
Brisbane, Australia: Friday, November 17, 1:00 AEST
Singapore, Asia: Thursday, November 16, 23:00 +08
Auckland, New Zealand: Friday, November 17, 4:00 NZDT
--
Laura Flores
She/Her/Hers
Software Engineer, Ceph Storage <https://ceph.io>
Chicago, IL
lflores(a)ibm.com | lflores(a)redhat.com
M: +17087388804