Hi folks,
How can I build rgw with tcmalloc/jemalloc? I have tcmalloc installed, and although build.ninja lists libtcmalloc.so for bin/radosgw, the binary still uses libc's malloc. I then installed jemalloc and got the same result. What are the proper steps to build rgw with tcmalloc or jemalloc?
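In case it helps others reading the archive, a rough sketch of selecting the allocator at configure time. The ALLOCATOR CMake option and accepted values below are my assumption based on recent Ceph trees; check the CMakeLists.txt in your version before relying on them.

```shell
# From the top of the ceph source tree: pick the allocator at configure
# time (ALLOCATOR is believed to accept tcmalloc, tcmalloc_minimal,
# jemalloc, or libc in recent trees -- verify against your checkout).
./do_cmake.sh -DALLOCATOR=tcmalloc
cd build && ninja radosgw

# Then confirm which allocator the binary is actually linked against:
ldd bin/radosgw | grep -Ei 'tcmalloc|jemalloc'
```

If ldd shows neither library, the configure step likely fell back to libc's malloc and the cmake cache is worth re-checking.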
Thanks
Yixin
It seems that store_user_info() and read_user_info() perform SysObj writes/reads, which use librados::ObjectWriteOperation and librados::ObjectReadOperation and are supposed to be atomic. How could read_user_info() see partial or inconsistent content, such as old xattrs paired with newer object content?
Yixin
Hi folks,
What is the recommended way to do an atomic update of a rados object from rgw? I notice that when store_user_info() and read_user_info() are called at the same time, read_user_info() may read partial content, which leads to a decoding failure (buffer::error exception). Is there a way to prevent this scenario?
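Not an answer about the rgw internals themselves, but for context: the usual way to close this kind of race with RADOS-style primitives is optimistic concurrency, i.e. read the object version along with the data, make the write conditional on that version (librados exposes assert_version() for this), and retry on conflict. The sketch below simulates that loop in plain C++; the VersionedObject type and function names are invented for illustration, not Ceph APIs.

```cpp
#include <cstdint>
#include <stdexcept>
#include <string>

// Toy stand-in for a versioned RADOS object: every successful write
// bumps the version, and a conditional write refuses to commit when the
// caller's expected version is stale (roughly what assert_version()
// provides in librados).
struct VersionedObject {
  std::string data;
  uint64_t version = 0;

  // Returns false instead of committing when expected_version is stale.
  bool conditional_write(const std::string& new_data, uint64_t expected_version) {
    if (version != expected_version)
      return false;  // another writer got in first; caller must retry
    data = new_data;
    ++version;
    return true;
  }
};

// Optimistic read-modify-write: re-read and retry until the conditional
// write succeeds, so a concurrent reader never depends on observing a
// half-applied update from this writer.
template <typename ModifyFn>
void atomic_update(VersionedObject& obj, ModifyFn modify, int max_retries = 16) {
  for (int i = 0; i < max_retries; ++i) {
    uint64_t seen = obj.version;            // snapshot version with the data
    std::string updated = modify(obj.data); // compute the new content
    if (obj.conditional_write(updated, seen))
      return;
  }
  throw std::runtime_error("too many conflicting writers");
}
```

Usage would look like atomic_update(obj, [](const std::string& s) { return s + "suffix"; });, with the retry hidden inside the helper.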
Thanks,
Yixin
Hi Folks,
We've got the user/dev meeting today right before the performance
meeting and it turns out I'm double booked during the performance
meeting slot. Let's cancel today and all go to the user/dev meeting
instead. See you next week!
Thanks,
Mark
--
Best Regards,
Mark Nelson
Head of R&D (USA)
Clyso GmbH
p: +49 89 21552391 12
a: Loristraße 8 | 80335 München | Germany
w: https://clyso.com | e: mark.nelson(a)clyso.com
We are hiring: https://www.clyso.com/jobs/
Here are the notes from this week's CLT call. The call focused heavily on
release process, specifically around figuring out which patches are
required for a release.
- 17.2.7 status
- A few more FS PRs and one core PR then we can start release process
- Trying to finalize list of PRs needed for 18.2.1
- General discussion about the process for getting the list of required
PRs for a given release
- Using per-release github milestones. E.g. a milestone specifically
for 18.2.1 rather than just reef
- Would require fixing some scripts that refer to the milestone
- https://github.com/ceph/ceph/blob/main/src/script/backport-resolve-issue
- For now, continue using etherpad until something more automated exists
- Create pads a lot earlier
- Could use existing clt call to try to finalize required PRs for
releases
- should be on agenda for every clt call
- A couple of build-related PRs have been stalled
- For a while it has not been possible to build with FIO
- PR https://github.com/ceph/ceph/pull/53346
- For a while it has not been possible to "make (or ninja) install" with
the dashboard disabled
- PR https://github.com/ceph/ceph/pull/52313
- Some more general discussion of how to get more attention for build PRs
- Laura will start grouping some build PRs with RADOS PRs for
build/testing in the ci
- Can make CI builds with CMAKE_BUILD_TYPE=Debug
- https://github.com/ceph/ceph-build/pull/2167
- https://github.com/ceph/ceph/pull/53855#issuecomment-1751367302
- https://shaman.ceph.com/builds/ceph/wip-batrick-testing-20231006.014828-deb…
- relies on us removing centos 8 from all testing suites and dropping
that as a build target
- Last Pacific?
- Yes, 17.2.7, then 18.2.1, then 16.2.15 (final)
- PTLs will need to go through and find what backports still need to get
into pacific
- A lot of open pacific backports right now
Thanks,
- Adam King
We are happy to announce another release of the go-ceph API library.
This is a regular release following our every-two-months release
cadence.
https://github.com/ceph/go-ceph/releases/tag/v0.24.0
Changes include fixes to the rgw admin and rbd packages.
More details are available at the link above.
The library includes bindings that aim to play a similar role to the
"pybind" python bindings in the ceph tree but for the Go language. The
library also includes additional APIs that can be used to administer
cephfs, rbd, and rgw subsystems.
There are already a few consumers of this library in the wild,
including the ceph-csi project.
--
John Mulligan
phlogistonjohn(a)asynchrono.us
jmulligan(a)redhat.com
Hello
We are getting very close to the next Quincy point release 17.2.7
Here is the list of must-have PRs https://pad.ceph.com/p/quincy_17.2.7_prs
We will start the release testing/review/approval process as soon as
all PRs from this list are merged.
If you see something missing please speak up and the dev leads will
make a decision on including it in this release.
TIA
Dear All,
I hope you are all well. I would like to introduce new tools I have developed, named "LBA tools", which include hd_write_verify & hd_write_verify_dump.
github: https://github.com/zhangyoujia/hd_write_verify
pdf: https://github.com/zhangyoujia/hd_write_verify/DISK&MEMORY stability testing and DATA consistency verifying tools and system.pdf
ppt: https://github.com/zhangyoujia/hd_write_verify/存储稳定性测试与数据一致性校验工具和系统.pptx
bin: https://github.com/zhangyoujia/hd_write_verify/bin
iso: https://github.com/zhangyoujia/hd_write_verify/iso
Data is a vital asset for many businesses, making storage stability and data consistency the most fundamental requirements in storage technology scenarios.
The purpose of storage stability testing is to ensure that storage devices or systems can operate normally and remain stable over time, while also handling various abnormal situations such as sudden power outages and network failures. This testing typically includes stress testing, load testing, fault tolerance testing, and other evaluations to assess the performance and reliability of the storage system.
Data consistency checking is designed to ensure that the data stored in the system is accurate and consistent. This means that whenever data changes occur, all replicas should be updated simultaneously to avoid data inconsistency. Data consistency checking typically involves aspects such as data integrity, accuracy, consistency, and reliability.
LBA tools are very useful for testing storage stability and verifying data consistency; they are much better than the verify functions of FIO & vdbench.
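For readers unfamiliar with the technique, the core of LBA-based verification is to make every block's contents a deterministic function of its address (and usually a write generation), so a later read can recompute the expected pattern and detect lost, torn, or misplaced writes. A minimal in-memory sketch of that idea, not the actual hd_write_verify implementation; block size and pattern formula are invented for illustration:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

constexpr size_t kBlockSize = 512;  // bytes per logical block (assumed)

// Deterministic pattern: each byte depends on the LBA, the write
// generation, and the byte offset, so a stale, torn, or misplaced
// block cannot reproduce the expected contents.
std::vector<uint8_t> make_pattern(uint64_t lba, uint32_t generation) {
  std::vector<uint8_t> block(kBlockSize);
  for (size_t i = 0; i < kBlockSize; ++i)
    block[i] = static_cast<uint8_t>((lba * 131 + generation * 31 + i) & 0xff);
  return block;
}

// Verify step: recompute the expected pattern for this address and
// generation, then compare byte-for-byte with what was read back.
bool verify_block(const std::vector<uint8_t>& block, uint64_t lba, uint32_t generation) {
  return block == make_pattern(lba, generation);
}
```

A real tool writes make_pattern(lba, gen) to disk at each LBA, records the generation, then re-reads and calls the verify step after a stress event (power cut, failover); any mismatch localizes corruption to a specific block address.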
I believe that LBA tools will have a positive impact on the community and help users handle storage data more effectively. Your feedback and suggestions are greatly appreciated, and I hope you can try using LBA tools and share your experiences and recommendations.
Best regards
Short summary: end your branch name with "-debug". See:
https://github.com/ceph/ceph-build/pull/2167
and the integration branch helper script change:
https://github.com/ceph/ceph/pull/53855
The benefit of doing this is that mutex debugging and many compiler
checks are enabled, and some optimizations are disabled (potentially
making some debugging easier). One known drawback is that execution
may be slower.
See also:
https://github.com/ceph/ceph-build/pull/2167#issuecomment-1751033910
There are build failures for CentOS 8 for which I will make tickets soon.
See also:
https://shaman.ceph.com/builds/ceph/wip-batrick-testing-20231006.014828-deb…
If this is shown to not create a lot of fallout in the QA suite
testing, this may be turned on by default without the "-debug" suffix
on branch names. I encourage QA testers to give this a try so any
issues can be shaken out.
--
Patrick Donnelly, Ph.D.
He / Him / His
Red Hat Partner Engineer
IBM, Inc.
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
Hi folks,
In Reef and newer, all RHEL versions on all arches have
HAVE_CXX11_ATOMIC=false, so they all build with -latomic.
Is this expected?
Obviously it "works", but I thought newer compilers (like RHEL 9's
gcc-c++-8.5.0-18.el8) should result in HAVE_CXX11_ATOMIC=true. Maybe I
have that backwards, and we should expect HAVE_CXX11_ATOMIC to be
false for all newer compilers? Am I misunderstanding the purpose of
CheckCxxAtomic.cmake?
Is HAVE_CXX11_ATOMIC better than libatomic for performance? Or something else?
The reason I ask is that we've fixed a couple of corner-case bugs (e.g.
s390x) here over the past few years. Reef+ can build on modern GCC
with s390x now that we've fixed these bugs, but I'm wondering if the
consequence of always setting HAVE_CXX11_ATOMIC=false now is
intentional or desirable.
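For anyone else digging into this: CheckCxxAtomic.cmake essentially try-compiles a small atomics probe without -latomic; if the link fails (undefined __atomic_* symbols, common for some widths on some arches), HAVE_CXX11_ATOMIC is set to false and -latomic is added. A rough reproduction of that probe is below; the exact widths checked may differ from the real script.

```cpp
#include <atomic>
#include <cstdint>

// Probe roughly equivalent to what CheckCxxAtomic.cmake try-compiles:
// if this links without -latomic, the compiler emits inline atomic ops
// for all these widths (approximately what HAVE_CXX11_ATOMIC=true
// means); if linking fails with undefined __atomic_* symbols, cmake
// falls back to linking libatomic instead.
bool all_widths_lock_free() {
  std::atomic<uint8_t>  a1{1};
  std::atomic<uint16_t> a2{2};
  std::atomic<uint32_t> a4{4};
  std::atomic<uint64_t> a8{8};
  return a1.is_lock_free() && a2.is_lock_free() &&
         a4.is_lock_free() && a8.is_lock_free();
}
```

On targets where these are lock-free, performance-wise there is usually little difference, since libatomic calls only get emitted for the widths the compiler cannot inline; the question is mainly whether the extra link dependency is intended.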