Hi,
I went through the Ceph open-source code (link: https://github.com/ceph/ceph)
and saw your contributions. It would be a great help if you could answer some
questions.
Following are my queries:
1. I wanted to know how the connection works in Ceph. Does it use the iSCSI
protocol to transfer RBD block devices to clients?
2. How do OSDs work together to present a single RBD block device?
3. Do you know the precise protocol that is used to send RBD images to
clients?
Hoping to hear from you soon.
Thanks and regards,
David Thomas
Hi everyone,
As some of you already know, I have been the tech lead of RADOS for over 4
years. In the last couple of years, I have taken on management
responsibilities and have also been a member of the Ceph Executive Council.
As we gear up for our next release, Reef, and start preparing for the S
release, I think it is the right time for me to hand over my tech lead
responsibilities. I am very happy to announce that Radoslaw Zarzynski
(Radek) is now the new tech lead of RADOS and I am very confident that
RADOS is in great hands!
Radek is a long-time member of the Ceph Core RADOS Team. He has contributed
multiple features to various areas of Ceph, including the OSD, Messenger,
and RGW, to name a few. More recently, he has been a key member of the
Crimson project, which is a new incarnation of the Ceph OSD targeting
faster storage and network devices.
I will continue to be an active member of the Ceph community as a part of
the Ceph Executive Council, the Ceph Leadership Team, and the RADOS team.
Please join me in congratulating Radek and wishing him good luck!
Thanks,
Neha
Hello-
I've been looking into an issue we've had when migrating from RGW 15.2.17 to RGW 17.2.5.
When getting bucket stats (e.g. radosgw-admin bucket stats), the mtime field in RGW 15.2.17 has the expected time/date value, but in 17.2.5 it is "0.0000". I've tracked that down to a bug introduced during a large refactoring checkin:
https://github.com/ceph/ceph/commit/72d1a363263cf707d022ee756122236ba175cda2
The fix for that is simple: just add a call to bucket->get_modification_time() in rgw_bucket.cc to match the corresponding call to bucket->get_creation_time().
This works fine in 17.2.5 - when creating a bucket and getting its stats on a local "vstart.sh" test cluster, I get an mtime that is a few seconds later than the creation time.
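For concreteness, here is roughly the shape of that change - a sketch only,
not the actual patch, assuming the stats dump in rgw_bucket.cc follows Ceph's
usual utime_t / ceph::Formatter pattern. The function name, includes, and
surrounding plumbing are illustrative; only the two bucket accessors are the
real ones mentioned above.

  #include "common/Formatter.h"   // ceph::Formatter
  #include "include/utime.h"      // utime_t
  #include "rgw_sal.h"            // rgw::sal::Bucket (path as seen from src/rgw)

  // Sketch: dump creation_time as before and add the matching mtime line
  // next to it, assuming utime_t can be built from ceph::real_time the same
  // way the existing creation_time dump does.
  static void dump_bucket_times(rgw::sal::Bucket* bucket, ceph::Formatter* f)
  {
    utime_t ct(bucket->get_creation_time());
    ct.gmtime(f->dump_stream("creation_time"));   // existing field

    utime_t mt(bucket->get_modification_time());  // proposed addition
    mt.gmtime(f->dump_stream("mtime"));           // restores the mtime field
  }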
However, when I try the same fix on the current head, the behavior goes back to an mtime of "0.0000". As it turns out, the mtime returned from bucket->get_modification_time() right after creation is the default-initialized value - not updated as it was previously.
I haven't done more digging into this yet, but before I do I was curious whether there have been intentional changes to how bucket mtime works. It'd be easy enough to add a check for an empty mtime and swap in the creation time instead (roughly as sketched below), but I wanted to make sure I wouldn't just be masking some other issue.
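And roughly what I mean by that fallback, continuing the sketch above (same
includes and caveats; I'm treating "empty" as the default-constructed
ceph::real_time, since that is what the accessor appears to return here):

  // Continuation of the sketch: fall back to the creation time when the mtime
  // is still the default-initialized value (which is what renders as "0.0000"
  // in the stats output).
  static void dump_bucket_mtime_with_fallback(rgw::sal::Bucket* bucket,
                                              ceph::Formatter* f)
  {
    ceph::real_time mtime = bucket->get_modification_time();
    if (mtime == ceph::real_time{}) {        // still default-initialized?
      mtime = bucket->get_creation_time();   // swap in the creation time
    }
    utime_t ut(mtime);
    ut.gmtime(f->dump_stream("mtime"));
  }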
Thanks!
-Mike Neufeld
Hi everyone,
There is a new meeting called "BlueStore: upkeep and evolution" in the
community calendar [0] to discuss short-term improvements and long-term
goals for BlueStore's evolution. This meeting will happen every Tuesday,
6-7 am PT; please feel free to participate in any capacity you like!
Thanks,
Neha
[0]
https://calendar.google.com/calendar/u/0/embed?src=9ts9c7lt7u1vic2ijvvqqlfp…
Hi Folks,
The weekly performance meeting will be starting in approximately 15
minutes at 8AM PST! Today we will catch up after a 2-week break due to
Ceph Day NY. There are lots of potential topics to discuss, ranging from
last-minute performance PR inclusions for Reef (New RocksDB!) to recent
customer upgrades and deployments. Please feel free to add your own
topic as well!
Etherpad:
https://pad.ceph.com/p/performance_weekly
Bluejeans:
https://bluejeans.com/908675367
Mark
Highlights from this week's CLT meeting:
- Recent lab issues have been resolved. We are kicking off Ceph Infra
Weekly today - a community meeting to discuss short-term and long-term
infrastructure issues.
- The Reef release has now been cut. For further fixes, we'll use our usual
procedure of merging to main and backporting to reef.
- Extended teuthology, performance, and scale testing for Reef will follow:
  - gibba cluster for RGW workloads
  - Mark will run performance tests using CBT on upstream mako hardware, and
    we'll use David Orman's hardware to explore running CBT outside of the
    sepia lab.
- LRC upgrade will happen when we are ready for the RC.
- Yuri will run baseline teuthology runs for Reef for all suites.
- Cephalocon!
  - The schedule is live: https://events.linuxfoundation.org/cephalocon/
  - Dev summit coordination etherpad:
    https://pad.ceph.com/p/cephalocon-dev-summit-2023
Thanks,
Neha
Hi everyone,
We are past the dev freeze for the Reef release, and the branch has now been
cut: https://github.com/ceph/ceph/tree/reef. For further fixes, let's use
our usual procedure of merging to main and backporting to reef.
Thanks,
Neha
Hello
We are planning to start QE validation for the release next week.
If you have PRs that are to be part of it, please let us know by
adding "needs-qa" to the 'quincy' milestone ASAP.
Thx
YuriW
Hi Folks,
I'm Kevin Zhao from Linaro, and I've cc'd Qinfei Liu from Huawei, who is the
author of this PR. We are working on building and supporting Ceph on the
openEuler OS.
openEuler (https://www.openeuler.org/en/) is now a very active and popular
operating system community. It is RPM-based, and all of its packages are
built from source. openEuler now attracts a lot of end users and
developers, especially in China; in fact, some companies already run trial
deployment clusters of Ceph on openEuler.
The PR Link: https://github.com/ceph/ceph/pull/50182
Given the variety of platforms in the Ceph upstream, we think it would be
good for the community to support building on openEuler, so we have proposed
the PR above and hope to get feedback. We would also like to know the
*process for Ceph to support a new operating system*.
One thing I want to mention is that we have submitted the patch against the
Ceph 16.2.7 branch first, because the Ceph version in openEuler 22.03-LTS is
16.2.7; we will also submit another PR against the main branch.
Hope to hear the community's feedback soon - thanks in advance!
--
*Best Regards*
*Kevin Zhao*
Tech Lead, LDCG Cloud Infra & Storage
Linaro Vertical Technologies
IRC(freenode): kevinz
Slack(kubernetes.slack.com): kevinz
kevin.zhao(a)linaro.org | Mobile/Direct/Wechat: +86 18818270915