Hi Michal,
Thank you for volunteering to help test the Reef release!
On Tue, Jun 27, 2023 at 6:44 AM Michal Strnad <michal.strnad@cesnet.cz> wrote:
Hi everyone,
We read that you are looking for Ceph users who would be willing to help
with performance testing of the new version of Ceph called Reef. We would
like to volunteer and offer our assistance :-).
Currently, we are setting up a large cluster consisting of fifty storage
nodes, each with 24 rotational disks and 8 NVMe drives, some of which
are designated for BlueStore and others for data purposes. Each of these
machines is equipped with an AMD EPYC 7282 16-Core processor, ~314GB of
memory, and a 2x25Gbps network connection. The network on each of these
machines is used for both public and cluster communication, and if
necessary, we can prioritize one over the other through QoS adjustments
within the VLAN. However, we haven't had the need to do so thus far.
Furthermore, we have sixteen application servers for monitors, MGR,
metadata servers, and radosgw gateways. Each of these application
servers is equipped with an AMD EPYC 7502 32-Core processor, ~250GB of
memory, and a 2x25Gbps network connection.
Both the storage and application servers are connected to two Nexus 9000
switches, with connectivity of several hundred Gbps towards the internet.
The cluster will be operational within a few weeks, with Ceph already
installed and ready to undergo performance testing; once it is ready, we
can start testing the Reef version. We anticipate having approximately
2-3 weeks for testing. Are you interested in the performance results? To
make the results more useful, it would be beneficial to coordinate these
tests in some way, so that we don't repeat what others have already
tried. Could you please guide us on which specific aspects we should
focus on, which parameters to test, and how to properly conduct the tests?
We are particularly interested in seeing the performance impact of the new
RocksDB version we'll be shipping with Reef. I am adding Mark to this email
to provide guidance on performance tests.
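In the meantime, a simple baseline can come from the standard `rados bench` tool. A minimal sketch of one run follows; the pool name, PG count, durations, and thread counts are only illustrative placeholders, not a prescribed test plan:

```shell
# Create a throwaway pool just for benchmarking (name and PG count are placeholders).
ceph osd pool create benchpool 128

# 60-second 4 MiB object-write test with 16 concurrent ops;
# --no-cleanup keeps the objects so the read tests below can reuse them.
rados bench -p benchpool 60 write -b 4M -t 16 --no-cleanup

# Sequential and random read passes over the objects written above.
rados bench -p benchpool 60 seq -t 16
rados bench -p benchpool 60 rand -t 16

# Remove the benchmark objects, then the pool
# (pool deletion requires mon_allow_pool_delete=true).
rados -p benchpool cleanup
ceph osd pool rm benchpool benchpool --yes-i-really-really-mean-it
```

Running the same script before and after switching versions, on an otherwise idle cluster, gives a first apples-to-apples comparison before moving on to RGW- or CephFS-specific workloads.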
Once we reach an agreement, it will be possible to arrange some form of
access to the machines, for example by meeting via video conference and
fine-tuning them together. Alternatively, we can also work on it through
email, IRC, Slack, or any other suitable means.
We are coordinating community efforts around such testing in the
#ceph-at-scale Slack channel on ceph-storage.slack.com. I have sent you
an invite.
Thanks,
Neha
Kind regards,
Michal Strnad
On 6/13/23 22:27, Neha Ojha wrote:
Hi everyone,
This is the first release candidate for Reef.
The Reef release comes with a new RocksDB version (7.9.2) [0], which
incorporates several performance improvements and features. Our internal
testing doesn't show any side effects from the new version, but we are
very eager to hear community feedback on it. This is the first release
with the ability to tune RocksDB settings per column family [1], which
allows more granular tunings to be applied to the different kinds of data
stored in RocksDB. A new set of settings is used in Reef to optimize
performance for most kinds of workloads, with a slight penalty in some
cases that is outweighed by large improvements, in terms of compactions
and write amplification, in use cases such as RGW. We highly encourage
community members to try these against their own performance benchmarks
and use cases. The detailed list of RocksDB and BlueStore changes can be
found at https://pad.ceph.com/p/reef-rc-relnotes.
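For anyone who wants to look at the per-column-family plumbing on their own cluster, the relevant BlueStore options can be inspected with the `ceph config` CLI. This is only a read-only sketch; check the release notes above for the actual Reef defaults before changing anything:

```shell
# Show how BlueStore shards its RocksDB into column families
# (the sharding definition string; don't change it without reading the docs).
ceph config get osd bluestore_rocksdb_cfs

# Options that still apply to RocksDB globally, across all column families.
ceph config get osd bluestore_rocksdb_options
```

Comparing these values between a Quincy and a Reef OSD is a quick way to see exactly which settings changed between the releases.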
If any of our community members would like to help us with performance
investigations or regression testing of the Reef release candidate,
please feel free to provide feedback via email or in
https://pad.ceph.com/p/reef_scale_testing. For more active discussions,
please use the #ceph-at-scale Slack channel on ceph-storage.slack.com.
Overall things are looking pretty good based on our testing. Please try
it out and report any issues you encounter. Happy testing!
Thanks,
Neha
Get the release from
* Git at git://github.com/ceph/ceph.git
* Tarball at https://download.ceph.com/tarballs/ceph-18.1.0.tar.gz
* Containers at https://quay.io/repository/ceph/ceph
* For packages, see https://docs.ceph.com/en/latest/install/get-packages/
* Release git sha1: c2214eb5df9fa034cc571d81a32a5414d60f0405
[0] https://github.com/ceph/ceph/pull/49006
[1] https://github.com/ceph/ceph/pull/51821
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-leave@ceph.io