Hi!
When we finish the GPFS and BeeGFS tests and move on to Ceph, as
mentioned in my previous email, we can proceed with RocksDB testing,
including its behavior under real workload conditions. Our users
primarily use S3, followed by RBD. For S3, the typical tools are s3cmd,
s5cmd, aws-cli, Veeam, Restic, Bacula, etc. For RBD images, the common
scenario is to attach the block device to the user's own server,
encrypt it, create a file system on top, and write data as they see
fit. I just wanted to give you a quick picture of the kind of workload
you can expect.
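To make the RBD part concrete, a typical user session looks roughly
like this (just a sketch; the pool/image names, LUKS and XFS are my
assumptions, individual users differ):

    rbd create --size 100G volumes/app01         # create an image in an example pool
    rbd map volumes/app01                        # exposes it as e.g. /dev/rbd0
    cryptsetup luksFormat /dev/rbd0              # encrypt the block device
    cryptsetup open /dev/rbd0 app01-crypt
    mkfs.xfs /dev/mapper/app01-crypt             # file system on top
    mount /dev/mapper/app01-crypt /mnt/app01     # then write data as usual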
We also thought of giving you easy access to information about all our
clusters, including the one we are currently discussing, in the form of
telemetry. I can enable the perf channel of the telemetry module to
give you performance metrics, and the ident channel, where I can set an
agreed email address as the cluster identifier. Do you agree?
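Concretely, I have something like the following in mind (again just a
sketch; the exact channel names and options should be checked against
the telemetry docs for the release we end up on, and the contact
address below is a placeholder):

    ceph telemetry enable channel perf                   # performance metrics
    ceph telemetry enable channel ident                  # cluster identification
    ceph config set mgr mgr/telemetry/contact 'ops@example.com'
    ceph telemetry preview                               # review what would be reported
    ceph telemetry on                                    # start sending reports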
Thanks,
Michal
> After an agreement, it will be possible to arrange some form of access
> to the machines, for example, by meeting via video conference and
> fine-tuning them together. Alternatively, we can also work on it
> through email, IRC, Slack, or any other suitable means.
>
>
> We are coordinating community efforts around such testing in
> #ceph-at-scale slack channel in ceph-storage.slack.com. I sent you an
> invite.
>
> Thanks,
> Neha
>
>
> Kind regards,
> Michal Strnad
>
>
> On 6/13/23 22:27, Neha Ojha wrote:
> > Hi everyone,
> >
> > This is the first release candidate for Reef.
> >
> > The Reef release comes with a new RocksDB version (7.9.2) [0], which
> > incorporates several performance improvements and features. Our
> > internal testing doesn't show any side effects from the new version,
> > but we are very eager to hear community feedback on it. This is the
> > first release to have the ability to tune RocksDB settings per column
> > family [1], which allows for more granular tunings to be applied to
> > different kinds of data stored in RocksDB. A new set of settings has
> > been used in Reef to optimize performance for most kinds of workloads
> > with a slight penalty in some cases, outweighed by large improvements
> > in use cases such as RGW, in terms of compactions and write
> > amplification. We would highly encourage community members to give
> > these a try against their performance benchmarks and use cases. The
> > detailed list of changes in terms of RocksDB and BlueStore can be
> > found in https://pad.ceph.com/p/reef-rc-relnotes.
> >
> > If any of our community members would like to help us with
> > performance investigations or regression testing of the Reef release
> > candidate, please feel free to provide feedback via email or in
> > https://pad.ceph.com/p/reef_scale_testing. For more active
> > discussions, please use the #ceph-at-scale slack channel in
> > ceph-storage.slack.com.
> >
> > Overall things are looking pretty good based on our testing. Please
> > try it out and report any issues you encounter. Happy testing!
> >
> > Thanks,
> > Neha
> >
> > Get the release from
> >
> > * Git at git://github.com/ceph/ceph.git
> > * Tarball at https://download.ceph.com/tarballs/ceph-18.1.0.tar.gz
> > * Containers at https://quay.io/repository/ceph/ceph
> > * For packages, see https://docs.ceph.com/en/latest/install/get-packages/
> > * Release git sha1: c2214eb5df9fa034cc571d81a32a5414d60f0405
> >
> > [0] https://github.com/ceph/ceph/pull/49006
> > [1] https://github.com/ceph/ceph/pull/51821
> > _______________________________________________
> > ceph-users mailing list -- ceph-users@ceph.io
> > To unsubscribe send an email to ceph-users-leave@ceph.io
>