Hi Developers,
Ceph is happy to be participating in Grace Hopper Open Source Day [1], an
all-day hackathon for beginning programmers!
You may see some pull requests coming into the Ceph repository from
participants today, labeled with the tag "open-source-day-2023" [2].
If you'd like to help out, you're welcome to provide feedback for these
first-time contributors on the PRs! For many, this is their first
introduction to Ceph, so they may need help signing off their commits,
passing checks, etc.
Thanks everyone, and please reach out to me if you have any questions!
- Laura
1. https://ghc.anitab.org/awards-programs/open-source-day/
2. https://github.com/ceph/ceph/labels/open-source-day-2023
--
Laura Flores
She/Her/Hers
Software Engineer, Ceph Storage <https://ceph.io>
Chicago, IL
lflores(a)ibm.com | lflores(a)redhat.com
M: +17087388804
Hello All,
I found a weird issue with ceph_readdirplus_r() when it is used together
with ceph_ll_lookup_vino(), on ceph version 17.2.5
(98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable).
Any help is really appreciated.
Thanks in advance,
-Joe
Test Scenario:
A. Create a CephFS subvolume "4" and create a directory "user_root" in the
root of the subvolume.
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23#
ceph fs subvolume ls cephfs
[
{
"name": "4"
}
]
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23#
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23#
ls -l
total 0
drwxrwxrwx 2 root root 0 Sep 22 09:16 user_root
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23#
B. In the "user_root" directory, create some files and directories.
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/user_root#
mkdir dir1 dir2
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/user_root#
ls
dir1 dir2
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/user_root#
echo "Hello Worldls!" > file1
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/user_root#
echo "Hello Worldls!" > file2
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/user_root#
ls
dir1 dir2 file1 file2
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/user_root#
cat file*
Hello Worldls!
Hello Worldls!
C. Create a subvolume snapshot "sofs-4-5". Please ignore the older
snapshots.
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23#
ceph fs subvolume snapshot ls cephfs 4
[
{
"name": "sofs-4-1"
},
{
"name": "sofs-4-2"
},
{
"name": "sofs-4-3"
},
{
"name": "sofs-4-4"
},
{
"name": "sofs-4-5"
}
]
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23#
Here "sofs-4-5" has snapshot id 6.
Got this from libcephfs and have verified at Line
snapshot_inode_lookup.cpp#L212. (Attached to the email)
#Content within the snapshot
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23#
cd .snap/
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/.snap#
ls
_sofs-4-1_1099511627778 _sofs-4-2_1099511627778 _sofs-4-3_1099511627778
_sofs-4-4_1099511627778 _sofs-4-5_1099511627778
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/.snap#
cd _sofs-4-5_1099511627778/
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/.snap/_sofs-4-5_1099511627778#
ls
user_root
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/.snap/_sofs-4-5_1099511627778#
cd user_root/
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/.snap/_sofs-4-5_1099511627778/user_root#
ls
dir1 dir2 file1 file2
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/.snap/_sofs-4-5_1099511627778/user_root#
cat file*
Hello Worldls!
Hello Worldls!
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/.snap/_sofs-4-5_1099511627778/user_root#
D. Delete all the files and directories in "user_root"
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/user_root#
rm -rf *
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/user_root#
ls
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/user_root#
E. Using libcephfs in a C++ program (attached to this email), do the following:
1. Get the Inode of "user_root" using ceph_ll_walk().
2. Open the directory using the Inode received from ceph_ll_walk() and call
ceph_readdirplus_r(). (A rough sketch of these calls follows the output below.)
We don't see any dentries (except "." and ".."), as we have deleted all
files and directories in the active filesystem. This is expected and
correct!
=================================/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/user_root/=====================================
Path/Name
:"/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/user_root/"
Inode Address : 0x7f5ce0009900
Inode Number : 1099511629282
Snapshot Number : 18446744073709551614
Inode Number : 1099511629282
Snapshot Number : 18446744073709551614
. Ino: 1099511629282 SnapId: 18446744073709551614 Address: 0x7f5ce0009900
.. Ino: 1099511627779 SnapId: 18446744073709551614 Address: 0x7f5ce00090f0
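For reference, the sequence of calls in steps 1-2 (and 5-6) looks roughly like
the sketch below. This is a minimal sketch, not the attached
snapshot_inode_lookup.cpp verbatim; the error handling, statx want/flags
values, and helper name are my own simplifications:

#include <cephfs/libcephfs.h>
#include <dirent.h>
#include <cstdio>

// List a directory with ceph_readdirplus_r(), starting from a path.
// Assumes cmount was already set up with ceph_create(), ceph_conf_read_file()
// and ceph_mount().
static void list_dir_by_path(struct ceph_mount_info *cmount, const char *path)
{
    UserPerm *perms = ceph_mount_perms(cmount);
    struct Inode *dir_in = nullptr;
    struct ceph_statx stx;

    // Step 1: resolve the path to an Inode.
    if (ceph_ll_walk(cmount, path, &dir_in, &stx, CEPH_STATX_INO, 0, perms) != 0)
        return;

    // Step 2: open the directory from that Inode and iterate with readdirplus.
    struct ceph_dir_result *dirp = nullptr;
    if (ceph_ll_opendir(cmount, dir_in, &dirp, perms) != 0)
        return;

    struct dirent de;
    struct ceph_statx entry_stx;
    struct Inode *entry_in = nullptr;
    while (ceph_readdirplus_r(cmount, dirp, &de, &entry_stx,
                              CEPH_STATX_INO, 0, &entry_in) > 0) {
        printf("%s Ino: %llu\n", de.d_name,
               (unsigned long long)entry_stx.stx_ino);
    }
    ceph_ll_releasedir(cmount, dirp);
}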
3. Using ceph_ll_lookup_vino(), get the Inode * of "user_root" for
snapshot 6. Here "sofs-4-5" has snapshot id 6, which I got from libcephfs and
verified at snapshot_inode_lookup.cpp#L212 (attached to this email).
4. Open the directory using the Inode * received from ceph_ll_lookup_vino()
and call ceph_readdirplus_r(). (Again, a rough sketch follows the output below.)
We don't see any dentries (except "." and ".."). This is NOT expected and
NOT correct, as there are files and directories in snapshot 6.
=================================1099511629282:6=====================================
Path/Name :"1099511629282:6"
Inode Address : 0x7f5ce000a110
Inode Number : 1099511629282
Snapshot Number : 6
Inode Number : 1099511629282
Snapshot Number : 6
. Ino: 1099511629282 SnapId: 6 Address: 0x7f5ce000a110
.. Ino: 1099511629282 SnapId: 6 Address: 0x7f5ce000a110
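Similarly, steps 3-4 (and 7-8) correspond roughly to the sketch below. The way
vinodeno_t is filled in is an assumption on my part (depending on the header
version its members are inodeno_t/snapid_t or plain integers); again, this is
not the attached program verbatim:

#include <cephfs/libcephfs.h>
#include <dirent.h>
#include <cstdio>
#include <cstdint>

// List a directory with ceph_readdirplus_r(), starting from
// (inode number, snapshot id) instead of a path.
static void list_dir_by_vino(struct ceph_mount_info *cmount,
                             uint64_t ino, uint64_t snapid)
{
    // e.g. ino = 1099511629282 and snapid = 6 ("sofs-4-5") in the scenario above.
    vinodeno_t vino;
    vino.ino = ino;
    vino.snapid = snapid;

    // Steps 3/7: resolve (ino, snapid) directly to an Inode.
    struct Inode *dir_in = nullptr;
    if (ceph_ll_lookup_vino(cmount, vino, &dir_in) != 0)
        return;

    // Steps 4/8: open and iterate, exactly as in the path-based case.
    UserPerm *perms = ceph_mount_perms(cmount);
    struct ceph_dir_result *dirp = nullptr;
    if (ceph_ll_opendir(cmount, dir_in, &dirp, perms) != 0)
        return;

    struct dirent de;
    struct ceph_statx stx;
    struct Inode *entry_in = nullptr;
    // Before step 5 touches the snapshot via the .snap path, this loop only
    // ever returns "." and ".."; afterwards it returns the real entries.
    while (ceph_readdirplus_r(cmount, dirp, &de, &stx,
                              CEPH_STATX_INO, 0, &entry_in) > 0) {
        printf("%s Ino: %llu\n", de.d_name, (unsigned long long)stx.stx_ino);
    }
    ceph_ll_releasedir(cmount, dirp);
}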
5. Get the Inode of "user_root/.snap/_sofs-4-5_1099511627778/" using
ceph_ll_walk().
6. Open the directory using the Inode received from ceph_ll_walk() and call
ceph_readdirplus_r().
We see ALL dentries of all files and directories in the snapshot. This
is expected and correct!
=================================/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/user_root/.snap/_sofs-4-5_1099511627778/=====================================
Path/Name
:"/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/user_root/.snap/_sofs-4-5_1099511627778/"
Inode Address : 0x7f5ce000a110
Inode Number : 1099511629282
Snapshot Number : 6
Inode Number : 1099511629282
Snapshot Number : 6
. Ino: 1099511629282 SnapId: 6 Address: 0x7f5ce000a110
.. Ino: 1099511629282 SnapId: 18446744073709551615 Address: 0x5630ab946340
file1 Ino: 1099511628291 SnapId: 6 Address: 0x7f5ce000aa90
dir1 Ino: 1099511628289 SnapId: 6 Address: 0x7f5ce000b180
dir2 Ino: 1099511628290 SnapId: 6 Address: 0x7f5ce000b800
file2 Ino: 1099511628292 SnapId: 6 Address: 0x7f5ce000be80
7. Now, again using ceph_ll_lookup_vino(), get the Inode * of "user_root"
for snapshot 6 ("sofs-4-5", snapshot id 6).
8. Open the directory using the Inode * received from
ceph_ll_lookup_vino() and call ceph_readdirplus_r().
Now we see all the files and directories in the snapshot!
=================================1099511629282:6=====================================
Path/Name :"1099511629282:6"
Inode Address : 0x7f5ce000a110
Inode Number : 1099511629282
Snapshot Number : 6
Inode Number : 1099511629282
Snapshot Number : 6
. Ino: 1099511629282 SnapId: 6 Address: 0x7f5ce000a110
.. Ino: 1099511629282 SnapId: 18446744073709551615 Address: 0x5630ab946340
file1 Ino: 1099511628291 SnapId: 6 Address: 0x7f5ce000aa90
dir1 Ino: 1099511628289 SnapId: 6 Address: 0x7f5ce000b180
dir2 Ino: 1099511628290 SnapId: 6 Address: 0x7f5ce000b800
file2 Ino: 1099511628292 SnapId: 6 Address: 0x7f5ce000be80
Am I missing something in how I'm using these APIs?
Files attached to this email:
- snapshot_inode_lookup.cpp_output.txt - full output of the program
- snapshot_inode_lookup.cpp - the C++ program
- /etc/ceph/ceph.conf
- client.log - Ceph client log captured during the run of the program
Compile Command:
g++ -o snapshot_inode_lookup ./snapshot_inode_lookup.cpp -g -ldl -ldw
-lcephfs -lboost_filesystem --std=c++17
Linux Details,
root@ss-joe-01(bash):/home/hydrauser# uname -a
Linux ss-joe-01 5.10.0-23-amd64 #1 SMP Debian 5.10.179-1 (2023-05-12)
x86_64 GNU/Linux
root@ss-joe-01(bash):/home/hydrauser#
Ceph Details,
root@ss-joe-01(bash):/home/hydrauser# ceph -v
ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy
(stable)
root@ss-joe-01(bash):/home/hydrauser#
root@ss-joe-01(bash):/home/hydrauser# ceph -s
  cluster:
    id:     fb43d857-d165-4189-87fc-cf1debce9170
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ss-joe-01,ss-joe-02,ss-joe-03 (age 4d)
    mgr: ss-joe-01(active, since 4d), standbys: ss-joe-03, ss-joe-02
    mds: 1/1 daemons up
    osd: 3 osds: 3 up (since 4d), 3 in (since 4d)

  data:
    volumes: 1/1 healthy
    pools:   3 pools, 49 pgs
    objects: 39 objects, 1.0 MiB
    usage:   96 MiB used, 30 GiB / 30 GiB avail
    pgs:     49 active+clean
root@ss-joe-01(bash):/home/hydrauser#
root@ss-joe-01(bash):/home/hydrauser# dpkg -l | grep ceph
ii  ceph                           17.2.5-1~bpo11+1  amd64  distributed storage and file system
ii  ceph-base                      17.2.5-1~bpo11+1  amd64  common ceph daemon libraries and management tools
ii  ceph-base-dbg                  17.2.5-1~bpo11+1  amd64  debugging symbols for ceph-base
ii  ceph-common                    17.2.5-1~bpo11+1  amd64  common utilities to mount and interact with a ceph storage cluster
ii  ceph-common-dbg                17.2.5-1~bpo11+1  amd64  debugging symbols for ceph-common
ii  ceph-fuse                      17.2.5-1~bpo11+1  amd64  FUSE-based client for the Ceph distributed file system
ii  ceph-fuse-dbg                  17.2.5-1~bpo11+1  amd64  debugging symbols for ceph-fuse
ii  ceph-mds                       17.2.5-1~bpo11+1  amd64  metadata server for the ceph distributed file system
ii  ceph-mds-dbg                   17.2.5-1~bpo11+1  amd64  debugging symbols for ceph-mds
ii  ceph-mgr                       17.2.5-1~bpo11+1  amd64  manager for the ceph distributed storage system
ii  ceph-mgr-cephadm               17.2.5-1~bpo11+1  all    cephadm orchestrator module for ceph-mgr
ii  ceph-mgr-dashboard             17.2.5-1~bpo11+1  all    dashboard module for ceph-mgr
ii  ceph-mgr-dbg                   17.2.5-1~bpo11+1  amd64  debugging symbols for ceph-mgr
ii  ceph-mgr-diskprediction-local  17.2.5-1~bpo11+1  all    diskprediction-local module for ceph-mgr
ii  ceph-mgr-k8sevents             17.2.5-1~bpo11+1  all    kubernetes events module for ceph-mgr
ii  ceph-mgr-modules-core          17.2.5-1~bpo11+1  all    ceph manager modules which are always enabled
ii  ceph-mon                       17.2.5-1~bpo11+1  amd64  monitor server for the ceph storage system
ii  ceph-mon-dbg                   17.2.5-1~bpo11+1  amd64  debugging symbols for ceph-mon
ii  ceph-osd                       17.2.5-1~bpo11+1  amd64  OSD server for the ceph storage system
ii  ceph-osd-dbg                   17.2.5-1~bpo11+1  amd64  debugging symbols for ceph-osd
ii  ceph-volume                    17.2.5-1~bpo11+1  all    tool to facilidate OSD deployment
ii  cephadm                        17.2.5-1~bpo11+1  amd64  cephadm utility to bootstrap ceph daemons with systemd and containers
ii  libcephfs2                     17.2.5-1~bpo11+1  amd64  Ceph distributed file system client library
ii  libcephfs2-dbg                 17.2.5-1~bpo11+1  amd64  debugging symbols for libcephfs2
ii  libsqlite3-mod-ceph            17.2.5-1~bpo11+1  amd64  SQLite3 VFS for Ceph
ii  libsqlite3-mod-ceph-dbg        17.2.5-1~bpo11+1  amd64  debugging symbols for libsqlite3-mod-ceph
ii  python3-ceph-argparse          17.2.5-1~bpo11+1  all    Python 3 utility libraries for Ceph CLI
ii  python3-ceph-common            17.2.5-1~bpo11+1  all    Python 3 utility libraries for Ceph
ii  python3-cephfs                 17.2.5-1~bpo11+1  amd64  Python 3 libraries for the Ceph libcephfs library
ii  python3-cephfs-dbg             17.2.5-1~bpo11+1  amd64  Python 3 libraries for the Ceph libcephfs library
root@ss-joe-01(bash):/home/hydrauser#
Hi Ceph users and developers,
We invite you to join us at the User + Dev Relaunch, happening this
Thursday at 10:00 AM EDT! See below for more meeting details. Also see this
blog post to read more about the relaunch:
https://ceph.io/en/news/blog/2023/user-dev-meeting-relaunch/
We have two guest speakers who will present their focus topics during the
first 40 minutes of the session:
1. "What to do when Ceph isn't Ceph-ing" by Cory Snyder
Topics include troubleshooting tips, effective ways to gather help
from the community, ways to improve cluster health and insights, and more!
2. "Ceph Usability Improvements" by Jonas Sterr
A continuation of a talk from Cephalocon 2023, updated after trying
out the Reef Dashboard.
The last 20 minutes of the meeting will be dedicated to open discussion.
Feel free to add questions for the speakers or additional topics under the
"Open Discussion" section on the agenda:
https://pad.ceph.com/p/ceph-user-dev-monthly-minutes
If you have an idea for a focus topic you'd like to present at a future
meeting, you are welcome to submit it to this Google Form:
https://docs.google.com/forms/d/e/1FAIpQLSdboBhxVoBZoaHm8xSmeBoemuXoV_rmh4v…
Any Ceph user or developer is eligible to submit!
Thanks,
Laura Flores
Meeting link: https://meet.jit.si/ceph-user-dev-monthly
Time conversions:
UTC: Thursday, September 21, 14:00 UTC
Mountain View, CA, US: Thursday, September 21, 7:00 PDT
Phoenix, AZ, US: Thursday, September 21, 7:00 MST
Denver, CO, US: Thursday, September 21, 8:00 MDT
Huntsville, AL, US: Thursday, September 21, 9:00 CDT
Raleigh, NC, US: Thursday, September 21, 10:00 EDT
London, England: Thursday, September 21, 15:00 BST
Paris, France: Thursday, September 21, 16:00 CEST
Helsinki, Finland: Thursday, September 21, 17:00 EEST
Tel Aviv, Israel: Thursday, September 21, 17:00 IDT
Pune, India: Thursday, September 21, 19:30 IST
Brisbane, Australia: Friday, September 22, 0:00 AEST
Singapore, Asia: Thursday, September 21, 22:00 +08
Auckland, New Zealand: Friday, September 22, 2:00 NZST
--
Laura Flores
She/Her/Hers
Software Engineer, Ceph Storage <https://ceph.io>
Chicago, IL
lflores(a)ibm.com | lflores(a)redhat.com
M: +17087388804
Hi Folks,
We're going to be relaunching the user+dev meeting tomorrow and I'm
double booked during the perf meeting time slot, so let's cancel this
week and we'll reconvene next week. Have a good week folks!
Mark
--
Best Regards,
Mark Nelson
Head of R&D (USA)
Clyso GmbH
p: +49 89 21552391 12
a: Loristraße 8 | 80335 München | Germany
w: https://clyso.com | e: mark.nelson(a)clyso.com
We are hiring: https://www.clyso.com/jobs/
Hello,
A packed agenda today:
- User + Dev meeting relaunch happening tomorrow!
- https://ceph.io/en/news/blog/2023/user-dev-meeting-relaunch/
- https://pad.ceph.com/p/ceph-user-dev-monthly-minutes
- dropping CentOS/RHEL 8 and Ubuntu 20.04 for Squid (Casey)
- https://github.com/ceph/ceph/pull/53517
- CentOS 8 -> CentOS 9, Ubuntu 20.04 -> Ubuntu 22.04
- RHEL 8 facet would be dropped without replacement, CephFS team
to take on adding RHEL 9 for testing if need arises
- Debian packages for Reef
- lacking review on https://github.com/ceph/ceph/pull/53342
- it's a bare-minimum change, but it can't "just" be merged because it
may affect Ubuntu packages
- looks like further improvements are being consolidated in
https://github.com/ceph/ceph/pull/53546
- Sepia user cleanup/expiration policy (Patrick)
- currently ad-hoc, needs to be formally defined
- https://tracker.ceph.com/issues/62909
- using an Ansible playbook for this isn't ideal, LDAP anyone?
- https://tracker.ceph.com/issues/62908
- can rely on tickets under the Infrastructure project more; they should
be scrubbed regularly now!
- backport release cadence
- very long gaps (six months from 17.2.5 to 17.2.6, five months
and counting from 17.2.6), folks end up using private builds in
some cases
- let's start by defining a 3-4 month cadence: it's still pretty
long but should be realistic; the key is to become predictable
- what resources are needed to reduce that time to ideally 6-8 weeks
- TBD, need Yuri in the room
- Ceph Quarterly to be published on Oct 2 (Zac)
Thanks,
Ilya
Hi,
When Reef was released, the announcement said that Debian packages would
be built once the blocking bug in Bookworm was fixed. As I noted on the
tracker item https://tracker.ceph.com/issues/61845 a couple of weeks
ago, that is now the case after the most recent Bookworm point release.
I also opened a PR to make the minimal change that would build Reef
packages on Bookworm[0]. I subsequently opened another PR to fix some
low-hanging fruit in terms of packaging errors - missing #! in
maintscripts, syntax errors in debian/control, erroneous dependencies on
Essential packages[1]. Neither PR has had any feedback/review as far as
I can see.
Those packages (and the previous state of the debian/ tree) had some
significant problems - no copyright file, and some of them contain
python scripts without declaring a python dependency, so I've today
submitted a slightly larger PR that brings the dh compatibility level up
to what I think the latest lowest-common-denominator level is, as well
as fixing these errors[2].
I believe these changes all ought to go into the reef branch, but
obviously you might prefer to just make the bare-minimum-to-build change
in the first PR.
Is there any chance of having some reef packages for Bookworm, please?
Relatedly, is there interest in further packaging fixes for future
branches? lintian still has quite a lot to say about the .debs for Ceph,
and while you might reasonably not want to care about crossing every t
of Debian policy, I think there are still changes that would be worth
doing...
I should declare a bit of an interest here - I'd like to evaluate
cephadm for work use, which would require us to be able to build our own
packages per local policy[3], which in turn would mean I'd want to get
Debian-based images going again. But that requires Reef .debs being
available to install onto said images :)
Thanks,
Matthew
[0] https://github.com/ceph/ceph/pull/53342
[1] https://github.com/ceph/ceph/pull/53397
[2] https://github.com/ceph/ceph/pull/53546
[3] https://wikitech.wikimedia.org/wiki/Kubernetes/Images#Production_images
Hello, Ceph community.
I know that Ceph stores metadata in RocksDB.
So I studied RocksDB, and what I learned is as follows
(https://github.com/facebook/rocksdb/wiki/Administration-and-Data-Access-Tool, https://docs.ceph.com/en/latest/man/8/ceph-bluestore-tool/, etc.):
1. The RocksDB SST files can be inspected using the ldb tool.
2. BlueStore assigns a prefix to each kind of metadata and stores each prefix in a corresponding RocksDB column family.
3. Using the "ceph-kvstore-tool rocksdb [db path] list" command, we can check that object names exist in the SST files under prefix O (which I assume is the prefix for objects).
I have a few questions because I want to learn more about Ceph and RocksDB. Here they are:
1. I saw the following in the code, but there are many parts that I do not understand (see the snippet after this list for context):
// kv store prefixes
const string PREFIX_SUPER = "S"; // field -> value
What is the exact meaning of each prefix of BlueStore metadata? (e.g., O = object name)
2. What is the meaning of the contents of a RocksDB SST file, and how should it be interpreted?
In particular, I wonder how to read the key-value pairs in a data block.
3. It may be covered by the questions above, but is it possible to determine the physical location of an object through RocksDB?
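For context on question 1, the prefix definitions around the line quoted above
look roughly like this in src/os/bluestore/BlueStore.cc (quoted from memory, so
please verify against your exact Ceph version; newer releases add a few more
prefixes, e.g. for per-pool and per-PG omap):

// kv store prefixes (src/os/bluestore/BlueStore.cc, approximate)
const string PREFIX_SUPER = "S";       // field -> value: BlueStore superblock fields
const string PREFIX_STAT = "T";        // field -> value (int64 array): store statistics (statfs)
const string PREFIX_COLL = "C";        // collection name -> cnode_t: collections (PGs)
const string PREFIX_OBJ = "O";         // object name -> onode_t: per-object metadata (onodes)
const string PREFIX_OMAP = "M";        // u64 + keyname -> value: object omap data
const string PREFIX_DEFERRED = "L";    // id -> deferred_transaction_t: deferred (small) writes
const string PREFIX_ALLOC = "B";       // u64 offset -> u64 length: freelist / allocator state
const string PREFIX_SHARED_BLOB = "X"; // u64 id -> shared_blob_t: blobs shared between clones

This also bears on question 3: the onode stored under the "O" prefix contains
the object's extent map, which maps logical object offsets to blobs and
ultimately to physical extents on the BlueStore block device, so the physical
location of an object's data is in principle derivable from that metadata
(though the values are Ceph binary-encoded structures, not plain text).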
First of all, the question that comes to mind is as above.
Have a good day, and thank you.
At the infrastructure meeting today, we decided on a course of action
for migrating the existing /home directory to CephFS. This is being
done for a few reasons:
- Alleviate load on the root file system device (which is also hosted
on the LRC via iSCSI)
- Avoid the disk-space-full scenarios we've regularly hit
- Be more recoverable in the event of teuthology corruption/catastrophe
- Be generally much faster
- Possibly serve as a home file system on other sepia resources
To effect this:
- The new "home" CephFS file system is mounted at /cephfs/home
- User's home /home/$USER has been or will be (again) rsync'd to
/cephfs/home/$USER
- User's account "home" (/etc/passwd) is being updated to /cephfs/home/$USER
- User's old home /home/$USER will be archived to /home/.archive/$USER
- A symlink will be placed in /home/$USER pointing to
/cephfs/home/$USER for compatibility with existing
(mis-)configurations.
The main reason for not simply updating /home is to allow
administrators continued access to teuthology in the event of a
Ceph(FS) outage.
Most home directories have already been rsync'd as of 2 weeks ago. A
final rsync will be performed prior to each user's terminal migration.
In order to update a user's home directory, the user must be logged
out. Generally no action needs to be taken, but I may kindly ask you to log
out of teuthology if necessary.
Thanks to Laura Flores, Venky Shankar, Yuri Weinstein, and Leonid Usov
for volunteering as guinea pigs for my early testing. They have
already been migrated. The rest of the users will be migrated
incrementally over the next few days.
--
Patrick Donnelly, Ph.D.
He / Him / His
Red Hat Partner Engineer
IBM, Inc.
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
Hi.
Finally, we are nearing the completion of the GPFS and BeeGFS tests.
Unfortunately, we encountered several issues along the way, such as
significant disk failures, etc. We estimate that in two to three weeks,
we could start the Ceph installation. Are you still interested in trying
out some configurations and running tests, although they might be
different now due to the time delay?
Thanks,
Michal
On 7/31/23 21:10, Michal Strnad wrote:
> Hi!
>
> When we finish the GPFS and BeeGFS tests and move on to Ceph, as
> mentioned in my previous email, we can proceed with RocksDB testing,
> including its behavior under real workload conditions. Our users
> primarily utilize S3 and then RBD. Regarding S3, typical tools used
> include s3cmd, s5cmd, aws-cli, Veeam, Restic, Bacula, etc. For RBD
> images, the common scenario involves attaching the block device to their
> own server, encrypting it, creating a file system on top, and writing
> data according to their preferences. I just wanted to quickly describe
> to you the type of workload you can expect ...
>
> We thought of providing you with easy access to information about all
> clusters, including the one we are currently discussing, in the form of
> telemetry. I can enable the perf module to give you performance metrics
> and then the ident module, where I can set a defined email as an
> identifier. Do you agree?
>
> Thanks,
> Michal
>
>
>>> After an agreement, it will be possible to arrange some form of
>>> access
>>> to the machines, for example, by meeting via video conference and
>>> fine-tuning them together. Alternatively, we can also work on it
>>> through
>>> email, IRC, Slack, or any other suitable means.
>>>
>>>
>>> We are coordinating community efforts around such testing in
>>> #ceph-at-scale slack channel in ceph-storage.slack.com. I sent you an invite.
>>>
>>> Thanks,
>>> Neha
>>>
>>>
>>> Kind regards,
>>> Michal Strnad
>>>
>>>
>>> On 6/13/23 22:27, Neha Ojha wrote:
>>> > Hi everyone,
>>> >
>>> > This is the first release candidate for Reef.
>>> >
>>> > The Reef release comes with a new RocksDB version (7.9.2) [0],
>>> which
>>> > incorporates several performance improvements and features. Our
>>> internal
>>> > testing doesn't show any side effects from the new version, but
>>> we are very
>>> > eager to hear community feedback on it. This is the first
>>> release to have
>>> > the ability to tune RocksDB settings per column family [1], which
>>> allows for
>>> > more granular tunings to be applied to different kinds of data
>>> stored in
>>> > RocksDB. A new set of settings has been used in Reef to optimize
>>> > performance for most kinds of workloads with a slight penalty in
>>> some
>>> > cases, outweighed by large improvements in use cases such as
>>> RGW, in terms
>>> > of compactions and write amplification. We would highly
>>> encourage community
>>> > members to give these a try against their performance benchmarks
>>> and use
>>> > cases. The detailed list of changes in terms of RocksDB and
>>> BlueStore can be
>>> > found in https://pad.ceph.com/p/reef-rc-relnotes.
>>> >
>>> > If any of our community members would like to help us with
>>> performance
>>> > investigations or regression testing of the Reef release
>>> candidate, please
>>> > feel free to provide feedback via email or in
>>> > https://pad.ceph.com/p/reef_scale_testing. For more active
>>> discussions,
>>> > please use the #ceph-at-scale slack channel in
>>> ceph-storage.slack.com.
>>> >
>>> > Overall things are looking pretty good based on our testing.
>>> Please try it
>>> > out and report any issues you encounter. Happy testing!
>>> >
>>> > Thanks,
>>> > Neha
>>> >
>>> > Get the release from
>>> >
>>> > * Git at git://github.com/ceph/ceph.git
>>> > * Tarball at https://download.ceph.com/tarballs/ceph-18.1.0.tar.gz
>>> > * Containers at https://quay.io/repository/ceph/ceph
>>> > * For packages, see
>>> https://docs.ceph.com/en/latest/install/get-packages/
>>> > * Release git sha1: c2214eb5df9fa034cc571d81a32a5414d60f0405
>>> >
>>> > [0] https://github.com/ceph/ceph/pull/49006
>>> > [1] https://github.com/ceph/ceph/pull/51821
>>> > _______________________________________________
>>> > ceph-users mailing list -- ceph-users(a)ceph.io
>>> > To unsubscribe send an email to ceph-users-leave(a)ceph.io
>>>
>
--
Michal Strnad
Oddeleni datovych ulozist
CESNET z.s.p.o.