hey Gal and Eric,
in today's standup, we discussed the version of our apache arrow
submodule. it's currently pinned at 6.0.1, which was tagged in nov.
2021. the centos9 builds are using the system package
libarrow-devel-9.0.0. arrow's upstream recently tagged an 11.0.0
release.
as far as i know, there still aren't any system packages for ubuntu,
so we're likely to be stuck with the submodule for quite a while. how
do you both want to handle these updates? is it worth trying to update
before the reef release?
Hi,
When Reef was released, the announcement said that Debian packages would
be built once the blocking bug in Bookworm was fixed. As I noted on the
tracker item https://tracker.ceph.com/issues/61845 a couple of weeks
ago, that is now the case after the most recent Bookworm point release.
I also opened a PR to make the minimal change that would build Reef
packages on Bookworm[0]. I subsequently opened another PR to fix some
low-hanging fruit in terms of packaging errors - missing #! in
maintscripts, syntax errors in debian/control, erroneous dependencies on
Essential packages[1]. Neither PR has had any feedback/review as far as
I can see.
Those packages (and the previous state of the debian/ tree) had some
significant problems: no copyright file, and some of them contain
Python scripts without declaring a Python dependency. So today I've
submitted a slightly larger PR that fixes these errors and brings the dh
compatibility level up to what I think is the current
lowest-common-denominator level[2].
I believe these changes all ought to go into the reef branch, but
obviously you might prefer to just make the bare-minimum-to-build change
in the first PR.
Is there any chance of having some reef packages for Bookworm, please?
Relatedly, is there interest in further packaging fixes for future
branches? lintian still has quite a lot to say about the .debs for Ceph,
and while you might reasonably not want to care about crossing every t
of Debian policy, I think there are still changes that would be worth
doing...
I should declare a bit of an interest here - I'd like to evaluate
cephadm for work use, which would require us to be able to build our own
packages per local policy[3], which in turn would mean I'd want to get
Debian-based images going again. But that requires Reef .debs being
available to install onto said images :)
Thanks,
Matthew
[0] https://github.com/ceph/ceph/pull/53342
[1] https://github.com/ceph/ceph/pull/53397
[2] https://github.com/ceph/ceph/pull/53546
[3] https://wikitech.wikimedia.org/wiki/Kubernetes/Images#Production_images
Hi developers,
Has anyone else been experiencing this kind of failure when running the
"install-deps.sh" script on ubuntu jammy?
Reading state information... Done
E: Unable to locate package ceph-libboost-atomic1.82-dev
E: Couldn't find any package by glob 'ceph-libboost-atomic1.82-dev'
E: Couldn't find any package by regex 'ceph-libboost-atomic1.82-dev'
E: Unable to locate package ceph-libboost-chrono1.82-dev
E: Couldn't find any package by glob 'ceph-libboost-chrono1.82-dev'
E: Couldn't find any package by regex 'ceph-libboost-chrono1.82-dev'
E: Unable to locate package ceph-libboost-container1.82-dev
E: Couldn't find any package by glob 'ceph-libboost-container1.82-dev'
E: Couldn't find any package by regex 'ceph-libboost-container1.82-dev'
E: Unable to locate package ceph-libboost-context1.82-dev
E: Couldn't find any package by glob 'ceph-libboost-context1.82-dev'
E: Couldn't find any package by regex 'ceph-libboost-context1.82-dev'
E: Unable to locate package ceph-libboost-coroutine1.82-dev
E: Couldn't find any package by glob 'ceph-libboost-coroutine1.82-dev'
E: Couldn't find any package by regex 'ceph-libboost-coroutine1.82-dev'
E: Unable to locate package ceph-libboost-date-time1.82-dev
E: Couldn't find any package by glob 'ceph-libboost-date-time1.82-dev'
E: Couldn't find any package by regex 'ceph-libboost-date-time1.82-dev'
E: Unable to locate package ceph-libboost-filesystem1.82-dev
E: Couldn't find any package by glob 'ceph-libboost-filesystem1.82-dev'
E: Couldn't find any package by regex 'ceph-libboost-filesystem1.82-dev'
E: Unable to locate package ceph-libboost-iostreams1.82-dev
Apologies for the lack of further output; I have not experienced the issue
myself, but several others have reported it, and I've taken this output
from one reported instance.
So far, I haven't been able to reproduce it on a Jammy container (the
install-deps.sh script runs to completion when I run it there).
I'm wondering whether some environmental factor is related to this issue, or
whether anyone has found a workaround.
Thanks,
Laura Flores
--
Laura Flores
She/Her/Hers
Software Engineer, Ceph Storage <https://ceph.io>
Chicago, IL
lflores(a)ibm.com | lflores(a)redhat.com <lflores(a)redhat.com>
M: +17087388804
Dear Sir/Madam,
I am Akshat Khatri, a first-year computer science undergraduate at P.I.E.T
college. I am new to open source contributions, but I am familiar with
JavaScript and some C++, and I would love to contribute to your
organization. Could you please tell me how to get started?
I hope to hear from you soon.
Regards
Akshat Khatri
Hi everybody,
The CLT met today as usual. We only had a few topics under discussion:
* the User + Dev relaunch went off well! We’d like reliable recordings and
have found Jitsi to be somewhat glitchy; Laura will communicate about
workarounds for that while we work on a longer-term solution (self-hosting
Jitsi has a better reputation and is a possibility). We also discussed a
GitHub repo for hosting presentation files, and organizing them on the
website.
* CVE handling. As noted elsewhere on the mailing list, CVE-2023-43040 (a
privilege escalation impacting RGW) was disclosed elsewhere, and we do not
have coordinated releases for it. This was not deemed important enough on
the security list for that effort, but we do want to be more prepared for
it than we were — our CVE handling process has broken down a bit since some
of the CVE work is now being handled by IBM instead of Red Hat. Tech leads
and IBM employees will be working on refining that so we have better
disclosures.
Also, if you were previously on the security mailing list and did not see
these emails, please reach out to the team — some subscribers were lost and
not recovered in the lab disaster at the end of last year. (For obvious reasons
this is a closed list — if you do not work for a Linux distribution or at a
large deployer with established relationships in Ceph and security
communities, it’s hard for us to put you there.)
-Greg
Hi Developers,
Ceph is happy to be participating in Grace Hopper Open Source Day [1], an
all-day hackathon for beginning programmers!
You may see some pull requests coming into the Ceph repository from
participants today, labeled with the tag "open-source-day-2023" [2].
If you'd like to help out, you're welcome to provide feedback for these
first-time contributors on the PRs! For many, this is their first
introduction to Ceph, so they may need help signing off their commits,
passing checks, etc.
Thanks everyone, and please reach out to me if you have any questions!
- Laura
1. https://ghc.anitab.org/awards-programs/open-source-day/
2. https://github.com/ceph/ceph/labels/open-source-day-2023
--
Laura Flores
She/Her/Hers
Software Engineer, Ceph Storage <https://ceph.io>
Chicago, IL
lflores(a)ibm.com | lflores(a)redhat.com <lflores(a)redhat.com>
M: +17087388804
Hello All,
I found a weird issue with ceph_readdirplus_r() when used along
with ceph_ll_lookup_vino().
On ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy
(stable)
Any help is really appreciated.
Thanks in advance,
-Joe
Test Scenario:
A. Create a CephFS subvolume "4" and create a directory "user_root" in the
root of the subvolume.
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23#
ceph fs subvolume ls cephfs
[
{
"name": "4"
}
]
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23#
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23#
ls -l
total 0
drwxrwxrwx 2 root root 0 Sep 22 09:16 user_root
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23#
B. In the "user_root" directory create some files and directories
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/user_root#
mkdir dir1 dir2
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/user_root#
ls
dir1 dir2
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/user_root#
echo "Hello Worldls!" > file1
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/user_root#
echo "Hello Worldls!" > file2
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/user_root#
ls
dir1 dir2 file1 file2
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/user_root#
cat file*
Hello Worldls!
Hello Worldls!
C. Create a subvolume snapshot "sofs-4-5". Please ignore the older
snapshots.
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23#
ceph fs subvolume snapshot ls cephfs 4
[
{
"name": "sofs-4-1"
},
{
"name": "sofs-4-2"
},
{
"name": "sofs-4-3"
},
{
"name": "sofs-4-4"
},
{
"name": "sofs-4-5"
}
]
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23#
Here "sofs-4-5" has snapshot id 6.
Got this from libcephfs and have verified at Line
snapshot_inode_lookup.cpp#L212. (Attached to the email)
#Content within the snapshot
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23#
cd .snap/
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/.snap#
ls
_sofs-4-1_1099511627778 _sofs-4-2_1099511627778 _sofs-4-3_1099511627778
_sofs-4-4_1099511627778 _sofs-4-5_1099511627778
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/.snap#
cd _sofs-4-5_1099511627778/
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/.snap/_sofs-4-5_1099511627778#
ls
user_root
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/.snap/_sofs-4-5_1099511627778#
cd user_root/
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/.snap/_sofs-4-5_1099511627778/user_root#
ls
dir1 dir2 file1 file2
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/.snap/_sofs-4-5_1099511627778/user_root#
cat file*
Hello Worldls!
Hello Worldls!
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/.snap/_sofs-4-5_1099511627778/user_root#
D. Delete all the files and directories in "user_root"
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/user_root#
rm -rf *
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/user_root#
ls
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/user_root#
E. Using libcephfs in a C++ program (attached to this email), do the
following:
1. Get the Inode of "user_root" using ceph_ll_walk().
2. Open the directory using Inode received from ceph_ll_walk() and do
ceph_readdirplus_r()
We don't see any dentries (except "." and "..") as we have deleted all
files and directories in the active filesystem. This is expected and
correct!
=================================/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/user_root/=====================================
Path/Name
:"/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/user_root/"
Inode Address : 0x7f5ce0009900
Inode Number : 1099511629282
Snapshot Number : 18446744073709551614
Inode Number : 1099511629282
Snapshot Number : 18446744073709551614
. Ino: 1099511629282 SnapId: 18446744073709551614 Address: 0x7f5ce0009900
.. Ino: 1099511627779 SnapId: 18446744073709551614 Address: 0x7f5ce00090f0
3. Using ceph_ll_lookup_vino(), get the Inode * of "user_root" for
snapshot 6 ("sofs-4-5"). As above, the snapshot id was obtained from
libcephfs and verified at snapshot_inode_lookup.cpp#L212 (attached to this
email).
4. Open the directory using Inode * received from ceph_ll_lookup_vino()
and do ceph_readdirplus_r()
We don't see any dentries (except "." and ".."). This is NOT expected and
NOT correct, as there are files and directories in snapshot 6.
=================================1099511629282:6=====================================
Path/Name :"1099511629282:6"
Inode Address : 0x7f5ce000a110
Inode Number : 1099511629282
Snapshot Number : 6
Inode Number : 1099511629282
Snapshot Number : 6
. Ino: 1099511629282 SnapId: 6 Address: 0x7f5ce000a110
.. Ino: 1099511629282 SnapId: 6 Address: 0x7f5ce000a110
5. Get the Inode of "user_root/.snap/_sofs-4-5_1099511627778/" using
ceph_ll_walk().
6. Open the directory using Inode received from ceph_ll_walk() and do
ceph_readdirplus_r()
We see ALL dentries of all files and directories in the snapshot. This
is expected and correct!
=================================/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/user_root/.snap/_sofs-4-5_1099511627778/=====================================
Path/Name
:"/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/user_root/.snap/_sofs-4-5_1099511627778/"
Inode Address : 0x7f5ce000a110
Inode Number : 1099511629282
Snapshot Number : 6
Inode Number : 1099511629282
Snapshot Number : 6
. Ino: 1099511629282 SnapId: 6 Address: 0x7f5ce000a110
.. Ino: 1099511629282 SnapId: 18446744073709551615 Address: 0x5630ab946340
file1 Ino: 1099511628291 SnapId: 6 Address: 0x7f5ce000aa90
dir1 Ino: 1099511628289 SnapId: 6 Address: 0x7f5ce000b180
dir2 Ino: 1099511628290 SnapId: 6 Address: 0x7f5ce000b800
file2 Ino: 1099511628292 SnapId: 6 Address: 0x7f5ce000be80
7. Now, again using ceph_ll_lookup_vino(), get the Inode * of "user_root"
for snapshot 6 ("sofs-4-5").
8. Open the directory using Inode * received from
ceph_ll_lookup_vino() and do ceph_readdirplus_r()
Now we see all the files and directories in the snapshot!
=================================1099511629282:6=====================================
Path/Name :"1099511629282:6"
Inode Address : 0x7f5ce000a110
Inode Number : 1099511629282
Snapshot Number : 6
Inode Number : 1099511629282
Snapshot Number : 6
. Ino: 1099511629282 SnapId: 6 Address: 0x7f5ce000a110
.. Ino: 1099511629282 SnapId: 18446744073709551615 Address: 0x5630ab946340
file1 Ino: 1099511628291 SnapId: 6 Address: 0x7f5ce000aa90
dir1 Ino: 1099511628289 SnapId: 6 Address: 0x7f5ce000b180
dir2 Ino: 1099511628290 SnapId: 6 Address: 0x7f5ce000b800
file2 Ino: 1099511628292 SnapId: 6 Address: 0x7f5ce000be80
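For reference, here is a rough sketch of the call sequence in steps 1-4
above. This is a simplification, not the attached program: mount setup and
error handling are abbreviated, the output format differs, and the
vinodeno_t initialization in particular may need adjusting depending on how
your libcephfs headers define inodeno_t/snapid_t.

// Sketch only: mirrors steps 1-4 above; not the attached snapshot_inode_lookup.cpp.
#include <cephfs/libcephfs.h>
#include <cstdio>

static void list_dir(struct ceph_mount_info *cmount, Inode *dir)
{
  struct ceph_dir_result *dirp = nullptr;
  if (ceph_ll_opendir(cmount, dir, &dirp, ceph_mount_perms(cmount)) != 0)
    return;
  struct dirent de;
  struct ceph_statx stx;
  Inode *entry = nullptr;
  // ceph_readdirplus_r() returns 1 while entries remain, 0 at end of directory.
  while (ceph_readdirplus_r(cmount, dirp, &de, &stx,
                            CEPH_STATX_INO, 0, &entry) == 1) {
    printf("%s Ino: %llu\n", de.d_name, (unsigned long long)stx.stx_ino);
    ceph_ll_put(cmount, entry);   // drop the reference readdirplus handed back
  }
  ceph_ll_releasedir(cmount, dirp);
}

int main()
{
  struct ceph_mount_info *cmount = nullptr;
  ceph_create(&cmount, nullptr);
  ceph_conf_read_file(cmount, nullptr);   // picks up /etc/ceph/ceph.conf
  ceph_mount(cmount, "/");

  // Steps 1-2: resolve "user_root" by path and list it (head of the filesystem).
  Inode *by_path = nullptr;
  struct ceph_statx stx;
  ceph_ll_walk(cmount,
               "/volumes/_nogroup/4/f0fae76f-196d-4ebd-b8d0-528985505b23/user_root/",
               &by_path, &stx, CEPH_STATX_INO, 0, ceph_mount_perms(cmount));
  list_dir(cmount, by_path);              // only "." and ".." -- expected
  ceph_ll_put(cmount, by_path);

  // Steps 3-4: resolve the same directory by (ino, snapid) and list it.
  vinodeno_t vino;
  vino.ino = 1099511629282ULL;            // "user_root" inode number (from the output above)
  vino.snapid = 6;                        // snapshot id of "sofs-4-5"
  Inode *by_vino = nullptr;
  if (ceph_ll_lookup_vino(cmount, vino, &by_vino) == 0) {
    list_dir(cmount, by_vino);            // expected snapshot contents, but only "." and ".."
    ceph_ll_put(cmount, by_vino);
  }

  ceph_unmount(cmount);
  ceph_release(cmount);
  return 0;
}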
Am I missing something using these APIs?
Files attached to this email:
- Full output of the program - snapshot_inode_lookup.cpp_output.txt <attached>
- C++ program - snapshot_inode_lookup.cpp <attached>
- /etc/ceph/ceph.conf <attached>
- Ceph client log during the run of this C++ program - client.log <attached>
Compile Command:
g++ -o snapshot_inode_lookup ./snapshot_inode_lookup.cpp -g -ldl -ldw
-lcephfs -lboost_filesystem --std=c++17
Linux Details,
root@ss-joe-01(bash):/home/hydrauser# uname -a
Linux ss-joe-01 5.10.0-23-amd64 #1 SMP Debian 5.10.179-1 (2023-05-12)
x86_64 GNU/Linux
root@ss-joe-01(bash):/home/hydrauser#
Ceph Details,
root@ss-joe-01(bash):/home/hydrauser# ceph -v
ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy
(stable)
root@ss-joe-01(bash):/home/hydrauser#
root@ss-joe-01(bash):/home/hydrauser# ceph -s
cluster:
id: fb43d857-d165-4189-87fc-cf1debce9170
health: HEALTH_OK
services:
mon: 3 daemons, quorum ss-joe-01,ss-joe-02,ss-joe-03 (age 4d)
mgr: ss-joe-01(active, since 4d), standbys: ss-joe-03, ss-joe-02
mds: 1/1 daemons up
osd: 3 osds: 3 up (since 4d), 3 in (since 4d)
data:
volumes: 1/1 healthy
pools: 3 pools, 49 pgs
objects: 39 objects, 1.0 MiB
usage: 96 MiB used, 30 GiB / 30 GiB avail
pgs: 49 active+clean
root@ss-joe-01(bash):/home/hydrauser#
root@ss-joe-01(bash):/home/hydrauser# dpkg -l | grep ceph
ii ceph 17.2.5-1~bpo11+1
amd64 distributed storage and file system
ii ceph-base 17.2.5-1~bpo11+1
amd64 common ceph daemon libraries and management tools
ii ceph-base-dbg 17.2.5-1~bpo11+1
amd64 debugging symbols for ceph-base
ii ceph-common 17.2.5-1~bpo11+1
amd64 common utilities to mount and interact with a ceph
storage cluster
ii ceph-common-dbg 17.2.5-1~bpo11+1
amd64 debugging symbols for ceph-common
ii ceph-fuse 17.2.5-1~bpo11+1
amd64 FUSE-based client for the Ceph distributed file
system
ii ceph-fuse-dbg 17.2.5-1~bpo11+1
amd64 debugging symbols for ceph-fuse
ii ceph-mds 17.2.5-1~bpo11+1
amd64 metadata server for the ceph distributed file system
ii ceph-mds-dbg 17.2.5-1~bpo11+1
amd64 debugging symbols for ceph-mds
ii ceph-mgr 17.2.5-1~bpo11+1
amd64 manager for the ceph distributed storage system
ii ceph-mgr-cephadm 17.2.5-1~bpo11+1
all cephadm orchestrator module for ceph-mgr
ii ceph-mgr-dashboard 17.2.5-1~bpo11+1
all dashboard module for ceph-mgr
ii ceph-mgr-dbg 17.2.5-1~bpo11+1
amd64 debugging symbols for ceph-mgr
ii ceph-mgr-diskprediction-local 17.2.5-1~bpo11+1
all diskprediction-local module for ceph-mgr
ii ceph-mgr-k8sevents 17.2.5-1~bpo11+1
all kubernetes events module for ceph-mgr
ii ceph-mgr-modules-core 17.2.5-1~bpo11+1
all ceph manager modules which are always enabled
ii ceph-mon 17.2.5-1~bpo11+1
amd64 monitor server for the ceph storage system
ii ceph-mon-dbg 17.2.5-1~bpo11+1
amd64 debugging symbols for ceph-mon
ii ceph-osd 17.2.5-1~bpo11+1
amd64 OSD server for the ceph storage system
ii ceph-osd-dbg 17.2.5-1~bpo11+1
amd64 debugging symbols for ceph-osd
ii ceph-volume 17.2.5-1~bpo11+1
all tool to facilidate OSD deployment
ii cephadm 17.2.5-1~bpo11+1
amd64 cephadm utility to bootstrap ceph daemons with
systemd and containers
ii libcephfs2 17.2.5-1~bpo11+1
amd64 Ceph distributed file system client library
ii libcephfs2-dbg 17.2.5-1~bpo11+1
amd64 debugging symbols for libcephfs2
ii libsqlite3-mod-ceph 17.2.5-1~bpo11+1
amd64 SQLite3 VFS for Ceph
ii libsqlite3-mod-ceph-dbg 17.2.5-1~bpo11+1
amd64 debugging symbols for libsqlite3-mod-ceph
ii python3-ceph-argparse 17.2.5-1~bpo11+1
all Python 3 utility libraries for Ceph CLI
ii python3-ceph-common 17.2.5-1~bpo11+1
all Python 3 utility libraries for Ceph
ii python3-cephfs 17.2.5-1~bpo11+1
amd64 Python 3 libraries for the Ceph libcephfs library
ii python3-cephfs-dbg 17.2.5-1~bpo11+1
amd64 Python 3 libraries for the Ceph libcephfs library
root@ss-joe-01(bash):/home/hydrauser#
Hi Ceph users and developers,
We invite you to join us at the User + Dev Relaunch, happening this
Thursday at 10:00 AM EDT! See below for more meeting details. Also see this
blog post to read more about the relaunch:
https://ceph.io/en/news/blog/2023/user-dev-meeting-relaunch/
We have two guest speakers who will present their focus topics during the
first 40 minutes of the session:
1. "What to do when Ceph isn't Ceph-ing" by Cory Snyder
Topics include troubleshooting tips, effective ways to gather help
from the community, ways to improve cluster health and insights, and more!
2. "Ceph Usability Improvements" by Jonas Sterr
A continuation of a talk from Cephalocon 2023, updated after trying
out the Reef Dashboard.
The last 20 minutes of the meeting will be dedicated to open discussion.
Feel free to add questions for the speakers or additional topics under the
"Open Discussion" section on the agenda:
https://pad.ceph.com/p/ceph-user-dev-monthly-minutes
If you have an idea for a focus topic you'd like to present at a future
meeting, you are welcome to submit it to this Google Form:
https://docs.google.com/forms/d/e/1FAIpQLSdboBhxVoBZoaHm8xSmeBoemuXoV_rmh4v…
Any Ceph user or developer is eligible to submit!
Thanks,
Laura Flores
Meeting link: https://meet.jit.si/ceph-user-dev-monthly
Time conversions:
UTC: Thursday, September 21, 14:00 UTC
Mountain View, CA, US: Thursday, September 21, 7:00 PDT
Phoenix, AZ, US: Thursday, September 21, 7:00 MST
Denver, CO, US: Thursday, September 21, 8:00 MDT
Huntsville, AL, US: Thursday, September 21, 9:00 CDT
Raleigh, NC, US: Thursday, September 21, 10:00 EDT
London, England: Thursday, September 21, 15:00 BST
Paris, France: Thursday, September 21, 16:00 CEST
Helsinki, Finland: Thursday, September 21, 17:00 EEST
Tel Aviv, Israel: Thursday, September 21, 17:00 IDT
Pune, India: Thursday, September 21, 19:30 IST
Brisbane, Australia: Friday, September 22, 0:00 AEST
Singapore, Asia: Thursday, September 21, 22:00 +08
Auckland, New Zealand: Friday, September 22, 2:00 NZST
--
Laura Flores
She/Her/Hers
Software Engineer, Ceph Storage <https://ceph.io>
Chicago, IL
lflores(a)ibm.com | lflores(a)redhat.com <lflores(a)redhat.com>
M: +17087388804