Hi all,
One of the items we need to do in preparation for supporting sharing CephFS
with SMB is to make the management layer, in particular our smb and nfs MGR
modules, help admins avoid configuring the same directories for both nfs and
smb access. We're mostly focused on sharing out subvolumes. I've taken to
calling the act of making a subvolume exclusive to one protocol or the other
"earmarking" the subvolume. We have a rough design sketched out, but I want to
ask a few questions, largely aimed at the file system team, before we proceed
any further.
First off, we intend to apply metadata to a subvolume when it is used by nfs
or smb for sharing. I've identified two candidates for storing the metadata:
1. storing it in xattrs in the root of the subvolume
2. using the `ceph fs subvolume metadata ...` commands
I am fairly familiar with xattrs and think they would be pretty appropriate
for this. Accessing them is easy with standard (libcephfs) file system APIs.
One advantage of using xattrs is that they'd also be visible to the protocol
servers (samba/ganesha), so we could (re)use the metadata there too if needed.
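For illustration, the xattr approach might look something like this on a
mounted file system (the xattr name and mount path here are hypothetical
placeholders, not a settled convention):
# hypothetical earmark xattr on the root of a mounted subvolume
setfattr -n user.ceph.earmark -v "smb" /mnt/cephfs/volumes/_nogroup/sv1
getfattr -n user.ceph.earmark /mnt/cephfs/volumes/_nogroup/sv1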
I'm less familiar with the subvolume metadata commands. The docs say they're
for "custom metadata", which I think this falls under. What I'm less certain
of is whether this is a good use case for this particular kind of custom
metadata. One good thing about this option is that it is clearly specific to
subvolumes. It also has the advantage that you can't accidentally
modify/remove the metadata via the network fs protocol, like you possibly
could with an xattr over nfs/smb.
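For comparison, the same earmark stored via the existing subvolume metadata
commands (the volume/subvolume names and the key are placeholders):
ceph fs subvolume metadata set cephfs sv1 earmark smb
ceph fs subvolume metadata get cephfs sv1 earmark
ceph fs subvolume metadata ls cephfs sv1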
One last thought, and this is not a requirement but something that might be
interesting down the road: could we hide directories, at the CephFS level,
from clients unless they explicitly opt in to seeing subvolumes earmarked
with type X? IIRC hiding directories is something that's already done for
.snap dirs, right? If this seems too strange, do not worry, I am not going to
insist on it... I just thought it might be cool to have in the future, but
only if it's easy. I only mention it in case this idea tips the scales a bit
toward either option (1) or (2).
Once we decide on a method of storing metadata for a subvolume, I am also
curious if anyone has opinions on whether we should have a single key-value
pair for all protocol-specific "earmarking" metadata or split things across
multiple items. At least for smb, I want to reuse the feature not only to
block sharing a dir that's already shared with nfs but also to block
(re)sharing dirs with incompatible idmapping/acl metadata. I see two options:
one where we store everything in a single key (hypothetically, something like
"smb.ad,idmapV1" or "smb.ad,id=foo"), the other where we use multiple
key-value pairs. The first option has the advantage of being more "atomic"
and keeping everything together, but the disadvantage of needing to be parsed.
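To make the trade-off concrete, a rough sketch of the two layouts, using the
subvolume metadata commands for illustration (all keys and values are
hypothetical):
# option 1: single composite key; atomic, but the value must be parsed
ceph fs subvolume metadata set cephfs sv1 earmark "smb.ad,idmapV1"
# option 2: multiple keys; self-describing, but updates are not atomic
ceph fs subvolume metadata set cephfs sv1 earmark.proto smb
ceph fs subvolume metadata set cephfs sv1 earmark.idmap v1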
Any other thoughts on the subject would be welcome!
Hi Folks,
Several of us will be traveling on 4/25 for Ceph Day NY 2024! We'll be
canceling the meeting tomorrow. Have a great week folks!
Thanks,
Mark
--
Best Regards,
Mark Nelson
Head of R&D (USA)
Clyso GmbH
p: +49 89 21552391 12
a: Loristraße 8 | 80335 München | Germany
w: https://clyso.com | e: mark.nelson(a)clyso.com
We are hiring: https://www.clyso.com/jobs/
Hi,
I am looking for a bit of reassurance and guidance on running Nautilus with el9. I have applied a patch to rgw[2] and the ceph.spec[3]. I have managed to build ceph in a container with rocky9 using these commands[1].
I had to run the rpmbuild a few times, because it looks like concurrent compilation causes some libraries/binaries not to be available in time.
Given how few issues I had compiling, I find it hard to believe this could be problematic to run in production. I even have the impression the changes between el8 and el9 are not really that significant.
What could go wrong with such an upgrade? As the development team, you should have a sense of what is doable and what causes problems, no? I have tried to keep the setup as standard as possible, using rbd with 3x replication. If rgw and mds are unavailable for a while, it would not be much of an issue.
I was thinking of upgrading an osd node and checking whether any data corruption is reported from this el9 node. Maybe I would reduce the number of mgr/mon daemons for a short while so they run only on el7 nodes, although I imagine that issues with the monitor and manager would reveal themselves quickly with one also running on the el9 node.
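A rough sketch of such a spot check (the osd id and pool name are
placeholders):
ceph osd deep-scrub osd.0          # force deep scrubs of PGs on an upgraded OSD
ceph -s                            # watch for scrub errors / inconsistent PGs
rados list-inconsistent-pg <pool>  # list PGs found inconsistent in a pool
ceph health detail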
[1]
docker run -v /dev/log:/dev/log -v /home/software/ceph-nautilus/rpmbuild:/root/rpmbuild -it --security-opt seccomp=unconfined --network host rockylinux:9 /bin/bash
dnf install -y rpm-build epel-release
dnf install -y 'dnf-command(builddep)'
dnf config-manager --set-enabled crb
cd /root/rpmbuild
rpm -iv el8/ceph-14.2.22-0.el8.src.rpm
dnf builddep SPECS/ceph.spec
yum downgrade RPMS/x86_64/librabbitmq-0.9.0-3.el9.x86_64.rpm RPMS/x86_64/librabbitmq-devel-0.9.0-3.el9.x86_64.rpm
rpmbuild -bb --with el9 --without ceph_test_package --noprep --noclean SPECS/ceph.spec
[2]
--- ceph-14.2.22/src/rgw/CMakeLists.txt 2023-04-05 17:41:52.041300639 +0000
+++ ceph-14.2.22/src/rgw/CMakeLists.new.txt 2023-04-05 17:42:07.704412029 +0000
@@ -1,3 +1,7 @@
+if(Boost_VERSION VERSION_GREATER_EQUAL 1.74)
+ add_definitions(-DBOOST_ASIO_USE_TS_EXECUTOR_AS_DEFAULT)
+endif()
+
add_custom_target(civetweb_h
COMMAND ${CMAKE_COMMAND} -E make_directory
"${CMAKE_BINARY_DIR}/src/include/civetweb"
[3] ceph.spec patch
25d24
< %bcond_without ceph_test_package
30a30,37
> %bcond el9 0
> %if 0%{with el9}
> %define tmpvalue el9
> %bcond_without ceph_test_package
> %else
> %bcond_with ceph_test_package
> %define tmpvalue notel9
> %endif
128c135,136
< Source0: %{?_remote_tarball_prefix}ceph-14.2.22.tar.bz2
---
> Source0: ceph-14.2.22.tar.gz
> Patch0: ceph-14.2.22.el9.1.patch
269c277
< #BuildRequires: redhat-lsb-core
---
> # RIT BuildRequires: redhat-lsb-core
929c937
< Obsoletes: ceph-libcephfs
---
> # RIT Obsoletes: ceph-libcephfs
1151a1160
>
1156a1166
>
1160a1171,1182
> %if 0%{with el9}
> echo "building el9" >> /tmp/read.log
> %endif
>
> %if 0%{without ceph_test_package}
> echo "building without test" >> /tmp/read.log
> %endif
>
> %if 0%{with ceph_test_package}
> echo "building with test" >> /tmp/read.log
> %endif
>
1178a1201
> CEPH_MFLAGS_JOBS="1"
1202c1225
< mkdir build
---
> mkdir -p build
1270,1271c1293,1302
< -DBOOST_J=$CEPH_SMP_NCPUS \
< -DWITH_GRAFANA=ON
---
> -DWITH_GRAFANA=ON \
> %if 0%{el9}
> -DWITH_SYSTEM_BOOST=ON \
> -DWITH_MGR_DASHBOARD_FRONTEND=OFF \
> -DWITH_SYSTEM_NPM=OFF \
> -DWITH_RADOSGW_KAFKA_ENDPOINT=OFF \
> -DWITH_RADOSGW=ON \
> -DWITH_GRAFANA=ON \
> %endif
> -DBOOST_J=$CEPH_SMP_NCPUS
1273c1304,1306
< make "$CEPH_MFLAGS_JOBS"
---
>
> make "$CEPH_MFLAGS_JOBS" 2>/tmp/build.log
> #make 2>/tmp/build.log
1355d1387
< %if "%{noclean}" == ""
1357d1388
< %endif
This ceph-object-corpus repo is the basis of our ceph-dencoder test
src/test/encoding/readable.sh, which verifies that we can still decode
all of the data structures encoded by older ceph versions.
I'd like to raise awareness that this ceph-object-corpus repo hasn't
been updated with new encodings since pacific 16.2.0, so we're missing
important regression test coverage since then.
Nitzan prepared the encodings for reef 18.2.0 in
https://github.com/ceph/ceph-object-corpus/pull/17, but those haven't
merged yet. I had opened https://github.com/ceph/ceph/pull/54735 to
test that, but 'make check' identified failures like:
> The following tests FAILED:
> 147 - readable.sh (Failed)
>
> **** reencode of /home/jenkins-build/build/workspace/ceph-pull-requests/ceph-object-corpus/archive/18.2.0/objects/chunk_refs_t/ccb69d9ecd572c1f6ed9598899773cf1 resulted in a different dump ****
Can we find a way to prioritize this? It would be great to have these
reef encodings while we're validating the squid release.
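For anyone unfamiliar, this is roughly what readable.sh does per corpus
object (the type and object path are taken from the failure above; the exact
invocation in readable.sh may differ slightly):
# decode an archived encoding and dump it; readable.sh additionally
# re-encodes and compares the dumps, which is where the failure above comes from
ceph-dencoder type chunk_refs_t \
  import archive/18.2.0/objects/chunk_refs_t/ccb69d9ecd572c1f6ed9598899773cf1 \
  decode dump_json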
Which script is responsible for executing the command:
ceph config set DAEMON CONFIG-OPTION VALUE
I want to add a check by creating a regex for acceptable daemon ids/names.
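For instance, something along these lines (purely a hypothetical sketch; the
accepted daemon types and name pattern would need to match what the mon
actually allows):
# hypothetical validation, not an existing check
re='^(global|(mon|mgr|osd|mds|client)(\.[A-Za-z0-9._-]+)?)$'
if ! [[ $who =~ $re ]]; then
    echo "invalid daemon id/name: $who" >&2
    exit 1
fi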
Regards,
Suyash Dongre
Hi all,
Today we discussed:
2024/04/08
- [Zac] CQ#4 is going out this week -
https://pad.ceph.com/p/ceph_quarterly_2024_04
- Last chance to review!
- [Zac] IcePic Initiative - context-sensitive help - do we regard the
docs as a part of the online help?
- https://pad.ceph.com/p/2024_04_08_cephadm_context_sensitive_help
- docs.ceph.com should be main source of truth; can link to this or
reference it generally as "see docs.ceph.com"
- Squid RC status
- Blockers tracked in: https://pad.ceph.com/p/squid-upgrade-failures
- rgw: topic changes merged to main, but introduced some test failures.
account changes blocked on topics
- Non-blocker for RC0
- centos 9 containerization (status unknown?)
- Non-blocker for RC0
- Follow up with Dan / Guillaume
- RADOS has one outstanding blocker awaiting QA
- New account registration at the Ceph tracker is failing with a 404 error.
- Likely related to Redmine upgrade over the weekend
- Pacific eol:
- Action item: in https://docs.ceph.com/en/latest/releases/, move to
"archived"
- 18.2.3
- one or two PRs from cephfs left
- Milestone: https://github.com/ceph/ceph/milestone/19
Thanks,
Laura
--
Laura Flores
She/Her/Hers
Software Engineer, Ceph Storage <https://ceph.io>
Chicago, IL
lflores(a)ibm.com | lflores(a)redhat.com <lflores(a)redhat.com>
M: +17087388804
Hi everyone,
On behalf of the Ceph Foundation Board, I would like to announce the
creation of, and cordially invite you to, the first of a recurring series
of meetings focused solely on gathering feedback from the users of
Ceph. The overarching goal of these meetings is to elicit feedback from the
users, companies, and organizations who use Ceph in their production
environments. You can find more details about the motivation behind this
effort in our user survey [1] that we highly encourage all of you to take.
This is an extension of the Ceph User Dev Meeting, with a concerted focus on
Performance (led by Vincent Hsu, IBM) and Orchestration/Deployment (led by
Matt Leonard, Bloomberg) to start off with. We would like to kick off this
series of meetings on March 21, 2024. The survey will be open until March
18, 2024.
Looking forward to hearing from you!
Thanks,
Neha
[1]
https://docs.google.com/forms/d/15aWxoG4wSQz7ziBaReVNYVv94jA0dSNQsDJGqmHCLM…
Hi everyone,
I’d like to extend a warm thank you to Mike Perez for his years of service
as community manager for Ceph. He is now shifting focus to engineering.
The Ceph Foundation board decided to use services from the Linux Foundation
to fulfill some community management responsibilities, rather than rely on
a single member organization employing a community manager. The Linux
Foundation will assist with Ceph Foundation membership and governance
matters.
Please welcome Noah Lehman (cc’d) as our social media and marketing point
person - for anything related to this area, including the Ceph YouTube
channel, please reach out to him.
Ceph days will continue to be organized and funded by organizations around
the world, with the help of the Ceph Ambassadors (
https://ceph.io/en/community/ambassadors/). Gaurav Sitlani (cc’d) will help
organize the ambassadors going forward.
For other matters, please contact council(a)ceph.io and we’ll direct the
matter to the appropriate people.
Thanks,
Neha Ojha, Dan van der Ster, Josh Durgin
Ceph Executive Council
We are happy to announce another release of the go-ceph API library. This is a
regular release following our every-two-months release cadence.
https://github.com/ceph/go-ceph/releases/tag/v0.27.0
The library includes bindings that aim to play a similar role to the "pybind"
python bindings in the ceph tree but for the Go language. The library also
includes additional APIs that can be used to administer cephfs, rbd, rgw, and
other subsystems.
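If you want to try it out, updating a Go module to the new release works as
usual:
go get github.com/ceph/go-ceph@v0.27.0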
There are already a few consumers of this library in the wild, including the
ceph-csi project.
--
John Mulligan
phlogistonjohn(a)asynchrono.us
jmulligan(a)redhat.com