Hello folks, there was renewed interest in code walkthroughs at Cephalocon
this year, so we're starting to do more of these.
Sam Just will take us through SeaStore on June 7 at 10am PT. SeaStore is
the new object store for Crimson, and is under heavy development. For
background, check out the design docs:
https://docs.ceph.com/en/latest/dev/seastore/
Join the walkthrough here: https://meet.jit.si/ceph-code-walkthrough
As usual, this will be recorded and posted to the Ceph YouTube channel
afterwards.
Josh
Hi Chris,
Could you please follow the steps given in the previous mail and paste the output here?
The reason for this request is that we have encountered an issue which is easily reproducible on
the latest versions of both Quincy and Pacific. We have also thoroughly investigated the matter and are certain that
no other factors are at play in this scenario.
Note: We used Debian 11 for testing.
sdsadmin@ceph-pacific-1:~$ uname -a
Linux ceph-pacific-1 5.10.0-10-amd64 #1 SMP Debian 5.10.84-1 (2021-12-08) x86_64 GNU/Linux
sdsadmin@ceph-pacific-1:~$ sudo ceph -v
ceph version 16.2.13 (5378749ba6be3a0868b51803968ee9cde4833a3e) pacific (stable)
Thanks for your prompt reply.
Regards
Sandip Divekar
-----Original Message-----
From: Chris Palmer <chris.palmer(a)idnet.com>
Sent: Thursday, May 25, 2023 7:25 PM
To: ceph-users(a)ceph.io
Subject: [ceph-users] Re: Unexpected behavior of directory mtime after being set explicitly
Hi Milind
I just tried this using the ceph kernel client and the ceph-common 17.2.6 package on the latest Fedora kernel, against Ceph 17.2.6, and it worked perfectly...
There must be some other factor in play.
Chris
On 25/05/2023 13:04, Sandip Divekar wrote:
> Hello Milind,
>
> We are using Ceph Kernel Client.
> But we found this same behavior while using Libcephfs library.
>
> Should we treat this as a bug? Or
> is there an existing bug report for a similar issue?
>
> Thanks and Regards,
> Sandip Divekar
>
>
> From: Milind Changire <mchangir(a)redhat.com>
> Sent: Thursday, May 25, 2023 4:24 PM
> To: Sandip Divekar <sandip.divekar(a)hitachivantara.com>
> Cc: ceph-users(a)ceph.io; dev(a)ceph.io
> Subject: Re: [ceph-users] Unexpected behavior of directory mtime after
> being set explicitly
>
> Sandip,
> What type of client are you using?
> Kernel client or fuse client?
>
> If it's the kernel client, then it's a bug.
>
> FYI - Pacific and Quincy fuse clients do the right thing
>
>
> On Wed, May 24, 2023 at 9:24 PM Sandip Divekar <sandip.divekar(a)hitachivantara.com> wrote:
> Hi Team,
>
> I'm writing to bring to your attention an issue we have encountered with the "mtime" (modification time) behavior for directories in the Ceph filesystem.
>
> Upon observation, we have noticed that when the mtime of a directory
> (let's say: dir1) is explicitly changed in CephFS, subsequent additions of files or directories within 'dir1' fail to update the directory's mtime as expected.
>
> This behavior appears to be specific to CephFS - we have reproduced this issue on both Quincy and Pacific. Similar steps work as expected in the ext4 filesystem amongst others.
>
> Reproduction steps:
> 1. Create a directory - mkdir dir1
> 2. Modify mtime using the touch command - touch dir1
> 3. Create a file or directory inside of 'dir1' - mkdir dir1/dir2
> Expected result:
> mtime for dir1 should change to the time the file or directory was created in step 3
> Actual result:
> there was no change to the mtime for 'dir1'
>
> Note : For more detail, kindly find the attached logs.
>
> Our queries are :
> 1. Is this expected behavior for CephFS?
> 2. If so, can you explain why the directory behavior differs depending on whether the mtime for the directory has previously been set manually?
>
>
> Best Regards,
> Sandip Divekar
> Component QA Lead SDET.
>
>
> --
> Milind
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io
Hi Team,
I'm writing to bring to your attention an issue we have encountered with the "mtime" (modification time) behavior for directories in the Ceph filesystem.
Upon observation, we have noticed that when the mtime of a directory (let's say: dir1) is explicitly changed in CephFS, subsequent additions of files or directories within
'dir1' fail to update the directory's mtime as expected.
This behavior appears to be specific to CephFS - we have reproduced this issue on both Quincy and Pacific. Similar steps work as expected in the ext4 filesystem amongst others.
Reproduction steps:
1. Create a directory - mkdir dir1
2. Modify mtime using the touch command - touch dir1
3. Create a file or directory inside of 'dir1' - mkdir dir1/dir2
Expected result:
mtime for dir1 should change to the time the file or directory was created in step 3
Actual result:
there was no change to the mtime for 'dir1'
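The same steps as a small shell sketch, with stat calls added to capture the directory's timestamps before and after (the mount point /mnt/cephfs is only an assumed example):

cd /mnt/cephfs                # assumed CephFS mount point
mkdir dir1
touch dir1                    # explicitly set dir1's mtime
stat -c '%y  %n' dir1         # record dir1's mtime before adding an entry
mkdir dir1/dir2               # create a directory inside dir1
stat -c '%y  %n' dir1         # compare: this still shows the old mtime on CephFS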
Note: For more detail, please see the attached logs.
Our questions are:
1. Is this expected behavior for CephFS?
2. If so, can you explain why the directory behavior differs depending on whether the mtime for the directory has previously been set manually?
Best Regards,
Sandip Divekar
Component QA Lead SDET.
Highlights from this week's CLT meeting:
- CentOS 9 testing has been unblocked by getting the missing python
dependencies into copr; component leads are to start testing the PR that adds
teuthology coverage for it: https://github.com/ceph/ceph/pull/50441 (Thanks
Ken and Casey!)
- The Reef release is on track for an RC next week; QE validation and the gibba
cluster upgrade have been planned for later this week.
- David Orman shared various RGW-related fixes and behavior changes that
came out of their thorough investigation at scale. The complete list of
issues and fixes will be sent out to the list by David's group. Here are
some relevant PRs and trackers: https://github.com/ceph/ceph/pull/49466,
https://github.com/ceph/ceph/pull/45754,
https://tracker.ceph.com/issues/61359,
https://github.com/ceph/ceph/pull/51700. Matthew Leonard showed great
interest in collaborating with David on this effort. Casey suggested using
the weekly RGW community meeting for further knowledge sharing and
collaboration.
Thanks,
Neha
FYI—
The story about python shebangs in rhel8 and centos stream 8 is complicated.
Short answer: the shebangs on rhel8 are probably wrong now (even in
downstream) and even once they're fixed, cephadm (and other python apps in
ceph) from rhel8 builds won't ever be suitable to use on other platforms.
The longer answer seems to be that someone needs to decide what version of
python is the correct one on rhel8. Note that when I look at my vanilla
rhel8 and stream8 boxes the answer seems to be python3.6.
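A quick way to confirm that on a given box (a sketch; the paths are just the stock rhel8/stream8 defaults):

readlink -f /usr/bin/python3       # typically resolves to /usr/bin/python3.6 via alternatives
rpm --eval '%{__python3}'          # prints /usr/libexec/platform-python on rhel8/stream8
/usr/libexec/platform-python -V    # the platform interpreter, Python 3.6.x on rhel8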
---------- Forwarded message ---------
From: Miro Hrončok <mhroncok(a)redhat.com>
Date: Sun, May 21, 2023 at 4:26 AM
Subject: Re: question about python shebangs in rhel8 and rhel9
To: Kaleb Keithley <kkeithle(a)redhat.com>, Tomas Orsava <torsava(a)redhat.com>
On 20. 05. 23 14:53, Kaleb Keithley wrote:
> Hi,
Hello Kaleb and sorry for making this so complex. It was coerced.
> I'm trying to understand—
>
> Starting with Fedora and Debian packaging guidelines that say the shebang
> should/must be #!/usr/bin/python3.
>
> I maintain a handful of packages in Fedora and CentOS (Storage) SIG that have
> python, and delta any python files that have escaped my notice, the python
> files all do have "correct" shebangs in the source.
>
> I do know about the brp_mangle_shebangs in rpmbuilds.
>
> But I just discovered that on rhel8 and centos stream 8 the shebangs are
> being mangled to /usr/libexec/platform-python. (Compared to fedora and stream 9
> where they remain [unmangled] as /usr/bin/python3.)
>
> And AFAICT this comes from the "%__python3 /usr/libexec/platform-python" line
> in /usr/lib/rpm/macros.d/macros.python3
Correct.
> (If I understand things, platform-python is there to ensure that a known
> version of python can be found when the end user installs later python from
> AppStream or CRB.)
Actually, when they don't install Python 3.6. Presence or absence of other
versions is meaningless in this context, but I think we understand each
other here.
> In contrast to the analogous lines in /usr/lib/rpm/macros.d/macros.python-srpm
> on Fedora and /usr/lib/rpm/macros.d/macros.python-srpm on rhel9/stream9.
Still correct.
> I also notice that on stream8, koji/cbs's shebang is #!/usr/bin/python3.6, but
> I don't see anything in the koji.spec file that would override the default
> shebang mangling. Most or all of the other python programs in /usr/bin on rhel8
> and stream8 have #!/usr/bin/python3.6 too.
If Koji is built from EPEL, that would explain this. EPEL 8 is overriding
%__python3.
See
https://src.fedoraproject.org/rpms/epel-rpm-macros/blob/epel8/f/macros.zzz-…
and the linked email there.
Basically, EPEL first overrode it to /usr/bin/python3 to undo the RHEL 8 Python
mess and be consistent with Fedora and future RHELs. But since it is supported
to change where /usr/bin/python3 links to in RHEL 8 (using alternatives), we
had to make it more specific.
> So my question is, what really is the correct shebang that should be used on
> rhel8/stream8,
If it is a system tool that needs to work even when "no Python is installed", use
/usr/libexec/platform-python.
If you want a specific Python version instead, use that one.
> and how do I achieve it.
BuildRequire python36-rpm-macros, or python39-rpm-macros, or
python3.11-rpm-macros.
Or alternatively, set %__python3 to /usr/bin/python3.6 etc.
DO NOT set it to /usr/bin/python3 unless it is a standalone script without
external dependencies that can be executed with any Python 3 version (even all
the future ones).
> I don't want to completely disable
> shebang mangling because it fixes some /bin/sh shebangs. I don't see an easy
> way to use the __mangle_shebangs_exclude macros, but maybe I just have to bite
> the bullet and cons up a file for __mangle_shebangs_exclude_file?
I hope you won't need it.
--
Miro Hrončok
--
Phone: +420777974800
IRC: mhroncok
--
Kaleb
Hi Shilpa and team,
It sounds like we're preparing for a reef release candidate by the end
of May. It's been several months since we've done any workload testing
on the upstream multisite bits. Can we please try to organize some
testing for validation of the reef branch?
We plan to start the gibba cluster upgrade testing and test all suites
the following week.
Please make sure that all needed PRs are ready to be included/merged in
the reef RC ASAP.
Thx
YuriW
Hi Dan & others,
A few months back we (the Orchestration team working on cephadm) discussed
the new compiled [1] cephadm in a CLT call, and we briefly discussed it on the
list [2]. I wanted to revisit that conversation as Reef's release is quickly
coming up.
To summarize:
Previous versions of Ceph contained a single python source file for cephadm.
Upstream users were instructed to copy this source file directly out of the
Ceph git tree and execute the file using python. Now, the ceph build process
creates an executable python zipapp from the source file and the old
instructions are out of date.
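For anyone who hasn't looked at the new artifact yet, here is a rough sketch of what a zipapp build and run look like (the source path, interpreter, and output name below are placeholders, not the actual ceph build invocation):

# bundle a source tree containing a __main__.py into a single executable file
python3 -m zipapp ./cephadm-src -p '/usr/bin/env python3' -o cephadm
# the result runs like the old single-file script, but is a zip archive internally
./cephadm --help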
We would like to request assistance with making it possible for users who are
planning on bootstrapping clusters to download an already compiled version of
cephadm from a canonical location. A secondary goal would also be to sign that
binary. However, I would treat that as a nice-to-have since the current
workflow doesn't have this.
I'd love to discuss the technical aspects of this and get some code to
implement this in place. I'd be happy to continue the conversation here - or
if you prefer - in the discussion forum of your choice. Thanks very much!
[1] - It is still python, and it's not compiled to native binaries but is
rather a zipapp - https://docs.python.org/3/library/zipapp.html
[2] - I wanted to link to the previous discussion, but neither the Archived-At
header URL nor a search turns it up. So maybe it got lost in the infra issues a
while back. If you need references to the previous thread, I can forward them,
because I still have them on my mailserver.
Hi, I'm doing a master's degree on distributed file systems, specifically
CephFS in HPC, and I have some questions about the metadata migration
process that balances the MDS cluster.
* Sage Weil's Ph.D. thesis and the documentation on the Ceph website say
that before migrating a subtree to balance the MDS cluster, the first step is
to freeze the subtree, and when the migration is complete, to unfreeze it.
Doesn't this affect performance, since the subtree stays unavailable until the
migration is over?
* Could you explain why the freezes are required at all?
* Is it possible to change the migration process to migrate a copy of the
inodes first and then migrate the inodes that were modified during the
migration? That is, take something like a snapshot of the subtree, migrate
that, and then migrate any changes that happened after the snapshot migration
is done, reducing the time spent with the subtree frozen.
Note: ignore my prior e-mail, some errors occurred ;)
--
Odair M. Ditkun Jr
Centro de Computação Científica e Software Livre — C3SL
Master's student in Networks and Distributed Systems (PPGINF)
B.Sc. in Computer Science (UFPR)
Hi folks,
I desperately need some help figuring out how to trigger a metadata sync update in a multisite environment. I'd like to change a data item in RGWBucketInfo at the master zone and have the other zones pick up the change. I have tried rgw::sal::Bucket::put_info() and RGWBucketCtl::store_bucket_instance_info(); either approach only updates the local bucket info, and the other zones do not pick up the updates. What am I missing? Could someone please show me an example of how to trigger a sync update?
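(A sketch of the checks I assume are relevant here, using standard radosgw-admin subcommands with zone-specific options omitted; they should show whether the change produced a metadata log entry for the other zones to pull.)

radosgw-admin mdlog list            # on the master zone: recent metadata log entries
radosgw-admin metadata sync status  # on a secondary zone: metadata sync markers
radosgw-admin sync status           # on a secondary zone: overall sync summary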
Thanks a lot,
Yixin