hey Gal and Eric,
in today's standup, we discussed the version of our apache arrow
submodule. it's currently pinned at 6.0.1, which was tagged in nov.
2021. the centos9 builds are using the system package
libarrow-devel-9.0.0. arrow's upstream recently tagged an 11.0.0
release.
as far as i know, there still aren't any system packages for ubuntu,
so we're likely to be stuck with the submodule for quite a while. how
do you guys want to handle these updates? is it worth trying to update
before the reef release?
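if we do decide to bump it, the mechanical part is just moving the submodule pointer, roughly like this (the src/arrow path and the upstream tag name are assumptions on my part, adjust to our actual layout):

    # assumed submodule path and arrow tag name; adjust as needed
    cd src/arrow
    git fetch --tags origin
    git checkout apache-arrow-11.0.0
    cd ../..
    git add src/arrow
    git commit -m "arrow: bump submodule to 11.0.0"

the real work, of course, is making sure everything still builds against the newer API.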
We weren't targeting bullseye; once we discovered the compiler version
problem, the focus shifted to bookworm. If anyone would like to help
maintain the debian builds, or look into these issues, it would be
welcome:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1030129
https://tracker.ceph.com/issues/61845
On Mon, Aug 21, 2023 at 7:50 AM Matthew Darwin <bugs(a)mdarwin.ca> wrote:
> Thanks for the link to the issue. Any reason it wasn't added to the
> release notes (for bullseye)?
>
> I am also waiting for this to be available to start testing.
> On 2023-08-21 10:25, Josh Durgin wrote:
>
> There was difficulty building on bullseye due to the older version of GCC
> available: https://tracker.ceph.com/issues/61845
>
> On Mon, Aug 21, 2023 at 3:01 AM Chris Palmer <chris.palmer(a)idnet.com> wrote:
>
>
> I'd like to try reef, but we are on debian 11 (bullseye).
> In the ceph repos, there are debian-quincy/bullseye and
> debian-quincy/focal, but under reef there are only focal & jammy.
>
> Is there a reason why there is no reef/bullseye build? I had thought
> that the blocker only affected debian-bookworm builds.
>
> Thanks, Chris
> _______________________________________________
> ceph-users mailing list -- ceph-users(a)ceph.io
> To unsubscribe send an email to ceph-users-leave(a)ceph.io
>
Hello,
What was the motivation for the "issue pool application warning even if
pool is empty" change [1]? This didn't occur to me what I saw the PR, but
it basically makes it impossible to create a pool without HEALTH_WARN
popping up. The reason is that pool creation and pool application
enablement are separate monitor commands: there is no way to create
a pool and enable an application on a pool atomically. And this is why
this health check has always been limited to in-use pools (also pointed
out by Greg when it was being introduced [2]).
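For example, even the minimal documented sequence now necessarily passes
through a WARN state (pool and application names here are just placeholders):

    ceph osd pool create foo                  # HEALTH_WARN pops up here, even though the pool is empty
    ceph osd pool application enable foo rbd  # the warning only clears after this second command

There is simply no window in which both can happen at once.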
[1] https://github.com/ceph/ceph/pull/47560
[2] https://github.com/ceph/ceph/pull/15763#discussion_r123084421
Thanks,
Ilya
Hello,
- Finish v18.2.0 upgrade on LRC? It seems to be running v18.1.3
  - not much of a difference in code commits
- news on teuthology jobs hanging?
  - cephfs issues because of network troubles
  - it's resolved by Patrick
- User council discussion follow-up
  - Detailed info on this pad: https://pad.ceph.com/p/user_dev_relaunch
  - First topic will come from David's team
- 16.2.14 release
  - Pushing to release by this week.
Regards,
Nizam
--
Nizamudeen A
Software Engineer
Red Hat <https://www.redhat.com/>
Hi,
With Pacific and later, which of the following two settings takes precedence?
bluestore_compression_min_blob_size (default 0)
bluestore_compression_min_blob_size_<type> (default 8192)
I'd assume the more specific setting takes precedence?
What's this "blob"? Is it write block?
Then, there is no compression for 4K write block?
I was not able to find it out from doc, hope someone could help to clarify.
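For reference, this is how I'm checking the current values on my cluster
(plain ceph config queries, nothing special):

    ceph config get osd bluestore_compression_min_blob_size
    ceph config get osd bluestore_compression_min_blob_size_hdd
    ceph config get osd bluestore_compression_min_blob_size_ssd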
Thanks!
Tony
Hi,
Say, source image has snapshot s1, s2 and s3.
I expect "export" behaves the same as "deep cp", when specify a snapshot,
with "--export-format 2", only the specified snapshot and all snapshots
earlier than that will be exported.
What I see is that, no matter which snapshot I specify, "export" with
"--export-format 2" always exports the whole image with all snapshots.
Is this expected?
Could anyone help to clarify?
Thanks!
Tony
Hi,
I'm using rbd import and export to copy image from one cluster to another.
Also using import-diff and export-diff to update image in remote cluster.
For example, "rbd --cluster local export-diff ... | rbd --cluster remote import-diff ...".
Sometimes the whole command gets stuck, and I can't tell which end of the pipe it's stuck on.
I did some searching; [1] seems to be the same issue and [2] is also related.
I wonder if there is any way to identify where it's stuck and get more debugging info.
Given [2], I'd suspect the import-diff is stuck, since the rbd client is importing to the
remote cluster. Could network latency be involved here? Ping latency is 7~8 ms.
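To narrow it down, I'm thinking of splitting the pipe through a temporary file
and turning on client-side debug logging, roughly like this (names and debug
levels are just examples). Does that look like a reasonable approach?

    # stage the diff in a file first, to see which side hangs
    rbd --cluster local export-diff --from-snap snap1 mypool/myimage@snap2 /tmp/img.diff
    rbd --cluster remote import-diff /tmp/img.diff mypool/myimage

    # or keep the pipe but add debug logging on either end
    rbd --cluster local --debug-rbd 20 --debug-ms 1 export-diff --from-snap snap1 mypool/myimage@snap2 - \
      | rbd --cluster remote --debug-rbd 20 import-diff - mypool/myimage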
Any comments are appreciated!
[1] https://bugs.launchpad.net/cinder/+bug/2031897
[2] https://stackoverflow.com/questions/69858763/ceph-rbd-import-hangs
Thanks!
Tony