For developers submitting jobs using teuthology, we now have
recommendations on what priority level to use:
https://docs.ceph.com/docs/master/dev/developer_guide/#testing-priority
--
Patrick Donnelly, Ph.D.
He / Him / His
Senior Software Engineer
Red Hat Sunnyvale, CA
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
Hi,
I created a test ceph cluster of 3 nodes on a single host (following the readme in https://github.com/ceph/ceph/tree/pacific).
I am trying to navigate the code using gdb breakpoints, but it does not seem to be working for me (I attached gdb to the PID of an OSD daemon and set breakpoints, but the breakpoints are never hit).
Could you please tell me what I could be missing and what is the recommended way to set up a debugger with the ceph source code? If there is a write-up that I might have missed, could you also please point me to that?
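For reference, this is roughly what I have been trying (the binary path, PID, and breakpoint function below are just examples from my local vstart build, so I may have some of this wrong):

    # find the PID of one of the OSD daemons started by vstart.sh
    pgrep -af ceph-osd

    # attach gdb to it and set a breakpoint on an OSD code path
    sudo gdb ./build/bin/ceph-osd -p <osd-pid>
    (gdb) break PrimaryLogPG::do_op
    (gdb) continue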
Thank You,
Surabhi Gupta
Graduate Student - University of Wisconsin Madison
We're happy to announce the 5th backport release in the Pacific series.
We recommend all users update to this release. For detailed release
notes with links and a changelog, please refer to the official blog entry at
https://ceph.io/en/news/blog/2021/v16-2-5-pacific-released
Notable Changes
---------------
* The `ceph-mgr-modules-core` debian package no longer recommends
`ceph-mgr-rook`, because the latter depends on `python3-numpy`, which
cannot be imported in multiple Python sub-interpreters if its version
is older than 1.19. Since `apt-get` installs `Recommends` packages by
default, `ceph-mgr-rook` was always installed along with the `ceph-mgr`
debian package as an indirect dependency. If your workflow depends on
this behavior, you might want to install `ceph-mgr-rook` separately.
* mgr/nfs: The `nfs` module has been moved out of the volumes plugin. Before
using the `ceph nfs` commands, the `nfs` mgr module must be enabled (see the
sketch after this list).
* volumes/nfs: The `cephfs` cluster type has been removed from the `nfs
cluster create` subcommand. Clusters deployed by cephadm can support an
NFS export of both `rgw` and `cephfs` from a single NFS cluster instance.
* The `nfs cluster update` command has been removed. You can modify the
placement of an existing NFS service (and/or its associated ingress
service) using `orch ls --export` and `orch apply -i ...`.
* The `orch apply nfs` command no longer requires a pool or namespace
argument. We strongly encourage users to use the defaults so that the
`nfs cluster ls` and related commands will work properly.
* The `nfs cluster delete` and `nfs export delete` commands are
deprecated and will be removed in a future release. Please use `nfs
cluster rm` and `nfs export rm` instead.
* A long-standing bug that prevented 32-bit and 64-bit client/server
interoperability under msgr v2 has been fixed. In particular, mixing
armv7l (armhf) and x86_64 or aarch64 servers in the same cluster now works.
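To illustrate the NFS-related changes above, here is a rough sketch of the
new workflow (the cluster name "foo", hostnames, YAML file name, and export
path are placeholders, not part of the release):

    # enable the nfs mgr module before using any `ceph nfs` commands
    ceph mgr module enable nfs

    # create an NFS cluster (the old `cephfs` type argument is gone)
    ceph nfs cluster create foo "2 host1 host2"
    ceph nfs cluster ls

    # adjust the placement of an existing NFS (or ingress) service
    ceph orch ls nfs --export > nfs.foo.yaml
    # ... edit the placement section of nfs.foo.yaml ...
    ceph orch apply -i nfs.foo.yaml

    # removal, using the new `rm` commands instead of the deprecated `delete`
    ceph nfs export rm foo /myexport
    ceph nfs cluster rm foo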
Getting Ceph
------------
* Git at git://github.com/ceph/ceph.git
* Tarball at https://download.ceph.com/tarballs/ceph-16.2.5.tar.gz
* For packages, see https://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: 0883bdea7337b95e4b611c768c0279868462204a
On 7/5/21 09:55, Dominik Csapak wrote:
> Hi,
>
> just wanted to ask if it is intentional that
>
> http://ceph.com/pgcalc/
>
> results in a 404 error?
>
> is there any alternative url?
> it is still linked from the official docs.
>
> with kind regards
> Dominik
> _______________________________________________
> ceph-users mailing list -- ceph-users(a)ceph.io
> To unsubscribe send an email to ceph-users-leave(a)ceph.io
>
hi,
is there any info on this?
(cc'ing the dev list too, maybe it helps)
kind regards
Dominik
We plan to publish this release on or before 7/9/21.
Details of this release summarized here:
https://tracker.ceph.com/issues/51480#note-1
Please review and add commits to the release notes:
https://github.com/ceph/ceph/pull/42170
Seeking review/approval (please review failures and add ticket numbers
if possible):
rados - Neha, Ernesto ?
rgw - Casey?
rbd - Ilya?
krbd - Ilya?
fs - Patrick?
upgrade/nautilus-x (pacific) - Casey ? (are we missing a backport?)
upgrade/pacific-p2p - Neha, Josh ?
Thx
YuriW
Hi all,
I'm reading the ceph survey results: https://ceph.io/community/2021-ceph-user-survey-results.
Do we have data on which AsyncMessenger transport type is used (TCP/RDMA/DPDK)?
Why are RDMA and DPDK not used more often?
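To be clear, by transport type I mean the messenger backend selected via the
`ms_type` option, roughly like this (option values as I understand them, so
please correct me if I have them wrong):

    # posix (plain TCP) is the default AsyncMessenger backend
    ceph config set global ms_type async+posix
    # RDMA backend:
    # ceph config set global ms_type async+rdma
    # DPDK backend:
    # ceph config set global ms_type async+dpdk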
B.R.
Jerry
Hi Folks,
The performance meeting will be starting in about 35 minutes! Today we
will talk a little bit about osd_client_message_cap and the
implementation of throttling and flow control.
Hope to see you there!
Etherpad:
https://pad.ceph.com/p/performance_weekly
Bluejeans:
https://bluejeans.com/908675367
Mark
Hi Dimitri,
Right now we're using CentOS for the base image for Ceph containers.
Now that CentOS is moving to a rolling-upgrade-esque release style
with CentOS Stream, it's an open question whether we should stick with it.
A more stable base image that gets reliable security fixes would be
preferable. One thought is to use Red Hat's Universal Base Image (UBI)
[1] which is just RHEL-lite with a target audience of upstream
projects. Or perhaps we can select another base image.
What do you think?
[1] https://www.redhat.com/en/blog/introducing-red-hat-universal-base-image
--
Patrick Donnelly, Ph.D.
He / Him / His
Principal Software Engineer
Red Hat Sunnyvale, CA
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D