I created a test ceph cluster of 3 nodes on a single host (following the readme in https://github.com/ceph/ceph/tree/pacific).
I am trying to navigate the code using gdb breakpoints, but it does not seem to be working for me (I attached gdb to the pid of an OSD daemon and set breakpoints, but the breakpoints are never hit).
Could you please tell me what I could be missing and what is the recommended way to set up a debugger with the ceph source code? If there is a write-up that I might have missed, could you also please point me to that?
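For reference, the attach workflow in question looks roughly like the sketch below. The process lookup and the breakpoint symbol `OSD::ms_dispatch` are illustrative, not necessarily the exact ones used; note that breakpoints only resolve if the binary was built with debug symbols and without aggressive optimization.

```shell
# Illustrative sketch: build the gdb invocation for attaching to a
# running OSD. "<osd-pid>" is a placeholder used when no OSD is running;
# OSD::ms_dispatch is an example symbol, not a prescribed breakpoint.
OSD_PID=$(pgrep -o -f ceph-osd || echo "<osd-pid>")
GDB_CMD="sudo gdb -p $OSD_PID -ex 'break OSD::ms_dispatch' -ex continue"
echo "$GDB_CMD"
```

A common cause of silent breakpoints is attaching to a packaged (optimized, stripped) binary rather than a local debug build.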
Graduate Student - University of Wisconsin-Madison
We're happy to announce the 5th backport release in the Pacific series.
We recommend that users update to this release. For detailed release
notes with links and a changelog, please refer to the official blog entry at
* The `ceph-mgr-modules-core` debian package no longer recommends
`ceph-mgr-rook`, as the latter depends on `python3-numpy`, which
cannot be imported in multiple Python sub-interpreters more than once
if the version of `python3-numpy` is older than 1.19. Since `apt-get`
installs `Recommends` packages by default, `ceph-mgr-rook` was
always installed along with the `ceph-mgr` debian package as an indirect
dependency. If your workflow depends on this behavior, you might want to
install `ceph-mgr-rook` separately.
* mgr/nfs: The `nfs` module has been moved out of the volumes plugin. Before
using the `ceph nfs` commands, the `nfs` mgr module must be enabled.
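As a sketch, enabling the module before any `ceph nfs` usage looks like the following (guarded so it is a no-op on a machine without a cluster):

```shell
# Enable the nfs mgr module before using `ceph nfs` commands.
# Guarded: does nothing if no ceph CLI (and hence no cluster) is present.
if command -v ceph >/dev/null 2>&1; then
  ceph mgr module enable nfs
  ceph nfs cluster ls
else
  echo "ceph CLI not available; skipping"
fi
```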
* volumes/nfs: The `cephfs` cluster type has been removed from the `nfs
cluster create` subcommand. Clusters deployed by cephadm can support an
NFS export of both `rgw` and `cephfs` from a single NFS cluster instance.
* The `nfs cluster update` command has been removed. You can modify the
placement of an existing NFS service (and/or its associated ingress
service) using `orch ls --export` and `orch apply -i ...`.
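A sketch of that replacement workflow, using only the commands named above (the spec file name is illustrative):

```shell
# Dump the current service specs, edit the placement, and re-apply.
# "service-specs.yaml" is an illustrative file name. Guarded so the
# sketch is a no-op without a running cluster.
if command -v ceph >/dev/null 2>&1; then
  ceph orch ls --export > service-specs.yaml
  # ... edit the placement: section of the nfs (and ingress) service ...
  ceph orch apply -i service-specs.yaml
else
  echo "ceph CLI not available; skipping"
fi
```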
* The `orch apply nfs` command no longer requires a pool or namespace
argument. We strongly encourage users to use the defaults so that the
`nfs cluster ls` and related commands will work properly.
* The `nfs cluster delete` and `nfs export delete` commands are
deprecated and will be removed in a future release. Please use `nfs
cluster rm` and `nfs export rm` instead.
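The renames map one-to-one; as a sketch (the cluster name `mynfs` and pseudo-path `/cephfs` are illustrative):

```shell
# Deprecated spelling                    ->  Replacement
#   ceph nfs cluster delete mynfs        ->  ceph nfs cluster rm mynfs
#   ceph nfs export delete mynfs /cephfs ->  ceph nfs export rm mynfs /cephfs
# Guarded: only runs against a real cluster if the ceph CLI is present.
if command -v ceph >/dev/null 2>&1; then
  ceph nfs cluster rm mynfs
else
  echo "ceph CLI not available; skipping"
fi
```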
* A long-standing bug that prevented 32-bit and 64-bit client/server
interoperability under msgr v2 has been fixed. In particular, mixing
armv7l (armhf) and x86_64 or aarch64 servers in the same cluster now works.
* Git at git://github.com/ceph/ceph.git
* Tarball at https://download.ceph.com/tarballs/ceph-16.2.5.tar.gz
* For packages, see https://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: 0883bdea7337b95e4b611c768c0279868462204a
On 7/5/21 09:55, Dominik Csapak wrote:
> just wanted to ask if it is intentional that
> results in a 404 error?
> is there any alternative url?
> it is still linked from the official docs.
> with kind regards
is there any info on this?
(cc'ing the dev list too, maybe it helps)
We plan to publish this release on or before 7/9/21.
Details of this release summarized here:
Please review and add commits to the release notes:
Seeking review/approval (please review failures and add ticket numbers):
rados - Neha, Ernesto ?
rgw - Casey?
rbd - Ilya?
krbd - Ilya?
fs - Patrick?
upgrade/nautilus-x (pacific) - Casey ? (are we missing a backport?)
upgrade/pacific-p2p - Neha, Josh ?
Right now we're using CentOS for the base image for Ceph containers.
Now that CentOS is moving to a rolling-release style with CentOS
Stream, it's an open question whether we should stick with it.
A more stable base image that gets reliable security fixes would be
preferable. One thought is to use Red Hat's Universal Base Image (UBI),
which is essentially RHEL-lite with a target audience of upstream
projects. Or perhaps we can select another base image.
What do you think?
Patrick Donnelly, Ph.D.
He / Him / His
Principal Software Engineer
Red Hat Sunnyvale, CA