We're happy to announce the fourth bugfix release in the Octopus series.
In addition to a security fix in RGW, this release brings a range of fixes
across all components. We recommend that all Octopus users upgrade to this
release. For a detailed release notes with links & changelog please
refer to the official blog entry at https://ceph.io/releases/v15-2-4-octopus-released
Notable Changes
---------------
* CVE-2020-10753: rgw: sanitize newlines in s3 CORSConfiguration's ExposeHeader
(William Bowling, Adam Mohammed, Casey Bodley)
* Cephadm: There were a lot of small usability improvements and bug fixes:
* Grafana when deployed by Cephadm now binds to all network interfaces.
* `cephadm check-host` now prints all detected problems at once.
* Cephadm now calls `ceph dashboard set-grafana-api-ssl-verify false`
when generating an SSL certificate for Grafana.
* The Alertmanager is now correctly pointed to the Ceph Dashboard.
* `cephadm adopt` now supports adopting an Alertmanager.
* `ceph orch ps` now supports filtering by service name.
* `ceph orch host ls` now marks hosts as offline if they are not
accessible.
* Cephadm can now deploy NFS Ganesha services. For example, to deploy NFS with
a service id of mynfs that will use the RADOS pool nfs-ganesha and namespace
nfs-ns::

    ceph orch apply nfs mynfs nfs-ganesha nfs-ns
* Cephadm: `ceph orch ls --export` now returns all service specifications in
yaml representation that is consumable by `ceph orch apply`. In addition,
the commands `orch ps` and `orch ls` now support `--format yaml` and
`--format json-pretty`.
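For example, the exported specifications can be re-applied as follows (a
sketch only; the file name is arbitrary)::

    # dump all current service specifications, review or edit them, then re-apply
    ceph orch ls --export > specs.yaml
    ceph orch apply -i specs.yaml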
* Cephadm: `ceph orch apply osd` supports a `--preview` flag that prints a preview of
the OSD specification before deploying OSDs. This makes it possible to
verify that the specification is correct before applying it.
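For example (a sketch; the spec file name and its contents are illustrative,
and the spec is assumed to be passed with `-i` as for other apply commands)::

    # osd_spec.yaml might contain, e.g.:
    #   service_type: osd
    #   service_id: default_drives
    #   placement:
    #     host_pattern: '*'
    #   data_devices:
    #     all: true
    ceph orch apply osd -i osd_spec.yaml --preview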
* RGW: The `radosgw-admin` sub-commands dealing with orphans --
`radosgw-admin orphans find`, `radosgw-admin orphans finish`, and
`radosgw-admin orphans list-jobs` -- have been deprecated. They have
not been actively maintained and they store intermediate results on
the cluster, which could fill a nearly-full cluster. They have been
replaced by a tool, currently considered experimental,
`rgw-orphan-list`.
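A sketch of its usage (the pool name is only an example; use the data pool
of the zone in question)::

    # list RADOS objects in the data pool that are no longer referenced by any bucket
    rgw-orphan-list default.rgw.buckets.data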
* RBD: The name of the rbd pool object that is used to store
rbd trash purge schedule is changed from "rbd_trash_trash_purge_schedule"
to "rbd_trash_purge_schedule". Users that have already started using
`rbd trash purge schedule` functionality and have per pool or namespace
schedules configured should copy "rbd_trash_trash_purge_schedule"
object to "rbd_trash_purge_schedule" before the upgrade and remove
"rbd_trash_purge_schedule" using the following commands in every RBD
pool and namespace where a trash purge schedule was previously
configured::
rados -p <pool-name> [-N namespace] cp rbd_trash_trash_purge_schedule rbd_trash_purge_schedule
rados -p <pool-name> [-N namespace] rm rbd_trash_trash_purge_schedule
or use any other convenient way to restore the schedule after the
upgrade.
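To confirm afterwards that only the new object remains, something like the
following can be used (a sketch)::

    rados -p <pool-name> [-N namespace] ls | grep rbd_trash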
Getting Ceph
------------
* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-15.2.4.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: 7447c15c6ff58d7fce91843b705a268a1917325c
--
David Galloway
Systems Administrator, RDU
Ceph Engineering
IRC: dgalloway
Hi,
I am trying to use Ceph dmclock to see how it works for QoS control.
In particular, I want to set "osd_op_queue" to "mclock_client" in order to
configure different [r, w, l] values for each client. The Ceph version I use
is nautilus 14.2.9.
I noticed that the "OSD Config Reference" section of the Ceph documentation
states that "the mClock based ClientQueue (mclock_client) also incorporates
the client identifier in order to promote fairness between clients.", so I
believe librados can support per-client configurations right now. How can I
set up the Ceph configuration to configure different [r, w, l] values for
different clients using such a "client identifier"? Thanks.
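Concretely, what I have in mind is something like the following (only a
sketch; the option names are taken from the nautilus OSD config reference,
the values are placeholders, and as far as I can tell these apply to the
client operation class as a whole rather than to individual clients):

    # switch to the client-aware mClock queue (takes effect after an OSD restart)
    ceph config set osd osd_op_queue mclock_client
    # reservation / weight / limit for client ops
    ceph config set osd osd_op_queue_mclock_client_op_res 100.0
    ceph config set osd osd_op_queue_mclock_client_op_wgt 500.0
    ceph config set osd osd_op_queue_mclock_client_op_lim 0.0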
Best,
Zhenbo Qiao
Hello,
I have a question about the "ceph dashboard backend API tests" status
and its ceph-dashboard-pr-backend job. It appears to be completely
standalone, which means that we are building everything twice for
each PR: once for "make check" and once for "ceph dashboard backend
API tests". The only difference between these builds is that for
"make check", WITH_SEASTAR=ON is added.
Can the ceph-pull-requests job, which is responsible for "make check",
be changed to run run-backend-api-tests.sh at the end and report
two statuses instead of one? That is:
- run make
- if successful, run ctest and report "make check" status
- if successful, run run-backend-api-tests.sh and report "ceph
dashboard backend API tests" status
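In shell terms, the proposed sequence would look roughly like this (only a
sketch; report_status stands in for whatever mechanism the job uses to
publish a commit status):

    cd build
    make -j"$(nproc)" || exit 1
    # report "make check" based on the ctest result
    if ctest --output-on-failure; then
        report_status "make check" success
    else
        report_status "make check" failure
        exit 1
    fi
    # then run the dashboard API tests and report the second status
    if ../src/pybind/mgr/dashboard/run-backend-api-tests.sh; then
        report_status "ceph dashboard backend API tests" success
    else
        report_status "ceph dashboard backend API tests" failure
        exit 1
    fi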
Thanks,
Ilya
Hey folks,
having a development environment that makes use of containers has at
least two benefits compared to plain vstart etc.:
* Deploying miscellaneous on-demand services (e.g. a monitoring stack)
is easier.
* Using containers makes the development environment more similar to
future production environments.
It turns out we (as the developer community) already have plenty of
different approaches to this problem:
# We have two similar dashboard projects
https://github.com/ricardoasmarques/ceph-dev-docker
https://github.com/rhcs-dashboard/ceph-dev/
They're based on docker-compose and, in addition to starting the core
services, they can also deploy a monitoring stack. They differ in that
ceph-dev exclusively uses containers, while ceph-dev-docker uses vstart
for the core services.
# cstart
https://github.com/ceph/ceph/blob/master/src/cstart.sh
Similar to ceph-dev, but uses `cephadm bootstrap` to set up the cluster.
It builds a container image containing the binaries from build/bin.
# kcli
https://github.com/karmab/kcli-plans/tree/master/ceph
Which is ceph-ansible based.
# vstart --cephadm
Which is similar to https://github.com/ricardoasmarques/ceph-dev-docker
but uses cephadm to deploy additional services instead of docker-compose.
# cephadm bootstrap --shared_ceph_folder
Deploys a pure cephadm-based cluster, but bind-mounts folders from the
local source checkout into the containers.
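For reference, the invocation is roughly the following (a sketch; the IP
and the path are placeholders):

    # bootstrap a single-node cluster and share the local source tree with it
    sudo ./cephadm bootstrap --mon-ip 192.168.122.10 --shared_ceph_folder /path/to/ceph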
My questions are now:
* Is this list complete or did I miss anything?
* Are there use cases that are not possible right now?
* As we have a lot of similar solutions here, is there a possibility to
reduce the maintenance overhead somehow?
Best,
Sebastian
--
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg). Geschäftsführer: Felix Imendörffer
Hi all,
I find that in the cephfs kernel module (fs/ceph/file.c), the function
ceph_fallocate returns -EOPNOTSUPP when mode !=
(FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE).
Recently, we have been trying to use cephfs, but we need the fallocate
syscall to be supported so that file writes do not fail after space has
been reserved. However, we find that the cephfs kernel module does not
support this right now. Can anyone explain why it is not implemented?
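The restriction is easy to see from user space; for example (a sketch, with
placeholder paths):

    # plain preallocation (mode == 0) - expected to fail with EOPNOTSUPP on the kernel client
    fallocate --length 1GiB /mnt/cephfs/testfile
    # punching a hole while keeping the size is the only accepted combination
    fallocate --punch-hole --keep-size --offset 0 --length 4096 /mnt/cephfs/testfile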
We also find that ceph-fuse supports the fallocate syscall, but with poor
write performance compared to a cephfs kernel mount. There is a large
performance gap with the fio config below:
[global]
name=fio-seq-write
filename=/opt/fio-seq-write
rw=write
bs=128k
direct=0
numjobs=
time_based=1
runtime=900
[file1]
size=32G
ioengine=libaio
iodepth=16
So are there also some optimization options for ceph-fuse that can be
tuned? I am not quite familiar with the cephfs code yet; any help would be
appreciated. Thanks.
Regards
Ning Yao