Having a development environment that makes use of containers has at
least two benefits compared to plain vstart etc.:
* Deploying miscellaneous on-demand services (e.g. monitoring) is easier.
* Using containers makes the development environment more similar to
future production environments.
It turns out that we (as the developer community) already have plenty of
different approaches to this problem:
# We have two similar dashboard projects
They're based on docker-compose and, in addition to starting the core
services, they can also deploy a monitoring stack. They differ in the
detail that ceph-dev exclusively uses containers, while ceph-dev-docker
uses vstart for the core services.
Similar to ceph-dev, but uses `cephadm bootstrap` to set up the cluster.
It builds a container image containing binaries from build/bin.
Which is ceph-ansible based.
# vstart --cephadm
Which is similar to https://github.com/ricardoasmarques/ceph-dev-docker
but uses cephadm to deploy additional services instead of docker-compose.
# cephadm bootstrap --shared_ceph_folder
Deploys a pure cephadm-based cluster, but mounts different folders.
My questions now are:
* Is this list complete, or did I miss anything?
* Are there use cases which are not possible right now?
* As we have a lot of similar solutions here, is there a possibility to
reduce the maintenance overhead somehow?
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg). Geschäftsführer: Felix Imendörffer
I find that in the CephFS kernel module, fs/ceph/file.c, the
function ceph_fallocate returns -EOPNOTSUPP when mode !=
(FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE).
Recently we tried to use CephFS, but we need fallocate syscall support
so that writes do not fail after space has been reserved. We found that
the CephFS kernel module does not support this right now. Can anyone
explain why this is not implemented yet?
We also found that ceph-fuse supports the fallocate syscall, but with
poor write performance compared to a CephFS kernel mount. There is a
large performance gap under the fio config below:
So, are there some optimization options for ceph-fuse that can be
tuned? I'm not quite familiar with the CephFS code yet; any help would
be appreciated, thanks.
I'm studying the Ceph Watch mechanism.
Does anyone know how to run the ceph_test_stress_watch test?
ceph$ ./do_cmake.sh -DWITH_MGR_DASHBOARD_FRONTEND=OFF -DWITH_SPDK=OFF
ceph$ cd build
build$ ls -l bin/ceph_test_stress_watch
Could someone help review the PR below?
The purpose of this PR is to differentiate the FrontEnds that use
BlockDevice to access the backend block driver. The FrontEnd could
be BlueStore or RBD.
We just migrated ceph:teuthology and all tests under qa/ in ceph:ceph
to Python 3, and from now on the teuthology-worker runs in a Python 3
environment by default unless specified otherwise using
- we need to write tests in Python 3 in master now
- teuthology should be Python 3 compatible
- teuthology bug fixes should be backported to the "py2" branch
If you run into any issues related to Python 3 due to the above
changes, please let me know, and I will try to fix them ASAP.
Currently, the tests under the qa/ directories in the ceph:ceph master
branch are Python 2 and Python 3 compatible, but since we've moved to
Python 3, there is no need to be Python 2 compatible anymore. Since the
sepia lab is still using Ubuntu Xenial, we cannot use features offered
by Python 3.6 at this moment, but we do plan to upgrade the OS to
Bionic soon. Before that happens, the tests need to be compatible with
Python 3.5.
The next step is to
- drop Python 2 support in the ceph:ceph master branch,
- drop Python 2 support in ceph:teuthology master, and
- backport Python 3 compatible changes to octopus and nautilus to ease
the pain of backporting.
I have a question regarding the pointer variables used in the __crush_do_rule__
function of the CRUSH __mapper.c__. Can someone please help me understand the
purpose of the following four pointer variables inside __crush_do_rule__:
int *b = a + result_max;
int *c = b + result_max;
int *w = a;
int *o = b;
The function __crush_do_rule__ is below:
* crush_do_rule - calculate a mapping with the given input and rule
* @map: the crush_map
* @ruleno: the rule id
* @x: hash input
* @result: pointer to result vector
* @result_max: maximum result size
* @weight: weight vector (for map leaves)
* @weight_max: size of weight vector
* @cwin: Pointer to at least map->working_size bytes of memory or NULL.
int crush_do_rule(const struct crush_map *map,
int ruleno, int x, int *result, int result_max,
const __u32 *weight, int weight_max,
void *cwin, const struct crush_choose_arg *choose_args)
{
struct crush_work *cw = cwin;
int *a = (int *)((char *)cw + map->working_size);
int *b = a + result_max;
int *c = b + result_max;
int *w = a;
int *o = b;
int wsize = 0;
const struct crush_rule *rule;
int i, j;
We're happy to announce the tenth release in the Nautilus series. In
addition to fixing a security-related bug in RGW, this release brings a
number of bugfixes across all major components of Ceph. We recommend
that all Nautilus users upgrade to this release. For a detailed
changelog please refer to the ceph release blog at:
* CVE-2020-10753: rgw: sanitize newlines in s3 CORSConfiguration's ExposeHeader
(William Bowling, Adam Mohammed, Casey Bodley)
* RGW: Bucket notifications now support Kafka endpoints. This requires librdkafka of
version 0.9.2 and up. Note that Ubuntu 16.04.6 LTS (Xenial Xerus) has an older
version of librdkafka, and would require an update to the library.
* The pool parameter `target_size_ratio`, used by the pg autoscaler,
has changed meaning. It is now normalized across pools, rather than
specifying an absolute ratio. For details, see :ref:`pg-autoscaler`.
If you have set target size ratios on any pools, you may want to set
these pools to autoscale `warn` mode to avoid data movement during the
upgrade::
ceph osd pool set <pool-name> pg_autoscale_mode warn
* The behaviour of the `-o` argument to the rados tool has been reverted to
its original behaviour of indicating an output file. This reverts it to a more
consistent behaviour when compared to other tools. Specifying object size is now
accomplished by using an upper case O, `-O`.
* The format of MDSs in `ceph fs dump` has changed.
* Ceph will issue a health warning if a RADOS pool's `size` is set to 1,
or, in other words, if the pool is configured with no redundancy. This can
be fixed by setting the pool size to the minimum recommended value::
ceph osd pool set <pool-name> size <num-replicas>
The warning can be silenced with::
ceph config set global mon_warn_on_pool_no_redundancy false
* RGW: bucket listing performance on sharded bucket indexes has been
notably improved by heuristically -- and significantly, in many
cases -- reducing the number of entries requested from each bucket
index shard.
* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-14.2.10.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: b340acf629a010a74d90da5782a2c5fe0b54ac20
The weekly performance meeting will be starting in ~15 minutes. The only
thing I have on the agenda this week is a brief update regarding adding
denc encode/decode to the MDS. We may also have some updates regarding
performance regression testing. Please feel free to add your own items!
I'm facing an issue while doing dynamic debug with the rbd kernel module:
1. sudo cat /boot/config-`uname -r` | grep DYNAMIC_DEBUG
2. sudo mount -t debugfs none /sys/kernel/debug
3. sudo echo 9 > /proc/sysrq-trigger
4. sudo echo 'module rbd +p' | sudo tee -a
/sys/kernel/debug/dynamic_debug/control
In the last step I am getting the error
`tee: /sys/kernel/debug/dynamic_debug/control: Invalid argument`
Can anyone tell me how to resolve this?