Hi all,
I find that in the CephFS kernel module (fs/ceph/file.c), the
function ceph_fallocate returns -EOPNOTSUPP when mode !=
(FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE).
Recently we tried to use CephFS, but we need fallocate syscall support
so that writes to a file cannot fail after space has been reserved.
However, the CephFS kernel module does not support this right now. Can
anyone explain why this is not implemented yet?
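For reference, what we need is plain preallocation, roughly the sketch
below (the mount path and size are made up for illustration); the first
call is the one that currently returns EOPNOTSUPP on a kernel mount,
while punch-hole with keep-size is the one mode the check above accepts:

#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <linux/falloc.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* hypothetical file on a CephFS kernel mount */
    int fd = open("/mnt/cephfs/reserved.dat", O_CREAT | O_RDWR, 0644);
    if (fd < 0)
        return 1;

    /* Plain preallocation (mode 0): reserve 1 GiB so later writes
     * cannot fail with ENOSPC.  This is the call that currently fails
     * with EOPNOTSUPP on the kernel client. */
    if (fallocate(fd, 0, 0, 1024L * 1024 * 1024) < 0)
        printf("fallocate: %s\n", strerror(errno));

    /* Punching a hole while keeping the file size is the only mode the
     * check in ceph_fallocate lets through. */
    if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, 0, 4096) < 0)
        printf("punch hole: %s\n", strerror(errno));

    close(fd);
    return 0;
}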
We also found that ceph-fuse does support the fallocate syscall, but it
ends up with poor write performance compared to a CephFS kernel mount.
There is a large performance gap with the fio config below:
[global]
name=fio-seq-write
filename=/opt/fio-seq-write
rw=write
bs=128k
direct=0
numjobs=
time_based=1
runtime=900
[file1]
size=32G
ioengine=libaio
iodepth=16
So are there any optimization options for ceph-fuse that can be tuned?
I am not quite familiar with the CephFS code yet; any help would be
appreciated, thanks.
Regards
Ning Yao
Hi all,
I'm studying the Ceph Watch mechanism.
Does anyone know how to run the ceph_test_stress_watch test?
ceph$ ./do_cmake.sh -DWITH_MGR_DASHBOARD_FRONTEND=OFF -DWITH_SPDK=OFF
ceph$ cd build
build$ make
build$ ls -l bin/ceph_test_stress_watch
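My guess (untested) is that, like the other librados gtest binaries, it
needs a running cluster behind it, e.g. a vstart cluster started from
the build directory:
build$ MON=1 OSD=3 MGR=1 ../src/vstart.sh -d -n -x
build$ ./bin/ceph_test_stress_watch
build$ ../src/stop.sh
Please correct me if it is meant to be driven differently (e.g. only
through teuthology).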
B.R.
Changcheng
Hi all,
Could someone help review the PR below?
https://github.com/ceph/ceph/pull/35306
The purpose of this PR is to differentiate the front ends that use
BlockDevice to access the backend block driver. The front end could
be BlueStore or RBD.
B.R.
Changcheng
hi folks,
we just migrated ceph:teuthology and all tests under qa/ in ceph:ceph
to python3. and from now on, the teuthology-worker runs in a python3
environment by default unless specified otherwise using
"--teuthology-branch py2".
which means:
- we need to write tests in python3 in master now
- teuthology should be python3 compatible.
- teuthology bug fixes should be backported to "py2" branch.
if you run into any issues related to python3 due to the above
changes, please let me know, and i will try to fix them ASAP.
currently, the tests under the qa/ directory in the ceph:ceph master
branch are python2 and python3 compatible. but since we've moved to
python3, there is no need to be python2 compatible anymore. since the
sepia lab is still using ubuntu xenial, we cannot use features offered
by python3.6 at this moment yet. but we do plan to upgrade the OS to
bionic soon. before that happens, the tests need to be compatible with
python3.5.
the next steps are to
- drop python2 support in ceph:ceph master branch,
- drop python2 support in ceph:teuthology master, and
- backport python3-compatible changes to octopus and nautilus to ease
the pain of backporting.
--
Regards
Kefu Chai
Hi all,
I have a question regarding the pointer variables used in the
__crush_do_rule__ function of CRUSH __mapper.c__. Can someone please
help me understand the purpose of the following four pointer variables
inside __crush_do_rule__:
int *b = a + result_max;
int *c = b + result_max;
int *w = a;
int *o = b;
The function __crush_do_rule__ is below:
/**
 * crush_do_rule - calculate a mapping with the given input and rule
 * @map: the crush_map
 * @ruleno: the rule id
 * @x: hash input
 * @result: pointer to result vector
 * @result_max: maximum result size
 * @weight: weight vector (for map leaves)
 * @weight_max: size of weight vector
 * @cwin: Pointer to at least map->working_size bytes of memory or NULL.
 */
int crush_do_rule(const struct crush_map *map,
                  int ruleno, int x, int *result, int result_max,
                  const __u32 *weight, int weight_max,
                  void *cwin, const struct crush_choose_arg *choose_args)
{
    int result_len;
    struct crush_work *cw = cwin;
    int *a = (int *)((char *)cw + map->working_size);
    int *b = a + result_max;
    int *c = b + result_max;
    int *w = a;
    int *o = b;
    int recurse_to_leaf;
    int wsize = 0;
    int osize;
    int *tmp;
    const struct crush_rule *rule;
    __u32 step;
    int i, j;
    int numrep;
    int out_size;
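My current reading of the scratch-buffer layout is summarized below
(this is an assumption from reading mapper.c, please correct me if it
is wrong):

/*
 * cwin points to a struct crush_work header followed by scratch space.
 * Immediately after the header (map->working_size bytes in) there is
 * room for three vectors of result_max ints each:
 *
 *   a = first scratch vector
 *   b = second scratch vector
 *   c = output vector used for the leaves in CHOOSELEAF steps
 *
 * w and o are just roles assigned to a and b: w is the working vector
 * holding the input of the current rule step, and o is the output
 * vector being filled by that step.  When a step finishes, w and o are
 * swapped via tmp, so the output of one step becomes the input of the
 * next; on EMIT the contents of w are copied into result.
 */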
Thanks
Bobby !
We're happy to announce the tenth release in the Nautilus series. In
addition to fixing a security-related bug in RGW, this release brings a
number of bugfixes across all major components of Ceph. We recommend
that all Nautilus users upgrade to this release. For a detailed
changelog please refer to the ceph release blog at:
https://ceph.io/releases/v14-2-10-nautilus-released
Notable Changes
---------------
* CVE-2020-10753: rgw: sanitize newlines in s3 CORSConfiguration's ExposeHeader
(William Bowling, Adam Mohammed, Casey Bodley)
* RGW: Bucket notifications now support Kafka endpoints. This requires librdkafka of
version 0.9.2 and up. Note that Ubuntu 16.04.6 LTS (Xenial Xerus) has an older
version of librdkafka, and would require an update to the library.
* The pool parameter `target_size_ratio`, used by the pg autoscaler,
has changed meaning. It is now normalized across pools, rather than
specifying an absolute ratio. For details, see :ref:`pg-autoscaler`.
If you have set target size ratios on any pools, you may want to set
these pools to autoscale `warn` mode to avoid data movement during
the upgrade::
ceph osd pool set <pool-name> pg_autoscale_mode warn
* The behaviour of the `-o` argument to the rados tool has been reverted to
its original behaviour of indicating an output file. This makes it more
consistent with other tools. Specifying object size is now
accomplished by using an upper case O (`-O`).
* The format of MDSs in `ceph fs dump` has changed.
* Ceph will issue a health warning if a RADOS pool's `size` is set to 1,
in other words, if the pool is configured with no redundancy. This can
be fixed by setting the pool size to the minimum recommended value
with::
ceph osd pool set <pool-name> size <num-replicas>
The warning can be silenced with::
ceph config set global mon_warn_on_pool_no_redundancy false
* RGW: bucket listing performance on sharded bucket indexes has been
notably improved by heuristically -- and significantly, in many
cases -- reducing the number of entries requested from each bucket
index shard.
Getting Ceph
------------
* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-14.2.10.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: b340acf629a010a74d90da5782a2c5fe0b54ac20
--
Abhishek Lekshmanan
SUSE Software Solutions Germany GmbH
GF: Felix Imendörffer, HRB 36809 (AG Nürnberg)
Hi Folks,
The weekly performance meeting will be starting in ~15 minutes. The only
thing I have on the agenda this week is a brief update regarding adding
denc encode/decode to the MDS. Potentially we may have some updates
regarding performance regression testing. Please feel free to add your own!
Etherpad:
https://pad.ceph.com/p/performance_weekly
Bluejeans:
https://bluejeans.com/908675367
Thanks,
Mark
I am facing an issue while doing dynamic debug with the rbd kernel module.
Steps:
1. sudo cat /boot/config-`uname -r` | grep DYNAMIC_DEBUG
CONFIG_DYNAMIC_DEBUG=y
2. sudo mount -t debugfs none /sys/kernel/debug
3. sudo echo 9 > /proc/sysrq-trigger
4. sudo echo 'module rbd +p' | sudo tee -a /sys/kernel/debug/dynamic_debug/control
In the last step I am getting this error:
`tee: /sys/kernel/debug/dynamic_debug/control: Invalid argument`
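One thing I have not ruled out yet (just my assumption, not verified) is
whether the rbd module is simply not loaded at that point, or whether the
quoting of the query gets mangled before it reaches tee. I plan to
double-check with something like:
lsmod | grep rbd        # is the rbd module actually loaded?
sudo modprobe rbd       # load it if it is not
echo 'module rbd +p' | sudo tee /sys/kernel/debug/dynamic_debug/control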
Can anyone tell me how to resolve this?
Thank You.