For the benefit of our new folks and for posterity:
Many of our QA tests for CephFS are located in qa/tasks/cephfs/*.
These get run in teuthology with various cluster configurations. Everyone
will need to be able to develop and run these tests locally, without
waiting for teuthology, so you can rapidly find errors in your test
cases and development builds.
To do this, you need to use the qa/tasks/vstart_runner.py script. This
allows you to use a vstart cluster to execute your tests by providing
the necessary frameworks the tests expect.
On a development box*, build ceph. If you're just testing CephFS, you
can usually get away with a smaller build without rbd/rgw:
./do_cmake.sh -DWITH_PYTHON3:BOOL=ON -DWITH_BABELTRACE=OFF
-DWITH_MANPAGE=OFF -DWITH_RBD=OFF -DWITH_RADOSGW=OFF && time (cd build
&& make -j24 CMAKE_BUILD_TYPE=Debug -k)
Next, build teuthology:
git clone https://github.com/ceph/teuthology.git && cd teuthology &&
virtualenv ./venv && source venv/bin/activate && pip install --upgrade
pip && pip install -r requirements.txt && python setup.py develop
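Note that vstart_runner.py imports teuthology's modules, so keep this
virtualenv active (or re-activate it) in the shell where you run the
tests below, e.g. (adjust the path to wherever you cloned teuthology):
source ~/teuthology/venv/bin/activate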
Next, start a vstart cluster:
cd ceph/build && env MDS=3 ../src/vstart.sh -d -b -l -n --without-dashboard
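Once it comes up, you can sanity-check the cluster with the binaries and
the ceph.conf that vstart.sh leaves in build/, for example:
./bin/ceph -s
./bin/ceph fs status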
Finally, run vstart_runner:
python2 ../qa/tasks/vstart_runner.py --interactive
tasks.cephfs.test_snapshots.TestSnapshots
^ That's an example test. The format mirrors the path of the test file,
qa/tasks/cephfs/test_snapshots.py, and the final part is the class we're
testing, TestSnapshots. This invocation of vstart_runner.py will run
every test in TestSnapshots, i.e. every method whose name begins with
"test_". If you want to run a specific test, you can do:
python2 ../qa/tasks/vstart_runner.py --interactive
tasks.cephfs.test_snapshots.TestSnapshots.test_snapclient_cache
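If you're not sure which classes or test methods exist, one quick way is
to grep the test file itself (run from build/, like the commands above):
grep -n "^class \|def test_" ../qa/tasks/cephfs/test_snapshots.py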
Please give the above a try sometime soon so you know how to do it and
we can resolve any problems. This is an important skill to have for
developing CephFS.
* Hopefully you're using one of the beefy development boxes that make
compiling Ceph fast. I recommend one of the senta boxes like
senta03.front.sepia.ceph.com.
--
Patrick Donnelly
Hi,
I tried mounting CephFS with the kernel driver on a vstart cluster on
the master branch (latest commit SHA:
d33c281b6437523a66d7802a39514f1ae74ec8e7) without a secret key, but I
was unsuccessful. Below is a copy of the stdout from my mount attempts.
The first mount failed because I picked the wrong port. However, the
second attempt (with the key) was successful and the third (without the
key) wasn't.
build$ sudo mount -t ceph 192.168.0.218:40112:/ /mnt/kcephfs -o
name=admin,secret=AQDrjqpdy0fGKhAATIRQrdPhXB/uIi+86xuijQ==
^C
build$ sudo mount -t ceph 192.168.0.218:40113:/ /mnt/kcephfs -o
name=admin,secret=AQDrjqpdy0fGKhAATIRQrdPhXB/uIi+86xuijQ==
build$ sudo umount /mnt/kcephfs/
build$ sudo mount -t ceph 192.168.0.218:40113:/ /mnt/kcephfs -o name=admin
mount: /mnt/kcephfs: wrong fs type, bad option, bad superblock on
192.168.0.218:40113:/, missing codepage or helper program, or other
error.
build$ dmesg | tail
[ 806.561086] libceph: mon0 192.168.0.218:40112 socket closed (con
state CONNECTING)
[ 810.770148] libceph: mon0 192.168.0.218:40113 session established
[ 810.772603] libceph: client4275 fsid 54b1853a-1a08-482d-baf4-644eec15e830
[ 822.452439] libceph: no secret set (for auth_x protocol)
[ 822.452443] libceph: error -22 on auth protocol 2 init
build$
Just to make sure, I tried a fourth time, with the key -
build$ mount -t ceph 192.168.0.218:40113:/ /mnt/kcephfs -o
name=admin,secret=AQDrjqpdy0fGKhAATIRQrdPhXB/uIi+86xuijQ==
build$ mount | grep kcephfs
192.168.0.218:40113:/ on /mnt/kcephfs type ceph
(rw,relatime,name=admin,secret=<hidden>,acl)
Thinking that the mount.ceph helper might be looking for the file
`ceph.client.admin.keyring`, I copied the admin keyring into a file and
placed it in build/ as well as in /etc/ceph. However, that didn't help
either. I've copied the shell output of the mount commands and the
contents of the keyring files here, in case that helps -
https://paste.fedoraproject.org/paste/YbFY235S3DaEryje9HDPAw.
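In case the paste expires, the keyring file I placed in those locations
looked roughly like this (same key as in the mount commands above):
[client.admin]
        key = AQDrjqpdy0fGKhAATIRQrdPhXB/uIi+86xuijQ==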
Thanks,
- Rishabh