For the benefit of our new folks and for posterity:
Many of our QA tests for CephFS are located in qa/tasks/cephfs/*.
These get run in teuthology with various cluster configurations. What
everyone will need to be able to do is develop these tests locally
without waiting for teuthology so you can rapidly find errors in your
test cases and development builds.
To do this, you need to use the qa/tasks/vstart_runner.py script. This
allows you to use a vstart cluster to execute your tests by providing
the necessary frameworks the tests expect.
On a development box*, build ceph. If you're just testing CephFS, you
can usually get away with a smaller build without rbd/rgw:
./do_cmake.sh -DWITH_PYTHON3:BOOL=ON -DWITH_BABELTRACE=OFF
-DWITH_MANPAGE=OFF -DWITH_RBD=OFF -DWITH_RADOSGW=OFF && time (cd build
&& make -j24 CMAKE_BUILD_TYPE=Debug -k)
Next, build teuthology:
git clone https://github.com/ceph/teuthology.git && cd teuthology &&
virtualenv ./venv && source venv/bin/activate && pip install --upgrade
pip && pip install -r requirements.txt && python setup.py develop
Next, start a vstart cluster:
cd ceph/build && env MDS=3 ../src/vstart.sh -d -b -l -n --without-dashboard
Finally, run vstart_runner:
python2 ../qa/tasks/vstart_runner.py --interactive
tasks.cephfs.test_snapshots.TestSnapshots
^ That's an example test. The dotted name follows the directory
structure of qa/tasks/cephfs/test_snapshots.py. The final part is the
class under test, TestSnapshots. This invocation of vstart_runner.py
will run every test in TestSnapshots, i.e. every method beginning with
"test_". If you want to run a specific test, you can do:
python2 ../qa/tasks/vstart_runner.py --interactive
tasks.cephfs.test_snapshots.TestSnapshots.test_snapclient_cache
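As a mental model, the dotted id maps onto the source tree much like a
unittest dotted test id. Here's a sketch of that mapping (an
illustration only, not vstart_runner's actual resolution code):

```python
# Hypothetical sketch: split a dotted test id into module / class / method.
test_id = "tasks.cephfs.test_snapshots.TestSnapshots.test_snapclient_cache"
module, cls, method = test_id.rsplit(".", 2)

print(module)  # tasks.cephfs.test_snapshots -> qa/tasks/cephfs/test_snapshots.py
print(cls)     # TestSnapshots              -> the test class
print(method)  # test_snapclient_cache      -> one "test_" method; omit to run all
```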
Please give the above a try sometime soon so you know how to do it and
we can resolve any problems. This is an important skill to have for
developing CephFS.
* Hopefully you're using one of the beefy development boxes that make
compiling Ceph fast. I recommend one of the senta boxes like
senta03.front.sepia.ceph.com.
--
Patrick Donnelly
Hello,
I was unsuccessful in mounting CephFS with the kernel client on Ubuntu
(it was senta02) today. Here's the command I was using to mount -
$ sudo mount -t ceph 172.21.9.32:40886:/ /mnt/kcephfs1 -o
name=admin,secret=AQCpa8ld1ahOIhAACPhh2qncfv0LkuI6+kUsEA==
mount: mount 172.21.9.32:40886:/ on /mnt/kcephfs1 failed: Connection timed out
I got the following message in dmesg logs -
[328388.882391] libceph: mon0 172.21.9.32:40886 feature set mismatch,
my 107b84a842aca < server's 40107b84a842aca, missing 400000000000000
[328388.894612] libceph: mon0 172.21.9.32:40886 missing required
protocol features
Port 40886 spoke msgr v1, so as far as I can see, the command looks
fine. I tried 40885 too, but I got "Connection timed out" on stdout
and "socket closed (con state CONNECTING)" in dmesg logs as usual. I
also tried running the cluster on loopback/localhost and used
127.0.0.1:40917:/, but even that was unsuccessful.
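For what it's worth, the missing feature in that dmesg line can be
pinned down with a quick arithmetic sketch (the hex values are copied
from the log above; mapping bit 58 to a named feature would need a
lookup in ceph's include/ceph_features.h, which I haven't done here):

```python
# Decode the dmesg "feature set mismatch" line:
#   my 107b84a842aca < server's 40107b84a842aca, missing 400000000000000
client_features = 0x107b84a842aca
server_features = 0x40107b84a842aca

# Features the server requires that this kernel client doesn't advertise.
missing = server_features & ~client_features
print(hex(missing))              # 0x400000000000000, matching the log

# A single set bit identifies the feature flag.
print(missing.bit_length() - 1)  # bit 58
```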
To make sure I wasn't missing anything, I tried the same thing on
Fedora 29, and the mount was successful. I've attached logs containing
the mount commands, dmesg output, and the keyring for the cluster. I
was using this
branch to build and run Ceph cluster -
https://github.com/rishabh-d-dave/ceph/tree/add-test-for-acls.
Thanks,
- Rishabh