On Thu, Jan 23, 2020 at 2:36 PM Ilya Dryomov <idryomov@gmail.com> wrote:
On Wed, Jan 22, 2020 at 6:18 PM Hayashida, Mami <mami.hayashida@uky.edu> wrote:
> Thanks, Ilya.
>
> I just tried modifying the osd cap for client.testuser by getting rid of the
> "tag cephfs data=cephfs_test" part, and confirmed that this key does work
> (i.e., it lets the CephFS client read/write). It now reads:
> [client.testuser]
>         key = XXXYYYYZZZ
>         caps mds = "allow rw"
>         caps mon = "allow r"
>         caps osd = "allow rw"    # previously "allow rw tag cephfs data=cephfs_test"
> I tried removing either "tag cephfs" or "data=cephfs_test" (and leaving the
> other), but neither worked.
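
(For reference, a cap change like the one above is normally applied with
"ceph auth caps", which replaces all of the entity's caps in one shot:

$ ceph auth caps client.testuser mds 'allow rw' mon 'allow r' osd 'allow rw'

The entity name here is taken from the keyring shown above.)
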
> Now, here is my question: will not having the "allow rw tag cephfs
> data=<file system name>" cap (under osd caps) result in a security/privacy
> loophole in a production cluster? (I am still trying to assess whether
> having a cache tier behind CephFS is worth all the headaches...)

It's probably not worth it. Unless you have a specific tiered
workload in mind and your cache pool is large enough for it, I'd
recommend staying away from cache tiering.
"allow rw" for osd is only marginally more restrictive than
client.admin's "allow *", allowing the user to read/write every object
in the cluster. Scratch my reply about doing it by hand -- try the
following:
$ ceph osd pool application enable cephfs-data-cache cephfs
$ ceph osd pool application set cephfs-data-cache cephfs data cephfs_test
$ ceph fs authorize cephfs_test ... (as before)
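
(Spelled out in full -- with the client name from the keyring above and a
hypothetical root path of "/" -- that last command would look something like:

$ ceph fs authorize cephfs_test client.testuser / rw

Substitute whatever path and permissions you used originally.)
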
You will see the same "allow rw tag cephfs data=cephfs_test" cap in
"ceph auth list" output, but it should allow accessing cephfs-data-cache.
Dropping ceph-users@lists.ceph.com and resending to ceph-users@ceph.io.

Thanks,
Ilya