Hi,
Context: one of our users is mounting 350 ceph kernel-client PVCs per
30 GB VM, and they are noticing memory pressure on those hosts.
When planning for k8s hosts, what would be a reasonable limit on the
number of ceph kernel PVCs to mount per host? We observed that if one
kernel mounts the same cephfs several times (with different prefixes),
each mount creates a unique client session. But does the ceph kernel
module globally share a single copy of cluster metadata, e.g. osdmaps,
or is that all
duplicated per session? Can anyone estimate how much memory is
consumed by each mount (assuming it is a client of an O(1k) osd ceph
cluster)?
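To make the worst case concrete, here is a back-of-envelope sketch of
the duplicated-per-session scenario. The ~1 KB-per-OSD osdmap entry
size is purely an assumption for illustration, not a measured figure:

```python
# Rough estimate of osdmap memory if each client session kept its own copy.
# All constants below are assumptions, not measurements.
osds = 1000              # O(1k) OSD cluster, as in the question
per_osd_bytes = 1024     # hypothetical per-OSD osdmap footprint
mounts = 350             # PVCs mounted per VM in our user's setup

osdmap_bytes = osds * per_osd_bytes
total_bytes = osdmap_bytes * mounts

print(total_bytes / 2**20)  # total MiB if maps are duplicated per session
```

Even under these crude assumptions, duplication would cost hundreds of
MiB per 30 GB VM for osdmaps alone, which is why the shared-vs-per-session
question matters here.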
Also, k8s makes it trivial for a user to mount a single PVC from
hundreds or thousands of clients. Suppose we wanted to be able to
limit the number of clients per PVC -- do you think a new
`max_sessions=N` cephx cap would be the best approach for this?
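For concreteness, a sketch of what such a cap might look like, modeled
on the existing `ceph auth caps` path-restriction syntax. To be clear,
`max_sessions=N` does not exist today; this is only the shape I have in
mind:

```
# Hypothetical: cap a client key to at most 8 concurrent MDS sessions
ceph auth caps client.k8s-pvc-foo \
    mds 'allow rw path=/volumes/pvc-foo, max_sessions=8' \
    osd 'allow rw pool=cephfs_data' \
    mon 'allow r'
```

The MDS would then refuse to open a ninth session for that key, which
would bound the fan-out even when k8s schedules the PVC onto many nodes.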
Best Regards,
Dan