On Tuesday, 22.09.2020 at 11:25 +0200, Stefan Kooman wrote:
> On 2020-09-21 21:12, Wout van Heeswijk wrote:
> > Hi Rene,
> >
> > Yes, CephFS is a good filesystem for concurrent writing. When using
> > CephFS with Ganesha you can even scale out.
> >
> > It will perform better, but why don't you mount CephFS inside the
> > VM?
Mounting CephFS directly in the VMs is what I want to do.
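For reference, a direct kernel mount from inside a VM is a one-liner; a
sketch, where the monitor addresses, client name and secret path are
placeholders (and ceph-common must be installed in the VM):

    # mount CephFS directly via the kernel client
    sudo mount -t ceph 192.0.2.1:6789,192.0.2.2:6789:/ /mnt/cephfs \
        -o name=webvm,secretfile=/etc/ceph/webvm.secret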
> ^^ This. But it depends on the VMs you are going to use as clients. Do
> you trust those clients enough that they are allowed to be part of
> your cluster?
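One way to limit what each client may do is a per-VM cephx key that is
restricted to a single path; a sketch, where the filesystem name
"cephfs", client name "webvm" and path "/shared" are assumptions for
illustration:

    # create a key that can only read/write /shared on this filesystem
    ceph fs authorize cephfs client.webvm /shared rw
    # print the keyring to distribute to that VM
    ceph auth get client.webvm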
All VMs will be based on Debian 10.5 and be under our control. The main
VMs will run Apache, PHP 7.3/7.4 and OpenSSH (SFTP). We will also run
MariaDB VMs for a Galera cluster. Additional VMs will provide HAProxy
for web server, SFTP and Galera load balancing, plus a few other
services (Postfix, Certbot, PowerDNS, ClamAV, etc.). My plan was to
mount CephFS as shared file storage in all VMs, keeping vhost
configurations, PHP, HTML, JavaScript, documents/media files and any
other shared files in sync across all hosts/VMs.
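To have every VM mount that shared tree at boot, an /etc/fstab line
along these lines should work (addresses, path and client name are
again placeholders):

    192.0.2.1:6789,192.0.2.2:6789:/shared  /mnt/shared  ceph  name=webvm,secretfile=/etc/ceph/webvm.secret,noatime,_netdev  0  0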
The data-center operator suggested using an SSD hardware RAID 1 with
EXT4/LVM for the web server and MariaDB VMs, and storing the other VMs
on Ceph storage for live-migration capability. The SSD-based Ceph
storage with CephFS and NFS would then be used to share files between
all VMs.
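If the NFS route is chosen, nfs-ganesha can export a CephFS path via
FSAL_CEPH; a minimal sketch of such an export, where the path, pseudo
path and user id are assumptions (the "ganesha" cephx client must exist
on the cluster):

    # /etc/ganesha/ganesha.conf (sketch) -- export a CephFS path over NFS
    EXPORT {
        Export_Id = 1;
        Path = "/shared";           # CephFS directory to export
        Pseudo = "/shared";         # NFSv4 pseudo path the clients mount
        Access_Type = RW;
        FSAL {
            Name = CEPH;            # use the CephFS FSAL
            User_Id = "ganesha";    # cephx client the daemon authenticates as
        }
    }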
We will start with 3 EPYC servers with a 10 Gbit/s Ethernet mesh but
want to be able to scale up as needed.
Basically my question is whether there are any benefits to NFS on Ceph
compared to direct CephFS mounts for sharing files between VMs.
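The practical difference shows up on the client side: with NFS the VMs
need nothing Ceph-specific (but traffic takes an extra hop through the
Ganesha server), while a direct mount makes every VM a full Ceph client
talking to MON/MDS/OSDs itself. Roughly, with placeholder names as
before:

    # NFS on Ceph: plain NFS client; "ganesha-vip" is a placeholder host
    sudo mount -t nfs ganesha-vip:/shared /mnt/shared
    # direct CephFS: needs ceph-common and a cephx key on each VM
    sudo mount -t ceph 192.0.2.1:6789:/shared /mnt/shared \
        -o name=webvm,secretfile=/etc/ceph/webvm.secret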
> Clients are really part of the cluster, at least that is how I see
> it. If possible, you want to use modern (5.7, 5.8) Linux kernels for
> CephFS (rm operations are slower on 4.15/5.3/5.4 for files created
> with a 5.3/5.4 kernel). We sometimes have issues with older kernel
> clients (Ubuntu Xenial, 4.15 kernel) and MDS "client failed to
> rdlock" messages, but we don't have 100% proof yet that it is because
> of this kernel version. They generally fix themselves though, so not
> a big issue.
> Gr. Stefan
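For anyone debugging similar symptoms, the client kernels and MDS
sessions are quick to check; a rough sketch (the mds id "0" is a
placeholder):

    uname -r                    # kernel version on the client VM
    ceph tell mds.0 client ls   # connected sessions, incl. client kernel info
    ceph health detail          # slow-request / client warnings surface here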