Hi,
The quotas work fine on my env:
PS x:\quota> echo exceeding_quota > test
out-file : Not enough quota is available to process this command.
ceph-dokan: WinCephWriteFile /quota/test: ceph_write failed. Error: -311. Offset: 0 Buffer length: 36
I’m wondering if this has something to do with the ceph cluster version. Are you using
Ceph Octopus?
About the --user/--id option, I’ve just pushed a larger change that covers this, also
improving the CLI syntax and logging. Please try out the latest MSI.
If you need to change the ceph-dokan log level, please note that we’re now using
the “client” Ceph log subsystem.
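For reference, subsystem verbosity is controlled through the standard Ceph debug options; a minimal ceph.conf sketch (the level shown is illustrative, adjust to taste):

```
[client]
# Verbosity for the "client" subsystem: log level / in-memory level.
# 20/20 is very verbose; lower it once debugging is done.
debug client = 20/20
```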
Thanks,
Lucian
From: Lucian Petrut <lpetrut@cloudbasesolutions.com>
Sent: Thursday, February 25, 2021 2:27 PM
To: Jake Grimmett <jog@mrc-lmb.cam.ac.uk>; dev@ceph.io
Subject: RE: Windows port
Hi,
Makes sense. I’ll look into it ASAP, thanks for bringing it up!
Lucian
From: Jake Grimmett <jog@mrc-lmb.cam.ac.uk>
Sent: Thursday, February 25, 2021 2:21 PM
To: Lucian Petrut <lpetrut@cloudbasesolutions.com>; dev@ceph.io
Subject: Re: Windows port
Hi Lucian,
Exceeding the cephfs quota seems to "upset" Windows.
e.g. on the cephfs system:
# setfattr -n ceph.quota.max_bytes -v 1000 /ctestfs/quota
And then on the Windows system:
PS Z:\Jake_tests> copy .\4GB_file X:\quota\
(Shell hangs)
Back on the cephfs system:
# setfattr -n ceph.quota.max_bytes -v 1000000000 /ctestfs/quota
The shell (eventually) returns on Windows, after much CTRL+C.
I can then repeat the "copy .\4GB_file X:\quota\" successfully.
It would be ideal if Windows gave an error, such as "no space" or "out
of quota" rather than hanging...
thanks again for your work :)
Jake
On 2/25/21 10:51 AM, Lucian Petrut wrote:
Hi,
That’s great. I’ll push the user option setting ASAP.
About the quotas, you mean Windows quotas or Ceph quotas?
Regards,
Lucian
From: Jake Grimmett <jog@mrc-lmb.cam.ac.uk>
Sent: Thursday, February 25, 2021 11:37 AM
To: Lucian Petrut <lpetrut@cloudbasesolutions.com>; dev@ceph.io
Subject: Re: Windows port
Hi Lucian,
So I've now copied 32TB of data from a Windows server to our test ceph
cluster, without any crashes.
Average speed over one 16TB copy was 468MB/s, i.e. just under 11 hours
to transfer 16TB, with a data set that has 1,144,495 files.
The transfer was from Windows 10 server (hardware RAID) > Windows 10 VM
(with the cephfs driver installed) > cephfs test cluster.
If you can get the user option working, I'll put the driver on a
physical Windows 10 system, and see how fast a direct transfer is.
One other thing that would be useful is over-quota error handling.
If I try writing and exceed the quota, the mount just hangs.
If I increase the quota, the mount recovers, but it would be nice if we
had a quota error in Windows.
Test cluster consists of 6 Dell C2100 nodes, bought in 2012.
Each has 10 x 900GB 10k HDD, we use EC 4+2 for the cephfs pool, metadata
is on 3 x NVMe. Dual Xeon X5650, 12 threads, 96GB RAM, 2x10GB bond.
Ceph 15.2.8, Scientific Linux 7.9.
best regards,
Jake
On 2/23/21 11:49 AM, Lucian Petrut wrote:
Hi,
That’s great, thanks for the confirmation.
I hope I’ll get to push a larger change by the end of the week, covering
the user option as well as a few other fixes.
Regards,
Lucian
From: Jake Grimmett <jog@mrc-lmb.cam.ac.uk>
Sent: Tuesday, February 23, 2021 11:48 AM
To: dev@ceph.io; Lucian Petrut <lpetrut@cloudbasesolutions.com>
Subject: Re: Windows port
Hi Lucian,
Good news - your fix works; I was able to write 7TB from a Windows 10 VM
to cephfs last night.
I'll carry on testing the Windows client with large workloads using
robocopy.
thanks again for working on this, a cephx "user" option should provide
the core requirements we need for a usable system :)
best regards,
Jake
Note: I am working from home until further notice.
For help, contact unixadmin@mrc-lmb.cam.ac.uk
--
Dr Jake Grimmett
Head Of Scientific Computing
MRC Laboratory of Molecular Biology
Francis Crick Avenue,
Cambridge CB2 0QH, UK.
Phone 01223 267019
Mobile 0776 9886539