Hi Lucian,


Thanks for your reply :)


Our two main requirements are:


1) Security (it's great news you can fix this)


2) Stability (which is understandably harder)


Our testing so far has been on a Windows 10 Pro VM (8 cores, 8 GB RAM).


We mount one share from a real Windows 10 system as Z: (hardware RAID, battery-backed, 400 TB)

Then we use ceph-dokan to mount /cephfs on the VM as X:
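For reference, the two mounts are set up roughly as follows; the server and share names here are placeholders, and the ceph-dokan invocation mirrors the one in the crash log below:

C:\>net use Z: \\winserver\TestDatasets
C:\>ceph-dokan -l x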


We then use robocopy to copy data from the Windows server to the cephfs mount:


C:\>robocopy z:\TestDatasets "X:\jog\Kates Data" /np /mt:128 /log:c:/Users/unixadmin/desktop/robolog.txt /E
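(For context: /np suppresses per-file progress output, /mt:128 runs 128 copy threads, /E copies subdirectories including empty ones, and /log writes output to the given file.)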


After copying 5TB from the Windows server to /cephfs, the ceph-dokan mount crashes:


C:\WINDOWS\system32>ceph-dokan -l x -o
ceph_conf_read_file OK
ceph_mount OK
ceph_getcwd [/]
/home/abuild/rpmbuild/BUILD/ceph/src/include/interval_set.h: In function 'void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = short unsigned int; C = std::map]' thread 12 time 2021-02-11T16:56:04.874874GMT Standard Time
/home/abuild/rpmbuild/BUILD/ceph/src/include/interval_set.h: 527: FAILED ceph_assert(p->first <= start)
 ceph version IT-NOTFOUND (f762f7c3be560c11ea0dd51896c976c45137f5ed) pacific (dev)


Restarting robocopy with a lower thread count results in another crash.
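For reference, the retry was essentially the same command with a smaller /mt value, along these lines (the exact thread count shown here is illustrative):

C:\>robocopy z:\TestDatasets "X:\jog\Kates Data" /np /mt:16 /log:c:/Users/unixadmin/desktop/robolog.txt /E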


Finally, one other feature request: ceph-dokan reports its version as "16.0.0"; perhaps a "-V" or "--version" switch would be useful?


We are using https://github.com/dokan-dev/dokany/releases/download/v1.4.1.1000/DokanSetup.exe


best regards,


Jake


On 15/02/2021 15:09, Lucian Petrut wrote:

Hi,

 

Thanks for trying it out. Indeed, this option is currently missing from ceph-dokan, but it’s an easy thing to add. We’ll take care of it as soon as possible; hopefully it will be included in the Pacific release.

 

Please let us know if you have any other suggestions.

 

Regards,

Lucian Petrut

 

From: Jake Grimmett
Sent: Thursday, February 11, 2021 3:00 PM
To: Lucian Petrut; dev@ceph.io
Subject: Re: Windows port

 

Hi,

We have been testing ceph-dokan, based on the guide here:
<https://documentation.suse.com/ses/7/single-html/ses-windows/index.html#windows-cephfs>

And watching <https://www.youtube.com/watch?v=BWZIwXLcNts&ab_channel=SUSE>

Initial tests on a Windows 10 VM show good write speed - around 600MB/s,
which is faster than our samba server.

What worries us is using the "root" ceph.client.admin.keyring on a
Windows system, as it gives access to the entire cephfs cluster - which
in our case is 5PB.

I'd really like this to work, as it would let user-administered Windows
systems that control microscopes save data directly to cephfs, so that
we can process the data on our HPC cluster.

I'd normally use cephx, and make a key that allows access to a directory
off the root.

e.g.

[root@ceph-s1 users]# ceph auth get client.x_lab
exported keyring for client.x_lab
[client.x_lab]
        key = xXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX==
        caps mds = "allow r path=/users/, allow rw path=/users/x_lab"
        caps mon = "allow r"
        caps osd = "allow class-read object_prefix rbd_children, allow rw pool=ec82pool"
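
For reference, a key with these caps can be created along these lines (same paths and pool as above):

[root@ceph-s1 users]# ceph auth get-or-create client.x_lab \
    mon 'allow r' \
    mds 'allow r path=/users/, allow rw path=/users/x_lab' \
    osd 'allow class-read object_prefix rbd_children, allow rw pool=ec82pool'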

The real key works fine on Linux, but when we try this key with
ceph-dokan and specify the ceph directory (x_lab) as a ceph path, there
is no option to specify the user - is this hard-coded as admin?
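
On Linux the cephx user is selected at mount time, e.g. with the kernel client (monitor address and mount point here are placeholders):

mount -t ceph mon1:6789:/users/x_lab /mnt/x_lab -o name=x_lab,secretfile=/etc/ceph/x_lab.secret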

Have I just missed something? Or is this a missing feature?

Anyhow, ceph-dokan looks like it could be quite useful,
thank you Cloudbase :)

best regards,

Jake

--
Dr Jake Grimmett
Head Of Scientific Computing
MRC Laboratory of Molecular Biology
Francis Crick Avenue,
Cambridge CB2 0QH, UK.

On 4/30/20 1:54 PM, Lucian Petrut wrote:
> Hi,
>
> We’ve just pushed the final part of the Windows PR series[1], allowing
> RBD images as well as CephFS to be mounted on Windows.
>
> There’s a comprehensive guide[2], describing the build, installation,
> configuration and usage steps.
>
> 2 out of 12 PRs have been merged already; we look forward to merging the
> others as well.
>
> Lucian Petrut
>
> Cloudbase Solutions
>
> [1] https://github.com/ceph/ceph/pull/34859
> <https://github.com/ceph/ceph/pull/34859>
>
> [2]
> https://github.com/petrutlucian94/ceph/blob/windows.12/README.windows.rst <https://github.com/petrutlucian94/ceph/blob/windows.12/README.windows.rst>
>
> *From: *Lucian Petrut
> *Sent: *Monday, December 16, 2019 10:12 AM
> *To: *dev@ceph.io <mailto:dev@ceph.io>
> *Subject: *Windows port
>
> Hi,
>
> We're happy to announce that a couple of weeks ago we submitted a
> few GitHub pull requests[1][2][3] adding initial Windows support. A big
> thank you to the people who have already reviewed the patches.
>
> To bring some context about the scope and current status of our work:
> we're mostly targeting the client side, allowing Windows hosts to
> consume rados, rbd and cephfs resources.
>
> We have Windows binaries capable of writing to rados pools[4]. We're
> using mingw to build the ceph components, mostly because it requires
> the fewest changes to cross-compile ceph for Windows. However, we're
> soon going to switch to MSVC/Clang due to mingw limitations and
> long-standing bugs[5][6]. Porting the unit tests is also something
> we're currently working on.
>
> The next step will be implementing a virtual miniport driver so that RBD
> volumes can be exposed to Windows hosts and Hyper-V guests. We're hoping
> to leverage librbd as much as possible as part of a daemon that will
> communicate with the driver. We're also aiming at cephfs and considering
> using Dokan, which is FUSE compatible.
>
> Merging the open PRs would allow us to move forward, focusing on the
> drivers and avoiding rebase issues. Any help on that is greatly appreciated.
>
> Last but not least, I'd like to thank SUSE, which is sponsoring this effort!
>
> Lucian Petrut
>
> Cloudbase Solutions
>
> [1] https://github.com/ceph/ceph/pull/31981
>
> [2] https://github.com/ceph/ceph/pull/32027
>
> [3] https://github.com/ceph/rocksdb/pull/42
>
> [4] http://paste.openstack.org/raw/787534/
>
> [5] https://sourceforge.net/p/mingw-w64/bugs/816/
>
> [6] https://sourceforge.net/p/mingw-w64/bugs/527/
>
>
> _______________________________________________
> Dev mailing list -- dev@ceph.io
> To unsubscribe send an email to dev-leave@ceph.io
>


 

Note: I am working from home until further notice.
For help, contact unixadmin@mrc-lmb.cam.ac.uk
-- 
Dr Jake Grimmett
Head Of Scientific Computing
MRC Laboratory of Molecular Biology
Francis Crick Avenue,
Cambridge CB2 0QH, UK.
Phone 01223 267019
Mobile 0776 9886539