I'm seeing some of our internal Red Hat builders going OOM and killing
ceph builds. This is happening across architectures.
Upstream, our braggi builders have 48 vCPUs and 256GB of RAM. That's not small.
What is the minimum memory and CPU requirement for building pacific?
Internally, to use one ppc64le example, we're running with 14GB RAM
and 16 CPUs, and the RPM spec file chooses -j5, hitting OOM. We tuned
mem_per_process from 2500 to 2700 a while back to alleviate this, but
we're still hitting OOM consistently with the pacific branch now.
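For context, the interplay between the two limits can be sketched like this (a rough sketch, not the actual ceph.spec logic; the mem_per_process semantics and the 14GB/16-CPU numbers are taken from the builder described above):

```shell
# Cap make jobs by both CPU count and memory per compile process.
mem_per_process=2700          # MB assumed per compiler job (the tuned value)
mem_total_mb=14336            # the 14GB ppc64le builder
cpus=16
mem_jobs=$(( mem_total_mb / mem_per_process ))
jobs=$(( mem_jobs < cpus ? mem_jobs : cpus ))
if [ "$jobs" -lt 1 ]; then jobs=1; fi
echo "-j$jobs"                # prints -j5 for this builder
```

With these numbers the memory limit (14336/2700 = 5) wins over the CPU limit (16), which matches the -j5 the spec file picks; OOM then suggests 2700MB per job is still an underestimate for the pacific branch.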
Hey folks, we've got some training sessions around using and developing
teuthology lined up on the ceph community calendar.
These will be every Thursday over the next 4 weeks, at 7am PT at:
The schedule is:
Feb 18: Intro - Greg Farnum
Basics of using teuthology and overview of how it works.
Feb 25: Analyzing Test Results - Neha Ojha
How to debug and figure out the root cause of failures - logs
and tools available, e.g. scrape.py and Sentry.
March 4: Developing Tests - Josh Durgin
Tour of common tasks and building blocks for testing
ceph. Where different kinds of tests live, and how to write
and run them.
March 11: Teuthology Internals: Scheduling - Josh Durgin
A deeper look at the architecture and code behind running a
test suite with teuthology.
Depending on how much we get through there may be more follow-up
sessions. There are several potential code walkthroughs as well - if
anyone would like to volunteer to present these, let me know!
We have been testing ceph-dokan, based on the guide here:
And watching <https://www.youtube.com/watch?v=BWZIwXLcNts&ab_channel=SUSE>
Initial tests on a Windows 10 VM show good write speed - around 600MB/s,
which is faster than our samba server.
What worries us is using the "root" ceph.client.admin.keyring on a
Windows system, as it gives access to the entire cephfs cluster - which
in our case is 5PB.
I'd really like this to work, as it would let user-administered Windows
systems that control microscopes save data directly to cephfs, so
that we can process the data on our HPC cluster.
I'd normally use cephx, and make a key that allows access to a directory
off the root.
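For reference, a directory-scoped key like the one shown below can typically be created with `ceph fs authorize` (the filesystem name `cephfs` here is an assumption; adjust it for your cluster):

```shell
# Grants read on /users/ and read-write on /users/x_lab only,
# instead of handing out the admin keyring.
ceph fs authorize cephfs client.x_lab /users/ r /users/x_lab rw
```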
[root@ceph-s1 users]# ceph auth get client.x_lab
exported keyring for client.x_lab
key = xXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX==
caps mds = "allow r path=/users/, allow rw path=/users/x_lab"
caps mon = "allow r"
caps osd = "allow class-read object_prefix rbd_children, allow rw
The real key works fine on Linux, but when we try this key with
ceph-dokan, and specify the ceph directory (x_lab) as a ceph path, there
is no option to specify the user - is this hard-coded as admin?
Have I just missed something? Or is this a missing feature?
Anyhow, ceph-dokan looks like it could be quite useful,
thank you Cloudbase :)
Dr Jake Grimmett
Head Of Scientific Computing
MRC Laboratory of Molecular Biology
Francis Crick Avenue,
Cambridge CB2 0QH, UK.
On 4/30/20 1:54 PM, Lucian Petrut wrote:
> We’ve just pushed the final part of the Windows PR series, allowing
> RBD images as well as CephFS to be mounted on Windows.
> There’s a comprehensive guide, describing the build, installation,
> configuration and usage steps.
> 2 out of 12 PRs have been merged already, we look forward to merging the
> others as well.
> Lucian Petrut
> Cloudbase Solutions
>  https://github.com/ceph/ceph/pull/34859
> https://github.com/petrutlucian94/ceph/blob/windows.12/README.windows.rst
> *From: *Lucian Petrut
> *Sent: *Monday, December 16, 2019 10:12 AM
> *To: *dev(a)ceph.io
> *Subject: *Windows port
> We're happy to announce that a couple of weeks ago, we've submitted a
> few Github pull requests adding initial Windows support. A big
> thank you to the people that have already reviewed the patches.
> To bring some context about the scope and current status of our work:
> we're mostly targeting the client side, allowing Windows hosts to
> consume rados, rbd and cephfs resources.
> We have Windows binaries capable of writing to rados pools. We're
> using mingw to build the ceph components, mostly because it requires the
> fewest changes to cross-compile ceph for Windows. However, we're soon
> going to switch to MSVC/Clang due to mingw limitations and long-standing
> bugs. Porting the unit tests is also
> something that we're currently working on.
> The next step will be implementing a virtual miniport driver so that RBD
> volumes can be exposed to Windows hosts and Hyper-V guests. We're hoping
> to leverage librbd as much as possible as part of a daemon that will
> communicate with the driver. We're also aiming at cephfs and considering
> using Dokan, which is FUSE compatible.
> Merging the open PRs would allow us to move forward, focusing on the
> drivers and avoiding rebase issues. Any help on that is greatly appreciated.
> Last but not least, I'd like to thank SUSE, who's sponsoring this effort!
> Lucian Petrut
> Cloudbase Solutions
>  https://github.com/ceph/ceph/pull/31981
>  https://github.com/ceph/ceph/pull/32027
>  https://github.com/ceph/rocksdb/pull/42
>  http://paste.openstack.org/raw/787534/
>  https://sourceforge.net/p/mingw-w64/bugs/816/
>  https://sourceforge.net/p/mingw-w64/bugs/527/
> Dev mailing list -- dev(a)ceph.io
> To unsubscribe send an email to dev-leave(a)ceph.io
The performance meeting will be starting in a little over an hour! This
week, Theofilos Mouratidis and Dan van der Ster will be presenting on
Teo's thesis research on using Merkle trees to optimize backfill
workloads. Sounds like it should be a very interesting presentation!
Hope to see you there!
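As a rough illustration of the idea (a toy sketch, not Teo's implementation or Ceph code): a Merkle tree over object hashes lets two replicas detect divergence by comparing a single root hash, then descend only into the differing subtree rather than scanning every object.

```python
# Toy Merkle-tree sketch: hash objects into a binary tree; equal roots
# imply equal contents, a differing root narrows the search one subtree
# per level.
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_levels(objects):
    """Return tree levels bottom-up; the last level holds the root hash."""
    level = [_h(o) for o in objects]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:              # duplicate last node on odd widths
            level = level + [level[-1]]
        level = [_h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

primary = merkle_levels([b"obj0", b"obj1", b"obj2", b"obj3"])
replica = merkle_levels([b"obj0", b"obj1-stale", b"obj2", b"obj3"])
print(primary[-1][0] != replica[-1][0])  # True: roots differ, so some object differs
```

Comparing roots is O(1) per round trip; walking down the mismatching branch finds the stale object in O(log n) comparisons instead of hashing all n objects.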
Greetings folks, next week is APAC-friendly time: 2100 ET on Mar 3rd
We've got 3 topics on the agenda so far:
preloaded EC profiles - kbader -
In general the idea is to make it easier to tune ceph out of the box
with profiles of common settings across the ceph stack.
build/test optimizations - joshd/nojha -
Brainstorming about ways to approach efficient use of the sepia lab
and improve testing and build times for developers. This will be a
gsoc/outreachy project this summer.
libcephsqlite - batrick - https://github.com/ceph/ceph/pull/39191
A single-client SQL interface for rados - the first application is as
a persistence interface for mgr modules.
See you there!