For developers submitting jobs using teuthology, we now have
recommendations on what priority level to use:
https://docs.ceph.com/docs/master/dev/developer_guide/#testing-priority
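As an illustration, scheduling a run at a chosen priority might look roughly
like this (the suite, branch, and machine type here are placeholders, and the
exact flag names should be double-checked against teuthology-suite --help):

  teuthology-suite --suite smoke --ceph master --machine-type smithi --priority 101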
--
Patrick Donnelly, Ph.D.
He / Him / His
Senior Software Engineer
Red Hat Sunnyvale, CA
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
Hi folks,
I'm seeing some of our internal Red Hat builders going OOM and killing
ceph builds. This is happening across architectures.
Upstream, our braggi builders have 48 vCPUs and 256 GB of RAM. That's not small.
What is the minimum memory and CPU requirement for building pacific?
Internally, to use one ppc64le example, we're running with 14 GB of RAM
and 16 CPUs, and the RPM spec file chooses -j5, hitting OOM. We tuned
mem_per_process from 2500 to 2700 a while back to alleviate this, but
we're still hitting OOM consistently with the pacific branch now.
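For a rough sketch of the arithmetic (assuming mem_per_process is expressed
in MB), the memory-limited job count works out like this:

  # estimate memory-limited parallel compile jobs
  mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
  mem_per_process=2700                          # MB, as tuned above
  echo $(( mem_kb / 1024 / mem_per_process ))   # 14 GB / 2700 MB ~= 5, i.e. -j5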
- Ken
Hey folks, we've got some training sessions around using and developing
teuthology lined up on the ceph community calendar [0].
These will be every Thursday over the next 4 weeks, at 7am PT at:
https://bluejeans.com/908675367
The schedule is:
Feb 18: Intro - Greg Farnum
Basics of using teuthology and overview of how it works.
Feb 25: Analyzing Test Results - Neha Ojha
How to debug and figure out the root cause of failures - logs
and tools available, e.g. scrape.py, sentry.
March 4: Developing Tests - Josh Durgin
Tour of common tasks and building blocks for testing
ceph. Where different kinds of tests live, and how to write
and run them.
March 11: Teuthology Internals: Scheduling - Josh Durgin
A deeper look at the architecture and code behind running a
test suite with teuthology.
Depending on how much we get through, there may be more follow-up
sessions. There are several potential code walkthroughs as well - if
anyone would like to volunteer to present these, let me know!
Josh
[0]
https://calendar.google.com/calendar/b/1?cid=OXRzOWM3bHQ3dTF2aWMyaWp2dnFxbG…
Hi everyone!
I'm excited to announce two talks we have on the schedule for February 2021:
Jason Dillaman will be giving part 2 of the librbd code walk-through.
The stream starts on February 23rd at 18:00 UTC / 19:00 CET / 1:00 PM
EST / 10:00 AM PST
https://tracker.ceph.com/projects/ceph/wiki/Code_Walkthroughs
Part 1: https://www.youtube.com/watch?v=L0x61HpREy4
--------------
What's New in the Pacific Release
Hear Sage Weil give a live update on the development of the Pacific Release.
The stream starts on February 25th at 17:00 UTC / 18:00 CET / 12 PM
EST / 9 AM PST.
https://ceph.io/ceph-tech-talks/
All live streams will be recorded and
--
Mike Perez
Hi,
We have been testing ceph-dokan, based on the guide here:
<https://documentation.suse.com/ses/7/single-html/ses-windows/index.html#win…>
And watching <https://www.youtube.com/watch?v=BWZIwXLcNts&ab_channel=SUSE>
Initial tests on a Windows 10 VM show good write speeds - around 600 MB/s,
which is faster than our Samba server.
What worries us is using the "root" ceph.client.admin.keyring on a
Windows system, as it gives access to the entire cephfs cluster - which
in our case is 5 PB.
I'd really like this to work, as it would let user-administered Windows
systems that control microscopes save data directly to cephfs, so
that we can process the data on our HPC cluster.
I'd normally use cephx, and make a key that allows access to a directory
off the root.
e.g.
[root@ceph-s1 users]# ceph auth get client.x_lab
exported keyring for client.x_lab
[client.x_lab]
key = xXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX==
caps mds = "allow r path=/users/, allow rw path=/users/x_lab"
caps mon = "allow r"
caps osd = "allow class-read object_prefix rbd_children, allow rw
pool=ec82pool"
The real key works fine on Linux, but when we try it with ceph-dokan
and specify the ceph directory (x_lab) as the ceph path, there is no
option to specify the user - is this hard-coded as admin?
Have I just missed something? Or is this a missing feature?
anyhow, ceph-dokan looks like it could be quite useful,
thank you Cloudbase :)
best regards,
Jake
--
Dr Jake Grimmett
Head Of Scientific Computing
MRC Laboratory of Molecular Biology
Francis Crick Avenue,
Cambridge CB2 0QH, UK.
On 4/30/20 1:54 PM, Lucian Petrut wrote:
> Hi,
>
> We’ve just pushed the final part of the Windows PR series[1], allowing
> RBD images as well as CephFS to be mounted on Windows.
>
> There’s a comprehensive guide[2], describing the build, installation,
> configuration and usage steps.
>
> 2 out of 12 PRs have been merged already; we look forward to merging the
> others as well.
>
> Lucian Petrut
>
> Cloudbase Solutions
>
> [1] https://github.com/ceph/ceph/pull/34859
>
> [2]
> https://github.com/petrutlucian94/ceph/blob/windows.12/README.windows.rst
>
> *From: *Lucian Petrut
> *Sent: *Monday, December 16, 2019 10:12 AM
> *To: *dev@ceph.io
> *Subject: *Windows port
>
> Hi,
>
> We're happy to announce that a couple of weeks ago we submitted a few
> GitHub pull requests[1][2][3] adding initial Windows support. A big
> thank you to the people that have already reviewed the patches.
>
> To bring some context about the scope and current status of our work:
> we're mostly targeting the client side, allowing Windows hosts to
> consume rados, rbd and cephfs resources.
>
> We have Windows binaries capable of writing to rados pools[4]. We're
> using mingw to build the ceph components, mostly because it requires the
> fewest changes to cross-compile ceph for Windows. However, we're soon going
> to switch to MSVC/Clang due to mingw limitations and long-standing
> bugs[5][6]. Porting the unit tests is also
> something that we're currently working on.
>
> The next step will be implementing a virtual miniport driver so that RBD
> volumes can be exposed to Windows hosts and Hyper-V guests. We're hoping
> to leverage librbd as much as possible as part of a daemon that will
> communicate with the driver. We're also aiming at cephfs and considering
> using Dokan, which is FUSE compatible.
>
> Merging the open PRs would allow us to move forward, focusing on the
> drivers and avoiding rebase issues. Any help on that is greatly appreciated.
>
> Last but not least, I'd like to thank Suse, who's sponsoring this effort!
>
> Lucian Petrut
>
> Cloudbase Solutions
>
> [1] https://github.com/ceph/ceph/pull/31981
>
> [2] https://github.com/ceph/ceph/pull/32027
>
> [3] https://github.com/ceph/rocksdb/pull/42
>
> [4] http://paste.openstack.org/raw/787534/
>
> [5] https://sourceforge.net/p/mingw-w64/bugs/816/
>
> [6] https://sourceforge.net/p/mingw-w64/bugs/527/
>
>
>
Hi Folks,
The performance meeting will be starting in a little over an hour! This
week, Theofilos Mouratidis and Dan van der Ster will be presenting on
Teo's thesis research on using Merkle trees to optimize backfill
workloads. Sounds like it should be a very interesting presentation!
Hope to see you there!
Etherpad:
https://pad.ceph.com/p/performance_weekly
Bluejeans:
https://bluejeans.com/908675367
Mark
There will be a DocuBetter meeting on Thursday, 25 Feb 2021 at 0200 UTC.
We will discuss the Google Season of Docs proposals, the cephadm docs
reorganization being undertaken by Sebastian, and the Teuthology Guide
cleanup.
DocuBetter Meeting -- APAC
24 Feb 2021
0200 UTC
https://bluejeans.com/908675367
https://pad.ceph.com/p/Ceph_Documentation