For developers submitting jobs using teuthology, we now have
recommendations on what priority level to use:
https://docs.ceph.com/docs/master/dev/developer_guide/#testing-priority
--
Patrick Donnelly, Ph.D.
He / Him / His
Senior Software Engineer
Red Hat Sunnyvale, CA
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
Hi,
We're happy to announce that a couple of weeks ago we submitted a few GitHub pull requests[1][2][3] adding initial Windows support. A big thank you to the people who have already reviewed the patches.
To give some context on the scope and current status of our work: we're mostly targeting the client side, allowing Windows hosts to consume rados, rbd and cephfs resources.
We have Windows binaries capable of writing to rados pools[4]. We're using MinGW to build the Ceph components, mostly because it requires the fewest changes to cross-compile Ceph for Windows. However, we're soon going to switch to MSVC/Clang due to MinGW limitations and long-standing bugs[5][6]. Porting the unit tests is also something we're currently working on.
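For anyone curious what "writing to rados pools" means from the client side, here is a minimal sketch using the Python rados bindings (the conf path, pool and object names are placeholders; this is only an illustration of the kind of client operations we're targeting, not our actual test code):

    import rados

    # Connect to the cluster using a local ceph.conf (path is an assumption).
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        # Open an I/O context on an existing pool and write a small object.
        ioctx = cluster.open_ioctx('test-pool')
        try:
            ioctx.write_full('hello-object', b'hello from a rados client')
            print(ioctx.read('hello-object'))
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()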
The next step will be implementing a virtual miniport driver so that RBD volumes can be exposed to Windows hosts and Hyper-V guests. We're hoping to leverage librbd as much as possible as part of a daemon that will communicate with the driver. We're also aiming at cephfs and considering using Dokan, which is FUSE compatible.
Merging the open PRs would allow us to move forward, focusing on the drivers and avoiding rebase issues. Any help on that is greatly appreciated.
Last but not least, I'd like to thank SUSE, which is sponsoring this effort!
Lucian Petrut
Cloudbase Solutions
[1] https://github.com/ceph/ceph/pull/31981
[2] https://github.com/ceph/ceph/pull/32027
[3] https://github.com/ceph/rocksdb/pull/42
[4] http://paste.openstack.org/raw/787534/
[5] https://sourceforge.net/p/mingw-w64/bugs/816/
[6] https://sourceforge.net/p/mingw-w64/bugs/527/
I opened a pull request https://github.com/ceph/ceph/pull/33539 with a
design doc for dynamic resharding in multisite. Please review and give
feedback, either here or in PR comments!
Hi all. The OPA integration in Ceph has no support for bucket policies.
When a user sets a bucket policy on their bucket, the OPA server is never told who was granted access to that bucket. So if a request later comes from a user who was given access via that bucket policy (PUT, GET, ...), OPA will reject it, because it has no matching data in its database.
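To make the scenario concrete, here is a hedged sketch (the endpoint, credentials, bucket and user names are all made up) of the kind of bucket policy a user might set through the S3 API against RGW, using boto3; it's this policy document that OPA currently never learns about:

    import json
    import boto3

    # Hypothetical RGW endpoint and credentials of the bucket owner.
    s3 = boto3.client('s3',
                      endpoint_url='http://rgw.example.com:8000',
                      aws_access_key_id='OWNER_KEY',
                      aws_secret_access_key='OWNER_SECRET')

    # Grant another user ("otheruser") read access to the owner's bucket.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": ["arn:aws:iam:::user/otheruser"]},
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::mybucket/*"],
        }],
    }
    s3.put_bucket_policy(Bucket='mybucket', Policy=json.dumps(policy))

When "otheruser" later issues a GET against "mybucket", OPA has no record of this grant and rejects the request.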
I have created a pull request for this problem: when a user creates a bucket policy for their bucket, the policy data is sent to the OPA server so that its database can be updated.
I think the main idea of having OPA is to keep all authorization in OPA, with Ceph not authorizing any request by itself.
Here is the pull request; I would be thankful to hear your comments.
https://github.com/ceph/ceph/pull/32294
Thanks.
Hello all,
I don't know how many of you folks are aware, but early last year,
Datto (full disclosure, my current employer, though I'm sending this
email pretty much on my own) released a tool called "zfs2ceph" as an
open source project[1]. The project was the result of a week-long hackathon (SUSE folks may be familiar with the concept from their own "HackWeek" program[2]) that Datto held internally in December 2018. I was a member of that team, helping with research, setting up infrastructure, and making demos for it.
Anyway, I'm bringing it up because I'd had conversations with a few folks individually who suggested I raise it on the mailing list and talk about some of the motivations and what I'd like to see from Ceph in the future.
The main motivation here was to provide a seamless mechanism to
transfer ZFS based datasets with the full chain of historical
snapshots onto Ceph storage with as much fidelity as possible to allow
a storage migration without requiring 2x-4x system resources. Datto is
in the disaster recovery business, so working backups with full
history are extremely valuable to Datto, its partners, and their
customers. That's why the traditional path of just syncing the current
state and letting the old stuff die off is not workable. At the scale
of literally thousands of servers, each with hundreds of terabytes of ZFS storage (hundreds of petabytes of data in aggregate), there's no feasible way to consider
alternative storage options without having a way to transfer datasets
from ZFS to Ceph so that we can cut over servers to being Ceph nodes
with minimal downtime and near zero new server purchasing requirements
(there's obviously a little bit of extra hardware needed to "seed" a
Ceph cluster, but that's fine).
The current zfs2ceph implementation handles zvol sends and transforms
them into rbd v1 import streams. I don't recall exactly why we didn't use v2, but I think there were some gaps that made it unusable for our case back then (we were using Ceph Luminous). I'm unsure whether this has improved since, though it wouldn't surprise me if it has. However, zvols aren't enough for us. Most of our ZFS datasets are in the ZFS filesystem form, not the zvol block device form. Unfortunately, there is no import equivalent for CephFS, which blocked an implementation of this capability[3]. I had filed a request about it on the issue tracker, but it was rejected on the basis that something was already being worked on[4]. However, I haven't seen something exactly like what I need land in CephFS yet.
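To illustrate what "as much fidelity as possible" means on the RBD side, here is a rough, hypothetical sketch using the Python rbd bindings (image name, pool, sizes and data are invented, and this is not what zfs2ceph actually emits): each ZFS snapshot in the chain would become an RBD snapshot taken after replaying that increment's data, so the history survives the migration instead of being flattened into a single current state.

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')

    # Create the destination image (size is a placeholder).
    rbd.RBD().create(ioctx, 'migrated-vol', 10 * 1024 ** 3)

    with rbd.Image(ioctx, 'migrated-vol') as image:
        # Replay the blocks from the oldest ZFS snapshot, then snapshot the image...
        image.write(b'\0' * 4096, 0)          # stand-in for real block data
        image.create_snap('zfs-2018-12-01')
        # ...then apply the next increment and snapshot again, and so on.
        image.write(b'\0' * 4096, 4096)
        image.create_snap('zfs-2018-12-02')

    ioctx.close()
    cluster.shutdown()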
The zfs2ceph code is pretty simple, and I think it would be easy enough for it to be incorporated into Ceph itself. However, there's a greater
question here. Is there interest from the Ceph developer community in
developing and supporting strategies to migrate from legacy data
stores to Ceph with as much fidelity as reasonably possible?
Personally, I hope so. My hope is that this post generates some
interesting conversation about how to make this a better supported
capability within Ceph for block and filesystem data. :)
Best regards,
Neal
[1]: https://github.com/datto/zfs2ceph
[2]: https://hackweek.suse.com/
[3]: https://github.com/datto/zfs2ceph/issues/1
[4]: https://tracker.ceph.com/issues/40390
--
真実はいつも一つ!/ Always, there's only one truth!
Hi,
I'm the first to acknowledge that I don't know enough Python, but I can usually get by. However, during the tests of my Ceph port, one of the tests complains:
==============
orchestrator/_interface.py:701: ImportError
------------------------------ Captured log call -------------------------------
ERROR orchestrator._interface:_interface.py:391 _Promise failed
Traceback (most recent call last):
File "/home/jenkins/workspace/ceph-master/src/pybind/mgr/cephadm/module.py", line 334, in do_work
res = self._on_complete_(*args, **kwargs)
File "/home/jenkins/workspace/ceph-master/src/pybind/mgr/cephadm/module.py", line 398, in call_self
return f(self, *inner_args)
File "/home/jenkins/workspace/ceph-master/src/pybind/mgr/cephadm/module.py", line 2352, in _create_grafana
return self._create_daemon('grafana', daemon_id, host)
File "/home/jenkins/workspace/ceph-master/src/pybind/mgr/cephadm/module.py", line 1874, in _create_daemon
j = self._generate_grafana_config()
File "/home/jenkins/workspace/ceph-master/src/pybind/mgr/cephadm/module.py", line 2288, in _generate_grafana_config
cert, pkey = create_self_signed_cert('Ceph', 'cephadm')
File "/home/jenkins/workspace/ceph-master/src/pybind/mgr/mgr_util.py", line 134, in create_self_signed_cert
from OpenSSL import crypto
File "/home/jenkins/workspace/ceph-master/src/pybind/mgr/.tox/py3/lib/python3.7/site-packages/OpenSSL/__init__.py", line 8, in <module>
from OpenSSL import crypto, SSL
File "/home/jenkins/workspace/ceph-master/src/pybind/mgr/.tox/py3/lib/python3.7/site-packages/OpenSSL/crypto.py", line 15, in <module>
from OpenSSL._util import (
File "/home/jenkins/workspace/ceph-master/src/pybind/mgr/.tox/py3/lib/python3.7/site-packages/OpenSSL/_util.py", line 6, in <module>
from cryptography.hazmat.bindings.openssl.binding import Binding
File "/home/jenkins/workspace/ceph-master/src/pybind/mgr/.tox/py3/lib/python3.7/site-packages/cryptography/hazmat/bindings/openssl/binding.py", line 15, in <module>
from cryptography.hazmat.bindings._openssl import ffi, lib
ImportError: /home/jenkins/workspace/ceph-master/src/pybind/mgr/.tox/py3/lib/python3.7/site-packages/cryptography/hazmat/bindings/_openssl.abi3.so: Undefined symbol "SSLv3_client_method"
==============
This is due to the fact that on FreeBSD OpenSSL has its SSLv3 code disabled.
This is fixable on an individual basis by recompiling the OpenSSL port with SSLv3 enabled, but for a generic port that is not really an option: the user then has to jump through hoops to build their own OpenSSL, and even then they need to keep up with security issues. One should not want this.
The problem stems from virtualenv/tox fetching packages from public sources instead of using the ports system. This can be overridden with --system-site-packages.
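For what it's worth, tox has a knob for this: setting sitepackages = true in the relevant testenv makes tox create its virtualenvs with --system-site-packages, so packages installed from ports (py37-cryptography, py37-openssl and the like; names approximate) would be picked up instead of wheels from PyPI. A rough sketch of what that could look like in the mgr tox.ini (illustrative only, not the actual file contents):

    [testenv]
    # Let the virtualenv see system-wide packages (e.g. from FreeBSD ports)
    # instead of pulling cryptography/pyOpenSSL wheels from PyPI.
    sitepackages = true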
I know that I'll need to install all the required packages before running virtualenv/tox, but that is "just" a matter of collecting the list. Is this a feasible solution, though?
--WjW
Hi all,
I need to update the Ceph kernel client on the teuthology VM. This will
involve instructing all teuthology workers to die after they finish
their running job, updating packages on the teuthology VM, then
rebooting it.
I plan on instructing workers to die this afternoon around 8PM Eastern.
Then in the morning after the lab has quieted down, I will perform the
maintenance and bring workers back up. Hopefully no later than 10AM
Eastern.
--
David Galloway
Systems Administrator, RDU
Ceph Engineering
IRC: dgalloway