$ rm -rf venv/ && (virtualenv -p python2 ./venv && source
venv/bin/activate && pip install --upgrade pip && pip install -r
requirements2.txt && python2 setup.py develop)
Collecting typing; python_version < "3.4.0"
Using cached typing-184.108.40.206-py2-none-any.whl (26 kB)
Requirement already satisfied: setuptools in
./venv/lib/python2.7/site-packages (from pytest==3.7.1->-r
requirements2.txt (line 84)) (45.0.0)
ERROR: Package 'setuptools' requires a different Python: 2.7.12 not in '>=3.5'
Anybody know what changed or how to fix this? My usual bash one-liner
to update my virtualenv failed with this packaging issue.
Patrick Donnelly, Ph.D.
He / Him / His
Senior Software Engineer
Red Hat Sunnyvale, CA
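For what it's worth, the error points at setuptools 45.0.0: the setuptools 45.x series dropped Python 2 support (it requires Python >= 3.5), and fresh virtualenvs were picking it up. A sketch of a workaround, assuming the goal is just to keep a Python 2 venv working, is to pin pip and setuptools to their last Python 2-compatible lines before installing the requirements:

$ rm -rf venv/ && virtualenv -p python2 ./venv && source venv/bin/activate
# pip 20.x and setuptools 44.x are the last lines that still support Python 2
$ pip install 'pip<21' 'setuptools<45'
$ pip install -r requirements2.txt && python2 setup.py develop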
The latest docker.io/ceph/daemon-base:latest-master-devel appears to have
been built last night (according to docker), but it contains
"ceph_version": "ceph version 15.1.0-1962-gbf6df4b
(bf6df4b9422153f5672da72e06cdc498b05c2146) octopus (rc)",
which is 5 days old at this point:
Merge: c39f46fc6c7 7e30c261c0c
Author: Lenz Grimmer <lgrimmer(a)suse.com>
Date: Wed Mar 11 14:53:35 2020 +0100
Actual master contains
commit 6421605d5f16396a285a9515bd210870f27f233d (refs/remotes/gh/master,
Merge: fae60f65f32 a19a81edcd0
Author: Kefu Chai <kchai(a)redhat.com>
Date: Mon Mar 16 11:12:37 2020 +0800
Any idea what's going on?
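In the meantime, a sketch of how to cross-check what the published image actually contains (assuming ceph is on the image's PATH):

$ docker pull docker.io/ceph/daemon-base:latest-master-devel
# when the registry says the image was built
$ docker inspect --format '{{.Created}}' docker.io/ceph/daemon-base:latest-master-devel
# which ceph commit is actually baked in
$ docker run --rm docker.io/ceph/daemon-base:latest-master-devel ceph --version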
First, we have an empty agenda for the CDM tonight, so we're canceling.
Everyone is pretty focused on squashing bugs for Octopus.
Once O is out the door, though, we have a whole agenda of topics to
discuss that was planned for the on-site developer summit at Cephalocon
Seoul. We'd like to cover those same topics (and/or anything else) online
as soon as Octopus is released so we can plan for the next release.
We're looking at either the week of Mar 30 - Apr 2 or Apr 6-10.
Generally I think sooner is better, but the first week is KubeCon
Amsterdam (not canceled...yet?), and the week before is SUSEcon.
The original agenda is here:
I've translated those topics to this pad:
Please add any additional items, and/or indicate which sessions you are
interested in so we can map sessions to EMEA- or APAC-compatible time slots.
There is a general documentation meeting called the "DocuBetter Meeting",
and it is held every two weeks. The next DocuBetter Meeting will be on
March 25, 2020 at 1800 PST, and will run for thirty minutes. Everyone with
a documentation-related request or complaint is invited. The meeting will
be held here: https://bluejeans.com/908675367
Send documentation-related requests and complaints to me by replying to
this email and CCing me at zac.dover(a)gmail.com.
This message will be sent to dev(a)ceph.io every Monday morning, North American time.
The next DocuBetter meeting is scheduled for:
25 Mar 2020 1800 PST
26 Mar 2020 0100 UTC
26 Mar 2020 1100 AEST
I noticed there are no newer Ceph builds for Ubuntu anymore. Is someone working on those? Is this a temporary gap, or will there be no more Ceph builds available for Ubuntu?
This is a concern for my team (OpenStack Manila) and also other OpenStack projects that use Ceph, since our CI uses Ubuntu.
Right now we are using Ubuntu Trusty (some gates) and Bionic.
Any feedback will be much appreciated.
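In case it helps anyone else check, the upstream apt repositories on download.ceph.com can be probed directly; a sketch, assuming the usual debian-<release>/dists/<codename> layout (the exact paths are an assumption):

# does a Bionic line exist for the Octopus repo?
$ curl -sI https://download.ceph.com/debian-octopus/dists/bionic/Release | head -n 1
# if it is there, see what the repo advertises
$ curl -s https://download.ceph.com/debian-octopus/dists/bionic/Release | grep -E '^(Codename|Components|Architectures)'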
Hi all. In the OPA integration for Ceph there is no integration for bucket
policy. When a user sets a bucket policy on his/her bucket, the OPA server is
not told who gets access to that bucket, so if a request later comes from a
user (one who was granted access to that bucket via the bucket policy) to
access that bucket (PUT, GET, ...), OPA will reject it because it has no data
about the policy.
I have created a pull request for this problem, so that when a user creates a
bucket policy for his/her bucket, the policy data is sent to the OPA server
and its database is updated.
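For context on the mechanism: OPA exposes a REST data API that external systems can push state into, so the change boils down to an upcall like the sketch below whenever a policy is set (the opa-host address and the ceph/bucket_policies data path are illustrative assumptions, not necessarily what the PR uses):

# push the bucket's policy document into OPA's data store
$ curl -X PUT http://opa-host:8181/v1/data/ceph/bucket_policies/mybucket \
    -H 'Content-Type: application/json' \
    -d '{"Statement":[{"Effect":"Allow","Principal":{"AWS":["arn:aws:iam:::user/alice"]},"Action":["s3:GetObject"],"Resource":["arn:aws:s3:::mybucket/*"]}]}'

Rego policies loaded into OPA can then consult data.ceph.bucket_policies alongside the per-request input when deciding whether to allow access.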
I think the main idea of having OPA is to have all authorization in OPA, so
that Ceph does not authorize any request by itself.
Here is the pull request, and I would be thankful to hear your feedback:
On Thu, Mar 12, 2020 at 5:45 PM Rui Chang <Rui.Chang(a)arm.com> wrote:
> Is the following link all up to date for crimson project?
At least I am trying to keep it up-to-date.
> From: Rui Chang
> Sent: Thursday, March 12, 2020 17:32
> To: 'Kefu Chai' <kchai(a)redhat.com>; 'kefu chai' <tchaikov(a)gmail.com>
> Subject: ceph features
> Hi, Kefu
> Is there any place to check current features that are under development for ceph?
We have been busy fixing bugs ahead of rolling out Octopus recently.
Perhaps you could check the notes from the latest CDM?
More details (and full backtrace) in https://tracker.ceph.com/issues/44570
We are able to reproduce this 100% of the time when trying to upload a large file; radosgw just segfaults about 50 MB in. Here is the backtrace from the main thread; backtraces for all threads are in the tracker issue.
There are a few different backtraces that we have seen...
> Thread 1 (Thread 0x64a6e4bba700 (LWP 14952)):
> #0 raise (sig=sig@entry=11) at ../sysdeps/unix/sysv/linux/raise.c:50
> #1 0x000013f502628651 in reraise_fatal (signum=11) at /var/tmp/portage/sys-cluster/ceph-14.2.8/work/ceph-14.2.8/src/global/signal_handler.cc:326
> #2 handle_fatal_signal (signum=11) at /var/tmp/portage/sys-cluster/ceph-14.2.8/work/ceph-14.2.8/src/global/signal_handler.cc:326
> #3 <signal handler called>
> #4 0x000064a6ef114963 in ceph::buffer::v14_2_0::list::buffers_t::clear_and_dispose (this=0x13f5082457b8) at /var/tmp/portage/sys-cluster/ceph-14.2.8/work/ceph-14.2.8/src/include/buffer.h:648
> #5 ceph::buffer::v14_2_0::list::buffers_t::clone_from (other=..., this=0x13f5082457b8) at /var/tmp/portage/sys-cluster/ceph-14.2.8/work/ceph-14.2.8/src/include/buffer.h:639
> #6 ceph::buffer::v14_2_0::list::operator= (other=..., this=0x13f5082457b8) at /var/tmp/portage/sys-cluster/ceph-14.2.8/work/ceph-14.2.8/src/include/buffer.h:1003
> #7 ceph::buffer::v14_2_0::list::operator= (other=..., this=0x13f5082457b8) at /var/tmp/portage/sys-cluster/ceph-14.2.8/work/ceph-14.2.8/src/include/buffer.h:1000
> #8 Objecter::handle_osd_op_reply (this=0x13f507c75700, m=0x13f50820eb00) at /var/tmp/portage/sys-cluster/ceph-14.2.8/work/ceph-14.2.8/src/osdc/Objecter.cc:3502
> #9 0x000064a6ef115cab in Objecter::ms_dispatch (this=0x13f507c75700, m=0x13f50820eb00) at /var/tmp/portage/sys-cluster/ceph-14.2.8/work/ceph-14.2.8/src/osdc/Objecter.cc:966
> #10 0x000064a6ef1347d7 in non-virtual thunk to Objecter::ms_fast_dispatch(Message*) () at /var/tmp/portage/sys-cluster/ceph-14.2.8/work/ceph-14.2.8/src/osdc/Objecter.h:2110
> #11 0x000064a6ef0e5440 in Dispatcher::ms_fast_dispatch2 (this=0x13f507c75708, m=...) at /var/tmp/portage/sys-cluster/ceph-14.2.8/work/ceph-14.2.8/src/common/RefCountedObj.h:41
> #12 0x000064a6e67d530f in Messenger::ms_fast_dispatch (m=..., this=<optimized out>) at /var/tmp/portage/sys-cluster/ceph-14.2.8/work/ceph-14.2.8_build/boost/include/boost/container/new_allocator.hpp:165
> #13 DispatchQueue::fast_dispatch (this=0x13f506fcf558, m=...) at /var/tmp/portage/sys-cluster/ceph-14.2.8/work/ceph-14.2.8/src/msg/DispatchQueue.cc:72
> #14 0x000064a6e68fa75e in DispatchQueue::fast_dispatch (m=0x13f50820eb00, this=<optimized out>) at /var/tmp/portage/sys-cluster/ceph-14.2.8/work/ceph-14.2.8_build/boost/include/boost/smart_ptr/intrusive_ptr.hpp:67
> #15 ProtocolV2::handle_message (this=<optimized out>) at /var/tmp/portage/sys-cluster/ceph-14.2.8/work/ceph-14.2.8/src/msg/async/ProtocolV2.cc:1501
> #16 0x000064a6e690cec8 in ProtocolV2::handle_read_frame_dispatch (this=0x13f5080fb700) at /var/tmp/portage/sys-cluster/ceph-14.2.8/work/ceph-14.2.8/src/msg/async/ProtocolV2.cc:1128
> #17 0x000064a6e690d245 in ProtocolV2::handle_read_frame_epilogue_main (this=0x13f5080fb700, buffer=..., r=<optimized out>) at /var/tmp/portage/sys-cluster/ceph-14.2.8/work/ceph-14.2.8/src/msg/async/ProtocolV2.cc:1352
> #18 0x000064a6e68f203f in ProtocolV2::run_continuation (this=0x13f5080fb700, continuation=...) at /var/tmp/portage/sys-cluster/ceph-14.2.8/work/ceph-14.2.8/src/msg/async/ProtocolV2.cc:45
> #19 0x000064a6e68b898e in std::function<void (char*, long)>::operator()(char*, long) const (__args#1=<optimized out>, __args#0=<optimized out>, this=0x13f508165610) at /usr/lib/gcc/x86_64-pc-linux-gnu/9.2.0/include/g++-v9/bits/std_function.h:685
> #20 AsyncConnection::process (this=0x13f508165200) at /var/tmp/portage/sys-cluster/ceph-14.2.8/work/ceph-14.2.8/src/msg/async/AsyncConnection.cc:450
> #21 0x000064a6e69181b8 in EventCenter::process_events (this=this@entry=0x13f506f53700, timeout_microseconds=<optimized out>, timeout_microseconds@entry=30000000, working_dur=working_dur@entry=0x64a6e4bb7d28) at /usr/lib/gcc/x86_64-pc-linux-gnu/9.2.0/include/g++-v9/bits/basic_ios.h:282
> #22 0x000064a6e691e25c in NetworkStack::<lambda()>::operator() (__closure=0x13f506f93808, __closure=0x13f506f93808) at /var/tmp/portage/sys-cluster/ceph-14.2.8/work/ceph-14.2.8/src/msg/async/Stack.cc:53
> #23 std::_Function_handler<void(), NetworkStack::add_thread(unsigned int)::<lambda()> >::_M_invoke(const std::_Any_data &) (__functor=...) at /usr/lib/gcc/x86_64-pc-linux-gnu/9.2.0/include/g++-v9/bits/std_function.h:300
> #24 0x000064a6e5ca2130 in std::execute_native_thread_routine (__p=0x13f506f93800) at /var/tmp/portage/sys-devel/gcc-9.2.0-r2/work/gcc-9.2.0/libstdc++-v3/src/c++11/thread.cc:80
> #25 0x000064a6e5e2c482 in start_thread (arg=<optimized out>) at pthread_create.c:486
> #26 0x000064a6e5abc5cf in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
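For anyone else collecting data on this, a sketch of how a full all-thread backtrace like the one above can be pulled from a core dump (the paths are examples; this assumes radosgw debug symbols are installed):

$ gdb /usr/bin/radosgw /path/to/corefile \
    -batch -ex 'thread apply all bt full' > radosgw-backtrace.txt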
I've gotten a couple of requests in the past 24 hours asking how to "lock"
the new dev machines: https://wiki.sepia.ceph.com/doku.php?id=hardware:vossi
These systems aren't in paddles so `teuthology-lock` isn't going to work
here. Is that something you all want?
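If there is interest, the vossi machines would presumably need to be added to paddles first; locking would then look like it does for test nodes. A rough sketch, with a hypothetical hostname (exact flags per the teuthology docs):

$ teuthology-lock --lock vossi01.front.sepia.ceph.com
$ teuthology-lock --unlock vossi01.front.sepia.ceph.com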
My understanding is that, historically, the rex and senta machines have been
shared, with a chance that devs step on each other's toes. I get the desire
to have exclusive use of a machine, but I don't want to have to be the one to
police machine-hogging.
Systems Administrator, RDU