lists.ceph.io
Dev
October 2019
dev@ceph.io: 82 participants, 83 discussions
Re: Ski suit and Jackets
by yizi5159030833 (3 years, 11 months ago)

Re: Professional photographic equipment
by shaozhuo1368 (3 years, 11 months ago)

Re: The leading grow light kit manufacturer of China
by sizhaogou8020114 (3 years, 11 months ago)
no rgw refactoring meeting today
by Casey Bodley (3 years, 11 months ago)

Please join us in the CDM instead!
CDM is today
by Sage Weil (3 years, 11 months ago)

Hi everyone,

CDM is today, 12:30pm ET, 1630 UTC:
https://redhat.bluejeans.com/908675367

The agenda currently has two items:
- [Sage] Update on the ssh orchestrator, ceph-daemon, and baremetal deployments.
- [Igor] Bluestore: keeping small objects and/or blobs in rocksdb

Please add any additional topics here:
https://tracker.ceph.com/projects/ceph/wiki/CDM_02-OCT-2019

See you soon!
sage
Re: Customized packaging case / EVA case supplier
by Cicy Chan (3 years, 11 months ago)

Re: OFTTH Solutions and KVM Solutions
by Shirley (3 years, 11 months ago)
Re: [ceph-users] OSD crashed during the fio test
by Brad Hubbard (3 years, 11 months ago)

Removed ceph-devel(a)vger.kernel.org and added dev(a)ceph.io

On Tue, Oct 1, 2019 at 4:26 PM Alex Litvak <alexander.v.litvak(a)gmail.com> wrote:
>
> Hello everyone,
>
> Can you shed some light on the cause of the crash? Could a client request actually trigger it?
>
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: 2019-09-30 22:52:58.867 7f093d71e700 -1 bdev(0x55b72c156000 /var/lib/ceph/osd/ceph-17/block) aio_submit retries 16
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: 2019-09-30 22:52:58.867 7f093d71e700 -1 bdev(0x55b72c156000 /var/lib/ceph/osd/ceph-17/block) aio submit got (11) Resource temporarily unavailable

The KernelDevice::aio_submit function has tried to submit IO 16 times (a hard-coded limit) and received an error each time, causing it to assert. Can you check the status of the underlying device(s)?

> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/14.2.2/rpm/el7/BUILD/ceph-14.2.2/src/os/bluestore/KernelDevice.cc: In fun
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/14.2.2/rpm/el7/BUILD/ceph-14.2.2/src/os/bluestore/KernelDevice.cc: 757: F
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: ceph version 14.2.2 (4f8fa0a0024755aae7d95567c63f11d6862d55be) nautilus (stable)
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x14a) [0x55b71f668cf4]
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: 2: (ceph::__ceph_assertf_fail(char const*, char const*, int, char const*, char const*, ...)+0) [0x55b71f668ec2]
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: 3: (KernelDevice::aio_submit(IOContext*)+0x701) [0x55b71fd61ca1]
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: 4: (BlueStore::_txc_aio_submit(BlueStore::TransContext*)+0x42) [0x55b71fc29892]
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: 5: (BlueStore::_txc_state_proc(BlueStore::TransContext*)+0x42b) [0x55b71fc496ab]
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: 6: (BlueStore::queue_transactions(boost::intrusive_ptr<ObjectStore::CollectionImpl>&, std::vector<ObjectStore::Transaction, std::allocator<ObjectStore::Transaction> >&, boost::intrusive_ptr<TrackedOp>, ThreadPool::T
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: 7: (non-virtual thunk to PrimaryLogPG::queue_transactions(std::vector<ObjectStore::Transaction, std::allocator<ObjectStore::Transaction> >&, boost::intrusive_ptr<OpRequest>)+0x54) [0x55b71f9b1b84]
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: 8: (ReplicatedBackend::submit_transaction(hobject_t const&, object_stat_sum_t const&, eversion_t const&, std::unique_ptr<PGTransaction, std::default_delete<PGTransaction> >&&, eversion_t const&, eversion_t const&, s
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: 9: (PrimaryLogPG::issue_repop(PrimaryLogPG::RepGather*, PrimaryLogPG::OpContext*)+0xf12) [0x55b71f90e322]
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: 10: (PrimaryLogPG::execute_ctx(PrimaryLogPG::OpContext*)+0xfae) [0x55b71f969b7e]
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: 11: (PrimaryLogPG::do_op(boost::intrusive_ptr<OpRequest>&)+0x3965) [0x55b71f96de15]
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: 12: (PrimaryLogPG::do_request(boost::intrusive_ptr<OpRequest>&, ThreadPool::TPHandle&)+0xbd4) [0x55b71f96f8a4]
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: 13: (OSD::dequeue_op(boost::intrusive_ptr<PG>, boost::intrusive_ptr<OpRequest>, ThreadPool::TPHandle&)+0x1a9) [0x55b71f7a9ea9]
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: 14: (PGOpItem::run(OSD*, OSDShard*, boost::intrusive_ptr<PG>&, ThreadPool::TPHandle&)+0x62) [0x55b71fa475d2]
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: 15: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x9f4) [0x55b71f7c6ef4]
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: 16: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x433) [0x55b71fdc5ce3]
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: 17: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x55b71fdc8d80]
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: 18: (()+0x7dd5) [0x7f0971da9dd5]
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: 19: (clone()+0x6d) [0x7f0970c7002d]
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: *** Caught signal (Aborted) **
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: in thread 7f093d71e700 thread_name:tp_osd_tp
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: ceph version 14.2.2 (4f8fa0a0024755aae7d95567c63f11d6862d55be) nautilus (stable)
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: 1: (()+0xf5d0) [0x7f0971db15d0]
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: 2: (gsignal()+0x37) [0x7f0970ba82c7]
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: 3: (abort()+0x148) [0x7f0970ba99b8]
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: 4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x199) [0x55b71f668d43]
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: 5: (ceph::__ceph_assertf_fail(char const*, char const*, int, char const*, char const*, ...)+0) [0x55b71f668ec2]
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: 6: (KernelDevice::aio_submit(IOContext*)+0x701) [0x55b71fd61ca1]
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: 7: (BlueStore::_txc_aio_submit(BlueStore::TransContext*)+0x42) [0x55b71fc29892]
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: 8: (BlueStore::_txc_state_proc(BlueStore::TransContext*)+0x42b) [0x55b71fc496ab]
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: 9: (BlueStore::queue_transactions(boost::intrusive_ptr<ObjectStore::CollectionImpl>&, std::vector<ObjectStore::Transaction, std::allocator<ObjectStore::Transaction> >&, boost::intrusive_ptr<TrackedOp>, ThreadPool::T
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: 10: (non-virtual thunk to PrimaryLogPG::queue_transactions(std::vector<ObjectStore::Transaction, std::allocator<ObjectStore::Transaction> >&, boost::intrusive_ptr<OpRequest>)+0x54) [0x55b71f9b1b84]
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: 11: (ReplicatedBackend::submit_transaction(hobject_t const&, object_stat_sum_t const&, eversion_t const&, std::unique_ptr<PGTransaction, std::default_delete<PGTransaction> >&&, eversion_t const&, eversion_t const&,
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: 12: (PrimaryLogPG::issue_repop(PrimaryLogPG::RepGather*, PrimaryLogPG::OpContext*)+0xf12) [0x55b71f90e322]
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: 13: (PrimaryLogPG::execute_ctx(PrimaryLogPG::OpContext*)+0xfae) [0x55b71f969b7e]
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: 14: (PrimaryLogPG::do_op(boost::intrusive_ptr<OpRequest>&)+0x3965) [0x55b71f96de15]
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: 15: (PrimaryLogPG::do_request(boost::intrusive_ptr<OpRequest>&, ThreadPool::TPHandle&)+0xbd4) [0x55b71f96f8a4]
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: 16: (OSD::dequeue_op(boost::intrusive_ptr<PG>, boost::intrusive_ptr<OpRequest>, ThreadPool::TPHandle&)+0x1a9) [0x55b71f7a9ea9]
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: 17: (PGOpItem::run(OSD*, OSDShard*, boost::intrusive_ptr<PG>&, ThreadPool::TPHandle&)+0x62) [0x55b71fa475d2]
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: 18: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x9f4) [0x55b71f7c6ef4]
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: 19: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x433) [0x55b71fdc5ce3]
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: 20: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x55b71fdc8d80]
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: 21: (()+0x7dd5) [0x7f0971da9dd5]
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: 22: (clone()+0x6d) [0x7f0970c7002d]
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: --- begin dump of recent events ---
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: -9999> 2019-09-30 20:46:02.076 7f0937f13700 5 osd.17 6328 heartbeat osd_stat(store_statfs(0x1a485594000/0x40000000/0x1bf00000000, data 0x19cfcadc2a/0x1a3aa68000, compress 0x0/0x0/0x0, omap 0xcdefc78, meta 0x3321038
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: -9998> 2019-09-30 20:46:03.776 7f0937f13700 5 osd.17 6328 heartbeat osd_stat(store_statfs(0x1a485594000/0x40000000/0x1bf00000000, data 0x19cfcadc2a/0x1a3aa68000, compress 0x0/0x0/0x0, omap 0xcdefc78, meta 0x3321038
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: -9997> 2019-09-30 20:46:04.277 7f0937f13700 5 osd.17 6328 heartbeat osd_stat(store_statfs(0x1a485594000/0x40000000/0x1bf00000000, data 0x19cfcadc2a/0x1a3aa68000, compress 0x0/0x0/0x0, omap 0xcdefc78, meta 0x3321038
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: -9996> 2019-09-30 20:46:04.777 7f0937f13700 5 osd.17 6328 heartbeat osd_stat(store_statfs(0x1a485594000/0x40000000/0x1bf00000000, data 0x19cfcadc2a/0x1a3aa68000, compress 0x0/0x0/0x0, omap 0xcdefc78, meta 0x3321038
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: -9995> 2019-09-30 20:46:04.905 7f0950744700 5 bluestore.MempoolThread(0x55b72c210a88) _tune_cache_size target: 8485076992 heap: 398680064 unmapped: 9035776 mapped: 389644288 old cache_size: 5064831794 new cache siz
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: -9994> 2019-09-30 20:46:04.905 7f0950744700 5 bluestore.MempoolThread(0x55b72c210a88) _trim_shards cache_size: 5064831794 kv_alloc: 1979711488 kv_used: 120635251 meta_alloc: 1979711488 meta_used: 117486636 data_all
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: -9993> 2019-09-30 20:46:05.813 7f094ccd2700 10 monclient: tick
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: -9992> 2019-09-30 20:46:05.813 7f094ccd2700 10 monclient: _check_auth_rotating have uptodate secrets (they expire after 2019-09-30 20:45:35.819641)
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: -9991> 2019-09-30 20:46:09.913 7f0950744700 5 bluestore.MempoolThread(0x55b72c210a88) _tune_cache_size target: 8485076992 heap: 398680064 unmapped: 9035776 mapped: 389644288 old cache_size: 5064831794 new cache siz
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: -9990> 2019-09-30 20:46:09.913 7f0950744700 5 bluestore.MempoolThread(0x55b72c210a88) _trim_shards cache_size: 5064831794 kv_alloc: 1979711488 kv_used: 120635251 meta_alloc: 1979711488 meta_used: 117486836 data_all
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: -9989> 2019-09-30 20:46:10.681 7f0937f13700 5 osd.17 6328 heartbeat osd_stat(store_statfs(0x1a485594000/0x40000000/0x1bf00000000, data 0x19cfcadc2a/0x1a3aa68000, compress 0x0/0x0/0x0, omap 0xcdefc78, meta 0x3321038
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: -9988> 2019-09-30 20:46:14.569 7f094dcd4700 2 osd.17 6328 ms_handle_reset con 0x55b72c797000 session 0x55b7401f1800
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: -9987> 2019-09-30 20:46:14.569 7f096da92700 10 monclient: handle_auth_request added challenge on 0x55b733bf0400
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: -9986> 2019-09-30 20:46:14.917 7f0950744700 5 bluestore.MempoolThread(0x55b72c210a88) _tune_cache_size target: 8485076992 heap: 398680064 unmapped: 9035776 mapped: 389644288 old cache_size: 5064831794 new cache siz
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: -9985> 2019-09-30 20:46:14.917 7f0950744700 5 bluestore.MempoolThread(0x55b72c210a88) _trim_shards cache_size: 5064831794 kv_alloc: 1979711488 kv_used: 120635251 meta_alloc: 1979711488 meta_used: 117487036 data_all
> Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: -9984> 2019-09-30 20:46:15.381 7f0937f13700 5 osd.17 6328 heartbeat osd_stat(store_statfs(0x1a485594000/0x40000000/0x1bf00000000, data 0x19cfcadc2a/0x1a3aa68000, compress 0x0/0x0/0x0, omap 0xcdefc78, meta 0x3321038
>

--
Cheers,
Brad
Re: Top China Cashmere Products New Designs
by zhongji818027255 (3 years, 11 months ago)

Re: Industrial chiller (ISO & CE) / Haitian & FCS & Jwell & Powerjet supplier / Ding Yu Chiller
by Kirsten Liu (3 years, 12 months ago)