Hi all,
we run a Ceph Luminous 12.2.12 cluster with 7 OSD servers, 12x4TB disks each.
Recently we redeployed the OSDs of one of them using the bluestore backend;
however, since then we're facing out-of-memory errors (invoked oom-killer)
and the OS kills one of the ceph-osd processes.
The OSD is restarted automatically and is back online after one minute.
We're running Ubuntu 16.04, kernel 4.15.0-55-generic.
The server has 32GB of RAM and 4GB of swap partition.
All the disks are HDDs, no SSDs.
Bluestore settings are the default ones:
"osd_memory_target": "4294967296"
"osd_memory_cache_min": "134217728"
"bluestore_cache_size": "0"
"bluestore_cache_size_hdd": "1073741824"
"bluestore_cache_autotune": "true"
As stated in the documentation, bluestore assigns 4GB of RAM per OSD
by default (roughly 1GB of RAM per 1TB of storage).
So in this case 12 OSDs x 4GB = 48GB of RAM would be needed. Am I right?
Are these the minimum requirements for bluestore?
In case adding more RAM is not an option, can any of
osd_memory_target, osd_memory_cache_min or bluestore_cache_size_hdd
be decreased to fit our server specs?
Would this have any impact on performance?
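For example, would something like this be a sane way to lower the target
(the 2GB value below is only a guess on my side):

# /etc/ceph/ceph.conf on the OSD hosts, [osd] section
osd_memory_target = 2147483648
# push it to the running OSDs (not sure it takes full effect without an OSD restart)
ceph tell osd.* injectargs '--osd_memory_target 2147483648'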
Thanks
Jaime
--
Jaime Ibar
High Performance & Research Computing, IS Services
Lloyd Building, Trinity College Dublin, Dublin 2, Ireland.
http://www.tchpc.tcd.ie/ | jaime(a)tchpc.tcd.ie
Tel: +353-1-896-3725
Hi All,
ceph mgr module disable balancer
Error EINVAL: module 'balancer' cannot be disabled (always-on)
What's the way to restart the balancer? Restart the MGR service?
I want to suggest to the balancer developers to set up a ceph-balancer.log for this
module, to get more information about what it's doing.
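If I understand the always-on modules correctly, the closest thing seems to be
pausing the balancer and failing over the mgr (the mgr name is a placeholder):

ceph balancer off            # pause automatic balancing; the module stays loaded
ceph balancer status
ceph mgr fail <active-mgr>   # force a mgr failover, which restarts all mgr modules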
Regards
Manuel
On CentOS 7, the option "secretfile" requires installation of ceph-fuse.
Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
________________________________________
From: ceph-users <ceph-users-bounces(a)lists.ceph.com> on behalf of Yan, Zheng <ukernel(a)gmail.com>
Sent: 07 August 2019 10:10:19
To: DHilsbos(a)performair.com
Cc: ceph-users
Subject: Re: [ceph-users] Error Mounting CephFS
On Wed, Aug 7, 2019 at 3:46 PM <DHilsbos(a)performair.com> wrote:
>
> All;
>
> I have a server running CentOS 7.6 (1810), that I want to set up with CephFS (full disclosure, I'm going to be running samba on the CephFS). I can mount the CephFS fine when I use the option secret=, but when I switch to secretfile=, I get an error "No such process." I installed ceph-common.
>
> Is there a service that I'm not aware I should be starting?
> Do I need to install another package?
>
mount.ceph is missing. Check if it exists and is located in $PATH.
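For example (monitor address and paths are placeholders):

which mount.ceph || yum install -y ceph-common   # the helper ships in ceph-common on CentOS 7
mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret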
> Thank you,
>
> Dominic L. Hilsbos, MBA
> Director - Information Technology
> Perform Air International Inc.
> DHilsbos(a)PerformAir.com
> www.PerformAir.com
>
>
Hi,
as a follow-up:
* a full log of one OSD failing to start https://pastebin.com/T8UQ2rZ6
* our EC pool creation in the first place https://pastebin.com/20cC06Jn
* ceph osd dump and ceph osd erasure-code-profile get cephfs
https://pastebin.com/TRLPaWcH
As we dig more into it, it looks like a bug in the CephFS or
erasure-coding part of Ceph.
Ansgar
On Tue, 6 Aug 2019 at 14:50, Ansgar Jazdzewski
<a.jazdzewski(a)googlemail.com> wrote:
>
> hi folks,
>
> we had to move one of our clusters, so we had to reboot all servers. Now
> we see an error on all OSDs with the EC pool.
>
> are we missing some options? Will an upgrade to 13.2.6 help?
>
>
> Thanks,
> Ansgar
>
> 2019-08-06 12:10:16.265 7fb337b83200 -1
> /build/ceph-13.2.4/src/osd/ECUtil.h: In function
> 'ECUtil::stripe_info_t::stripe_info_t(uint64_t, uint64_t)' thread
> 7fb337b83200 time 2019-08-06 12:10:16.263025
> /build/ceph-13.2.4/src/osd/ECUtil.h: 34: FAILED assert(stripe_width %
> stripe_size == 0)
>
> ceph version 13.2.4 (b10be4d44915a4d78a8e06aa31919e74927b142e) mimic (stable)
>  1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x102) [0x7fb32eeb83c2]
>  2: (()+0x2e5587) [0x7fb32eeb8587]
>  3: (ECBackend::ECBackend(PGBackend::Listener*, coll_t const&, boost::intrusive_ptr<ObjectStore::CollectionImpl>&, ObjectStore*, CephContext*, std::shared_ptr<ceph::ErasureCodeInterface>, unsigned long)+0x4de) [0xa4cbbe]
>  4: (PGBackend::build_pg_backend(pg_pool_t const&, std::map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::less<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > > const&, PGBackend::Listener*, coll_t, boost::intrusive_ptr<ObjectStore::CollectionImpl>&, ObjectStore*, CephContext*)+0x2f9) [0x9474e9]
>  5: (PrimaryLogPG::PrimaryLogPG(OSDService*, std::shared_ptr<OSDMap const>, PGPool const&, std::map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::less<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > > const&, spg_t)+0x138) [0x8f96e8]
>  6: (OSD::_make_pg(std::shared_ptr<OSDMap const>, spg_t)+0x11d3) [0x753553]
>  7: (OSD::load_pgs()+0x4a9) [0x758339]
>  8: (OSD::init()+0xcd3) [0x7619c3]
>  9: (main()+0x3678) [0x64d6a8]
>  10: (__libc_start_main()+0xf0) [0x7fb32ca68830]
>  11: (_start()+0x29) [0x717389]
>  NOTE: a copy of the executable, or objdump -rdS <executable> is needed to interpret this.
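For reference, the failed assert compares the pool's stripe_width against the
computed stripe size (derived from k and stripe_unit in the profile), so this
is what we are checking next (the data pool name is a placeholder):

ceph osd erasure-code-profile get cephfs     # note k, m and stripe_unit
ceph osd pool ls detail | grep cephfs_data   # EC pools report their stripe_width here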
Including new ceph-users list.
----- Forwarded message from Mike Perez <miperez(a)redhat.com> -----
Date: Fri, 2 Aug 2019 10:08:20 -0700
From: Mike Perez <miperez(a)redhat.com>
To: Kevin Hrpcek <kevin.hrpcek(a)ssec.wisc.edu>
CC: "ceph-users(a)lists.ceph.com" <ceph-users(a)lists.ceph.com>
Subject: Re: [ceph-users] Ceph Scientific Computing User Group
We have scheduled the next meeting on the community calendar for August
28 at 14:30 UTC. Each meeting will then take place on the last
Wednesday of each month.
Here's the pad to collect agenda/notes:
[1]https://pad.ceph.com/p/Ceph_Science_User_Group_Index
--
Mike Perez (thingee)
On Tue, Jul 23, 2019 at 10:40 AM Kevin Hrpcek
<[2]kevin.hrpcek(a)ssec.wisc.edu> wrote:
Update
We're going to hold off until August for this so we can promote it on
the Ceph twitter with more notice. Sorry for the inconvenience if you
were planning on the meeting tomorrow. Keep a watch on the list,
twitter, or ceph calendar for updates.
Kevin
On 7/5/19 11:15 PM, Kevin Hrpcek wrote:
We've had some positive feedback and will be moving forward with
this user group. The first virtual user group meeting is planned for
July 24th at 4:30pm central European time/10:30am American eastern
time. We will keep it to an hour in length. The plan is to use the
ceph bluejeans video conferencing and it will be put on the ceph
community calendar. I will send out links when it is closer to the
24th.
The goal of this user group is to promote conversations and sharing
ideas for how ceph is used in the scientific/hpc/htc
communities. Please be willing to discuss your use cases, cluster
configs, problems you've had, shortcomings in ceph, etc... Not
everyone pays attention to the ceph lists so feel free to share the
meeting information with others you know that may be interested in
joining in.
Contact me if you have questions, comments, suggestions, or want to
volunteer a topic for meetings. I will be brainstorming some
conversation starters but it would also be interesting to have
people give a deep dive into their use of ceph and what they have
built around it to support the science being done at their facility.
Kevin
On 6/17/19 10:43 AM, Kevin Hrpcek wrote:
Hey all,
At cephalocon some of us who work in scientific computing got
together for a BoF and had a good conversation. There was some
interest in finding a way to continue the conversation focused on
ceph in scientific computing and htc/hpc environments. We are
considering putting together a monthly video conference user group
meeting to facilitate sharing thoughts and ideas for this part of
the ceph community. At cephalocon we mostly had teams present from
the EU so I'm interested in hearing how much community interest
there is in a ceph+science/HPC/HTC user group meeting. It will be
impossible to pick a time that works well for everyone but initially
we considered something later in the work day for EU countries.
Reply to me if you're interested and please include your timezone.
Kevin
References
1. https://pad.ceph.com/p/Ceph_Science_User_Group_Index
2. mailto:kevin.hrpcek@ssec.wisc.edu
----- End forwarded message -----
--
Jan Fajerski
Engineer Enterprise Storage
SUSE Linux GmbH, GF: Felix Imendörffer, Mary Higgins, Sri Rasiah
HRB 21284 (AG Nürnberg)
Hi, All,
When deploying a development cluster, there are three types of OSD objectstore backend: filestore, bluestore and kstore.
But there is no "--kstore" option when using the "ceph-deploy osd" command to deploy a real ceph cluster.
Can kstore be used as the OSD objectstore backend when deploying a real ceph cluster? If so, how?
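For comparison, this is roughly how the development cluster gets kstore
(assuming vstart.sh's --kstore option, which is how I read the dev docs;
the daemon counts are just an example):

MON=1 OSD=3 MDS=0 ../src/vstart.sh -n -x --kstore
# ceph-deploy osd only offers --filestore and --bluestore; there is no kstore flag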
Thanks a lot
R.R.Yuan
Hi,
I have tried to do this twice now and can’t get it to work 100%. In my last attempt I have the secondary zone syncing correctly:
sudo radosgw-admin sync status
          realm 2cc7226c-2cce-4b75-a719-97ba22756289 (prod)
      zonegroup 04702b5e-faa4-4110-9e98-6097f5e5e4b9 (us)
           zone ece9f510-e910-49be-a89f-7b1f47b3faf8 (zone2)
  metadata sync syncing
                full sync: 0/64 shards
                incremental sync: 64/64 shards
                metadata is caught up with master
      data sync source: 8f0e7a41-f71c-45e8-869a-9bfbadbcbf2c (zone1)
                        syncing
                        full sync: 0/128 shards
                        incremental sync: 128/128 shards
                        data is caught up with source
but the master zone can’t sync from the secondary:
sudo radosgw-admin sync status
          realm 2cc7226c-2cce-4b75-a719-97ba22756289 (prod)
      zonegroup 04702b5e-faa4-4110-9e98-6097f5e5e4b9 (us)
           zone 8f0e7a41-f71c-45e8-869a-9bfbadbcbf2c (zone1)
  metadata sync no sync (zone is master)
2019-08-06 16:43:36.228 7f059c7ab640 0 data sync zone:ece9f510 ERROR: failed to fetch datalog info
      data sync source: ece9f510-e910-49be-a89f-7b1f47b3faf8 (zone2)
                        failed to retrieve sync info: (13) Permission denied
I can see on our secondary RGW hosts that the requests from the master zone get a 403:
2019-08-06 16:27:58.137 7ff3f3d30700 1 ====== req done req=0x7ff3f3d298d0 op status=0 http_status=403 latency=0s ======
2019-08-06 16:27:58.137 7ff3f3d30700 1 civetweb: 0x563d52f64000: 10.130.30.210 - - [06/Aug/2019:16:27:58 -0700] "GET /admin/log?type=data&rgwx-zonegroup=04702b5e-faa4-4110-9e98-6097f5e5e4b9 HTTP/1.1" 403 320 - -
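In case it is the usual credentials mismatch, this is what I plan to check
next: the system user's keys must be identical in both zones' configuration
(the user name is a placeholder):

radosgw-admin zone get --rgw-zone=zone1     # compare system_key with the other zone
radosgw-admin user info --uid=sync-user     # the real access/secret keys of the system user
radosgw-admin zone modify --rgw-zone=zone1 --access-key=<key> --secret=<secret>
radosgw-admin period update --commit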
Hi,
I created a cache tier using "ceph osd tier". When I remove the cache tier,
I use cache-flush-evict-all to flush the data, but the cache pool sometimes
has several objects left. Is this a bug?
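The sequence I run, roughly (pool name changed):

rados -p cachepool cache-flush-evict-all
rados -p cachepool ls   # objects still listed here failed to flush or evict

As far as I understand, objects that are locked or still watched by clients
cannot be evicted, so maybe that explains the leftovers?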
thanks