Hi,
I have installed a Ceph Octopus cluster using cephadm with a single network.
Now I want to add a second network and configure it as the cluster network.
How do I configure Ceph to use the second network as the cluster network?
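The closest I have found so far (untested, and the subnet below is just an example) is to set the option in the config database and then restart the OSDs:
# ceph config set global cluster_network 10.1.0.0/24
# ceph orch restart osd    (or restart the OSD daemons host by host)
Is that the right approach with cephadm, or is there more to it?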
Amudhan
Hi,
# ceph health detail
HEALTH_WARN too few PGs per OSD (24 < min 30)
TOO_FEW_PGS too few PGs per OSD (24 < min 30)
ceph version 14.2.9
This warning popped up when the autoscaler shrank a pool's pg_num and pgp_num from 512 to 256 on its own. The hdd35 storage is only used by this pool.
I have three different storage classes, and the pools use the different classes as appropriate. How can I turn this warning into something actionable that points me at the right class of storage? I'm guessing it's referring to hdd35.
RAW STORAGE:
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd25 129 TiB 83 TiB 46 TiB 46 TiB 35.87
hdd35 269 TiB 220 TiB 49 TiB 49 TiB 18.12
ssd 256 TiB 164 TiB 92 TiB 92 TiB 35.84
TOTAL 655 TiB 468 TiB 186 TiB 187 TiB 28.56
If I follow: https://docs.ceph.com/en/latest/rados/operations/health-checks/#too-few-pgs
Which then links to: https://docs.ceph.com/en/latest/rados/operations/placement-groups/#choosing…
The math there would want the pool to have pg_num/pgp_num of 2048 -- yet the autoscaler just shrank the count. Which is more correct?
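For reference, this is how I've been checking what the autoscaler wants, and what I'd run if hinting expected usage turns out to be the fix (the pool name and ratio below are hypothetical):
# ceph osd pool autoscale-status
# ceph osd pool set mypool target_size_ratio 0.8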
Thanks!
peter
Peter Eisch
Senior Site Reliability Engineer
Hi,
We're considering the merits of enabling CephFS for our main Ceph
cluster (which provides object storage for OpenStack), and one of the
obvious questions is what sort of hardware we would need for the MDSs
(and how many!).
These would be for our users' scientific workloads, so they would need to
provide reasonably high performance. For reference, we have 3060 6TB
OSDs across 51 OSD hosts, and 6 dedicated RGW nodes.
The minimum specs are very modest (2-3GB RAM, a tiny amount of disk,
similar networking to the OSD nodes), but I'm not sure how much going
beyond that is likely to be useful in production.
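From what I can tell, extra RAM beyond the minimum would mostly feed the MDS cache, e.g. something like (the value is purely illustrative, not a recommendation):
# ceph config set mds mds_cache_memory_limit 17179869184
but I'd be interested to hear what people actually run in production.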
I've also seen it suggested that an SSD-only pool is sensible for the
CephFS metadata pool; how big is that likely to get?
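(My assumption is that pinning the metadata pool to SSDs would look something like the following, with the rule and pool names being hypothetical:
# ceph osd crush rule create-replicated replicated-ssd default host ssd
# ceph osd pool set cephfs_metadata crush_rule replicated-ssd
-- please correct me if that's the wrong way to go about it.)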
I'd be grateful for any pointers :)
Regards,
Matthew
I'm not sure I understood the question.
If you're asking if you can run octopus via RPMs on el7 without the
cephadm and containers orchestration, then the answer is yes.
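e.g. with a repo stanza like this (untested on my end, but the path follows the usual download.ceph.com layout):
[ceph]
name=Ceph
baseurl=https://download.ceph.com/rpm-octopus/el7/x86_64/
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
and then a plain 'yum install ceph'.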
-- dan
On Fri, Oct 23, 2020 at 9:47 AM Marc Roos <M.Roos(a)f1-outsourcing.eu> wrote:
>
>
> No clarity on this?
>
> -----Original Message-----
> To: ceph-users
> Subject: [ceph-users] ceph octopus centos7, containers, cephadm
>
>
> I am running Nautilus on centos7. Does octopus run similar as nautilus
> thus:
>
> - runs on el7/centos7
> - runs without containers by default
> - runs without cephadm by default
>
> _______________________________________________
> ceph-users mailing list -- ceph-users(a)ceph.io
> To unsubscribe send an email to ceph-users-leave(a)ceph.io
Hi all,
FYI, we're currently reviewing and discussing several design documents
that cover some topics related to "Day 1 installation" and "Day 2
operations" using the functionality provided by cephadm in Ceph Dashboard.
We'd like to solicit your input on these plans, so feel free to review
and comment on these drafts here (many thanks to Paul Cuzner for kicking
this off):
docs: Dashboard host management
— https://github.com/ceph/ceph/pull/37292
doc/dev/cephadm: Doc defining the design for host maintenance
— https://github.com/ceph/ceph/pull/37607
doc/dev/cephadm: high level design for a compliance check feature
— https://github.com/ceph/ceph/pull/37519
Thank you for your input!
Lenz
Hi, today my infra provider had a blackout. Ceph then tried to recover, but it
is stuck in an inconsistent state because many OSDs cannot recover: the kernel
keeps killing them with the OOM killer. Even an OSD that had been fine has now
gone down, OOM-killed.
Even on a server with 32 GB of RAM, the OSD uses all of it and never recovers.
I think this could be a memory leak. Ceph version: Octopus 15.2.3.
In: https://pastebin.pl/view/59089adc
You can see that buffer_anon grows to 32 GB -- but why? My whole cluster is
down because of this.
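As a stopgap I am thinking of lowering the OSD memory target so recovery fits in RAM, e.g. (the value is just a guess, and I am not sure it actually caps buffer_anon):
# ceph config set osd osd_memory_target 2147483648
Has anyone seen this before?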
We recently did some work on the Ceph cluster, and a few disks ended up
offline at the same time. There are now 6 PGs stuck in the "remapped" state,
and this is the entirety of their recovery states:
recovery_state:
  0: name: Started/Primary/WaitActingChange
     enter_time: 2020-10-21 18:48:02.034430
     comment: waiting for pg acting set to change
  1: name: Started
     enter_time: 2020-10-21 18:48:01.752957
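For reference, I pulled those states with something like the following (the PG ID here is just an example):
# ceph pg dump_stuck unclean
# ceph pg 2.1f query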
Any ideas?
Mac Wynkoop