ceph version is: v16.2.10
But I had already turned the warning off by setting the ratio to 0 with
"ceph config set mon mon_warn_pg_not_deep_scrubbed_ratio 0".
The output below shows the current value:
[root@smd-node01 deeproute]# ceph config get mon mon_warn_pg_not_deep_scrubbed_ratio
0.000000
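(If the warning still shows up after that, it may be worth confirming
that the running monitor has actually picked up the value; "ceph config
show" reports a daemon's effective configuration. The monitor name
below, mon.smd-node01, is only guessed from the shell prompt and may
need adjusting.)

ceph config show mon.smd-node01 mon_warn_pg_not_deep_scrubbed_ratio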
Several users have complained for some time that our DMARC/DKIM handling
is not correct. I've recently had time to go study DMARC, DKIM, SPF,
SRS, and other tasty morsels of initialisms, and have thus made a change
to how Mailman handles DKIM signatures for the list:
If a domain advertises that it will reject or quarantine messages that
fail DKIM (through its DMARC policy in the DNS text record
_dmarc.<domain>), the message will be rewritten to be "From" ceph.io,
and SPF should be correct. I do not know if it will regenerate a DKIM
signature in that case for what is now its own message. The From:
address will say something like "From Original Sender via ceph-users
<ceph-users(a)ceph.io>", so it's somewhat clear who first sent the message,
and Reply-To will be set to Original Sender.
Again, this will only happen for senders from domains that advertise a
strict DMARC policy. This does not include gmail.com, surprisingly.
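(You can check what policy a given sending domain advertises yourself;
the DMARC policy is just a DNS TXT record. A quick sketch with dig,
using example.com as a placeholder domain:

dig +short TXT _dmarc.example.com
# a domain with a strict policy publishes something along the lines of
# "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"

A p= value of reject or quarantine is what triggers the From rewriting
described above.)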
Let me know if you notice anything that seems to have gotten worse.
Next on the list is to investigate whether DKIM-signing outbound
messages, or at least those that don't already have an ARC-Seal, is
appropriate and/or workable.
Hi,
The distro check seems not very relevant here, since all Ceph
components run in containers. Any ideas on how to get past this issue?
Any other ideas or suggestions for this kind of deployment?
sudo ./cephadm --image 10.21.22.1:5000/ceph:v17.2.5-20230316 --docker
bootstrap --mon-ip 10.21.22.1 --skip-monitoring-stack
Creating directory /etc/ceph for ceph.conf
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
No time sync service is running; checked for ['chrony.service',
'chronyd.service', 'systemd-timesyncd.service', 'ntpd.service',
'ntp.service', 'ntpsec.service', 'openntpd.service']
ERROR: Distro uos version 20 not supported
uname -a
Linux aaaa 4.19.0-91.82.42.uelc20.x86_64 #1 SMP Sat May 15 13:50:04 CST
2021 x86_64 x86_64 x86_64 GNU/Linux
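(Independent of the unsupported-distro error, the bootstrap output also
notes that no time sync service is running; cephadm expects one of the
listed services to be active. On most systemd-based distributions
something like the following would take care of that part, assuming
chrony is installable from your package repositories:

sudo systemctl enable --now chronyd   # install chrony first if needed
chronyc tracking                      # confirm the clock is actually synchronizing

The distro detection itself appears to be based on /etc/os-release, so
"cat /etc/os-release" shows the ID/VERSION_ID values cephadm is
objecting to.)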
Thank you in advance
Ben
Dear all,
I am writing to seek your assistance in resolving an issue with my Ceph cluster.
Currently, the cluster has a problem where the deep scrubs and regular scrubs for a number of Placement Groups (PGs) cannot be completed in a timely manner. I have noticed that these PGs appear to be concentrated on a single OSD. I am seeking your guidance on how to address this issue and would appreciate any insights or suggestions you may have.
Please find attached the Ceph health detail output for your reference.
HEALTH_WARN 13 pgs not deep-scrubbed in time; 7 pgs not scrubbed in time
[WRN] PG_NOT_DEEP_SCRUBBED: 13 pgs not deep-scrubbed in time
pg 4.426 not deep-scrubbed since 2023-04-22T10:00:21.529716+0800
pg 4.f0 not deep-scrubbed since 2023-04-22T04:55:17.868881+0800
pg 4.b9 not deep-scrubbed since 2023-04-22T16:47:25.219603+0800
pg 4.87 not deep-scrubbed since 2023-04-22T20:01:02.508600+0800
pg 4.31 not deep-scrubbed since 2023-04-23T00:27:39.299893+0800
pg 4.5b9 not deep-scrubbed since 2023-04-19T21:03:47.041934+0800
pg 4.68a not deep-scrubbed since 2023-04-21T19:52:39.251293+0800
pg 4.6a4 not deep-scrubbed since 2023-04-22T16:20:51.078431+0800
pg 4.6ec not deep-scrubbed since 2023-04-21T11:20:33.661595+0800
pg 4.7a4 not deep-scrubbed since 2023-04-20T22:30:44.506420+0800
pg 4.7a2 not deep-scrubbed since 2023-04-16T12:05:56.586205+0800
pg 4.7b4 not deep-scrubbed since 2023-04-17T15:50:10.595292+0800
pg 4.7c8 not deep-scrubbed since 2023-04-19T15:10:12.673655+0800
[WRN] PG_NOT_SCRUBBED: 7 pgs not scrubbed in time
pg 4.31e not scrubbed since 2023-04-24T13:34:26.103257+0800
pg 4.5b9 not scrubbed since 2023-04-24T07:20:53.891175+0800
pg 4.68a not scrubbed since 2023-04-24T03:37:58.070854+0800
pg 4.7a4 not scrubbed since 2023-04-24T02:55:25.912789+0800
pg 4.7b4 not scrubbed since 2023-04-24T10:04:46.889422+0800
pg 4.7c8 not scrubbed since 2023-04-24T13:36:07.284271+0800
pg 4.7d2 not scrubbed since 2023-04-24T14:47:19.365551+0800
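(A few commands that may help narrow this down; pg 4.426 below is just
one of the affected PGs from the list above, substitute as needed:

ceph pg map 4.426                  # show the up/acting OSD set, to confirm the PGs share one OSD
ceph pg deep-scrub 4.426           # manually queue a deep scrub for that PG
ceph config get osd osd_max_scrubs # per-OSD concurrent scrub limit; the old default of 1 can let a busy OSD fall behind

This is only a sketch for diagnosis; whether raising osd_max_scrubs is
appropriate depends on how much scrub load the cluster can absorb.)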
Peter
Details of this release are summarized here:
https://tracker.ceph.com/issues/59542#note-1
Release Notes - TBD
Seeking approvals for:
smoke - Radek, Laura
rados - Radek, Laura
rook - Sébastien Han
cephadm - Adam K
dashboard - Ernesto
rgw - Casey
rbd - Ilya
krbd - Ilya
fs - Venky, Patrick
upgrade/octopus-x (pacific) - Laura (looks the same as in 16.2.8)
upgrade/pacific-p2p - Laura
powercycle - Brad (SELinux denials)
ceph-volume - Guillaume, Adam K
Thx
YuriW
Hi,
We have an Octopus cluster that we want to move from CentOS to Ubuntu. After activating all the OSDs, the device class is not shown in ceph osd tree.
However, ceph-volume list shows the crush device class :/
Should I just add it back manually, or is there a better way?
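(If the class really is only missing from the CRUSH map, it can be set
per OSD. A minimal sketch, with osd.0 and hdd as placeholders for the
actual OSD ids and device class:

ceph osd crush set-device-class hdd osd.0   # repeat, or loop, over the affected OSDs
ceph osd tree                               # verify the CLASS column is populated again

If an OSD already has a wrong class assigned, it has to be cleared with
"ceph osd crush rm-device-class" before a new one can be set.)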
Hi everyone,
how do you test features that are used by your users before
updating/upgrading to a newer version of Ceph? For example, if you have
users or customers using RBD images with special sets of
permissions/caps, or users/customers using S3 with versioning... None of
these should be broken by the new code.
Do you use any framework, set of scripts, or anything else? So far, I
have only found Teuthology for integration tests and the s3-tests suite
for S3 [https://github.com/ceph/s3-tests].
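(For the S3 versioning case specifically, a lot can already be covered
by a small script run against a staging RGW before upgrading
production. A minimal sketch using the AWS CLI; the endpoint URL,
bucket name and credentials are placeholders:

ENDPOINT=http://rgw.staging.example:8080
aws --endpoint-url $ENDPOINT s3api create-bucket --bucket upgrade-smoke
aws --endpoint-url $ENDPOINT s3api put-bucket-versioning --bucket upgrade-smoke --versioning-configuration Status=Enabled
echo v1 > probe.txt; aws --endpoint-url $ENDPOINT s3api put-object --bucket upgrade-smoke --key probe --body probe.txt
echo v2 > probe.txt; aws --endpoint-url $ENDPOINT s3api put-object --bucket upgrade-smoke --key probe --body probe.txt
aws --endpoint-url $ENDPOINT s3api list-object-versions --bucket upgrade-smoke   # expect two versions of "probe"

The same idea works for the RBD caps case: script the exact operations
a restricted client key is supposed to be able, and not able, to do,
and run it against a test cluster on the new release.)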
Thank you
Michal
hiya folks!
I am excited to be learning about Ceph in my homelab, where I have a
couple of trios of machines serving, currently, a CephFS, but I'm
looking forward to using Ceph for block devices for VMs, etc.
My physical machines have multiple network interfaces; one trio of them
are ARM machines with up to 4x SFP+ 10GbE interfaces.
I have a couple of 10GbE switches - one of which has exactly three
ports I can connect these nodes to for general distribution to the rest
of the network, and another…
I would like to configure my Ceph cluster to replicate over a different
network interface than the one it serves clients over.
Is this a common configuration, and/or can anyone provide me some guidance?!
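(This is a standard setup: Ceph calls these the public and cluster
networks. Clients and MONs talk over the public network, while OSD
replication, recovery and heartbeat traffic can be moved onto a
dedicated cluster network. A minimal sketch with placeholder subnets,
using the centralized config; the same keys can also go into ceph.conf:

ceph config set global public_network 192.168.1.0/24   # client-facing subnet (placeholder)
ceph config set global cluster_network 10.10.10.0/24   # replication/backend subnet (placeholder)

OSDs bind their addresses at startup, so they need a restart after the
cluster_network is changed.)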
Thanks in advance!
Best!
J
--
Justin Alan Ryan