Hello there,
Thank you for your response.
There is no error in syslog, dmesg, or SMART.
# ceph health detail
HEALTH_WARN Too many repaired reads on 2 OSDs
OSD_TOO_MANY_REPAIRS Too many repaired reads on 2 OSDs
osd.29 had 38 reads repaired
osd.16 had 17 reads repaired
How can I clear this warning?
My Ceph version is 14.2.9 (clear_shards_repaired is not supported).
/dev/sdh1 on /var/lib/ceph/osd/ceph-16 type xfs (rw,relatime,attr2,inode64,noquota)
# cat dmesg | grep sdh
[ 12.990728] sd 5:2:3:0: [sdh] 19531825152 512-byte logical blocks: (10.0 TB/9.09 TiB)
[ 12.990728] sd 5:2:3:0: [sdh] Write Protect is off
[ 12.990728] sd 5:2:3:0: [sdh] Mode Sense: 1f 00 00 08
[ 12.990728] sd 5:2:3:0: [sdh] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 13.016616] sdh: sdh1 sdh2
[ 13.017780] sd 5:2:3:0: [sdh] Attached SCSI disk
# ceph tell osd.29 bench
{
"bytes_written": 1073741824,
"blocksize": 4194304,
"elapsed_sec": 6.464404,
"bytes_per_sec": 166100668.21318716,
"iops": 39.60148530320815
}
# ceph tell osd.16 bench
{
"bytes_written": 1073741824,
"blocksize": 4194304,
"elapsed_sec": 9.6168945000000008,
"bytes_per_sec": 111651617.26584397,
"iops": 26.619819942914003
}
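Since the drive itself looks healthy, here is a hedged sketch of what I am considering to silence the check (the option name and threshold below are assumptions on my side, not verified on this cluster):

  # raise the threshold this health check compares against
  # (mon_osd_warn_num_repaired, default 10) above the current counts
  ceph config set osd mon_osd_warn_num_repaired 50

  # on releases where the command exists (not 14.2.9), the counter
  # itself could be reset instead:
  #   ceph tell osd.29 clear_shards_repaired
  #   ceph tell osd.16 clear_shards_repaired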
Thank you
> On 26 Mar 2021, at 16:04, Anthony D'Atri <anthony.datri(a)gmail.com> wrote:
>
> Did you look at syslog, dmesg, or SMART? Most likely the drives are failing.
>
>
>> On Mar 25, 2021, at 9:55 PM, jinguk.kwon(a)ungleich.ch wrote:
>>
>> Hello there,
>>
>> Thank you in advance.
>> My Ceph version is 14.2.9.
>> I have a repair issue too.
>>
>> ceph health detail
>> HEALTH_WARN Too many repaired reads on 2 OSDs
>> OSD_TOO_MANY_REPAIRS Too many repaired reads on 2 OSDs
>> osd.29 had 38 reads repaired
>> osd.16 had 17 reads repaired
>>
>> ~# ceph tell osd.16 bench
>> {
>> "bytes_written": 1073741824,
>> "blocksize": 4194304,
>> "elapsed_sec": 7.1486738159999996,
>> "bytes_per_sec": 150201541.10217974,
>> "iops": 35.81083800844663
>> }
>> ~# ceph tell osd.29 bench
>> {
>> "bytes_written": 1073741824,
>> "blocksize": 4194304,
>> "elapsed_sec": 6.9244327500000002,
>> "bytes_per_sec": 155065672.9246161,
>> "iops": 36.970537406114602
>> }
>>
>> But it looks like those OSDs are OK. How can I clear this warning?
>>
>> Best regards
>> JG
>> _______________________________________________
>> ceph-users mailing list -- ceph-users(a)ceph.io
>> To unsubscribe send an email to ceph-users-leave(a)ceph.io
>
Hi,
Do I need to update ceph.conf and restart each OSD after adding more MONs?
This is with 15.2.8 deployed by cephadm.
When adding a MON, "mon_host" should be updated accordingly.
Given [1], is that update "the monitor cluster’s centralized configuration
database" or "runtime overrides set by an administrator"?
[1] https://docs.ceph.com/en/latest/rados/configuration/ceph-conf/#config-sourc…
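For context, a small sketch of what I think is involved (the commands below are only how I would inspect it, not a confirmed answer):

  # running daemons and clients learn monitor addresses from the monmap,
  # so they should pick up new MONs without a restart; mon_host is only
  # used for the initial connection.
  ceph mon dump                       # authoritative monitor list
  ceph config generate-minimal-conf   # regenerate a minimal ceph.conf with the new mon_host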
Thanks!
Tony
Requesting the moderators to approve the same.
It has been a long time and a solution to the issue has not been found yet.
-Lokendra
On Tue, 23 Mar 2021, 10:17 Lokendra Rathour, <lokendrarathour(a)gmail.com>
wrote:
> Hi Team,
> I am trying to upgrade my existing Ceph cluster (using Ceph-ansible) from
> the current release Octopus to Pacific, for which I am using a rolling upgrade.
> I am facing various issues in getting it done; please see below and suggest:
>
> Issue 1: when updating the all.yml file with the Ceph release number set to 16
> and the Ceph release set to Pacific:
>
> TASK [ceph-validate : validate ceph_repository_community] *****************
> task path: /home/ansible/ceph-ansible/roles/ceph-validate/tasks/main.yml:20
> Tuesday 23 March 2021 10:00:09 +0530 (0:00:00.141)    0:01:06.028 *********
> fatal: [cephnode1]: FAILED! => changed=false
>   msg: ceph_stable_release must be either 'nautilus' or 'octopus'
> fatal: [cephnode2]: FAILED! => changed=false
>   msg: ceph_stable_release must be either 'nautilus' or 'octopus'
> fatal: [cephnode3]: FAILED! => changed=false
>   msg: ceph_stable_release must be either 'nautilus' or 'octopus'
>
> Issue 2: keeping ceph_stable_release at octopus and changing the Ceph
> release number to 16 gives the following error:
>
> fatal: [cephnode1 -> cephnode1]: FAILED! => changed=true
>   cmd:
>   - ceph
>   - --cluster
>   - ceph
>   - osd
>   - require-osd-release
>   - pacific
>   delta: '0:00:00.478257'
>   end: '2021-03-22 19:54:22.892994'
>   invocation:
>     module_args:
>       _raw_params: ceph --cluster ceph osd require-osd-release pacific
>       _uses_shell: false
>       argv: null
>       chdir: null
>       creates: null
>       executable: null
>       removes: null
>       stdin: null
>       stdin_add_newline: true
>       strip_empty_ends: true
>       warn: true
>   msg: non-zero return code
>   rc: 22
>   start: '2021-03-22 19:54:22.414737'
>   stderr: |-
>     Invalid command: pacific not in luminous|mimic|nautilus|octopus
>     osd require-osd-release luminous|mimic|nautilus|octopus [--yes-i-really-mean-it] :  set the minimum allowed OSD release to participate in the cluster
>     Error EINVAL: invalid command
>   stderr_lines: <omitted>
>   stdout: ''
>   stdout_lines: <omitted>
>
> Problem statement:
>
> Not able to upgrade the Ceph cluster from Octopus to Pacific with
> Ceph-ansible. Please suggest/support.
>
> --
> ~ Lokendra
> skype: lokendrarathour
>
>
>
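A hedged sketch of one possible direction (assuming ceph-ansible's stable-6.0 branch is the one targeting Pacific; the checkout path and inventory name below are placeholders taken from the task path in the error above):

  # the validate error suggests the checked-out ceph-ansible branch only
  # knows 'nautilus' and 'octopus', so a branch that targets pacific is
  # needed before changing all.yml
  cd /home/ansible/ceph-ansible
  git fetch origin
  git checkout stable-6.0              # assumed: stable-6.0 targets Pacific
  pip install -r requirements.txt

  # group_vars/all.yml:
  #   ceph_stable_release: pacific
  # then run the rolling upgrade playbook against the inventory:
  ansible-playbook -i hosts infrastructure-playbooks/rolling_update.yml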
Hi Robert,
I just checked your email on the ceph-users list.
I will try to look deeper into the question.
For now I have a query related to the upgrade itself.
Is it possible for you to send any links/documents that you are following
to upgrade Ceph?
I am trying to upgrade a Ceph cluster with ceph-ansible on CentOS 7 from
Nautilus to Octopus but am hitting many el7-related upgrade issues.
Similarly, I am trying to do it on CentOS 8, but with no luck.
I have tried posting my query to the community (see the mail in the trail),
but it is yet to be posted pending moderator approval.
Any support would be appreciated. Thank you once again for your help.
-Lokendra
Hello.
I have a 5-node cluster in datacenter A. I also have the same 5 nodes in datacenter B.
They are going to become a 10-node 8+2 EC cluster for backup, but I need to add
the second 5 nodes later.
I have to sync my S3 data with multisite to the 5-node cluster in datacenter A,
move it to B, and then add the other 5 nodes to the same cluster.
The question is: can I create an 8+2 EC pool on the 5-node cluster and add the
other 5 nodes later? How can I rebalance the data after that?
Or is there a better solution in my case? What should I do?
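A hedged sketch of one way this could work (the profile, rule, and pool names and the PG count below are placeholders, not from your cluster): with only 5 hosts, an 8+2 profile can only place all 10 chunks if the CRUSH failure domain is osd; once the second 5 hosts have joined, the pool can be switched to a rule with a host failure domain and Ceph will backfill the data into the new layout.

  # 8+2 profile that spreads chunks per OSD while only 5 hosts exist
  ceph osd erasure-code-profile set ec82-osd k=8 m=2 crush-failure-domain=osd
  ceph osd pool create backup.ec 128 128 erasure ec82-osd

  # later, after the remaining 5 hosts are in the cluster:
  ceph osd erasure-code-profile set ec82-host k=8 m=2 crush-failure-domain=host
  ceph osd crush rule create-erasure ec82-host-rule ec82-host
  ceph osd pool set backup.ec crush_rule ec82-host-rule   # triggers rebalancing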
Dear Ceph’ers
I am about to upgrade the MDS nodes for CephFS in the Ceph cluster (erasure code 8+3) that I am administrating.
Since they will get plenty of memory and CPU cores, I was wondering if it would be a good idea to move the metadata OSDs (NVMes, currently on OSD nodes together with the cephfs_data OSDs (HDD)) to the MDS nodes?
Configured as:
4 x MDS, each with a metadata OSD, and configured with 4 x replication
so each metadata OSD would have a complete copy of the metadata.
I know the MDS stores a lot of metadata in RAM, but if the metadata OSDs were on the MDS nodes, would that not bring down latency?
Anyway, I am just asking for your opinion on this: pros and cons, or even better, somebody who has actually tried this?
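To make the idea concrete, a minimal sketch of how the metadata pool could be pinned to those NVMe OSDs with 4x replication (the pool name cephfs_metadata and the rule name are assumptions; restricting placement to exactly the MDS hosts would additionally need a dedicated CRUSH bucket for those nodes):

  # replicated rule that only selects OSDs with the nvme device class
  ceph osd crush rule create-replicated meta-nvme default host nvme
  # pin the CephFS metadata pool to that rule and use 4 replicas
  ceph osd pool set cephfs_metadata crush_rule meta-nvme
  ceph osd pool set cephfs_metadata size 4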
Best regards,
Jesper
--------------------------
Jesper Lykkegaard Karlsen
Scientific Computing
Centre for Structural Biology
Department of Molecular Biology and Genetics
Aarhus University
Gustav Wieds Vej 10
8000 Aarhus C
E-mail: jelka(a)mbg.au.dk<mailto:jelka@mbg.au.dk>
Tlf: +45 50906203
Hi,
I am running a Ceph Octopus cluster, version 15.2.10, set up with the
cephadm orchestrator and the official Docker Hub container images.
Yesterday an important security fix was released for libssl, and it is
already packaged for all major distributions.
I tried to run "ceph orch upgrade check docker.io/ceph/ceph:v15" but it
tells me that the containers do not need to be upgraded.
How will this security fix of OpenSSL be deployed in a timely manner to
users of the Ceph container images?
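For reference, a hedged sketch of how such a fix usually reaches container users (assumption: the fix ships in a rebuilt or newer point-release image rather than in the existing v15.2.10 tag; v15.2.11 below is a placeholder for whatever tag eventually carries it):

  # compare the running containers against a candidate image
  ceph orch upgrade check --image docker.io/ceph/ceph:v15.2.11
  # roll the cluster onto that image once it is published
  ceph orch upgrade start --image docker.io/ceph/ceph:v15.2.11
  ceph orch upgrade status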
Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
Amtsgericht Berlin-Charlottenburg - HRB 93818 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin
Hi,
Is it possible to do a big jump, or does it need to go slower: to the latest Luminous, then the latest Mimic, then the latest Nautilus?
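For reference, a hedged sketch (assuming the cluster starts on Luminous, which the message does not state): Ceph upgrades are generally supported across at most two releases at a time, so Luminous should be able to go straight to Nautilus, skipping Mimic, but not further in a single jump.

  # sanity checks around each hop
  ceph versions                          # confirm every daemon runs the same release
  ceph osd require-osd-release nautilus  # after all OSDs run Nautilus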
Istvan Szabo
Senior Infrastructure Engineer
---------------------------------------------------
Agoda Services Co., Ltd.
e: istvan.szabo(a)agoda.com<mailto:istvan.szabo@agoda.com>
---------------------------------------------------