Resending the response to the list.
Zitat von "Lomayani S. Laizer" <lomlaizer@gmail.com>:
Hello,
I have been running Nautilus since May last year, so this is a separate
issue from the recent bug.
I think the problem is between systemd and ceph-volume. Nothing hits the
OSD logs because the OSD doesn't start at all.
Starting the OSD manually works fine (/usr/bin/ceph-osd -f --cluster ceph
--id 29 --setuser ceph --setgroup ceph).
You can see the OSD start just exits with no usable log (RuntimeError:
command returned non-zero exit status: 1).
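When the daemon runs fine by hand but not under systemd, a first step is to ask systemd itself what happened. As a sketch (assuming OSD id 29 and the lvm volume id shown in the logs below):

```shell
# Status of the unit that fails to start (OSD id 29 assumed, as above)
systemctl status ceph-osd@29 --no-pager

# Recent journal entries for the OSD unit itself
journalctl -u ceph-osd@29 --no-pager -n 100

# And for the ceph-volume activation unit that triggers it
journalctl -u ceph-volume@lvm-29-3e52d340-5416-46e6-b697-c15ca85f6883 --no-pager -n 100
```

The journal often captures failure details that never make it into /var/log/ceph when the daemon exits before logging starts.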
Below are the logs from ceph-volume-systemd.log:
Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-3811ddf5-02be-40f1-a53e-053131aa5712/osd-block-3e52d340-5416-46e6-b697-c15ca85f6883 --path /var/lib/ceph/osd/ceph-29 --no-mon-config
Running command: /bin/ln -snf /dev/ceph-3811ddf5-02be-40f1-a53e-053131aa5712/osd-block-3e52d340-5416-46e6-b697-c15ca85f6883 /var/lib/ceph/osd/ceph-29/block
Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-29/block
Running command: /bin/chown -R ceph:ceph /dev/dm-4
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-29
Running command: /bin/systemctl enable ceph-volume@lvm-29-3e52d340-5416-46e6-b697-c15ca85f6883
Running command: /bin/systemctl enable --runtime ceph-osd@29
[2020-04-01 12:17:31,111][ceph_volume.process][INFO ] stderr Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@33.service → /lib/systemd/system/ceph-osd@.service.
[2020-04-01 12:17:31,111][ceph_volume.process][INFO ] stderr Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@29.service → /lib/systemd/system/ceph-osd@.service.
Running command: /bin/systemctl start ceph-osd@29
stderr: Job for ceph-osd@29.service canceled.
--> RuntimeError: command returned non-zero exit status: 1
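systemd reports "Job ... canceled" when a queued start job is cancelled by a conflicting or failed job rather than by the daemon itself, so it may be worth looking at the job queue and the unit's dependencies. A sketch, assuming the same unit name as above:

```shell
# Any conflicting or stuck jobs that could cancel the queued start job
systemctl list-jobs

# Dependency chain of the OSD unit; a failed dependency cancels queued jobs
systemctl list-dependencies ceph-osd@29 --no-pager

# Journal for the current boot, to catch the cancellation reason
journalctl -b -u ceph-osd@29 --no-pager
```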
On Wed, Apr 1, 2020 at 1:22 PM Eugen Block <eblock@nde.ag> wrote:
> Hi,
>
> are you hitting [1]? Did you run Nautilus only for a short period of
> time before upgrading to Octopus?
>
> If this doesn't apply to you, can you see anything in the OSD logs
> (/var/log/ceph/ceph-osd.<ID>.log)?
>
> Regards,
> Eugen
>
> [1] https://tracker.ceph.com/issues/44770
>
> Zitat von "Lomayani S. Laizer" <lomlaizer@gmail.com>:
>
> > Hello,
> > I upgraded a Nautilus cluster to Octopus a few days ago. The cluster
> > was running ok, and even after the upgrade to Octopus everything was
> > running ok.
> >
> > The issue came when I rebooted the servers to update the kernel. On two
> > out of six OSD servers the OSDs can't start. No error is reported in
> > ceph-volume.log or ceph-volume-systemd.log.
> >
> > Starting an OSD with /usr/bin/ceph-osd -f --cluster ceph --id 30
> > --setuser ceph --setgroup ceph works just fine; the issue is starting
> > the OSD via systemd.
> >
> > ceph-volume-systemd.log
> >
> > 16:36:28,193][systemd][WARNING] failed activating OSD, retries left: 30
> > [2020-03-31 16:36:28,196][systemd][WARNING] command returned non-zero exit status: 1
> > [2020-03-31 16:36:28,196][systemd][WARNING] failed activating OSD, retries left: 30
> > [2020-03-31 16:41:25,054][systemd][INFO ] raw systemd input received:
> > lvm-28-7f4113c8-c5cf-4f70-9f7a-7a32de9d6587
> > [2020-03-31 16:41:25,054][systemd][INFO ] raw systemd input received:
> > lvm-30-8a70ad95-1c79-4502-a9a3-d5d7b9df84b6
> > [2020-03-31 16:41:25,054][systemd][INFO ] raw systemd input received:
> > lvm-31-a8efb7db-686b-4789-a9c4-01442c28577f
> > [2020-03-31 16:41:25,096][systemd][INFO ] parsed sub-command: lvm, extra
> > data: 28-7f4113c8-c5cf-4f70-9f7a-7a32de9d6587
> > [2020-03-31 16:41:25,096][systemd][INFO ] parsed sub-command: lvm, extra
> > data: 30-8a70ad95-1c79-4502-a9a3-d5d7b9df84b6
> > [2020-03-31 16:41:25,054][systemd][INFO ] raw systemd input received:
> > lvm-33-7d688fc1-ed7b-45ae-ac0e-7b1787e0b64f
> > [2020-03-31 16:41:25,096][systemd][INFO ] parsed sub-command: lvm, extra
> > data: 31-a8efb7db-686b-4789-a9c4-01442c28577f
> > [2020-03-31 16:41:25,068][systemd][INFO ] raw systemd input received:
> > lvm-29-3e52d340-5416-46e6-b697-c15ca85f6883
> > [2020-03-31 16:41:25,096][systemd][INFO ] parsed sub-command: lvm, extra
> > data: 33-7d688fc1-ed7b-45ae-ac0e-7b1787e0b64f
> > [2020-03-31 16:41:25,068][systemd][INFO ] raw systemd input received:
> > lvm-32-3841a62d-d6bc-404a-8762-163530b2d5d4
> > [2020-03-31 16:41:25,096][systemd][INFO ] parsed sub-command: lvm, extra
> > data: 29-3e52d340-5416-46e6-b697-c15ca85f6883
> > [2020-03-31 16:41:25,096][systemd][INFO ] parsed sub-command: lvm, extra
> > data: 32-3841a62d-d6bc-404a-8762-163530b2d5d4
> > [2020-03-31 16:41:25,108][ceph_volume.process][INFO ] Running command:
> > /usr/sbin/ceph-volume lvm trigger 29-3e52d340-5416-46e6-b697-c15ca85f6883
> >
> > ceph-volume.log
> > [2020-03-31 17:17:23,679][ceph_volume.process][INFO ] Running command:
> > /bin/systemctl enable --runtime ceph-osd@31
> > [2020-03-31 17:17:23,863][ceph_volume.process][INFO ] Running command:
> > /bin/systemctl enable --runtime ceph-osd@30
> > [2020-03-31 17:17:24,045][ceph_volume.process][INFO ] Running command:
> > /bin/systemctl enable --runtime ceph-osd@33
> > [2020-03-31 17:17:24,241][ceph_volume.process][INFO ] Running command:
> > /bin/systemctl enable --runtime ceph-osd@32
> > [2020-03-31 17:17:24,449][ceph_volume.process][INFO ] Running command:
> > /bin/systemctl enable --runtime ceph-osd@28
> > [2020-03-31 17:17:24,629][ceph_volume.process][INFO ] Running command:
> > /bin/systemctl enable --runtime ceph-osd@29
> > [2020-03-31 17:17:24,652][ceph_volume.process][INFO ] stderr Created
> > symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@31.service →
> > /lib/systemd/system/ceph-osd@.service.
> > [2020-03-31 17:17:24,664][ceph_volume.process][INFO ] stderr Created
> > symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@30.service →
> > /lib/systemd/system/ceph-osd@.service.
> > [2020-03-31 17:17:24,872][ceph_volume.process][INFO ] Running command:
> > /bin/systemctl start ceph-osd@31
> > [2020-03-31 17:17:24,875][ceph_volume.process][INFO ] stderr Created
> > symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@33.service →
> > /lib/systemd/system/ceph-osd@.service.
> > [2020-03-31 17:17:25,072][ceph_volume.process][INFO ] stderr Created
> > symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@32.service →
> > /lib/systemd/system/ceph-osd@.service.
> > [2020-03-31 17:17:25,075][ceph_volume.process][INFO ] Running command:
> > /bin/systemctl start ceph-osd@30
> > [2020-03-31 17:17:25,282][ceph_volume.process][INFO ] Running command:
> > /bin/systemctl start ceph-osd@33
> > [2020-03-31 17:17:25,497][ceph_volume.process][INFO ] stderr Created
> > symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@28.service →
> > /lib/systemd/system/ceph-osd@.service.
> > [2020-03-31 17:17:25,499][ceph_volume.process][INFO ] Running command:
> > /bin/systemctl start ceph-osd@32
> > [2020-03-31 17:17:25,520][ceph_volume.process][INFO ] stderr Created
> > symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@29.service →
> > /lib/systemd/system/ceph-osd@.service.
> > [2020-03-31 17:17:25,705][ceph_volume.process][INFO ] Running command:
> > /bin/systemctl start ceph-osd@28
> > [2020-03-31 17:17:25,887][ceph_volume.process][INFO ] Running command:
> > /bin/systemctl start ceph-osd@29
> >
> > --
> > Lomayani
> > _______________________________________________
> > ceph-users mailing list -- ceph-users@ceph.io
> > To unsubscribe send an email to ceph-users-leave@ceph.io
>