Running as the ceph user, not root.
The startup configuration is shown below; it is also available at
https://paste.ubuntu.com/p/2kV8KhrRfV/.
[Unit]
Description=Ceph object storage daemon osd.%i
PartOf=ceph-osd.target
After=network-online.target local-fs.target time-sync.target
Before=remote-fs-pre.target ceph-osd.target
Wants=network-online.target local-fs.target time-sync.target remote-fs-pre.target ceph-osd.target
[Service]
Environment=CLUSTER=ceph
EnvironmentFile=-/etc/default/ceph
ExecReload=/bin/kill -HUP $MAINPID
ExecStart=/usr/bin/ceph-osd -f --cluster ${CLUSTER} --id %i --setuser ceph --setgroup ceph
ExecStartPre=/usr/libexec/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id %i
LimitNOFILE=1048576
LimitNPROC=1048576
LockPersonality=true
MemoryDenyWriteExecute=true
# Need NewPrivileges via `sudo smartctl`
NoNewPrivileges=false
PrivateTmp=true
ProtectClock=true
ProtectControlGroups=true
ProtectHome=true
ProtectHostname=true
ProtectKernelLogs=true
ProtectKernelModules=true
# flushing filestore requires access to /proc/sys/vm/drop_caches
ProtectKernelTunables=false
ProtectSystem=full
Restart=on-failure
RestartSec=10
RestrictSUIDSGID=true
StartLimitBurst=3
StartLimitInterval=30min
TasksMax=infinity
[Install]
WantedBy=ceph-osd.target
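In case one of the sandboxing directives in the [Service] section above is what trips the
daemon under systemd, a debugging sketch (not a confirmed fix; the drop-in path and unit
instance name below are just from my setup, and the drop-in should be reverted afterwards)
is to relax those directives one at a time through a systemd drop-in and restart the OSD:

# /etc/systemd/system/ceph-osd@2.service.d/override.conf  (created with `systemctl edit ceph-osd@2`)
[Service]
# relax the hardening options from the shipped unit, one at a time, for testing only
MemoryDenyWriteExecute=false
LockPersonality=false
ProtectClock=false
ProtectHostname=false
ProtectKernelLogs=false
RestrictSUIDSGID=false
ProtectSystem=false

root@osd03:~# systemctl daemon-reload
root@osd03:~# systemctl restart ceph-osd@2

If the OSD comes up with the relaxed unit, re-enabling the directives one by one should
point at the one that blocks bluestore from opening the block device.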
When I issue the following command manually, the Ceph OSD starts successfully;
however, it fails when launched via systemctl.
root@osd03:~# /usr/bin/ceph-osd -f --cluster ceph --id 2 --setuser ceph --setgroup ceph
2021-04-05T11:24:08.823+0430 7f91772c5f00 -1 osd.2 496 log_to_monitors {default=true}
2021-04-05T11:24:09.943+0430 7f916f7b9700 -1 osd.2 496 set_numa_affinity unable to identify public interface 'ens160' numa node: (0) Success
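Since the foreground run succeeds, the exact error from the systemd-launched attempt
should be in the journal. A minimal set of checks (assuming the unit name ceph-osd@2 and
the OSD paths from this thread) to compare the two environments, no output pasted yet:

root@osd03:~# journalctl -u ceph-osd@2 -b --no-pager -n 50
root@osd03:~# ls -lh /var/lib/ceph/osd/ceph-2/ /var/lib/ceph/osd/ceph-2/block
root@osd03:~# ls -l /dev/dm-1

The ls checks show the ownership and permissions of the OSD directory, the block symlink,
and the underlying LVM device that ceph-volume chowned during activation.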
On Mon, Apr 5, 2021, 10:51 AM Behzad Khoshbakhti <khoshbakhtib(a)gmail.com>
wrote:
running as ceph user
On Mon, Apr 5, 2021, 10:49 AM Anthony D'Atri <anthony.datri(a)gmail.com>
wrote:
> Running as root, or as ceph?
>
> > On Apr 4, 2021, at 3:51 AM, Behzad Khoshbakhti <khoshbakhtib(a)gmail.com>
> wrote:
> >
> > It is worth mentioning that when I issue the following command, the Ceph
> > OSD starts and joins the cluster:
> > /usr/bin/ceph-osd -f --cluster ceph --id 2 --setuser ceph --setgroup ceph
> >
> >
> >
> > On Sun, Apr 4, 2021 at 3:00 PM Behzad Khoshbakhti <
> khoshbakhtib(a)gmail.com>
> > wrote:
> >
> >> Hi all,
> >>
> >> As I upgraded my Ceph cluster from 15.2.10 to 16.2.0 (a manual upgrade
> >> using the precompiled packages), the OSDs went down with the following
> >> messages:
> >>
> >> root@osd03:/var/lib/ceph/osd/ceph-2# ceph-volume lvm activate --all
> >> --> Activating OSD ID 2 FSID 2d3ffc61-e430-4b89-bcd4-105b2df26352
> >> Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
> >> Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-9d37674b-a269-4239-aa9e-66a3c74df76c/osd-block-2d3ffc61-e430-4b89-bcd4-105b2df26352 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
> >> Running command: /usr/bin/ln -snf /dev/ceph-9d37674b-a269-4239-aa9e-66a3c74df76c/osd-block-2d3ffc61-e430-4b89-bcd4-105b2df26352 /var/lib/ceph/osd/ceph-2/block
> >> Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
> >> Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
> >> Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
> >> Running command: /usr/bin/systemctl enable ceph-volume@lvm-2-2d3ffc61-e430-4b89-bcd4-105b2df26352
> >> Running command: /usr/bin/systemctl enable --runtime ceph-osd@2
> >> Running command: /usr/bin/systemctl start ceph-osd@2
> >> --> ceph-volume lvm activate successful for osd ID: 2
> >>
> >> Content of /var/log/ceph/ceph-osd.2.log
> >> 2021-04-04T14:54:56.625+0430 7f4afbac0f00 0 set uid:gid to 64045:64045 (ceph:ceph)
> >> 2021-04-04T14:54:56.625+0430 7f4afbac0f00 0 ceph version 16.2.0 (0c2054e95bcd9b30fdd908a79ac1d8bbc3394442) pacific (stable), process ceph-osd, pid 5484
> >> 2021-04-04T14:54:56.625+0430 7f4afbac0f00 0 pidfile_write: ignore empty --pid-file
> >> 2021-04-04T14:54:56.625+0430 7f4afbac0f00 -1 bluestore(/var/lib/ceph/osd/ceph-2/block) _read_bdev_label failed to open /var/lib/ceph/osd/ceph-2/block: (1) Operation not permitted
> >> 2021-04-04T14:54:56.625+0430 7f4afbac0f00 -1 ** ERROR: unable to open OSD superblock on /var/lib/ceph/osd/ceph-2: (2) No such file or directory
> >>
> >>
> >> root@osd03:/var/lib/ceph/osd/ceph-2# systemctl status ceph-osd@2
> >> ● ceph-osd@2.service - Ceph object storage daemon osd.2
> >>    Loaded: loaded (/lib/systemd/system/ceph-osd@.service; enabled; vendor preset: enabled)
> >>    Active: failed (Result: exit-code) since Sun 2021-04-04 14:55:06 +0430; 50s ago
> >>   Process: 5471 ExecStartPre=/usr/libexec/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id 2 (code=exited, status=0/SUCCESS)
> >>   Process: 5484 ExecStart=/usr/bin/ceph-osd -f --cluster ${CLUSTER} --id 2 --setuser ceph --setgroup ceph (code=exited, status=1/FAILURE)
> >> Main PID: 5484 (code=exited, status=1/FAILURE)
> >>
> >> Apr 04 14:55:06 osd03 systemd[1]: ceph-osd@2.service: Scheduled restart job, restart counter is at 3.
> >> Apr 04 14:55:06 osd03 systemd[1]: Stopped Ceph object storage daemon osd.2.
> >> Apr 04 14:55:06 osd03 systemd[1]: ceph-osd@2.service: Start request repeated too quickly.
> >> Apr 04 14:55:06 osd03 systemd[1]: ceph-osd@2.service: Failed with result 'exit-code'.
> >> Apr 04 14:55:06 osd03 systemd[1]: Failed to start Ceph object storage daemon osd.2.
> >> root@osd03:/var/lib/ceph/osd/ceph-2#
> >>
> >> root@osd03:~# lsblk
> >> NAME                        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
> >> fd0                           2:0    1    4K  0 disk
> >> loop0                         7:0    0 55.5M  1 loop /snap/core18/1988
> >> loop1                         7:1    0 69.9M  1 loop /snap/lxd/19188
> >> loop2                         7:2    0 55.5M  1 loop /snap/core18/1997
> >> loop3                         7:3    0 70.4M  1 loop /snap/lxd/19647
> >> loop4                         7:4    0 32.3M  1 loop /snap/snapd/11402
> >> loop5                         7:5    0 32.3M  1 loop /snap/snapd/11107
> >> sda                           8:0    0   80G  0 disk
> >> ├─sda1                        8:1    0    1M  0 part
> >> ├─sda2                        8:2    0    1G  0 part /boot
> >> └─sda3                        8:3    0   79G  0 part
> >>   └─ubuntu--vg-ubuntu--lv   253:0    0 69.5G  0 lvm  /
> >> sdb                           8:16   0   16G  0 disk
> >> └─sdb1                        8:17   0   16G  0 part
> >>   └─ceph--9d37674b--a269--4239--aa9e--66a3c74df76c-osd--block--2d3ffc61--e430--4b89--bcd4--105b2df26352
> >>                             253:1    0   16G  0 lvm
> >> root@osd03:~#
> >>
> >> root@osd03:/var/lib/ceph/osd/ceph-2# mount | grep -i ceph
> >> tmpfs on /var/lib/ceph/osd/ceph-2 type tmpfs (rw,relatime)
> >> root@osd03:/var/lib/ceph/osd/ceph-2#
> >>
> >> Any help is much appreciated.
> >> --
> >>
> >> Regards
> >> Behzad Khoshbakhti
> >> Computer Network Engineer (CCIE #58887)
> >>
> >>
> >
> > --
> >
> > Regards
> > Behzad Khoshbakhti
> > Computer Network Engineer (CCIE #58887)
> > +989128610474