Can you please run: "cat /sys/kernel/security/apparmor/profiles"? See if
any of the lines have a label but no mode. Let us know what you find!
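A quick way to spot such lines, assuming each well-formed entry has two whitespace-separated fields ("name (mode)"), would be something like the following (the sample input here just stands in for the real file):

```shell
# Print any profile line that does NOT have at least two fields.
# The printf sample stands in for /sys/kernel/security/apparmor/profiles.
printf 'docker-default (enforce)\nsnap.foo.bar\n' | awk 'NF < 2'
```

On a real host you would read the file directly, e.g. `awk 'NF < 2' /sys/kernel/security/apparmor/profiles` (may need root).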
Thanks,
David
On Mon, May 3, 2021 at 8:58 AM Ashley Merrick <ashley(a)amerrick.co.uk> wrote:
Created bug ticket: https://tracker.ceph.com/issues/50616
On Mon May 03 2021 21:49:41 GMT+0800 (Singapore Standard Time), Ashley Merrick <ashley(a)amerrick.co.uk> wrote:
Just checked cluster logs and they are full of:

cephadm exited with an error code: 1, stderr:Reconfig daemon osd.16 ...
Traceback (most recent call last):
  File "/var/lib/ceph/30449cba-44e4-11eb-ba64-dda10beff041/cephadm.17068a0b484bdc911a9c50d6408adfca696c2faaa65c018d660a3b697d119482", line 7931, in <module>
    main()
  File "/var/lib/ceph/30449cba-44e4-11eb-ba64-dda10beff041/cephadm.17068a0b484bdc911a9c50d6408adfca696c2faaa65c018d660a3b697d119482", line 7919, in main
    r = ctx.func(ctx)
  File "/var/lib/ceph/30449cba-44e4-11eb-ba64-dda10beff041/cephadm.17068a0b484bdc911a9c50d6408adfca696c2faaa65c018d660a3b697d119482", line 1717, in _default_image
    return func(ctx)
  File "/var/lib/ceph/30449cba-44e4-11eb-ba64-dda10beff041/cephadm.17068a0b484bdc911a9c50d6408adfca696c2faaa65c018d660a3b697d119482", line 4162, in command_deploy
    c = get_container(ctx, ctx.fsid, daemon_type, daemon_id,
  File "/var/lib/ceph/30449cba-44e4-11eb-ba64-dda10beff041/cephadm.17068a0b484bdc911a9c50d6408adfca696c2faaa65c018d660a3b697d119482", line 2451, in get_container
    volume_mounts=get_container_mounts(ctx, fsid, daemon_type, daemon_id),
  File "/var/lib/ceph/30449cba-44e4-11eb-ba64-dda10beff041/cephadm.17068a0b484bdc911a9c50d6408adfca696c2faaa65c018d660a3b697d119482", line 2292, in get_container_mounts
    if HostFacts(ctx).selinux_enabled:
  File "/var/lib/ceph/30449cba-44e4-11eb-ba64-dda10beff041/cephadm.17068a0b484bdc911a9c50d6408adfca696c2faaa65c018d660a3b697d119482", line 6451, in selinux_enabled
    return (self.kernel_security['type'] == 'SELinux') and \
  File "/var/lib/ceph/30449cba-44e4-11eb-ba64-dda10beff041/cephadm.17068a0b484bdc911a9c50d6408adfca696c2faaa65c018d660a3b697d119482", line 6434, in kernel_security
    ret = _fetch_apparmor()
  File "/var/lib/ceph/30449cba-44e4-11eb-ba64-dda10beff041/cephadm.17068a0b484bdc911a9c50d6408adfca696c2faaa65c018d660a3b697d119482", line 6415, in _fetch_apparmor
    item, mode = line.split(' ')
ValueError: not enough values to unpack (expected 2, got 1)

Traceback (most recent call last):
  File "/usr/share/ceph/mgr/cephadm/serve.py", line 1172, in _remote_connection
    yield (conn, connr)
  File "/usr/share/ceph/mgr/cephadm/serve.py", line 1087, in _run_cephadm
    code, '\n'.join(err)))
orchestrator._interface.OrchestratorError: cephadm exited with an error code: 1, stderr:Reconfig daemon osd.16 ... (followed by the same cephadm traceback as above, ending in the same ValueError)

This is repeated over and over again for each OSD, each time listing "ValueError: not enough values to unpack (expected 2, got 1)".
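The failing line in the traceback, `item, mode = line.split(' ')`, assumes every entry in /sys/kernel/security/apparmor/profiles has exactly two fields ("name (mode)"), so a line with a label but no mode crashes the unpack. A tolerant version of that parse might look like the sketch below (the function name and return shape are illustrative only, not the actual cephadm patch):

```python
def parse_apparmor_profiles(text):
    """Count AppArmor profiles per mode, tolerating lines with no mode.

    A well-formed line looks like "profile-name (enforce)"; some kernels
    emit lines with a label but no mode, which is what broke the strict
    two-value unpack in cephadm.
    """
    counts = {}
    for line in text.splitlines():
        # partition() never raises: a mode-less line yields mode == ''
        item, _, mode = line.strip().partition(' ')
        if not item:
            continue  # skip blank lines
        mode = mode.strip('()') or 'unknown'  # "(enforce)" -> "enforce"
        counts[mode] = counts.get(mode, 0) + 1
    return counts
```

For example, an input of "docker-default (enforce)" plus a bare "snap.foo.bar" line would yield one profile counted as 'enforce' and one as 'unknown', instead of a ValueError.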
> On Mon May 03 2021 17:20:59 GMT+0800 (Singapore Standard Time), Ashley Merrick <ashley(a)amerrick.co.uk> wrote:
> Hello, wondering if anyone has any feedback on some commands I could try
to manually update the current OSD that is down to 16.2.1, so I can at least
get around this upgrade bug and back to 100%? If there are any logs I can
provide, or if it seems to be a new bug and I should create a bug report,
do let me know. Thanks
>> On Fri Apr 30 2021 21:54:30 GMT+0800 (Singapore Standard Time), Ashley Merrick <ashley(a)amerrick.co.uk> wrote:
>> Hello All, I was running 15.2.8 via cephadm on Docker, Ubuntu 20.04. I
just attempted to upgrade to 16.2.1 via the automated method; it
successfully upgraded the mon/mgr/mds and some OSDs, however it then
failed on an OSD and hasn't been able to get past it even after stopping
and restarting the upgrade. It reported the following: ""message": "Error:
UPGRADE_REDEPLOY_DAEMON: Upgrading daemon osd.35 on host sn-s01 failed.""
If I run 'ceph health detail' I get lots of the following error throughout
the detail report: "ValueError: not enough values to unpack (expected 2,
got 1)". Upon googling, it looks like I am hitting something along the
lines of https://tracker.ceph.com/issues/48924 and
https://tracker.ceph.com/issues/49522. What do I need to do to either get
around this bug, or manually upgrade the remaining Ceph OSDs to 16.2.1?
Currently my cluster is working, but the last OSD it failed to upgrade is
currently offline (I guess no image is attached to it now, as it failed to
pull it), and I have a cluster with a mix of 15.2.8 and 16.2.1 OSDs. Thanks
Sent via MXlogin
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io