Hello Gert,
I recreated the self-signed certificate.
SELinux was already disabled, and I temporarily disabled the firewall as well.
It still doesn't work, and journalctl -f shows no entries.
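(For reference, the steps above were roughly the following; a sketch, assuming the standard ceph dashboard, SELinux and firewalld commands:
  ceph dashboard create-self-signed-cert   # recreate the self-signed cert
  ceph mgr module disable dashboard        # reload the dashboard module
  ceph mgr module enable dashboard
  systemctl stop firewalld                 # temporarily disable the firewall
  journalctl -f                            # watch for new entries while reproducing the error
)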
Something must still be left over from the previous Nautilus / CentOS 7 installation that is causing this problem.
I think I'll have to reinstall the node.
I'll keep you updated.
Thanks and kind regards,
Simon
________________________________
From: Gert Wieberdink <gert.wieberdink(a)ziggo.nl>
Sent: Tuesday, 28 April 2020 21:16:10
To: Simon Sutter; ceph-users(a)ceph.io
Subject: Re: [ceph-users] Re: Upgrading to Octopus
Sorry for the typo: it should be journalctl -f instead of syslogctl -f.
-gw
On Tue, 2020-04-28 at 19:12 +0000, Gert Wieberdink wrote:
Hello Simon,
ceph-mgr and dashboard installation should be straightforward.
These are tough ones (internal server error 500). Did you create a self-signed cert for the dashboard? Did you check firewalld (port 8443) and/or SELinux? Does syslogctl -f show anything?
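For example (a rough sketch of those checks, assuming firewalld and the default dashboard port 8443):
  firewall-cmd --list-ports                 # is 8443/tcp open?
  firewall-cmd --permanent --add-port=8443/tcp && firewall-cmd --reload
  getenforce                                # Enforcing / Permissive / Disabled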
rgds,
-gw
On Tue, 2020-04-28 at 12:17 +0000, Simon Sutter wrote:
Hello,
Yes, I upgraded the system to CentOS 8 and now I can install the dashboard module.
But the problem now is that I cannot log in to the dashboard.
I deleted every cached file on my end and reinstalled the mgr and the dashboard several times.
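(Reinstalling looked roughly like this; a sketch, assuming the ceph-mgr-dashboard package name and node1 as the mgr id, taken from the log file name below:
  dnf reinstall ceph-mgr ceph-mgr-dashboard
  systemctl restart ceph-mgr@node1
  ceph mgr module enable dashboard
)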
If I try to log in with a wrong password, it tells me that it's wrong, but if I use the right password, it just gives me a "500 Internal Server Error".
I enabled debug mode for the mgr:
  ceph config set mgr mgr/dashboard/log_level debug
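(To confirm the setting took effect, something like:
  ceph config get mgr mgr/dashboard/log_level
)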
But /var/log/ceph/ceph-mgr.node1.log only shows this generic message (IPs replaced with 0.0.0.0):
2020-04-28T14:11:15.191+0200 7f0baba8c700 0 [dashboard DEBUG request] [::ffff:0.0.0.0:61383] [POST] [None] /api/auth
2020-04-28T14:11:15.282+0200 7f0bcf164700 0 log_channel(cluster) log [DBG] : pgmap v316: 273 pgs: 273 active+clean; 2.4 TiB data, 7.1 TiB used, 18 TiB / 25 TiB avail
2020-04-28T14:11:15.453+0200 7f0baba8c700 0 [dashboard DEBUG controllers.auth] Login successful
2020-04-28T14:11:15.453+0200 7f0baba8c700 0 [dashboard ERROR request] [::ffff:0.0.0.0:61383] [POST] [500] [0.264s] [513.0B] [100ecd9a-5d09-419f-8b9f-31bc3d4042b4] /api/auth
2020-04-28T14:11:15.453+0200 7f0baba8c700 0 [dashboard ERROR request] [b'{"status": "500 Internal Server Error", "detail": "The server encountered an unexpected condition which prevented it from fulfilling the request.", "request_id": "100ecd9a-5d09-419f-8b9f-31bc3d4042b4"}']
2020-04-28T14:11:15.454+0200 7f0baba8c700 0 [dashboard INFO request] [::ffff:0.0.0.0:61383] [POST] [500] [0.264s] [513.0B] [100ecd9a-5d09-419f-8b9f-31bc3d4042b4] /api/auth
How can I find out where the problem is?
Thanks in advance,
Simon
From: Gert Wieberdink <gert.wieberdink(a)ziggo.nl>
Sent: Thursday, 23 April 2020 20:34:58
To: ceph-users(a)ceph.io
Subject: [ceph-users] Re: Upgrading to Octopus
Hello Simon,
I think that Khodayar is right. I managed to install a new Ceph cluster on CentOS 8.1. Therefore you will need the ceph-el8.repo for the time being. For some reason, "they" left the py3 packages you mentioned out of EPEL (as with leveldb, although that package luckily appeared in EPEL last week).
Please find below the ceph-el8.repo file, which you have to create in /etc/yum.repos.d/:
[copr:copr.fedorainfracloud.org:ktdreyer:ceph-el8]
name=Copr repo for ceph-el8 owned by ktdreyer
baseurl=https://download.copr.fedorainfracloud.org/results/ktdreyer/ceph-el8/epel-8-$basearch/
type=rpm-md
skip_if_unavailable=True
gpgcheck=1
gpgkey=https://download.copr.fedorainfracloud.org/results/ktdreyer/ceph-el8/pubkey.gpg
repo_gpgcheck=0
enabled=1
enabled_metadata=1
This repository, together with CentOS 8.x, should be sufficient to bring up a fresh Ceph cluster.
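(A minimal sketch of the install from that repo, assuming dnf on CentOS 8:
  dnf install -y epel-release
  dnf install -y ceph
)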
Please let me know if you still have problems in configuring your
Ceph cluster.
rgds,
-gw
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io