Hello,
Clients and cluster are running Octopus. The only config changed after
upgrading to Octopus is rbd_read_from_replica_policy, set to balance.
Is this a risky configuration? The performance of VMs is really
good now in my HDD-based cluster.
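For reference, a minimal sketch of how this option is typically applied on the client side (e.g. in ceph.conf on the hypervisors); the section placement and rollback value are assumptions, so verify against the Octopus documentation for your setup:

```ini
# ceph.conf on the librbd client / hypervisor side (sketch, not verified
# against this cluster). Valid values are: default | balance | localize.
[client]
# "balance" spreads reads across replicas instead of always reading
# from the primary OSD; setting it back to "default" reverts to
# primary-only reads if problems appear.
rbd_read_from_replica_policy = balance
```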
On Mon, Apr 6, 2020 at 5:17 PM Jason Dillaman <jdillama(a)redhat.com> wrote:
On Mon, Apr 6, 2020 at 3:55 AM Lomayani S. Laizer
<lomlaizer(a)gmail.com>
wrote:
Hello,
After upgrading our Ceph cluster to Octopus a few days ago we are seeing VM
crashes with the error below. We are using Ceph with OpenStack (Rocky).
Everything is running Ubuntu 18.04 with kernel 5.3. We are seeing these
crashes on busy VMs. This cluster was upgraded from
Nautilus.
Just for clarity, have the hypervisor hosts been upgraded to Octopus
clients or was this just a cluster upgrade to Octopus and the clients
are still running an older version?
kernel: [430751.176904] fn-radosclient[3905]: segfault at da0801 ip
00007fe78e076686 sp 00007fe7697f9470 error 4 in
librbd.so.1.12.0[7fe78de73000+5cb000]
Apr 6 03:26:00 compute6 kernel: [430751.176922] Code: 00 64 48 8b 04
25 28 00 00 00 48 89 44 24 18 31 c0 48 85 db 0f 84 fa 00 00 00 80
bf 38 01 00 00 00 48 89 fd 0f 84 ea 00 00 00 <83> bb 20 3f 00 00 ff
0f 84 dd 00 00 00 48 8b 83 18 3f 00 00 48 8d
Apr 6 03:26:11 compute6 libvirtd[1671]: 2020-04-06 03:26:11.955+0000:
1671: error : qemuMonitorIO:719 : internal error: End of file from
qemu monitor
--
Jason