On Thu, May 14, 2020 at 9:59 AM David Galloway <dgallowa(a)redhat.com> wrote:
On 5/13/20 3:20 PM, Casey Bodley wrote:
On Wed, May 13, 2020 at 1:10 PM David Galloway
<dgallowa(a)redhat.com> wrote:
On 5/13/20 12:28 PM, Casey Bodley wrote:
On Wed, May 13, 2020 at 11:35 AM Yuri Weinstein
<yweinste(a)redhat.com> wrote:
>
> Details of this release summarized here:
>
https://tracker.ceph.com/issues/45455#note-1
>
> rados - approved Neha?
> rgw - approved Casey?
pretty much all of the failures and dead jobs are due to 'No space
left on device' errors. can we find a way to address those?
We'd have to get qty 205 larger hard drives for the smithi.
I see we're downloading a 134GB file as part of a test. Can we stop
doing that?
2020-05-09T22:05:32.971 INFO:teuthology.orchestra.run.smithi049.stderr: 92900K 100% 134G=1.0s
that's downloading gradle-6.0.1-bin.zip, which is a 92M file
Welp, that's embarrassing.
Is it possible to logrotate more often? I think there's stuff in the
/qa dir in ceph.git that does that during tests.
sorry, i remember working around these ENOSPC failures for master in
https://github.com/ceph/ceph/pull/34253. i just cherry-picked that for
octopus in
https://github.com/ceph/ceph/pull/35067. the root cause is
too much debug logging - maybe logrotate would help
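For reference, an aggressive rotation policy for test-node logs might look something like the sketch below. This is illustrative only — the paths, size threshold, and retention count are assumptions, not the actual logrotate handling in ceph.git's /qa dir:

```conf
# Illustrative sketch: rotate Ceph logs aggressively to avoid ENOSPC
# on small test-node disks. Thresholds are assumptions, not the real
# ceph.git/qa configuration.
/var/log/ceph/*.log {
    size 500M          # rotate as soon as a log exceeds 500 MB
    rotate 3           # keep only a few old logs on the small smithi disks
    compress
    missingok
    notifempty
    sharedscripts
    postrotate
        # ask the daemons to reopen their log files (SIGHUP)
        killall -q -1 ceph-mon ceph-mgr ceph-mds ceph-osd || true
    endscript
}
```

Running `logrotate` from cron more frequently than the default daily interval would be needed for a size trigger like this to help mid-test.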
> rbd - PASSED
> krbd - approved Jason, Ilya?
> fs - PASSED
> kcephfs - approved Patrick?
> multimds - approved Patrick?
> ceph-deploy - N/A
>
> upgrade/mimic-x (octopus) - PASSED
> upgrade/nautilus-x (octopus) - PASSED
> upgrade/octopus-p2p - PASSED
> upgrade/client-upgrade-luminous-octopus - PASSED
> upgrade/client-upgrade-mimic-octopus - PASSED
> upgrade/client-upgrade-nautilus-octopus - PASSED
> powercycle - in progress
> ceph-ansible - approved Brad?
> ceph-volume - approved Jan? (ceph-ansible bug)
>
> (please speak up if something is missing)
>
> sepia had been upgraded to this point release and performs well.
>
> Thx
> YuriW
> _______________________________________________
> Dev mailing list -- dev(a)ceph.io
> To unsubscribe send an email to dev-leave(a)ceph.io
>