Zhenshi,
I've been doing the same periodically over the past couple weeks. I
haven't had to do it a second time on any of my OSDs, but I'm told that I
can expect to do so in the future. I believe the conclusion on this list
was that, for a workload with many small files, it might be necessary to
allocate 300 GB of DB per OSD.
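As I understand it (worth verifying against your version's defaults), that
number comes from RocksDB's level sizing rather than the raw partition
size: a level only lives on the DB device if it fits there entirely, and
the default level targets work out to roughly

    L1: ~300 MB
    L2:   ~3 GB
    L3:  ~30 GB
    L4: ~300 GB

so a 72 GB DB partition effectively behaves like a 30 GB one, and the next
size that buys you anything is around 300 GB.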
-Dave
--
Dave Hall
Binghamton University
kdhall(a)binghamton.edu
On Mon, Nov 16, 2020 at 12:41 AM Zhenshi Zhou <deaderzzs(a)gmail.com> wrote:
Well, the warning message disappeared after I executed
"ceph tell osd.63 compact".
Zhenshi Zhou <deaderzzs(a)gmail.com> wrote on Mon, Nov 16, 2020 at 10:04 AM:
Has anyone met this issue yet?
Zhenshi Zhou <deaderzzs(a)gmail.com> wrote on Sat, Nov 14, 2020 at 12:36 PM:
Hi,
I have a cluster running 14.2.8.
I created the OSDs with a dedicated PCIe device for wal/db when I deployed
the cluster, setting 72 GB for db and 3 GB for wal on each OSD.
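(For reference, a sketch of the sort of ceph-volume invocation that lays
OSDs out this way; the device paths and LV names below are placeholders,
not the exact ones used here:

    ceph-volume lvm create --data /dev/sdb \
        --block.db ceph-db/db-sdb \
        --block.wal ceph-wal/wal-sdb

with the db and wal logical volumes sized to 72 GB and 3 GB respectively.)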
And now the cluster has been in a WARN state for a long time.
# ceph health detail
HEALTH_WARN BlueFS spillover detected on 1 OSD(s)
BLUEFS_SPILLOVER BlueFS spillover detected on 1 OSD(s)
osd.63 spilled over 33 MiB metadata from 'db' device (1.5 GiB used
of 72 GiB) to slow device
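One way to see the underlying numbers, assuming access to the OSD's admin
socket on its host (and jq for readability):

    # the bluefs section reports db_used_bytes, slow_used_bytes, etc.
    ceph daemon osd.63 perf dump | jq .bluefs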
I looked this up on Google and found
https://tracker.ceph.com/issues/38745
I'm not sure if it's the same issue.
How can I deal with this?
THANKS
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io