I finally figured out this problem: swap memory was being assigned to the OSD processes for some reason (even though vm.swappiness was already set to 0), which degraded KV performance. I restarted the OSDs and switched swap off. The warnings now seem to have disappeared from the OSD logs.
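In case it helps anyone hitting the same issue, this is roughly how the swapped-out OSDs can be confirmed (a sketch; PIDs and swap usage will differ per node, and the final swapoff needs root):

```shell
# vm.swappiness was already 0, yet the OSDs still had pages in swap
cat /proc/sys/vm/swappiness

# Per-process swap usage for each ceph-osd process
for pid in $(pgrep ceph-osd); do
  printf 'pid %s: ' "$pid"
  grep VmSwap "/proc/$pid/status" || echo "VmSwap not reported"
done

# Fix: turn swap off entirely (requires root), then restart the OSDs
# swapoff -a
```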
On Mar 4, 2020, at 11:08 AM, Xu Yun <yunxu(a)me.com> wrote:
Hi,
Our cluster (14.2.6) has shown sporadic slow ops warnings since upgrading from Jewel a month ago. Today I checked the OSD log files and found a lot of entries like:
ceph-osd.5.log:2020-03-04 10:33:31.592 7f18ca41f700 0
bluestore(/var/lib/ceph/osd/ceph-5) log_latency_fn slow operation observed for
_txc_committed_kv, latency = 5.16871s, txc = 0x55e33ae41b80
ceph-osd.5.log:2020-03-04 10:33:31.592 7f18ca41f700 0
bluestore(/var/lib/ceph/osd/ceph-5) log_latency_fn slow operation observed for
_txc_committed_kv, latency = 5.15158s, txc = 0x55e3639b3340
ceph-osd.5.log:2020-03-04 10:33:31.592 7f18ca41f700 0
bluestore(/var/lib/ceph/osd/ceph-5) log_latency_fn slow operation observed for
_txc_committed_kv, latency = 6.77361s, txc = 0x55e3379cc840
ceph-osd.5.log:2020-03-04 10:33:52.666 7f18ca41f700 0
bluestore(/var/lib/ceph/osd/ceph-5) log_latency_fn slow operation observed for
_txc_committed_kv, latency = 5.42519s, txc = 0x55e33722d600
or
/var/log/kolla/ceph/ceph-osd.7.log:2020-03-04 00:41:31.110 7f3dc0bc8700 0
bluestore(/var/lib/ceph/osd/ceph-7) log_latency slow operation observed for
submit_transact, latency = 8.1279s
/var/log/kolla/ceph/ceph-osd.7.log:2020-03-04 00:41:31.110 7f3dd1bea700 0
bluestore(/var/lib/ceph/osd/ceph-7) log_latency slow operation observed for kv_final,
latency = 7.88786s
/var/log/kolla/ceph/ceph-osd.7.log:2020-03-04 02:21:35.180 7f3dd1bea700 0
bluestore(/var/lib/ceph/osd/ceph-7) log_latency slow operation observed for kv_final,
latency = 6.06171s
/var/log/kolla/ceph/ceph-osd.7.log:2020-03-04 05:31:30.298 7f3dc1bca700 0
bluestore(/var/lib/ceph/osd/ceph-7) log_latency slow operation observed for
submit_transact, latency = 5.34228s
The cluster setup is: SATA SSD (as DB) + SATA HDD, in a 1:3 ratio.
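To get a feel for how frequent and how bad these slow ops are per OSD, a quick log scan like the following can help (a sketch; the log directory matches the kolla path in the excerpts above and may differ on your deployment):

```shell
# Count "slow operation observed" entries per OSD log
for f in /var/log/kolla/ceph/ceph-osd.*.log; do
  printf '%s: %s slow ops\n' "$f" "$(grep -c 'slow operation observed' "$f")"
done

# Extract the latency values and list the worst offenders
grep -h 'slow operation observed' /var/log/kolla/ceph/ceph-osd.*.log \
  | sed -n 's/.*latency = \([0-9.]*\)s.*/\1/p' \
  | sort -rn | head
```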
Any suggestions on how to debug this problem? Thank you!
br,
Xu Yun
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io