Sorry, that should have been Wido/Stefan
Another question: how do I use this ceph-kvstore-tool to compact the
rocksdb? (I can't find many examples.)
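If I read the help output correctly, the invocation would be something like
the following (with the OSD stopped first), but I would like to confirm
that 'bluestore-kv' and the mounted OSD directory are the right arguments,
and that the OSD really has to be down for it:

    systemctl stop ceph-osd@174
    ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-174 compact
    systemctl start ceph-osd@174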
The WAL and DB are on a separate NVMe. The directory structure for an OSD
looks like:
root@se-rc3-st8vfr2t2:/var/lib/ceph/osd# ls -l ceph-174
total 24
lrwxrwxrwx 1 ceph ceph 93 Aug 27 10:12 block -> /dev/ceph-97d39775-65ef-41a6-a9fe-94a108c0816d/osd-block-7f83916e-7250-4935-89af-d678a9bb9f29
lrwxrwxrwx 1 ceph ceph 27 Aug 27 10:12 block.db -> /dev/ceph-db-nvme0n1/db-sdd
-rw------- 1 ceph ceph 37 Aug 27 10:12 ceph_fsid
-rw------- 1 ceph ceph 37 Aug 27 10:12 fsid
-rw------- 1 ceph ceph 57 Aug 27 10:12 keyring
-rw------- 1 ceph ceph 6 Aug 27 10:12 ready
-rw------- 1 ceph ceph 10 Aug 27 10:12 type
-rw------- 1 ceph ceph 4 Aug 27 10:12 whoami
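In case it is relevant: the way I have been checking how much of that NVMe
the DB actually uses is via the bluefs perf counters (assuming I am reading
the right ones), e.g.

    ceph daemon osd.174 perf dump bluefs | grep -E 'db_total_bytes|db_used_bytes|slow_used_bytes'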
Kind Regards
Marcel Kuiper
Hi Wido/Joost
pg_num is 64. It is not that we use 'rados ls' for operations; we just
noticed the difference that on this cluster it takes about 15 seconds to
return on pool .rgw.root or rc3-se.rgw.buckets.index, while our other
clusters return almost instantaneously.
Is there a way I can determine from statistics that manual compaction
might help (besides doing the compaction and noticing the difference in
behaviour)? Any pointers for investigating this further would be much
appreciated.
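The only things I could come up with myself, assuming I am looking at the
right counters and commands, are the rocksdb section of the perf counters
and the key/value histogram from the admin socket:

    ceph daemon osd.174 perf dump rocksdb
    ceph daemon osd.174 calc_objectstore_db_histogram

but I do not know which values would actually indicate that compaction is
worthwhile.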
Is there any operational impact to be expected when compacting manually?
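For instance, there also seems to be an online compaction through the admin
socket (ceph daemon osd.174 compact, if I understand it correctly); would
that, or the offline ceph-kvstore-tool route, noticeably slow down client
I/O on that OSD while it runs?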
Kind Regards
Marcel Kuiper
> On 26/08/2020 15:59, Stefan Kooman wrote: