Hi,

Just want to post an update here: the object count has decreased to 4 now.
I don't know whether it was simply a matter of time or whether the system
reboot brought it back to normal. All nodes were rebooted after scheduled
system updates, but I forgot to jot down the object counts before the
maintenance.

Anyway, the issue is fixed. Thanks everyone, especially Eugen.
Regs,
Icy
On Thu, 21 May 2020 at 08:33, icy chan <icy.kf.chan(a)gmail.com> wrote:
Hi Eugen,
Thanks for the suggestion. The object count of the rbd pool still stays at
430.11K (all images were deleted 3+ days ago).
I will keep monitoring it and post the results here.
Regs,
Icy
On Wed, 20 May 2020 at 15:12, Eugen Block <eblock(a)nde.ag> wrote:
> The rbd_info, rbd_directory objects will remain until you delete the
> pool, you don't need to clean that up, e.g. if you decide to create
> new rbd images in there.
> The number of remaining objects usually decreases slowly, depending on
> the amount of data that was deleted. Just last week I deleted a 2 TB
> rbd image from a pool and it took two days until the objects were
> cleaned up, so this is nothing unusual. Just watch the number from time
> to time and update this thread in case it doesn't decrease.
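>
> A minimal sketch for keeping an eye on it (assuming your pool is still
> named rbd; this just logs the pool's "rados df" row with a timestamp
> every 10 minutes):
>
> while true; do
>     echo "$(date '+%F %T') $(rados -p rbd df | grep '^rbd ')"
>     sleep 600
> done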
>
>
> Zitat von icy chan <icy.kf.chan(a)gmail.com>:
>
> > Hi Eugen,
> >
> > Thanks for your reply.
> >
> > The problem is that all rbd images were removed from the pool rbd days
> > ago, i.e. both commands below return empty output:
> > $ rbd ls rbd
> > $ rados -p rbd listomapkeys rbd_directory
> >
> > But rados df below still shows 430K objects. Are there any other
> > methods I can use to dig out those ghost objects?
> >
> > $ rados -p rbd df
> > POOL_NAME  USED    OBJECTS  CLONES  COPIES   MISSING_ON_PRIMARY  UNFOUND  DEGRADED  RD_OPS     RD       WR_OPS    WR      USED COMPR  UNDER COMPR
> > rbd        18 MiB  430107   0       1290321  0                   0        0         141454780  6.9 TiB  42395431  11 TiB  0 B         0 B
> >
> > $ rados -p rbd ls | while read obj; do echo "- "$obj; \
> >     rados -p rbd stat $obj; rados -p rbd listomapkeys $obj; echo; done
> > - gateway.conf
> > rbd/gateway.conf mtime 2020-05-18 11:15:30.000000, size 8869
> >
> > - rbd_directory
> > rbd/rbd_directory mtime 2020-05-18 10:59:50.000000, size 0
> >
> > - rbd_info
> > rbd/rbd_info mtime 2020-04-16 15:07:42.000000, size 19
> >
> > - rbd_trash
> > rbd/rbd_trash mtime 2020-05-18 11:15:30.000000, size 0
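> >
> > For completeness, the trash can also be listed directly; a minimal
> > check, assuming the pool name rbd:
> >
> > $ rbd trash ls rbd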
> >
> >
> > Regs,
> > Icy
> >
> >
> >
> > On Tue, 19 May 2020 at 14:57, Eugen Block <eblock(a)nde.ag> wrote:
> >
> >> That's not wrong, those are expected objects that contain information
> >> about your rbd images. If you take a look into the rbd_directory
> >> (while you have images in there) you'll find something like this:
> >>
> >> host:~ $ rados -p pool listomapkeys rbd_directory
> >>
> >> id_fe976bcfb968bf
> >> id_ffc37728edbdab
> >> name_01673d5d-4b12-4a44-8793-403581f7d808_disk
> >> name_01673d5d-4b12-4a44-8793-403581f7d808_disk.config
> >> name_volume-8a1a0825-1163-44bc-abe2-1a711daea07b
> >>
> >>
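> >> Each name_* key maps to the image's internal id (and each id_* key
> >> back to the name). If you want to peek at one of those mappings, a
> >> sketch (the value is a binary-encoded string, so expect a length
> >> prefix in the hexdump):
> >>
> >> host:~ $ rados -p pool getomapval rbd_directory \
> >>     name_volume-8a1a0825-1163-44bc-abe2-1a711daea07b
> >>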
> >> The rbd ls command reads from the rbd_directory object; here is an
> >> excerpt from the rbd man page:
> >>
> >> ---snip---
> >> ls [-l | --long] [pool-name]
> >>
> >> Will list all rbd images listed in the rbd_directory object.
> >> ---snip---
> >>
> >>
> >> The gateway.conf is your iSCSI gateway configuration stored in the
> >> cluster.
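> >>
> >> If you want to inspect its contents, a minimal sketch (dumps the
> >> object to a local file; the path is just an example):
> >>
> >> host:~ $ rados -p rbd get gateway.conf /tmp/gateway.conf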
> >>
> >>
> >> Zitat von icy chan <icy.kf.chan(a)gmail.com>:
> >>
> >> > Hi,
> >> >
> >> > The object counts from "rados df" and "rados ls" are different in
> >> > my testing environment. I think there may be some zero-byte or
> >> > unclean objects left, since I removed all rbd images on top of it a
> >> > few days ago.
> >> > How can I make it right / find out where those ghost objects are?
> >> > Or should I ignore it, since the numbers are not that high?
> >> >
> >> > $ rados -p rbd df
> >> > POOL_NAME  USED    OBJECTS  CLONES  COPIES   MISSING_ON_PRIMARY  UNFOUND  DEGRADED  RD_OPS     RD       WR_OPS    WR      USED COMPR  UNDER COMPR
> >> > rbd        18 MiB  430107   0       1290321  0                   0        0         141243877  6.9 TiB  42395431  11 TiB  0 B         0 B
> >> >
> >> > $ rados -p rbd ls | wc -l
> >> > 4
> >> >
> >> > $ rados -p rbd ls
> >> > gateway.conf
> >> > rbd_directory
> >> > rbd_info
> >> > rbd_trash
> >> >
> >> > Regs,
> >> > Icy