Hi Ilya,
OK, I've migrated the ceph-dev image to a separate EC pool for rbd, and now
the backup works fine again.
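For reference, a dedicated EC data pool for rbd can be set up roughly like
this (a minimal sketch; pool name, PG counts and EC profile are just
placeholders, not necessarily what I used). The image metadata stays in the
replicated rbd pool; only the data objects go to the EC pool via --data-pool:

# create the erasure-coded data pool
ceph osd pool create rbd_ecpool 32 32 erasure
# rbd needs overwrite support on EC pools
ceph osd pool set rbd_ecpool allow_ec_overwrites true
# tag the pool for rbd use
ceph osd pool application enable rbd_ecpool rbd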
root@zephir:~# umount /opt/ceph-dev
root@zephir:~# rbd unmap ceph-dev
root@zephir:~# rbd migration prepare --data-pool rbd_ecpool ceph-dev
root@zephir:~# rbd migration execute ceph-dev
Image migration: 100% complete...done.
root@zephir:~# rbd migration commit ceph-dev
Commit image migration: 100% complete...done.
root@zephir:~# rbd map ceph-dev
/dev/rbd1
root@zephir:~# mount /opt/ceph-dev/
root@zephir:~# ls -l /opt/ceph-dev/
<files are there>
root@zephir:~# rbd snap create ceph-dev@backup
Creating snap: 100% complete...done.
root@zephir:~# rbd snap ls ceph-dev
SNAPID  NAME                                     SIZE    PROTECTED  TIMESTAMP
     4  ceph-dev_2023-03-05T02:00:09.030+01:00   10 GiB             Wed Apr 19 18:41:39 2023
     5  ceph-dev_2023-03-06T02:00:03.832+01:00   10 GiB             Wed Apr 19 18:41:40 2023
     6  ceph-dev_2023-04-05T03:22:01.315+02:00   10 GiB             Wed Apr 19 18:41:41 2023
     7  ceph-dev_2023-04-05T03:35:56.748+02:00   10 GiB             Wed Apr 19 18:41:45 2023
     8  ceph-dev_2023-04-05T03:37:23.778+02:00   10 GiB             Wed Apr 19 18:41:46 2023
     9  ceph-dev_2023-04-06T02:00:06.159+02:00   10 GiB             Wed Apr 19 18:41:47 2023
    10  ceph-dev_2023-04-07T02:00:05.913+02:00   10 GiB             Wed Apr 19 18:41:50 2023
    11  ceph-dev_2023-04-08T02:00:06.534+02:00   10 GiB             Wed Apr 19 18:41:51 2023
    12  ceph-dev_2023-04-09T02:00:06.430+02:00   10 GiB             Wed Apr 19 18:41:52 2023
    13  ceph-dev_2023-04-11T02:00:09.750+02:00   10 GiB             Wed Apr 19 18:41:53 2023
    14  ceph-dev_2023-04-12T02:00:09.528+02:00   10 GiB             Wed Apr 19 18:41:54 2023
    15  backup                                   10 GiB             Wed Apr 19 18:50:04 2023
root@zephir:~#
root@zephir:~# rbd info ceph-dev
rbd image 'ceph-dev':
        size 10 GiB in 2560 objects
        order 22 (4 MiB objects)
        snapshot_count: 12
        id: 26027367d55572
        data_pool: rbd_ecpool
        block_name_prefix: rbd_data.7.26027367d55572
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten, data-pool
        op_features:
        flags:
        create_timestamp: Wed Apr 19 18:41:38 2023
        access_timestamp: Wed Apr 19 18:41:38 2023
        modify_timestamp: Wed Apr 19 18:41:38 2023
root@zephir:~#
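As a quick sanity check that the data objects now live in the EC pool, one
can grep for the image id from the rbd info output above in the new pool
(just a sketch; head is only there to limit the output):

rados -p rbd_ecpool ls | grep 26027367d55572 | head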
Thank you very much.
So I will wait for feedback from Venky or Patrick on whether the 2 cephfs
file systems should use different EC pools.
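If separate pools per filesystem turn out to be the recommendation, I guess
attaching a dedicated EC data pool to one of the filesystems and pointing a
directory at it would look roughly like this (a sketch; the pool name
ecpool_backups is made up, the filesystem and path are just the ones from
the output quoted below):

ceph osd pool create ecpool_backups 32 32 erasure
ceph osd pool set ecpool_backups allow_ec_overwrites true
ceph fs add_data_pool backups ecpool_backups
setfattr -n ceph.dir.layout.pool -v ecpool_backups /mnt/backups/windows/windows-drives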
Thanks & Cheers
Reto
On Wed, Apr 19, 2023 at 6:04 PM Ilya Dryomov <idryomov(a)gmail.com> wrote:
> On Wed, Apr 19, 2023 at 5:57 PM Reto Gysi <rlgysi(a)gmail.com> wrote:
> >
> >
> > Hi,
> >
> > On Wed, Apr 19, 2023 at 11:02 AM Ilya Dryomov <idryomov(a)gmail.com> wrote:
> >>
> >> On Wed, Apr 19, 2023 at 10:29 AM Reto Gysi <rlgysi(a)gmail.com> wrote:
> >> >
> >> > Yes, I used the same ecpool_hdd also for the cephfs file systems. The
> >> > new pool ecpool_test I created for a test; I also created it with
> >> > application profile 'cephfs', but there isn't any cephfs filesystem
> >> > attached to it.
> >>
> >> This is not and has never been supported.
> >
> >
> > Do you mean 1) using the same erasure-coded pool for both rbd and
> > cephfs, or 2) multiple cephfs filesystems using the same erasure-coded
> > pool via ceph.dir.layout.pool="ecpool_hdd"?
>
> (1), using the same EC pool for both RBD and CephFS.
>
> >
> > 1)
> >
> >
> > 2)
> > rgysi cephfs filesystem
> > rgysi - 5 clients
> > =====
> > RANK  STATE   MDS                  ACTIVITY    DNS   INOS  DIRS   CAPS
> >  0    active  rgysi.debian.uhgqen  Reqs: 0 /s  409k  408k  40.8k  16.5k
> > POOL TYPE USED AVAIL
> > cephfs.rgysi.meta metadata 1454M 2114G
> > cephfs.rgysi.data data 4898G 17.6T
> > ecpool_hdd data 29.3T 29.6T
> >
> > root@zephir:~# getfattr -n ceph.dir.layout /home/rgysi/am/ecpool/
> > getfattr: Removing leading '/' from absolute path names
> > # file: home/rgysi/am/ecpool/
> > ceph.dir.layout="stripe_unit=4194304 stripe_count=1 object_size=4194304 pool=ecpool_hdd"
> >
> > root@zephir:~#
> >
> > backups cephfs filesystem
> > backups - 2 clients
> > =======
> > RANK  STATE   MDS                    ACTIVITY    DNS   INOS  DIRS   CAPS
> >  0    active  backups.debian.runngh  Reqs: 0 /s  253k  253k  21.3k  899
> > POOL TYPE USED AVAIL
> > cephfs.backups.meta metadata 1364M 2114G
> > cephfs.backups.data data 16.7T 16.4T
> > ecpool_hdd data 29.3T 29.6T
> >
> > root@zephir:~# getfattr -n ceph.dir.layout /mnt/backups/windows/windows-drives/
> > getfattr: Removing leading '/' from absolute path names
> > # file: mnt/backups/windows/windows-drives/
> > ceph.dir.layout="stripe_unit=4194304 stripe_count=1 object_size=4194304 pool=ecpool_hdd"
> >
> > root@zephir:~#
> >
> >
> >
> > So I guess I should use a different EC data pool for rbd and for each
> > of the cephfs filesystems in the future, correct?
>
> Definitely a different EC pool for RBD (i.e. don't mix with CephFS).
> Not sure about the _each_ of the filesystems bit -- Venky or Patrick can
> comment on whether sharing an EC pool between filesystems is OK.
>
> Thanks,
>
> Ilya
>