Thank you Glen and Frank for sharing your experiences!


Cheers

Francois



--


EveryWare AG
François Scheurer
Senior Systems Engineer
Zurlindenstrasse 52a
CH-8003 Zürich

tel: +41 44 466 60 00
fax: +41 44 466 60 10
mail: francois.scheurer@everyware.ch
web: http://www.everyware.ch



From: Frank Schilder <frans@dtu.dk>
Sent: Saturday, January 9, 2021 12:10 PM
To: Glen Baars; Scheurer François; ceph-users@ceph.io
Subject: Re: performance impact by pool deletion?
 
Hi all,

I deleted a ceph fs data pool (EC 8+2) of size 240TB with about 150M objects and it had no observable impact at all. Client IO and admin operations worked just like before. In fact, I was surprised how fast it went and how fast the capacity became available again. It was probably just a few days, but I don't remember the exact times any more.

My version back then was Mimic 13.2.8. All OSDs had collocated WAL/DB, everything on spindles. My impression from reports on this list is that this started becoming a problem with changes made in Nautilus.
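
If you are on Nautilus or later, there are osd_delete_sleep* options that throttle the background PG/object deletion on the OSDs, which might help limit the impact. Just a hedged pointer, I have not tuned these myself on your version, so please verify the option names against your release first:

# assumption: osd_delete_sleep_ssd exists in your release; check with "ceph config help osd_delete_sleep_ssd" first
ceph config set osd osd_delete_sleep_ssd 1    # seconds to sleep between deletion transactions on SSD OSDs
# there are also osd_delete_sleep_hdd / osd_delete_sleep_hybrid variants for other device classes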

Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Glen Baars <glen@onsitecomputers.com.au>
Sent: 09 January 2021 08:15:51
To: Scheurer François; ceph-users@ceph.io
Subject: [ceph-users] Re: performance impact by pool deletion?

I deleted a 240TB RGW pool a few weeks ago and it caused a huge slowdown. Luckily it wasn't an important cluster, otherwise it would have taken it down for a week.

From: Scheurer François <francois.scheurer@everyware.ch>
Sent: Wednesday, 6 January 2021 11:32 PM
To: ceph-users@ceph.io
Subject: [ceph-users] performance impact by pool deletion?


Hi everybody





Does anybody have experience with significant performance degradation during a pool deletion?



We are asking because we are going to delete a 370 TiB pool with 120 M objects, and we have never done this before.

The pool is using erasure coding 8+2 on NVMe SSDs, with RocksDB/WAL on NVMe Optane disks.

OpenStack VMs are running on the other RBD pools.
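
The deletion itself would be the standard pool-level delete, roughly like this (a sketch of what we plan to run; the config step may differ depending on how mon_allow_pool_delete is managed in your setup):

# pool deletion is disabled by default and must be enabled on the mons first
ceph config set mon mon_allow_pool_delete true
# the pool name must be given twice, plus the confirmation flag
ceph osd pool delete ch-zh1-az1.rgw.buckets.data ch-zh1-az1.rgw.buckets.data --yes-i-really-really-mean-it
# re-disable pool deletion afterwards
ceph config set mon mon_allow_pool_delete false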



Thank you in advance for your feedback!



Cheers

Francois



PS:

this is not an option, as it would take about 100 years to complete ;-) :
rados -p ch-zh1-az1.rgw.buckets.data ls | while read i; do rados -p ch-zh1-az1.rgw.buckets.data rm "$i"; done
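
Even a parallelized variant of this loop would presumably still be far too slow for 120 M objects; an untested sketch, for comparison only:

# untested: still one rados process per object name, just 16 of them running in parallel
rados -p ch-zh1-az1.rgw.buckets.data ls \
  | xargs -d '\n' -n 1 -P 16 rados -p ch-zh1-az1.rgw.buckets.data rm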






--


EveryWare AG
François Scheurer
Senior Systems Engineer
Zurlindenstrasse 52a
CH-8003 Zürich

tel: +41 44 466 60 00
fax: +41 44 466 60 10
mail: francois.scheurer@everyware.ch
web: http://www.everyware.ch
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-leave@ceph.io