What Kingston SSD model?
=== START OF INFORMATION SECTION ===
Model Family: SandForce Driven SSDs
Device Model: KINGSTON SE50S3100G
Serial Number: xxxxxxxxxxxxxxxx
LU WWN Device Id: xxxxxxxxxxxxxxxx
Firmware Version: 611ABBF0
User Capacity: 100,030,242,816 bytes [100 GB]
Sector Size: 512 bytes logical/physical
Rotation Rate: Solid State Device
Form Factor: 2.5 inches
Device is: In smartctl database [for details use: -P show]
ATA Version is: ATA8-ACS, ACS-2 T13/2015-D revision 3
SATA Version is: SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Tue Aug 4 14:31:36 2020 MSK
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
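For reference, the info section above is what `smartctl -i /dev/sdX` prints (sdX is a placeholder for the journal device; root is usually required). A minimal sketch for pulling just the model out of such output, here run against a two-line sample copied from the section above rather than a live device:

```shell
# Sample smartctl info lines copied from the output above.
sample='Device Model:     KINGSTON SE50S3100G
Firmware Version: 611ABBF0'

# Field separator: a colon followed by any run of spaces.
model=$(printf '%s\n' "$sample" | awk -F':[ ]*' '/^Device Model/{print $2}')
echo "$model"   # KINGSTON SE50S3100G
```

On a live system the same awk filter can be appended directly: `smartctl -i /dev/sdX | awk -F':[ ]*' '/^Device Model/{print $2}'`.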
Tue, Aug 4, 2020 at 14:17, Eneko Lacunza <elacunza(a)binovo.es>:
> Hi Vladimir,
>
> What Kingston SSD model?
>
> El 4/8/20 a las 12:22, Vladimir Prokofev escribió:
> > Here's some more insight into the issue.
> > Looks like the load is triggered because of a snaptrim operation. We have a
> > backup pool that serves as OpenStack cinder-backup storage, performing
> > snapshot backups every night. Old backups are also deleted every night, so
> > snaptrim is initiated.
> > This snaptrim increased load on the block.db devices after the upgrade,
> > and kills one SSD's performance in particular. It serves as the
> > block.db/wal device for one of the fatter backup-pool OSDs, which has
> > more PGs placed there.
> > This is a Kingston SSD, and we see this issue on other Kingston SSD
> > journals too; Intel SSD journals are not as badly affected, though they
> > also experience increased load.
> > Nevertheless, there are now a lot of read IOPS on the block.db devices
> > after the upgrade that were not there before.
> > I wonder how 600 IOPS can degrade an SSD's performance that badly.
> >
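One way to confirm that snaptrim is the trigger is to count PGs currently in a trimming state. A sketch, assuming the `ceph` CLI is available and that `pg dump pgs_brief` prints the PG state in its second column (the exact column layout can differ between releases, so check yours first):

```shell
# Count PGs whose state includes snaptrim or snaptrim_wait.
# Assumes the state string is in column 2 of `pgs_brief` output.
ceph pg dump pgs_brief 2>/dev/null \
  | awk '$2 ~ /snaptrim/ {n++} END {print n+0}'
```

If snaptrim load turns out to be the problem, Nautilus also exposes `osd_snap_trim_sleep` to throttle trimming at the cost of slower snapshot deletion.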
> > Tue, Aug 4, 2020 at 12:54, Vladimir Prokofev <v(a)prokofev.me>:
> >
> >> Good day, cephers!
> >>
> >> We've recently upgraded our cluster from the 14.2.8 to the 14.2.10
> >> release, also performing a full system package upgrade (Ubuntu 18.04 LTS).
> >> After that, performance dropped significantly, the main reason being that
> >> the journal SSDs now have no merges, huge queues, and increased latency.
> >> There are a few screenshots in the attachments. They are for an SSD
> >> journal that hosts block.db/block.wal for 3 spinning OSDs, and it looks
> >> like this for all our SSD block.db/wal devices across all nodes.
> >> Any ideas what may cause that? Maybe I've missed something important in
> >> the release notes?
> >>
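The vanished merges could simply be an I/O scheduler change brought in by the accompanying kernel/package upgrade: under blk-mq the `none` scheduler does little or no request merging. The active scheduler is the bracketed entry in `/sys/block/sdX/queue/scheduler` (sdX is a placeholder); a sketch that extracts it, shown here on a sample line rather than a live sysfs read:

```shell
# The scheduler file lists all choices with the active one in brackets,
# e.g. "mq-deadline kyber [none]". On a live system:
#   line=$(cat /sys/block/sdX/queue/scheduler)
line='mq-deadline kyber [none]'
active=$(printf '%s\n' "$line" | sed -n 's/.*\[\(.*\)\].*/\1/p')
echo "$active"   # none
```

It is also worth checking `/sys/block/sdX/queue/nomerges` (0 means merging is allowed, 1/2 disable it to varying degrees).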
> > _______________________________________________
> > ceph-users mailing list -- ceph-users(a)ceph.io
> > To unsubscribe send an email to ceph-users-leave(a)ceph.io
>
>
> --
> Eneko Lacunza | Tel. 943 569 206 | Email elacunza(a)binovo.es
> Director Técnico | Site https://www.binovo.es
> BINOVO IT HUMAN PROJECT S.L | Dir. Astigarragako Bidea, 2 - 2º izda.
> Oficina 10-11, 20180 Oiartzun