Storage Benchmarks
The Good, the Bad and the Ugly
Wolfgang Stief
data://disrupted 2020
# whoami
‣ open-minded geek and engineer
‣ Dipl.-Ing. in electrical power engineering
‣ self-employed (2011), co-founder of sys4 AG (2012)
technical marketing, chief explainer, e-mail, project wrangler, board member
‣ ws@stief-consulting.de
https://www.linkedin.com/in/wstief/
@SpeicherStief (Twitter)
@stiefkind (Twitter)
stiefkind@mastodon.social
Agenda
‣ Why benchmarks at all?
validity, challenges, I/O stack, tools
‣ Storage Performance Council
SPC-1, SPC-2, industry standard, terminology, reports
‣ And what does all of this mean for day-to-day business?
Write your own benchmarks?
Benchmarks for procurement?
Crystal ball
Why benchmarks at all?
‣ comparability of components, devices, systems
‣ selection criterion for procurement
‣ defining SLAs
baselining
How meaningful are benchmarks?
»≈100% of all benchmarks are wrong«
— Brendan Gregg
Today's goal:
Make up your own mind!
Challenges of benchmarks (in general)
‣ synthetic load vs. real load
➛ benchmarks have to be comparable
➛ load patterns vary strongly between individual data centers
‣ You can measure the wrong thing, or set out to measure the wrong thing.
‣ You can draw the wrong conclusions from the results.
‣ You can ignore errors.
‣ bugs in the benchmark software
➛ writing your own is usually not an option: far too time-consuming
‣ »Active Benchmarking«
Brendan Gregg, http://www.brendangregg.com/activebenchmarking.html
»Wer misst, misst Mist« (roughly: who measures, measures rubbish) — traditional engineering wisdom
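What »Active Benchmarking« means in practice, as a rough sketch (hypothetical device name, standard Linux tools): run the load generator, and in parallel verify with independent tools that it measures what you think it measures:
$ fio --name=rr --filename=/dev/sdx --rw=randread --bs=4k --iodepth=32 --direct=1 --runtime=300 &
$ iostat -x 1   # does the device really see the expected IOPS and latencies?
$ pidstat 1     # or is the load generator itself CPU-bound?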
I/O Stack (1) — simple
[Diagram: VMs on a host system, each with a filesystem ($) and an async block I/O driver; HBAs connect to storage controllers ($); I/O command queues and buffering along the whole path]
I/O Stack (2) — complex
[Diagram: the same host-side stack as before, plus a SAN virtualization layer ($, command queues, buffering), a second tier of storage controllers ($) and a flash layer (auto tiering)]
I/O Stack (1-3) — simple, complex, cloud
Cloud: something totally different
lots of SDS ➛ $, ASYNC_IO
[Diagram: the simple and complex stacks from the previous slides, shown next to the cloud variant]
I/O Stack (4)
Cloud: something totally different
lots of SDS ➛ $, ASYNC_IO
‣ develop your own benchmarks?
➛ requires in-depth knowledge of the tools, components, stack and platform
[Diagram: the full I/O stack as on the previous slides]
Storage Benchmark Tools (incomplete)
‣ Vdbench (2000, now Oracle, Java)
https://www.oracle.com/technetwork/server-storage/vdbench-downloads-1901681.html
‣ fio (since 2005, open source, Linux)
https://github.com/axboe/fio
‣ filebench (2002, Sun, now open source, WML, microbenchmarks)
https://github.com/filebench/filebench/wiki
‣ Iometer Project (Intel 1998-2001, now OSDL)
http://www.iometer.org
‣ IOzone Filesystem Benchmark (since 1991, used on Android among others)
http://www.iozone.org
‣ IOR (since 2001, parallel I/O benchmark, HPC/MPI, various forks)
https://github.com/hpc/ior
‣ COSbench (ca. 2015(?), cloud object storage, Intel)
https://github.com/intel-cloud/cosbench
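To give a feel for these tools: a minimal fio job file for a 4 KiB random-read test could look like this (a sketch only; device path and parameter values are hypothetical, not from the talk):
$ cat randread.fio
# minimal fio job: 4 KiB random reads with async I/O, page cache bypassed
[global]
ioengine=libaio     # Linux native asynchronous I/O
direct=1            # O_DIRECT: bypass the filesystem cache ($)
time_based=1
runtime=300         # seconds
group_reporting=1
[randread-4k]
filename=/dev/sdx   # hypothetical test device; fio overwrites raw devices on write tests!
rw=randread
bs=4k               # block size
iodepth=32          # outstanding I/Os per job
numjobs=4           # parallel workers
$ fio randread.fio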
Why not dd?
‣ dd(1) ➛ disk dump
‣ dd if=<input_file> of=<output_file> bs=<blocksize>
➛ sequential only
➛ exactly one stream with exactly one block size
‣ if=/dev/zero ➛ delivers a stream of zeros
➛ caches extremely well
➛ compresses extremely well
➛ deduplicates extremely well
‣ if=/dev/random or /dev/urandom
➛ the bottleneck is frequently the CPU
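A quick sketch of both effects (hypothetical paths; oflag=direct as in GNU coreutils dd):
$ dd if=/dev/zero of=/mnt/test/zeros bs=1M count=4096
# zeros: cache, compress and dedupe perfectly ➛ flattering numbers
$ dd if=/dev/zero of=/mnt/test/zeros bs=1M count=4096 oflag=direct
# O_DIRECT bypasses the page cache, but it is still one sequential stream of zeros
$ dd if=/dev/urandom of=/mnt/test/random bs=1M count=4096
# incompressible data, but generating it often makes the CPU the bottleneck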
Agenda
‣ Why benchmarks at all?
validity, challenges, I/O stack, tools
‣ Storage Performance Council
SPC-1, SPC-2, industry standard, terminology, reports
‣ And what does all of this mean for day-to-day business?
Write your own benchmarks?
Benchmarks for procurement?
Crystal ball
Industry standard SPC-1 and SPC-2
‣ Storage Performance Council
➛ https://spcresults.org/
➛ Full Member: US$12,000, toolkit US$4,000, US$1,000 per submission (139 members)
➛ Associate Member: US$5,000, toolkit US$6,000, US$1,500 per submission (140 members)
➛ Academic Member: US$0, toolkit US$500 (limited license), no submissions (141 members)
➛ »Sponsors«
‣ formal definition and specification
➛ reproducible
➛ cf. SPECint/SPECfp for CPUs
‣ comparability of results
‣ SPC-1 for time-critical workloads (OLTP, response time)
‣ SPC-2 for »large scale sequential movement of data«
Industry standard SPC-1 and SPC-2
‣ extension »C« — components (and small systems)
‣ extension »E« — energy consumption
➛ prescribed measurement equipment for the power measurements
‣ 4 rule sets:
Benchmark   Version   Last Updated   # Submissions   Latest Submission
SPC-1       3.9       27.05.2020     208             03.08.2020
SPC-1E                               (8)             (29.12.2015)
SPC-2       1.7       15.10.2017     82              24.07.2020
SPC-2E                               (2)             (24.08.2014)
SPC-1C      1.5       12.05.2013     (18)            (12.06.2010)
SPC-1C/E                             (2)             (02.08.2009)
SPC-2C      1.4       12.05.2013     (8)             (08.02.2009)
SPC-2C/E                             (1)             (18.12.2011)
SPC — Terminology (1)
‣ ES, FDR, SF
➛ Executive Summary (PDF)
➛ Full Disclosure Report (PDF)
➛ Supporting Files (ZIP)
‣ Protected 1 vs. Protected 2
➛ 1: any single storage device can fail without data loss (≙ RAID, erasure coding)
➛ 2: any component of the TSC can fail without data loss
‣ TSC, PSC (frequently TSC = PSC)
➛ Tested Storage Configuration
➛ Priced Storage Configuration
‣ SPC-1 IOPS vs. SPC-2 MBPS
‣ Price-Performance
➛ $/SPC-1 kIOPS
➛ $/SPC-2 MBPS
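How the price-performance figures are derived, worked through with the numbers from the Huawei Executive Summary on the »SPC-1 — Executive Summary« slides below:
$446,024.48 total system price ÷ 1,100.252 SPC-1 kIOPS ≈ $405.39 per SPC-1 kIOPS
$446,024.48 total system price ÷ 26,124 GB ASU capacity ≈ $17.08 per GB (ASU price)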
SPC — Terminology (2)
‣ Storage Hierarchy
➛ Physical Storage Capacity ≙ gross capacity, non-volatile storage
➛ Logical Volume Addressable Capacity ≙ sum of the capacities of all LUNs
➛ Application Storage Unit (ASU) ➛ this is what the benchmark runs against
➛ nearly arbitrary mapping LV ⇌ ASU
[Diagram: example LV ⇌ ASU mappings: one Logical Volume carrying three Application Storage Units; one Logical Volume per Application Storage Unit; several Logical Volumes per Application Storage Unit]
SPC-1 — Executive Summary (1)
[Excerpt: SPC Benchmark 1™ V3.8 Executive Summary, Huawei Technologies Co., Ltd, Huawei OCEANSTOR 5600 V5, Submission Identifier A31020, Submitted for Review: December 27, 2018]
SPC-1 IOPS 1,100,252
SPC-1 Price-Performance $405.39/SPC-1 KIOPS
SPC-1 IOPS Response Time 0.710 ms
SPC-1 Overall Response Time 0.445 ms
SPC-1 ASU Capacity 26,124 GB
SPC-1 ASU Price $17.08/GB
SPC-1 Total System Price $446,024.48
Data Protection Level Protected 2 (RAID-10 and full redundancy)
Physical Storage Capacity 69,120 GB
Pricing Currency / Target Country U.S. Dollars / USA
SPC-1 — Executive Summary (2)
[Excerpt: Benchmark Configuration Diagram, Executive Summary page 2 of 6]
SPC-1 — Executive Summary (3)
[Excerpt: Storage Configuration Pricing, Executive Summary page 4 of 6]
Third-Party Reseller: Huawei Technologies Co., Ltd. only sells its products to third-party resellers who, in turn, sell those products to U.S. customers. The above reflects the pricing quoted by one of those third-party resellers. See Appendix B of the Full Disclosure Report for a copy of the third-party reseller's quotation.
Hardware & Software (Description | Qty | Unit Price | Ext. Price | Disc. | Disc. Price):
02351LWK 56V5-256G-AC2 OceanStor 5600 V5 Engine (3U, Dual Controller, AC240HVDC, 256GB Cache, SPE63C0300) | 2 | 116,820.00 | 233,640.00 | 68% | 74,764.80
SMARTIO10ETH 4 port SmartIO I/O module (SFP+, 10Gb Eth/FCoE(VN2VF)/Scale-out) | 4 | 6,288.00 | 25,152.00 | 68% | 8,048.64
SMARTIO8FC 4 port SmartIO I/O module (SFP+, 8Gb FC) | 8 | 3,192.00 | 25,536.00 | 68% | 8,171.52
LPU4S12V3 4 port 4*12Gb SAS I/O module (MiniSAS HD) | 8 | 4,963.00 | 39,704.00 | 68% | 12,705.28
HSSD-960G2S-A9 960GB SSD SAS Disk Unit (2.5") | 72 | 10,176.00 | 732,672.00 | 70% | 219,801.60
DAE52525U2-AC-A2 Disk Enclosure (2U, AC240HVDC, 2.5", Expanding Module, 25 Disk Slots, without Disk Unit, DAE52525U2) | 8 | 10,584.00 | 84,672.00 | 68% | 27,095.04
N8GHBA000 QLOGIC QLE2562 HBA Card (PCIE, 8Gbps DualPort, Fiber Channel Multimode LC Optic Interface, English Manual, No Drive CD) | 12 | 1,698.00 | 20,376.00 | 0% | 20,376.00
SN2F01FCPC Patch Cord (DLC/PC, DLC/PC, Multi-mode, 3m, A1a.2, 2mm, 42mm DLC, OM3 bending insensitive) | 24 | 14.00 | 336.00 | 0% | 336.00
LIC-56V5-BS Basic Software License (Including DeviceManager, SmartThin, SmartMulti-tenant, SmartMigration, SmartErase, SmartMotion, SystemReporter, eService, SmartQuota, NFS, CIFS, NDMP) | 1 | 9,852.00 | 9,852.00 | 70% | 2,955.60
Hardware & Software Subtotal: 374,254.48
Support & Maintenance:
02351LWK-88134ULF-36 OceanStor 5600 V5 Engine (3U, Dual Controller, AC240HVDC, 256GB Cache, SPE63C0300 & 4*Disk Enclosure 2U, AC240HVDC, 2.5", DAE52525U2 & 36*960GB SSD SAS Disk Unit (2.5")), Hi-Care Onsite Premier 24x7x4H Engineer Onsite Service, 36 Month(s) | 2 | 29,292.00 | 58,584.00 | 0% | 58,584.00
88034JNY-88134UHK-36 Basic Software License (as above), Hi-Care Application Software Upgrade Support Service, 36 Month(s) | 1 | 2,919.00 | 2,919.00 | 0% | 2,919.00
8812153244 OceanStor 5600 V5 Installation Service, Engineering | 1 | 10,267.00 | 10,267.00 | 0% | 10,267.00
Support & Maintenance Subtotal: 71,770.00
SPC-1 Total System Price: 446,024.48
SPC-1 IOPS: 1,100,252
SPC-1 Price-Performance ($/SPC-1 KIOPS): 405.39
SPC-1 ASU Capacity (GB): 26,124
SPC-1 ASU Price ($/GB): 17.08
SPC-1 — Full Disclosure Report (1)
[Excerpt: SPC Benchmark 1™ V3.8 Full Disclosure Report, Huawei OCEANSTOR 5600 V5, Submission Identifier A31020, Configuration Information, page 13 of 42]
Benchmark Configuration and Tested Storage Configuration
The following diagram illustrates the Benchmark Configuration (BC), including the Tested Storage Configuration (TSC) and the Host System(s).
Storage Network Configuration
The Tested Storage Configuration (TSC) involved an external storage subsystem made of 4 Huawei OCEANSTOR 5600 V5, driven by 6 host systems (Huawei …
[Excerpt: Benchmark Execution Results, page 16 of 42]
This portion of the Full Disclosure Report documents the results of the various SPC-1 Tests, Test Phases, and Test Runs.
Benchmark Execution Overview
Workload Generator Input Parameters
The SPC-1 Workload Generator commands and input parameters for the Test Phases are presented in the Supporting Files (see Appendix A).
Primary Metrics Test Phases
The benchmark execution consists of the Primary Metrics Test Phases, including the Test Phases SUSTAIN, RAMPD_100 to RAMPD_10, RAMPU_50 to RAMPU_100, RAMP_0, REPEAT_1 and REPEAT_2.
Each Test Phase starts with a transition period followed by a Measurement Interval.
Measurement Intervals by Test Phase Graph
The following graph presents the average IOPS and the average Response Times measured over the Measurement Interval (MI) of each Test Phase.
Exception and Waiver
None.
[Graph: Measurement Intervals by Test Phase – Average Measured IOPS (0 to 1,200,000) and Average Measured Response Time (0 to 0.8 ms) per Test Phase]
SPC-1 — Full Disclosure Report (2)
[Excerpt: Benchmark Execution Results – Response Time Ramp Test, FDR page 24 of 42]
Response Time Ramp Test – Average Response Time Graph
[Graph: Average Measured Response Time (ms), 0.000 to 0.800, across the Response Time Ramp Test phases]
Response Time Ramp Test – RAMPD_10 Response Time Graph
[Graph: Response Time (ms), 0.0 to 1.4, vs. Relative Run Time (0 to 14 minutes) for RAMPD_10 @ 110,020 IOPS; series ASU1, ASU2, ASU3, All ASUs; Measurement Interval (MI) marked]
[Excerpt: Benchmark Execution Results – Repeatability Tests, FDR page 27 of 42]
Repeatability Test – Intensity Multiplier
The following tables list the targeted intensity multiplier (Defined), the measured intensity multiplier (Measured) for each I/O STREAM, its coefficient of variation (Variation) and the percent of difference (Difference) between Target and Measured.
REPEAT_1_100 Test Phase
            ASU1-1   ASU1-2   ASU1-3   ASU1-4   ASU2-1   ASU2-2   ASU2-3   ASU3-1
Defined     0.0350   0.2810   0.0700   0.2100   0.0180   0.0700   0.0350   0.2810
Measured    0.0350   0.2810   0.0700   0.2100   0.0180   0.0700   0.0350   0.2810
Variation   0.0005   0.0002   0.0007   0.0003   0.0008   0.0005   0.0005   0.0001
Difference  0.002%   0.005%   0.010%   0.005%   0.025%   0.005%   0.015%   0.003%
REPEAT_2_100 Test Phase
            ASU1-1   ASU1-2   ASU1-3   ASU1-4   ASU2-1   ASU2-2   ASU2-3   ASU3-1
Defined     0.0350   0.2810   0.0700   0.2100   0.0180   0.0700   0.0350   0.2810
Measured    0.0350   0.2810   0.0700   0.2100   0.0180   0.0700   0.0350   0.2810
Variation   0.0004   0.0002   0.0005   0.0002   0.0011   0.0003   0.0008   0.0002
Difference  0.043%   0.010%   0.016%   0.003%   0.045%   0.006%   0.011%   0.005%
REPEAT_2_100 – Response Time Graph
[Graph: Response Time (ms), 0.0 to 1.4, vs. Relative Run Time (0 to 14 minutes) for REPEAT_2_100 @ 1,100,200 IOPS; series ASU1, ASU2, ASU3, All ASUs; Measurement Interval (MI) marked]
SPC-1 — Supporting Files (1)
$ tree SPC-1_A31020_Supporting-Files_Huawei-OS5600-V5
SPC-1_A31020_Supporting-Files_Huawei-OS5600-V5
└── Supporting Files
├── C_Tuning
│   ├── aio-max-nr.sh
│   ├── nr_requests.sh
│   └── scheduler.sh
├── D_Creation
│   ├── mklun.txt
│   └── mkvolume.sh
├── E_Inventory
│   ├── profile1_storage.log
│   ├── profile1_volume.log
│   ├── profile2_storage.log
│   └── profile2_volume.log
├── F_Generator
│   ├── full_run.sh
│   ├── host.HST
│   └── slave_asu.asu
└── SPC1_RESULTS
├── SPC1_INIT_0_Raw_Results.xlsx
├── SPC1_METRICS_0_Quick_Look.xlsx
├── SPC1_METRICS_0_Raw_Results.xlsx
├── SPC1_METRICS_0_Summary_Results.xlsx
├── SPC1_PERSIST_1_0_Raw_Results.xlsx
├── SPC1_PERSIST_2_0_Raw_Results.xlsx
├── SPC1_Run_Set_Overview.xlsx
├── SPC1_VERIFY_0_Raw_Results.xlsx
└── SPC1_VERIFY_1_Raw_Results.xlsx
6 directories, 21 files
SPC-1 — Supporting Files (2)
$ cat SPC-1_A31020_Supporting-Files_Huawei-OS5600-V5/Supporting Files/C_Tuning/aio-max-nr.sh
echo 10485760 > /proc/sys/fs/aio-max-nr
$ cat SPC-1_A31020_Supporting-Files_Huawei-OS5600-V5/Supporting Files/D_Creation/mklun.txt
create disk_domain name=dd00 disk_list=DAE000.0-8 tier0_hotspare_strategy=low disk_domain_id=0
create disk_domain name=dd01 disk_list=DAE030.0-8 tier0_hotspare_strategy=low disk_domain_id=1
create disk_domain name=dd02 disk_list=DAE040.0-8 tier0_hotspare_strategy=low disk_domain_id=2
…
create storage_pool name=sp00 disk_type=SSD capacity=3139GB raid_level=RAID10 pool_id=0 disk_domain_id=0
create storage_pool name=sp01 disk_type=SSD capacity=3139GB raid_level=RAID10 pool_id=1 disk_domain_id=1
…
create lun name=lun_sp00 lun_id_list=0-3 pool_id=0 capacity=784GB prefetch_policy=none
create lun name=lun_sp01 lun_id_list=4-7 pool_id=1 capacity=784GB prefetch_policy=none
…
create host name=host0 operating_system=Linux host_id=0
create host name=host1 operating_system=Linux host_id=1
…
create host_group name=hg0 host_group_id=0 host_id_list=0-5
create lun_group name=lg0 lun_group_id=0
add lun_group lun lun_group_id=0 lun_id_list=0-31
create mapping_view name=mv1 mapping_view_id=1 lun_group_id=0 host_group_id=0
add host initiator host_id=0 initiator_type=FC wwn=21000024ff4b81fc
add host initiator host_id=0 initiator_type=FC wwn=21000024ff4b81fd
…
$ cat SPC-1_A31020_Supporting-Files_Huawei-OS5600-V5/Supporting Files/D_Creation/mkvolume.sh
pvcreate /dev/sdb
pvcreate /dev/sdc
pvcreate /dev/sdd
…
vgcreate vg1 /dev/sdb /dev/sdc /dev/sdd /dev/sde…
…
lvcreate -n asu101 -i 32 -I 512 -C y -L 608.25g vg1
lvcreate -n asu102 -i 32 -I 512 -C y -L 608.25g vg1
lvcreate -n asu103 -i 32 -I 512 -C y -L 608.25g vg1
lvcreate -n asu104 -i 32 -I 512 -C y -L 608.25g vg1
…
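What these files do, in short (my reading of the excerpts above): aio-max-nr.sh raises the kernel limit on outstanding asynchronous I/O requests (fs.aio-max-nr), mklun.txt carves the SSDs into RAID-10 storage pools and 784 GB LUNs and maps them to the six benchmark hosts, and mkvolume.sh assembles those LUNs into striped logical volumes (32 stripes, 512 KiB stripe size) on which the ASUs are then laid out.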
SPC-2 — Executive Summary
[Excerpt: SPC Benchmark 2™ V1.7.0 Executive Summary, Vexata Inc. VX100-F Scalable NVMe Flash Array, Submission ID B12004, Submitted: August 29, 2018, page 4 of 9]
SPC-2 Reported Data — VX100-F Scalable NVMe Flash Array
SPC-2 MBPS 49,042.39
SPC-2 Price-Performance $5.35/SPC-2 MBPS
ASU Capacity 20,615.843 GB
Total Price $262,572.59
Data Protection Level Protected 1 (RAID 5 (N+1))
The above SPC-2 MBPS value represents the aggregate data rate of all three SPC-2 workloads: Large File Processing (LFP), Large Database Query (LDQ), and Video On Demand (VOD).
Currency Used / »Target Country«: U.S. Dollars / USA
SPC-2 Large File Processing (LFP) Reported Data (Data Rate MB/s | Streams | Data Rate per Stream | Price-Performance):
LFP Composite: 47,554.98 | | | $5.52
Write Only, 1024 KiB Transfer: 35,532.23 | 40 | 888.31
Write Only, 256 KiB Transfer: 34,763.83 | 80 | 434.55
Read-Write, 1024 KiB Transfer: 59,486.68 | 184 | 323.30
Read-Write, 256 KiB Transfer: 59,810.01 | 184 | 325.05
Read Only, 1024 KiB Transfer: 48,190.46 | 184 | 261.90
Read Only, 256 KiB Transfer: 47,546.68 | 184 | 258.41
The above SPC-2 Data Rate value for LFP Composite represents the aggregate performance of all three LFP Test Phases: Write Only, Read-Write, and Read Only.
SPC-2 Large Database Query (LDQ) Reported Data (Data Rate MB/s | Streams | Data Rate per Stream | Price-Performance):
LDQ Composite: 49,869.23 | | | $5.27
1024 KiB Transfer Size, 4 I/Os Outstanding: 50,425.48 | 32 | 1,575.80
1024 KiB Transfer Size, 1 I/O Outstanding: 50,390.42 | 96 | 524.90
64 KiB Transfer Size, 4 I/Os Outstanding: 50,609.64 | 96 | 527.18
64 KiB Transfer Size, 1 I/O Outstanding: 48,051.39 | 320 | 150.16
The above SPC-2 Data Rate value for LDQ Composite represents the aggregate performance of the two LDQ Test Phases: 1024 KiB and 64 KiB Transfer Sizes.
SPC-2 Video On Demand (VOD) Reported Data:
49,702.97 MB/s | 63,200 streams | 0.79 MB/s per stream | $5.28
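Cross-checking the headline metric with the figures above: $262,572.59 total price ÷ 49,042.39 SPC-2 MBPS ≈ $5.35 per SPC-2 MBPS, matching the reported price-performance.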
SPC-2 — Full Disclosure Report (1)
[Excerpt: SPC Benchmark 2™ V1.7.0 Full Disclosure Report, Vexata Inc. VX100-F Scalable NVMe Flash Array, Submission ID B12004, SPC-2 Data Repository, pages 22-23 of 61]
Storage Hierarchy Ratios (as share of Addressable / Configured / Physical Storage Capacity):
Total ASU Capacity: 100.00% / 32.21% / 32.21%
Data Protection (RAID 5 (N+1)): 2.18% / 2.18%
Addressable Storage Capacity: 32.21% / 32.21%
Required Storage: 37.45% / 37.45%
Configured Storage Capacity: 100.00%
Global Storage Overhead: 0.00%
Unused Storage: Addressable 0.00% / Configured 25.57% / Physical 0.00%
Storage Capacity Charts
[Chart: Physical Storage Capacity 64,013.113 GB; Global Storage Overhead 0.000 GB (0.00%); Unused Physical Capacity 0.000 GB (0.00%); Data Capacity 22,284.902 GB (34.81%); Data Protection Capacity 1,392.806 GB (2.18%); Sparing Capacity 0.000 GB (0.00%); Overhead & Metadata 23,970.195 GB (37.45%); Configured Storage Capacity 64,013.113 GB (100.00%)]
[Chart: Configured Storage Capacity 64,013.113 GB; Data Protection Capacity 1,392.806 GB (2.18%); Spares 0.000 GB (0.00%); Overhead & Metadata 23,970.195 GB (37.45%); Addressable Storage Capacity 20,615.843 GB (32.21%); Unused Data Capacity 1,669.059 GB (2.61%); Data Capacity 22,284.902 GB (34.81%)]
[Chart: Addressable Storage Capacity 20,615.843 GB; ASU Capacity 20,615.843 GB (100.00%); Unused Addressable Capacity 0.000 GB (0.00%)]
SPC-2 — Full Disclosure Report (2)
[Excerpt: SPC-2 Benchmark Execution Results – Large File Processing Test, FDR page 29 of 61]
Average Data Rates (MB/s)
The average Data Rate (MB/s) for each Test Run in the three Test Phases of the SPC-2 Large File Processing Test is listed in the table below as well as illustrated in the following graph.
Test Run Sequence (1 Stream, then four runs with increasing stream counts):
Write 1024KiB:       2,326.81 |  9,508.26 | 17,833.15 | 31,292.20 | 35,532.23
Write 256KiB:        1,020.06 |  8,946.47 | 16,496.39 | 27,368.93 | 34,763.83
Read/Write 1024KiB:  1,427.00 | 24,075.32 | 40,743.49 | 55,172.27 | 59,486.68
Read/Write 256KiB:     938.41 | 20,474.16 | 36,077.14 | 54,004.62 | 59,810.01
Read 1024KiB:        1,671.50 | 23,246.57 | 35,676.06 | 45,920.89 | 48,190.46
Read 256KiB:         1,181.65 | 20,775.14 | 33,564.49 | 45,945.52 | 47,546.68
[Graph: Large File Processing – Data Rate (MB/s) per Test Run, scaling from 1 stream up to 40/80 streams (write) and 184 streams (read/write, read) for 1024 KiB and 256 KiB transfers]
[Excerpt: SPC-2 Benchmark Execution Results – Large Database Query Test, FDR page 39 of 61]
Average Response Time
The average Response Time, in milliseconds, for each Test Run in the two Test Phases of the SPC-2 Large Database Query Test is listed in the table below as well as illustrated in the following graph.
Test Run Sequence (1 Stream, then four runs with increasing stream counts):
1024KiB w/ 4 IOs/Stream: 0.50 | 1.26 | 1.26 | 2.52 | 2.66
1024KiB w/ 1 IO/Stream:  0.45 | 0.61 | 0.68 | 1.08 | 2.00
64KiB w/ 4 IOs/Stream:   0.11 | 0.15 | 0.17 | 0.26 | 0.50
64KiB w/ 1 IO/Stream:    0.11 | 0.12 | 0.15 | 0.22 | 0.44
[Graph: Large Database Query – Average Response Time (ms) per Test Run, scaling from 1 stream up to 32/96/320 streams for 1024 KiB and 64 KiB transfers with 1 or 4 outstanding I/Os per stream]
SPC-1/2 Energy Extension
‣ complete measurement cycle ≥ 3 days
‣ temperature measurement
➛ at the beginning of the idle tests
➛ during the last minute of the load test
‣ RMS ≙ root mean square
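For reference, the standard definition (not spelled out on the slide): the RMS over N power samples $P_i$ is $P_{\mathrm{RMS}} = \sqrt{\tfrac{1}{N}\sum_{i=1}^{N} P_i^2}$.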
SPC — Pricing (in the report)
‣ hardware, software, additional components required for storage functionality, 3 years of support, all applicable fees (taxes, customs and the like)
‣ excluded: hardware in the benchmark setup without storage function
➛ servers that generate the workload
➛ possibly HBAs, FC switches, cabling
➛ freight/packaging
‣ project pricing is not allowed (»individually negotiated«)
➛ how meaningful are $/IOPS or $/MBPS then?
‣ support ≙ 4h response time + 4h on-site service
➛ on-site = spare part and/or technician
Agenda
‣ Why benchmarks at all?
validity, challenges, I/O stack, tools
‣ Storage Performance Council
SPC-1, SPC-2, industry standard, terminology, reports
‣ And what does all of this mean for day-to-day business?
Write your own benchmarks?
Benchmarks for procurement?
Crystal ball
Build SPC-1/2 yourself?
‣ yes, that works, and in principle it is even allowed
➛ expensive to develop
➛ for official benchmark submissions it must be approved by an auditor
$ cat spc1-preflight.vdbench
***
*** vdbench Parameterfile to emulate SPC-1 workload
***
** storage definitions
**
sd=asu11,lun=/dev/rdsk/c25t2100000E1E19FB51d0s2
sd=asu12,lun=/dev/rdsk/c25t2100000E1E19FB51d13s2
sd=asu21,lun=/dev/rdsk/c26t2100000E1E19F170d32s2
sd=asu22,lun=/dev/rdsk/c26t2100000E1E19F240d68s2
sd=asu31,lun=/dev/rdsk/c27t2100000E1E19F5A1d29s2
sd=asu32,lun=/dev/rdsk/c27t2100000E1E19FB71d39s2
sd=asu41,lun=/dev/rdsk/c28t2100000E1E19F1B1d21s2
sd=asu42,lun=/dev/rdsk/c28t2100000E1E19F261d8s2
** workload definitions
**
wd=asu111,sd=asu11,rdpct=50,xfersize=4k,skew=1
wd=asu112,sd=asu11,rdpct=50,xfersize=4k,skew=6,range=(15,20)
wd=asu113,sd=asu11,rdpct=100,xfersize=(8k,40,16k,24,32k,20,64k,8,128k,8),skew=2,range=(40,50)
wd=asu114,sd=asu11,rdpct=50,xfersize=4k,skew=5,range=(70,75)
wd=asu121,sd=asu12,rdpct=30,xfersize=4k,skew=1
wd=asu122,sd=asu12,rdpct=30,xfersize=4k,skew=2,range=(47,52)
wd=asu123,sd=asu12,rdpct=100,xfersize=(8k,40,16k,24,32k,20,64k,8,128k,8),skew=1,range=(40,50)
wd=asu131,sd=asu13,rdpct=0,xfersize=(8k,40,16k,24,32k,20,64k,8,128k,8),skew=7,range=(35,65)
…
** run definition (raw I/O)
**
rd=spc1emu,wd=(asu111,asu112,asu113,asu114,asu121,…,),iorate=max,elapsed=300
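Running it is the standard Vdbench invocation (output directory name hypothetical):
$ ./vdbench -f spc1-preflight.vdbench -o output_spc1emu
The rd= line above then drives all listed workload definitions at maximum I/O rate (iorate=max) for 300 seconds (elapsed=300).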
Benchmarks in storage procurement?
‣ How precisely do you know the company's required
➛ IOPS and MBPS,
➛ as a function of the I/O block size,
➛ with the read/write distribution?
How, then, should SPC values be judged?
‣ there are no universally valid IOPS patterns
➛ a »fingerprint« of the company, dependent on many factors and boundary conditions
‣ vendors always present »the biggest number that can possibly be produced«
➛ by tuning the benchmark options accordingly
‣ solutions?
➛ an extensive, long-running PoC (costly)
➛ flexible storage that scales in every direction (a »jack of all trades«)
Crystal ball — what does the future bring?
‣ cloud storage (public and private)
➛ a lot of software involved, several abstraction layers
➛ COSbench
‣ solid state memory (NAND flash, Optane and others)
➛ no more mechanical parts acting as a brake
➛ application debugging can become an issue once the bottleneck suddenly is no longer the storage (e.g. complex or »broken« SQL queries)
Bottlenecks only shift through the system;
they never disappear.
Sources and »further learning«
‣ specifications of the SPC-1/2 benchmarks
https://spcresults.org/benchmarks
‣ Avishay Traeger et al.
A Nine Year Study of File System and Storage Benchmarking
https://www.fsl.cs.sunysb.edu/docs/fsbench/fsbench.pdf
‣ Brendan Gregg
Broken Linux Performance Tools
SCALE 14x, 2016
https://www.youtube.com/watch?v=OPio8V-z03c
‣ Raj Jain
The Art of Computer Systems Performance Analysis
John Wiley & Sons, Inc., 1991
Thank you!
Questions?
https://twitter.com/SpeicherStief
ws@stief-consulting.de
http://www.speakerdeck.com/stiefkind/
Image: Wolfgang Stief, CC0
EOF
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platforms
 
Digital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptxDigital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptx
 
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptxThe Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
 
Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.
 
Advanced Computer Architecture – An Introduction
Advanced Computer Architecture – An IntroductionAdvanced Computer Architecture – An Introduction
Advanced Computer Architecture – An Introduction
 
A Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptxA Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptx
 
Sample pptx for embedding into website for demo
Sample pptx for embedding into website for demoSample pptx for embedding into website for demo
Sample pptx for embedding into website for demo
 
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
 
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptxThe Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
 
"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko
 
DSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine TuningDSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine Tuning
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024
 

Storage Benchmarks - Voodoo oder Wissenschaft? – data://disrupted® 2020

  • 13. Storage Benchmark Tools (incomplete list) ‣ Vdbench (2000, now Oracle, Java) https://www.oracle.com/technetwork/server-storage/vdbench-downloads-1901681.html ‣ fio (since 2005, open source, Linux) https://github.com/axboe/fio ‣ filebench (2002, Sun, now open source, WML, microbenchmarks) https://github.com/filebench/filebench/wiki ‣ Iometer Project (Intel 1998-2001, now OSDL) http://www.iometer.org ‣ IOzone Filesystem Benchmark (since 1991, used on Android among others) http://www.iozone.org ‣ IOR (since 2001, parallel I/O benchmark, HPC/MPI, various forks) https://github.com/hpc/ior ‣ COSbench (ca. 2015(?), cloud object storage, Intel) https://github.com/intel-cloud/cosbench
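A taste of how such tools are driven in practice: a minimal fio run (my sketch, not from the talk; /dev/sdX is a placeholder for a test device, and the parameters are illustrative, not a recommendation):

$ # 4 KiB random reads, 4 jobs with queue depth 32 each, 5 minutes,
$ # page cache bypassed via O_DIRECT
$ fio --name=randread-4k --filename=/dev/sdX --ioengine=libaio --direct=1 \
      --rw=randread --bs=4k --iodepth=32 --numjobs=4 \
      --runtime=300 --time_based --group_reporting

fio then reports IOPS, bandwidth and latency percentiles; varying bs, rw and iodepth is precisely how different load profiles are emulated.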
  • 14. Why not dd? ‣ dd(1) ➛ disk dump ‣ dd if=<input_file> of=<output_file> bs=<blocksize> ➛ sequential only ➛ exactly one stream with exactly one block size ‣ if=/dev/zero ➛ delivers a stream of zeros ➛ caches extremely well ➛ compresses extremely well ➛ deduplicates extremely well ‣ if=/dev/random or /dev/urandom ➛ the bottleneck is frequently the CPU
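The effect is easy to demonstrate (my sketch, not from the slides; paths are placeholders and no measured numbers are implied):

$ # zeros cache, compress and deduplicate perfectly anywhere in the stack,
$ # so the reported rate says little about the storage itself
$ dd if=/dev/zero of=/mnt/test/zeros bs=1M count=1024 oflag=direct
$ # with random data the generator itself often becomes the bottleneck
$ dd if=/dev/urandom of=/mnt/test/random bs=1M count=1024 oflag=direct
$ # and either way: exactly one sequential stream with exactly one block size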
  • 15. Agenda ‣ Why benchmarks at all? Significance, challenges, I/O stack, tools ‣Storage Performance Council SPC-1, SPC-2, industry standard, terminology, reports ‣ And what does all this mean for day-to-day operations? Write your own benchmarks? Benchmarks for procurement? Crystal ball
  • 16. Industry standard SPC-1 and SPC-2 ‣ Storage Performance Council ➛ https://spcresults.org/ ➛ Full Member: US$12,000, toolkit US$4,000, US$1,000 per submission (139 members) ➛ Associate Member: US$5,000, toolkit US$6,000, US$1,500 per submission (140 members) ➛ Academic Member: US$0, toolkit US$500 (limited license), no submissions (141 members) ➛ »Sponsors« ‣ formal definition and specification ➛ reproducible ➛ compare SPECint/SPECfp for CPUs ‣ comparability of results ‣ SPC-1 for time-critical workloads (OLTP, response time) ‣ SPC-2 for »large scale sequential movement of data«
  • 17. Industry standard SPC-1 and SPC-2 ‣ Extension »C« — Components (and small systems) ‣ Extension »E« — energy consumption ➛ mandated measurement equipment for the power measurement ‣ 4 rule sets (values in parentheses refer to the »E« variant):

Benchmark           Version  Last Updated  # Submissions  Latest Submission
SPC-1 / SPC-1E      3.9      27.05.2020    208 (8)        03.08.2020 (29.12.2015)
SPC-2 / SPC-2E      1.7      15.10.2017    82 (2)         24.07.2020 (24.08.2014)
SPC-1C / SPC-1C/E   1.5      12.05.2013    (18) (2)       (12.06.2010) (02.08.2009)
SPC-2C / SPC-2C/E   1.4      12.05.2013    (8) (1)        (08.02.2009) (18.12.2011)
  • 18. SPC — Terminology (1) ‣ ES, FDR, SF ➛ Executive Summary (PDF) ➛ Full Disclosure Report (PDF) ➛ Supporting Files (ZIP) ‣ Protected 1 vs. Protected 2 ➛ 1: any single storage device can fail without data loss (≙ RAID, erasure coding) ➛ 2: any single component of the TSC can fail without data loss ‣ TSC, PSC (frequently TSC = PSC) ➛ Tested Storage Configuration ➛ Priced Storage Configuration ‣ SPC-1 IOPS vs. SPC-2 MBPS ‣ Price-Performance ➛ $/SPC-1 kIOPS ➛ $/SPC-2 MBPS [slide also shows the cover pages of the Executive Summary and Full Disclosure Report for the Huawei OceanStor 5600 V5, SPC-1 V3.8, Submission ID A31020, submitted for review December 27, 2018]
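To make the price-performance metric concrete (my arithmetic, using the numbers from the Huawei report shown on the next slides), it is simply the total system price divided by the reported kIOPS:

\[
\frac{\$446{,}024.48}{1{,}100{,}252\ \text{SPC-1 IOPS}/1000} \approx \$405.39\ \text{per SPC-1 kIOPS}
\]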
  • 19. SPC — Terminology (2) ‣ storage hierarchy ➛ Physical Storage Capacity ≙ gross capacity, non-volatile storage ➛ Logical Volume Addressable Capacity ≙ sum of the capacities of all LUNs ➛ Application Storage Unit, ASU ➛ this is what the benchmark runs against ➛ nearly arbitrary mapping LV ⇌ ASU [slide shows three example diagrams of the LV ⇌ ASU mapping: one logical volume per ASU, several logical volumes per ASU, and a single logical volume carrying all three ASUs]
  • 20. SPC-1 — Executive Summary (1) [slide shows page 1 of the Executive Summary for the Huawei OceanStor 5600 V5 (SPC Benchmark 1 V3.8, Submission ID A31020, submitted for review December 27, 2018): SPC-1 IOPS 1,100,252 · SPC-1 Price-Performance $405.39/SPC-1 kIOPS · SPC-1 IOPS Response Time 0.710 ms · SPC-1 Overall Response Time 0.445 ms · SPC-1 ASU Capacity 26,124 GB · SPC-1 ASU Price $17.08/GB · SPC-1 Total System Price $446,024.48 · Data Protection Level Protected 2 (RAID-10 and full redundancy) · Physical Storage Capacity 69,120 GB · Pricing Currency/Target Country U.S. Dollars/USA]
  • 21. SPC-1 — Executive Summary (2) [slide shows page 2 of the same Executive Summary: the Benchmark Configuration Diagram]
  • 22. SPC-1 — Executive Summary (3) [slide shows page 4 of the same Executive Summary: the Storage Configuration Pricing table. Line items cover two OceanStor 5600 V5 engines, SmartIO and SAS I/O modules, 72× 960 GB SAS SSDs, 8 disk enclosures, 12 QLogic QLE2562 FC HBAs, patch cords and the basic software license, discounted by up to 70%; hardware & software subtotal $374,254.48, 36 months of support & maintenance plus installation $71,770.00, SPC-1 Total System Price $446,024.48. A note explains that Huawei sells only through third-party resellers to U.S. customers and the pricing reflects one reseller's quotation.]
  • 23. SPC-1 — Full Disclosure Report (1) [slide shows two FDR pages: the Configuration Information describing the Benchmark Configuration and Tested Storage Configuration (an external subsystem of 4 Huawei OceanStor 5600 V5, driven by 6 host systems), and the Benchmark Execution Overview with the 'Measurement Intervals by Test Phase' graph of average IOPS and average response times across the phases SUSTAIN, RAMPD_100 to RAMPD_10, RAMPU_50 to RAMPU_100, RAMP_0, REPEAT_1 and REPEAT_2]
  • 24. SPC-1 — Full Disclosure Report (2) [slide shows two more FDR pages: the Response Time Ramp Test graphs (average response time per phase, and the RAMPD_10 response-time graph at 110,020 IOPS) and the Repeatability Tests (REPEAT_2_100 response-time graph at 1,100,200 IOPS, plus intensity-multiplier tables for the eight I/O streams, where the measured multipliers match the defined ones to within 0.045%)]
  • 25. SPC-1 — Supporting Files (1)

$ tree SPC-1_A31020_Supporting-Files_Huawei-OS5600-V5
SPC-1_A31020_Supporting-Files_Huawei-OS5600-V5
└── Supporting Files
    ├── C_Tuning
    │   ├── aio-max-nr.sh
    │   ├── nr_requests.sh
    │   └── scheduler.sh
    ├── D_Creation
    │   ├── mklun.txt
    │   └── mkvolume.sh
    ├── E_Inventory
    │   ├── profile1_storage.log
    │   ├── profile1_volume.log
    │   ├── profile2_storage.log
    │   └── profile2_volume.log
    ├── F_Generator
    │   ├── full_run.sh
    │   ├── host.HST
    │   └── slave_asu.asu
    └── SPC1_RESULTS
        ├── SPC1_INIT_0_Raw_Results.xlsx
        ├── SPC1_METRICS_0_Quick_Look.xlsx
        ├── SPC1_METRICS_0_Raw_Results.xlsx
        ├── SPC1_METRICS_0_Summary_Results.xlsx
        ├── SPC1_PERSIST_1_0_Raw_Results.xlsx
        ├── SPC1_PERSIST_2_0_Raw_Results.xlsx
        ├── SPC1_Run_Set_Overview.xlsx
        ├── SPC1_VERIFY_0_Raw_Results.xlsx
        └── SPC1_VERIFY_1_Raw_Results.xlsx

6 directories, 21 files
  • 26. SPC-1 — Supporting Files (2)

$ cat "SPC-1_A31020_Supporting-Files_Huawei-OS5600-V5/Supporting Files/C_Tuning/aio-max-nr.sh"
echo 10485760 > /proc/sys/fs/aio-max-nr

$ cat "SPC-1_A31020_Supporting-Files_Huawei-OS5600-V5/Supporting Files/D_Creation/mklun.txt"
create disk_domain name=dd00 disk_list=DAE000.0-8 tier0_hotspare_strategy=low disk_domain_id=0
create disk_domain name=dd01 disk_list=DAE030.0-8 tier0_hotspare_strategy=low disk_domain_id=1
create disk_domain name=dd02 disk_list=DAE040.0-8 tier0_hotspare_strategy=low disk_domain_id=2
…
create storage_pool name=sp00 disk_type=SSD capacity=3139GB raid_level=RAID10 pool_id=0 disk_domain_id=0
create storage_pool name=sp01 disk_type=SSD capacity=3139GB raid_level=RAID10 pool_id=1 disk_domain_id=1
…
create lun name=lun_sp00 lun_id_list=0-3 pool_id=0 capacity=784GB prefetch_policy=none
create lun name=lun_sp01 lun_id_list=4-7 pool_id=1 capacity=784GB prefetch_policy=none
…
create host name=host0 operating_system=Linux host_id=0
create host name=host1 operating_system=Linux host_id=1
…
create host_group name=hg0 host_group_id=0 host_id_list=0-5
create lun_group name=lg0 lun_group_id=0
add lun_group lun lun_group_id=0 lun_id_list=0-31
create mapping_view name=mv1 mapping_view_id=1 lun_group_id=0 host_group_id=0
add host initiator host_id=0 initiator_type=FC wwn=21000024ff4b81fc
add host initiator host_id=0 initiator_type=FC wwn=21000024ff4b81fd
…

$ cat "SPC-1_A31020_Supporting-Files_Huawei-OS5600-V5/Supporting Files/D_Creation/mkvolume.sh"
pvcreate /dev/sdb
pvcreate /dev/sdc
pvcreate /dev/sdd
…
vgcreate vg1 /dev/sdb /dev/sdc /dev/sdd /dev/sde…
…
lvcreate -n asu101 -i 32 -I 512 -C y -L 608.25g vg1
lvcreate -n asu102 -i 32 -I 512 -C y -L 608.25g vg1
lvcreate -n asu103 -i 32 -I 512 -C y -L 608.25g vg1
lvcreate -n asu104 -i 32 -I 512 -C y -L 608.25g vg1
…
  • 27. SPC-2 — Executive Summary [slide shows page 4 of the Executive Summary for the Vexata VX100-F Scalable NVMe Flash Array (SPC Benchmark 2 V1.7.0, Submission ID B12004, submitted August 29, 2018): SPC-2 MBPS 49,042.39 · SPC-2 Price-Performance $5.35 · ASU Capacity 20,615.843 GB · Total Price $262,572.59 · Data Protection Level Protected 1 (RAID 5 (N+1)) · currency U.S. Dollars/USA. The SPC-2 MBPS value is the aggregate data rate of all three SPC-2 workloads: Large File Processing (LFP composite 47,554.98 MB/s, $5.52), Large Database Query (LDQ composite 49,869.23 MB/s, $5.27) and Video On Demand (49,702.97 MB/s over 63,200 streams, $5.28), each broken down further by transfer size, outstanding I/Os and number of streams]
  • 28. SPC-2 — Full Disclosure Report (1) [slide shows the FDR capacity pages: Storage Hierarchy Ratios and Storage Capacity Charts. Of 64,013.113 GB Physical/Configured Storage Capacity, 22,284.902 GB (34.81%) is data capacity, 1,392.806 GB (2.18%) RAID 5 data protection and 23,970.195 GB (37.45%) overhead & metadata; Addressable Storage Capacity and ASU Capacity are both 20,615.843 GB (32.21%), with 1,669.059 GB (2.61%) unused data capacity and no global storage overhead or spares]
  • 29. SPC-2 — Full Disclosure Report (2) [slide shows the FDR result pages: Large File Processing average data rates per test run, scaling from a single stream up to 184 streams (e.g. Write 1024 KiB from 2,326.81 MB/s at 1 stream to 35,532.23 MB/s at 40 streams; Read/Write 256 KiB up to 59,810.01 MB/s at 184 streams), and Large Database Query average response times, from 0.11 ms (64 KiB transfers, 1 outstanding I/O, 1 stream) up to 2.66 ms (1024 KiB transfers, 4 outstanding I/Os, 32 streams)]
  • 30. SPC-1/2 Energy Extension ‣ complete measurement cycle ≥ 3 days ‣ temperature measurement ➛ at the start of the idle tests ➛ during the last minute of the load test ‣ RMS ≙ quadratic mean (root mean square)
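Spelling out the quadratic mean (my addition, not on the slide): for N sampled power values P_i, the RMS value is

\[
P_{\mathrm{RMS}} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} P_i^2}
\]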
  • 31. SPC — Pricing (in the report) ‣ hardware, software, any additional components required for storage functionality, 3 years of support, all applicable fees (taxes, customs duties etc.) ‣ excluded: hardware in the benchmark setup without a storage function ➛ the servers that generate the workload ➛ possibly HBAs, FC switches, cabling ➛ freight/packaging ‣ project pricing is not allowed (»individually negotiated«) ➛ how meaningful are $/IOPS or $/MBPS then? ‣ support ≙ 4h response time + 4h on-site service ➛ on-site = spare part and/or technician
  • 32. Agenda ‣ Why benchmarks at all? Significance, challenges, I/O stack, tools ‣ Storage Performance Council SPC-1, SPC-2, industry standard, terminology, reports ‣And what does all this mean for day-to-day operations? Write your own benchmarks? Benchmarks for procurement? Crystal ball
  • 33. Build SPC-1/2 yourself? ‣ yes, this works, and in principle it is even allowed ➛ development effort is substantial ➛ for official benchmark submissions the setup must be approved by an auditor

$ cat spc1-preflight.vdbench
***
*** vdbench parameter file to emulate an SPC-1 workload
***
** storage definitions **
sd=asu11,lun=/dev/rdsk/c25t2100000E1E19FB51d0s2
sd=asu12,lun=/dev/rdsk/c25t2100000E1E19FB51d13s2
sd=asu21,lun=/dev/rdsk/c26t2100000E1E19F170d32s2
sd=asu22,lun=/dev/rdsk/c26t2100000E1E19F240d68s2
sd=asu31,lun=/dev/rdsk/c27t2100000E1E19F5A1d29s2
sd=asu32,lun=/dev/rdsk/c27t2100000E1E19FB71d39s2
sd=asu41,lun=/dev/rdsk/c28t2100000E1E19F1B1d21s2
sd=asu42,lun=/dev/rdsk/c28t2100000E1E19F261d8s2
** workload definitions **
wd=asu111,sd=asu11,rdpct=50,xfersize=4k,skew=1
wd=asu112,sd=asu11,rdpct=50,xfersize=4k,skew=6,range=(15,20)
wd=asu113,sd=asu11,rdpct=100,xfersize=(8k,40,16k,24,32k,20,64k,8,128k,8),skew=2,range=(40,50)
wd=asu114,sd=asu11,rdpct=50,xfersize=4k,skew=5,range=(70,75)
wd=asu121,sd=asu12,rdpct=30,xfersize=4k,skew=1
wd=asu122,sd=asu12,rdpct=30,xfersize=4k,skew=2,range=(47,52)
wd=asu123,sd=asu12,rdpct=100,xfersize=(8k,40,16k,24,32k,20,64k,8,128k,8),skew=1,range=(40,50)
wd=asu131,sd=asu13,rdpct=0,xfersize=(8k,40,16k,24,32k,20,64k,8,128k,8),skew=7,range=(35,65)
…
** run definition (raw I/O) **
rd=spc1emu,wd=(asu111,asu112,asu113,asu114,asu121,…,),iorate=max,elapsed=300
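For completeness, running such a parameter file is a one-liner (my sketch, not from the slide; assumes an unpacked vdbench distribution on the load-generating host):

$ # replay the emulated SPC-1 mix and collect results under ./out
$ ./vdbench -f spc1-preflight.vdbench -o out
$ # out/summary.html then links per-interval IOPS and response times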
  • 38. Benchmarks in storage procurement? ‣ How precisely do you know your company's required ➛ IOPS and MBPS, ➛ as a function of I/O block size, ➛ with the read/write distribution? How should SPC numbers be judged against that? ‣ there are no universally valid IOPS patterns ➛ each company has its own »fingerprint«, depending on many factors and boundary conditions ‣ vendors always present »the biggest number that can possibly be produced« ➛ by tuning the benchmark options ‣ solutions? ➛ an extensive, long-running PoC (costly) ➛ flexible storage that scales in every direction (a »jack of all trades«)
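A first step toward that fingerprint is to sample the block devices under the real application (my sketch, not from the talk; assumes a Linux host with the sysstat package, with sdb standing in for the device under load):

$ # one line per device every 60 s: r/s + w/s give the IOPS mix,
$ # rkB/s + wkB/s the throughput, rareq-sz/wareq-sz the average I/O sizes
$ # (older sysstat versions report a combined avgrq-sz column instead)
$ iostat -dxk 60 sdb

Collected over days or weeks, this yields a first approximation of the IOPS/MBPS profile and read/write share; for a genuine block-size distribution, tracing tools such as blktrace would be needed.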
  • 39. Crystal ball — what does the future hold? ‣ cloud storage (public and private) ➛ lots of software involved, several abstraction layers ➛ COSbench ‣ solid state memory (NAND flash, Optane and others) ➛ no more mechanics slowing things down ➛ application debugging can become an issue once the bottleneck suddenly is no longer the storage (e.g. complex or »broken« SQL queries) Bottlenecks only shift around the system; they never disappear.
  • 40. Sources and further learning ‣ Specifications of the SPC-1/2 benchmarks. https://spcresults.org/benchmarks ‣ Avishay Traeger et al.: A Nine Year Study of File System and Storage Benchmarking. https://www.fsl.cs.sunysb.edu/docs/fsbench/fsbench.pdf ‣ Brendan Gregg: Broken Linux Performance Tools. SCALE 14x, 2016. https://www.youtube.com/watch?v=OPio8V-z03c ‣ Raj Jain: The Art of Computer Systems Performance Analysis. John Wiley & Sons, Inc., 1991