Tiger Lake CPU Security Mitigations

Tests for a future article.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2010267-FI-MIT49760730

Test categories represented in this comparison: Web Browsers (1 test), Timed Code Compilation (3 tests), C/C++ Compiler Tests (5 tests), CPU Massive (11 tests), Creator Workloads (12 tests), Database Test Suite (5 tests), Disk Test Suite (2 tests), Go Language Tests (2 tests), HPC - High Performance Computing (7 tests), Imaging (7 tests), Java (2 tests), Common Kernel Benchmarks (8 tests), Machine Learning (6 tests), Multi-Core (4 tests), Networking Test Suite (2 tests), NVIDIA GPU Compute (2 tests), Productivity (5 tests), Programmer / Developer System Benchmarks (8 tests), Python (2 tests), Server (5 tests), Server CPU Tests (9 tests), Single-Threaded (5 tests), Speech (3 tests), and Telephony (3 tests).

Run Management

Highlight
Result
Hide
Result
Result
Identifier
View Logs
Performance Per
Dollar
Date
Run
  Test
  Duration
Default
October 22 2020
  11 Hours, 52 Minutes
mitigations=off
October 23 2020
  12 Hours, 28 Minutes
Ice Lake: Default
October 24 2020
  15 Hours, 39 Minutes
Ice Lake: mitigations=off
October 25 2020
  18 Hours, 8 Minutes
Invert Hiding All Results Option
  14 Hours, 32 Minutes


System Details

Tiger Lake system (runs "Default" and "mitigations=off"):
Processor: Intel Core i7-1165G7 @ 4.70GHz (4 Cores / 8 Threads)
Motherboard: Dell 0GG9PT (1.0.3 BIOS)
Chipset: Intel Tiger Lake-LP
Memory: 16GB
Disk: Kioxia KBG40ZNS256G NVMe 256GB
Graphics: Intel UHD 3GB (1300MHz)
Audio: Realtek ALC289
Network: Intel Wi-Fi 6 AX201
OS: Ubuntu 20.10
Kernel: 5.8.0-25-generic (x86_64)
Desktop: GNOME Shell 3.38.1
Display Server: X Server 1.20.9
Display Driver: modesetting 1.20.9
OpenGL: 4.6 Mesa 20.2.1
Vulkan: 1.2.145
Compiler: GCC 10.2.0
File-System: ext4
Screen Resolution: 1920x1200

Ice Lake system (runs "Ice Lake: Default" and "Ice Lake: mitigations=off") differs in:
Processor: Intel Core i7-1065G7 @ 3.90GHz (4 Cores / 8 Threads)
Motherboard: Dell 06CDVY (1.0.9 BIOS)
Chipset: Intel Device 34ef
Disk: Toshiba KBG40ZPZ512G NVMe 512GB
Graphics: Intel Iris Plus G7 3GB (1100MHz)
Network: Intel Killer Wi-Fi 6 AX1650i 160MHz
The remaining component entries (memory, OS, kernel, desktop, display stack, OpenGL, Vulkan, compiler, file-system, screen resolution) are listed the same as the Tiger Lake system.

Compiler Details
- --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Disk Details
- NONE / errors=remount-ro,relatime,rw

Processor Details
- Default: Scaling Governor: intel_pstate powersave - CPU Microcode: 0x60 - Thermald 2.3
- mitigations=off: Scaling Governor: intel_pstate powersave - CPU Microcode: 0x60 - Thermald 2.3
- Ice Lake: Default: Scaling Governor: intel_pstate powersave - CPU Microcode: 0x78 - Thermald 2.3
- Ice Lake: mitigations=off: Scaling Governor: intel_pstate powersave - CPU Microcode: 0x78 - Thermald 2.3

Java Details
- OpenJDK Runtime Environment (build 11.0.9+10-post-Ubuntu-0ubuntu1)

Python Details
- Python 3.8.6

Security Details
- Default: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
- mitigations=off: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Vulnerable + spectre_v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers + spectre_v2: Vulnerable IBPB: disabled STIBP: disabled + srbds: Not affected + tsx_async_abort: Not affected
- Ice Lake: Default: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
- Ice Lake: mitigations=off: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Vulnerable + spectre_v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers + spectre_v2: Vulnerable IBPB: disabled STIBP: disabled + srbds: Not affected + tsx_async_abort: Not affected
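
The per-run Security Details above are the kernel's own reporting from /sys/devices/system/cpu/vulnerabilities. A minimal Python sketch for gathering the same information on another machine (the sysfs path is standard on recent Linux kernels; nothing here is Phoronix-specific):

    import pathlib

    VULN_DIR = pathlib.Path("/sys/devices/system/cpu/vulnerabilities")

    def read_mitigation_status():
        """Return {vulnerability name: kernel-reported status} for this machine."""
        status = {}
        for entry in sorted(VULN_DIR.iterdir()):
            # Each file holds a one-line status such as "Not affected" or
            # "Mitigation of Enhanced IBRS ...".
            status[entry.name] = entry.read_text().strip()
        return status

    if __name__ == "__main__":
        for name, state in read_mitigation_status().items():
            print(f"{name}: {state}")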

Result Overview (Phoronix Test Suite / OpenBenchmarking.org): relative performance of the four runs (Default, mitigations=off, Ice Lake: Default, Ice Lake: mitigations=off) across SQLite, LevelDB, FS-Mark, ctx_clock, eSpeak-NG Speech Engine, Sockperf, Stress-NG, Zstd Compression, Timed Apache Compilation, Timed Linux Kernel Compilation, DaCapo Benchmark, Timed GDB GNU Debugger Compilation, G'MIC, Renaissance, ASTC Encoder, LibreOffice, GEGL, RawTherapee, WireGuard + Linux Networking Stack Stress Test, RNNoise, TensorFlow Lite, Darktable, Mobile Neural Network, OSBench, SQLite Speedtest, PyBench, PyPerformance, Selenium, Facebook RocksDB, librsvg, GIMP, LibRaw, NCNN, Tesseract OCR, DeepSpeech, Ethr, Git, GNU Octave Benchmark, and Caffe.

Detailed per-test result table: individual results for Default, mitigations=off, Ice Lake: Default, and Ice Lake: mitigations=off across all of the benchmarks above, including SQLite, FS-Mark, Ethr, WireGuard, Sockperf, OSBench, DaCapo, Renaissance, Zstd Compression, LibRaw, the timed Apache/GDB/Linux kernel compilations, DeepSpeech, eSpeak-NG, RNNoise, LevelDB, TensorFlow Lite, ASTC Encoder, SQLite Speedtest, Darktable, GEGL, GIMP, G'MIC, LibreOffice, GNU Octave, RawTherapee, librsvg, Stress-NG, Caffe, Mobile Neural Network, NCNN, ctx_clock, Facebook RocksDB, PyBench, PyPerformance, Selenium, Git, Tesseract OCR, and InfluxDB. Individual per-test results follow.

SQLite

This is a simple benchmark of SQLite. At present this test profile just measures the time to perform a pre-defined number of insertions on an indexed database. Learn more via the OpenBenchmarking.org test page.
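
As a rough illustration of what this test profile measures (not the actual pts/sqlite harness), a minimal sketch timing a fixed number of insertions into an indexed SQLite table with Python's built-in sqlite3 module; the row count, schema, and per-insert commits are arbitrary choices:

    import sqlite3, time

    def time_indexed_inserts(db_path="bench.db", rows=1000):
        con = sqlite3.connect(db_path)
        con.execute("CREATE TABLE IF NOT EXISTS t (id INTEGER, payload TEXT)")
        con.execute("CREATE INDEX IF NOT EXISTS idx_t_id ON t (id)")
        start = time.perf_counter()
        for i in range(rows):
            con.execute("INSERT INTO t VALUES (?, ?)", (i, "x" * 64))
            con.commit()  # commit per insert so index/journal work is not batched away
        con.close()
        return time.perf_counter() - start

    if __name__ == "__main__":
        print(f"{time_indexed_inserts():.2f} s")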

SQLite 3.30.1 - Threads / Copies: 1 (Seconds, Fewer Is Better)
Ice Lake: mitigations=off: 36.47 (SE +/- 1.60, N = 15; Min 31.45 / Max 55.81)
Ice Lake: Default: 32.59 (SE +/- 0.35, N = 15; Min 31.64 / Max 35.94)
mitigations=off: 63.99 (SE +/- 0.54, N = 12; Min 58.99 / Max 65.38)
Default: 29.25 (SE +/- 0.08, N = 3; Min 29.1 / Max 29.37)
1. (CC) gcc options: -O2 -lz -lm -ldl -lpthread

SQLite 3.30.1 - Threads / Copies: 8 (Seconds, Fewer Is Better)
Ice Lake: mitigations=off: 107.50 (SE +/- 0.90, N = 3; Min 105.79 / Max 108.84)
Ice Lake: Default: 106.75 (SE +/- 1.42, N = 3; Min 103.9 / Max 108.2)
mitigations=off: 256.20 (SE +/- 0.46, N = 3; Min 255.39 / Max 256.97)
Default: 99.14 (SE +/- 0.09, N = 3; Min 99.04 / Max 99.32)
1. (CC) gcc options: -O2 -lz -lm -ldl -lpthread

FS-Mark

FS_Mark is designed to test a system's file-system performance. Learn more via the OpenBenchmarking.org test page.
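
For context on the workload, a hedged sketch of the kind of operation FS-Mark performs (create many 1MB files, fsync each one, report files/s); the file count, size, and target directory below are illustrative rather than FS-Mark's exact parameters:

    import os, time

    def create_files(directory="fsmark-tmp", count=100, size_mb=1):
        os.makedirs(directory, exist_ok=True)
        payload = b"\0" * (size_mb * 1024 * 1024)
        start = time.perf_counter()
        for i in range(count):
            path = os.path.join(directory, f"file{i:06d}")
            with open(path, "wb") as f:
                f.write(payload)
                f.flush()
                os.fsync(f.fileno())  # force the data to stable storage
        elapsed = time.perf_counter() - start
        return count / elapsed        # files per second

    if __name__ == "__main__":
        print(f"{create_files():.1f} files/s")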

FS-Mark 3.3 - Test: 1000 Files, 1MB Size (Files/s, More Is Better)
Ice Lake: mitigations=off: 215.2 (SE +/- 13.14, N = 15; Min 136.8 / Max 265.1)
Ice Lake: Default: 231.0 (SE +/- 13.94, N = 12; Min 132.2 / Max 263.3)
mitigations=off: 71.4 (SE +/- 0.99, N = 15; Min 60.9 / Max 76.1)
Default: 274.3 (SE +/- 1.18, N = 3; Min 272.2 / Max 276.3)
1. (CC) gcc options: -static

FS-Mark 3.3 - Test: 5000 Files, 1MB Size, 4 Threads (Files/s, More Is Better)
Ice Lake: mitigations=off: 72.0 (SE +/- 18.91, N = 9; Min 36.8 / Max 216)
Ice Lake: Default: 189.6 (SE +/- 27.80, N = 10; Min 63.2 / Max 290)
mitigations=off: 106.1 (SE +/- 3.97, N = 12; Min 76 / Max 115.5)
Default: 186.2 (SE +/- 45.40, N = 9; Min 87.9 / Max 418.8)
1. (CC) gcc options: -static

FS-Mark 3.3 - Test: 4000 Files, 32 Sub Dirs, 1MB Size (Files/s, More Is Better)
Ice Lake: mitigations=off: 50.6 (SE +/- 3.42, N = 12; Min 38.9 / Max 77.4)
Ice Lake: Default: 39.4 (SE +/- 1.38, N = 15; Min 32.2 / Max 51.1)
mitigations=off: 70.0 (SE +/- 1.02, N = 15; Min 61.4 / Max 75.7)
Default: 61.4 (SE +/- 0.32, N = 3; Min 61 / Max 62)
1. (CC) gcc options: -static

Ethr

Ethr is a cross-platform Golang-written network performance measurement tool developed by Microsoft that is capable of testing multiple protocols and different measurements. Learn more via the OpenBenchmarking.org test page.

Ethr 2019-01-02 - Server Address: localhost - Protocol: HTTP - Test: Bandwidth - Threads: 1 (Mbits/sec, More Is Better)
Ice Lake: mitigations=off: 1688.07 (SE +/- 1.15, N = 3; Min 1685.79 / Max 1689.47; test-reported MIN 1670 / MAX 1700)
Ice Lake: Default: 1645.61 (SE +/- 3.86, N = 3; Min 1637.89 / Max 1649.47; test-reported MIN 1630 / MAX 1660)
mitigations=off: 1397.02 (SE +/- 1.23, N = 3; Min 1394.74 / Max 1398.95; test-reported MIN 1390 / MAX 1410)
Default: 1384.21 (SE +/- 0.30, N = 3; Min 1383.68 / Max 1384.74; test-reported MIN 1380 / MAX 1400)

Ethr 2019-01-02 - Server Address: localhost - Protocol: TCP - Test: Connections/s - Threads: 1 (Connections/sec, More Is Better)
Ice Lake: mitigations=off: 12273 (SE +/- 37.12, N = 3; Min 12200 / Max 12320)
Ice Lake: Default: 11837 (SE +/- 44.10, N = 3; Min 11770 / Max 11920)
mitigations=off: 12367 (SE +/- 95.63, N = 3; Min 12210 / Max 12540)
Default: 12183 (SE +/- 17.64, N = 3; Min 12150 / Max 12210)

WireGuard + Linux Networking Stack Stress Test

This is a benchmark of the WireGuard secure VPN tunnel and Linux networking stack stress test. The test runs on the local host but does require root permissions to run. It works by creating three namespaces: ns0 has a loopback device, while ns1 and ns2 each have WireGuard devices, and those two WireGuard devices send traffic through the loopback device of ns0. The end result is that the test winds up exercising encryption and decryption at the same time -- a pretty CPU- and scheduler-heavy workload. Learn more via the OpenBenchmarking.org test page.

WireGuard + Linux Networking Stack Stress Test (Seconds, Fewer Is Better)
Ice Lake: mitigations=off: 323.55 (SE +/- 7.10, N = 9; Min 269.42 / Max 338.06)
Ice Lake: Default: 334.67 (SE +/- 6.73, N = 9; Min 285.67 / Max 351.26)
mitigations=off: 281.83 (SE +/- 3.08, N = 3; Min 276.34 / Max 287)
Default: 273.14 (SE +/- 2.00, N = 3; Min 269.28 / Max 276.01)

Sockperf

This is a network socket API performance benchmark. Learn more via the OpenBenchmarking.org test page.
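
For context on the "Latency Ping Pong" figures below, a small Python sketch of a loopback TCP ping-pong measurement (single-byte messages, round-trip time halved). This only approximates what sockperf does; the iteration count and socket options are arbitrary:

    import socket, threading, time

    def echo_server(sock):
        conn, _ = sock.accept()
        with conn:
            while data := conn.recv(1):
                conn.sendall(data)

    def ping_pong(iterations=10_000):
        srv = socket.socket()
        srv.bind(("127.0.0.1", 0))
        srv.listen(1)
        threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

        cli = socket.socket()
        cli.connect(srv.getsockname())
        cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        start = time.perf_counter()
        for _ in range(iterations):
            cli.sendall(b"x")
            cli.recv(1)
        elapsed = time.perf_counter() - start
        return elapsed / iterations / 2 * 1e6  # rough one-way latency in usec

    if __name__ == "__main__":
        print(f"{ping_pong():.2f} usec")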

Sockperf 3.4 - Test: Throughput (Messages Per Second, More Is Better)
Ice Lake: mitigations=off: 582151 (SE +/- 6985.56, N = 5; Min 559547 / Max 596407)
Ice Lake: Default: 554537 (SE +/- 2421.79, N = 5; Min 547029 / Max 560837)
mitigations=off: 738032 (SE +/- 8183.20, N = 5; Min 713039 / Max 752112)
Default: 767117 (SE +/- 9040.07, N = 25; Min 632628 / Max 801744)
1. (CXX) g++ options: --param -O3 -rdynamic -ldl -lpthread

Sockperf 3.4 - Test: Latency Ping Pong (usec, Fewer Is Better)
Ice Lake: mitigations=off: 3.557 (SE +/- 0.034, N = 25; Min 3.44 / Max 4.09)
Ice Lake: Default: 3.721 (SE +/- 0.020, N = 5; Min 3.68 / Max 3.79)
mitigations=off: 2.804 (SE +/- 0.028, N = 8; Min 2.67 / Max 2.88)
Default: 2.861 (SE +/- 0.009, N = 5; Min 2.84 / Max 2.89)
1. (CXX) g++ options: --param -O3 -rdynamic -ldl -lpthread

OSBench

OSBench is a collection of micro-benchmarks for measuring operating system primitives like time to create threads/processes, launching programs, creating files, and memory allocation. Learn more via the OpenBenchmarking.org test page.
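
A minimal sketch of the "Create Processes" idea (average microseconds to fork and reap a child), mirroring the spirit of OSBench rather than its exact C implementation; the iteration count is arbitrary and the snippet is Linux-only:

    import os, time

    def us_per_process(iterations=2000):
        start = time.perf_counter()
        for _ in range(iterations):
            pid = os.fork()
            if pid == 0:
                os._exit(0)        # child exits immediately
            os.waitpid(pid, 0)     # parent reaps the child
        elapsed = time.perf_counter() - start
        return elapsed / iterations * 1e6

    if __name__ == "__main__":
        print(f"{us_per_process():.2f} us per event")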

OSBench - Test: Create Processes (us Per Event, Fewer Is Better)
Ice Lake: mitigations=off: 20.06 (SE +/- 0.19, N = 3; Min 19.76 / Max 20.41)
Ice Lake: Default: 20.49 (SE +/- 0.29, N = 4; Min 20.18 / Max 21.36)
mitigations=off: 21.31 (SE +/- 0.51, N = 15; Min 17.93 / Max 24.43)
Default: 23.98 (SE +/- 0.76, N = 15; Min 17.67 / Max 27.16)
1. (CC) gcc options: -lm

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: H2 (msec, Fewer Is Better)
Ice Lake: mitigations=off: 3188 (SE +/- 36.55, N = 20; Min 2873 / Max 3535)
Ice Lake: Default: 3330 (SE +/- 69.83, N = 17; Min 2967 / Max 3895)
mitigations=off: 3434 (SE +/- 37.47, N = 20; Min 2919 / Max 3669)
Default: 3226 (SE +/- 43.85, N = 20; Min 2653 / Max 3443)

DaCapo Benchmark 9.12-MR1 - Java Test: Jython (msec, Fewer Is Better)
Ice Lake: mitigations=off: 4414 (SE +/- 45.07, N = 4; Min 4323 / Max 4537)
Ice Lake: Default: 5101 (SE +/- 93.22, N = 16; Min 4440 / Max 5737)
mitigations=off: 4113 (SE +/- 50.55, N = 5; Min 3939 / Max 4219)
Default: 3843 (SE +/- 38.95, N = 20; Min 3466 / Max 4094)

DaCapo Benchmark 9.12-MR1 - Java Test: Tradesoap (msec, Fewer Is Better)
Ice Lake: mitigations=off: 10805 (SE +/- 112.50, N = 20; Min 9264 / Max 11509)
Ice Lake: Default: 11708 (SE +/- 82.01, N = 18; Min 10761 / Max 12194)
mitigations=off: 8699 (SE +/- 58.57, N = 20; Min 7832 / Max 8985)
Default: 8167 (SE +/- 195.83, N = 16; Min 5299 / Max 8709)

DaCapo Benchmark 9.12-MR1 - Java Test: Tradebeans (msec, Fewer Is Better)
Ice Lake: mitigations=off: 5293 (SE +/- 59.72, N = 20; Min 4492 / Max 5727)
Ice Lake: Default: 5483 (SE +/- 42.46, N = 20; Min 5089 / Max 5824)
mitigations=off: 4222 (SE +/- 65.83, N = 20; Min 3115 / Max 4623)
Default: 4080 (SE +/- 87.78, N = 16; Min 3035 / Max 4455)

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.10.0 - Test: Scala Dotty (ms, Fewer Is Better)
Ice Lake: mitigations=off: 2050.13 (SE +/- 22.98, N = 7; Min 1957.16 / Max 2115.74)
Ice Lake: Default: 2078.59 (SE +/- 47.08, N = 17; Min 1792.18 / Max 2738.08)
mitigations=off: 1708.54 (SE +/- 22.58, N = 5; Min 1654.67 / Max 1770.39)
Default: 1621.40 (SE +/- 18.99, N = 5; Min 1575.81 / Max 1691.01)

Renaissance 0.10.0 - Test: Random Forest (ms, Fewer Is Better)
Ice Lake: mitigations=off: 2633.24 (SE +/- 36.40, N = 15; Min 2350.31 / Max 2852.48)
Ice Lake: Default: 2560.72 (SE +/- 24.37, N = 5; Min 2497.77 / Max 2612.35)
mitigations=off: 2187.74 (SE +/- 21.76, N = 8; Min 2112.39 / Max 2273.88)
Default: 2138.85 (SE +/- 23.75, N = 25; Min 1908.57 / Max 2386.27)

Renaissance 0.10.0 - Test: Apache Spark ALS (ms, Fewer Is Better)
Ice Lake: mitigations=off: 3873.67 (SE +/- 43.24, N = 18; Min 3539.65 / Max 4153.31)
Ice Lake: Default: 3982.65 (SE +/- 36.06, N = 20; Min 3638.03 / Max 4222.22)
mitigations=off: 3388.54 (SE +/- 34.32, N = 25; Min 3003.66 / Max 3702.41)
Default: 3187.11 (SE +/- 17.71, N = 5; Min 3132.72 / Max 3231.56)

Renaissance 0.10.0 - Test: Twitter HTTP Requests (ms, Fewer Is Better)
Ice Lake: mitigations=off: 3525.29 (SE +/- 33.64, N = 25; Min 3285.93 / Max 3843.65)
Ice Lake: Default: 3842.16 (SE +/- 45.02, N = 5; Min 3669.4 / Max 3918.64)
mitigations=off: 2533.81 (SE +/- 11.39, N = 5; Min 2511.41 / Max 2569.64)
Default: 2487.37 (SE +/- 10.30, N = 5; Min 2462.06 / Max 2511.52)

Renaissance 0.10.0 - Test: In-Memory Database Shootout (ms, Fewer Is Better)
Ice Lake: mitigations=off: 5821.31 (SE +/- 280.74, N = 20; Min 4991.1 / Max 10800.32)
Ice Lake: Default: 5633.06 (SE +/- 93.41, N = 20; Min 4987.14 / Max 6984.36)
mitigations=off: 4991.34 (SE +/- 90.54, N = 25; Min 4239.89 / Max 6224.03)
Default: 4605.17 (SE +/- 50.76, N = 5; Min 4479.16 / Max 4744.79)

Renaissance 0.10.0 - Test: Akka Unbalanced Cobwebbed Tree (ms, Fewer Is Better)
Ice Lake: mitigations=off: 13670.59 (SE +/- 120.68, N = 15; Min 12835.41 / Max 14622.75)
Ice Lake: Default: 13943.87 (SE +/- 137.47, N = 9; Min 13275.84 / Max 14552.3)
mitigations=off: 12954.09 (SE +/- 125.63, N = 5; Min 12692.41 / Max 13361.67)
Default: 12529.40 (SE +/- 116.84, N = 5; Min 12196.8 / Max 12890.28)

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.
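
A rough sketch of how one could measure level-3 compression throughput by shelling out to the zstd command-line tool; the test profile's exact invocation may differ, and the input path below ('ubuntu.iso') is a placeholder you must supply:

    import os, subprocess, time

    def zstd_level3_throughput(input_path, output_path="out.zst"):
        size_mb = os.path.getsize(input_path) / (1024 * 1024)
        start = time.perf_counter()
        # -3 = compression level 3, -f = overwrite, -o = output file
        subprocess.run(["zstd", "-3", "-f", input_path, "-o", output_path], check=True)
        elapsed = time.perf_counter() - start
        return size_mb / elapsed  # MB/s of input consumed

    if __name__ == "__main__":
        print(f"{zstd_level3_throughput('ubuntu.iso'):.1f} MB/s")  # hypothetical sample file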

Zstd Compression 1.4.5 - Compression Level: 3 (MB/s, More Is Better)
Ice Lake: mitigations=off: 3165.2 (SE +/- 7.13, N = 3; Min 3152.8 / Max 3177.5)
Ice Lake: Default: 3179.3 (SE +/- 5.57, N = 3; Min 3169.5 / Max 3188.8)
mitigations=off: 4093.1 (SE +/- 4.52, N = 3; Min 4086.6 / Max 4101.8)
Default: 4153.2 (SE +/- 13.27, N = 3; Min 4127.9 / Max 4172.8)
1. (CC) gcc options: -O3 -pthread -lz -llzma

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.

LibRaw 0.20 - Post-Processing Benchmark (Mpix/sec, More Is Better)
Ice Lake: mitigations=off: 22.92 (SE +/- 0.31, N = 3; Min 22.52 / Max 23.54)
Ice Lake: Default: 22.96 (SE +/- 0.16, N = 3; Min 22.65 / Max 23.21)
mitigations=off: 26.88 (SE +/- 0.26, N = 3; Min 26.59 / Max 27.39)
Default: 26.64 (SE +/- 0.41, N = 15; Min 23.58 / Max 28.27)
1. (CXX) g++ options: -O2 -fopenmp -ljpeg -lz -lm

Timed Apache Compilation

This test times how long it takes to build the Apache HTTPD web server. Learn more via the OpenBenchmarking.org test page.
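
For a sense of what the "Time To Compile" numbers represent, a hedged sketch that times a parallel make of an already-extracted, already-configured source tree (the actual test profile downloads and configures httpd itself; the directory name below is a placeholder):

    import os, subprocess, time

    def time_parallel_build(source_dir):
        """Time `make -j<nproc>` in an already-configured source tree."""
        jobs = str(os.cpu_count())
        start = time.perf_counter()
        subprocess.run(["make", "-j", jobs], cwd=source_dir, check=True)
        return time.perf_counter() - start

    if __name__ == "__main__":
        # 'httpd-2.4.41' is a placeholder path to a configured Apache source tree.
        print(f"{time_parallel_build('httpd-2.4.41'):.2f} s")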

Timed Apache Compilation 2.4.41 - Time To Compile (Seconds, Fewer Is Better)
Ice Lake: mitigations=off: 43.62 (SE +/- 0.48, N = 3; Min 42.68 / Max 44.26)
Ice Lake: Default: 44.12 (SE +/- 0.64, N = 4; Min 42.64 / Max 45.73)
mitigations=off: 35.08 (SE +/- 0.32, N = 3; Min 34.46 / Max 35.47)
Default: 33.65 (SE +/- 0.30, N = 3; Min 33.05 / Max 33.95)

Timed GDB GNU Debugger Compilation

This test times how long it takes to build the GNU Debugger (GDB) in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed GDB GNU Debugger Compilation 9.1 - Time To Compile (Seconds, Fewer Is Better)
Ice Lake: mitigations=off: 217.67 (SE +/- 2.35, N = 3; Min 214.12 / Max 222.1)
Ice Lake: Default: 212.57 (SE +/- 0.47, N = 3; Min 211.69 / Max 213.28)
mitigations=off: 173.77 (SE +/- 0.29, N = 3; Min 173.21 / Max 174.18)
Default: 168.62 (SE +/- 0.25, N = 3; Min 168.36 / Max 169.12)

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.4 - Time To Compile (Seconds, Fewer Is Better)
Ice Lake: mitigations=off: 329.50 (SE +/- 1.55, N = 3; Min 327.4 / Max 332.53)
Ice Lake: Default: 324.81 (SE +/- 0.97, N = 3; Min 323.13 / Max 326.5)
mitigations=off: 255.55 (SE +/- 0.61, N = 3; Min 254.85 / Max 256.77)
Default: 252.14 (SE +/- 0.66, N = 3; Min 251.38 / Max 253.46)

DeepSpeech

Mozilla DeepSpeech is a speech-to-text engine powered by TensorFlow for machine learning and derived from Baidu's Deep Speech research paper. This test profile times the speech-to-text process for a roughly three minute audio recording. Learn more via the OpenBenchmarking.org test page.

DeepSpeech 0.6 - Acceleration: CPU (Seconds, Fewer Is Better)
Ice Lake: mitigations=off: 67.98 (SE +/- 0.12, N = 3; Min 67.82 / Max 68.22)
Ice Lake: Default: 68.73 (SE +/- 0.11, N = 3; Min 68.61 / Max 68.95)
mitigations=off: 61.80 (SE +/- 0.16, N = 3; Min 61.48 / Max 61.99)
Default: 60.88 (SE +/- 0.09, N = 3; Min 60.7 / Max 60.99)

eSpeak-NG Speech Engine

This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.

eSpeak-NG Speech Engine 20200907 - Text-To-Speech Synthesis (Seconds, Fewer Is Better)
Ice Lake: mitigations=off: 38.96 (SE +/- 0.99, N = 20; Min 33.06 / Max 50.01)
Ice Lake: Default: 35.80 (SE +/- 0.76, N = 20; Min 31.24 / Max 44.9)
mitigations=off: 39.84 (SE +/- 1.01, N = 20; Min 31.81 / Max 48.79)
Default: 28.04 (SE +/- 0.33, N = 4; Min 27.05 / Max 28.44)
1. (CC) gcc options: -O2 -std=c99

RNNoise

RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26 minute long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.

RNNoise 2020-06-28 (Seconds, Fewer Is Better)
Ice Lake: mitigations=off: 25.71 (SE +/- 0.19, N = 3; Min 25.33 / Max 25.98)
Ice Lake: Default: 25.53 (SE +/- 0.01, N = 3; Min 25.51 / Max 25.55)
mitigations=off: 21.69 (SE +/- 0.32, N = 3; Min 21.09 / Max 22.2)
Default: 21.01 (SE +/- 0.06, N = 3; Min 20.9 / Max 21.09)
1. (CC) gcc options: -O2 -pedantic -fvisibility=hidden

LevelDB

LevelDB is a key-value storage library developed by Google that can make use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.22 - Benchmark: Fill Sync (MB/s, More Is Better)
Ice Lake: mitigations=off: 0.1 (SE +/- 0.00, N = 12; Min 0.1 / Max 0.1)
Ice Lake: Default: 0.2 (SE +/- 0.00, N = 3; Min 0.2 / Max 0.2)
mitigations=off: 0.1 (SE +/- 0.00, N = 3; Min 0.1 / Max 0.1)
Default: 0.3 (SE +/- 0.01, N = 15; Min 0.2 / Max 0.3)
1. (CXX) g++ options: -O3 -lsnappy -lpthread

LevelDB 1.22 - Benchmark: Fill Sync (Microseconds Per Op, Fewer Is Better)
Ice Lake: mitigations=off: 11614.36 (SE +/- 1222.10, N = 15; Min 8024.55 / Max 21514.35)
Ice Lake: Default: 4522.21 (SE +/- 260.74, N = 3; Min 4112.65 / Max 5006.55)
mitigations=off: 8406.10 (SE +/- 58.40, N = 3; Min 8302.5 / Max 8504.6)
Default: 3493.28 (SE +/- 4.51, N = 15; Min 3458.68 / Max 3520.32)
1. (CXX) g++ options: -O3 -lsnappy -lpthread

LevelDB 1.22 - Benchmark: Random Delete (Microseconds Per Op, Fewer Is Better)
Ice Lake: mitigations=off: 23.95 (SE +/- 0.26, N = 3; Min 23.63 / Max 24.46)
Ice Lake: Default: 23.32 (SE +/- 0.02, N = 3; Min 23.27 / Max 23.35)
mitigations=off: 25.50 (SE +/- 0.29, N = 12; Min 22.29 / Max 25.95)
Default: 24.73 (SE +/- 0.35, N = 12; Min 21.14 / Max 26.11)
1. (CXX) g++ options: -O3 -lsnappy -lpthread

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
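
A hedged sketch of measuring average inference time with the TensorFlow Lite Python interpreter; it assumes the tflite_runtime package is installed (tensorflow's tf.lite.Interpreter works the same way) and the model filename is a placeholder, not the exact model files used by the test profile:

    import time
    import numpy as np
    from tflite_runtime.interpreter import Interpreter

    def average_inference_us(model_path, runs=50):
        interp = Interpreter(model_path=model_path)
        interp.allocate_tensors()
        inp = interp.get_input_details()[0]
        # Feed random data of the right shape/dtype; real runs use representative inputs.
        dummy = np.random.random_sample(inp["shape"]).astype(inp["dtype"])
        start = time.perf_counter()
        for _ in range(runs):
            interp.set_tensor(inp["index"], dummy)
            interp.invoke()
        return (time.perf_counter() - start) / runs * 1e6

    if __name__ == "__main__":
        # "mobilenet_v1.tflite" is a placeholder model file.
        print(f"{average_inference_us('mobilenet_v1.tflite'):.0f} us per inference")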

TensorFlow Lite 2020-08-23 - Model: SqueezeNet (Microseconds, Fewer Is Better)
Ice Lake: mitigations=off: 646988 (SE +/- 12430.05, N = 14; Min 490679 / Max 677309)
Ice Lake: Default: 674153 (SE +/- 5183.16, N = 15; Min 611223 / Max 682946)
mitigations=off: 588506 (SE +/- 6328.07, N = 3; Min 575850 / Max 594929)
Default: 564437 (SE +/- 4132.54, N = 3; Min 556234 / Max 569416)

TensorFlow Lite 2020-08-23 - Model: Inception V4 (Microseconds, Fewer Is Better)
Ice Lake: mitigations=off: 9727077 (SE +/- 15084.72, N = 3; Min 9707650 / Max 9756780)
Ice Lake: Default: 9847253 (SE +/- 2111.67, N = 3; Min 9843520 / Max 9850830)
mitigations=off: 8582943 (SE +/- 5358.28, N = 3; Min 8575380 / Max 8593300)
Default: 8166907 (SE +/- 4887.54, N = 3; Min 8158390 / Max 8175320)

TensorFlow Lite 2020-08-23 - Model: NASNet Mobile (Microseconds, Fewer Is Better)
Ice Lake: mitigations=off: 495179 (SE +/- 2851.71, N = 3; Min 489486 / Max 498324)
Ice Lake: Default: 494187 (SE +/- 3688.05, N = 3; Min 486812 / Max 497967)
mitigations=off: 430449 (SE +/- 3141.38, N = 3; Min 424167 / Max 433691)
Default: 406430 (SE +/- 2818.20, N = 3; Min 400797 / Max 409416)

TensorFlow Lite 2020-08-23 - Model: Mobilenet Float (Microseconds, Fewer Is Better)
Ice Lake: mitigations=off: 453532 (SE +/- 7243.74, N = 3; Min 439048 / Max 461049)
Ice Lake: Default: 452730 (SE +/- 7063.92, N = 3; Min 438634 / Max 460599)
mitigations=off: 400069 (SE +/- 2425.81, N = 3; Min 395221 / Max 402662)
Default: 377902 (SE +/- 2564.11, N = 3; Min 372792 / Max 380831)

TensorFlow Lite 2020-08-23 - Model: Mobilenet Quant (Microseconds, Fewer Is Better)
Ice Lake: mitigations=off: 456401 (SE +/- 3375.54, N = 3; Min 449652 / Max 459909)
Ice Lake: Default: 455175 (SE +/- 3093.36, N = 3; Min 449010 / Max 458706)
mitigations=off: 394514 (SE +/- 2832.59, N = 3; Min 388866 / Max 397723)
Default: 372045 (SE +/- 2509.46, N = 3; Min 367028 / Max 374674)

TensorFlow Lite 2020-08-23 - Model: Inception ResNet V2 (Microseconds, Fewer Is Better)
Ice Lake: mitigations=off: 8905090 (SE +/- 14703.15, N = 3; Min 8875720 / Max 8921040)
Ice Lake: Default: 8917553 (SE +/- 2473.50, N = 3; Min 8912610 / Max 8920190)
mitigations=off: 7765500 (SE +/- 1537.93, N = 3; Min 7762510 / Max 7767620)
Default: 7382540 (SE +/- 5564.33, N = 3; Min 7371560 / Max 7389600)

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0 - Preset: Fast (Seconds, Fewer Is Better)
Ice Lake: mitigations=off: 8.52 (SE +/- 0.10, N = 14; Min 7.33 / Max 8.82)
Ice Lake: Default: 8.55 (SE +/- 0.12, N = 12; Min 7.22 / Max 8.84)
mitigations=off: 7.10 (SE +/- 0.06, N = 15; Min 6.3 / Max 7.32)
Default: 6.88 (SE +/- 0.06, N = 15; Min 6.13 / Max 7.15)
1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

ASTC Encoder 2.0 - Preset: Medium (Seconds, Fewer Is Better)
Ice Lake: mitigations=off: 14.84 (SE +/- 0.15, N = 8; Min 13.79 / Max 15.01)
Ice Lake: Default: 14.85 (SE +/- 0.15, N = 8; Min 13.77 / Max 15.03)
mitigations=off: 11.88 (SE +/- 0.10, N = 12; Min 10.76 / Max 12.03)
Default: 11.75 (SE +/- 0.08, N = 15; Min 10.7 / Max 12.18)
1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

ASTC Encoder 2.0 - Preset: Thorough (Seconds, Fewer Is Better)
Ice Lake: mitigations=off: 110.28 (SE +/- 0.53, N = 3; Min 109.22 / Max 110.91)
Ice Lake: Default: 110.44 (SE +/- 0.42, N = 3; Min 109.61 / Max 110.87)
mitigations=off: 88.49 (SE +/- 0.39, N = 3; Min 87.71 / Max 88.92)
Default: 87.10 (SE +/- 0.46, N = 3; Min 86.22 / Max 87.75)
1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.

SQLite Speedtest 3.30 - Timed Time - Size 1,000 (Seconds, Fewer Is Better)
Ice Lake: mitigations=off: 59.99 (SE +/- 0.22, N = 3; Min 59.68 / Max 60.41)
Ice Lake: Default: 61.22 (SE +/- 0.96, N = 3; Min 60.13 / Max 63.13)
mitigations=off: 53.14 (SE +/- 0.50, N = 3; Min 52.14 / Max 53.75)
Default: 51.25 (SE +/- 0.29, N = 3; Min 50.68 / Max 51.58)
1. (CC) gcc options: -O2 -ldl -lz -lpthread

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.2.1 - Test: Boat - Acceleration: CPU-only (Seconds, Fewer Is Better)
Ice Lake: mitigations=off: 19.58 (SE +/- 0.22, N = 6; Min 18.46 / Max 19.86)
Ice Lake: Default: 19.55 (SE +/- 0.29, N = 3; Min 18.96 / Max 19.87)
mitigations=off: 17.11 (SE +/- 0.16, N = 9; Min 15.82 / Max 17.33)
Default: 16.49 (SE +/- 0.11, N = 14; Min 15.04 / Max 16.9)

Darktable 3.2.1 - Test: Masskrug - Acceleration: CPU-only (Seconds, Fewer Is Better)
Ice Lake: mitigations=off: 11.571 (SE +/- 0.087, N = 15; Min 10.45 / Max 11.77)
Ice Lake: Default: 11.618 (SE +/- 0.109, N = 12; Min 10.43 / Max 11.76)
mitigations=off: 9.332 (SE +/- 0.139, N = 12; Min 7.81 / Max 9.54)
Default: 9.141 (SE +/- 0.128, N = 13; Min 7.64 / Max 9.51)

Darktable 3.2.1 - Test: Server Rack - Acceleration: CPU-only (Seconds, Fewer Is Better)
Ice Lake: mitigations=off: 0.225 (SE +/- 0.002, N = 3; Min 0.22 / Max 0.23)
Ice Lake: Default: 0.225 (SE +/- 0.001, N = 3; Min 0.22 / Max 0.23)
mitigations=off: 0.200 (SE +/- 0.000, N = 3; Min 0.2 / Max 0.2)
Default: 0.196 (SE +/- 0.002, N = 3; Min 0.19 / Max 0.2)

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Crop (Seconds, Fewer Is Better)
Ice Lake: mitigations=off: 9.921 (SE +/- 0.104, N = 12; Min 9.06 / Max 10.65)
Ice Lake: Default: 9.591 (SE +/- 0.077, N = 15; Min 8.87 / Max 9.79)
mitigations=off: 7.648 (SE +/- 0.062, N = 3; Min 7.57 / Max 7.77)
Default: 7.362 (SE +/- 0.018, N = 3; Min 7.33 / Max 7.39)

GEGL - Operation: Scale (Seconds, Fewer Is Better)
Ice Lake: mitigations=off: 8.360 (SE +/- 0.082, N = 15; Min 7.38 / Max 8.97)
Ice Lake: Default: 8.172 (SE +/- 0.084, N = 12; Min 7.26 / Max 8.36)
mitigations=off: 6.205 (SE +/- 0.051, N = 13; Min 5.68 / Max 6.33)
Default: 6.035 (SE +/- 0.054, N = 15; Min 5.53 / Max 6.35)

GEGL - Operation: Cartoon (Seconds, Fewer Is Better)
Ice Lake: mitigations=off: 97.56 (SE +/- 1.15, N = 3; Min 95.33 / Max 99.11)
Ice Lake: Default: 92.52 (SE +/- 0.77, N = 3; Min 91.14 / Max 93.8)
mitigations=off: 81.68 (SE +/- 0.18, N = 3; Min 81.34 / Max 81.96)
Default: 78.38 (SE +/- 0.33, N = 3; Min 77.77 / Max 78.91)

GEGL - Operation: Reflect (Seconds, Fewer Is Better)
Ice Lake: mitigations=off: 31.26 (SE +/- 0.12, N = 3; Min 31.13 / Max 31.51)
Ice Lake: Default: 30.68 (SE +/- 0.38, N = 3; Min 29.91 / Max 31.1)
mitigations=off: 27.43 (SE +/- 0.35, N = 3; Min 26.75 / Max 27.87)
Default: 26.33 (SE +/- 0.26, N = 3; Min 25.81 / Max 26.63)

GEGL - Operation: Antialias (Seconds, Fewer Is Better)
Ice Lake: mitigations=off: 40.44 (SE +/- 0.31, N = 3; Min 39.81 / Max 40.77)
Ice Lake: Default: 41.11 (SE +/- 0.44, N = 15; Min 40.01 / Max 44.91)
mitigations=off: 36.80 (SE +/- 0.59, N = 3; Min 35.61 / Max 37.46)
Default: 33.87 (SE +/- 0.23, N = 3; Min 33.41 / Max 34.12)

GEGL - Operation: Tile Glass (Seconds, Fewer Is Better)
Ice Lake: mitigations=off: 32.77 (SE +/- 0.37, N = 15; Min 30.89 / Max 35.21)
Ice Lake: Default: 31.43 (SE +/- 0.45, N = 4; Min 30.49 / Max 32.66)
mitigations=off: 28.13 (SE +/- 0.26, N = 3; Min 27.72 / Max 28.6)
Default: 26.74 (SE +/- 0.28, N = 3; Min 26.2 / Max 27.13)

GEGL - Operation: Wavelet Blur (Seconds, Fewer Is Better)
Ice Lake: mitigations=off: 62.58 (SE +/- 0.99, N = 15; Min 59.39 / Max 72.5)
Ice Lake: Default: 59.70 (SE +/- 0.21, N = 3; Min 59.45 / Max 60.11)
mitigations=off: 55.27 (SE +/- 0.71, N = 3; Min 53.94 / Max 56.35)
Default: 50.29 (SE +/- 0.15, N = 3; Min 49.99 / Max 50.46)

GEGL - Operation: Color Enhance (Seconds, Fewer Is Better)
Ice Lake: mitigations=off: 58.33 (SE +/- 0.83, N = 4; Min 56.59 / Max 60.57)
Ice Lake: Default: 58.10 (SE +/- 0.65, N = 6; Min 56.97 / Max 61.13)
mitigations=off: 51.64 (SE +/- 0.59, N = 3; Min 50.76 / Max 52.76)
Default: 48.38 (SE +/- 0.18, N = 3; Min 48.04 / Max 48.65)

GEGL - Operation: Rotate 90 Degrees (Seconds, Fewer Is Better)
Ice Lake: mitigations=off: 44.44 (SE +/- 0.27, N = 3; Min 43.91 / Max 44.75)
Ice Lake: Default: 44.38 (SE +/- 0.05, N = 3; Min 44.3 / Max 44.46)
mitigations=off: 40.71 (SE +/- 0.59, N = 3; Min 39.68 / Max 41.74)
Default: 37.98 (SE +/- 0.19, N = 3; Min 37.61 / Max 38.19)

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program, otherwise on Windows it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.18 - Test: resize (Seconds, Fewer Is Better)
Ice Lake: mitigations=off: 11.475 (SE +/- 0.113, N = 9; Min 10.57 / Max 11.62)
Ice Lake: Default: 11.644 (SE +/- 0.108, N = 12; Min 10.57 / Max 12.01)
mitigations=off: 9.734 (SE +/- 0.100, N = 12; Min 8.66 / Max 9.98)
Default: 9.366 (SE +/- 0.066, N = 15; Min 8.55 / Max 9.72)

GIMP 2.10.18 - Test: rotate (Seconds, Fewer Is Better)
Ice Lake: mitigations=off: 11.566 (SE +/- 0.145, N = 4; Min 11.29 / Max 11.98)
Ice Lake: Default: 11.316 (SE +/- 0.009, N = 3; Min 11.3 / Max 11.33)
mitigations=off: 9.983 (SE +/- 0.024, N = 3; Min 9.95 / Max 10.03)
Default: 9.719 (SE +/- 0.005, N = 3; Min 9.71 / Max 9.73)

GIMP 2.10.18 - Test: auto-levels (Seconds, Fewer Is Better)
Ice Lake: mitigations=off: 12.31 (SE +/- 0.12, N = 3; Min 12.14 / Max 12.55)
Ice Lake: Default: 13.01 (SE +/- 0.15, N = 7; Min 12.14 / Max 13.22)
mitigations=off: 11.67 (SE +/- 0.12, N = 8; Min 10.82 / Max 11.9)
Default: 11.27 (SE +/- 0.11, N = 9; Min 10.5 / Max 11.81)

GIMP 2.10.18 - Test: unsharp-mask (Seconds, Fewer Is Better)
Ice Lake: mitigations=off: 15.74 (SE +/- 0.16, N = 15; Min 14.19 / Max 16.73)
Ice Lake: Default: 15.71 (SE +/- 0.21, N = 5; Min 14.9 / Max 16.09)
mitigations=off: 13.99 (SE +/- 0.14, N = 9; Min 12.96 / Max 14.32)
Default: 13.35 (SE +/- 0.13, N = 9; Min 12.31 / Max 13.64)

G'MIC

G'MIC is an open-source framework for image processing. Learn more via the OpenBenchmarking.org test page.

G'MIC - Test: 2D Function Plotting, 1000 Times (Seconds, Fewer Is Better)
Ice Lake: mitigations=off: 132.09 (SE +/- 1.58, N = 12; Min 126.6 / Max 143.73)
Ice Lake: Default: 131.59 (SE +/- 1.69, N = 4; Min 128.78 / Max 136.1)
mitigations=off: 99.94 (SE +/- 0.53, N = 3; Min 99.02 / Max 100.85)
Default: 96.95 (SE +/- 0.35, N = 3; Min 96.43 / Max 97.61)
1. Version 2.4.5, Copyright (c) 2008-2019, David Tschumperle.

G'MIC - Test: 3D Elevated Function In Random Colors, 100 Times (Seconds, Fewer Is Better)
Ice Lake: mitigations=off: 52.85 (SE +/- 0.02, N = 3; Min 52.81 / Max 52.88)
Ice Lake: Default: 53.96 (SE +/- 0.32, N = 3; Min 53.41 / Max 54.53)
mitigations=off: 46.61 (SE +/- 0.29, N = 3; Min 46.07 / Max 47.05)
Default: 45.55 (SE +/- 0.17, N = 3; Min 45.33 / Max 45.88)
1. Version 2.4.5, Copyright (c) 2008-2019, David Tschumperle.

LibreOffice

Various benchmarking operations with the LibreOffice open-source office suite. Learn more via the OpenBenchmarking.org test page.
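
For context on the "20 Documents To PDF" figures, a hedged sketch of timing LibreOffice's headless batch conversion of documents to PDF; the sample document names are placeholders, not the test profile's bundled files:

    import subprocess, time

    def time_pdf_conversion(documents, outdir="pdf-out"):
        """Time LibreOffice's headless conversion of the given documents to PDF."""
        start = time.perf_counter()
        subprocess.run(
            ["libreoffice", "--headless", "--convert-to", "pdf", "--outdir", outdir, *documents],
            check=True,
        )
        return time.perf_counter() - start

    if __name__ == "__main__":
        # Pass any .odt/.docx files you have on hand; these names are hypothetical.
        print(f"{time_pdf_conversion(['sample1.odt', 'sample2.odt']):.2f} s")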

LibreOffice - Test: 20 Documents To PDF (Seconds, Fewer Is Better)
Ice Lake: mitigations=off: 6.757 (SE +/- 0.061, N = 10; Min 6.65 / Max 7.3)
Ice Lake: Default: 6.908 (SE +/- 0.131, N = 25; Min 6.53 / Max 8.85)
mitigations=off: 5.703 (SE +/- 0.067, N = 6; Min 5.56 / Max 6.03)
Default: 5.525 (SE +/- 0.054, N = 9; Min 5.43 / Max 5.96)
1. LibreOffice 7.0.2.2 00(Build:2)

GNU Octave Benchmark

This test profile measures how long it takes to complete several reference GNU Octave files via octave-benchmark. GNU Octave is used for numerical computations and is an open-source alternative to MATLAB. Learn more via the OpenBenchmarking.org test page.

GNU Octave Benchmark 5.2.0 (Seconds, Fewer Is Better)
Ice Lake: mitigations=off: 6.698 (SE +/- 0.019, N = 5; Min 6.65 / Max 6.76)
Ice Lake: Default: 6.712 (SE +/- 0.026, N = 5; Min 6.67 / Max 6.81)
mitigations=off: 6.232 (SE +/- 0.014, N = 5; Min 6.19 / Max 6.27)
Default: 6.144 (SE +/- 0.012, N = 5; Min 6.12 / Max 6.19)

RawTherapee

RawTherapee is a cross-platform, open-source multi-threaded RAW image processing program. Learn more via the OpenBenchmarking.org test page.

RawTherapee - Total Benchmark Time (Seconds, Fewer Is Better)
  Ice Lake: mitigations=off: 118.84 (SE +/- 1.04, N = 11, min 109.07 / max 120.66)
  Ice Lake: Default: 122.24 (SE +/- 1.39, N = 6, min 115.28 / max 123.78)
  mitigations=off: 101.52 (SE +/- 0.67, N = 3, min 100.21 / max 102.38)
  Default: 99.06 (SE +/- 0.40, N = 3, min 98.26 / max 99.48)
1. RawTherapee, version 5.8, command line.

librsvg

RSVG/librsvg is an SVG vector graphics library. This test profile times how long it takes to complete various operations by rsvg-convert. Learn more via the OpenBenchmarking.org test page.
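rsvg-convert is the small command-line tool shipped with librsvg that this profile exercises. A sketch of timing a batch of SVG-to-PNG conversions with it, assuming a placeholder set of input files:

    import glob
    import subprocess
    import time

    svgs = glob.glob("svgs/*.svg")           # placeholder input files
    start = time.perf_counter()
    for svg in svgs:
        # rsvg-convert rasterizes the SVG; -o selects the output file (PNG by default).
        subprocess.run(["rsvg-convert", "-o", svg.replace(".svg", ".png"), svg], check=True)
    print(f"Rasterized {len(svgs)} files in {time.perf_counter() - start:.2f} s")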

librsvg - Operation: SVG Files To PNG (Seconds, Fewer Is Better)
  Ice Lake: mitigations=off: 20.09 (SE +/- 0.03, N = 3, min 20.06 / max 20.14)
  Ice Lake: Default: 20.20 (SE +/- 0.03, N = 3, min 20.14 / max 20.24)
  mitigations=off: 17.42 (SE +/- 0.15, N = 3, min 17.25 / max 17.72)
  Default: 17.02 (SE +/- 0.08, N = 3, min 16.94 / max 17.17)
1. rsvg-convert version 2.50.1

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
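Each result below corresponds to a single stress-ng stressor. Roughly equivalent standalone runs can be launched as sketched here; the worker count and timeout are arbitrary choices for illustration, not the test profile's exact settings.

    import subprocess

    # One stressor per run, mirroring the MMAP / Malloc / Forking / Socket Activity /
    # Context Switching results; --metrics-brief prints the bogo ops/s summary.
    for stressor in ["--mmap", "--malloc", "--fork", "--sock", "--switch"]:
        subprocess.run(
            ["stress-ng", stressor, "4", "--timeout", "30s", "--metrics-brief"],
            check=True,
        )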

Stress-NG 0.11.07 - Test: MMAP (Bogo Ops/s, More Is Better)
  Ice Lake: mitigations=off: 21.37 (SE +/- 1.41, N = 15, min 13.65 / max 34.06)
  Ice Lake: Default: 25.66 (SE +/- 1.09, N = 12, min 17.57 / max 32.73)
  mitigations=off: 25.00 (SE +/- 0.53, N = 15, min 21.13 / max 29.13)
  Default: 38.55 (SE +/- 0.32, N = 12, min 36.16 / max 41.26)

Stress-NG 0.11.07 - Test: Malloc (Bogo Ops/s, More Is Better)
  Ice Lake: mitigations=off: 21342273.20 (SE +/- 157319.06, N = 15, min 20999628.09 / max 23237262.92)
  Ice Lake: Default: 21289487.35 (SE +/- 206722.39, N = 3, min 21067814.84 / max 21702562.6)
  mitigations=off: 31273086.72 (SE +/- 346720.57, N = 3, min 30908682.14 / max 31966221.91)
  Default: 31930889.51 (SE +/- 309611.94, N = 3, min 31595618.05 / max 32549383.8)

Stress-NG 0.11.07 - Test: Forking (Bogo Ops/s, More Is Better)
  Ice Lake: mitigations=off: 30270.52 (SE +/- 276.55, N = 3, min 29905.28 / max 30812.85)
  Ice Lake: Default: 29707.30 (SE +/- 369.67, N = 3, min 29039.99 / max 30316.6)
  mitigations=off: 35111.57 (SE +/- 220.10, N = 3, min 34796.09 / max 35535.18)
  Default: 38181.93 (SE +/- 344.79, N = 3, min 37612.54 / max 38803.49)

Stress-NG 0.11.07 - Test: Socket Activity (Bogo Ops/s, More Is Better)
  Ice Lake: mitigations=off: 3055.46 (SE +/- 24.47, N = 13, min 2969.03 / max 3262.32)
  Ice Lake: Default: 2962.55 (SE +/- 40.22, N = 3, min 2914.62 / max 3042.46)
  mitigations=off: 3510.09 (SE +/- 38.12, N = 15, min 3327.66 / max 3927.07)
  Default: 3321.48 (SE +/- 33.45, N = 8, min 3163.35 / max 3505.39)

Stress-NG 0.11.07 - Test: Context Switching (Bogo Ops/s, More Is Better)
  Ice Lake: mitigations=off: 1251244.80 (SE +/- 16138.47, N = 3, min 1226301.61 / max 1281456.98)
  Ice Lake: Default: 1131793.83 (SE +/- 17116.57, N = 3, min 1107449.86 / max 1164809.49)
  mitigations=off: 1250676.61 (SE +/- 13179.93, N = 7, min 1206267.88 / max 1297748.64)
  Default: 1413871.94 (SE +/- 19012.48, N = 3, min 1386928.15 / max 1450580.61)

1. (CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc

Caffe

This is a benchmark of the Caffe deep learning framework. It currently supports the AlexNet and GoogleNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.
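Caffe ships a built-in timing mode of the kind this result is based on. A hedged sketch of invoking it directly; the model definition path is a placeholder and the exact network definitions used by the profile are not reproduced here.

    import subprocess

    # "caffe time" benchmarks forward/backward passes of a network definition on the CPU.
    subprocess.run(
        ["caffe", "time",
         "-model", "models/bvlc_alexnet/deploy.prototxt",   # placeholder model definition
         "-iterations", "100"],
        check=True,
    )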

Caffe 2020-02-13 - Model: AlexNet - Acceleration: CPU - Iterations: 100 (Milli-Seconds, Fewer Is Better)
  Ice Lake: mitigations=off: 64140 (SE +/- 873.74, N = 3, min 62393 / max 65059)
  Ice Lake: Default: 66912 (SE +/- 396.22, N = 3, min 66145 / max 67468)
  mitigations=off: 89846 (SE +/- 434.17, N = 3, min 88986 / max 90380)
  Default: 84931 (SE +/- 561.91, N = 3, min 83813 / max 85589)

Caffe 2020-02-13 - Model: GoogleNet - Acceleration: CPU - Iterations: 100 (Milli-Seconds, Fewer Is Better)
  Ice Lake: mitigations=off: 176908 (SE +/- 739.79, N = 3, min 175487 / max 177975)
  Ice Lake: Default: 179934 (SE +/- 269.90, N = 3, min 179544 / max 180452)
  mitigations=off: 226788 (SE +/- 351.65, N = 3, min 226174 / max 227392)
  Default: 216420 (SE +/- 393.32, N = 3, min 215784 / max 217139)

1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: SqueezeNetV1.0 (ms, Fewer Is Better)
  Ice Lake: mitigations=off: 13.38 (SE +/- 0.22, N = 3, min 12.98 / max 13.74; MIN: 11.73 / MAX: 33.3)
  Ice Lake: Default: 13.69 (SE +/- 0.10, N = 3, min 13.57 / max 13.88; MIN: 12.96 / MAX: 33.2)
  mitigations=off: 12.13 (SE +/- 0.08, N = 3, min 12.02 / max 12.3; MIN: 10.65 / MAX: 29.64)
  Default: 11.56 (SE +/- 0.06, N = 3, min 11.48 / max 11.67; MIN: 11.25 / MAX: 27.29)

Mobile Neural Network 2020-09-17 - Model: resnet-v2-50 (ms, Fewer Is Better)
  Ice Lake: mitigations=off: 63.44 (SE +/- 0.20, N = 3, min 63.08 / max 63.77; MIN: 56.81 / MAX: 91.45)
  Ice Lake: Default: 66.59 (SE +/- 0.34, N = 3, min 66.23 / max 67.27; MIN: 62.42 / MAX: 109.96)
  mitigations=off: 57.67 (SE +/- 0.10, N = 3, min 57.47 / max 57.77; MIN: 52.17 / MAX: 88)
  Default: 54.54 (SE +/- 0.05, N = 3, min 54.48 / max 54.64; MIN: 52.47 / MAX: 71.35)

Mobile Neural Network 2020-09-17 - Model: MobileNetV2_224 (ms, Fewer Is Better)
  Ice Lake: mitigations=off: 7.529 (SE +/- 0.011, N = 3, min 7.52 / max 7.55; MIN: 6.99 / MAX: 23.15)
  Ice Lake: Default: 7.487 (SE +/- 0.025, N = 3, min 7.46 / max 7.54; MIN: 7 / MAX: 25.22)
  mitigations=off: 6.896 (SE +/- 0.060, N = 3, min 6.83 / max 7.02; MIN: 6.03 / MAX: 21.38)
  Default: 6.163 (SE +/- 0.008, N = 3, min 6.15 / max 6.18; MIN: 6.07 / MAX: 22.29)

Mobile Neural Network 2020-09-17 - Model: inception-v3 (ms, Fewer Is Better)
  Ice Lake: mitigations=off: 77.03 (SE +/- 0.72, N = 3, min 75.75 / max 78.25; MIN: 70.4 / MAX: 108.37)
  Ice Lake: Default: 80.48 (SE +/- 0.50, N = 3, min 79.93 / max 81.47; MIN: 77.49 / MAX: 99.85)
  mitigations=off: 73.12 (SE +/- 0.04, N = 3, min 73.04 / max 73.17; MIN: 68.33 / MAX: 117.84)
  Default: 68.29 (SE +/- 0.11, N = 3, min 68.09 / max 68.46; MIN: 66.59 / MAX: 84.15)

1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
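NCNN's upstream repository includes a benchncnn utility that produces per-model inference timings like those below. The sketch invokes it from Python; the positional arguments (loop count, thread count, power-save mode, GPU device) are assumptions about benchncnn's usage, not settings taken from this test profile.

    import subprocess

    # Assumed benchncnn usage: benchncnn [loop count] [num threads] [powersave] [gpu device]
    # A gpu device of -1 requests CPU-only inference.
    subprocess.run(["./benchncnn", "8", "4", "0", "-1"], check=True)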

NCNN 20200916 - Target: CPU - Model: squeezenet (ms, Fewer Is Better)
  Ice Lake: mitigations=off: 29.27 (SE +/- 0.42, N = 4, min 28.53 / max 30.17; MIN: 27.06 / MAX: 43.42)
  Ice Lake: Default: 30.24 (SE +/- 0.01, N = 3, min 30.22 / max 30.26; MIN: 29.25 / MAX: 42.03)
  mitigations=off: 28.41 (SE +/- 0.47, N = 3, min 27.74 / max 29.32; MIN: 23.73 / MAX: 197.67)
  Default: 26.09 (SE +/- 0.18, N = 3, min 25.89 / max 26.44; MIN: 25.43 / MAX: 37.4)

NCNN 20200916 - Target: CPU - Model: mobilenet (ms, Fewer Is Better)
  Ice Lake: mitigations=off: 34.13 (SE +/- 0.15, N = 4, min 33.79 / max 34.49; MIN: 32.05 / MAX: 48.09)
  Ice Lake: Default: 35.88 (SE +/- 0.02, N = 3, min 35.86 / max 35.92; MIN: 34.42 / MAX: 48.29)
  mitigations=off: 34.05 (SE +/- 0.12, N = 3, min 33.84 / max 34.27; MIN: 31.96 / MAX: 68.49)
  Default: 31.04 (SE +/- 0.30, N = 3, min 30.69 / max 31.64; MIN: 29.81 / MAX: 43.82)

NCNN 20200916 - Target: CPU - Model: blazeface (ms, Fewer Is Better)
  Ice Lake: mitigations=off: 2.48 (SE +/- 0.04, N = 4, min 2.42 / max 2.57; MIN: 2.2 / MAX: 12.69)
  Ice Lake: Default: 2.55 (SE +/- 0.02, N = 3, min 2.52 / max 2.58; MIN: 2.32 / MAX: 14.92)
  mitigations=off: 2.33 (SE +/- 0.06, N = 3, min 2.23 / max 2.45; MIN: 2.04 / MAX: 3.21)
  Default: 2.16 (SE +/- 0.01, N = 3, min 2.13 / max 2.18; MIN: 1.99 / MAX: 4.52)

NCNN 20200916 - Target: CPU - Model: googlenet (ms, Fewer Is Better)
  Ice Lake: mitigations=off: 27.15 (SE +/- 0.66, N = 3, min 26.42 / max 28.47; MIN: 25.04 / MAX: 41.26)
  Ice Lake: Default: 28.59 (SE +/- 0.03, N = 3, min 28.55 / max 28.66; MIN: 26.28 / MAX: 41.46)
  mitigations=off: 27.30 (SE +/- 0.24, N = 3, min 26.82 / max 27.58; MIN: 21.44 / MAX: 219.45)
  Default: 24.53 (SE +/- 0.05, N = 3, min 24.43 / max 24.61; MIN: 22.42 / MAX: 35.81)

NCNN 20200916 - Target: CPU - Model: vgg16 (ms, Fewer Is Better)
  Ice Lake: mitigations=off: 75.59 (SE +/- 1.31, N = 4, min 72.81 / max 79.13; MIN: 70.5 / MAX: 95.89)
  Ice Lake: Default: 80.68 (SE +/- 0.11, N = 3, min 80.55 / max 80.9; MIN: 76.5 / MAX: 103.41)
  mitigations=off: 75.98 (SE +/- 0.19, N = 3, min 75.69 / max 76.33; MIN: 71.7 / MAX: 102.99)
  Default: 71.04 (SE +/- 0.12, N = 3, min 70.81 / max 71.22; MIN: 67.45 / MAX: 85.82)

NCNN 20200916 - Target: CPU - Model: resnet18 (ms, Fewer Is Better)
  Ice Lake: mitigations=off: 24.33 (SE +/- 0.51, N = 4, min 22.83 / max 24.98; MIN: 20.42 / MAX: 41.66)
  Ice Lake: Default: 25.02 (SE +/- 0.04, N = 3, min 24.98 / max 25.1; MIN: 22.42 / MAX: 37.92)
  mitigations=off: 23.20 (SE +/- 0.02, N = 3, min 23.16 / max 23.24; MIN: 20.27 / MAX: 38)
  Default: 21.30 (SE +/- 0.04, N = 3, min 21.22 / max 21.34; MIN: 18.47 / MAX: 34.83)

NCNN 20200916 - Target: CPU - Model: alexnet (ms, Fewer Is Better)
  Ice Lake: mitigations=off: 21.47 (SE +/- 0.38, N = 4, min 20.79 / max 22.19; MIN: 19.64 / MAX: 34.78)
  Ice Lake: Default: 22.05 (SE +/- 0.02, N = 3, min 22.02 / max 22.09; MIN: 20.64 / MAX: 34.28)
  mitigations=off: 20.15 (SE +/- 0.06, N = 3, min 20.03 / max 20.25; MIN: 18.45 / MAX: 58.96)
  Default: 18.64 (SE +/- 0.01, N = 3, min 18.62 / max 18.67; MIN: 17.18 / MAX: 29.18)

NCNN 20200916 - Target: CPU - Model: resnet50 (ms, Fewer Is Better)
  Ice Lake: mitigations=off: 57.55 (SE +/- 1.25, N = 4, min 53.87 / max 59.15; MIN: 51.93 / MAX: 71.08)
  Ice Lake: Default: 59.18 (SE +/- 0.04, N = 3, min 59.09 / max 59.24; MIN: 54.99 / MAX: 74.7)
  mitigations=off: 54.03 (SE +/- 0.08, N = 3, min 53.89 / max 54.15; MIN: 50.79 / MAX: 72.47)
  Default: 49.98 (SE +/- 0.06, N = 3, min 49.89 / max 50.1; MIN: 46.96 / MAX: 62.34)

NCNN 20200916 - Target: CPU - Model: yolov4-tiny (ms, Fewer Is Better)
  Ice Lake: mitigations=off: 46.83 (SE +/- 0.64, N = 4, min 44.94 / max 47.67; MIN: 42.33 / MAX: 59.43)
  Ice Lake: Default: 47.55 (SE +/- 0.04, N = 3, min 47.5 / max 47.62; MIN: 45.86 / MAX: 62.55)
  mitigations=off: 43.82 (SE +/- 0.07, N = 3, min 43.7 / max 43.94; MIN: 41.38 / MAX: 88.84)
  Default: 40.44 (SE +/- 0.19, N = 3, min 40.2 / max 40.82; MIN: 38.52 / MAX: 52.97)

1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

ctx_clock

Ctx_clock is a simple test program to measure the context switch time in clock cycles. Learn more via the OpenBenchmarking.org test page.
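ctx_clock itself reports the cost in CPU clock cycles. The Python sketch below is only a conceptual analogue: it measures a pipe-based ping-pong between two processes in nanoseconds rather than cycles, and it is not the program used to produce the result.

    import os
    import time

    ITERS = 20000
    r1, w1 = os.pipe()   # parent -> child
    r2, w2 = os.pipe()   # child -> parent

    if os.fork() == 0:                      # child: echo every byte back
        for _ in range(ITERS):
            os.read(r1, 1)
            os.write(w2, b"x")
        os._exit(0)

    start = time.perf_counter_ns()
    for _ in range(ITERS):                  # parent: each round trip forces context switches
        os.write(w1, b"x")
        os.read(r2, 1)
    elapsed = time.perf_counter_ns() - start
    os.wait()
    print(f"~{elapsed / (2 * ITERS):.0f} ns per switch (round trip / 2)")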

ctx_clock - Context Switch Time (Clocks, Fewer Is Better)
  Ice Lake: mitigations=off: 83 (SE +/- 0.67, N = 3, min 82 / max 84)
  Ice Lake: Default: 83 (SE +/- 1.33, N = 3, min 80 / max 84)
  mitigations=off: 127 (SE +/- 1.53, N = 3, min 125 / max 130)
  Default: 128 (SE +/- 1.20, N = 3, min 126 / max 130)

Facebook RocksDB

This is a benchmark of Facebook's RocksDB, an embeddable persistent key-value store for fast storage that is based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.
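The test names below map onto RocksDB's own db_bench workloads (readrandom, fillseq, fillsync, readwhilewriting). A hedged sketch of running db_bench directly; the key count is an arbitrary illustration rather than the test profile's configuration.

    import subprocess

    for workload in ["readrandom", "fillseq", "fillsync", "readwhilewriting"]:
        # db_bench is built alongside RocksDB; --benchmarks selects the workload.
        subprocess.run(
            ["db_bench", f"--benchmarks={workload}", "--num=1000000"],
            check=True,
        )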

Facebook RocksDB 6.3.6 - Test: Random Read (Op/s, More Is Better)
  Ice Lake: mitigations=off: 13732080 (SE +/- 208149.62, N = 15, min 13179330 / max 16527585)
  Ice Lake: Default: 12925515 (SE +/- 123171.35, N = 15, min 12559704 / max 14496776)
  mitigations=off: 15304072 (SE +/- 150814.30, N = 3, min 15125634 / max 15603897)
  Default: 15791136 (SE +/- 224908.53, N = 3, min 15437905 / max 16208941)

Facebook RocksDB 6.3.6 - Test: Sequential Fill (Op/s, More Is Better)
  Ice Lake: mitigations=off: 861065 (SE +/- 41408.12, N = 12, min 456313 / max 1047137)
  Ice Lake: Default: 803253 (SE +/- 11089.18, N = 15, min 741241 / max 948580)
  mitigations=off: 747558 (SE +/- 30961.97, N = 12, min 614143 / max 913120)
  Default: 900631 (SE +/- 14748.19, N = 15, min 796904 / max 1003139)

Facebook RocksDB 6.3.6 - Test: Random Fill Sync (Op/s, More Is Better)
  Ice Lake: mitigations=off: 763 (SE +/- 39.22, N = 12, min 507 / max 997)
  Ice Lake: Default: 1784 (SE +/- 23.13, N = 3, min 1753 / max 1829)
  mitigations=off: 993 (SE +/- 21.98, N = 14, min 764 / max 1087)
  Default: 935 (SE +/- 21.21, N = 13, min 726 / max 1018)

Facebook RocksDB 6.3.6 - Test: Read While Writing (Op/s, More Is Better)
  Ice Lake: mitigations=off: 620330 (SE +/- 17734.70, N = 15, min 561461 / max 789535)
  Ice Lake: Default: 604305 (SE +/- 12034.86, N = 15, min 568062 / max 759644)
  mitigations=off: 652865 (SE +/- 6731.14, N = 12, min 620016 / max 711340)
  Default: 670995 (SE +/- 5302.48, N = 15, min 647652 / max 719461)

1. (CXX) g++ options: -O3 -march=native -std=c++11 -fno-builtin-memcmp -fno-rtti -rdynamic -lpthread

PyBench

This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with the total result providing a rough estimate of Python's average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.
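PyBench's individual tests are micro-benchmarks of interpreter primitives. The toy example below is written in the same spirit (it is not PyBench code): it times built-in function calls and nested loops over several rounds and reports the best round.

    import time

    def run_once(fn):
        start = time.perf_counter()
        fn()
        return time.perf_counter() - start

    def timed(label, fn, rounds=10):
        # Report the best of several rounds, similar in spirit to PyBench's repeated rounds.
        best = min(run_once(fn) for _ in range(rounds))
        print(f"{label}: {best * 1e3:.2f} ms")

    timed("BuiltinFunctionCalls", lambda: [len("abc") for _ in range(100_000)])
    timed("NestedForLoops", lambda: [None for _ in range(300) for _ in range(300)])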

PyBench 2018-02-16 - Total For Average Test Times (Milliseconds, Fewer Is Better)
  Ice Lake: mitigations=off: 878 (SE +/- 9.53, N = 3, min 860 / max 892)
  Ice Lake: Default: 874 (SE +/- 7.42, N = 3, min 859 / max 883)
  mitigations=off: 764 (SE +/- 8.33, N = 3, min 747 / max 772)
  Default: 738 (SE +/- 5.24, N = 3, min 730 / max 748)

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.
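Several of the workloads below (json_loads, regex_compile, pickle) are interpreter micro-benchmarks. The stdlib timeit sketch here only illustrates the kind of measurement involved; it is not the pyperformance harness, and the payloads are arbitrary.

    import timeit

    print("json_loads:", timeit.timeit(
        "json.loads(payload)",
        setup="import json; payload = '[1, 2.5, \"abc\", {\"k\": [true, null]}]'",
        number=200_000), "s")
    print("pickle    :", timeit.timeit(
        "pickle.dumps(data)",
        setup="import pickle; data = list(range(1000))",
        number=20_000), "s")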

PyPerformance 1.0.0 - Benchmark: go (Milliseconds, Fewer Is Better)
  Ice Lake: mitigations=off: 232 (SE +/- 1.73, N = 3, min 229 / max 235)
  Ice Lake: Default: 240 (SE +/- 2.33, N = 3, min 236 / max 244)
  mitigations=off: 209 (SE +/- 1.45, N = 3, min 206 / max 211)
  Default: 202 (SE +/- 1.67, N = 3, min 199 / max 204)

PyPerformance 1.0.0 - Benchmark: 2to3 (Milliseconds, Fewer Is Better)
  Ice Lake: mitigations=off: 298 (SE +/- 1.33, N = 3, min 295 / max 299)
  Ice Lake: Default: 303 (SE +/- 1.20, N = 3, min 301 / max 305)
  mitigations=off: 262 (SE +/- 1.67, N = 3, min 259 / max 264)
  Default: 252 (SE +/- 1.20, N = 3, min 250 / max 254)

PyPerformance 1.0.0 - Benchmark: chaos (Milliseconds, Fewer Is Better)
  Ice Lake: mitigations=off: 98.3 (SE +/- 0.19, N = 3, min 97.9 / max 98.5)
  Ice Lake: Default: 100.8 (SE +/- 0.76, N = 3, min 99.4 / max 102)
  mitigations=off: 85.4 (SE +/- 0.61, N = 3, min 84.3 / max 86.4)
  Default: 83.2 (SE +/- 0.19, N = 3, min 82.8 / max 83.4)

PyPerformance 1.0.0 - Benchmark: float (Milliseconds, Fewer Is Better)
  Ice Lake: mitigations=off: 106.0
  Ice Lake: Default: 108.0 (SE +/- 1.00, N = 3, min 106 / max 109)
  mitigations=off: 97.8 (SE +/- 0.88, N = 3, min 96.1 / max 99)
  Default: 95.0 (SE +/- 0.80, N = 3, min 93.4 / max 95.8)

PyPerformance 1.0.0 - Benchmark: nbody (Milliseconds, Fewer Is Better)
  Ice Lake: mitigations=off: 102.0
  Ice Lake: Default: 103.0
  mitigations=off: 91.4 (SE +/- 0.84, N = 3, min 89.7 / max 92.3)
  Default: 89.6 (SE +/- 0.76, N = 3, min 88.4 / max 91)

PyPerformance 1.0.0 - Benchmark: pathlib (Milliseconds, Fewer Is Better)
  Ice Lake: mitigations=off: 16.1 (SE +/- 0.03, N = 3, min 16.1 / max 16.2)
  Ice Lake: Default: 16.8 (SE +/- 0.09, N = 3, min 16.6 / max 16.9)
  mitigations=off: 14.2 (SE +/- 0.15, N = 3, min 13.9 / max 14.4)
  Default: 13.9 (SE +/- 0.09, N = 3, min 13.7 / max 14)

PyPerformance 1.0.0 - Benchmark: raytrace (Milliseconds, Fewer Is Better)
  Ice Lake: mitigations=off: 443 (SE +/- 0.58, N = 3, min 442 / max 444)
  Ice Lake: Default: 459 (SE +/- 1.15, N = 3, min 457 / max 461)
  mitigations=off: 391 (SE +/- 1.76, N = 3, min 388 / max 394)
  Default: 381 (SE +/- 0.88, N = 3, min 379 / max 382)

PyPerformance 1.0.0 - Benchmark: json_loads (Milliseconds, Fewer Is Better)
  Ice Lake: mitigations=off: 21.3 (SE +/- 0.18, N = 3, min 21 / max 21.6)
  Ice Lake: Default: 21.6 (SE +/- 0.13, N = 3, min 21.3 / max 21.7)
  mitigations=off: 18.9 (SE +/- 0.10, N = 3, min 18.7 / max 19)
  Default: 18.6 (SE +/- 0.07, N = 3, min 18.5 / max 18.7)

PyPerformance 1.0.0 - Benchmark: crypto_pyaes (Milliseconds, Fewer Is Better)
  Ice Lake: mitigations=off: 93.3 (SE +/- 0.62, N = 3, min 92.1 / max 94)
  Ice Lake: Default: 97.6 (SE +/- 0.52, N = 3, min 96.6 / max 98.4)
  mitigations=off: 85.7 (SE +/- 0.54, N = 3, min 84.6 / max 86.3)
  Default: 83.5 (SE +/- 0.35, N = 3, min 82.8 / max 83.9)

PyPerformance 1.0.0 - Benchmark: regex_compile (Milliseconds, Fewer Is Better)
  Ice Lake: mitigations=off: 153
  Ice Lake: Default: 157 (SE +/- 1.86, N = 3, min 153 / max 159)
  mitigations=off: 134 (SE +/- 1.53, N = 3, min 131 / max 136)
  Default: 132 (SE +/- 1.20, N = 3, min 130 / max 134)

PyPerformance 1.0.0 - Benchmark: python_startup (Milliseconds, Fewer Is Better)
  Ice Lake: mitigations=off: 7.87 (SE +/- 0.01, N = 3, min 7.86 / max 7.88)
  Ice Lake: Default: 8.00 (SE +/- 0.02, N = 3, min 7.97 / max 8.02)
  mitigations=off: 6.67 (SE +/- 0.02, N = 3, min 6.64 / max 6.69)
  Default: 6.36 (SE +/- 0.02, N = 3, min 6.33 / max 6.38)

PyPerformance 1.0.0 - Benchmark: django_template (Milliseconds, Fewer Is Better)
  Ice Lake: mitigations=off: 44.9 (SE +/- 0.18, N = 3, min 44.6 / max 45.2)
  Ice Lake: Default: 46.2 (SE +/- 0.23, N = 3, min 45.8 / max 46.6)
  mitigations=off: 39.3 (SE +/- 0.35, N = 3, min 38.6 / max 39.8)
  Default: 38.2 (SE +/- 0.20, N = 3, min 37.8 / max 38.4)

PyPerformance 1.0.0 - Benchmark: pickle_pure_python (Milliseconds, Fewer Is Better)
  Ice Lake: mitigations=off: 399 (SE +/- 1.67, N = 3, min 396 / max 401)
  Ice Lake: Default: 405 (SE +/- 3.33, N = 3, min 398 / max 408)
  mitigations=off: 341 (SE +/- 4.70, N = 3, min 332 / max 347)
  Default: 334 (SE +/- 3.18, N = 3, min 328 / max 338)

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers. Learn more via the OpenBenchmarking.org test page.
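A minimal sketch of how a WebDriver-driven browser benchmark operates; it assumes geckodriver is available on the PATH and uses a placeholder URL and score-extraction script rather than the actual benchmark pages used by this profile.

    from selenium import webdriver

    driver = webdriver.Firefox()            # launches Firefox via geckodriver
    try:
        driver.get("https://example.com/benchmark")   # placeholder benchmark page
        # Placeholder extraction: real harnesses poll the page until a score appears.
        score = driver.execute_script("return document.title")
        print("result:", score)
    finally:
        driver.quit()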

Selenium - Benchmark: ARES-6 - Browser: Firefox (ms, Fewer Is Better)
  Ice Lake: mitigations=off: 40.09 (SE +/- 0.32, N = 3, min 39.57 / max 40.68)
  Ice Lake: Default: 43.56 (SE +/- 0.19, N = 3, min 43.3 / max 43.93)
  mitigations=off: 34.22 (SE +/- 0.31, N = 3, min 33.8 / max 34.82)
  Default: 35.95 (SE +/- 0.37, N = 3, min 35.37 / max 36.63)

Selenium - Benchmark: Kraken - Browser: Firefox (ms, Fewer Is Better)
  Ice Lake: mitigations=off: 752.7 (SE +/- 1.72, N = 3, min 749.5 / max 755.4)
  Ice Lake: Default: 830.7 (SE +/- 1.20, N = 3, min 828.3 / max 832.2)
  mitigations=off: 610.5 (SE +/- 0.73, N = 3, min 609.2 / max 611.7)
  Default: 671.9 (SE +/- 1.35, N = 3, min 670.2 / max 674.6)

Selenium - Benchmark: Octane - Browser: Firefox (Geometric Mean, More Is Better)
  Ice Lake: mitigations=off: 39712 (SE +/- 94.24, N = 3, min 39578 / max 39894)
  Ice Lake: Default: 34398 (SE +/- 70.32, N = 3, min 34285 / max 34527)
  mitigations=off: 45545 (SE +/- 534.74, N = 3, min 44892 / max 46605)
  Default: 39987 (SE +/- 455.76, N = 3, min 39385 / max 40881)

Selenium - Benchmark: WebXPRT - Browser: Firefox (Score, More Is Better)
  Ice Lake: mitigations=off: 264 (SE +/- 1.53, N = 3, min 261 / max 266)
  Ice Lake: Default: 241 (SE +/- 0.88, N = 3, min 239 / max 242)
  mitigations=off: 299 (SE +/- 0.33, N = 3, min 299 / max 300)
  Default: 288 (SE +/- 1.00, N = 2, min 287 / max 289)

Selenium - Benchmark: Jetstream - Browser: Firefox (Score, More Is Better)
  Ice Lake: mitigations=off: 224.67 (SE +/- 0.29, N = 3, min 224.11 / max 225.09)
  Ice Lake: Default: 199.70 (SE +/- 0.17, N = 3, min 199.47 / max 200.02)
  mitigations=off: 263.99 (SE +/- 0.84, N = 3, min 262.63 / max 265.53)
  Default: 237.75 (SE +/- 0.24, N = 3, min 237.28 / max 238)

Selenium - Benchmark: CanvasMark - Browser: Firefox (Score, More Is Better)
  Ice Lake: mitigations=off: 12286 (SE +/- 38.97, N = 3, min 12219 / max 12354)
  Ice Lake: Default: 11966 (SE +/- 98.75, N = 3, min 11820 / max 12154)
  mitigations=off: 15233 (SE +/- 65.36, N = 3, min 15147 / max 15361)
  Default: 14748 (SE +/- 69.48, N = 3, min 14620 / max 14859)

Selenium - Benchmark: StyleBench - Browser: Firefox (Runs / Minute, More Is Better)
  Ice Lake: mitigations=off: 94.0 (SE +/- 0.25, N = 3, min 93.7 / max 94.5)
  Ice Lake: Default: 84.6 (SE +/- 0.72, N = 15, min 82 / max 90.7)
  mitigations=off: 99.6 (SE +/- 0.15, N = 3, min 99.3 / max 99.8)
  Default: 98.2 (SE +/- 0.19, N = 3, min 97.8 / max 98.4)

Selenium - Benchmark: Jetstream 2 - Browser: Firefox (Score, More Is Better)
  Ice Lake: mitigations=off: 99.72 (SE +/- 0.55, N = 3, min 98.68 / max 100.58)
  Ice Lake: Default: 91.65 (SE +/- 0.36, N = 3, min 90.93 / max 92.03)
  mitigations=off: 115.08 (SE +/- 1.47, N = 5, min 112.24 / max 119.73)
  Default: 108.42 (SE +/- 0.86, N = 3, min 106.71 / max 109.4)

Selenium - Benchmark: Maze Solver - Browser: Firefox (Seconds, Fewer Is Better)
  Ice Lake: mitigations=off: 5.4 (SE +/- 0.00, N = 3, min 5.4 / max 5.4)
  Ice Lake: Default: 5.4 (SE +/- 0.03, N = 3, min 5.4 / max 5.5)
  mitigations=off: 5.2 (SE +/- 0.03, N = 3, min 5.2 / max 5.3)
  Default: 5.3 (SE +/- 0.03, N = 3, min 5.3 / max 5.4)

1. Ice Lake: mitigations=off: firefox 82.0  2. Ice Lake: Default: firefox 82.0  3. mitigations=off: firefox 81.0.2  4. Default: firefox 81.0.2

Selenium - Benchmark: ARES-6 - Browser: Google Chrome (ms, Fewer Is Better)
  Ice Lake: mitigations=off: 19.77 (SE +/- 0.04, N = 3, min 19.7 / max 19.82)
  Ice Lake: Default: 19.76 (SE +/- 0.03, N = 3, min 19.71 / max 19.8)
  mitigations=off: 17.32 (SE +/- 0.14, N = 3, min 17.05 / max 17.53)
  Default: 16.60 (SE +/- 0.13, N = 3, min 16.35 / max 16.74)

Selenium - Benchmark: Kraken - Browser: Google Chrome (ms, Fewer Is Better)
  Ice Lake: mitigations=off: 806.0 (SE +/- 3.34, N = 3, min 799.3 / max 809.6)
  Ice Lake: Default: 799.3 (SE +/- 2.47, N = 3, min 796.3 / max 804.2)
  mitigations=off: 701.8 (SE +/- 9.46, N = 3, min 688.4 / max 720.1)
  Default: 668.2 (SE +/- 1.55, N = 3, min 665.8 / max 671.1)

Selenium - Benchmark: Octane - Browser: Google Chrome (Geometric Mean, More Is Better)
  Ice Lake: mitigations=off: 53635 (SE +/- 159.67, N = 3, min 53316 / max 53797)
  Ice Lake: Default: 53717 (SE +/- 67.56, N = 3, min 53599 / max 53833)
  mitigations=off: 58235 (SE +/- 745.81, N = 5, min 57022 / max 61104)
  Default: 61608 (SE +/- 1033.11, N = 3, min 60493 / max 63672)

Selenium - Benchmark: WebXPRT - Browser: Google Chrome (Score, More Is Better)
  Ice Lake: mitigations=off: 227
  Ice Lake: Default: 225 (SE +/- 2.08, N = 3, min 222 / max 229)
  mitigations=off: 269 (SE +/- 1.00, N = 3, min 267 / max 270)
  Default: 281 (SE +/- 0.58, N = 3, min 280 / max 282)

Selenium - Benchmark: Jetstream - Browser: Google Chrome (Score, More Is Better)
  Ice Lake: mitigations=off: 250.94 (SE +/- 0.84, N = 3, min 250.08 / max 252.63)
  Ice Lake: Default: 242.78 (SE +/- 0.87, N = 3, min 241.09 / max 243.96)
  mitigations=off: 276.88 (SE +/- 0.73, N = 3, min 275.97 / max 278.32)
  Default: 289.55 (SE +/- 1.20, N = 3, min 287.69 / max 291.79)

Selenium - Benchmark: CanvasMark - Browser: Google Chrome (Score, More Is Better)
  Ice Lake: mitigations=off: 15149 (SE +/- 107.22, N = 3, min 14949 / max 15316)
  Ice Lake: Default: 14994 (SE +/- 144.97, N = 9, min 14456 / max 15839)
  mitigations=off: 16864 (SE +/- 151.73, N = 10, min 16357 / max 17640)
  Default: 17698 (SE +/- 200.87, N = 12, min 16711 / max 18599)

Selenium - Benchmark: MotionMark - Browser: Google Chrome (Score, More Is Better)
  Ice Lake: mitigations=off: 439.91 (SE +/- 10.06, N = 9, min 362.13 / max 458.87)
  Ice Lake: Default: 410.52 (SE +/- 12.09, N = 9, min 314.26 / max 428.36)
  mitigations=off: 523.87 (SE +/- 2.04, N = 3, min 520.09 / max 527.09)
  Default: 546.80 (SE +/- 6.75, N = 9, min 527.73 / max 583.38)

Selenium - Benchmark: StyleBench - Browser: Google Chrome (Runs / Minute, More Is Better)
  Ice Lake: mitigations=off: 34.0 (SE +/- 0.03, N = 3, min 33.9 / max 34)
  Ice Lake: Default: 33.5 (SE +/- 0.13, N = 3, min 33.2 / max 33.6)
  mitigations=off: 38.5 (SE +/- 0.03, N = 3, min 38.4 / max 38.5)
  Default: 39.6 (SE +/- 0.12, N = 3, min 39.4 / max 39.8)

Selenium - Benchmark: Jetstream 2 - Browser: Google Chrome (Score, More Is Better)
  Ice Lake: mitigations=off: 140.15 (SE +/- 0.64, N = 3, min 139.07 / max 141.28)
  Ice Lake: Default: 138.44 (SE +/- 0.40, N = 3, min 137.95 / max 139.23)
  mitigations=off: 158.47 (SE +/- 0.72, N = 3, min 157.29 / max 159.78)
  Default: 165.91 (SE +/- 0.51, N = 3, min 164.92 / max 166.63)

Selenium - Benchmark: Maze Solver - Browser: Google Chrome (Seconds, Fewer Is Better)
  Ice Lake: mitigations=off: 4.6 (SE +/- 0.03, N = 3, min 4.6 / max 4.7)
  Ice Lake: Default: 4.7 (SE +/- 0.00, N = 3, min 4.7 / max 4.7)
  mitigations=off: 4.8 (SE +/- 0.03, N = 3, min 4.7 / max 4.8)
  Default: 4.7 (SE +/- 0.00, N = 3, min 4.7 / max 4.7)

1. chrome 86.0.4240.111

Selenium - Benchmark: WASM imageConvolute - Browser: Firefox (ms, Fewer Is Better)
  Ice Lake: mitigations=off: 28.5 (SE +/- 0.49, N = 3, min 27.9 / max 29.5)
  Ice Lake: Default: 30.4 (SE +/- 0.41, N = 3, min 29.6 / max 31)
  mitigations=off: 25.3 (SE +/- 0.25, N = 3, min 25 / max 25.8)
  Default: 27.9 (SE +/- 0.31, N = 7, min 27.2 / max 29.5)

Selenium - Benchmark: WASM collisionDetection - Browser: Firefox (ms, Fewer Is Better)
  Ice Lake: mitigations=off: 417.5 (SE +/- 1.82, N = 3, min 413.9 / max 419.5)
  Ice Lake: Default: 415.4 (SE +/- 2.08, N = 3, min 411.8 / max 419)
  mitigations=off: 320.9 (SE +/- 0.56, N = 3, min 320.1 / max 322)
  Default: 328.1 (SE +/- 3.80, N = 3, min 323.4 / max 335.6)

Selenium - Benchmark: WASM imageConvolute - Browser: Google Chrome (ms, Fewer Is Better)
  Ice Lake: mitigations=off: 32.61 (SE +/- 0.09, N = 3, min 32.44 / max 32.75)
  Ice Lake: Default: 32.62 (SE +/- 0.34, N = 3, min 32.01 / max 33.19)
  mitigations=off: 29.30 (SE +/- 0.25, N = 3, min 29.05 / max 29.79)
  Default: 28.26 (SE +/- 0.10, N = 3, min 28.13 / max 28.46)

Selenium - Benchmark: WASM collisionDetection - Browser: Google Chrome (ms, Fewer Is Better)
  Ice Lake: mitigations=off: 341.81 (SE +/- 0.24, N = 3, min 341.41 / max 342.26)
  Ice Lake: Default: 344.56 (SE +/- 0.58, N = 3, min 343.72 / max 345.66)
  mitigations=off: 283.51 (SE +/- 0.25, N = 3, min 283.21 / max 284.01)
  Default: 281.39 (SE +/- 0.17, N = 3, min 281.15 / max 281.72)

1. Firefox runs: firefox 82.0 (Ice Lake results) and firefox 81.0.2 (Default / mitigations=off results); Chrome runs: chrome 86.0.4240.111

Git

This test measures the time needed to carry out some sample Git operations on an example, static repository that happens to be a copy of the GNOME GTK tool-kit repository. Learn more via the OpenBenchmarking.org test page.
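A sketch of this style of measurement, timing a handful of everyday Git commands against a locally checked-out repository from Python; the repository path is a placeholder and the command list is illustrative rather than the profile's exact set.

    import subprocess
    import time

    REPO = "gtk"   # placeholder path to a local clone of the sample repository
    commands = [
        ["git", "status"],
        ["git", "log", "--oneline", "-n", "1000"],
        ["git", "gc"],
    ]
    start = time.perf_counter()
    for cmd in commands:
        subprocess.run(cmd, cwd=REPO, check=True, capture_output=True)
    print(f"Completed in {time.perf_counter() - start:.2f} s")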

Git - Time To Complete Common Git Commands (Seconds, Fewer Is Better)
  Ice Lake: mitigations=off: 52.09 (SE +/- 0.05, N = 3, min 52.02 / max 52.19)
  Ice Lake: Default: 52.04 (SE +/- 0.12, N = 3, min 51.92 / max 52.27)
  mitigations=off: 49.03 (SE +/- 0.66, N = 3, min 47.71 / max 49.75)
  Default: 47.23 (SE +/- 0.40, N = 3, min 46.44 / max 47.7)
1. git version 2.27.0

Tesseract OCR

Tesseract-OCR is the open-source optical character recognition (OCR) engine for the conversion of text within images to raw text output. This test profile relies upon a system-supplied Tesseract installation. Learn more via the OpenBenchmarking.org test page.
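Conceptually, the profile runs the system tesseract binary over its seven sample images and reports the total wall time. A sketch of that pattern, with placeholder image names:

    import glob
    import subprocess
    import time

    images = glob.glob("samples/*.png")     # placeholder image set
    start = time.perf_counter()
    for i, img in enumerate(images):
        # tesseract <image> <output basename> writes the recognized text to out<i>.txt
        subprocess.run(["tesseract", img, f"out{i}"], check=True, capture_output=True)
    print(f"OCR of {len(images)} images took {time.perf_counter() - start:.2f} s")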

Tesseract OCR 4.1.1 - Time To OCR 7 Images (Seconds, Fewer Is Better)
  Ice Lake: mitigations=off: 23.36 (SE +/- 0.22, N = 9, min 22.81 / max 25.06)
  Ice Lake: Default: 23.12 (SE +/- 0.11, N = 3, min 22.97 / max 23.32)
  mitigations=off: 21.06 (SE +/- 0.28, N = 3, min 20.5 / max 21.35)
  Default: 20.39 (SE +/- 0.14, N = 3, min 20.23 / max 20.68)

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.
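The parameters in the result titles (concurrent streams, batch size, tag cardinality, points per series) map onto options of the inch load generator. The sketch below assumes an inch binary and a local InfluxDB instance; the flag names are assumptions taken from inch's documented usage rather than from the test profile itself.

    import subprocess

    # Assumed inch flags: -c concurrency, -b batch size, -t tag cardinality,
    # -p points per series, -db target database.
    subprocess.run(
        ["inch", "-c", "4", "-b", "10000", "-t", "2,5000,1", "-p", "10000", "-db", "stress"],
        check=True,
    )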

InfluxDB 1.8.2 - Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, More Is Better)
  Ice Lake: mitigations=off: 627537.6 (SE +/- 6927.38, N = 12, min 597046.6 / max 660271.8)
  mitigations=off: 774218.3 (SE +/- 11766.77, N = 3, min 752154.4 / max 792339.6)
  Default: 869356.1 (SE +/- 2712.83, N = 3, min 865387.4 / max 874544.4)

InfluxDB 1.8.2 - Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, More Is Better)
  Ice Lake: mitigations=off: 723413.2 (SE +/- 1358.93, N = 3, min 720712.5 / max 725027.4)
  mitigations=off: 860922.0 (SE +/- 4238.82, N = 3, min 856190 / max 869379.7)
  Default: 881790.2 (SE +/- 1914.13, N = 3, min 879405.3 / max 885576.1)

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: mobilenet-v1-1.0 (ms, Fewer Is Better)
  mitigations=off: 10.09 (SE +/- 0.11, N = 3, min 9.98 / max 10.31; MIN: 9.21 / MAX: 30.26)
  Default: 10.52 (SE +/- 0.00, N = 3, min 10.51 / max 10.52; MIN: 9.74 / MAX: 26.11)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better)
  mitigations=off: 8.22 (SE +/- 0.88, N = 4, min 5.63 / max 9.58; MIN: 5.46 / MAX: 24.56)
  Default: 8.27 (SE +/- 1.33, N = 3, min 5.62 / max 9.63; MIN: 5.47 / MAX: 22.52)

NCNN 20200916 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better)
  mitigations=off: 7.29 (SE +/- 0.76, N = 4, min 5.04 / max 8.44; MIN: 4.87 / MAX: 20.55)
  Default: 7.29 (SE +/- 1.14, N = 3, min 5.02 / max 8.45; MIN: 4.88 / MAX: 18.97)

NCNN 20200916 - Target: CPU - Model: shufflenet-v2 (ms, Fewer Is Better)
  mitigations=off: 5.78 (SE +/- 0.60, N = 4, min 3.98 / max 6.41; MIN: 3.81 / MAX: 18.56)
  Default: 5.54 (SE +/- 0.80, N = 3, min 3.93 / max 6.38; MIN: 3.78 / MAX: 17.92)

NCNN 20200916 - Target: CPU - Model: mnasnet (ms, Fewer Is Better)
  mitigations=off: 8.07 (SE +/- 0.91, N = 4, min 5.36 / max 9.11; MIN: 5.21 / MAX: 20.81)
  Default: 8.35 (SE +/- 0.70, N = 3, min 6.96 / max 9.07; MIN: 5.2 / MAX: 21.96)

NCNN 20200916 - Target: CPU - Model: efficientnet-b0 (ms, Fewer Is Better)
  mitigations=off: 13.12 (SE +/- 0.24, N = 4, min 12.63 / max 13.66; MIN: 11.89 / MAX: 29.29)
  Default: 13.50 (SE +/- 0.08, N = 3, min 13.41 / max 13.66; MIN: 12.59 / MAX: 25.81)

1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

136 Results Shown

SQLite:
  1
  8
FS-Mark:
  1000 Files, 1MB Size
  5000 Files, 1MB Size, 4 Threads
  4000 Files, 32 Sub Dirs, 1MB Size
Ethr:
  HTTP - Bandwidth - 1
  TCP - Connections/s - 1
WireGuard + Linux Networking Stack Stress Test
Sockperf:
  Throughput
  Latency Ping Pong
OSBench
DaCapo Benchmark:
  H2
  Jython
  Tradesoap
  Tradebeans
Renaissance:
  Scala Dotty
  Rand Forest
  Apache Spark ALS
  Twitter HTTP Requests
  In-Memory Database Shootout
  Akka Unbalanced Cobwebbed Tree
Zstd Compression
LibRaw
Timed Apache Compilation
Timed GDB GNU Debugger Compilation
Timed Linux Kernel Compilation
DeepSpeech
eSpeak-NG Speech Engine
RNNoise
LevelDB:
  Fill Sync:
    MB/s
    Microseconds Per Op
  Rand Delete:
    Microseconds Per Op
TensorFlow Lite:
  SqueezeNet
  Inception V4
  NASNet Mobile
  Mobilenet Float
  Mobilenet Quant
  Inception ResNet V2
ASTC Encoder:
  Fast
  Medium
  Thorough
SQLite Speedtest
Darktable:
  Boat - CPU-only
  Masskrug - CPU-only
  Server Rack - CPU-only
GEGL:
  Crop
  Scale
  Cartoon
  Reflect
  Antialias
  Tile Glass
  Wavelet Blur
  Color Enhance
  Rotate 90 Degrees
GIMP:
  resize
  rotate
  auto-levels
  unsharp-mask
G'MIC:
  2D Function Plotting, 1000 Times
  3D Elevated Function In Rand Colors, 100 Times
LibreOffice
GNU Octave Benchmark
RawTherapee
librsvg
Stress-NG:
  MMAP
  Malloc
  Forking
  Socket Activity
  Context Switching
Caffe:
  AlexNet - CPU - 100
  GoogleNet - CPU - 100
Mobile Neural Network:
  SqueezeNetV1.0
  resnet-v2-50
  MobileNetV2_224
  inception-v3
NCNN:
  CPU - squeezenet
  CPU - mobilenet
  CPU - blazeface
  CPU - googlenet
  CPU - vgg16
  CPU - resnet18
  CPU - alexnet
  CPU - resnet50
  CPU - yolov4-tiny
ctx_clock
Facebook RocksDB:
  Rand Read
  Seq Fill
  Rand Fill Sync
  Read While Writing
PyBench
PyPerformance:
  go
  2to3
  chaos
  float
  nbody
  pathlib
  raytrace
  json_loads
  crypto_pyaes
  regex_compile
  python_startup
  django_template
  pickle_pure_python
Selenium:
  ARES-6 - Firefox
  Kraken - Firefox
  Octane - Firefox
  WebXPRT - Firefox
  Jetstream - Firefox
  CanvasMark - Firefox
  StyleBench - Firefox
  Jetstream 2 - Firefox
  Maze Solver - Firefox
  ARES-6 - Google Chrome
  Kraken - Google Chrome
  Octane - Google Chrome
  WebXPRT - Google Chrome
  Jetstream - Google Chrome
  CanvasMark - Google Chrome
  MotionMark - Google Chrome
  StyleBench - Google Chrome
  Jetstream 2 - Google Chrome
  Maze Solver - Google Chrome
  WASM imageConvolute - Firefox
  WASM collisionDetection - Firefox
  WASM imageConvolute - Google Chrome
  WASM collisionDetection - Google Chrome
Git
Tesseract OCR
InfluxDB:
  4 - 10000 - 2,5000,1 - 10000
  64 - 10000 - 2,5000,1 - 10000
Mobile Neural Network
NCNN:
  CPU-v2-v2 - mobilenet-v2
  CPU-v3-v3 - mobilenet-v3
  CPU - shufflenet-v2
  CPU - mnasnet
  CPU - efficientnet-b0