Xeon E3 1270 v5 Xmas

Intel Xeon E3-1270 v5 testing with an ASUS E3 PRO GAMING V5 (2606 BIOS) and an ASUS NVIDIA NV84 256MB on Clear Linux OS 31470 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2012279-HA-XEONE312725
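
For reference, this is the full comparison workflow from a shell, assuming the Phoronix Test Suite is already installed (the result ID is the one quoted above):

  # Fetch this result file, then run the same tests locally for a side-by-side comparison
  phoronix-test-suite benchmark 2012279-HA-XEONE312725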

Tests in this result file span the following suites and categories:

Audio Encoding (4 tests)
Chess Test Suite (2 tests)
Timed Code Compilation (2 tests)
C/C++ Compiler Tests (10 tests)
CPU Massive (10 tests)
Creator Workloads (12 tests)
Database Test Suite (2 tests)
Encoding (7 tests)
Game Development (2 tests)
HPC - High Performance Computing (6 tests)
Machine Learning (3 tests)
Molecular Dynamics (2 tests)
MPI Benchmarks (2 tests)
Multi-Core (12 tests)
NVIDIA GPU Compute (3 tests)
OpenMPI Tests (2 tests)
Programmer / Developer System Benchmarks (5 tests)
Python Tests (2 tests)
Scientific Computing (3 tests)
Server (4 tests)
Server CPU Tests (6 tests)
Single-Threaded (4 tests)
Texture Compression (2 tests)
Video Encoding (3 tests)


Test Runs

Result Identifier | Date Run | Test Duration
1 | December 26 2020 | 6 Hours
2 | December 26 2020 | 5 Hours, 59 Minutes
3 | December 27 2020 | 6 Hours

Xeon E3 1270 v5 Xmas: System Details (runs 1, 2, and 3 used identical hardware and software)

Processor: Intel Xeon E3-1270 v5 @ 4.00GHz (4 Cores / 8 Threads)
Motherboard: ASUS E3 PRO GAMING V5 (2606 BIOS)
Chipset: Intel Xeon E3-1200 v5/E3-1500
Memory: 8GB
Disk: 256GB Samsung SSD 850
Graphics: ASUS NVIDIA NV84 256MB
Audio: Realtek ALC1150
Monitor: DELL S2409W
Network: Intel I219-LM
OS: Clear Linux OS 31470
Kernel: 5.3.8-854.native (x86_64)
Desktop: GNOME Shell 3.34.1
Display Server: X Server 1.20.5
Display Driver: nouveau 1.0.16
OpenGL: 3.3 Mesa 19.3.0-devel
Compiler: GCC 9.2.1 20191101 gcc-9-branch@277702 + Clang 9.0.0 + LLVM 9.0.0
File-System: ext4
Screen Resolution: 1920x1080

Environment Details:
CFFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=32 -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z -Wl,now -Wl,-z -Wl,relro -malign-data=abi -fno-semantic-interposition -ftree-vectorize -ftree-loop-vectorize -Wl,-sort-common -Wl,--enable-new-dtags"
FFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=32 -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z -Wl,now -Wl,-z -Wl,relro -malign-data=abi -fno-semantic-interposition -ftree-vectorize -ftree-loop-vectorize -Wl,--enable-new-dtags"
CXXFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=32 -Wformat -Wformat-security -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z -Wl,now -Wl,-z -Wl,relro -fno-semantic-interposition -ffat-lto-objects -fno-trapping-math -Wl,-sort-common -Wl,--enable-new-dtags -mtune=skylake -fvisibility-inlines-hidden -Wl,--enable-new-dtags"
MESA_GLSL_CACHE_DISABLE=0
CFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=32 -Wformat -Wformat-security -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z -Wl,now -Wl,-z -Wl,relro -fno-semantic-interposition -ffat-lto-objects -fno-trapping-math -Wl,-sort-common -Wl,--enable-new-dtags -mtune=skylake"
THEANO_FLAGS="floatX=float32,openmp=true,gcc.cxxflags="-ftree-vectorize -mavx""

Compiler Details:
--build=x86_64-generic-linux --disable-libmpx --disable-libunwind-exceptions --disable-multiarch --disable-vtable-verify --disable-werror --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-clocale=gnu --enable-default-pie --enable-gnu-indirect-function --enable-languages=c,c++,fortran,go --enable-ld=default --enable-libstdcxx-pch --enable-lto --enable-multilib --enable-plugin --enable-shared --enable-threads=posix --exec-prefix=/usr --includedir=/usr/include --target=x86_64-generic-linux --with-arch=westmere --with-gcc-major-version-only --with-glibc-version=2.19 --with-gnu-ld --with-isl --with-ppl=yes --with-tune=haswell

Disk Details:
BFQ / relatime,rw,stripe=256 / Block Size: 4096

Processor Details:
Scaling Governor: intel_pstate performance - CPU Microcode: 0xcc

Python Details:
Python 3.7.5

Security Details:
l1tf: Mitigation of PTE Inversion + mds: Vulnerable: Clear buffers attempted no microcode; SMT vulnerable + meltdown: Mitigation of PTI + spec_store_bypass: Vulnerable + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full generic retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling

Result Overview (runs 1-3, normalized results, 100% baseline up to roughly 112%): Compile Bench, Redis, Coremark, asmFish, eSpeak-NG Speech Engine, SQLite Speedtest, Monkey Audio Encoding, Stockfish, GROMACS, Numpy Benchmark, IndigoBench, simdjson, LAMMPS Molecular Dynamics Simulator, x265, WavPack Audio Encoding, oneDNN, LZ4 Compression, rav1e, NCNN, Timed Eigen Compilation, Timed FFmpeg Compilation, Ogg Audio Encoding, Kvazaar, Node.js V8 Web Tooling Benchmark, ASTC Encoder, Opus Codec Encoding, Basis Universal, Timed HMMer Search, CLOMP.

Xeon E3 1270 v5 Xmas: Detailed Results

Test | 1 | 2 | 3
redis: LPOP | 2550048.17 | 1596414.46 | 1587957.58
redis: GET | 2316062.03 | 2257650.58 | 2236387.87
compilebench: Compile | 1046.88 | 1045.42 | 1015.72
coremark: CoreMark Size 666 - Iterations Per Second | 162404.270884 | 166523.867029 | 164904.364049
redis: SET | 1775770.47 | 1734581.08 | 1771304.37
onednn: Deconvolution Batch shapes_1d - u8s8f32 - CPU | 10.13229 | 9.90766 | 9.99729
asmfish: 1024 Hash Memory, 26 Depth | 12809865 | 12702982 | 12531518
espeak: Text-To-Speech Synthesis | 30.485 | 30.008 | 30.174
simdjson: LargeRand | 0.86 | 0.85 | 0.85
sqlite-speedtest: Timed Time - Size 1,000 | 67.408 | 66.751 | 67.525
redis: SADD | 1987429.71 | 1965680.87 | 1971803.87
compress-lz4: 9 - Compression Speed | 46.87 | 47.35 | 47.11
compress-lz4: 1 - Compression Speed | 7759.31 | 7707.37 | 7687.13
x265: Bosphorus 1080p | 33.93 | 33.62 | 33.77
x265: Bosphorus 4K | 7.35 | 7.39 | 7.33
encode-ape: WAV To APE | 12.410 | 12.429 | 12.497
stockfish: Total Time | 8391618 | 8447193 | 8425354
onednn: Deconvolution Batch shapes_1d - f32 - CPU | 10.2574 | 10.2554 | 10.1931
indigobench: CPU - Supercar | 1.806 | 1.816 | 1.817
redis: LPUSH | 1555951.75 | 1553337.42 | 1562717.17
ncnn: CPU - vgg16 | 88.66 | 88.15 | 88.33
rav1e: 6 | 1.394 | 1.386 | 1.389
compress-lz4: 1 - Decompression Speed | 8068.8 | 8115.1 | 8101.9
gromacs: Water Benchmark | 0.523 | 0.526 | 0.524
numpy: | 353.67 | 351.80 | 352.84
ncnn: CPU - efficientnet-b0 | 10.35 | 10.34 | 10.30
simdjson: Kostya | 2.29 | 2.29 | 2.28
ncnn: CPU - yolov4-tiny | 36.75 | 36.67 | 36.59
ncnn: CPU - blazeface | 2.36 | 2.35 | 2.35
ncnn: CPU - googlenet | 19.49 | 19.46 | 19.41
kvazaar: Bosphorus 4K - Slow | 2.44 | 2.45 | 2.44
rav1e: 10 | 3.005 | 3.017 | 3.006
lammps: Rhodopsin Protein | 3.057 | 3.063 | 3.069
indigobench: CPU - Bedroom | 0.787 | 0.790 | 0.790
onednn: Recurrent Neural Network Training - f32 - CPU | 6890.24 | 6864.53 | 6877.21
compress-lz4: 9 - Decompression Speed | 8044.5 | 8040.0 | 8014.8
simdjson: PartialTweets | 2.93 | 2.94 | 2.94
simdjson: DistinctUserID | 3.03 | 3.03 | 3.02
onednn: IP Shapes 3D - u8s8f32 - CPU | 2.68414 | 2.69140 | 2.69290
ncnn: CPU-v3-v3 - mobilenet-v3 | 6.13 | 6.15 | 6.13
astcenc: Exhaustive | 541.15 | 539.89 | 539.49
basis: UASTC Level 0 | 9.616 | 9.590 | 9.587
ncnn: CPU - resnet18 | 20.10 | 20.09 | 20.04
onednn: Recurrent Neural Network Inference - f32 - CPU | 3674.98 | 3682.67 | 3671.72
onednn: Recurrent Neural Network Training - u8s8f32 - CPU | 6906.91 | 6895.93 | 6886.74
onednn: Recurrent Neural Network Inference - u8s8f32 - CPU | 3671.16 | 3660.60 | 3665.02
kvazaar: Bosphorus 1080p - Very Fast | 28.24 | 28.32 | 28.31
compress-lz4: 3 - Decompression Speed | 8035.3 | 8025.9 | 8013.1
onednn: Matrix Multiply Batch Shapes Transformer - f32 - CPU | 5.69692 | 5.68880 | 5.70303
encode-wavpack: WAV To WavPack | 15.193 | 15.172 | 15.157
kvazaar: Bosphorus 4K - Ultra Fast | 12.76 | 12.74 | 12.73
ncnn: CPU - resnet50 | 44.53 | 44.43 | 44.44
onednn: Convolution Batch Shapes Auto - u8s8f32 - CPU | 20.8721 | 20.8416 | 20.8303
rav1e: 5 | 1.047 | 1.048 | 1.046
compress-lz4: 3 - Compression Speed | 48.15 | 48.10 | 48.19
kvazaar: Bosphorus 1080p - Medium | 10.89 | 10.91 | 10.91
onednn: Recurrent Neural Network Training - bf16bf16bf16 - CPU | 6901.21 | 6913.78 | 6903.02
ncnn: CPU - mobilenet | 27.94 | 27.89 | 27.94
kvazaar: Bosphorus 1080p - Ultra Fast | 51.75 | 51.81 | 51.84
ncnn: CPU - alexnet | 17.46 | 17.43 | 17.45
ncnn: CPU - mnasnet | 6.26 | 6.25 | 6.26
onednn: Recurrent Neural Network Inference - bf16bf16bf16 - CPU | 3684.40 | 3678.69 | 3683.50
build-eigen: Time To Compile | 73.600 | 73.517 | 73.490
onednn: IP Shapes 1D - f32 - CPU | 7.49115 | 7.48582 | 7.48015
ncnn: CPU - squeezenet_ssd | 28.72 | 28.68 | 28.72
basis: ETC1S | 72.266 | 72.360 | 72.363
ncnn: CPU-v2-v2 - mobilenet-v2 | 7.71 | 7.7 | 7.71
onednn: IP Shapes 1D - u8s8f32 - CPU | 3.43699 | 3.44124 | 3.43810
ncnn: CPU - shufflenet-v2 | 8.23 | 8.24 | 8.24
encode-ogg: WAV To Ogg | 20.401 | 20.422 | 20.402
onednn: Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU | 5.71226 | 5.70694 | 5.70738
onednn: Deconvolution Batch shapes_3d - u8s8f32 - CPU | 7.17144 | 7.16483 | 7.16964
encode-opus: WAV To Opus Encode | 8.990 | 8.994 | 8.986
node-web-tooling: | 11.37 | 11.36 | 11.36
onednn: Convolution Batch Shapes Auto - f32 - CPU | 22.0175 | 22.0025 | 22.0203
basis: UASTC Level 2 + RDO Post-Processing | 795.732 | 795.494 | 796.132
onednn: Deconvolution Batch shapes_3d - f32 - CPU | 13.3562 | 13.3652 | 13.3661
ncnn: CPU - regnety_400m | 16.05 | 16.04 | 16.05
astcenc: Thorough | 66.84 | 66.82 | 66.80
hmmer: Pfam Database Search | 115.982 | 115.996 | 116.043
basis: UASTC Level 2 | 68.917 | 68.940 | 68.930
basis: UASTC Level 3 | 135.591 | 135.617 | 135.626
build-ffmpeg: Time To Compile | 110.571 | 110.562 | 110.550
onednn: IP Shapes 3D - f32 - CPU | 10.6306 | 10.6318 | 10.6303
astcenc: Medium | 10.20 | 10.2 | 10.2
astcenc: Fast | 7.85 | 7.85 | 7.85
rav1e: 1 | 0.366 | 0.366 | 0.366
kvazaar: Bosphorus 4K - Very Fast | 6.95 | 6.95 | 6.95
kvazaar: Bosphorus 1080p - Slow | 10.61 | 10.61 | 10.61
kvazaar: Bosphorus 4K - Medium | 2.49 | 2.49 | 2.49
clomp: Static OMP Speedup | 1.5 | 1.5 | 1.5
compilebench: Read Compiled Tree | 1293.47 | 1298.04 | 993.62
compilebench: Initial Create | 491.40 | 462.79 | 473.49

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.
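
As a rough sketch of the operation mix this profile exercises, the stock redis-benchmark client can drive the same commands against a local server (standard redis-benchmark flags; the test profile's exact invocation may differ):

  # 1,000,000 requests per command, quiet summary output
  redis-benchmark -t set,get,lpush,lpop,sadd -n 1000000 -q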

Redis 6.0.9, Test: LPOP (Requests Per Second; More Is Better)
1: 2550048.17 (SE +/- 36104.85, N = 3; Min: 2481627.75 / Max: 2604250)
2: 1596414.46 (SE +/- 5729.05, N = 3; Min: 1585090.38 / Max: 1603589.75)
3: 1587957.58 (SE +/- 12529.50, N = 3; Min: 1562900 / Max: 1600716.75)
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake

Redis 6.0.9, Test: GET (Requests Per Second; More Is Better)
1: 2316062.03 (SE +/- 29970.17, N = 15; Min: 2024680.25 / Max: 2433712.75)
2: 2257650.58 (SE +/- 12822.26, N = 3; Min: 2242224.25 / Max: 2283105)
3: 2236387.87 (SE +/- 18610.84, N = 15; Min: 2096503.12 / Max: 2326176.75)
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake

Compile Bench

Compilebench tries to age a filesystem by simulating some of the disk IO common in creating, compiling, patching, stating, and reading kernel trees. It indirectly measures how well filesystems can maintain directory locality as the disk fills up and directories age. This test is set up to use the makej mode with 10 initial directories. Learn more via the OpenBenchmarking.org test page.

Compile Bench 0.6, Test: Compile (MB/s; More Is Better)
1: 1046.88 (SE +/- 7.56, N = 3; Min: 1033.02 / Max: 1059.06)
2: 1045.42 (SE +/- 3.80, N = 3; Min: 1038.31 / Max: 1051.29)
3: 1015.72 (SE +/- 6.75, N = 3; Min: 1002.65 / Max: 1025.17)

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

Coremark 1.0, CoreMark Size 666 - Iterations Per Second (Iterations/Sec; More Is Better)
1: 162404.27 (SE +/- 707.58, N = 3; Min: 160990.09 / Max: 163157.09)
2: 166523.87 (SE +/- 2647.76, N = 3; Min: 161265.94 / Max: 169698.26)
3: 164904.36 (SE +/- 1019.61, N = 3; Min: 163315.3 / Max: 166805.67)
1. (CC) gcc options: -O2 -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -lrt" -lrt

Redis

Redis 6.0.9, Test: SET (Requests Per Second; More Is Better)
1: 1775770.47 (SE +/- 16979.65, N = 9; Min: 1664053.25 / Max: 1828153.62)
2: 1734581.08 (SE +/- 28054.56, N = 3; Min: 1678496.62 / Max: 1764063.62)
3: 1771304.37 (SE +/- 14894.97, N = 3; Min: 1742439 / Max: 1792114.62)
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
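
As a hedged sketch, the harness names above correspond to benchdnn drivers and batch files; an invocation along these lines times the int8 deconvolution shapes on the CPU engine (exact batch-file paths and flags vary by oneDNN version):

  # Performance-mode benchdnn run of the shapes_1d deconvolution batch
  ./benchdnn --deconv --engine=cpu --cfg=u8s8f32 --mode=P --batch=inputs/deconv/shapes_1d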

oneDNN 2.0, Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms; Fewer Is Better)
1: 10.13229 (SE +/- 0.08192, N = 3; MIN: 9.9; Min: 9.97 / Max: 10.22)
2: 9.90766 (SE +/- 0.02840, N = 3; MIN: 9.83; Min: 9.87 / Max: 9.96)
3: 9.99729 (SE +/- 0.10539, N = 3; MIN: 9.84; Min: 9.89 / Max: 10.21)
1. (CXX) g++ options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

asmFish

This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.

asmFish 2018-07-23, 1024 Hash Memory, 26 Depth (Nodes/second; More Is Better)
1: 12809865 (SE +/- 114329.80, N = 3; Min: 12592603 / Max: 12980236)
2: 12702982 (SE +/- 102809.00, N = 3; Min: 12592621 / Max: 12908410)
3: 12531518 (SE +/- 115883.84, N = 3; Min: 12301874 / Max: 12673451)

eSpeak-NG Speech Engine

This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.
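
For context, a comparable batch synthesis with espeak-ng looks like this (file names are placeholders):

  # Read a text file and write the synthesized speech to a WAV file
  espeak-ng -f outline_of_science.txt -w speech.wav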

eSpeak-NG Speech Engine 20200907, Text-To-Speech Synthesis (Seconds; Fewer Is Better)
1: 30.49 (SE +/- 0.09, N = 4; Min: 30.25 / Max: 30.65)
2: 30.01 (SE +/- 0.03, N = 4; Min: 29.94 / Max: 30.06)
3: 30.17 (SE +/- 0.03, N = 4; Min: 30.12 / Max: 30.25)
1. (CC) gcc options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -std=c99 -lpthread -lm

simdjson

This is a benchmark of SIMDJSON, a high-performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 0.7.1, Throughput Test: LargeRandom (GB/s; More Is Better)
1: 0.86 (SE +/- 0.00, N = 3; Min: 0.85 / Max: 0.86)
2: 0.85 (SE +/- 0.00, N = 3; Min: 0.85 / Max: 0.85)
3: 0.85 (SE +/- 0.00, N = 3; Min: 0.85 / Max: 0.85)
1. (CXX) g++ options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -pthread

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.
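
A minimal sketch of running speedtest1 directly with the enlarged workload (speedtest1 is built from the SQLite source tree; --size scales the problem size):

  # Run the speedtest1 workload at size 1,000 against a scratch database
  ./speedtest1 --size 1000 scratch.db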

SQLite Speedtest 3.30, Timed Time - Size 1,000 (Seconds; Fewer Is Better)
1: 67.41 (SE +/- 0.23, N = 3; Min: 66.95 / Max: 67.65)
2: 66.75 (SE +/- 0.08, N = 3; Min: 66.62 / Max: 66.9)
3: 67.53 (SE +/- 0.45, N = 3; Min: 66.66 / Max: 68.2)
1. (CC) gcc options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -ldl -lz -lpthread

Redis

Redis 6.0.9, Test: SADD (Requests Per Second; More Is Better)
1: 1987429.71 (SE +/- 9920.35, N = 3; Min: 1969700.75 / Max: 2004008)
2: 1965680.87 (SE +/- 25503.06, N = 3; Min: 1916076.62 / Max: 2000768)
3: 1971803.87 (SE +/- 21310.59, N = 3; Min: 1934421.62 / Max: 2008224.88)
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.
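
As a sketch, the lz4 command-line tool's built-in benchmark mode measures the same compression levels (the test profile's sample file is an Ubuntu ISO; the file name here is a placeholder):

  # Benchmark compression levels 1, 3, and 9 in-memory on a sample file
  lz4 -b1 ubuntu.iso
  lz4 -b3 ubuntu.iso
  lz4 -b9 ubuntu.iso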

LZ4 Compression 1.9.3, Compression Level: 9 - Compression Speed (MB/s; More Is Better)
1: 46.87 (SE +/- 0.37, N = 3; Min: 46.13 / Max: 47.25)
2: 47.35 (SE +/- 0.01, N = 3; Min: 47.33 / Max: 47.37)
3: 47.11 (SE +/- 0.15, N = 3; Min: 46.8 / Max: 47.27)
1. (CC) gcc options: -O3

LZ4 Compression 1.9.3, Compression Level: 1 - Compression Speed (MB/s; More Is Better)
1: 7759.31 (SE +/- 10.41, N = 3; Min: 7739.26 / Max: 7774.18)
2: 7707.37 (SE +/- 31.16, N = 3; Min: 7646.06 / Max: 7747.76)
3: 7687.13 (SE +/- 44.31, N = 3; Min: 7602.46 / Max: 7752.11)
1. (CC) gcc options: -O3

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance. Learn more via the OpenBenchmarking.org test page.
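
For reference, a 1080p x265 encode resembles the following (the input file name is a placeholder; the test profile supplies its own Bosphorus source clip):

  # Encode a Y4M source to raw HEVC with x265 defaults
  x265 Bosphorus_1920x1080.y4m -o bosphorus.hevc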

x265 3.4, Video Input: Bosphorus 1080p (Frames Per Second; More Is Better)
1: 33.93 (SE +/- 0.14, N = 3; Min: 33.76 / Max: 34.2)
2: 33.62 (SE +/- 0.08, N = 3; Min: 33.49 / Max: 33.77)
3: 33.77 (SE +/- 0.08, N = 3; Min: 33.61 / Max: 33.89)
1. (CXX) g++ options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -rdynamic -lpthread -lrt -ldl -lnuma

x265 3.4, Video Input: Bosphorus 4K (Frames Per Second; More Is Better)
1: 7.35 (SE +/- 0.02, N = 3; Min: 7.31 / Max: 7.39)
2: 7.39 (SE +/- 0.03, N = 3; Min: 7.34 / Max: 7.44)
3: 7.33 (SE +/- 0.02, N = 3; Min: 7.3 / Max: 7.35)
1. (CXX) g++ options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -rdynamic -lpthread -lrt -ldl -lnuma

Monkey Audio Encoding

This test times how long it takes to encode a sample WAV file to Monkey's Audio APE format. Learn more via the OpenBenchmarking.org test page.
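
A hedged sketch using the Monkey's Audio command-line compressor (assuming the mac binary built from the same sources; -c2000 is the normal compression level):

  # Compress a WAV file to APE
  mac input.wav output.ape -c2000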

Monkey Audio Encoding 3.99.6, WAV To APE (Seconds; Fewer Is Better)
1: 12.41 (SE +/- 0.04, N = 5; Min: 12.33 / Max: 12.58)
2: 12.43 (SE +/- 0.04, N = 5; Min: 12.35 / Max: 12.59)
3: 12.50 (SE +/- 0.04, N = 5; Min: 12.38 / Max: 12.61)
1. (CXX) g++ options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -pedantic -rdynamic -lrt

Stockfish

This is a test of Stockfish, an advanced C++11 chess benchmark that can scale up to 128 CPU cores. Learn more via the OpenBenchmarking.org test page.
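
For context, Stockfish exposes this kind of workload through its built-in bench command (positional arguments are transposition-table size in MB, thread count, and search depth; the values shown mirror the asmFish configuration above and are illustrative):

  # Built-in benchmark: 1024 MB hash, 8 threads, depth 26
  stockfish bench 1024 8 26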

Stockfish 12, Total Time (Nodes Per Second; More Is Better)
1: 8391618 (SE +/- 38275.35, N = 3; Min: 8316576 / Max: 8442236)
2: 8447193 (SE +/- 120522.70, N = 4; Min: 8263189 / Max: 8794607)
3: 8425354 (SE +/- 26943.43, N = 3; Min: 8385290 / Max: 8476595)
1. (CXX) g++ options: -m64 -lpthread -O3 -pipe -fexceptions -fstack-protector -ffat-lto-objects -fno-trapping-math -mtune=skylake -fno-exceptions -std=c++17 -pedantic -msse -msse3 -mpopcnt -msse4.1 -mssse3 -msse2 -flto -flto=jobserver

oneDNN

oneDNN 2.0, Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms; Fewer Is Better)
1: 10.26 (SE +/- 0.02, N = 3; MIN: 10.16; Min: 10.23 / Max: 10.29)
2: 10.26 (SE +/- 0.04, N = 3; MIN: 10.14; Min: 10.19 / Max: 10.33)
3: 10.19 (SE +/- 0.03, N = 3; MIN: 10.1; Min: 10.15 / Max: 10.25)
1. (CXX) g++ options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4, Acceleration: CPU - Scene: Supercar (M samples/s; More Is Better)
1: 1.806 (SE +/- 0.004, N = 3; Min: 1.8 / Max: 1.81)
2: 1.816 (SE +/- 0.004, N = 3; Min: 1.81 / Max: 1.82)
3: 1.817 (SE +/- 0.005, N = 3; Min: 1.81 / Max: 1.83)

Redis

Redis 6.0.9, Test: LPUSH (Requests Per Second; More Is Better)
1: 1555951.75 (SE +/- 12177.30, N = 3; Min: 1532226.62 / Max: 1572578.62)
2: 1553337.42 (SE +/- 24273.96, N = 3; Min: 1506554.25 / Max: 1587961.88)
3: 1562717.17 (SE +/- 5194.67, N = 3; Min: 1555309.5 / Max: 1572729.62)
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake

NCNN

NCNN is a high-performance neural network inference framework, developed by Tencent, that is optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
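
As a sketch, NCNN ships a benchncnn utility that times its bundled models much like this profile (argument order per the NCNN README is loop count, thread count, powersave mode, and GPU device, with -1 selecting CPU-only; exact arguments vary by release):

  # 8 timing loops, 4 threads, no powersave, CPU only
  ./benchncnn 8 4 0 -1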

NCNN 20201218, Target: CPU - Model: vgg16 (ms; Fewer Is Better)
1: 88.66 (SE +/- 0.14, N = 3; MIN: 87.86 / MAX: 97.1; Min: 88.39 / Max: 88.82)
2: 88.15 (SE +/- 0.28, N = 3; MIN: 87.13 / MAX: 95.68; Min: 87.61 / Max: 88.51)
3: 88.33 (SE +/- 0.26, N = 3; MIN: 87.3 / MAX: 89.54; Min: 87.93 / Max: 88.83)
1. (CXX) g++ options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -rdynamic -lgomp -lpthread

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.
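
For reference, a rav1e encode at one of the tested speed levels looks like this (file names are placeholders):

  # AV1-encode a Y4M clip at speed 6 into an IVF container
  rav1e input.y4m --speed 6 --output output.ivf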

rav1e 0.4 Alpha, Speed: 6 (Frames Per Second; More Is Better)
1: 1.394 (SE +/- 0.003, N = 3; Min: 1.39 / Max: 1.4)
2: 1.386 (SE +/- 0.004, N = 3; Min: 1.38 / Max: 1.39)
3: 1.389 (SE +/- 0.004, N = 3; Min: 1.38 / Max: 1.4)

LZ4 Compression

LZ4 Compression 1.9.3, Compression Level: 1 - Decompression Speed (MB/s; More Is Better)
1: 8068.8 (SE +/- 4.89, N = 3; Min: 8059.4 / Max: 8075.8)
2: 8115.1 (SE +/- 6.60, N = 3; Min: 8104.6 / Max: 8127.3)
3: 8101.9 (SE +/- 13.85, N = 3; Min: 8074.2 / Max: 8116.2)
1. (CC) gcc options: -O3

GROMACS

GROMACS (GROningen MAchine for Chemical Simulations) is a molecular dynamics package; this test runs it on the CPU with the water_GMX50 data. Learn more via the OpenBenchmarking.org test page.
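
A hedged sketch of the underlying mdrun step (the water benchmark provides a prepared topol.tpr run input; the flags shown are standard gmx mdrun options):

  # Run the water system for a fixed number of steps on 8 OpenMP threads
  gmx mdrun -s topol.tpr -nsteps 1000 -ntomp 8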

GROMACS 2020.3, Water Benchmark (Ns Per Day; More Is Better)
1: 0.523 (SE +/- 0.003, N = 3; Min: 0.52 / Max: 0.53)
2: 0.526 (SE +/- 0.003, N = 3; Min: 0.52 / Max: 0.53)
3: 0.524 (SE +/- 0.002, N = 3; Min: 0.52 / Max: 0.53)
1. (CXX) g++ options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -pthread -lrt -lpthread -lm

Numpy Benchmark

This is a test of general NumPy performance. Learn more via the OpenBenchmarking.org test page.

Numpy Benchmark (Score; More Is Better)
1: 353.67 (SE +/- 1.64, N = 3; Min: 351.75 / Max: 356.94)
2: 351.80 (SE +/- 1.12, N = 3; Min: 349.63 / Max: 353.35)
3: 352.84 (SE +/- 0.90, N = 3; Min: 351.1 / Max: 354.13)

NCNN

NCNN 20201218, Target: CPU - Model: efficientnet-b0 (ms; Fewer Is Better)
1: 10.35 (SE +/- 0.01, N = 3; MIN: 10.31 / MAX: 10.41; Min: 10.34 / Max: 10.36)
2: 10.34 (SE +/- 0.02, N = 3; MIN: 10.26 / MAX: 17.07; Min: 10.3 / Max: 10.37)
3: 10.30 (SE +/- 0.01, N = 3; MIN: 10.26 / MAX: 10.35; Min: 10.29 / Max: 10.31)
1. (CXX) g++ options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -rdynamic -lgomp -lpthread

simdjson

simdjson 0.7.1, Throughput Test: Kostya (GB/s; More Is Better)
1: 2.29 (SE +/- 0.00, N = 3; Min: 2.29 / Max: 2.29)
2: 2.29 (SE +/- 0.00, N = 3; Min: 2.29 / Max: 2.29)
3: 2.28 (SE +/- 0.01, N = 3; Min: 2.27 / Max: 2.29)
1. (CXX) g++ options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -pthread

NCNN

NCNN 20201218, Target: CPU - Model: yolov4-tiny (ms; Fewer Is Better)
1: 36.75 (SE +/- 0.05, N = 3; MIN: 36.55 / MAX: 45.59; Min: 36.67 / Max: 36.84)
2: 36.67 (SE +/- 0.06, N = 3; MIN: 36.34 / MAX: 37.78; Min: 36.55 / Max: 36.75)
3: 36.59 (SE +/- 0.07, N = 3; MIN: 36.35 / MAX: 45.55; Min: 36.51 / Max: 36.73)
1. (CXX) g++ options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: blazeface (ms; Fewer Is Better)
1: 2.36 (SE +/- 0.01, N = 3; MIN: 2.32 / MAX: 2.4; Min: 2.35 / Max: 2.37)
2: 2.35 (SE +/- 0.00, N = 3; MIN: 2.32 / MAX: 2.42; Min: 2.35 / Max: 2.36)
3: 2.35 (SE +/- 0.01, N = 3; MIN: 2.32 / MAX: 2.39; Min: 2.34 / Max: 2.36)
1. (CXX) g++ options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: googlenet (ms; Fewer Is Better)
1: 19.49 (SE +/- 0.01, N = 3; MIN: 19.41 / MAX: 20.57; Min: 19.47 / Max: 19.5)
2: 19.46 (SE +/- 0.03, N = 3; MIN: 19.34 / MAX: 19.62; Min: 19.41 / Max: 19.5)
3: 19.41 (SE +/- 0.01, N = 3; MIN: 19.35 / MAX: 19.66; Min: 19.4 / Max: 19.43)
1. (CXX) g++ options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -rdynamic -lgomp -lpthread

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.
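
For context, a Kvazaar command line matching one of the tested configurations resembles this (raw YUV input needs an explicit resolution; file names are placeholders):

  # Encode 4K YUV input to HEVC with the slow preset
  kvazaar -i Bosphorus_3840x2160.yuv --input-res 3840x2160 --preset slow -o output.hevc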

Kvazaar 2.0, Video Input: Bosphorus 4K - Video Preset: Slow (Frames Per Second; More Is Better)
1: 2.44 (SE +/- 0.00, N = 3; Min: 2.44 / Max: 2.45)
2: 2.45 (SE +/- 0.00, N = 3; Min: 2.44 / Max: 2.45)
3: 2.44 (SE +/- 0.00, N = 3; Min: 2.44 / Max: 2.45)
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -lpthread -lm -lrt

rav1e

rav1e 0.4 Alpha, Speed: 10 (Frames Per Second; More Is Better)
1: 3.005 (SE +/- 0.028, N = 3; Min: 2.96 / Max: 3.05)
2: 3.017 (SE +/- 0.023, N = 3; Min: 2.97 / Max: 3.04)
3: 3.006 (SE +/- 0.015, N = 3; Min: 2.98 / Max: 3.03)

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.
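
A minimal sketch of the standard rhodopsin benchmark run (in.rhodo ships in the LAMMPS bench directory; the binary name depends on the build):

  # Run the rhodopsin protein benchmark input
  lmp -in in.rhodo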

LAMMPS Molecular Dynamics Simulator 29Oct2020, Model: Rhodopsin Protein (ns/day; More Is Better)
1: 3.057 (SE +/- 0.006, N = 3; Min: 3.05 / Max: 3.07)
2: 3.063 (SE +/- 0.008, N = 3; Min: 3.05 / Max: 3.07)
3: 3.069 (SE +/- 0.006, N = 3; Min: 3.06 / Max: 3.08)
1. (CXX) g++ options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -pthread -lm

IndigoBench

IndigoBench 4.4, Acceleration: CPU - Scene: Bedroom (M samples/s; More Is Better)
1: 0.787 (SE +/- 0.000, N = 3; Min: 0.79 / Max: 0.79)
2: 0.790 (SE +/- 0.001, N = 3; Min: 0.79 / Max: 0.79)
3: 0.790 (SE +/- 0.002, N = 3; Min: 0.79 / Max: 0.79)

oneDNN

oneDNN 2.0, Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms; Fewer Is Better)
1: 6890.24 (SE +/- 15.93, N = 3; MIN: 6803.22; Min: 6874.03 / Max: 6922.1)
2: 6864.53 (SE +/- 21.21, N = 3; MIN: 6760.08; Min: 6826.65 / Max: 6899.99)
3: 6877.21 (SE +/- 15.30, N = 3; MIN: 6781.48; Min: 6848.61 / Max: 6900.95)
1. (CXX) g++ options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

LZ4 Compression

LZ4 Compression 1.9.3, Compression Level: 9 - Decompression Speed (MB/s; More Is Better)
1: 8044.5 (SE +/- 1.82, N = 3; Min: 8041.2 / Max: 8047.5)
2: 8040.0 (SE +/- 4.43, N = 3; Min: 8034.7 / Max: 8048.8)
3: 8014.8 (SE +/- 1.47, N = 3; Min: 8012 / Max: 8017)
1. (CC) gcc options: -O3

simdjson

simdjson 0.7.1, Throughput Test: PartialTweets (GB/s; More Is Better)
1: 2.93 (SE +/- 0.01, N = 3; Min: 2.91 / Max: 2.94)
2: 2.94 (SE +/- 0.00, N = 3; Min: 2.94 / Max: 2.94)
3: 2.94 (SE +/- 0.00, N = 3; Min: 2.94 / Max: 2.95)
1. (CXX) g++ options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -pthread

simdjson 0.7.1, Throughput Test: DistinctUserID (GB/s; More Is Better)
1: 3.03 (SE +/- 0.00, N = 3; Min: 3.02 / Max: 3.03)
2: 3.03 (SE +/- 0.00, N = 3; Min: 3.02 / Max: 3.03)
3: 3.02 (SE +/- 0.00, N = 3; Min: 3.02 / Max: 3.03)
1. (CXX) g++ options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -pthread

oneDNN

oneDNN 2.0, Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms; Fewer Is Better)
1: 2.68414 (SE +/- 0.00502, N = 3; MIN: 2.62; Min: 2.67 / Max: 2.69)
2: 2.69140 (SE +/- 0.00691, N = 3; MIN: 2.63; Min: 2.68 / Max: 2.7)
3: 2.69290 (SE +/- 0.00357, N = 3; MIN: 2.64; Min: 2.69 / Max: 2.7)
1. (CXX) g++ options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

NCNN

NCNN 20201218, Target: CPU-v3-v3 - Model: mobilenet-v3 (ms; Fewer Is Better)
1: 6.13 (SE +/- 0.00, N = 3; MIN: 6.11 / MAX: 6.3; Min: 6.13 / Max: 6.14)
2: 6.15 (SE +/- 0.02, N = 3; MIN: 6.11 / MAX: 6.23; Min: 6.13 / Max: 6.18)
3: 6.13 (SE +/- 0.01, N = 3; MIN: 6.09 / MAX: 6.17; Min: 6.12 / Max: 6.14)
1. (CXX) g++ options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -rdynamic -lgomp -lpthread

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.
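
For reference, astcenc 2.x is driven like this (block size and preset are the main quality knobs; file names are placeholders):

  # Compress an LDR image to ASTC 6x6 blocks with the exhaustive preset
  astcenc -cl input.png output.astc 6x6 -exhaustive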

ASTC Encoder 2.0, Preset: Exhaustive (Seconds; Fewer Is Better)
1: 541.15 (SE +/- 1.57, N = 3; Min: 539.33 / Max: 544.27)
2: 539.89 (SE +/- 0.22, N = 3; Min: 539.61 / Max: 540.33)
3: 539.49 (SE +/- 0.12, N = 3; Min: 539.28 / Max: 539.69)
1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.
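
A hedged sketch of the basisu compressor at the tested setting (flag names per the Basis Universal README):

  # Encode a PNG to a UASTC .basis file at level 0
  basisu -uastc -uastc_level 0 input.png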

Basis Universal 1.12, Settings: UASTC Level 0 (Seconds; Fewer Is Better)
1: 9.616 (SE +/- 0.016, N = 3; Min: 9.6 / Max: 9.65)
2: 9.590 (SE +/- 0.001, N = 3; Min: 9.59 / Max: 9.59)
3: 9.587 (SE +/- 0.003, N = 3; Min: 9.58 / Max: 9.59)
1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

NCNN

NCNN 20201218, Target: CPU - Model: resnet18 (ms; Fewer Is Better)
1: 20.10 (SE +/- 0.04, N = 3; MIN: 19.65 / MAX: 20.5; Min: 20.03 / Max: 20.16)
2: 20.09 (SE +/- 0.02, N = 3; MIN: 19.78 / MAX: 20.46; Min: 20.05 / Max: 20.13)
3: 20.04 (SE +/- 0.02, N = 3; MIN: 19.64 / MAX: 26.19; Min: 20 / Max: 20.07)
1. (CXX) g++ options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -rdynamic -lgomp -lpthread

oneDNN

oneDNN 2.0, Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms; Fewer Is Better)
1: 3674.98 (SE +/- 4.94, N = 3; MIN: 3619.26; Min: 3665.12 / Max: 3680.39)
2: 3682.67 (SE +/- 11.74, N = 3; MIN: 3611.92; Min: 3660.21 / Max: 3699.81)
3: 3671.72 (SE +/- 4.36, N = 3; MIN: 3623.53; Min: 3666.71 / Max: 3680.41)
1. (CXX) g++ options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0, Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms; Fewer Is Better)
1: 6906.91 (SE +/- 18.32, N = 3; MIN: 6791.64; Min: 6871.97 / Max: 6933.96)
2: 6895.93 (SE +/- 13.21, N = 3; MIN: 6790.76; Min: 6870.1 / Max: 6913.66)
3: 6886.74 (SE +/- 15.39, N = 3; MIN: 6781.26; Min: 6858.26 / Max: 6911.08)
1. (CXX) g++ options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0, Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms; Fewer Is Better)
1: 3671.16 (SE +/- 9.05, N = 3; MIN: 3612.47; Min: 3654.77 / Max: 3685.99)
2: 3660.60 (SE +/- 7.55, N = 3; MIN: 3599.27; Min: 3645.94 / Max: 3671.04)
3: 3665.02 (SE +/- 4.53, N = 3; MIN: 3614; Min: 3659.14 / Max: 3673.92)
1. (CXX) g++ options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Kvazaar

Kvazaar 2.0, Video Input: Bosphorus 1080p - Video Preset: Very Fast (Frames Per Second; More Is Better)
1: 28.24 (SE +/- 0.10, N = 3; Min: 28.08 / Max: 28.42)
2: 28.32 (SE +/- 0.09, N = 3; Min: 28.13 / Max: 28.44)
3: 28.31 (SE +/- 0.13, N = 3; Min: 28.06 / Max: 28.45)
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -lpthread -lm -lrt

LZ4 Compression

LZ4 Compression 1.9.3, Compression Level: 3 - Decompression Speed (MB/s; More Is Better)
1: 8035.3 (SE +/- 6.16, N = 3; Min: 8025 / Max: 8046.3)
2: 8025.9 (SE +/- 3.90, N = 3; Min: 8019.1 / Max: 8032.6)
3: 8013.1 (SE +/- 8.78, N = 3; Min: 7997.6 / Max: 8028)
1. (CC) gcc options: -O3

oneDNN

oneDNN 2.0, Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms; Fewer Is Better)
1: 5.69692 (SE +/- 0.00435, N = 3; MIN: 5.45; Min: 5.69 / Max: 5.7)
2: 5.68880 (SE +/- 0.00822, N = 3; MIN: 5.45; Min: 5.68 / Max: 5.71)
3: 5.70303 (SE +/- 0.01227, N = 3; MIN: 5.44; Min: 5.69 / Max: 5.73)
1. (CXX) g++ options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

WavPack Audio Encoding

This test times how long it takes to encode a sample WAV file to WavPack format with very high quality settings. Learn more via the OpenBenchmarking.org test page.
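
For context, WavPack's highest-quality mode is selected like this (-hh is "very high" quality and -x enables extra encode processing; output defaults to input.wv):

  # Encode WAV to WavPack at very high quality
  wavpack -hh -x input.wav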

WavPack Audio Encoding 5.3, WAV To WavPack (Seconds; Fewer Is Better)
1: 15.19 (SE +/- 0.02, N = 5; Min: 15.17 / Max: 15.28)
2: 15.17 (SE +/- 0.00, N = 5; Min: 15.17 / Max: 15.18)
3: 15.16 (SE +/- 0.01, N = 5; Min: 15.14 / Max: 15.19)
1. (CXX) g++ options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -rdynamic

Kvazaar

Kvazaar 2.0, Video Input: Bosphorus 4K - Video Preset: Ultra Fast (Frames Per Second; More Is Better)
1: 12.76 (SE +/- 0.01, N = 3; Min: 12.75 / Max: 12.78)
2: 12.74 (SE +/- 0.00, N = 3; Min: 12.73 / Max: 12.74)
3: 12.73 (SE +/- 0.02, N = 3; Min: 12.7 / Max: 12.75)
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -lpthread -lm -lrt

NCNN

NCNN 20201218, Target: CPU - Model: resnet50 (ms; Fewer Is Better)
1: 44.53 (SE +/- 0.02, N = 3; MIN: 44.2 / MAX: 53.53; Min: 44.49 / Max: 44.56)
2: 44.43 (SE +/- 0.01, N = 3; MIN: 44.16 / MAX: 44.84; Min: 44.42 / Max: 44.45)
3: 44.44 (SE +/- 0.03, N = 3; MIN: 44.05 / MAX: 53.61; Min: 44.41 / Max: 44.49)
1. (CXX) g++ options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -rdynamic -lgomp -lpthread

oneDNN

oneDNN 2.0, Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms; Fewer Is Better)
1: 20.87 (SE +/- 0.02, N = 3; MIN: 20.68; Min: 20.84 / Max: 20.9)
2: 20.84 (SE +/- 0.01, N = 3; MIN: 20.6; Min: 20.83 / Max: 20.85)
3: 20.83 (SE +/- 0.00, N = 3; MIN: 20.56; Min: 20.82 / Max: 20.84)
1. (CXX) g++ options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

rav1e

rav1e 0.4 Alpha, Speed: 5 (Frames Per Second; More Is Better)
1: 1.047 (SE +/- 0.005, N = 3; Min: 1.04 / Max: 1.06)
2: 1.048 (SE +/- 0.003, N = 3; Min: 1.05 / Max: 1.05)
3: 1.046 (SE +/- 0.003, N = 3; Min: 1.04 / Max: 1.05)

LZ4 Compression

LZ4 Compression 1.9.3, Compression Level: 3 - Compression Speed (MB/s; More Is Better)
1: 48.15 (SE +/- 0.17, N = 3; Min: 47.81 / Max: 48.32)
2: 48.10 (SE +/- 0.07, N = 3; Min: 48.03 / Max: 48.23)
3: 48.19 (SE +/- 0.09, N = 3; Min: 48.02 / Max: 48.32)
1. (CC) gcc options: -O3

Kvazaar

This is a test of Kvazaar, a CPU-based H.265 video encoder written in the C programming language and optimized in assembly. Kvazaar won the 2016 ACM Open-Source Software Competition and is developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.
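
A comparable hand-run encode of the 1080p Bosphorus clip (raw YUV input and the file name are assumptions) would look like:

  kvazaar -i Bosphorus_1920x1080.yuv --input-res 1920x1080 --preset medium -o out.hevc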

Kvazaar 2.0 - Video Input: Bosphorus 1080p - Video Preset: Medium (Frames Per Second, More Is Better)
Run 1: 10.89  (SE +/- 0.01, N = 3; trials: Min 10.87 / Avg 10.89 / Max 10.9)
Run 2: 10.91  (SE +/- 0.01, N = 3; trials: Min 10.9 / Avg 10.91 / Max 10.92)
Run 3: 10.91  (SE +/- 0.01, N = 3; trials: Min 10.9 / Avg 10.91 / Max 10.92)
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -lpthread -lm -lrt

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
Run 1: 6901.21  (SE +/- 22.22, N = 3; trials: Min 6862.42 / Avg 6901.21 / Max 6939.39; in-run MIN 6786.8)
Run 2: 6913.78  (SE +/- 2.64, N = 3; trials: Min 6909.88 / Avg 6913.78 / Max 6918.82; in-run MIN 6834.84)
Run 3: 6903.02  (SE +/- 13.53, N = 3; trials: Min 6877.64 / Avg 6903.02 / Max 6923.86; in-run MIN 6804.48)
1. (CXX) g++ options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: mobilenet (ms, Fewer Is Better)
Run 1: 27.94  (SE +/- 0.01, N = 3; trials: Min 27.91 / Avg 27.94 / Max 27.96; in-run MIN 27.83 / MAX 28.25)
Run 2: 27.89  (SE +/- 0.05, N = 3; trials: Min 27.8 / Avg 27.89 / Max 27.97; in-run MIN 27.69 / MAX 28.15)
Run 3: 27.94  (SE +/- 0.05, N = 3; trials: Min 27.87 / Avg 27.94 / Max 28.05; in-run MIN 27.76 / MAX 57.15)
1. (CXX) g++ options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -rdynamic -lgomp -lpthread

Kvazaar

This is a test of Kvazaar, a CPU-based H.265 video encoder written in the C programming language and optimized in assembly. Kvazaar won the 2016 ACM Open-Source Software Competition and is developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.0 - Video Input: Bosphorus 1080p - Video Preset: Ultra Fast (Frames Per Second, More Is Better)
Run 1: 51.75  (SE +/- 0.04, N = 3; trials: Min 51.69 / Avg 51.75 / Max 51.82)
Run 2: 51.81  (SE +/- 0.06, N = 3; trials: Min 51.7 / Avg 51.81 / Max 51.92)
Run 3: 51.84  (SE +/- 0.04, N = 3; trials: Min 51.78 / Avg 51.84 / Max 51.91)
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -lpthread -lm -lrt

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: alexnet (ms, Fewer Is Better)
Run 1: 17.46  (SE +/- 0.02, N = 3; trials: Min 17.42 / Avg 17.46 / Max 17.5; in-run MIN 17.18 / MAX 17.79)
Run 2: 17.43  (SE +/- 0.02, N = 3; trials: Min 17.4 / Avg 17.43 / Max 17.47; in-run MIN 17.19 / MAX 17.69)
Run 3: 17.45  (SE +/- 0.03, N = 3; trials: Min 17.38 / Avg 17.45 / Max 17.49; in-run MIN 17.21 / MAX 17.71)
1. (CXX) g++ options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: mnasnet (ms, Fewer Is Better)
Run 1: 6.26  (SE +/- 0.01, N = 3; trials: Min 6.24 / Avg 6.26 / Max 6.28; in-run MIN 6.21 / MAX 6.32)
Run 2: 6.25  (SE +/- 0.01, N = 3; trials: Min 6.24 / Avg 6.25 / Max 6.26; in-run MIN 6.22 / MAX 6.31)
Run 3: 6.26  (SE +/- 0.01, N = 3; trials: Min 6.25 / Avg 6.26 / Max 6.27; in-run MIN 6.23 / MAX 6.31)
1. (CXX) g++ options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -rdynamic -lgomp -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
Run 1: 3684.40  (SE +/- 5.10, N = 3; trials: Min 3674.35 / Avg 3684.4 / Max 3690.95; in-run MIN 3627.75)
Run 2: 3678.69  (SE +/- 15.81, N = 3; trials: Min 3647.1 / Avg 3678.69 / Max 3695.41; in-run MIN 3596.98)
Run 3: 3683.50  (SE +/- 4.57, N = 3; trials: Min 3676.62 / Avg 3683.5 / Max 3692.15; in-run MIN 3623.36)
1. (CXX) g++ options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Timed Eigen Compilation

This test times how long it takes to build all Eigen examples. The Eigen examples are compiled serially. Eigen is a C++ template library for linear algebra. Learn more via the OpenBenchmarking.org test page.
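
To reproduce this compile timing with the same harness, the corresponding test profile can be run directly (the profile name build-eigen is an assumption; openbenchmarking.org lists the canonical name):

  phoronix-test-suite benchmark build-eigen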

Timed Eigen Compilation 3.3.9 - Time To Compile (Seconds, Fewer Is Better)
Run 1: 73.60  (SE +/- 0.03, N = 3; trials: Min 73.56 / Avg 73.6 / Max 73.66)
Run 2: 73.52  (SE +/- 0.03, N = 3; trials: Min 73.47 / Avg 73.52 / Max 73.58)
Run 3: 73.49  (SE +/- 0.04, N = 3; trials: Min 73.43 / Avg 73.49 / Max 73.55)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
Run 1: 7.49115  (SE +/- 0.00245, N = 3; trials: Min 7.49 / Avg 7.49 / Max 7.5; in-run MIN 7.27)
Run 2: 7.48582  (SE +/- 0.01082, N = 3; trials: Min 7.47 / Avg 7.49 / Max 7.51; in-run MIN 7.25)
Run 3: 7.48015  (SE +/- 0.00655, N = 3; trials: Min 7.47 / Avg 7.48 / Max 7.49; in-run MIN 7.26)
1. (CXX) g++ options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: squeezenet_ssd (ms, Fewer Is Better)
Run 1: 28.72  (SE +/- 0.04, N = 3; trials: Min 28.65 / Avg 28.72 / Max 28.77; in-run MIN 28.52 / MAX 28.98)
Run 2: 28.68  (SE +/- 0.04, N = 3; trials: Min 28.64 / Avg 28.68 / Max 28.77; in-run MIN 28.51 / MAX 29.57)
Run 3: 28.72  (SE +/- 0.01, N = 3; trials: Min 28.71 / Avg 28.72 / Max 28.74; in-run MIN 28.56 / MAX 37.32)
1. (CXX) g++ options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -rdynamic -lgomp -lpthread

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.
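
Comparable standalone conversions with the basisu tool would be along these lines (the PNG name is a placeholder, and the UASTC flag spellings are assumptions worth checking against basisu -h):

  basisu texture.png
  basisu -uastc -uastc_level 2 texture.png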

Basis Universal 1.12 - Settings: ETC1S (Seconds, Fewer Is Better)
Run 1: 72.27  (SE +/- 0.19, N = 3; trials: Min 71.96 / Avg 72.27 / Max 72.6)
Run 2: 72.36  (SE +/- 0.10, N = 3; trials: Min 72.22 / Avg 72.36 / Max 72.56)
Run 3: 72.36  (SE +/- 0.11, N = 3; trials: Min 72.17 / Avg 72.36 / Max 72.56)
1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better)
Run 1: 7.71  (SE +/- 0.00, N = 3; trials: Min 7.7 / Avg 7.71 / Max 7.71; in-run MIN 7.66 / MAX 7.79)
Run 2: 7.70  (SE +/- 0.00, N = 3; trials: Min 7.7 / Avg 7.7 / Max 7.7; in-run MIN 7.65 / MAX 7.99)
Run 3: 7.71  (SE +/- 0.01, N = 3; trials: Min 7.69 / Avg 7.71 / Max 7.72; in-run MIN 7.64 / MAX 7.89)
1. (CXX) g++ options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -rdynamic -lgomp -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
Run 1: 3.43699  (SE +/- 0.00076, N = 3; trials: Min 3.44 / Avg 3.44 / Max 3.44; in-run MIN 3.41)
Run 2: 3.44124  (SE +/- 0.00368, N = 3; trials: Min 3.44 / Avg 3.44 / Max 3.45; in-run MIN 3.41)
Run 3: 3.43810  (SE +/- 0.00077, N = 3; trials: Min 3.44 / Avg 3.44 / Max 3.44; in-run MIN 3.41)
1. (CXX) g++ options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: shufflenet-v2 (ms, Fewer Is Better)
Run 1: 8.23  (SE +/- 0.01, N = 3; trials: Min 8.21 / Avg 8.23 / Max 8.25; in-run MIN 8.18 / MAX 8.44)
Run 2: 8.24  (SE +/- 0.01, N = 3; trials: Min 8.23 / Avg 8.24 / Max 8.25; in-run MIN 8.19 / MAX 8.31)
Run 3: 8.24  (SE +/- 0.01, N = 3; trials: Min 8.22 / Avg 8.24 / Max 8.27; in-run MIN 8.18 / MAX 8.36)
1. (CXX) g++ options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -rdynamic -lgomp -lpthread

Ogg Audio Encoding

This test times how long it takes to encode a sample WAV file to Ogg format using the reference Xiph.org tools/libraries. Learn more via the OpenBenchmarking.org test page.
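
Timing a comparable encode by hand with the reference encoder (the WAV file name is a placeholder):

  time oggenc sample.wav -o sample.ogg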

Ogg Audio Encoding 1.3.4 - WAV To Ogg (Seconds, Fewer Is Better)
Run 1: 20.40  (SE +/- 0.02, N = 3; trials: Min 20.37 / Avg 20.4 / Max 20.43)
Run 2: 20.42  (SE +/- 0.02, N = 3; trials: Min 20.39 / Avg 20.42 / Max 20.46)
Run 3: 20.40  (SE +/- 0.04, N = 3; trials: Min 20.34 / Avg 20.4 / Max 20.46)
1. (CC) gcc options: -O2 -ffast-math -fsigned-char -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
Run 1: 5.71226  (SE +/- 0.00136, N = 3; trials: Min 5.71 / Avg 5.71 / Max 5.71; in-run MIN 5.57)
Run 2: 5.70694  (SE +/- 0.00499, N = 3; trials: Min 5.7 / Avg 5.71 / Max 5.71; in-run MIN 5.58)
Run 3: 5.70738  (SE +/- 0.00345, N = 3; trials: Min 5.7 / Avg 5.71 / Max 5.71; in-run MIN 5.57)
1. (CXX) g++ options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
Run 1: 7.17144  (SE +/- 0.00373, N = 3; trials: Min 7.17 / Avg 7.17 / Max 7.18; in-run MIN 7.14)
Run 2: 7.16483  (SE +/- 0.00530, N = 3; trials: Min 7.15 / Avg 7.16 / Max 7.17; in-run MIN 7.13)
Run 3: 7.16964  (SE +/- 0.00921, N = 3; trials: Min 7.15 / Avg 7.17 / Max 7.19; in-run MIN 7.13)
1. (CXX) g++ options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Opus Codec Encoding

Opus is an open audio codec: a lossy audio compression format designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.
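
The equivalent manual encode with opus-tools (the input file name is a placeholder):

  time opusenc sample.wav sample.opus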

Opus Codec Encoding 1.3.1 - WAV To Opus Encode (Seconds, Fewer Is Better)
Run 1: 8.990  (SE +/- 0.012, N = 5; trials: Min 8.97 / Avg 8.99 / Max 9.04)
Run 2: 8.994  (SE +/- 0.013, N = 5; trials: Min 8.98 / Avg 8.99 / Max 9.05)
Run 3: 8.986  (SE +/- 0.012, N = 5; trials: Min 8.97 / Avg 8.99 / Max 9.03)
1. (CXX) g++ options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -fvisibility=hidden -logg -lm

Node.js V8 Web Tooling Benchmark

This runs the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, such as Babel, TypeScript, and Babylon. This test profile measures the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.
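
Running the upstream benchmark by hand looks roughly like the following (the repository is the V8 project's; the dist/cli.js entry point follows the project README and may differ between versions):

  git clone https://github.com/v8/web-tooling-benchmark.git
  cd web-tooling-benchmark
  npm install
  node dist/cli.js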

Node.js V8 Web Tooling Benchmark (runs/s, More Is Better)
Run 1: 11.37  (SE +/- 0.06, N = 3; trials: Min 11.27 / Avg 11.37 / Max 11.47)
Run 2: 11.36  (SE +/- 0.03, N = 3; trials: Min 11.31 / Avg 11.36 / Max 11.39)
Run 3: 11.36  (SE +/- 0.03, N = 3; trials: Min 11.31 / Avg 11.36 / Max 11.41)
1. Nodejs v12.13.0

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
Run 1: 22.02  (SE +/- 0.01, N = 3; trials: Min 22 / Avg 22.02 / Max 22.03; in-run MIN 21.82)
Run 2: 22.00  (SE +/- 0.01, N = 3; trials: Min 21.99 / Avg 22 / Max 22.01; in-run MIN 21.85)
Run 3: 22.02  (SE +/- 0.01, N = 3; trials: Min 22.01 / Avg 22.02 / Max 22.04; in-run MIN 21.84)
1. (CXX) g++ options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12 - Settings: UASTC Level 2 + RDO Post-Processing (Seconds, Fewer Is Better)
Run 1: 795.73  (SE +/- 0.56, N = 3; trials: Min 794.67 / Avg 795.73 / Max 796.56)
Run 2: 795.49  (SE +/- 0.40, N = 3; trials: Min 794.69 / Avg 795.49 / Max 795.91)
Run 3: 796.13  (SE +/- 0.03, N = 3; trials: Min 796.08 / Avg 796.13 / Max 796.16)
1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
Run 1: 13.36  (SE +/- 0.02, N = 3; trials: Min 13.33 / Avg 13.36 / Max 13.39; in-run MIN 13.24)
Run 2: 13.37  (SE +/- 0.03, N = 3; trials: Min 13.31 / Avg 13.37 / Max 13.39; in-run MIN 13.2)
Run 3: 13.37  (SE +/- 0.01, N = 3; trials: Min 13.34 / Avg 13.37 / Max 13.39; in-run MIN 13.24)
1. (CXX) g++ options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: regnety_400m (ms, Fewer Is Better)
Run 1: 16.05  (SE +/- 0.01, N = 3; trials: Min 16.04 / Avg 16.05 / Max 16.08; in-run MIN 15.98 / MAX 16.15)
Run 2: 16.04  (SE +/- 0.01, N = 3; trials: Min 16.02 / Avg 16.04 / Max 16.06; in-run MIN 15.98 / MAX 16.19)
Run 3: 16.05  (SE +/- 0.01, N = 3; trials: Min 16.03 / Avg 16.05 / Max 16.07; in-run MIN 15.97 / MAX 17.2)
1. (CXX) g++ options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -rdynamic -lgomp -lpthread

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile exercises both compression and decompression. Learn more via the OpenBenchmarking.org test page.
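
A hand-run equivalent of one preset (astcenc 2.x command syntax; the file names and the 6x6 block size are placeholders):

  astcenc -cl input.png output.astc 6x6 -thorough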

ASTC Encoder 2.0 - Preset: Thorough (Seconds, Fewer Is Better)
Run 1: 66.84  (SE +/- 0.03, N = 3; trials: Min 66.78 / Avg 66.84 / Max 66.89)
Run 2: 66.82  (SE +/- 0.03, N = 3; trials: Min 66.79 / Avg 66.82 / Max 66.88)
Run 3: 66.80  (SE +/- 0.02, N = 3; trials: Min 66.78 / Avg 66.8 / Max 66.83)
1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

Timed HMMer Search

This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of the Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.
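
An equivalent manual search with HMMER 3 (both file names are placeholders for the Pfam profile database and the query sequence set):

  hmmsearch Pfam-A.hmm sevenless.fasta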

Timed HMMer Search 3.3.1 - Pfam Database Search (Seconds, Fewer Is Better)
Run 1: 115.98  (SE +/- 0.03, N = 3; trials: Min 115.95 / Avg 115.98 / Max 116.04)
Run 2: 116.00  (SE +/- 0.03, N = 3; trials: Min 115.94 / Avg 116 / Max 116.05)
Run 3: 116.04  (SE +/- 0.02, N = 3; trials: Min 116.02 / Avg 116.04 / Max 116.08)
1. (CC) gcc options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -pthread -lhmmer -leasel -lm

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12 - Settings: UASTC Level 2 (Seconds, Fewer Is Better)
Run 1: 68.92  (SE +/- 0.01, N = 3; trials: Min 68.9 / Avg 68.92 / Max 68.93)
Run 2: 68.94  (SE +/- 0.01, N = 3; trials: Min 68.93 / Avg 68.94 / Max 68.95)
Run 3: 68.93  (SE +/- 0.02, N = 3; trials: Min 68.9 / Avg 68.93 / Max 68.97)
1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

Basis Universal 1.12 - Settings: UASTC Level 3 (Seconds, Fewer Is Better)
Run 1: 135.59  (SE +/- 0.02, N = 3; trials: Min 135.55 / Avg 135.59 / Max 135.63)
Run 2: 135.62  (SE +/- 0.03, N = 3; trials: Min 135.56 / Avg 135.62 / Max 135.65)
Run 3: 135.63  (SE +/- 0.05, N = 3; trials: Min 135.54 / Avg 135.63 / Max 135.73)
1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.
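
Reproducing the timing by hand on an FFmpeg 4.2.2 source tree (default configure options here; the profile's exact flags may differ):

  ./configure
  time make -j$(nproc)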

Timed FFmpeg Compilation 4.2.2 - Time To Compile (Seconds, Fewer Is Better)
Run 1: 110.57  (SE +/- 0.02, N = 3; trials: Min 110.55 / Avg 110.57 / Max 110.62)
Run 2: 110.56  (SE +/- 0.04, N = 3; trials: Min 110.48 / Avg 110.56 / Max 110.61)
Run 3: 110.55  (SE +/- 0.05, N = 3; trials: Min 110.47 / Avg 110.55 / Max 110.65)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
Run 1: 10.63  (SE +/- 0.00, N = 3; trials: Min 10.63 / Avg 10.63 / Max 10.63; in-run MIN 10.5)
Run 2: 10.63  (SE +/- 0.00, N = 3; trials: Min 10.62 / Avg 10.63 / Max 10.64; in-run MIN 10.48)
Run 3: 10.63  (SE +/- 0.00, N = 3; trials: Min 10.63 / Avg 10.63 / Max 10.63; in-run MIN 10.48)
1. (CXX) g++ options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile exercises both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0 - Preset: Medium (Seconds, Fewer Is Better)
Run 1: 10.20  (SE +/- 0.00, N = 3; trials: Min 10.2 / Avg 10.2 / Max 10.21)
Run 2: 10.20  (SE +/- 0.00, N = 3; trials: Min 10.2 / Avg 10.2 / Max 10.2)
Run 3: 10.20  (SE +/- 0.00, N = 3; trials: Min 10.2 / Avg 10.2 / Max 10.2)
1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

ASTC Encoder 2.0 - Preset: Fast (Seconds, Fewer Is Better)
Run 1: 7.85  (SE +/- 0.01, N = 3; trials: Min 7.83 / Avg 7.85 / Max 7.86)
Run 2: 7.85  (SE +/- 0.01, N = 3; trials: Min 7.84 / Avg 7.85 / Max 7.86)
Run 3: 7.85  (SE +/- 0.01, N = 3; trials: Min 7.83 / Avg 7.85 / Max 7.87)
1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 Alpha - Speed: 1 (Frames Per Second, More Is Better)
Run 1: 0.366  (SE +/- 0.000, N = 3; trials: Min 0.37 / Avg 0.37 / Max 0.37)
Run 2: 0.366  (SE +/- 0.001, N = 3; trials: Min 0.37 / Avg 0.37 / Max 0.37)
Run 3: 0.366  (SE +/- 0.000, N = 3; trials: Min 0.37 / Avg 0.37 / Max 0.37)

Kvazaar

This is a test of Kvazaar, a CPU-based H.265 video encoder written in the C programming language and optimized in assembly. Kvazaar won the 2016 ACM Open-Source Software Competition and is developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.0 - Video Input: Bosphorus 4K - Video Preset: Very Fast (Frames Per Second, More Is Better)
Run 1: 6.95  (SE +/- 0.01, N = 3; trials: Min 6.94 / Avg 6.95 / Max 6.96)
Run 2: 6.95  (SE +/- 0.01, N = 3; trials: Min 6.94 / Avg 6.95 / Max 6.96)
Run 3: 6.95  (SE +/- 0.01, N = 3; trials: Min 6.94 / Avg 6.95 / Max 6.96)
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -lpthread -lm -lrt

Kvazaar 2.0 - Video Input: Bosphorus 1080p - Video Preset: Slow (Frames Per Second, More Is Better)
Run 1: 10.61  (SE +/- 0.01, N = 3; trials: Min 10.59 / Avg 10.61 / Max 10.62)
Run 2: 10.61  (SE +/- 0.01, N = 3; trials: Min 10.59 / Avg 10.61 / Max 10.63)
Run 3: 10.61  (SE +/- 0.01, N = 3; trials: Min 10.59 / Avg 10.61 / Max 10.63)
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -lpthread -lm -lrt

Kvazaar 2.0 - Video Input: Bosphorus 4K - Video Preset: Medium (Frames Per Second, More Is Better)
Run 1: 2.49  (SE +/- 0.00, N = 3; trials: Min 2.49 / Avg 2.49 / Max 2.49)
Run 2: 2.49  (SE +/- 0.00, N = 3; trials: Min 2.49 / Avg 2.49 / Max 2.49)
Run 3: 2.49  (SE +/- 0.00, N = 3; trials: Min 2.49 / Avg 2.49 / Max 2.49)
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -lpthread -lm -lrt

CLOMP

CLOMP is the C version of the Livermore OpenMP benchmark, developed to measure OpenMP overheads and other threading-related performance impacts in order to influence future system designs. This test profile configuration measures the OpenMP static-schedule speed-up across all available CPU cores, using the recommended test configuration. Learn more via the OpenBenchmarking.org test page.
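
To reproduce this with the same harness, the corresponding test profile can be run directly (the profile name clomp is an assumption; openbenchmarking.org lists the canonical name):

  phoronix-test-suite benchmark clomp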

CLOMP 1.2 - Static OMP Speedup (Speedup, More Is Better)
Run 1: 1.5  (SE +/- 0.00, N = 3; trials: Min 1.5 / Avg 1.5 / Max 1.5)
Run 2: 1.5  (SE +/- 0.00, N = 3; trials: Min 1.5 / Avg 1.5 / Max 1.5)
Run 3: 1.5  (SE +/- 0.00, N = 3; trials: Min 1.5 / Avg 1.5 / Max 1.5)
1. (CC) gcc options: -fopenmp -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -lm

Compile Bench

Compilebench tries to age a filesystem by simulating some of the disk IO common in creating, compiling, patching, stating, and reading kernel trees. It indirectly measures how well filesystems can maintain directory locality as the disk fills up and directories age. This test is set up to use the makej mode with 10 initial directories. Learn more via the OpenBenchmarking.org test page.
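
A comparable manual run is sketched below; the working directory is a placeholder, and the flag spellings (-D for the directory, -i for initial directories, --makej for the mode) are assumptions to verify against compilebench's usage text:

  compilebench -D /tmp/cbench -i 10 --makej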

Compile Bench 0.6 - Test: Read Compiled Tree (MB/s, More Is Better)
Run 1: 1293.47  (SE +/- 12.37, N = 3; trials: Min 1276.96 / Avg 1293.47 / Max 1317.69)
Run 2: 1298.04  (SE +/- 6.46, N = 3; trials: Min 1288.72 / Avg 1298.04 / Max 1310.46)
Run 3: 993.62  (SE +/- 248.56, N = 3; trials: Min 504.48 / Avg 993.62 / Max 1314.98)

Compile Bench 0.6 - Test: Initial Create (MB/s, More Is Better)
Run 1: 491.40  (SE +/- 16.31, N = 3; trials: Min 458.78 / Avg 491.4 / Max 507.8)
Run 2: 462.79  (SE +/- 37.60, N = 3; trials: Min 388.3 / Avg 462.79 / Max 508.98)
Run 3: 473.49  (SE +/- 34.14, N = 3; trials: Min 407.25 / Avg 473.49 / Max 520.96)