Xeon E5 1680 v3 Xmas

Intel Xeon E5-1680 v3 testing with an ASUS X99-A (3902 BIOS) and eVGA NVIDIA NVE7 1GB on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2012271-HA-XEONE516824
The tests in this result file fall within the following categories:

- Audio Encoding (4 tests)
- Bioinformatics (2 tests)
- BLAS (Basic Linear Algebra Sub-Routine) (2 tests)
- Chess Test Suite (4 tests)
- Timed Code Compilation (3 tests)
- C/C++ Compiler Tests (13 tests)
- CPU Massive (19 tests)
- Creator Workloads (17 tests)
- Database Test Suite (4 tests)
- Encoding (7 tests)
- Fortran Tests (3 tests)
- Game Development (2 tests)
- HPC - High Performance Computing (15 tests)
- Machine Learning (9 tests)
- Molecular Dynamics (3 tests)
- MPI Benchmarks (2 tests)
- Multi-Core (16 tests)
- NVIDIA GPU Compute (5 tests)
- Intel oneAPI (3 tests)
- OpenMPI Tests (2 tests)
- Programmer / Developer System Benchmarks (5 tests)
- Python (2 tests)
- Scientific Computing (6 tests)
- Server (6 tests)
- Server CPU Tests (7 tests)
- Single-Threaded (7 tests)
- Speech (2 tests)
- Telephony (2 tests)
- Texture Compression (2 tests)
- Video Encoding (3 tests)
- Common Workstation Benchmarks (2 tests)

Result runs:
Run 1: December 26 2020 (test duration: 9 Hours, 34 Minutes)
Run 2: December 26 2020 (test duration: 9 Hours, 50 Minutes)
Run 3: December 27 2020 (test duration: 9 Hours, 58 Minutes)
Average test duration: 9 Hours, 47 Minutes



System Details (common to all three runs):

Processor: Intel Xeon E5-1680 v3 @ 3.80GHz (8 Cores / 16 Threads)
Motherboard: ASUS X99-A (3902 BIOS)
Chipset: Intel Xeon E7 v3/Xeon
Memory: 16GB
Disk: PNY CS900 240GB
Graphics: eVGA NVIDIA NVE7 1GB
Audio: Realtek ALC1150
Monitor: G237HL
Network: Intel I218-V
OS: Ubuntu 20.04
Kernel: 5.4.0-47-generic (x86_64)
Desktop: GNOME Shell 3.36.4
Display Server: X Server 1.20.8
Display Driver: modesetting 1.20.8
OpenGL: 4.3 Mesa 20.0.8
Compiler: GCC 9.3.0
File-System: ext4
Screen Resolution: 1920x1080

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Disk Details: MQ-DEADLINE / errors=remount-ro,relatime,rw / Block Size: 4096
Processor Details: Scaling Governor: intel_pstate powersave - CPU Microcode: 0x43
Python Details: Python 3.8.5
Security Details: itlb_multihit: KVM: Vulnerable + l1tf: Mitigation of PTE Inversion + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full generic retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

[Result overview chart: relative performance of runs 1, 2, and 3 (spanning roughly 100% to 108%) across all tests: Redis, eSpeak-NG Speech Engine, Mlpack Benchmark, Build2, CLOMP, Node.js V8 Web Tooling Benchmark, Compile Bench, yquake2, FFTE, Numpy Benchmark, Sockperf, Stockfish, Kvazaar, Unpacking Firefox, Crafty, LeelaChessZero, KeyDB, Coremark, GraphicsMagick, Apache CouchDB, NCNN, LAMMPS Molecular Dynamics Simulator, Timed MAFFT Alignment, asmFish, Dolfyn, SQLite Speedtest, BYTE Unix Benchmark, RNNoise, Timed Eigen Compilation, Opus Codec Encoding, Timed HMMer Search, x265, AI Benchmark Alpha, oneDNN, PHPBench, ASTC Encoder, IndigoBench, Hierarchical INTegration, OpenVINO, GROMACS, rav1e, Monkey Audio Encoding, WavPack Audio Encoding, BRL-CAD, Embree, Ogg Audio Encoding, LZ4 Compression, Basis Universal, Timed FFmpeg Compilation, and Caffe.]
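The overview expresses each run's aggregate result relative to the others. A minimal sketch of how such a normalized comparison can be computed, assuming a geometric-mean aggregate (function names are illustrative, not from the Phoronix Test Suite):

```python
import math

def geometric_mean(values):
    """Geometric mean of a set of positive benchmark results."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

def normalize_to_baseline(per_run_means):
    """Express each run's aggregate relative to the slowest run (= 100%)."""
    base = min(per_run_means)
    return [100.0 * m / base for m in per_run_means]

# Hypothetical aggregate scores for two runs, not actual data from this file:
print(normalize_to_baseline([geometric_mean([10, 20]), geometric_mean([11, 21])]))
```

The geometric mean is the conventional choice for aggregating benchmarks because it is insensitive to the scale of any individual test.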

[Condensed result table: side-by-side values for runs 1, 2, and 3 across every test in this file; the individual results are presented in the per-test sections below.]

NCNN

NCNN is a high-performance neural network inference framework optimized for mobile and other platforms, developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: yolov4-tiny (ms, fewer is better):
Run 1: 29.91 (SE +/- 0.25, N = 3; samples 29.55 - 30.39; MIN 27.91 / MAX 34.95)
Run 2: 28.20 (SE +/- 0.07, N = 3; samples 28.07 - 28.31; MIN 27.88 / MAX 28.62)
Run 3: 28.92 (SE +/- 0.76, N = 3; samples 28.07 - 30.43; MIN 27.88 / MAX 35.07)
Build notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
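Each result in this file reports a standard error over N samples. That SE can be reproduced from the raw per-run samples; a minimal sketch with hypothetical sample values reconstructed from run 1's reported min/avg/max (whether PTS applies Bessel's correction is an assumption here):

```python
import math

def standard_error(samples):
    """Standard error of the mean: sample standard deviation / sqrt(N)."""
    n = len(samples)
    mean = sum(samples) / n
    # Bessel-corrected sample variance (assumption: matches the reported SE)
    variance = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return math.sqrt(variance / n)

# Hypothetical samples; the middle value is chosen to match the reported average:
samples = [29.55, 29.79, 30.39]
print(round(standard_error(samples), 2))  # prints 0.25
```

With those assumed samples the computed SE matches the 0.25 reported for run 1.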

Mlpack Benchmark

Mlpack provides benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

Mlpack Benchmark, Benchmark: scikit_ica (Seconds, fewer is better):
Run 1: 67.17 (SE +/- 0.62, N = 15; samples 63.52 - 72.30)
Run 2: 67.93 (SE +/- 0.78, N = 6; samples 65.82 - 70.93)
Run 3: 71.22 (SE +/- 1.19, N = 3; samples 68.87 - 72.69)

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9, Test: GET (Requests Per Second, more is better):
Run 1: 1834428.00 (SE +/- 22966.71, N = 3; samples 1805342.88 - 1879759.38)
Run 2: 1750688.54 (SE +/- 26369.12, N = 15; samples 1466932.62 - 1883480.12)
Run 3: 1852979.38 (SE +/- 28672.73, N = 3; samples 1802147.75 - 1901384.00)
Build notes: (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
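The requests-per-second figure is simply operations completed divided by wall-clock time. As an illustration of the metric only (a dict stands in for the server here; the real test drives Redis 6.0.9 with concurrent clients):

```python
import time

def get_requests_per_second(store, keys, repeats=200_000):
    """Issue repeated GET-style lookups and report the observed rate."""
    start = time.perf_counter()
    for i in range(repeats):
        _ = store[keys[i % len(keys)]]
    elapsed = time.perf_counter() - start
    return repeats / elapsed

# In-process stand-in for a key/value server (illustrative only):
store = {f"key:{i}": f"value:{i}" for i in range(1000)}
print(f"{get_requests_per_second(store, list(store)):.0f} requests/sec")
```

An in-process dict will report far higher rates than the ~1.8M req/s above, since the real benchmark includes protocol parsing and socket round-trips.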

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code and offers Cargo-like features. Learn more via the OpenBenchmarking.org test page.

Build2 0.13, Time To Compile (Seconds, fewer is better):
Run 1: 173.72 (SE +/- 2.07, N = 5; samples 168.63 - 178.93)
Run 2: 172.59 (SE +/- 2.37, N = 4; samples 169.04 - 179.33)
Run 3: 178.60 (SE +/- 1.61, N = 3; samples 175.61 - 181.15)

NCNN


NCNN 20201218, Target: CPU - Model: mnasnet (ms, fewer is better):
Run 1: 5.45 (SE +/- 0.07, N = 3; samples 5.35 - 5.59; MIN 5.28 / MAX 16.44)
Run 2: 5.63 (SE +/- 0.06, N = 3; samples 5.52 - 5.73; MIN 5.37 / MAX 17.72)
Run 3: 5.60 (SE +/- 0.07, N = 3; samples 5.53 - 5.73; MIN 5.39 / MAX 16.65)
Build notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
Run 1: 7.87695 (SE +/- 0.10075, N = 5; samples 7.62 - 8.12; MIN 7.57)
Run 2: 7.90354 (SE +/- 0.12917, N = 3; samples 7.74 - 8.16; MIN 7.65)
Run 3: 7.66474 (SE +/- 0.01396, N = 3; samples 7.64 - 7.69; MIN 7.58)
Build notes: (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

CLOMP

CLOMP is the C version of the Livermore OpenMP benchmark developed to measure OpenMP overheads and other performance impacts due to threading in order to influence future system designs. This particular test profile configuration is currently set to look at the OpenMP static schedule speed-up across all available CPU cores using the recommended test configuration. Learn more via the OpenBenchmarking.org test page.

CLOMP 1.2, Static OMP Speedup (Speedup, more is better):
Run 1: 9.9 (SE +/- 0.14, N = 4; samples 9.7 - 10.3)
Run 2: 10.2 (SE +/- 0.15, N = 4; samples 9.9 - 10.6)
Run 3: 10.2 (SE +/- 0.09, N = 3; samples 10.0 - 10.3)
Build notes: (CC) gcc options: -fopenmp -O3 -lm
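The speedup figure is the ratio of serial to threaded runtime; on this 8-core/16-thread CPU a result around 10x suggests SMT contributes beyond the physical core count. A sketch of the arithmetic (helper names are illustrative, not CLOMP's internals):

```python
def omp_speedup(serial_seconds, threaded_seconds):
    """Static-schedule speedup as CLOMP reports it: serial / threaded runtime."""
    return serial_seconds / threaded_seconds

def parallel_efficiency(speedup, threads):
    """Fraction of ideal linear scaling achieved with the given thread count."""
    return speedup / threads

# Hypothetical timings matching run 2's reported speedup of 10.2:
print(omp_speedup(102.0, 10.0))       # prints 10.2
print(parallel_efficiency(10.2, 16))  # about 0.64 across 16 hardware threads
```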

Node.js V8 Web Tooling Benchmark

Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, such as Babel, TypeScript, and Babylon. This test profile can test the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.

Node.js V8 Web Tooling Benchmark (runs/s, more is better):
Run 1: 9.77 (SE +/- 0.04, N = 3; samples 9.72 - 9.84)
Run 2: 9.71 (SE +/- 0.05, N = 3; samples 9.66 - 9.81)
Run 3: 9.50 (SE +/- 0.05, N = 3; samples 9.41 - 9.55)
Notes: Node.js v10.19.0

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33, Operation: Rotate (Iterations Per Minute, more is better):
Run 1: 598
Run 2: 612
Run 3: 613 (SE +/- 0.33, N = 3; samples 613 - 614)
Build notes: (CC) gcc options: -fopenmp -O2 -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

yquake2

This is a test of Yamagi Quake II. Yamagi Quake II is an enhanced client for id Software's Quake II with focus on offline and coop gameplay. Learn more via the OpenBenchmarking.org test page.

yquake2 7.45, Renderer: Software CPU - Resolution: 1920 x 1080 (Frames Per Second, more is better):
Run 1: 75.4 (SE +/- 0.98, N = 5; samples 72.3 - 77.7)
Run 2: 73.6 (SE +/- 0.35, N = 3; samples 73.1 - 74.3)
Run 3: 74.8 (SE +/- 0.26, N = 3; samples 74.3 - 75.2)
Build notes: (CC) gcc options: -lm -ldl -rdynamic -shared -lSDL2 -O2 -pipe -fomit-frame-pointer -std=gnu99 -fno-strict-aliasing -fwrapv -fvisibility=hidden -MMD -mfpmath=sse -fPIC

Compile Bench

Compilebench tries to age a filesystem by simulating some of the disk IO common in creating, compiling, patching, stating, and reading kernel trees. It indirectly measures how well filesystems can maintain directory locality as the disk fills up and directories age. This current test is set up to use the makej mode with 10 initial directories. Learn more via the OpenBenchmarking.org test page.

Compile Bench 0.6, Test: Read Compiled Tree (MB/s, more is better):
Run 1: 2243.77 (SE +/- 25.31, N = 3; samples 2197.96 - 2285.33)
Run 2: 2297.29 (SE +/- 12.51, N = 3; samples 2284.23 - 2322.31)
Run 3: 2286.43 (SE +/- 10.79, N = 3; samples 2268.32 - 2305.64)

OpenVINO

This is a test of Intel OpenVINO, a toolkit for deploying neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1, Model: Person Detection 0106 FP16 - Device: CPU (FPS, more is better):
Run 1: 1.30 (SE +/- 0.00, N = 3; samples 1.30 - 1.30)
Run 2: 1.27 (SE +/- 0.01, N = 3; samples 1.26 - 1.30)
Run 3: 1.30 (SE +/- 0.00, N = 3; samples 1.29 - 1.30)

NCNN


NCNN 20201218, Target: CPU - Model: blazeface (ms, fewer is better):
Run 1: 2.55 (SE +/- 0.02, N = 3; samples 2.50 - 2.57; MIN 2.48 / MAX 2.62)
Run 2: 2.55 (SE +/- 0.01, N = 3; samples 2.53 - 2.56; MIN 2.51 / MAX 2.59)
Run 3: 2.61 (SE +/- 0.05, N = 3; samples 2.56 - 2.71; MIN 2.55 / MAX 3.08)
Build notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

GraphicsMagick


GraphicsMagick 1.3.33, Operation: HWB Color Space (Iterations Per Minute, more is better):
Run 1: 565 (SE +/- 0.67, N = 3; samples 564 - 566)
Run 2: 575 (SE +/- 0.33, N = 3; samples 575 - 576)
Run 3: 578 (SE +/- 0.67, N = 3; samples 577 - 579)
Build notes: (CC) gcc options: -fopenmp -O2 -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.0, Video Input: Bosphorus 1080p - Video Preset: Very Fast (Frames Per Second, more is better):
Run 1: 36.78 (SE +/- 0.02, N = 3; samples 36.73 - 36.81)
Run 2: 36.78 (SE +/- 0.08, N = 3; samples 36.68 - 36.93)
Run 3: 36.02 (SE +/- 0.03, N = 3; samples 35.96 - 36.08)
Build notes: (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

NCNN


NCNN 20201218, Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better):
Run 1: 6.19 (SE +/- 0.03, N = 3; samples 6.13 - 6.24; MIN 6.00 / MAX 15.89)
Run 2: 6.23 (SE +/- 0.04, N = 3; samples 6.18 - 6.30; MIN 6.01 / MAX 16.48)
Run 3: 6.32 (SE +/- 0.08, N = 3; samples 6.24 - 6.47; MIN 6.04 / MAX 17.97)
Build notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance with x265. Learn more via the OpenBenchmarking.org test page.

x265 3.4, Video Input: Bosphorus 1080p (Frames Per Second, more is better):
Run 1: 35.51 (SE +/- 0.53, N = 3; samples 34.49 - 36.29)
Run 2: 36.22 (SE +/- 0.16, N = 3; samples 35.99 - 36.53)
Run 3: 35.60 (SE +/- 0.18, N = 3; samples 35.35 - 35.96)
Build notes: (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFTE

FFTE is a package by Daisuke Takahashi to compute Discrete Fourier Transforms of 1-, 2- and 3- dimensional sequences of length (2^p)*(3^q)*(5^r). Learn more via the OpenBenchmarking.org test page.

FFTE 7.0, N=256, 3D Complex FFT Routine (MFLOPS, more is better):
Run 1: 30109.61 (SE +/- 340.92, N = 3; samples 29747.65 - 30791.00)
Run 2: 30542.46 (SE +/- 209.55, N = 3; samples 30129.68 - 30811.66)
Run 3: 30708.45 (SE +/- 89.40, N = 3; samples 30612.86 - 30887.10)
Build notes: (F9X) gfortran options: -O3 -fomit-frame-pointer -fopenmp
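FFTE's restriction to sequence lengths of the form (2^p)*(3^q)*(5^r) can be checked by factoring out those radices; a small sketch (the helper name is illustrative, not part of FFTE):

```python
def is_ffte_length(n):
    """True if n factors entirely into the radices FFTE supports: 2, 3, and 5."""
    if n < 1:
        return False
    for radix in (2, 3, 5):
        while n % radix == 0:
            n //= radix
    return n == 1

print(is_ffte_length(256))  # True: 256 = 2^8, hence the N=256 transform above
print(is_ffte_length(7))    # False: 7 is not a supported radix
```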

Kvazaar


Kvazaar 2.0, Video Input: Bosphorus 4K - Video Preset: Medium (Frames Per Second, more is better):
Run 1: 3.62 (SE +/- 0.01, N = 3; samples 3.60 - 3.63)
Run 2: 3.61 (SE +/- 0.01, N = 3; samples 3.59 - 3.62)
Run 3: 3.55 (SE +/- 0.01, N = 3; samples 3.53 - 3.57)
Build notes: (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Numpy Benchmark

This is a test to obtain the general Numpy performance. Learn more via the OpenBenchmarking.org test page.

Numpy Benchmark (Score, more is better):
Run 1: 285.15 (SE +/- 0.88, N = 3; samples 283.60 - 286.63)
Run 2: 289.20 (SE +/- 0.38, N = 3; samples 288.44 - 289.67)
Run 3: 283.88 (SE +/- 0.75, N = 3; samples 282.93 - 285.36)

Kvazaar


Kvazaar 2.0, Video Input: Bosphorus 4K - Video Preset: Very Fast (Frames Per Second, more is better):
Run 1: 10.28 (SE +/- 0.01, N = 3; samples 10.27 - 10.29)
Run 2: 10.28 (SE +/- 0.01, N = 3; samples 10.26 - 10.30)
Run 3: 10.10 (SE +/- 0.00, N = 3; samples 10.09 - 10.10)
Build notes: (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Stockfish

This is a test of Stockfish, an advanced C++11 chess benchmark that can scale up to 128 CPU cores. Learn more via the OpenBenchmarking.org test page.

Stockfish 12, Total Time (Nodes Per Second, more is better):
Run 1: 14124404 (SE +/- 166711.74, N = 3; samples 13811064 - 14379775)
Run 2: 14141342 (SE +/- 71363.03, N = 3; samples 14003298 - 14241763)
Run 3: 13900575 (SE +/- 194479.46, N = 4; samples 13445217 - 14386472)
Build notes: (CXX) g++ options: -m64 -lpthread -fno-exceptions -std=c++17 -pedantic -O3 -msse -msse3 -mpopcnt -msse4.1 -mssse3 -msse2 -flto -flto=jobserver

Unpacking Firefox

This simple test profile measures how long it takes to extract the .tar.xz source package of the Mozilla Firefox Web Browser. Learn more via the OpenBenchmarking.org test page.

Unpacking Firefox 84.0, Extracting: firefox-84.0.source.tar.xz (Seconds, fewer is better):
Run 1: 23.16 (SE +/- 0.31, N = 4; samples 22.57 - 23.84)
Run 2: 22.84 (SE +/- 0.28, N = 6; samples 22.39 - 24.20)
Run 3: 23.22 (SE +/- 0.33, N = 4; samples 22.30 - 23.87)
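The profile times a plain .tar.xz extraction, which can be reproduced with Python's standard library; a minimal sketch of the timed operation (not the actual PTS implementation):

```python
import io
import tarfile
import time

def timed_extract(archive_bytes, dest_dir):
    """Extract a .tar.xz archive from memory and return the elapsed seconds."""
    start = time.perf_counter()
    with tarfile.open(fileobj=io.BytesIO(archive_bytes), mode="r:xz") as tar:
        tar.extractall(dest_dir)
    return time.perf_counter() - start
```

Against the multi-hundred-megabyte Firefox source tarball, both disk throughput and largely single-threaded xz decompression contribute to the result.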

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0, Preset: Fast (Seconds, fewer is better):
Run 1: 7.00 (SE +/- 0.01, N = 3; samples 6.99 - 7.01)
Run 2: 7.11 (SE +/- 0.07, N = 3; samples 7.00 - 7.23)
Run 3: 7.04 (SE +/- 0.04, N = 3; samples 7.00 - 7.11)
Build notes: (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

Kvazaar


Kvazaar 2.0, Video Input: Bosphorus 1080p - Video Preset: Medium (Frames Per Second, more is better):
Run 1: 15.65 (SE +/- 0.02, N = 3; samples 15.61 - 15.68)
Run 2: 15.67 (SE +/- 0.03, N = 3; samples 15.61 - 15.72)
Run 3: 15.43 (SE +/- 0.01, N = 3; samples 15.41 - 15.45)
Build notes: (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Kvazaar 2.0, Video Input: Bosphorus 1080p - Video Preset: Ultra Fast (Frames Per Second, more is better):
Run 1: 69.70 (SE +/- 0.11, N = 3; samples 69.51 - 69.89)
Run 2: 69.21 (SE +/- 0.21, N = 3; samples 68.93 - 69.62)
Run 3: 68.68 (SE +/- 0.22, N = 3; samples 68.44 - 69.11)
Build notes: (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Kvazaar 2.0, Video Input: Bosphorus 4K - Video Preset: Ultra Fast (Frames Per Second, more is better):
Run 1: 18.56 (SE +/- 0.03, N = 3; samples 18.51 - 18.62)
Run 2: 18.52 (SE +/- 0.02, N = 3; samples 18.50 - 18.56)
Run 3: 18.29 (SE +/- 0.02, N = 3; samples 18.25 - 18.32)
Build notes: (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (OpenBenchmarking.org; ms, Fewer Is Better)
Run 1: 3.50993 (SE +/- 0.00979, N = 3; Min: 3.49 / Avg: 3.51 / Max: 3.52; MIN: 3.44)
Run 2: 3.49680 (SE +/- 0.02742, N = 3; Min: 3.44 / Avg: 3.5 / Max: 3.53; MIN: 3.33)
Run 3: 3.46034 (SE +/- 0.01650, N = 3; Min: 3.44 / Avg: 3.46 / Max: 3.49; MIN: 3.32)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K video inputs for H.265 video encode performance. Learn more via the OpenBenchmarking.org test page.

x265 3.4 - Video Input: Bosphorus 4K (OpenBenchmarking.org; Frames Per Second, More Is Better)
Run 1: 8.85 (SE +/- 0.09, N = 8; Min: 8.52 / Avg: 8.85 / Max: 9.25)
Run 2: 8.75 (SE +/- 0.12, N = 4; Min: 8.45 / Avg: 8.75 / Max: 9.01)
Run 3: 8.87 (SE +/- 0.08, N = 3; Min: 8.72 / Avg: 8.87 / Max: 8.99)
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

Crafty

This is a performance test of Crafty, an advanced open-source chess engine. Learn more via the OpenBenchmarking.org test page.

Crafty 25.2 - Elapsed Time (OpenBenchmarking.org; Nodes Per Second, More Is Better)
Run 1: 7002968 (SE +/- 8272.92, N = 3; Min: 6988999 / Avg: 7002968.33 / Max: 7017632)
Run 2: 6926721 (SE +/- 19512.27, N = 3; Min: 6897715 / Avg: 6926720.67 / Max: 6963833)
Run 3: 7021270 (SE +/- 33788.32, N = 3; Min: 6985935 / Avg: 7021270.33 / Max: 7088823)
1. (CC) gcc options: -pthread -lstdc++ -fprofile-use -lm

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: mobilenet (OpenBenchmarking.org; ms, Fewer Is Better)
Run 1: 20.50 (SE +/- 0.15, N = 3; Min: 20.25 / Avg: 20.5 / Max: 20.77; MIN: 20.12 / MAX: 38.51)
Run 2: 20.23 (SE +/- 0.01, N = 3; Min: 20.22 / Avg: 20.23 / Max: 20.24; MIN: 20.1 / MAX: 22.33)
Run 3: 20.44 (SE +/- 0.20, N = 3; Min: 20.23 / Avg: 20.44 / Max: 20.84; MIN: 20.12 / MAX: 27.98)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (OpenBenchmarking.org; ms, Fewer Is Better)
Run 1: 6.87752 (SE +/- 0.01639, N = 3; Min: 6.85 / Avg: 6.88 / Max: 6.9; MIN: 6.77)
Run 2: 6.78791 (SE +/- 0.02409, N = 3; Min: 6.75 / Avg: 6.79 / Max: 6.83; MIN: 6.64)
Run 3: 6.81139 (SE +/- 0.03424, N = 3; Min: 6.74 / Avg: 6.81 / Max: 6.86; MIN: 6.68)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (OpenBenchmarking.org; ms, Fewer Is Better)
Run 1: 3.01373 (SE +/- 0.00785, N = 3; Min: 3 / Avg: 3.01 / Max: 3.02; MIN: 2.93)
Run 2: 2.98814 (SE +/- 0.00758, N = 3; Min: 2.97 / Avg: 2.99 / Max: 3; MIN: 2.91)
Run 3: 2.97447 (SE +/- 0.00748, N = 3; Min: 2.96 / Avg: 2.97 / Max: 2.99; MIN: 2.9)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: shufflenet-v2 (OpenBenchmarking.org; ms, Fewer Is Better)
Run 1: 6.88 (SE +/- 0.04, N = 3; Min: 6.81 / Avg: 6.88 / Max: 6.93; MIN: 6.77 / MAX: 17.73)
Run 2: 6.95 (SE +/- 0.04, N = 3; Min: 6.9 / Avg: 6.95 / Max: 7.02; MIN: 6.77 / MAX: 18.58)
Run 3: 6.97 (SE +/- 0.07, N = 3; Min: 6.83 / Avg: 6.97 / Max: 7.04; MIN: 6.8 / MAX: 18.91)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: regnety_400m (OpenBenchmarking.org; ms, Fewer Is Better)
Run 1: 20.68 (SE +/- 0.08, N = 3; Min: 20.53 / Avg: 20.68 / Max: 20.78; MIN: 20.41 / MAX: 37.64)
Run 2: 20.87 (SE +/- 0.07, N = 3; Min: 20.73 / Avg: 20.87 / Max: 20.96; MIN: 20.52 / MAX: 52.3)
Run 3: 20.95 (SE +/- 0.10, N = 3; Min: 20.75 / Avg: 20.95 / Max: 21.1; MIN: 20.63 / MAX: 34.71)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.26 - Backend: BLAS (OpenBenchmarking.org; Nodes Per Second, More Is Better)
Run 1: 1105 (SE +/- 8.29, N = 3; Min: 1089 / Avg: 1105.33 / Max: 1116)
Run 2: 1104 (SE +/- 4.93, N = 3; Min: 1095 / Avg: 1104 / Max: 1112)
Run 3: 1091 (SE +/- 8.14, N = 3; Min: 1076 / Avg: 1091 / Max: 1104)
1. (CXX) g++ options: -flto -pthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: googlenet (OpenBenchmarking.org; ms, Fewer Is Better)
Run 1: 15.39 (SE +/- 0.06, N = 3; Min: 15.33 / Avg: 15.39 / Max: 15.52; MIN: 15.11 / MAX: 19.35)
Run 2: 15.58 (SE +/- 0.07, N = 3; Min: 15.46 / Avg: 15.58 / Max: 15.69; MIN: 15.19 / MAX: 16.78)
Run 3: 15.47 (SE +/- 0.06, N = 3; Min: 15.36 / Avg: 15.47 / Max: 15.56; MIN: 15.2 / MAX: 16.43)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Resizing (OpenBenchmarking.org; Iterations Per Minute, More Is Better)
Run 1: 648 (SE +/- 1.15, N = 3; Min: 646 / Avg: 648 / Max: 650)
Run 2: 651 (SE +/- 1.15, N = 3; Min: 649 / Avg: 651 / Max: 653)
Run 3: 656 (SE +/- 0.67, N = 3; Min: 655 / Avg: 655.67 / Max: 657)
1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.
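The MB/s figures that follow are input bytes processed per second of (de)compression time. The measurement idea can be sketched with Python's stdlib zlib as a stand-in, since the LZ4 bindings are a third-party package (illustrative only; the actual test compresses an Ubuntu ISO with the lz4 CLI):

```python
import time
import zlib

def compression_speed(data: bytes, level: int) -> float:
    """Compress `data` at the given level and return throughput in MB/s."""
    start = time.perf_counter()
    compressed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    assert len(compressed) > 0
    # Throughput is measured against the uncompressed input size.
    return (len(data) / 1e6) / elapsed

# Highly compressible sample standing in for the ISO used by the test profile.
sample = b"phoronix-test-suite " * 100_000
print(f"level 1: {compression_speed(sample, 1):8.1f} MB/s")
print(f"level 9: {compression_speed(sample, 9):8.1f} MB/s")
```

As in the LZ4 results below, a faster (lower) compression level typically yields a much higher MB/s figure than the slower, denser level 9.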

LZ4 Compression 1.9.3 - Compression Level: 9 - Compression Speed (OpenBenchmarking.org; MB/s, More Is Better)
Run 1: 42.06 (SE +/- 0.01, N = 3; Min: 42.05 / Avg: 42.06 / Max: 42.08)
Run 2: 42.11 (SE +/- 0.02, N = 3; Min: 42.08 / Avg: 42.11 / Max: 42.13)
Run 3: 41.60 (SE +/- 0.56, N = 3; Min: 40.48 / Avg: 41.6 / Max: 42.16)
1. (CC) gcc options: -O3

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (OpenBenchmarking.org; ms, Fewer Is Better)
Run 1: 4.53012 (SE +/- 0.01478, N = 3; Min: 4.5 / Avg: 4.53 / Max: 4.55; MIN: 4.43)
Run 2: 4.58350 (SE +/- 0.03023, N = 3; Min: 4.53 / Avg: 4.58 / Max: 4.63; MIN: 4.48)
Run 3: 4.55778 (SE +/- 0.02200, N = 3; Min: 4.51 / Avg: 4.56 / Max: 4.59; MIN: 4.45)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.26 - Backend: Eigen (OpenBenchmarking.org; Nodes Per Second, More Is Better)
Run 1: 1046 (SE +/- 2.89, N = 3; Min: 1041 / Avg: 1046 / Max: 1051)
Run 2: 1050 (SE +/- 5.03, N = 3; Min: 1040 / Avg: 1050 / Max: 1056)
Run 3: 1038 (SE +/- 11.79, N = 3; Min: 1023 / Avg: 1037.67 / Max: 1061)
1. (CXX) g++ options: -flto -pthread

KeyDB

A benchmark of KeyDB as a multi-threaded fork of the Redis server. The KeyDB benchmark is conducted using memtier-benchmark. Learn more via the OpenBenchmarking.org test page.

KeyDB 6.0.16 (OpenBenchmarking.org; Ops/sec, More Is Better)
Run 1: 470540.93 (SE +/- 1362.98, N = 3; Min: 468194.9 / Avg: 470540.93 / Max: 472916.13)
Run 2: 475933.28 (SE +/- 111.39, N = 3; Min: 475711.77 / Avg: 475933.28 / Max: 476064.59)
Run 3: 475766.15 (SE +/- 531.93, N = 3; Min: 474818.91 / Avg: 475766.15 / Max: 476659.18)
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

Coremark 1.0 - CoreMark Size 666 - Iterations Per Second (OpenBenchmarking.org; Iterations/Sec, More Is Better)
Run 1: 270945.86 (SE +/- 1053.72, N = 3; Min: 269780.38 / Avg: 270945.86 / Max: 273049.19)
Run 2: 273924.34 (SE +/- 590.16, N = 3; Min: 272804.77 / Avg: 273924.34 / Max: 274807.85)
Run 3: 273946.97 (SE +/- 486.71, N = 3; Min: 273014.25 / Avg: 273946.97 / Max: 274654.54)
1. (CC) gcc options: -O2 -lrt" -lrt

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 1 - Compression Speed (OpenBenchmarking.org; MB/s, More Is Better)
Run 1: 6098.31 (SE +/- 1.20, N = 3; Min: 6096.7 / Avg: 6098.31 / Max: 6100.66)
Run 2: 6163.86 (SE +/- 5.37, N = 3; Min: 6153.52 / Avg: 6163.86 / Max: 6171.53)
Run 3: 6152.72 (SE +/- 8.03, N = 3; Min: 6142.4 / Avg: 6152.72 / Max: 6168.53)
1. (CC) gcc options: -O3

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (OpenBenchmarking.org; ms, Fewer Is Better)
Run 1: 2.55204 (SE +/- 0.01421, N = 3; Min: 2.53 / Avg: 2.55 / Max: 2.57; MIN: 2.5)
Run 2: 2.53457 (SE +/- 0.01518, N = 3; Min: 2.51 / Avg: 2.53 / Max: 2.56; MIN: 2.49)
Run 3: 2.52494 (SE +/- 0.01253, N = 3; Min: 2.5 / Avg: 2.52 / Max: 2.54; MIN: 2.48)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (OpenBenchmarking.org; ms, Fewer Is Better)
Run 1: 6.33479 (SE +/- 0.00863, N = 3; Min: 6.32 / Avg: 6.33 / Max: 6.35; MIN: 6.26)
Run 2: 6.40014 (SE +/- 0.01093, N = 3; Min: 6.38 / Avg: 6.4 / Max: 6.41; MIN: 6.3)
Run 3: 6.37270 (SE +/- 0.03639, N = 3; Min: 6.32 / Avg: 6.37 / Max: 6.44; MIN: 6.26)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU-v3-v3 - Model: mobilenet-v3 (OpenBenchmarking.org; ms, Fewer Is Better)
Run 1: 5.40 (SE +/- 0.06, N = 3; Min: 5.29 / Avg: 5.4 / Max: 5.5; MIN: 5.25 / MAX: 17.21)
Run 2: 5.42 (SE +/- 0.03, N = 3; Min: 5.36 / Avg: 5.42 / Max: 5.45; MIN: 5.27 / MAX: 16.73)
Run 3: 5.45 (SE +/- 0.04, N = 3; Min: 5.39 / Avg: 5.45 / Max: 5.52; MIN: 5.3 / MAX: 16.97)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 29Oct2020 - Model: Rhodopsin Protein (OpenBenchmarking.org; ns/day, More Is Better)
Run 1: 5.055 (SE +/- 0.005, N = 3; Min: 5.05 / Avg: 5.05 / Max: 5.06)
Run 2: 5.026 (SE +/- 0.072, N = 3; Min: 4.88 / Avg: 5.03 / Max: 5.1)
Run 3: 5.072 (SE +/- 0.007, N = 3; Min: 5.06 / Avg: 5.07 / Max: 5.09)
1. (CXX) g++ options: -O3 -pthread -lm

Timed MAFFT Alignment

This test performs an alignment of 100 pyruvate decarboxylase sequences. Learn more via the OpenBenchmarking.org test page.

Timed MAFFT Alignment 7.471 - Multiple Sequence Alignment - LSU RNA (OpenBenchmarking.org; Seconds, Fewer Is Better)
Run 1: 11.36 (SE +/- 0.03, N = 3; Min: 11.3 / Avg: 11.36 / Max: 11.42)
Run 2: 11.32 (SE +/- 0.04, N = 3; Min: 11.25 / Avg: 11.32 / Max: 11.38)
Run 3: 11.26 (SE +/- 0.05, N = 3; Min: 11.15 / Avg: 11.26 / Max: 11.31)
1. (CC) gcc options: -std=c99 -O3 -lm -lpthread

Apache CouchDB

This is a bulk insertion benchmark of Apache CouchDB. CouchDB is a document-oriented NoSQL database implemented in Erlang. Learn more via the OpenBenchmarking.org test page.
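CouchDB's bulk insertion path is its `_bulk_docs` endpoint, which accepts one JSON body carrying a whole batch of documents. A minimal sketch of building one such 100-document batch as used by this test's "Bulk Size: 100" configuration (the field names inside the sample documents are made up for illustration):

```python
import json

def make_bulk_payload(batch_size: int, round_no: int) -> str:
    """Build a JSON body suitable for POSTing to CouchDB's /{db}/_bulk_docs."""
    docs = [
        {"_id": f"doc-{round_no}-{i}", "round": round_no, "value": i}
        for i in range(batch_size)
    ]
    # _bulk_docs expects a top-level object with a "docs" array.
    return json.dumps({"docs": docs})

payload = make_bulk_payload(batch_size=100, round_no=1)
print(len(json.loads(payload)["docs"]))  # → 100
```

The benchmark repeats such batched POSTs (1000 inserts per round, 24 rounds) and reports the total elapsed seconds.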

Apache CouchDB 3.1.1 - Bulk Size: 100 - Inserts: 1000 - Rounds: 24 (OpenBenchmarking.org; Seconds, Fewer Is Better)
Run 1: 134.60 (SE +/- 0.34, N = 3; Min: 133.94 / Avg: 134.6 / Max: 135.03)
Run 2: 134.09 (SE +/- 0.62, N = 3; Min: 132.95 / Avg: 134.09 / Max: 135.1)
Run 3: 135.30 (SE +/- 2.21, N = 3; Min: 132.33 / Avg: 135.3 / Max: 139.62)
1. (CXX) g++ options: -std=c++14 -lmozjs-68 -lm -lerl_interface -lei -fPIC -MMD

asmFish

This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.

asmFish 2018-07-23 - 1024 Hash Memory, 26 Depth (OpenBenchmarking.org; Nodes/second, More Is Better)
Run 1: 21707918 (SE +/- 260870.39, N = 3; Min: 21197417 / Avg: 21707918 / Max: 22056451)
Run 2: 21577603 (SE +/- 222609.71, N = 3; Min: 21262613 / Avg: 21577603.33 / Max: 22007589)
Run 3: 21768662 (SE +/- 288716.85, N = 3; Min: 21421602 / Avg: 21768662.33 / Max: 22341860)

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer ISPC - Model: Asian Dragon (OpenBenchmarking.org; Frames Per Second, More Is Better)
Run 1: 12.03 (SE +/- 0.01, N = 3; Min: 12.01 / Avg: 12.03 / Max: 12.05; MIN: 11.97 / MAX: 12.2)
Run 2: 12.14 (SE +/- 0.11, N = 3; Min: 11.95 / Avg: 12.14 / Max: 12.33; MIN: 11.91 / MAX: 12.45)
Run 3: 12.10 (SE +/- 0.10, N = 3; Min: 11.91 / Avg: 12.1 / Max: 12.22; MIN: 11.87 / MAX: 12.37)

Embree 3.9.0 - Binary: Pathtracer ISPC - Model: Crown (OpenBenchmarking.org; Frames Per Second, More Is Better)
Run 1: 9.6807 (SE +/- 0.0256, N = 3; Min: 9.65 / Avg: 9.68 / Max: 9.73; MIN: 9.6 / MAX: 9.86)
Run 2: 9.7353 (SE +/- 0.0397, N = 3; Min: 9.67 / Avg: 9.74 / Max: 9.81; MIN: 9.62 / MAX: 9.95)
Run 3: 9.7629 (SE +/- 0.0193, N = 3; Min: 9.73 / Avg: 9.76 / Max: 9.8; MIN: 9.68 / MAX: 9.94)

Dolfyn

Dolfyn is a Computational Fluid Dynamics (CFD) code employing modern numerical simulation techniques. The Dolfyn test profile measures the execution time of the computational fluid dynamics demos bundled with Dolfyn. Learn more via the OpenBenchmarking.org test page.

Dolfyn 0.527 - Computational Fluid Dynamics (OpenBenchmarking.org; Seconds, Fewer Is Better)
Run 1: 23.21 (SE +/- 0.10, N = 3; Min: 23.02 / Avg: 23.21 / Max: 23.33)
Run 2: 23.40 (SE +/- 0.16, N = 3; Min: 23.12 / Avg: 23.4 / Max: 23.69)
Run 3: 23.32 (SE +/- 0.17, N = 3; Min: 23.1 / Avg: 23.32 / Max: 23.65)

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.
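The flavor of such a timed SQLite workload can be sketched with Python's stdlib sqlite3 module; the following is an illustrative micro-benchmark of batched inserts, not the speedtest1 program itself:

```python
import sqlite3
import time

def timed_inserts(n_rows: int) -> float:
    """Insert n_rows into an in-memory table inside one transaction; return seconds."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")
    start = time.perf_counter()
    with con:  # one transaction: batching avoids a commit (and fsync) per row
        con.executemany(
            "INSERT INTO t (v) VALUES (?)",
            ((f"row-{i}",) for i in range(n_rows)),
        )
    elapsed = time.perf_counter() - start
    assert con.execute("SELECT COUNT(*) FROM t").fetchone()[0] == n_rows
    con.close()
    return elapsed

print(f"{timed_inserts(1000):.4f} seconds for 1000 inserts")
```

speedtest1 itself runs a much larger mix of SELECT, UPDATE, and index operations, which is why the full run above takes on the order of 80 seconds.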

SQLite Speedtest 3.30 - Timed Time - Size 1,000 (OpenBenchmarking.org; Seconds, Fewer Is Better)
Run 1: 79.02 (SE +/- 0.33, N = 3; Min: 78.55 / Avg: 79.02 / Max: 79.66)
Run 2: 79.66 (SE +/- 0.41, N = 3; Min: 78.84 / Avg: 79.66 / Max: 80.14)
Run 3: 79.32 (SE +/- 0.06, N = 3; Min: 79.22 / Avg: 79.32 / Max: 79.42)
1. (CC) gcc options: -O2 -ldl -lz -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (OpenBenchmarking.org; ms, Fewer Is Better)
Run 1: 3.13341 (SE +/- 0.01227, N = 3; Min: 3.11 / Avg: 3.13 / Max: 3.15; MIN: 3.07)
Run 2: 3.15874 (SE +/- 0.01228, N = 3; Min: 3.14 / Avg: 3.16 / Max: 3.18; MIN: 3.1)
Run 3: 3.14475 (SE +/- 0.01108, N = 3; Min: 3.12 / Avg: 3.14 / Max: 3.16; MIN: 3.07)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

BYTE Unix Benchmark

This is a test of the BYTE Unix Benchmark. Learn more via the OpenBenchmarking.org test page.

BYTE Unix Benchmark 3.6 - Computational Test: Dhrystone 2 (OpenBenchmarking.org; LPS, More Is Better)
Run 1: 37390438.7 (SE +/- 165452.15, N = 3; Min: 37093823.7 / Avg: 37390438.73 / Max: 37665781.2)
Run 2: 37461981.3 (SE +/- 46692.87, N = 3; Min: 37388069 / Avg: 37461981.27 / Max: 37548368.2)
Run 3: 37177008.1 (SE +/- 122181.12, N = 3; Min: 36974070.1 / Avg: 37177008.13 / Max: 37396361.8)

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Swirl (OpenBenchmarking.org; Iterations Per Minute, More Is Better)
Run 1: 262 (SE +/- 0.58, N = 3; Min: 261 / Avg: 262 / Max: 263)
Run 2: 264 (SE +/- 0.33, N = 3; Min: 263 / Avg: 263.67 / Max: 264)
Run 3: 264 (SE +/- 0.58, N = 3; Min: 263 / Avg: 264 / Max: 265)
1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

RNNoise

RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26 minute long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.

RNNoise 2020-06-28 (OpenBenchmarking.org; Seconds, Fewer Is Better)
Run 1: 25.71 (SE +/- 0.02, N = 3; Min: 25.68 / Avg: 25.71 / Max: 25.74)
Run 2: 25.67 (SE +/- 0.14, N = 3; Min: 25.41 / Avg: 25.67 / Max: 25.9)
Run 3: 25.52 (SE +/- 0.06, N = 3; Min: 25.43 / Avg: 25.52 / Max: 25.63)
1. (CC) gcc options: -O2 -pedantic -fvisibility=hidden

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer - Model: Asian Dragon Obj (OpenBenchmarking.org; Frames Per Second, More Is Better)
Run 1: 9.4002 (SE +/- 0.0214, N = 3; Min: 9.37 / Avg: 9.4 / Max: 9.44; MIN: 9.33 / MAX: 9.52)
Run 2: 9.4191 (SE +/- 0.0255, N = 3; Min: 9.39 / Avg: 9.42 / Max: 9.47; MIN: 9.36 / MAX: 9.54)
Run 3: 9.3499 (SE +/- 0.0092, N = 3; Min: 9.33 / Avg: 9.35 / Max: 9.36; MIN: 9.3 / MAX: 9.45)

Embree 3.9.0 - Binary: Pathtracer - Model: Crown (OpenBenchmarking.org; Frames Per Second, More Is Better)
Run 1: 8.6804 (SE +/- 0.0232, N = 3; Min: 8.63 / Avg: 8.68 / Max: 8.71; MIN: 8.59 / MAX: 8.82)
Run 2: 8.6170 (SE +/- 0.0079, N = 3; Min: 8.6 / Avg: 8.62 / Max: 8.63; MIN: 8.56 / MAX: 8.76)
Run 3: 8.6675 (SE +/- 0.0281, N = 3; Min: 8.62 / Avg: 8.67 / Max: 8.71; MIN: 8.57 / MAX: 8.83)

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 - Acceleration: CPU - Scene: Supercar (OpenBenchmarking.org; M samples/s, More Is Better)
Run 1: 2.966 (SE +/- 0.004, N = 3; Min: 2.96 / Avg: 2.97 / Max: 2.97)
Run 2: 2.963 (SE +/- 0.004, N = 3; Min: 2.96 / Avg: 2.96 / Max: 2.97)
Run 3: 2.946 (SE +/- 0.002, N = 3; Min: 2.94 / Avg: 2.95 / Max: 2.95)

Timed Eigen Compilation

This test times how long it takes to build all Eigen examples. The Eigen examples are compiled serially. Eigen is a C++ template library for linear algebra. Learn more via the OpenBenchmarking.org test page.

Timed Eigen Compilation 3.3.9 - Time To Compile (OpenBenchmarking.org; Seconds, Fewer Is Better)
Run 1: 97.32 (SE +/- 0.02, N = 3; Min: 97.28 / Avg: 97.32 / Max: 97.36)
Run 2: 97.18 (SE +/- 0.05, N = 3; Min: 97.09 / Avg: 97.18 / Max: 97.24)
Run 3: 97.81 (SE +/- 0.03, N = 3; Min: 97.76 / Avg: 97.81 / Max: 97.87)

Opus Codec Encoding

Opus is an open audio codec. Opus is a lossy audio compression format designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.

Opus Codec Encoding 1.3.1 - WAV To Opus Encode (OpenBenchmarking.org; Seconds, Fewer Is Better)
Run 1: 9.678 (SE +/- 0.024, N = 5; Min: 9.65 / Avg: 9.68 / Max: 9.77)
Run 2: 9.643 (SE +/- 0.033, N = 5; Min: 9.59 / Avg: 9.64 / Max: 9.76)
Run 3: 9.616 (SE +/- 0.008, N = 5; Min: 9.6 / Avg: 9.62 / Max: 9.64)
1. (CXX) g++ options: -fvisibility=hidden -logg -lm

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 Alpha - Speed: 6 (OpenBenchmarking.org; Frames Per Second, More Is Better)
Run 1: 1.088 (SE +/- 0.001, N = 3; Min: 1.09 / Avg: 1.09 / Max: 1.09)
Run 2: 1.093 (SE +/- 0.006, N = 3; Min: 1.09 / Avg: 1.09 / Max: 1.1)
Run 3: 1.086 (SE +/- 0.003, N = 3; Min: 1.08 / Avg: 1.09 / Max: 1.09)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (OpenBenchmarking.org; ms, Fewer Is Better)
Run 1: 9.17727 (SE +/- 0.01399, N = 3; Min: 9.15 / Avg: 9.18 / Max: 9.2; MIN: 9.08)
Run 2: 9.23463 (SE +/- 0.02794, N = 3; Min: 9.18 / Avg: 9.23 / Max: 9.28; MIN: 9.09)
Run 3: 9.19921 (SE +/- 0.00829, N = 3; Min: 9.19 / Avg: 9.2 / Max: 9.22; MIN: 9.09)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12 - Settings: ETC1S (OpenBenchmarking.org; Seconds, Fewer Is Better)
Run 1: 65.82 (SE +/- 0.44, N = 3; Min: 64.96 / Avg: 65.82 / Max: 66.42)
Run 2: 66.20 (SE +/- 0.19, N = 3; Min: 65.83 / Avg: 66.2 / Max: 66.46)
Run 3: 66.22 (SE +/- 0.05, N = 3; Min: 66.17 / Avg: 66.22 / Max: 66.33)
1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (OpenBenchmarking.org; ms, Fewer Is Better)
Run 1: 12.55 (SE +/- 0.02, N = 3; Min: 12.52 / Avg: 12.55 / Max: 12.59; MIN: 12.41)
Run 2: 12.50 (SE +/- 0.03, N = 3; Min: 12.47 / Avg: 12.5 / Max: 12.55; MIN: 12.37)
Run 3: 12.48 (SE +/- 0.02, N = 3; Min: 12.46 / Avg: 12.48 / Max: 12.52; MIN: 12.34)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Noise-Gaussian (OpenBenchmarking.org; Iterations Per Minute, More Is Better)
Run 1: 175; Run 2: 176; Run 3: 176 (one run reported SE +/- 0.33, N = 3; Min: 175 / Avg: 175.67 / Max: 176)
1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (OpenBenchmarking.org; ms, Fewer Is Better)
Run 1: 10.82 (SE +/- 0.02, N = 3; Min: 10.79 / Avg: 10.82 / Max: 10.86; MIN: 10.52)
Run 2: 10.75 (SE +/- 0.03, N = 3; Min: 10.72 / Avg: 10.75 / Max: 10.81; MIN: 10.48)
Run 3: 10.81 (SE +/- 0.03, N = 3; Min: 10.78 / Avg: 10.81 / Max: 10.86; MIN: 10.47)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (OpenBenchmarking.org; ms, Fewer Is Better)
Run 1: 2480.10 (SE +/- 10.28, N = 3; Min: 2467.03 / Avg: 2480.1 / Max: 2500.39; MIN: 2465)
Run 2: 2469.55 (SE +/- 2.48, N = 3; Min: 2465.54 / Avg: 2469.55 / Max: 2474.07; MIN: 2462.7)
Run 3: 2466.09 (SE +/- 0.45, N = 3; Min: 2465.35 / Avg: 2466.09 / Max: 2466.9; MIN: 2461.63)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4, Acceleration: CPU - Scene: Bedroom (M samples/s, more is better)
  Run 1: 1.297  (SE +/- 0.001, N = 3)  Min: 1.30 / Avg: 1.30 / Max: 1.30
  Run 2: 1.291  (SE +/- 0.002, N = 3)  Min: 1.29 / Avg: 1.29 / Max: 1.29
  Run 3: 1.298  (SE +/- 0.002, N = 3)  Min: 1.30 / Avg: 1.30 / Max: 1.30

Timed HMMer Search

This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of the Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.

Timed HMMer Search 3.3.1, Pfam Database Search (Seconds, fewer is better)
  Run 1: 162.58  (SE +/- 0.21, N = 3)  Min: 162.19 / Avg: 162.57 / Max: 162.89
  Run 2: 163.45  (SE +/- 0.36, N = 3)  Min: 162.79 / Avg: 163.45 / Max: 164.05
  Run 3: 163.31  (SE +/- 0.13, N = 3)  Min: 163.09 / Avg: 163.31 / Max: 163.55
  1. (CC) gcc options: -O3 -pthread -lhmmer -leasel -lm

Sockperf

This is a network socket API performance benchmark. Learn more via the OpenBenchmarking.org test page.

Sockperf 3.4, Test: Latency Ping Pong (usec, fewer is better)
  Run 1: 4.499  (SE +/- 0.037, N = 25)  Min: 4.28 / Avg: 4.50 / Max: 4.99
  Run 2: 4.487  (SE +/- 0.032, N = 5)  Min: 4.38 / Avg: 4.49 / Max: 4.58
  Run 3: 4.510  (SE +/- 0.055, N = 6)  Min: 4.36 / Avg: 4.51 / Max: 4.74
  1. (CXX) g++ options: --param -O3 -rdynamic -ldl -lpthread
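For context on what a "Latency Ping Pong" test measures, below is a minimal, hypothetical sketch of a ping-pong latency loop over a local socketpair. It only illustrates the round-trip idea, with one-way latency taken as half the round-trip time; it is not Sockperf itself, and loopback numbers are not comparable to the results above.

```python
import socket
import time

# Hypothetical sketch of a ping-pong latency loop in the spirit of
# Sockperf's "Latency Ping Pong" test; a local socketpair stands in
# for a real network path.
a, b = socket.socketpair()
payload = b"x" * 14  # small fixed-size message
rounds = 1000

start = time.perf_counter()
for _ in range(rounds):
    a.sendall(payload)  # "ping"
    b.recv(64)
    b.sendall(payload)  # "pong"
    a.recv(64)
elapsed = time.perf_counter() - start

# Report one-way latency as half the measured round-trip time
usec_one_way = elapsed / rounds / 2 * 1e6
print(f"avg one-way latency: {usec_one_way:.3f} usec")
a.close()
b.close()
```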

NCNN

NCNN is a high-performance neural network inference framework, developed by Tencent, that is optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: resnet18 (ms, fewer is better)
  Run 1: 14.02  (SE +/- 0.00, N = 3)  Min: 14.02 / Avg: 14.02 / Max: 14.03  MIN: 13.88 / MAX: 14.41
  Run 2: 14.07  (SE +/- 0.02, N = 3)  Min: 14.05 / Avg: 14.07 / Max: 14.10  MIN: 13.87 / MAX: 19.55
  Run 3: 14.00  (SE +/- 0.01, N = 3)  Min: 13.98 / Avg: 14.00 / Max: 14.02  MIN: 13.86 / MAX: 15.29
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1, Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (FPS, more is better)
  Run 1: 5069.47  (SE +/- 13.21, N = 3)  Min: 5043.06 / Avg: 5069.47 / Max: 5083.03
  Run 2: 5044.33  (SE +/- 8.75, N = 3)  Min: 5027.32 / Avg: 5044.33 / Max: 5056.39
  Run 3: 5066.61  (SE +/- 7.38, N = 3)  Min: 5051.91 / Avg: 5066.61 / Max: 5075.09

NCNN

NCNN is a high-performance neural network inference framework, developed by Tencent, that is optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: vgg16 (ms, fewer is better)
  Run 1: 47.95  (SE +/- 0.06, N = 3)  Min: 47.83 / Avg: 47.95 / Max: 48.04  MIN: 47.64 / MAX: 50.16
  Run 2: 48.14  (SE +/- 0.01, N = 3)  Min: 48.13 / Avg: 48.14 / Max: 48.17  MIN: 47.78 / MAX: 64.77
  Run 3: 48.18  (SE +/- 0.07, N = 3)  Min: 48.04 / Avg: 48.18 / Max: 48.29  MIN: 47.69 / MAX: 57.05
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2, Device Training Score (Score, more is better)
  Run 1: 1060  /  Run 2: 1060  /  Run 3: 1065

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 Alpha, Speed: 10 (Frames Per Second, more is better)
  Run 1: 2.415  (SE +/- 0.001, N = 3)  Min: 2.41 / Avg: 2.41 / Max: 2.42
  Run 2: 2.425  (SE +/- 0.004, N = 3)  Min: 2.42 / Avg: 2.42 / Max: 2.43
  Run 3: 2.419  (SE +/- 0.007, N = 3)  Min: 2.40 / Avg: 2.42 / Max: 2.43

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0, Binary: Pathtracer - Model: Asian Dragon (Frames Per Second, more is better)
  Run 1: 10.01  (SE +/- 0.02, N = 3)  Min: 10.00 / Avg: 10.01 / Max: 10.05  MIN: 9.97 / MAX: 10.12
  Run 2: 10.05  (SE +/- 0.05, N = 3)  Min: 9.97 / Avg: 10.05 / Max: 10.13  MIN: 9.94 / MAX: 10.21
  Run 3: 10.05  (SE +/- 0.08, N = 3)  Min: 9.89 / Avg: 10.05 / Max: 10.13  MIN: 9.86 / MAX: 10.24

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2, Device AI Score (Score, more is better)
  Run 1: 2079  /  Run 2: 2082  /  Run 3: 2087
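A small consistency check on this file's AI Benchmark Alpha numbers: each run's Device AI Score equals the sum of its Device Training and Device Inference scores (both reported elsewhere in this file), so the composite appears to be a simple sum here.

```python
# Observation about the numbers in this result file, not a claim
# about AI Benchmark Alpha's internal scoring formula.
training = [1060, 1060, 1065]   # runs 1-3, Device Training Score
inference = [1019, 1022, 1022]  # runs 1-3, Device Inference Score
ai_score = [t + i for t, i in zip(training, inference)]
print(ai_score)  # -> [2079, 2082, 2087], matching the Device AI Scores
```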

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to benchmark various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

PHPBench 0.8.1, PHP Benchmark Suite (Score, more is better)
  Run 1: 626480  (SE +/- 676.23, N = 3)  Min: 625406 / Avg: 626480.33 / Max: 627729
  Run 2: 626218  (SE +/- 872.14, N = 3)  Min: 624749 / Avg: 626218 / Max: 627767
  Run 3: 624167  (SE +/- 1083.96, N = 3)  Min: 622404 / Avg: 624166.67 / Max: 626141

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 Alpha, Speed: 5 (Frames Per Second, more is better)
  Run 1: 0.815  (SE +/- 0.002, N = 3)  Min: 0.81 / Avg: 0.81 / Max: 0.82
  Run 2: 0.816  (SE +/- 0.005, N = 3)  Min: 0.81 / Avg: 0.82 / Max: 0.82
  Run 3: 0.818  (SE +/- 0.005, N = 3)  Min: 0.81 / Avg: 0.82 / Max: 0.82

NCNN

NCNN is a high-performance neural network inference framework, developed by Tencent, that is optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: resnet50 (ms, fewer is better)
  Run 1: 27.37  (SE +/- 0.05, N = 3)  Min: 27.27 / Avg: 27.37 / Max: 27.45  MIN: 26.85 / MAX: 35.85
  Run 2: 27.46  (SE +/- 0.03, N = 3)  Min: 27.42 / Avg: 27.46 / Max: 27.52  MIN: 27.01 / MAX: 44.87
  Run 3: 27.47  (SE +/- 0.03, N = 3)  Min: 27.42 / Avg: 27.47 / Max: 27.51  MIN: 27.05 / MAX: 45.32
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: squeezenet_ssd (ms, fewer is better)
  Run 1: 24.81  (SE +/- 0.11, N = 3)  Min: 24.70 / Avg: 24.81 / Max: 25.02  MIN: 24.62 / MAX: 48.23
  Run 2: 24.72  (SE +/- 0.01, N = 3)  Min: 24.71 / Avg: 24.72 / Max: 24.74  MIN: 24.64 / MAX: 26.71
  Run 3: 24.73  (SE +/- 0.02, N = 3)  Min: 24.70 / Avg: 24.73 / Max: 24.76  MIN: 24.64 / MAX: 27.27
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Caffe

This is a benchmark of the Caffe deep learning framework; it currently supports the AlexNet and GoogLeNet models and execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

Caffe 2020-02-13, Model: AlexNet - Acceleration: CPU - Iterations: 200 (Milli-Seconds, fewer is better)
  Run 1: 118086  (SE +/- 18.73, N = 3)  Min: 118050 / Avg: 118086 / Max: 118113
  Run 2: 118507  (SE +/- 244.34, N = 3)  Min: 118222 / Avg: 118506.67 / Max: 118993
  Run 3: 118149  (SE +/- 86.87, N = 3)  Min: 118019 / Avg: 118149.33 / Max: 118314
  1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12, Settings: UASTC Level 0 (Seconds, fewer is better)
  Run 1: 10.24  (SE +/- 0.05, N = 3)  Min: 10.18 / Avg: 10.24 / Max: 10.34
  Run 2: 10.20  (SE +/- 0.01, N = 3)  Min: 10.19 / Avg: 10.20 / Max: 10.22
  Run 3: 10.20  (SE +/- 0.00, N = 3)  Min: 10.20 / Avg: 10.20 / Max: 10.21
  1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1, Model: Age Gender Recognition Retail 0013 FP32 - Device: CPU (FPS, more is better)
  Run 1: 5077.11  (SE +/- 9.42, N = 3)  Min: 5059.35 / Avg: 5077.11 / Max: 5091.42
  Run 2: 5066.12  (SE +/- 9.51, N = 3)  Min: 5048.76 / Avg: 5066.12 / Max: 5081.51
  Run 3: 5060.17  (SE +/- 7.20, N = 3)  Min: 5052.90 / Avg: 5060.17 / Max: 5074.56

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3, Compression Level: 9 - Decompression Speed (MB/s, more is better)
  Run 1: 7283.6  (SE +/- 5.97, N = 3)  Min: 7272.2 / Avg: 7283.6 / Max: 7292.4
  Run 2: 7259.4  (SE +/- 4.05, N = 3)  Min: 7251.3 / Avg: 7259.4 / Max: 7263.5
  Run 3: 7278.7  (SE +/- 14.52, N = 3)  Min: 7253.5 / Avg: 7278.67 / Max: 7303.8
  1. (CC) gcc options: -O3
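The LZ4 results above report throughput as data size over elapsed time. Below is a minimal sketch of that measurement approach, using the standard library's zlib as a stand-in codec since LZ4 bindings are not assumed to be installed; absolute numbers are therefore not comparable to the LZ4 figures.

```python
import time
import zlib

# Sketch of the throughput measurement idea: time the compression and
# decompression of a fixed buffer and report MB/s. zlib stands in for
# LZ4 here because it ships with Python.
data = b"phoronix test suite " * 50000  # ~1 MB of compressible input

t0 = time.perf_counter()
compressed = zlib.compress(data, 6)
compress_secs = time.perf_counter() - t0

t0 = time.perf_counter()
restored = zlib.decompress(compressed)
decompress_secs = time.perf_counter() - t0

assert restored == data  # round-trip sanity check
print(f"compress:   {len(data) / compress_secs / 1e6:.1f} MB/s")
print(f"decompress: {len(data) / decompress_secs / 1e6:.1f} MB/s")
```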

Mlpack Benchmark

Mlpack provides benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

Mlpack Benchmark, Benchmark: scikit_linearridgeregression (Seconds, fewer is better)
  Run 1: 3.18  (SE +/- 0.01, N = 3)  Min: 3.16 / Avg: 3.18 / Max: 3.20
  Run 2: 3.19  (SE +/- 0.01, N = 3)  Min: 3.18 / Avg: 3.19 / Max: 3.21
  Run 3: 3.19  (SE +/- 0.01, N = 3)  Min: 3.16 / Avg: 3.19 / Max: 3.20

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2, Device Inference Score (Score, more is better)
  Run 1: 1019  /  Run 2: 1022  /  Run 3: 1022

Hierarchical INTegration

This test runs the U.S. Department of Energy's Ames Laboratory Hierarchical INTegration (HINT) benchmark. Learn more via the OpenBenchmarking.org test page.

Hierarchical INTegration 1.0, Test: FLOAT (QUIPs, more is better)
  Run 1: 348188152.87  (SE +/- 671953.04, N = 3)  Min: 347092025.64 / Avg: 348188152.87 / Max: 349409597.14
  Run 2: 349172823.11  (SE +/- 348495.09, N = 3)  Min: 348518551.75 / Avg: 349172823.11 / Max: 349708029.01
  Run 3: 348871675.90  (SE +/- 110846.40, N = 3)  Min: 348693153.28 / Avg: 348871675.90 / Max: 349074770.54
  1. (CC) gcc options: -O3 -march=native -lm

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported by benchdnn. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  Run 1: 4529.12  (SE +/- 0.35, N = 3)  Min: 4528.53 / Avg: 4529.12 / Max: 4529.74  MIN: 4522.47
  Run 2: 4523.86  (SE +/- 3.64, N = 3)  Min: 4518.42 / Avg: 4523.86 / Max: 4530.78  MIN: 4513.41
  Run 3: 4536.47  (SE +/- 7.29, N = 3)  Min: 4526.87 / Avg: 4536.47 / Max: 4550.77  MIN: 4520.13
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1, Model: Person Detection 0106 FP16 - Device: CPU (ms, fewer is better)
  Run 1: 3070.21  (SE +/- 2.92, N = 3)  Min: 3064.47 / Avg: 3070.21 / Max: 3073.99
  Run 2: 3078.70  (SE +/- 2.89, N = 3)  Min: 3072.98 / Avg: 3078.70 / Max: 3082.27
  Run 3: 3073.95  (SE +/- 0.75, N = 3)  Min: 3072.46 / Avg: 3073.95 / Max: 3074.82

NCNN

NCNN is a high-performance neural network inference framework, developed by Tencent, that is optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: alexnet (ms, fewer is better)
  Run 1: 11.16  (SE +/- 0.02, N = 3)  Min: 11.14 / Avg: 11.16 / Max: 11.19  MIN: 11.09 / MAX: 11.53
  Run 2: 11.19  (SE +/- 0.01, N = 3)  Min: 11.18 / Avg: 11.19 / Max: 11.20  MIN: 11.11 / MAX: 11.28
  Run 3: 11.19  (SE +/- 0.03, N = 3)  Min: 11.16 / Avg: 11.19 / Max: 11.25  MIN: 11.11 / MAX: 20.33
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

GROMACS

This is a test of the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package on the CPU with the water_GMX50 data set. Learn more via the OpenBenchmarking.org test page.

GROMACS 2020.3, Water Benchmark (Ns Per Day, more is better)
  Run 1: 0.754  (SE +/- 0.002, N = 3)  Min: 0.75 / Avg: 0.75 / Max: 0.76
  Run 2: 0.753  (SE +/- 0.002, N = 3)  Min: 0.75 / Avg: 0.75 / Max: 0.76
  Run 3: 0.755  (SE +/- 0.002, N = 3)  Min: 0.75 / Avg: 0.76 / Max: 0.76
  1. (CXX) g++ options: -O3 -pthread -lrt -lpthread -lm
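The Ns Per Day figure can be inverted to get the wall-clock cost per simulated nanosecond of the water system; using run 1's value above:

```python
# Converting run 1's Ns Per Day figure into wall-clock hours per
# simulated nanosecond.
ns_per_day = 0.754
hours_per_ns = 24 / ns_per_day
print(f"{hours_per_ns:.1f} hours per simulated ns")  # -> 31.8
```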

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12, Settings: UASTC Level 2 (Seconds, fewer is better)
  Run 1: 47.38  (SE +/- 0.02, N = 3)  Min: 47.36 / Avg: 47.38 / Max: 47.41
  Run 2: 47.42  (SE +/- 0.01, N = 3)  Min: 47.40 / Avg: 47.42 / Max: 47.44
  Run 3: 47.51  (SE +/- 0.07, N = 3)  Min: 47.38 / Avg: 47.51 / Max: 47.61
  1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported by benchdnn. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better)
  Run 1: 2474.88  (SE +/- 2.75, N = 3)  Min: 2469.83 / Avg: 2474.88 / Max: 2479.30  MIN: 2467.18
  Run 2: 2468.97  (SE +/- 3.28, N = 3)  Min: 2462.95 / Avg: 2468.97 / Max: 2474.24  MIN: 2460.79
  Run 3: 2468.80  (SE +/- 1.82, N = 3)  Min: 2465.80 / Avg: 2468.80 / Max: 2472.09  MIN: 2463.37
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0, Binary: Pathtracer ISPC - Model: Asian Dragon Obj (Frames Per Second, more is better)
  Run 1: 10.81  (SE +/- 0.04, N = 3)  Min: 10.75 / Avg: 10.81 / Max: 10.89  MIN: 10.70 / MAX: 11.01
  Run 2: 10.80  (SE +/- 0.03, N = 3)  Min: 10.76 / Avg: 10.80 / Max: 10.86  MIN: 10.71 / MAX: 11.01
  Run 3: 10.79  (SE +/- 0.02, N = 3)  Min: 10.74 / Avg: 10.79 / Max: 10.82  MIN: 10.70 / MAX: 10.96

Monkey Audio Encoding

This test times how long it takes to encode a sample WAV file to Monkey's Audio APE format. Learn more via the OpenBenchmarking.org test page.

Monkey Audio Encoding 3.99.6, WAV To APE (Seconds, fewer is better)
  Run 1: 14.16  (SE +/- 0.04, N = 5)  Min: 14.08 / Avg: 14.16 / Max: 14.30
  Run 2: 14.17  (SE +/- 0.06, N = 5)  Min: 14.07 / Avg: 14.17 / Max: 14.41
  Run 3: 14.14  (SE +/- 0.05, N = 5)  Min: 14.06 / Avg: 14.14 / Max: 14.34
  1. (CXX) g++ options: -O3 -pedantic -rdynamic -lrt

WavPack Audio Encoding

This test times how long it takes to encode a sample WAV file to WavPack format with very high quality settings. Learn more via the OpenBenchmarking.org test page.

WavPack Audio Encoding 5.3, WAV To WavPack (Seconds, fewer is better)
  Run 1: 15.98  (SE +/- 0.04, N = 5)  Min: 15.92 / Avg: 15.98 / Max: 16.14
  Run 2: 15.98  (SE +/- 0.05, N = 5)  Min: 15.93 / Avg: 15.98 / Max: 16.18
  Run 3: 16.01  (SE +/- 0.05, N = 5)  Min: 15.92 / Avg: 16.01 / Max: 16.18
  1. (CXX) g++ options: -rdynamic

Sockperf

This is a network socket API performance benchmark. Learn more via the OpenBenchmarking.org test page.

Sockperf 3.4, Test: Throughput (Messages Per Second, more is better)
  Run 1: 265543  (SE +/- 2778.72, N = 25)  Min: 243684 / Avg: 265543 / Max: 291111
  Run 2: 266097  (SE +/- 2697.01, N = 25)  Min: 252080 / Avg: 266096.8 / Max: 302074
  Run 3: 265622  (SE +/- 2962.76, N = 5)  Min: 257661 / Avg: 265621.6 / Max: 275688
  1. (CXX) g++ options: --param -O3 -rdynamic -ldl -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported by benchdnn. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  Run 1: 4533.01  (SE +/- 2.65, N = 3)  Min: 4528.09 / Avg: 4533.01 / Max: 4537.17  MIN: 4522.75
  Run 2: 4523.92  (SE +/- 1.37, N = 3)  Min: 4522.47 / Avg: 4523.92 / Max: 4526.65  MIN: 4517.94
  Run 3: 4526.33  (SE +/- 3.38, N = 3)  Min: 4522.76 / Avg: 4526.33 / Max: 4533.08  MIN: 4516.82
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

BRL-CAD

BRL-CAD 7.30.8 is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.30.8, VGR Performance Metric (more is better)
  Run 1: 88817  /  Run 2: 88995  /  Run 3: 88885
  1. (CXX) g++ options: -std=c++11 -pipe -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -rdynamic -lSM -lICE -lXi -lGLU -lGL -lGLdispatch -lX11 -lXext -lXrender -lpthread -ldl -luuid -lm

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported by benchdnn. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, fewer is better)
  Run 1: 4531.11  (SE +/- 1.69, N = 3)  Min: 4527.89 / Avg: 4531.11 / Max: 4533.60  MIN: 4524.03
  Run 2: 4525.42  (SE +/- 1.21, N = 3)  Min: 4523.65 / Avg: 4525.42 / Max: 4527.73  MIN: 4516.35
  Run 3: 4533.64  (SE +/- 2.17, N = 3)  Min: 4529.34 / Avg: 4533.64 / Max: 4536.22  MIN: 4525.25
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0, Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  Run 1: 2472.67  (SE +/- 0.91, N = 3)  Min: 2471.40 / Avg: 2472.67 / Max: 2474.44  MIN: 2467.60
  Run 2: 2470.37  (SE +/- 0.84, N = 3)  Min: 2468.90 / Avg: 2470.37 / Max: 2471.81  MIN: 2466.62
  Run 3: 2468.25  (SE +/- 1.99, N = 3)  Min: 2464.94 / Avg: 2468.25 / Max: 2471.81  MIN: 2462.29
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1, Model: Person Detection 0106 FP32 - Device: CPU (ms, fewer is better)
  Run 1: 3074.79  (SE +/- 1.72, N = 3)  Min: 3072.72 / Avg: 3074.79 / Max: 3078.20
  Run 2: 3069.32  (SE +/- 1.16, N = 3)  Min: 3067.36 / Avg: 3069.32 / Max: 3071.37
  Run 3: 3073.87  (SE +/- 5.64, N = 3)  Min: 3062.60 / Avg: 3073.87 / Max: 3079.61

Caffe

This is a benchmark of the Caffe deep learning framework; it currently supports the AlexNet and GoogLeNet models and execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

Caffe 2020-02-13, Model: AlexNet - Acceleration: CPU - Iterations: 100 (Milli-Seconds, fewer is better)
  Run 1: 59307  (SE +/- 170.12, N = 3)  Min: 59080 / Avg: 59307 / Max: 59640
  Run 2: 59205  (SE +/- 51.35, N = 3)  Min: 59129 / Avg: 59205.33 / Max: 59303
  Run 3: 59221  (SE +/- 27.33, N = 3)  Min: 59166 / Avg: 59220.67 / Max: 59248
  1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas
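Comparing run 1 here with run 1 of the 200-iteration AlexNet result earlier in this file shows the total time scaling nearly linearly with the iteration count:

```python
# Per-iteration cost derived from run 1 of the two AlexNet results in
# this file (about 118.1 s for 200 iterations, 59.3 s for 100).
ms_200_iters = 118086  # Iterations: 200, run 1
ms_100_iters = 59307   # Iterations: 100, run 1

per_iter_200 = ms_200_iters / 200
per_iter_100 = ms_100_iters / 100
# -> 590.4 ms/iter vs 593.1 ms/iter: near-linear scaling
print(f"{per_iter_200:.1f} ms/iter vs {per_iter_100:.1f} ms/iter")
```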

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3, Compression Level: 3 - Compression Speed (MB/s, more is better)
  Run 1: 42.97  (SE +/- 0.02, N = 3)  Min: 42.94 / Avg: 42.97 / Max: 42.99
  Run 2: 42.99  (SE +/- 0.01, N = 3)  Min: 42.98 / Avg: 42.99 / Max: 43.00
  Run 3: 43.04  (SE +/- 0.01, N = 3)  Min: 43.01 / Avg: 43.04 / Max: 43.06
  1. (CC) gcc options: -O3

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported by benchdnn. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  Run 1: 6.20319  (SE +/- 0.01219, N = 3)  Min: 6.18 / Avg: 6.20 / Max: 6.22  MIN: 6.12
  Run 2: 6.21295  (SE +/- 0.02072, N = 3)  Min: 6.18 / Avg: 6.21 / Max: 6.25  MIN: 6.11
  Run 3: 6.21070  (SE +/- 0.01601, N = 3)  Min: 6.19 / Avg: 6.21 / Max: 6.24  MIN: 6.10
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0, Preset: Medium (Seconds, fewer is better)
  Run 1: 6.59  (SE +/- 0.00, N = 3)  Min: 6.58 / Avg: 6.59 / Max: 6.59
  Run 2: 6.58  (SE +/- 0.00, N = 3)  Min: 6.58 / Avg: 6.58 / Max: 6.59
  Run 3: 6.59  (SE +/- 0.00, N = 3)  Min: 6.58 / Avg: 6.59 / Max: 6.59
  1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3, Compression Level: 3 - Decompression Speed (MB/s, more is better)
  Run 1: 7267.3  (SE +/- 4.72, N = 3)  Min: 7260.8 / Avg: 7267.33 / Max: 7276.5
  Run 2: 7256.9  (SE +/- 6.48, N = 3)  Min: 7244.0 / Avg: 7256.93 / Max: 7264.1
  Run 3: 7267.9  (SE +/- 4.32, N = 3)  Min: 7260.8 / Avg: 7267.9 / Max: 7275.7
  1. (CC) gcc options: -O3

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

Timed FFmpeg Compilation 4.2.2, Time To Compile (Seconds, fewer is better)
  Run 1: 81.84  (SE +/- 0.23, N = 3)  Min: 81.48 / Avg: 81.84 / Max: 82.27
  Run 2: 81.86  (SE +/- 0.09, N = 3)  Min: 81.69 / Avg: 81.86 / Max: 81.96
  Run 3: 81.74  (SE +/- 0.12, N = 3)  Min: 81.50 / Avg: 81.74 / Max: 81.89

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1, Model: Face Detection 0106 FP32 - Device: CPU (ms, fewer is better)
  Run 1: 2113.87  (SE +/- 0.83, N = 3)  Min: 2112.64 / Avg: 2113.87 / Max: 2115.46
  Run 2: 2115.31  (SE +/- 0.51, N = 3)  Min: 2114.42 / Avg: 2115.31 / Max: 2116.19
  Run 3: 2116.84  (SE +/- 1.55, N = 3)  Min: 2113.84 / Avg: 2116.84 / Max: 2119.04

Mlpack Benchmark

Mlpack provides benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

Mlpack Benchmark, Benchmark: scikit_svm (Seconds, fewer is better)
  Run 1: 22.30  (SE +/- 0.02, N = 3)  Min: 22.25 / Avg: 22.30 / Max: 22.33
  Run 2: 22.33  (SE +/- 0.05, N = 3)  Min: 22.26 / Avg: 22.33 / Max: 22.44
  Run 3: 22.33  (SE +/- 0.04, N = 3)  Min: 22.25 / Avg: 22.33 / Max: 22.41

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3, Compression Level: 1 - Decompression Speed (MB/s, more is better)
  Run 1: 7499.0  (SE +/- 8.20, N = 3)  Min: 7484.2 / Avg: 7499.0 / Max: 7512.5
  Run 2: 7494.0  (SE +/- 10.99, N = 3)  Min: 7472.8 / Avg: 7494.0 / Max: 7509.6
  Run 3: 7503.5  (SE +/- 10.24, N = 3)  Min: 7487.2 / Avg: 7503.5 / Max: 7522.4
  1. (CC) gcc options: -O3

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12, Settings: UASTC Level 3 (Seconds, fewer is better)
  Run 1: 88.87  (SE +/- 0.01, N = 3)  Min: 88.86 / Avg: 88.87 / Max: 88.90
  Run 2: 88.99  (SE +/- 0.09, N = 3)  Min: 88.89 / Avg: 88.99 / Max: 89.16
  Run 3: 88.93  (SE +/- 0.02, N = 3)  Min: 88.90 / Avg: 88.93 / Max: 88.97
  1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1, Model: Face Detection 0106 FP16 - Device: CPU (ms, fewer is better)
  Run 1: 2114.40  (SE +/- 0.54, N = 3)  Min: 2113.36 / Avg: 2114.40 / Max: 2115.16
  Run 2: 2115.29  (SE +/- 0.91, N = 3)  Min: 2113.74 / Avg: 2115.29 / Max: 2116.90
  Run 3: 2116.99  (SE +/- 1.10, N = 3)  Min: 2115.33 / Avg: 2116.99 / Max: 2119.08

Ogg Audio Encoding

This test times how long it takes to encode a sample WAV file to Ogg format using the reference Xiph.org tools/libraries. Learn more via the OpenBenchmarking.org test page.

Ogg Audio Encoding 1.3.4, WAV To Ogg (Seconds, fewer is better)
  Run 1: 24.76  (SE +/- 0.03, N = 3)
  Run 2: 24.75  (SE +/- 0.01, N = 3)
  Run 3: 24.78  (SE +/- 0.01, N = 3)