Ryzen 5 3600XT 2021

AMD Ryzen 5 3600XT 6-Core testing with a MSI X470 GAMING M7 AC (MS-7B77) v1.0 (1.E0 BIOS) and MSI AMD Radeon R7 370 / R9 270/370 OEM 4GB on Ubuntu 20.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2101018-HA-RYZEN536000

Tests in this result file span the following categories:

Audio Encoding (4 tests), Bioinformatics (2 tests), Timed Code Compilation (3 tests), C/C++ Compiler Tests (5 tests), CPU Massive (7 tests), Creator Workloads (6 tests), Encoding (4 tests), HPC - High Performance Computing (4 tests), Machine Learning (2 tests), Multi-Core (5 tests), Programmer / Developer System Benchmarks (7 tests), Scientific Computing (2 tests), Server (4 tests), Server CPU Tests (2 tests), Single-Threaded (2 tests).


Run Management

Highlight
Result
Hide
Result
Result
Identifier
View Logs
Performance Per
Dollar
Date
Run
  Test
  Duration
Linux 5.8
January 01 2021
  3 Hours, 16 Minutes
Repeat 2
January 01 2021
  3 Hours, 21 Minutes
Repeat 3
January 01 2021
  3 Hours, 17 Minutes
Invert Hiding All Results Option
  3 Hours, 18 Minutes



Ryzen 5 3600XT 2021 - OpenBenchmarking.org - Phoronix Test Suite

  Processor:         AMD Ryzen 5 3600XT 6-Core @ 3.80GHz (6 Cores / 12 Threads)
  Motherboard:       MSI X470 GAMING M7 AC (MS-7B77) v1.0 (1.E0 BIOS)
  Chipset:           AMD Starship/Matisse
  Memory:            16GB
  Disk:              500GB CT500P2SSD8
  Graphics:          MSI AMD Radeon R7 370 / R9 270/370 OEM 4GB
  Audio:             AMD Oland/Hainan/Cape
  Monitor:           G237HL
  Network:           Qualcomm Atheros Killer E2500 + Intel 8265 / 8275
  OS:                Ubuntu 20.10
  Kernel:            5.8.0-28-generic (x86_64)
  Desktop:           GNOME Shell 3.38.1
  Display Server:    X Server 1.20.9
  Display Driver:    modesetting 1.20.9
  OpenGL:            4.5 Mesa 20.2.1 (LLVM 11.0.0)
  Compiler:          GCC 10.2.0
  File-System:       ext4
  Screen Resolution: 1920x1080

System Notes:
  - Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
  - Scaling Governor: acpi-cpufreq ondemand (Boost: Enabled)
  - CPU Microcode: 0x8701021
  - Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional STIBP: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite): Linux 5.8 vs. Repeat 2 vs. Repeat 3. All three runs fall within roughly 100% to 101% of one another across the tested workloads: Timed MAFFT Alignment, CLOMP, Opus Codec Encoding, SQLite Speedtest, Ogg Audio Encoding, BRL-CAD, simdjson, PHPBench, Node.js V8 Web Tooling Benchmark, Coremark, NCNN, Timed HMMer Search, WavPack Audio Encoding, Monkey Audio Encoding, Timed Eigen Compilation, Timed Clash Compilation, Build2, Cryptsetup, Unpacking Firefox, oneDNN, and Timed FFmpeg Compilation.
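
The overview percentages are each run's result expressed relative to a baseline run. A minimal sketch of that normalization, using the CoreMark numbers reported later in this file (treating Linux 5.8 as the baseline is an assumption for illustration):

```python
# Normalize each run's CoreMark score against the Linux 5.8 run,
# the way the result-overview bars are scaled.
# Values are taken from this result file (Iterations/Sec).
coremark = {
    "Linux 5.8": 269797.67,
    "Repeat 2": 271723.34,
    "Repeat 3": 268921.70,
}

baseline = coremark["Linux 5.8"]
relative = {run: round(100 * score / baseline, 1) for run, score in coremark.items()}
print(relative)  # all three runs land within about 1% of each other
```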

[Condensed side-by-side results table for Linux 5.8 / Repeat 2 / Repeat 3 across all test configurations; each result is broken out individually, with standard errors and sample ranges, in the per-test sections below.]

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: blazeface (ms, fewer is better)
  Repeat 3:  2.43 (SE +/- 0.01, N = 3; min/avg/max 2.41 / 2.43 / 2.44; in-run MIN 2.29 / MAX 7.08)
  Repeat 2:  2.45 (SE +/- 0.01, N = 3; min/avg/max 2.42 / 2.45 / 2.47; in-run MIN 2.29 / MAX 7.26)
  Linux 5.8: 2.53 (SE +/- 0.05, N = 3; min/avg/max 2.43 / 2.53 / 2.59; in-run MIN 2.29 / MAX 7.81)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
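
The SE figures reported throughout this file are standard errors of the mean over the N trials. As a sanity check, if the three Linux 5.8 blazeface trials were 2.43, 2.57, and 2.59 ms (min and max are reported directly; the middle value is an assumption inferred so the mean matches the reported 2.53 ms), the reported SE of +/- 0.05 is reproduced:

```python
import math

# Hypothetical reconstruction of the three Linux 5.8 blazeface trials (ms).
trials = [2.43, 2.57, 2.59]

n = len(trials)
mean = sum(trials) / n
var = sum((t - mean) ** 2 for t in trials) / (n - 1)  # sample variance
se = math.sqrt(var) / math.sqrt(n)  # standard error of the mean

print(f"avg = {mean:.2f}, SE +/- {se:.2f}")
```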

NCNN 20201218, Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better)
  Repeat 3:  5.36 (SE +/- 0.05, N = 3; min/avg/max 5.3 / 5.36 / 5.46; in-run MIN 4.81 / MAX 14.77)
  Repeat 2:  5.20 (SE +/- 0.05, N = 3; min/avg/max 5.1 / 5.2 / 5.26; in-run MIN 4.65 / MAX 25.68)
  Linux 5.8: 5.29 (SE +/- 0.08, N = 3; min/avg/max 5.18 / 5.29 / 5.44; in-run MIN 4.69 / MAX 30.34)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: alexnet (ms, fewer is better)
  Repeat 3:  15.19 (SE +/- 0.07, N = 3; min/avg/max 15.12 / 15.19 / 15.32; in-run MIN 13.48 / MAX 42.73)
  Repeat 2:  14.79 (SE +/- 0.22, N = 3; min/avg/max 14.38 / 14.79 / 15.13; in-run MIN 13.34 / MAX 40.9)
  Linux 5.8: 14.94 (SE +/- 0.03, N = 3; min/avg/max 14.9 / 14.94 / 14.99; in-run MIN 13.91 / MAX 38.48)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

simdjson

This is a benchmark of simdjson, a high-performance JSON parser. simdjson aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
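
The GB/s figures below are parsing throughput: bytes of JSON consumed divided by parse time. A rough illustration of the metric, using Python's stdlib json parser as a stand-in (simdjson itself is a C++ library; the document and timing here are synthetic and only show the shape of the calculation):

```python
import json
import time

# Build a synthetic JSON document (a list of small records), then time
# the stdlib parser and express the result as GB/s -- the same metric
# simdjson reports, though this parser is far slower than simdjson.
doc = json.dumps([{"id": i, "name": f"user{i}", "ok": True} for i in range(50_000)])
payload = doc.encode("utf-8")

start = time.perf_counter()
parsed = json.loads(payload)
elapsed = time.perf_counter() - start

gb_per_s = len(payload) / elapsed / 1e9
print(f"parsed {len(payload)} bytes in {elapsed:.4f}s -> {gb_per_s:.3f} GB/s")
```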

simdjson 0.7.1, Throughput Test: DistinctUserID (GB/s, more is better)
  Repeat 3:  0.77 (SE +/- 0.01, N = 3; min/avg/max 0.76 / 0.77 / 0.78)
  Repeat 2:  0.79 (SE +/- 0.01, N = 3; min/avg/max 0.78 / 0.79 / 0.8)
  Linux 5.8: 0.77 (SE +/- 0.01, N = 3; min/avg/max 0.76 / 0.77 / 0.78)
  1. (CXX) g++ options: -O3 -pthread

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup, PBKDF2-sha512 (Iterations Per Second, more is better)
  Repeat 3:  1768539 (SE +/- 15858.95, N = 3; min/avg/max 1744718 / 1768538.67 / 1798586)
  Repeat 2:  1757676 (SE +/- 15938.76, N = 3; min/avg/max 1738932 / 1757676 / 1789378)
  Linux 5.8: 1800801 (SE +/- 11811.84, N = 3; min/avg/max 1777247 / 1800801.33 / 1814145)

Cryptsetup, PBKDF2-whirlpool (Iterations Per Second, more is better)
  Repeat 3:  743090 (SE +/- 6755.62, N = 3; min/avg/max 735326 / 743089.67 / 756548)
  Repeat 2:  735744 (SE +/- 5231.74, N = 3; min/avg/max 728177 / 735744 / 745786)
  Linux 5.8: 753350 (SE +/- 4850.18, N = 3; min/avg/max 743670 / 753349.67 / 758738)

NCNN

NCNN 20201218, Target: CPU - Model: mobilenet (ms, fewer is better)
  Repeat 3:  19.89 (SE +/- 0.03, N = 3; min/avg/max 19.83 / 19.89 / 19.95; in-run MIN 18.82 / MAX 52.28)
  Repeat 2:  19.72 (SE +/- 0.11, N = 3; min/avg/max 19.51 / 19.72 / 19.9; in-run MIN 18.42 / MAX 58.23)
  Linux 5.8: 20.14 (SE +/- 0.07, N = 3; min/avg/max 20.01 / 20.14 / 20.26; in-run MIN 18.78 / MAX 54.72)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Timed MAFFT Alignment

This test performs an alignment of 100 pyruvate decarboxylase sequences. Learn more via the OpenBenchmarking.org test page.

Timed MAFFT Alignment 7.471, Multiple Sequence Alignment - LSU RNA (Seconds, fewer is better)
  Repeat 3:  11.32 (SE +/- 0.06, N = 3; min/avg/max 11.2 / 11.32 / 11.39)
  Repeat 2:  11.35 (SE +/- 0.12, N = 3; min/avg/max 11.16 / 11.35 / 11.56)
  Linux 5.8: 11.54 (SE +/- 0.06, N = 3; min/avg/max 11.43 / 11.54 / 11.64)
  1. (CC) gcc options: -std=c99 -O3 -lm -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  Repeat 3:  3.04147 (SE +/- 0.02149, N = 3; min/avg/max 3 / 3.04 / 3.07; in-run MIN 2.76)
  Repeat 2:  3.01204 (SE +/- 0.04368, N = 3; min/avg/max 2.92 / 3.01 / 3.06; in-run MIN 2.74)
  Linux 5.8: 3.06815 (SE +/- 0.01517, N = 3; min/avg/max 3.05 / 3.07 / 3.1; in-run MIN 2.76)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

CLOMP

CLOMP is the C version of the Livermore OpenMP benchmark, developed to measure OpenMP overheads and other threading-related performance impacts in order to influence future system designs. This test profile is configured to measure the OpenMP static-schedule speed-up across all available CPU cores using the recommended test configuration. Learn more via the OpenBenchmarking.org test page.

CLOMP 1.2, Static OMP Speedup (Speedup, more is better)
  Repeat 3:  16.8 (SE +/- 0.21, N = 3; min/avg/max 16.5 / 16.8 / 17.2)
  Repeat 2:  17.1 (SE +/- 0.06, N = 3; min/avg/max 17 / 17.1 / 17.2)
  Linux 5.8: 16.9 (SE +/- 0.23, N = 3; min/avg/max 16.5 / 16.87 / 17.3)
  1. (CC) gcc options: -fopenmp -O3 -lm

oneDNN

oneDNN 2.0, Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, fewer is better)
  Repeat 3:  5.92397 (SE +/- 0.00727, N = 3; min/avg/max 5.91 / 5.92 / 5.93; in-run MIN 5.37)
  Repeat 2:  5.92486 (SE +/- 0.01363, N = 3; min/avg/max 5.9 / 5.92 / 5.94; in-run MIN 5.36)
  Linux 5.8: 5.82329 (SE +/- 0.01960, N = 3; min/avg/max 5.8 / 5.82 / 5.86; in-run MIN 5.31)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

NCNN

NCNN 20201218, Target: CPU - Model: efficientnet-b0 (ms, fewer is better)
  Repeat 3:  8.35 (SE +/- 0.08, N = 3; min/avg/max 8.21 / 8.35 / 8.47; in-run MIN 7.51 / MAX 32.39)
  Repeat 2:  8.21 (SE +/- 0.06, N = 3; min/avg/max 8.14 / 8.21 / 8.33; in-run MIN 7.37 / MAX 31.37)
  Linux 5.8: 8.30 (SE +/- 0.03, N = 3; min/avg/max 8.25 / 8.3 / 8.35; in-run MIN 7.42 / MAX 26.44)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Cryptsetup

Cryptsetup, AES-XTS 512b Encryption (MiB/s, more is better)
  Repeat 3:  1857.5 (SE +/- 2.66, N = 3; min/avg/max 1852.9 / 1857.47 / 1862.1)
  Repeat 2:  1840.9 (SE +/- 17.19, N = 3; min/avg/max 1818.1 / 1840.93 / 1874.6)
  Linux 5.8: 1826.8 (SE +/- 17.03, N = 3; min/avg/max 1809.6 / 1826.83 / 1860.9)

NCNN

NCNN 20201218, Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better)
  Repeat 3:  6.02 (SE +/- 0.03, N = 3; min/avg/max 5.97 / 6.02 / 6.08; in-run MIN 5.47 / MAX 14.76)
  Repeat 2:  6.10 (SE +/- 0.07, N = 3; min/avg/max 6.03 / 6.1 / 6.23; in-run MIN 5.43 / MAX 16.66)
  Linux 5.8: 6.12 (SE +/- 0.03, N = 3; min/avg/max 6.07 / 6.12 / 6.17; in-run MIN 5.4 / MAX 58)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Opus Codec Encoding

Opus is an open, lossy audio codec designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.

Opus Codec Encoding 1.3.1, WAV To Opus Encode (Seconds, fewer is better)
  Repeat 3:  7.081 (SE +/- 0.044, N = 5; min/avg/max 7 / 7.08 / 7.26)
  Repeat 2:  7.197 (SE +/- 0.019, N = 5; min/avg/max 7.14 / 7.2 / 7.25)
  Linux 5.8: 7.138 (SE +/- 0.040, N = 5; min/avg/max 7.03 / 7.14 / 7.23)
  1. (CXX) g++ options: -fvisibility=hidden -logg -lm

Cryptsetup

Cryptsetup, AES-XTS 512b Decryption (MiB/s, more is better)
  Repeat 3:  1858.4 (SE +/- 2.48, N = 3; min/avg/max 1854.1 / 1858.43 / 1862.7)
  Repeat 2:  1843.9 (SE +/- 16.96, N = 3; min/avg/max 1817.6 / 1843.87 / 1875.6)
  Linux 5.8: 1828.5 (SE +/- 17.25, N = 3; min/avg/max 1806 / 1828.5 / 1862.4)

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.

SQLite Speedtest 3.30, Timed Time - Size 1,000 (Seconds, fewer is better)
  Repeat 3:  57.56 (SE +/- 0.40, N = 3; min/avg/max 56.91 / 57.56 / 58.28)
  Repeat 2:  56.65 (SE +/- 0.30, N = 3; min/avg/max 56.09 / 56.65 / 57.1)
  Linux 5.8: 57.47 (SE +/- 0.13, N = 3; min/avg/max 57.28 / 57.47 / 57.73)
  1. (CC) gcc options: -O2 -ldl -lz -lpthread

Ogg Audio Encoding

This test times how long it takes to encode a sample WAV file to Ogg format using the reference Xiph.org tools/libraries. Learn more via the OpenBenchmarking.org test page.

Ogg Audio Encoding 1.3.4, WAV To Ogg (Seconds, fewer is better)
  Repeat 3:  18.65 (SE +/- 0.15, N = 3; min/avg/max 18.37 / 18.65 / 18.87)
  Repeat 2:  18.43 (SE +/- 0.12, N = 3; min/avg/max 18.28 / 18.43 / 18.66)
  Linux 5.8: 18.72 (SE +/- 0.07, N = 3; min/avg/max 18.63 / 18.72 / 18.86)
  1. (CC) gcc options: -O2 -ffast-math -fsigned-char

Cryptsetup

Cryptsetup, Twofish-XTS 512b Decryption (MiB/s, more is better)
  Repeat 3:  433.3 (SE +/- 0.17, N = 3; min/avg/max 433.1 / 433.27 / 433.6)
  Repeat 2:  435.0 (SE +/- 0.27, N = 3; min/avg/max 434.5 / 435.03 / 435.4)
  Linux 5.8: 428.4 (SE +/- 4.97, N = 3; min/avg/max 418.5 / 428.43 / 433.7)

simdjson

simdjson 0.7.1, Throughput Test: Kostya (GB/s, more is better)
  Repeat 3:  0.67 (SE +/- 0.01, N = 3; min/avg/max 0.66 / 0.67 / 0.68)
  Repeat 2:  0.67 (SE +/- 0.01, N = 3; min/avg/max 0.66 / 0.67 / 0.68)
  Linux 5.8: 0.68 (SE +/- 0.00, N = 3; min/avg/max 0.67 / 0.68 / 0.68)
  1. (CXX) g++ options: -O3 -pthread

oneDNN

oneDNN 2.0, Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, fewer is better)
  Repeat 3:  12.76 (SE +/- 0.03, N = 3; min/avg/max 12.7 / 12.76 / 12.8; in-run MIN 12.06)
  Repeat 2:  12.58 (SE +/- 0.05, N = 3; min/avg/max 12.48 / 12.58 / 12.67; in-run MIN 11.93)
  Linux 5.8: 12.64 (SE +/- 0.06, N = 3; min/avg/max 12.54 / 12.64 / 12.72; in-run MIN 12.06)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Cryptsetup

Cryptsetup, AES-XTS 256b Encryption (MiB/s, more is better)
  Repeat 3:  2067.5 (SE +/- 17.56, N = 3; min/avg/max 2033.7 / 2067.47 / 2092.7)
  Repeat 2:  2095.4 (SE +/- 11.85, N = 3; min/avg/max 2073.9 / 2095.4 / 2114.8)
  Linux 5.8: 2073.5 (SE +/- 19.49, N = 3; min/avg/max 2034.7 / 2073.47 / 2096.4)

NCNN

NCNN 20201218, Target: CPU - Model: squeezenet_ssd (ms, fewer is better)
  Repeat 3:  24.58 (SE +/- 0.16, N = 3; min/avg/max 24.32 / 24.58 / 24.86; in-run MIN 22.83 / MAX 56.42)
  Repeat 2:  24.54 (SE +/- 0.16, N = 3; min/avg/max 24.34 / 24.54 / 24.87; in-run MIN 22.47 / MAX 71.41)
  Linux 5.8: 24.87 (SE +/- 0.46, N = 3; min/avg/max 24.35 / 24.87 / 25.78; in-run MIN 22.53 / MAX 78.77)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

simdjson

simdjson 0.7.1, Throughput Test: PartialTweets (GB/s, more is better)
  Repeat 3:  0.75 (SE +/- 0.00, N = 3; min/avg/max 0.75 / 0.75 / 0.76)
  Repeat 2:  0.76 (SE +/- 0.01, N = 3; min/avg/max 0.74 / 0.76 / 0.77)
  Linux 5.8: 0.76 (SE +/- 0.01, N = 3; min/avg/max 0.74 / 0.76 / 0.77)
  1. (CXX) g++ options: -O3 -pthread

BRL-CAD

BRL-CAD 7.30.8 is a cross-platform, open-source solid modeling system with a built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.30.8, VGR Performance Metric (more is better)
  Repeat 3:  92316
  Repeat 2:  92847
  Linux 5.8: 93535
  1. (CXX) g++ options: -std=c++11 -pipe -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -rdynamic -lSM -lICE -lXi -lGLU -lGL -lGLdispatch -lX11 -lXext -lXrender -lpthread -ldl -luuid -lm

NCNN

NCNN 20201218, Target: CPU - Model: googlenet (ms, fewer is better)
  Repeat 3:  17.71 (SE +/- 0.25, N = 3; min/avg/max 17.29 / 17.71 / 18.15; in-run MIN 16.4 / MAX 48.5)
  Repeat 2:  17.88 (SE +/- 0.10, N = 3; min/avg/max 17.68 / 17.88 / 18; in-run MIN 16.43 / MAX 56.75)
  Linux 5.8: 17.94 (SE +/- 0.25, N = 3; min/avg/max 17.65 / 17.94 / 18.43; in-run MIN 16.31 / MAX 59.55)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Cryptsetup

Cryptsetup, Twofish-XTS 256b Decryption (MiB/s, more is better)
  Repeat 3:  430.9 (SE +/- 2.68, N = 3; min/avg/max 425.5 / 430.87 / 433.7)
  Repeat 2:  427.1 (SE +/- 4.01, N = 3; min/avg/max 422.7 / 427.1 / 435.1)
  Linux 5.8: 425.4 (SE +/- 4.33, N = 3; min/avg/max 420.2 / 425.4 / 434)

Cryptsetup, Twofish-XTS 512b Encryption (MiB/s, more is better)
  Repeat 3:  434.6 (SE +/- 0.15, N = 3; min/avg/max 434.4 / 434.6 / 434.9)
  Repeat 2:  433.6 (SE +/- 2.88, N = 3; min/avg/max 427.8 / 433.57 / 436.5)
  Linux 5.8: 429.4 (SE +/- 4.82, N = 3; min/avg/max 419.8 / 429.4 / 435)

Cryptsetup, Serpent-XTS 512b Encryption (MiB/s, more is better)
  Repeat 3:  746.7 (SE +/- 0.33, N = 3; min/avg/max 746.1 / 746.73 / 747.2)
  Repeat 2:  742.4 (SE +/- 7.66, N = 3; min/avg/max 727.1 / 742.4 / 750.8)
  Linux 5.8: 737.8 (SE +/- 6.76, N = 3; min/avg/max 724.5 / 737.8 / 746.6)

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

PHPBench 0.8.1, PHP Benchmark Suite (Score, more is better)
  Repeat 3:  659345 (SE +/- 4604.42, N = 3; min/avg/max 650786 / 659345.33 / 666567)
  Repeat 2:  666507 (SE +/- 2399.07, N = 3; min/avg/max 663277 / 666507.33 / 671195)
  Linux 5.8: 663340 (SE +/- 4754.65, N = 3; min/avg/max 656276 / 663339.67 / 672385)

Node.js V8 Web Tooling Benchmark

Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, like Babel, TypeScript, and Babylon. This test profile can test the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.

Node.js V8 Web Tooling Benchmark (runs/s, more is better)
  Repeat 3:  11.29 (SE +/- 0.05, N = 3; min/avg/max 11.22 / 11.29 / 11.38)
  Repeat 2:  11.19 (SE +/- 0.04, N = 3; min/avg/max 11.13 / 11.19 / 11.26)
  Linux 5.8: 11.31 (SE +/- 0.11, N = 3; min/avg/max 11.18 / 11.31 / 11.52)
  1. Nodejs v12.18.2

oneDNN

oneDNN 2.0, Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  Repeat 3:  8.69285 (SE +/- 0.01336, N = 3; min/avg/max 8.67 / 8.69 / 8.71; in-run MIN 7.91)
  Repeat 2:  8.76683 (SE +/- 0.07453, N = 3; min/avg/max 8.68 / 8.77 / 8.91; in-run MIN 7.9)
  Linux 5.8: 8.78558 (SE +/- 0.12505, N = 3; min/avg/max 8.65 / 8.79 / 9.04; in-run MIN 7.9)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

Coremark 1.0, CoreMark Size 666 - Iterations Per Second (Iterations/Sec, more is better)
  Repeat 3:  268921.70 (SE +/- 1381.97, N = 3; min/avg/max 267210.99 / 268921.7 / 271657.11)
  Repeat 2:  271723.34 (SE +/- 1424.10, N = 3; min/avg/max 269018.08 / 271723.34 / 273847.56)
  Linux 5.8: 269797.67 (SE +/- 48.96, N = 3; min/avg/max 269743.74 / 269797.67 / 269895.42)
  1. (CC) gcc options: -O2 -lrt" -lrt

Cryptsetup

Cryptsetup, AES-XTS 256b Decryption (MiB/s, more is better)
  Repeat 3:  2068.2 (SE +/- 18.26, N = 3; min/avg/max 2033.5 / 2068.2 / 2095.4)
  Repeat 2:  2088.5 (SE +/- 13.17, N = 3; min/avg/max 2073.3 / 2088.47 / 2114.7)
  Linux 5.8: 2080.6 (SE +/- 9.96, N = 3; min/avg/max 2062.7 / 2080.6 / 2097.1)

NCNN

NCNN is a high-performance neural network inference framework, developed by Tencent, that is optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
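The NCNN results below are per-inference latencies in milliseconds, so for a single-stream workload the reciprocal gives an upper bound on throughput. A trivial conversion using two of the "Linux 5.8" averages from this result file:

```python
# Convert NCNN per-inference latency (ms) to single-stream throughput.
# Latencies are the "Linux 5.8" averages from this result file.
latencies_ms = {"mnasnet": 5.27, "vgg16": 68.66}

for model, ms in latencies_ms.items():
    print(f"{model}: {1000.0 / ms:.1f} inferences/s")
```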

NCNN 20201218, Target: CPU - Model: mnasnet (ms, fewer is better)
  Repeat 3:  5.23 (SE +/- 0.05, N = 3; Min: 5.14 / Avg: 5.23 / Max: 5.3; MIN: 4.66 / MAX: 36.04)
  Repeat 2:  5.22 (SE +/- 0.08, N = 3; Min: 5.09 / Avg: 5.22 / Max: 5.37; MIN: 4.63 / MAX: 10.82)
  Linux 5.8: 5.27 (SE +/- 0.03, N = 3; Min: 5.24 / Avg: 5.27 / Max: 5.33; MIN: 4.67 / MAX: 10.78)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: vgg16 (ms, fewer is better)
  Repeat 3:  69.12 (SE +/- 0.22, N = 3; Min: 68.82 / Avg: 69.12 / Max: 69.54; MIN: 65.47 / MAX: 108.3)
  Repeat 2:  68.51 (SE +/- 0.25, N = 3; Min: 68.01 / Avg: 68.51 / Max: 68.82; MIN: 65.01 / MAX: 111.99)
  Linux 5.8: 68.66 (SE +/- 0.29, N = 3; Min: 68.3 / Avg: 68.66 / Max: 69.23; MIN: 65.3 / MAX: 107.34)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

oneDNN


oneDNN 2.0, Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, fewer is better)
  Repeat 3:  5.08034 (SE +/- 0.00785, N = 3; Min: 5.07 / Avg: 5.08 / Max: 5.09; MIN: 4.39)
  Repeat 2:  5.03612 (SE +/- 0.01578, N = 3; Min: 5 / Avg: 5.04 / Max: 5.05; MIN: 4.31)
  Linux 5.8: 5.04874 (SE +/- 0.01204, N = 3; Min: 5.04 / Avg: 5.05 / Max: 5.07; MIN: 4.36)
  (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

NCNN


NCNN 20201218, Target: CPU - Model: regnety_400m (ms, fewer is better)
  Repeat 3:  17.74 (SE +/- 0.06, N = 3; Min: 17.62 / Avg: 17.74 / Max: 17.8; MIN: 17.04 / MAX: 62.12)
  Repeat 2:  17.81 (SE +/- 0.23, N = 3; Min: 17.4 / Avg: 17.81 / Max: 18.21; MIN: 16.9 / MAX: 53.49)
  Linux 5.8: 17.66 (SE +/- 0.21, N = 3; Min: 17.43 / Avg: 17.66 / Max: 18.08; MIN: 17.1 / MAX: 57.9)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

oneDNN


oneDNN 2.0, Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better)
  Repeat 3:  3366.59 (SE +/- 4.76, N = 3; Min: 3359.55 / Avg: 3366.59 / Max: 3375.67; MIN: 3318.41)
  Repeat 2:  3357.80 (SE +/- 4.46, N = 3; Min: 3350.29 / Avg: 3357.8 / Max: 3365.73; MIN: 3305.01)
  Linux 5.8: 3385.70 (SE +/- 14.40, N = 3; Min: 3358.55 / Avg: 3385.7 / Max: 3407.57; MIN: 3314.6)
  (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0, Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  Repeat 3:  3387.66 (SE +/- 6.23, N = 3; Min: 3375.46 / Avg: 3387.66 / Max: 3395.99; MIN: 3326.97)
  Repeat 2:  3361.25 (SE +/- 10.93, N = 3; Min: 3342.5 / Avg: 3361.25 / Max: 3380.37; MIN: 3290.33)
  Linux 5.8: 3389.04 (SE +/- 19.15, N = 3; Min: 3357.12 / Avg: 3389.04 / Max: 3423.33; MIN: 3314.6)
  (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

NCNN


NCNN 20201218, Target: CPU - Model: resnet18 (ms, fewer is better)
  Repeat 3:  18.65 (SE +/- 0.07, N = 3; Min: 18.52 / Avg: 18.65 / Max: 18.74; MIN: 17.29 / MAX: 40.02)
  Repeat 2:  18.50 (SE +/- 0.14, N = 3; Min: 18.32 / Avg: 18.5 / Max: 18.77; MIN: 17.04 / MAX: 37.4)
  Linux 5.8: 18.52 (SE +/- 0.12, N = 3; Min: 18.35 / Avg: 18.52 / Max: 18.75; MIN: 17.05 / MAX: 56.7)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: resnet50 (ms, fewer is better)
  Repeat 3:  34.62 (SE +/- 0.29, N = 3; Min: 34.1 / Avg: 34.62 / Max: 35.1; MIN: 32.92 / MAX: 80.63)
  Repeat 2:  34.85 (SE +/- 0.10, N = 3; Min: 34.73 / Avg: 34.85 / Max: 35.04; MIN: 32.75 / MAX: 81.04)
  Linux 5.8: 34.89 (SE +/- 0.19, N = 3; Min: 34.66 / Avg: 34.89 / Max: 35.26; MIN: 33.24 / MAX: 80.99)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Cryptsetup


Cryptsetup, Twofish-XTS 256b Encryption (MiB/s, more is better)
  Repeat 3:  426.2 (SE +/- 4.25, N = 3; Min: 421.1 / Avg: 426.17 / Max: 434.6)
  Repeat 2:  426.7 (SE +/- 5.15, N = 3; Min: 418.7 / Avg: 426.67 / Max: 436.3)
  Linux 5.8: 429.5 (SE +/- 3.09, N = 3; Min: 423.5 / Avg: 429.53 / Max: 433.7)

Cryptsetup, Serpent-XTS 512b Decryption (MiB/s, more is better)
  Repeat 3:  731.4 (SE +/- 0.18, N = 3; Min: 731.1 / Avg: 731.43 / Max: 731.7)
  Repeat 2:  730.6 (SE +/- 3.91, N = 3; Min: 722.8 / Avg: 730.6 / Max: 735)
  Linux 5.8: 726.3 (SE +/- 5.24, N = 3; Min: 715.8 / Avg: 726.27 / Max: 731.8)

Timed HMMer Search

This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of the Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.

Timed HMMer Search 3.3.1, Pfam Database Search (Seconds, fewer is better)
  Repeat 3:  107.04 (SE +/- 0.28, N = 3; Min: 106.53 / Avg: 107.04 / Max: 107.47)
  Repeat 2:  106.98 (SE +/- 0.30, N = 3; Min: 106.42 / Avg: 106.98 / Max: 107.43)
  Linux 5.8: 106.41 (SE +/- 0.06, N = 3; Min: 106.28 / Avg: 106.41 / Max: 106.49)
  (CC) gcc options: -O3 -pthread -lhmmer -leasel -lm

WavPack Audio Encoding

This test times how long it takes to encode a sample WAV file to WavPack format with very high quality settings. Learn more via the OpenBenchmarking.org test page.

WavPack Audio Encoding 5.3, WAV To WavPack (Seconds, fewer is better)
  Repeat 3:  12.48 (SE +/- 0.06, N = 5; Min: 12.25 / Avg: 12.48 / Max: 12.58)
  Repeat 2:  12.40 (SE +/- 0.07, N = 5; Min: 12.14 / Avg: 12.4 / Max: 12.52)
  Linux 5.8: 12.41 (SE +/- 0.07, N = 5; Min: 12.18 / Avg: 12.41 / Max: 12.62)
  (CXX) g++ options: -rdynamic

oneDNN


oneDNN 2.0, Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  Repeat 3:  3.50419 (SE +/- 0.00745, N = 3; Min: 3.49 / Avg: 3.5 / Max: 3.52; MIN: 3.36)
  Repeat 2:  3.51698 (SE +/- 0.01058, N = 3; Min: 3.5 / Avg: 3.52 / Max: 3.54; MIN: 3.35)
  Linux 5.8: 3.49733 (SE +/- 0.00924, N = 3; Min: 3.48 / Avg: 3.5 / Max: 3.51; MIN: 3.35)
  (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

NCNN


NCNN 20201218, Target: CPU - Model: yolov4-tiny (ms, fewer is better)
  Repeat 3:  30.71 (SE +/- 0.14, N = 3; Min: 30.51 / Avg: 30.71 / Max: 30.97; MIN: 29.37 / MAX: 65.42)
  Repeat 2:  30.61 (SE +/- 0.17, N = 3; Min: 30.28 / Avg: 30.61 / Max: 30.81; MIN: 28.74 / MAX: 89.42)
  Linux 5.8: 30.78 (SE +/- 0.30, N = 3; Min: 30.47 / Avg: 30.78 / Max: 31.37; MIN: 28.76 / MAX: 80.19)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Monkey Audio Encoding

This test times how long it takes to encode a sample WAV file to Monkey's Audio APE format. Learn more via the OpenBenchmarking.org test page.

Monkey Audio Encoding 3.99.6, WAV To APE (Seconds, fewer is better)
  Repeat 3:  11.39 (SE +/- 0.06, N = 5; Min: 11.22 / Avg: 11.39 / Max: 11.53)
  Repeat 2:  11.45 (SE +/- 0.12, N = 21; Min: 11.1 / Avg: 11.45 / Max: 13.77)
  Linux 5.8: 11.42 (SE +/- 0.06, N = 5; Min: 11.25 / Avg: 11.42 / Max: 11.58)
  (CXX) g++ options: -O3 -pedantic -rdynamic -lrt

NCNN


NCNN 20201218, Target: CPU - Model: shufflenet-v2 (ms, fewer is better)
  Repeat 3:  7.74 (SE +/- 0.12, N = 3; Min: 7.55 / Avg: 7.74 / Max: 7.95; MIN: 7.17 / MAX: 15.91)
  Repeat 2:  7.72 (SE +/- 0.09, N = 3; Min: 7.55 / Avg: 7.72 / Max: 7.87; MIN: 7.17 / MAX: 13.38)
  Linux 5.8: 7.76 (SE +/- 0.06, N = 3; Min: 7.68 / Avg: 7.76 / Max: 7.88; MIN: 7.17 / MAX: 17.03)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

oneDNN


oneDNN 2.0, Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  Repeat 3:  5162.11 (SE +/- 8.57, N = 3; Min: 5145.07 / Avg: 5162.11 / Max: 5172.27; MIN: 5091.71)
  Repeat 2:  5141.79 (SE +/- 12.15, N = 3; Min: 5117.99 / Avg: 5141.79 / Max: 5157.91; MIN: 5067.78)
  Linux 5.8: 5135.69 (SE +/- 2.49, N = 3; Min: 5131.15 / Avg: 5135.69 / Max: 5139.75; MIN: 5062.6)
  (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0, Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  Repeat 3:  23.65 (SE +/- 0.10, N = 3; Min: 23.46 / Avg: 23.65 / Max: 23.8; MIN: 22.52)
  Repeat 2:  23.77 (SE +/- 0.04, N = 3; Min: 23.7 / Avg: 23.77 / Max: 23.83; MIN: 22.4)
  Linux 5.8: 23.67 (SE +/- 0.03, N = 3; Min: 23.61 / Avg: 23.67 / Max: 23.73; MIN: 22.69)
  (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Cryptsetup


Cryptsetup, Serpent-XTS 256b Decryption (MiB/s, more is better)
  Repeat 3:  721.0 (SE +/- 6.51, N = 3; Min: 709 / Avg: 720.97 / Max: 731.4)
  Repeat 2:  718.6 (SE +/- 7.77, N = 3; Min: 707.6 / Avg: 718.6 / Max: 733.6)
  Linux 5.8: 722.0 (SE +/- 5.38, N = 3; Min: 713 / Avg: 721.97 / Max: 731.6)

oneDNN


oneDNN 2.0, Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  Repeat 3:  5132.65 (SE +/- 0.83, N = 3; Min: 5131.2 / Avg: 5132.65 / Max: 5134.07; MIN: 5073.41)
  Repeat 2:  5109.67 (SE +/- 0.87, N = 3; Min: 5108.57 / Avg: 5109.67 / Max: 5111.39; MIN: 5045.96)
  Linux 5.8: 5110.57 (SE +/- 5.15, N = 3; Min: 5101.51 / Avg: 5110.57 / Max: 5119.35; MIN: 5045.23)
  (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0, Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  Repeat 3:  4.41212 (SE +/- 0.01222, N = 3; Min: 4.39 / Avg: 4.41 / Max: 4.43; MIN: 3.97)
  Repeat 2:  4.39427 (SE +/- 0.01709, N = 3; Min: 4.37 / Avg: 4.39 / Max: 4.43; MIN: 4.04)
  Linux 5.8: 4.41400 (SE +/- 0.01035, N = 3; Min: 4.4 / Avg: 4.41 / Max: 4.43; MIN: 3.95)
  (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Timed Eigen Compilation

This test times how long it takes to build all Eigen examples. The Eigen examples are compiled serially. Eigen is a C++ template library for linear algebra. Learn more via the OpenBenchmarking.org test page.

Timed Eigen Compilation 3.3.9, Time To Compile (Seconds, fewer is better)
  Repeat 3:  75.44 (SE +/- 0.22, N = 3; Min: 75.15 / Avg: 75.44 / Max: 75.87)
  Repeat 2:  75.40 (SE +/- 0.34, N = 3; Min: 75.05 / Avg: 75.4 / Max: 76.08)
  Linux 5.8: 75.13 (SE +/- 0.39, N = 3; Min: 74.68 / Avg: 75.13 / Max: 75.91)

Timed Clash Compilation

Build the clash-lang Haskell to VHDL/Verilog/SystemVerilog compiler with GHC 8.10.1. Learn more via the OpenBenchmarking.org test page.

Timed Clash Compilation, Time To Compile (Seconds, fewer is better)
  Repeat 3:  376.49 (SE +/- 1.29, N = 3; Min: 374.81 / Avg: 376.49 / Max: 379.02)
  Repeat 2:  375.11 (SE +/- 1.96, N = 3; Min: 372.11 / Avg: 375.11 / Max: 378.79)
  Linux 5.8: 376.60 (SE +/- 2.24, N = 3; Min: 373.01 / Avg: 376.6 / Max: 380.7)

oneDNN


oneDNN 2.0, Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better)
  Repeat 3:  24.52 (SE +/- 0.05, N = 3; Min: 24.46 / Avg: 24.52 / Max: 24.61; MIN: 22.82)
  Repeat 2:  24.53 (SE +/- 0.17, N = 3; Min: 24.2 / Avg: 24.53 / Max: 24.72; MIN: 22.85)
  Linux 5.8: 24.61 (SE +/- 0.09, N = 3; Min: 24.5 / Avg: 24.61 / Max: 24.79; MIN: 22.92)
  (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0, Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better)
  Repeat 3:  9.15703 (SE +/- 0.03988, N = 3; Min: 9.11 / Avg: 9.16 / Max: 9.24; MIN: 8.89)
  Repeat 2:  9.18130 (SE +/- 0.01634, N = 3; Min: 9.16 / Avg: 9.18 / Max: 9.21; MIN: 8.9)
  Linux 5.8: 9.18729 (SE +/- 0.02921, N = 3; Min: 9.15 / Avg: 9.19 / Max: 9.25; MIN: 8.84)
  (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Cryptsetup


Cryptsetup, Serpent-XTS 256b Encryption (MiB/s, more is better)
  Repeat 3:  735.7 (SE +/- 7.00, N = 3; Min: 722.8 / Avg: 735.67 / Max: 746.9)
  Repeat 2:  734.9 (SE +/- 7.26, N = 3; Min: 726.9 / Avg: 734.9 / Max: 749.4)
  Linux 5.8: 733.5 (SE +/- 6.74, N = 3; Min: 726.5 / Avg: 733.53 / Max: 747)

oneDNN


oneDNN 2.0, Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  Repeat 3:  3388.22 (SE +/- 15.22, N = 3; Min: 3358.12 / Avg: 3388.22 / Max: 3407.18; MIN: 3306.01)
  Repeat 2:  3385.30 (SE +/- 7.72, N = 3; Min: 3372.6 / Avg: 3385.3 / Max: 3399.25; MIN: 3320.3)
  Linux 5.8: 3378.54 (SE +/- 14.47, N = 3; Min: 3352.57 / Avg: 3378.54 / Max: 3402.57; MIN: 3306.98)
  (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code with Cargo-like features. Learn more via the OpenBenchmarking.org test page.

Build2 0.13, Time To Compile (Seconds, fewer is better)
  Repeat 3:  176.92 (SE +/- 0.28, N = 3; Min: 176.63 / Avg: 176.92 / Max: 177.49)
  Repeat 2:  176.75 (SE +/- 0.69, N = 3; Min: 175.84 / Avg: 176.75 / Max: 178.11)
  Linux 5.8: 177.26 (SE +/- 0.55, N = 3; Min: 176.59 / Avg: 177.26 / Max: 178.36)

Unpacking Firefox

This simple test profile measures how long it takes to extract the .tar.xz source package of the Mozilla Firefox Web Browser. Learn more via the OpenBenchmarking.org test page.
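What this profile times is essentially a single tar.xz extraction. A minimal, self-contained Python sketch of the same kind of measurement, using a small synthetic archive as a stand-in for the Firefox source tarball:

```python
import tarfile
import tempfile
import time
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    tmp = Path(tmp)
    # Build a small synthetic .tar.xz as a stand-in for the Firefox tarball
    payload = tmp / "payload.txt"
    payload.write_bytes(b"x" * 1_000_000)
    archive = tmp / "sample.tar.xz"
    with tarfile.open(archive, "w:xz") as tf:
        tf.add(payload, arcname="payload.txt")

    # Time the extraction, which is what the test profile measures
    dest = tmp / "out"
    dest.mkdir()
    start = time.perf_counter()
    with tarfile.open(archive, "r:xz") as tf:
        tf.extractall(dest)
    elapsed = time.perf_counter() - start
    print(f"extracted in {elapsed:.3f} s")
```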

Unpacking Firefox 84.0, Extracting: firefox-84.0.source.tar.xz (Seconds, fewer is better)
  Repeat 3:  17.47 (SE +/- 0.04, N = 4; Min: 17.37 / Avg: 17.47 / Max: 17.56)
  Repeat 2:  17.46 (SE +/- 0.05, N = 4; Min: 17.32 / Avg: 17.46 / Max: 17.55)
  Linux 5.8: 17.51 (SE +/- 0.09, N = 4; Min: 17.26 / Avg: 17.51 / Max: 17.64)

oneDNN


oneDNN 2.0, Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, fewer is better)
  Repeat 3:  5080.07 (SE +/- 26.15, N = 3; Min: 5027.91 / Avg: 5080.07 / Max: 5109.54; MIN: 4976)
  Repeat 2:  5078.51 (SE +/- 21.00, N = 3; Min: 5049.83 / Avg: 5078.51 / Max: 5119.41; MIN: 4971.76)
  Linux 5.8: 5069.40 (SE +/- 11.85, N = 3; Min: 5046.59 / Avg: 5069.4 / Max: 5086.35; MIN: 4979.47)
  (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

Timed FFmpeg Compilation 4.2.2, Time To Compile (Seconds, fewer is better)
  Repeat 3:  65.54 (SE +/- 0.26, N = 3; Min: 65.03 / Avg: 65.54 / Max: 65.91)
  Repeat 2:  65.43 (SE +/- 0.14, N = 3; Min: 65.15 / Avg: 65.43 / Max: 65.59)
  Linux 5.8: 65.49 (SE +/- 0.29, N = 3; Min: 64.92 / Avg: 65.49 / Max: 65.92)

oneDNN


oneDNN 2.0, Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, fewer is better)
  Repeat 3:  6.78786 (SE +/- 0.02855, N = 3; Min: 6.76 / Avg: 6.79 / Max: 6.84; MIN: 6.35)
  Repeat 2:  6.79426 (SE +/- 0.03413, N = 3; Min: 6.74 / Avg: 6.79 / Max: 6.86; MIN: 6.3)
  Linux 5.8: 6.78426 (SE +/- 0.03755, N = 3; Min: 6.71 / Avg: 6.78 / Max: 6.83; MIN: 6.33)
  (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0, Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  Repeat 3:  6.68541 (SE +/- 0.01027, N = 3; Min: 6.67 / Avg: 6.69 / Max: 6.7; MIN: 6.46)
  Repeat 2:  6.68031 (SE +/- 0.01083, N = 3; Min: 6.67 / Avg: 6.68 / Max: 6.7; MIN: 6.47)
  Linux 5.8: 6.67768 (SE +/- 0.00919, N = 3; Min: 6.67 / Avg: 6.68 / Max: 6.7; MIN: 6.47)
  (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

simdjson

This is a benchmark of simdjson, a high-performance JSON parser. simdjson aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
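At the roughly 0.44 GB/s measured here, parse time scales linearly with document size. A trivial estimate, assuming GB here means 10^9 bytes:

```python
# Estimate parse time from the LargeRandom throughput in this result file.
throughput_gb_s = 0.44          # GB/s; GB = 10**9 bytes is an assumption

def parse_seconds(size_bytes):
    """Estimated seconds to parse a document of the given size."""
    return size_bytes / (throughput_gb_s * 10**9)

print(f"1 GB JSON: ~{parse_seconds(10**9):.2f} s")
print(f"100 MB JSON: ~{parse_seconds(10**8):.3f} s")
```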

simdjson 0.7.1, Throughput Test: LargeRandom (GB/s, more is better)
  Repeat 3:  0.44 (SE +/- 0.01, N = 3; Min: 0.43 / Avg: 0.44 / Max: 0.45)
  Repeat 2:  0.44 (SE +/- 0.00, N = 3; Min: 0.43 / Avg: 0.44 / Max: 0.44)
  Linux 5.8: 0.44 (SE +/- 0.00, N = 3; Min: 0.43 / Avg: 0.44 / Max: 0.44)
  (CXX) g++ options: -O3 -pthread