Intel 10980XE GCC Compiler Benchmarks

Intel Core i9-10980XE GCC compiler benchmarking by Michael Larabel for a future article.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2107032-IB-10980XECO53
The tests in this comparison span the following categories:

  Audio Encoding: 4 tests
  AV1: 4 tests
  Bioinformatics: 3 tests
  C/C++ Compiler Tests: 21 tests
  Compression Tests: 3 tests
  CPU Massive: 20 tests
  Creator Workloads: 24 tests
  Cryptography: 5 tests
  Encoding: 13 tests
  Finance: 2 tests
  HPC - High Performance Computing: 7 tests
  Imaging: 3 tests
  Machine Learning: 4 tests
  Multi-Core: 18 tests
  NVIDIA GPU Compute: 3 tests
  OpenMPI Tests: 2 tests
  Programmer / Developer System Benchmarks: 3 tests
  Raytracing: 2 tests
  Renderers: 3 tests
  Scientific Computing: 3 tests
  Server CPU Tests: 12 tests
  Single-Threaded: 6 tests
  Speech: 2 tests
  Telephony: 3 tests
  Video Encoding: 9 tests
  Common Workstation Benchmarks: 2 tests


Run Management

Highlight
Result
Hide
Result
Result
Identifier
Performance Per
Dollar
Date
Run
  Test
  Duration
GCC 8.5
July 03 2021
  6 Hours, 30 Minutes
GCC 9.4
July 02 2021
  9 Hours, 31 Minutes
GCC 10.3
July 02 2021
  6 Hours, 12 Minutes
GCC 11.1
July 02 2021
  6 Hours, 7 Minutes
GCC 12.0.0 20210701
July 01 2021
  7 Hours, 40 Minutes
Invert Hiding All Results Option
  7 Hours, 12 Minutes



System Details
  Processor:           Intel Core i9-10980XE @ 4.80GHz (18 Cores / 36 Threads)
  Motherboard:         ASRock X299 Steel Legend (P1.30 BIOS)
  Chipset:             Intel Sky Lake-E DMI3 Registers
  Memory:              32GB
  Disk:                Samsung SSD 970 PRO 512GB
  Graphics:            NVIDIA NV132 11GB
  Audio:               Realtek ALC1220
  Monitor:             ASUS VP28U
  Network:             Intel I219-V + Intel I211
  OS:                  Ubuntu 21.04
  Kernel:              5.11.0-22-generic (x86_64)
  Desktop:             GNOME Shell 3.38.4
  Display Server:      X Server + Wayland
  Display Driver:      nouveau
  OpenGL:              4.3 Mesa 21.0.1
  Vulkan:              1.0.2
  Compilers:           GCC 8.5.0, GCC 9.4.0, GCC 10.3.0, GCC 11.1.0, GCC 12.0.0 20210701
  File-System:         ext4
  Screen Resolution:   2560x1600

System Notes
  - Transparent Huge Pages: madvise
  - CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native"
  - GCC configure options: --disable-multilib --enable-checking=release --enable-languages=c,c++
  - Scaling Governor: intel_cpufreq schedutil
  - CPU Microcode: 0x5003102
  - Python 3.9.5
  - Security mitigations: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Mitigation of TSX disabled
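The compiler configuration recorded above suggests each GCC release was built from source with the same configure flags. The sketch below is a hypothetical recipe based on those flags; the tarball name, install prefix, and -j count are illustrative assumptions, not part of the result file.

```shell
# Hypothetical build recipe, reconstructed from the configure flags in the
# system notes. The version/prefix shown here are assumptions.
tar xf gcc-11.1.0.tar.xz
cd gcc-11.1.0
./contrib/download_prerequisites      # fetch GMP/MPFR/MPC into the source tree
mkdir build && cd build
../configure --prefix=/opt/gcc-11.1.0 \
             --disable-multilib \
             --enable-checking=release \
             --enable-languages=c,c++
make -j"$(nproc)"
make install

# The benchmarks themselves were then compiled with:
export CFLAGS="-O3 -march=native"
export CXXFLAGS="-O3 -march=native"
```

Repeating this per release (8.5.0, 9.4.0, 10.3.0, 11.1.0, and the 20210701 GCC 12 snapshot) and pointing the Phoronix Test Suite at each install would reproduce a comparison of this shape.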

[Result overview chart: per-test geometric means for GCC 8.5 / GCC 9.4 / GCC 10.3 / GCC 11.1 / GCC 12.0.0 20210701, normalized roughly 100% to 123%, covering: eSpeak-NG Speech Engine, Smallpt, GraphicsMagick, Timed MrBayes Analysis, QuantLib, Botan, Coremark, Gcrypt Library, TNN, Stockfish, VP9 libvpx Encoding, Zstd Compression, oneDNN, x265, Liquid-DSP, WebP Image Encode, Opus Codec Encoding, ViennaCL, Tachyon, SVT-VP9, Timed HMMer Search, Mobile Neural Network, LAME MP3 Encoding, 7-Zip Compression, Crypto++, Etcpak, C-Ray, VOSK Speech Recognition Toolkit, Himeno Benchmark, SecureMark, NCNN, C-Blosc, dav1d, FLAC Audio Encoding, SQLite Speedtest, libjpeg-turbo tjbench, Ngspice, SVT-AV1, PJSIP, AOM AV1, GnuPG, Kvazaar, SVT-HEVC, WavPack Audio Encoding, and FinanceBench.]

[Condensed result table: the raw figures for every test and metric across GCC 8.5, GCC 9.4, GCC 10.3, GCC 11.1, and GCC 12.0.0 20210701. The individual results are broken out per test below.]

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Sharpen (Iterations Per Minute; more is better)
  GCC 8.5                197
  GCC 9.4                265
  GCC 10.3               317
  GCC 11.1               319
  GCC 12.0.0 20210701    319
  One configuration reported a spread of SE +/- 0.33, N = 3 (min 318 / avg 318.67 / max 319).
  Compiler options: gcc -fopenmp -O3 -march=native -pthread -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -lz -lm -lpthread

FinanceBench

FinanceBench is a collection of financial program benchmarks with support for benchmarking on the GPU via OpenCL and CPU benchmarking with OpenMP. The FinanceBench test cases cover the Black-Scholes-Merton process with an analytic European option engine, the QMC (Sobol) Monte Carlo method (equity option example), Bonds (fixed-rate bond with a flat forward curve), and Repo (securities repurchase agreement). FinanceBench was originally written by the Cavazos Lab at the University of Delaware. Learn more via the OpenBenchmarking.org test page.

FinanceBench 2016-07-25 - Benchmark: Bonds OpenMP (ms; fewer is better)
  GCC 8.5                74820.19  (SE +/- 5.53, N = 3; min 74809.59, max 74828.23)
  GCC 9.4                76452.78  (SE +/- 1061.39, N = 3; min 75309.16, max 78573.37)
  GCC 10.3               49799.12  (SE +/- 11.94, N = 3; min 49776.80, max 49817.63)
  GCC 11.1               48802.76  (SE +/- 36.89, N = 3; min 48733.25, max 48858.92)
  GCC 12.0.0 20210701    48317.58  (SE +/- 48.36, N = 3; min 48254.08, max 48412.51)
  Compiler options: g++ -O3 -march=native -fopenmp

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 8, Long Mode - Compression Speed (MB/s; more is better)
  GCC 8.5                335.5  (SE +/- 3.63, N = 3; min 329.7, max 342.2)
  GCC 9.4                375.6  (SE +/- 2.56, N = 13; min 361.5, max 390.5)
  GCC 10.3               370.1  (SE +/- 3.28, N = 15; min 347.6, max 391.7)
  GCC 11.1               469.4  (SE +/- 3.45, N = 3; min 465.4, max 476.3)
  GCC 12.0.0 20210701    385.5  (SE +/- 3.88, N = 15; min 368.3, max 415.6)
  Compiler options: gcc -O3 -march=native -pthread -lz

eSpeak-NG Speech Engine

This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.

eSpeak-NG Speech Engine 20200907 - Text-To-Speech Synthesis (Seconds; fewer is better)
  GCC 8.5                26.84  (SE +/- 0.19, N = 4; min 26.26, max 27.04)
  GCC 9.4                28.11  (SE +/- 0.23, N = 4; min 27.43, max 28.44)
  GCC 10.3               32.67  (SE +/- 0.09, N = 4; min 32.44, max 32.87)
  GCC 11.1               35.09  (SE +/- 0.16, N = 4; min 34.67, max 35.38)
  GCC 12.0.0 20210701    27.28  (SE +/- 0.18, N = 4; min 26.98, max 27.81)
  Compiler options: gcc -O3 -march=native -std=c99

Botan

Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.

Botan 2.17.3 - Test: ChaCha20Poly1305 (MiB/s; more is better)
  GCC 8.5                984.02  (SE +/- 2.79, N = 3; min 978.52, max 987.61)
  GCC 9.4                951.15  (SE +/- 1.36, N = 3; min 948.43, max 952.55)
  GCC 10.3               788.57  (SE +/- 0.15, N = 3; min 788.39, max 788.88)
  GCC 11.1               779.38  (SE +/- 1.05, N = 3; min 777.36, max 780.91)
  GCC 12.0.0 20210701    781.38  (SE +/- 0.73, N = 3; min 780.03, max 782.52)
  Compiler options: g++ -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

Botan 2.17.3 - Test: ChaCha20Poly1305 - Decrypt (MiB/s; more is better)
  GCC 8.5                977.17  (SE +/- 2.38, N = 3; min 972.40, max 979.59)
  GCC 9.4                945.45  (SE +/- 0.08, N = 3; min 945.30, max 945.56)
  GCC 10.3               780.12  (SE +/- 0.79, N = 3; min 778.55, max 781.06)
  GCC 11.1               774.65  (SE +/- 0.89, N = 3; min 772.90, max 775.81)
  GCC 12.0.0 20210701    775.49  (SE +/- 0.47, N = 3; min 774.56, max 776.04)
  Compiler options: g++ -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

FinanceBench


FinanceBench 2016-07-25 - Benchmark: Repo OpenMP (ms; fewer is better)
  GCC 8.5                42354.90  (SE +/- 34.41, N = 3; min 42296.33, max 42415.47)
  GCC 9.4                42795.05  (SE +/- 10.60, N = 3; min 42778.00, max 42814.50)
  GCC 10.3               35458.66  (SE +/- 120.35, N = 3; min 35226.46, max 35629.69)
  GCC 11.1               34558.22  (SE +/- 43.71, N = 3; min 34511.68, max 34645.57)
  GCC 12.0.0 20210701    34223.30  (SE +/- 22.03, N = 3; min 34181.55, max 34256.40)
  Compiler options: g++ -O3 -march=native -fopenmp

GraphicsMagick


GraphicsMagick 1.3.33 - Operation: Swirl (Iterations Per Minute; more is better)
  GCC 8.5                792
  GCC 9.4                752  (SE +/- 0.33, N = 3; min 751, max 752)
  GCC 10.3               761  (SE +/- 0.33, N = 3; min 761, max 762)
  GCC 11.1               924  (SE +/- 1.20, N = 3; min 922, max 926)
  GCC 12.0.0 20210701    903
  Compiler options: gcc -fopenmp -O3 -march=native -pthread -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -lz -lm -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better)
  GCC 8.5                0.679482  (SE +/- 0.003298, N = 3; run min 0.68, max 0.69; harness MIN 0.66)
  GCC 9.4                0.828866  (SE +/- 0.008442, N = 15; run min 0.78, max 0.90; harness MIN 0.74)
  GCC 10.3               0.680522  (SE +/- 0.005954, N = 8; run min 0.65, max 0.71; harness MIN 0.63)
  GCC 11.1               0.697080  (SE +/- 0.005568, N = 3; run min 0.69, max 0.70; harness MIN 0.67)
  GCC 12.0.0 20210701    0.678626  (SE +/- 0.008934, N = 3; run min 0.66, max 0.71; harness MIN 0.65)
  Compiler options: g++ -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Smallpt

Smallpt is a C++ global illumination renderer written in less than 100 lines of code. Global illumination is done via unbiased Monte Carlo path tracing and there is multi-threading support via the OpenMP library. Learn more via the OpenBenchmarking.org test page.

Smallpt 1.0 - Global Illumination Renderer; 128 Samples (Seconds; fewer is better)
  GCC 8.5                5.286  (SE +/- 0.007, N = 3; min 5.27, max 5.30)
  GCC 9.4                6.045  (SE +/- 0.011, N = 3; min 6.02, max 6.06)
  GCC 10.3               6.130  (SE +/- 0.033, N = 3; min 6.07, max 6.18)
  GCC 11.1               6.201  (SE +/- 0.004, N = 3; min 6.20, max 6.21)
  GCC 12.0.0 20210701    5.991  (SE +/- 0.017, N = 3; min 5.96, max 6.02)
  Compiler options: g++ -fopenmp -O3 -march=native

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.2 - Model: resnet-v2-50 (ms; fewer is better)
  GCC 8.5                27.30  (SE +/- 0.10, N = 3; run min 27.17, max 27.50; harness MIN 26.86 / MAX 27.93)
  GCC 9.4                31.45  (SE +/- 0.47, N = 15; run min 25.00, max 32.74; harness MIN 24.41 / MAX 36.24)
  GCC 10.3               28.26  (SE +/- 0.24, N = 3; run min 27.79, max 28.61; harness MIN 27.60 / MAX 28.76)
  GCC 11.1               28.45  (SE +/- 0.06, N = 3; run min 28.37, max 28.56; harness MIN 27.77 / MAX 28.83)
  GCC 12.0.0 20210701    28.27  (SE +/- 0.19, N = 3; run min 27.95, max 28.61; harness MIN 27.70 / MAX 28.91)
  Compiler options: g++ -O3 -march=native -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

ViennaCL

ViennaCL is an open-source linear algebra library written in C++ with support for OpenCL and OpenMP. This test profile makes use of ViennaCL's built-in benchmarks. Learn more via the OpenBenchmarking.org test page.

ViennaCL 1.7.1 - Test: CPU BLAS - dGEMM-NN (GFLOPs/s; more is better)
  GCC 8.5                55.5  (SE +/- 0.75, N = 3; min 54.0, max 56.4)
  GCC 9.4                56.1  (SE +/- 0.13, N = 3; min 56.0, max 56.4)
  GCC 10.3               58.7  (SE +/- 0.63, N = 3; min 57.8, max 59.9)
  GCC 11.1               51.0  (SE +/- 0.15, N = 3; min 50.7, max 51.2)
  GCC 12.0.0 20210701    51.2  (SE +/- 0.44, N = 3; min 50.5, max 52.0)
  Compiler options: g++ -O3 -march=native -fopenmp -rdynamic -lOpenCL

Botan


Botan 2.17.3 - Test: Twofish (MiB/s; more is better)
  GCC 8.5                416.18  (SE +/- 0.33, N = 3; min 415.56, max 416.66)
  GCC 9.4                414.45  (SE +/- 0.12, N = 3; min 414.28, max 414.67)
  GCC 10.3               404.16  (SE +/- 0.23, N = 3; min 403.79, max 404.59)
  GCC 11.1               367.68  (SE +/- 1.13, N = 3; min 365.45, max 369.12)
  GCC 12.0.0 20210701    366.65  (SE +/- 0.30, N = 3; min 366.05, max 366.97)
  Compiler options: g++ -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

ViennaCL


ViennaCL 1.7.1 - Test: CPU BLAS - dGEMM-TN (GFLOPs/s; more is better)
  GCC 8.5                56.0  (SE +/- 1.55, N = 2; min 54.4, max 57.5)
  GCC 9.4                56.2  (SE +/- 0.36, N = 3; min 55.5, max 56.7)
  GCC 10.3               58.9  (SE +/- 1.17, N = 3; min 56.6, max 60.2)
  GCC 11.1               51.9  (SE +/- 0.10, N = 3; min 51.7, max 52.0)
  GCC 12.0.0 20210701    51.9  (SE +/- 0.10, N = 3; min 51.7, max 52.0)
  Compiler options: g++ -O3 -march=native -fopenmp -rdynamic -lOpenCL

ViennaCL 1.7.1 - Test: CPU BLAS - dGEMM-NT (GFLOPs/s; more is better)
  GCC 8.5                54.7  (SE +/- 0.25, N = 3; min 54.4, max 55.2)
  GCC 9.4                54.1  (SE +/- 0.27, N = 3; min 53.6, max 54.5)
  GCC 10.3               56.4  (SE +/- 0.48, N = 3; min 55.7, max 57.3)
  GCC 11.1               49.8  (SE +/- 0.15, N = 3; min 49.6, max 50.1)
  GCC 12.0.0 20210701    49.8  (SE +/- 0.37, N = 3; min 49.1, max 50.3)
  Compiler options: g++ -O3 -march=native -fopenmp -rdynamic -lOpenCL

ViennaCL 1.7.1 - Test: CPU BLAS - dGEMM-TT (GFLOPs/s; more is better)
  GCC 8.5                55.2  (SE +/- 0.32, N = 3; min 54.7, max 55.8)
  GCC 9.4                54.9  (SE +/- 0.07, N = 3; min 54.8, max 55.0)
  GCC 10.3               57.1  (SE +/- 0.70, N = 2; min 56.4, max 57.8)
  GCC 11.1               50.5  (SE +/- 0.20, N = 3; min 50.1, max 50.8)
  GCC 12.0.0 20210701    51.5  (SE +/- 0.09, N = 3; min 51.4, max 51.7)
  Compiler options: g++ -O3 -march=native -fopenmp -rdynamic -lOpenCL

Botan


Botan 2.17.3 - Test: Twofish - Decrypt (MiB/s; more is better)
  GCC 8.5                420.37  (SE +/- 0.07, N = 3; min 420.26, max 420.49)
  GCC 9.4                411.57  (SE +/- 0.05, N = 3; min 411.48, max 411.65)
  GCC 10.3               411.52  (SE +/- 0.26, N = 3; min 411.25, max 412.05)
  GCC 11.1               373.56  (SE +/- 0.56, N = 3; min 372.47, max 374.28)
  GCC 12.0.0 20210701    374.71  (SE +/- 0.17, N = 3; min 374.37, max 374.89)
  Compiler options: g++ -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

GraphicsMagick


GraphicsMagick 1.3.33 - Operation: Resizing (Iterations Per Minute; more is better)
  GCC 8.5                1444  (SE +/- 2.91, N = 3; min 1439, max 1449)
  GCC 9.4                1617  (SE +/- 4.93, N = 3; min 1608, max 1625)
  GCC 10.3               1585  (SE +/- 8.67, N = 3; min 1570, max 1600)
  GCC 11.1               1571  (SE +/- 3.51, N = 3; min 1567, max 1578)
  GCC 12.0.0 20210701    1607  (SE +/- 7.21, N = 3; min 1593, max 1617)
  Compiler options: gcc -fopenmp -O3 -march=native -pthread -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -lz -lm -lpthread

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: MobileNet v2 (ms, Fewer Is Better)
  GCC 8.5:             Avg: 321.39 | SE +/- 0.29, N = 3 | Min: 321.1 / Max: 321.97 | MIN: 319.29 / MAX: 341.28
  GCC 9.4:             Avg: 347.44 | SE +/- 0.34, N = 3 | Min: 346.86 / Max: 348.04 | MIN: 345.68 / MAX: 356.59
  GCC 10.3:            Avg: 311.60 | SE +/- 0.31, N = 3 | Min: 310.98 / Max: 311.98 | MIN: 309.73 / MAX: 322.67
  GCC 11.1:            Avg: 314.56 | SE +/- 0.19, N = 3 | Min: 314.2 / Max: 314.8 | MIN: 312.66 / MAX: 328.44
  GCC 12.0.0 20210701: Avg: 318.06 | SE +/- 0.17, N = 3 | Min: 317.88 / Max: 318.39 | MIN: 316.44 / MAX: 326.16
1. (CXX) g++ options: -O3 -march=native -fopenmp -pthread -fvisibility=hidden -fvisibility=default -rdynamic -ldl
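These TNN results are average forward-pass latencies in milliseconds; dividing into 1000 gives a rough inferences-per-second rate, which can make comparisons easier to read. A minimal sketch using the GCC 10.3 average from this test:

```python
def latency_ms_to_ips(avg_ms):
    """Convert an average per-inference latency in ms to inferences/second."""
    return 1000.0 / avg_ms

# GCC 10.3 MobileNet v2 average from the result above
print(round(latency_ms_to_ips(311.60), 2))  # -> 3.21
```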

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Rotate (Iterations Per Minute, More Is Better)
  GCC 8.5:             Avg: 809.07 | SE +/- 7.07, N = 15 | Min: 767 / Max: 854
  GCC 9.4:             Avg: 776    | SE +/- 3.51, N = 3  | Min: 769 / Max: 780
  GCC 10.3:            Avg: 764.67 | SE +/- 5.24, N = 3  | Min: 755 / Max: 773
  GCC 11.1:            Avg: 852    | SE +/- 2.52, N = 3  | Min: 847 / Max: 855
  GCC 12.0.0 20210701: Avg: 793.8  | SE +/- 7.25, N = 15 | Min: 747 / Max: 831
1. (CC) gcc options: -fopenmp -O3 -march=native -pthread -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -lz -lm -lpthread

Botan

Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.

Botan 2.17.3 - Test: Blowfish (MiB/s, More Is Better)
  GCC 8.5:             Avg: 491.14 | SE +/- 0.01, N = 3 | Min: 491.12 / Max: 491.15
  GCC 9.4:             Avg: 486.31 | SE +/- 0.29, N = 3 | Min: 485.73 / Max: 486.61
  GCC 10.3:            Avg: 486.29 | SE +/- 0.22, N = 3 | Min: 485.99 / Max: 486.7
  GCC 11.1:            Avg: 442.67 | SE +/- 0.12, N = 3 | Min: 442.47 / Max: 442.9
  GCC 12.0.0 20210701: Avg: 442.27 | SE +/- 0.07, N = 3 | Min: 442.13 / Max: 442.34
1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
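The spread between compilers is easier to read as a relative change. For example, GCC 11.1's Blowfish average versus GCC 8.5's, using the figures above:

```python
def percent_change(new, baseline):
    """Relative change of `new` versus `baseline`, in percent (negative = slower)."""
    return (new - baseline) / baseline * 100.0

# Botan Blowfish averages from this test (MiB/s): GCC 11.1 vs. GCC 8.5
print(round(percent_change(442.67, 491.14), 2))  # -> -9.87
```

That roughly 10% regression moving to GCC 11.1 matches the drop visible in the raw numbers.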

Timed MrBayes Analysis

This test performs a Bayesian analysis of a set of primate genome sequences in order to estimate their phylogeny. Learn more via the OpenBenchmarking.org test page.

Timed MrBayes Analysis 3.2.7 - Primate Phylogeny Analysis (Seconds, Fewer Is Better)
  GCC 8.5:             Avg: 146.1  | SE +/- 0.45, N = 3  | Min: 145.23 / Max: 146.73
  GCC 9.4:             Avg: 157.33 | SE +/- 1.65, N = 12 | Min: 153.18 / Max: 170.15
  GCC 10.3:            Avg: 154.63 | SE +/- 0.58, N = 3  | Min: 153.51 / Max: 155.49
  GCC 11.1:            Avg: 142.93 | SE +/- 0.32, N = 3  | Min: 142.35 / Max: 143.44
  GCC 12.0.0 20210701: Avg: 145.17 | SE +/- 1.02, N = 3  | Min: 143.61 / Max: 147.09
1. (CC) gcc options: -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -maes -mavx -mfma -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mrdrnd -mbmi -mbmi2 -madx -mmpx -mabm -O3 -std=c99 -pedantic -march=native -lm

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  GCC 8.5:             Avg: 0.441500 | SE +/- 0.003487, N = 3 | Min: 0.44 / Max: 0.45 | MIN: 0.41
  GCC 9.4:             Avg: 0.467594 | SE +/- 0.002588, N = 3 | Min: 0.46 / Max: 0.47 | MIN: 0.44
  GCC 10.3:            Avg: 0.426902 | SE +/- 0.004576, N = 4 | Min: 0.42 / Max: 0.44 | MIN: 0.4
  GCC 11.1:            Avg: 0.430676 | SE +/- 0.004665, N = 3 | Min: 0.43 / Max: 0.44 | MIN: 0.41
  GCC 12.0.0 20210701: Avg: 0.425675 | SE +/- 0.000391, N = 3 | Min: 0.42 / Max: 0.43 | MIN: 0.41
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Botan

Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.

Botan 2.17.3 - Test: Blowfish - Decrypt (MiB/s, More Is Better)
  GCC 8.5:             Avg: 482.78 | SE +/- 0.04, N = 3 | Min: 482.73 / Max: 482.87
  GCC 9.4:             Avg: 475.41 | SE +/- 0.21, N = 3 | Min: 475 / Max: 475.66
  GCC 10.3:            Avg: 474.80 | SE +/- 0.30, N = 3 | Min: 474.28 / Max: 475.3
  GCC 11.1:            Avg: 439.56 | SE +/- 0.13, N = 3 | Min: 439.37 / Max: 439.8
  GCC 12.0.0 20210701: Avg: 442.41 | SE +/- 0.07, N = 3 | Min: 442.28 / Max: 442.51
1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

QuantLib

QuantLib is an open-source library/framework for quantitative finance, covering modeling, trading, and risk management scenarios. QuantLib is written in C++ with Boost, and its built-in benchmark reports the QuantLib Benchmark Index score. Learn more via the OpenBenchmarking.org test page.

QuantLib 1.21 (MFLOPS, More Is Better)
  GCC 8.5:             Avg: 2586.27 | SE +/- 19.67, N = 3 | Min: 2547 / Max: 2608
  GCC 9.4:             Avg: 2568.83 | SE +/- 35.29, N = 3 | Min: 2498.8 / Max: 2611.5
  GCC 10.3:            Avg: 2529.77 | SE +/- 18.68, N = 3 | Min: 2492.4 / Max: 2548.5
  GCC 11.1:            Avg: 2749.1  | SE +/- 33.86, N = 4 | Min: 2647.6 / Max: 2786
  GCC 12.0.0 20210701: Avg: 2773.57 | SE +/- 0.85, N = 3  | Min: 2771.9 / Max: 2774.7
1. (CXX) g++ options: -O3 -march=native -rdynamic

NCNN

NCNN is a high-performance neural network inference framework, developed by Tencent, that is optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20210525 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better)
  GCC 8.5:             Avg: 5.36 | SE +/- 0.03, N = 3 | Min: 5.29 / Max: 5.4 | MIN: 4.96 / MAX: 8.4
  GCC 9.4:             Avg: 5.10 | SE +/- 0.03, N = 3 | Min: 5.05 / Max: 5.14 | MIN: 4.73 / MAX: 10.4
  GCC 10.3:            Avg: 5.12 | SE +/- 0.01, N = 3 | Min: 5.1 / Max: 5.14 | MIN: 4.76 / MAX: 8.89
  GCC 11.1:            Avg: 5.06 | SE +/- 0.10, N = 3 | Min: 4.86 / Max: 5.17 | MIN: 4.72 / MAX: 10.01
  GCC 12.0.0 20210701: Avg: 4.89 | SE +/- 0.04, N = 3 | Min: 4.84 / Max: 4.98 | MIN: 4.72 / MAX: 10.07
1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread -pthread

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

Coremark 1.0 - CoreMark Size 666 - Iterations Per Second (Iterations/Sec, More Is Better)
  GCC 8.5:             Avg: 618050.83 | SE +/- 3624.80, N = 3 | Min: 610825.18 / Max: 622173.57
  GCC 9.4:             Avg: 650499.47 | SE +/- 1621.16, N = 3 | Min: 647404.39 / Max: 652883.57
  GCC 10.3:            Avg: 630485.59 | SE +/- 2003.80, N = 3 | Min: 627068.45 / Max: 634007.46
  GCC 11.1:            Avg: 597455.16 | SE +/- 1267.23, N = 3 | Min: 594942.98 / Max: 599001.66
  GCC 12.0.0 20210701: Avg: 601830.26 | SE +/- 2406.19, N = 3 | Min: 597493.85 / Max: 605805.64
1. (CC) gcc options: -O2 -O3 -march=native -lrt" -lrt
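Result pages like this one can roll many tests into an overall geometric mean (one of the summary options offered above the tables). The usual approach is to normalize each test against a baseline so that higher is better everywhere, then take the geometric mean of the ratios. A minimal sketch with hypothetical per-test speedup ratios:

```python
import statistics

# Hypothetical per-test speedups of one compiler vs. a baseline compiler
# (>1.0 means faster than the baseline on that test)
ratios = [1.05, 0.92, 1.10, 1.00]
print(round(statistics.geometric_mean(ratios), 4))  # -> 1.0153
```

The geometric mean is used rather than the arithmetic mean because ratios multiply: a 2x win on one test and a 2x loss on another correctly cancel to 1.0.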

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 19, Long Mode - Decompression Speed (MB/s, More Is Better)
  GCC 8.5:             Avg: 2819.23 | SE +/- 10.70, N = 3 | Min: 2807.6 / Max: 2840.6
  GCC 9.4:             Avg: 3017.57 | SE +/- 4.29, N = 3  | Min: 3012 / Max: 3026
  GCC 10.3:            Avg: 2876.87 | SE +/- 2.75, N = 3  | Min: 2871.5 / Max: 2880.6
  GCC 11.1:            Avg: 2775.67 | SE +/- 14.84, N = 3 | Min: 2746 / Max: 2791.5
  GCC 12.0.0 20210701: Avg: 2782.73 | SE +/- 2.25, N = 3  | Min: 2779.6 / Max: 2787.1
1. (CC) gcc options: -O3 -march=native -pthread -lz

Botan

Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.

Botan 2.17.3 - Test: CAST-256 - Decrypt (MiB/s, More Is Better)
  GCC 8.5:             Avg: 152.43 | SE +/- 0.01, N = 3 | Min: 152.41 / Max: 152.45
  GCC 9.4:             Avg: 150.88 | SE +/- 0.03, N = 3 | Min: 150.83 / Max: 150.93
  GCC 10.3:            Avg: 151.25 | SE +/- 0.06, N = 3 | Min: 151.18 / Max: 151.37
  GCC 11.1:            Avg: 140.85 | SE +/- 0.50, N = 3 | Min: 139.84 / Max: 141.37
  GCC 12.0.0 20210701: Avg: 140.48 | SE +/- 0.28, N = 3 | Min: 139.92 / Max: 140.8
1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

Botan 2.17.3 - Test: CAST-256 (MiB/s, More Is Better)
  GCC 8.5:             Avg: 152.24 | SE +/- 0.01, N = 3 | Min: 152.22 / Max: 152.26
  GCC 9.4:             Avg: 150.59 | SE +/- 0.03, N = 3 | Min: 150.56 / Max: 150.64
  GCC 10.3:            Avg: 151.18 | SE +/- 0.04, N = 3 | Min: 151.15 / Max: 151.26
  GCC 11.1:            Avg: 141.03 | SE +/- 0.02, N = 3 | Min: 140.99 / Max: 141.07
  GCC 12.0.0 20210701: Avg: 140.31 | SE +/- 0.30, N = 3 | Min: 139.72 / Max: 140.66
1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

Gcrypt Library

Libgcrypt is a general purpose cryptographic library developed as part of the GnuPG project. This is a benchmark of libgcrypt's integrated benchmark, measuring the time to run the benchmark command with a cipher/mac/hash repetition count set to 50, as a simple, high-level look at the overall crypto performance of the system under test. Learn more via the OpenBenchmarking.org test page.

Gcrypt Library 1.9 (Seconds, Fewer Is Better)
  GCC 8.5:             Avg: 194.12 | SE +/- 0.27, N = 3 | Min: 193.71 / Max: 194.64
  GCC 9.4:             Avg: 193.97 | SE +/- 0.21, N = 3 | Min: 193.71 / Max: 194.39
  GCC 10.3:            Avg: 193.41 | SE +/- 0.44, N = 3 | Min: 192.84 / Max: 194.28
  GCC 11.1:            Avg: 208.34 | SE +/- 0.18, N = 3 | Min: 208.04 / Max: 208.65
  GCC 12.0.0 20210701: Avg: 196.22 | SE +/- 0.33, N = 3 | Min: 195.61 / Max: 196.74
1. (CC) gcc options: -O3 -march=native -fvisibility=hidden -lgpg-error

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.2 - Model: inception-v3 (ms, Fewer Is Better)
  GCC 8.5:             Avg: 30.15 | SE +/- 0.10, N = 3  | Min: 29.96 / Max: 30.28 | MIN: 29.77 / MAX: 30.53
  GCC 9.4:             Avg: 32.41 | SE +/- 0.28, N = 15 | Min: 29.33 / Max: 33.73 | MIN: 29.17 / MAX: 33.89
  GCC 10.3:            Avg: 30.80 | SE +/- 0.42, N = 3  | Min: 30.28 / Max: 31.63 | MIN: 30.12 / MAX: 31.88
  GCC 11.1:            Avg: 31.28 | SE +/- 0.42, N = 3  | Min: 30.44 / Max: 31.78 | MIN: 30.28 / MAX: 31.97
  GCC 12.0.0 20210701: Avg: 30.80 | SE +/- 0.43, N = 3  | Min: 30.36 / Max: 31.66 | MIN: 30.14 / MAX: 31.87
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

NCNN

NCNN is a high-performance neural network inference framework, developed by Tencent, that is optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20210525 - Target: CPU - Model: googlenet (ms, Fewer Is Better)
  GCC 8.5:             Avg: 12.73 | SE +/- 0.23, N = 3 | Min: 12.27 / Max: 12.96 | MIN: 12.1 / MAX: 19.86
  GCC 9.4:             Avg: 12.44 | SE +/- 0.28, N = 3 | Min: 12.13 / Max: 13.01 | MIN: 12 / MAX: 13.22
  GCC 10.3:            Avg: 13.06 | SE +/- 0.08, N = 3 | Min: 12.96 / Max: 13.21 | MIN: 12.84 / MAX: 14.29
  GCC 11.1:            Avg: 12.84 | SE +/- 0.02, N = 3 | Min: 12.82 / Max: 12.88 | MIN: 12.68 / MAX: 16.73
  GCC 12.0.0 20210701: Avg: 12.21 | SE +/- 0.31, N = 3 | Min: 11.89 / Max: 12.82 | MIN: 11.77 / MAX: 13.02
1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread -pthread

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: SqueezeNet v2 (ms, Fewer Is Better)
  GCC 8.5:             Avg: 70.12 | SE +/- 0.03, N = 3 | Min: 70.08 / Max: 70.19 | MIN: 69.44 / MAX: 71.63
  GCC 9.4:             Avg: 73.94 | SE +/- 1.05, N = 3 | Min: 72.67 / Max: 76.03 | MIN: 72.33 / MAX: 77.08
  GCC 10.3:            Avg: 69.30 | SE +/- 0.07, N = 3 | Min: 69.16 / Max: 69.4 | MIN: 68.65 / MAX: 70.61
  GCC 11.1:            Avg: 69.80 | SE +/- 0.02, N = 3 | Min: 69.77 / Max: 69.85 | MIN: 69.16 / MAX: 70.99
  GCC 12.0.0 20210701: Avg: 69.23 | SE +/- 0.03, N = 3 | Min: 69.19 / Max: 69.29 | MIN: 68.59 / MAX: 70.34
1. (CXX) g++ options: -O3 -march=native -fopenmp -pthread -fvisibility=hidden -fvisibility=default -rdynamic -ldl

NCNN

NCNN is a high-performance neural network inference framework, developed by Tencent, that is optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20210525 - Target: CPU - Model: efficientnet-b0 (ms, Fewer Is Better)
  GCC 8.5:             Avg: 6.69 | SE +/- 0.03, N = 3 | Min: 6.63 / Max: 6.73 | MIN: 6.33 / MAX: 10.85
  GCC 9.4:             Avg: 6.57 | SE +/- 0.01, N = 3 | Min: 6.56 / Max: 6.6 | MIN: 6.28 / MAX: 14.64
  GCC 10.3:            Avg: 6.62 | SE +/- 0.11, N = 3 | Min: 6.5 / Max: 6.84 | MIN: 6.24 / MAX: 24.41
  GCC 11.1:            Avg: 6.56 | SE +/- 0.02, N = 3 | Min: 6.52 / Max: 6.59 | MIN: 6.27 / MAX: 11.76
  GCC 12.0.0 20210701: Avg: 6.27 | SE +/- 0.09, N = 3 | Min: 6.16 / Max: 6.44 | MIN: 6.05 / MAX: 14.37
1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread -pthread

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 19 - Decompression Speed (MB/s, More Is Better)
  GCC 8.5:             Avg: 2676.83 | SE +/- 3.24, N = 3  | Min: 2670.7 / Max: 2681.7
  GCC 9.4:             Avg: 2802.7  | SE +/- 14.28, N = 3 | Min: 2785.2 / Max: 2831
  GCC 10.3:            Avg: 2701.9  | SE +/- 9.76, N = 3  | Min: 2682.8 / Max: 2714.9
  GCC 11.1:            Avg: 2642.22 | SE +/- 7.69, N = 9  | Min: 2612.7 / Max: 2675.1
  GCC 12.0.0 20210701: Avg: 2773.9  | SE +/- 16.86, N = 3 | Min: 2742.3 / Max: 2799.9
1. (CC) gcc options: -O3 -march=native -pthread -lz

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: HWB Color Space (Iterations Per Minute, More Is Better)
  GCC 8.5:             Avg: 903    | SE +/- 1.15, N = 3 | Min: 901 / Max: 905
  GCC 9.4:             Avg: 863.67 | SE +/- 1.33, N = 3 | Min: 861 / Max: 865
  GCC 10.3:            Avg: 864.33 | SE +/- 0.88, N = 3 | Min: 863 / Max: 866
  GCC 11.1:            Avg: 916
  GCC 12.0.0 20210701: Avg: 884
1. (CC) gcc options: -fopenmp -O3 -march=native -pthread -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -lz -lm -lpthread

NCNN

NCNN is a high-performance neural network inference framework, developed by Tencent, that is optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20210525 - Target: CPU - Model: resnet18 (ms, Fewer Is Better)
  GCC 8.5:             Avg: 10.54 | SE +/- 0.23, N = 3 | Min: 10.27 / Max: 11 | MIN: 10.19 / MAX: 17.98
  GCC 9.4:             Avg: 10.58 | SE +/- 0.28, N = 3 | Min: 10.28 / Max: 11.13 | MIN: 10.2 / MAX: 11.57
  GCC 10.3:            Avg: 11.16 | SE +/- 0.04, N = 3 | Min: 11.1 / Max: 11.23 | MIN: 11.03 / MAX: 11.45
  GCC 11.1:            Avg: 10.97 | SE +/- 0.04, N = 3 | Min: 10.92 / Max: 11.04 | MIN: 10.84 / MAX: 20.49
  GCC 12.0.0 20210701: Avg: 10.93 | SE +/- 0.01, N = 3 | Min: 10.91 / Max: 10.94 | MIN: 10.84 / MAX: 11.27
1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread -pthread

NCNN 20210525 - Target: CPU - Model: mnasnet (ms, Fewer Is Better)
  GCC 8.5:             Avg: 4.82 | SE +/- 0.02, N = 3 | Min: 4.8 / Max: 4.87 | MIN: 4.55 / MAX: 12.4
  GCC 9.4:             Avg: 4.73 | SE +/- 0.02, N = 3 | Min: 4.69 / Max: 4.77 | MIN: 4.44 / MAX: 11.54
  GCC 10.3:            Avg: 4.80 | SE +/- 0.10, N = 3 | Min: 4.63 / Max: 4.98 | MIN: 4.42 / MAX: 16.22
  GCC 11.1:            Avg: 4.72 | SE +/- 0.03, N = 3 | Min: 4.66 / Max: 4.77 | MIN: 4.46 / MAX: 10.72
  GCC 12.0.0 20210701: Avg: 4.58 | SE +/- 0.05, N = 3 | Min: 4.49 / Max: 4.67 | MIN: 4.39 / MAX: 10.87
1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread -pthread

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.10.0 - Speed: Speed 5 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  GCC 8.5:             Avg: 8.22 | SE +/- 0.03, N = 3 | Min: 8.19 / Max: 8.29
  GCC 9.4:             Avg: 8.61 | SE +/- 0.01, N = 3 | Min: 8.59 / Max: 8.63
  GCC 10.3:            Avg: 8.59 | SE +/- 0.01, N = 3 | Min: 8.58 / Max: 8.6
  GCC 11.1:            Avg: 8.63 | SE +/- 0.01, N = 3 | Min: 8.61 / Max: 8.65
  GCC 12.0.0 20210701: Avg: 8.65 | SE +/- 0.01, N = 3 | Min: 8.63 / Max: 8.66
1. (CXX) g++ options: -m64 -lm -lpthread -O3 -march=native -fPIC -U_FORTIFY_SOURCE -std=gnu++11

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.2 - Model: mobilenetV3 (ms, Fewer Is Better)
  GCC 8.5:             Avg: 2.403 | SE +/- 0.012, N = 3  | Min: 2.38 / Max: 2.42 | MIN: 2.28 / MAX: 2.54
  GCC 9.4:             Avg: 2.297 | SE +/- 0.015, N = 15 | Min: 2.15 / Max: 2.38 | MIN: 1.96 / MAX: 2.53
  GCC 10.3:            Avg: 2.341 | SE +/- 0.011, N = 3  | Min: 2.32 / Max: 2.36 | MIN: 2.16 / MAX: 2.53
  GCC 11.1:            Avg: 2.412 | SE +/- 0.032, N = 3  | Min: 2.35 / Max: 2.47 | MIN: 2.23 / MAX: 2.61
  GCC 12.0.0 20210701: Avg: 2.372 | SE +/- 0.011, N = 3  | Min: 2.35 / Max: 2.39 | MIN: 2.25 / MAX: 2.5
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Highest Compression (Encode Time - Seconds, Fewer Is Better)
  GCC 8.5:             Avg: 6.642 | SE +/- 0.007, N = 3 | Min: 6.63 / Max: 6.66
  GCC 9.4:             Avg: 6.560 | SE +/- 0.007, N = 3 | Min: 6.55 / Max: 6.57
  GCC 10.3:            Avg: 6.866 | SE +/- 0.003, N = 3 | Min: 6.86 / Max: 6.87
  GCC 11.1:            Avg: 6.780 | SE +/- 0.002, N = 3 | Min: 6.78 / Max: 6.78
  GCC 12.0.0 20210701: Avg: 6.548 | SE +/- 0.043, N = 3 | Min: 6.5 / Max: 6.63
1. (CC) gcc options: -fvisibility=hidden -O3 -march=native -pthread -lm -ljpeg -lpng16

Crypto++

Crypto++ is a C++ class library of cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.

Crypto++ 8.2 - Test: Unkeyed Algorithms (MiB/second, More Is Better)
  GCC 8.5:             Avg: 377.78 | SE +/- 0.02, N = 3 | Min: 377.76 / Max: 377.83
  GCC 9.4:             Avg: 375.23 | SE +/- 0.65, N = 3 | Min: 373.93 / Max: 376.01
  GCC 10.3:            Avg: 372.58 | SE +/- 0.17, N = 3 | Min: 372.31 / Max: 372.9
  GCC 11.1:            Avg: 360.41 | SE +/- 0.04, N = 3 | Min: 360.33 / Max: 360.48
  GCC 12.0.0 20210701: Avg: 374.64 | SE +/- 0.68, N = 3 | Min: 373.55 / Max: 375.88
1. (CXX) g++ options: -O3 -march=native -fPIC -pthread -pipe

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
  GCC 8.5:             Avg: 1573.18 | SE +/- 0.81, N = 3 | Min: 1571.58 / Max: 1574.21 | MIN: 1566.72
  GCC 9.4:             Avg: 1639.78 | SE +/- 1.53, N = 3 | Min: 1636.88 / Max: 1642.08 | MIN: 1630.86
  GCC 10.3:            Avg: 1566.75 | SE +/- 1.06, N = 3 | Min: 1565.34 / Max: 1568.82 | MIN: 1558.77
  GCC 11.1:            Avg: 1566.25 | SE +/- 3.72, N = 3 | Min: 1559.15 / Max: 1571.73 | MIN: 1553.36
  GCC 12.0.0 20210701: Avg: 1564.71 | SE +/- 1.54, N = 3 | Min: 1562.11 / Max: 1567.45 | MIN: 1557.51
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  GCC 8.5:             Avg: 1573.03 | SE +/- 1.09, N = 3 | Min: 1571.3 / Max: 1575.04 | MIN: 1566.77
  GCC 9.4:             Avg: 1636.96 | SE +/- 2.09, N = 3 | Min: 1632.85 / Max: 1639.69 | MIN: 1629.18
  GCC 10.3:            Avg: 1565.41 | SE +/- 0.93, N = 3 | Min: 1563.92 / Max: 1567.11 | MIN: 1559.12
  GCC 11.1:            Avg: 1563.05 | SE +/- 2.29, N = 3 | Min: 1560.18 / Max: 1567.58 | MIN: 1555.64
  GCC 12.0.0 20210701: Avg: 1567.41 | SE +/- 0.75, N = 3 | Min: 1566.42 / Max: 1568.88 | MIN: 1561.17
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
  GCC 8.5:             Avg: 9.82858 | SE +/- 0.02091, N = 3 | Min: 9.79 / Max: 9.86 | MIN: 9.53
  GCC 9.4:             Avg: 9.79185 | SE +/- 0.00795, N = 3 | Min: 9.78 / Max: 9.8 | MIN: 9.62
  GCC 10.3:            Avg: 9.39062 | SE +/- 0.02137, N = 3 | Min: 9.35 / Max: 9.42 | MIN: 9.29
  GCC 11.1:            Avg: 9.41346 | SE +/- 0.01742, N = 3 | Min: 9.38 / Max: 9.44 | MIN: 9.27
  GCC 12.0.0 20210701: Avg: 9.59452 | SE +/- 0.02038, N = 3 | Min: 9.55 / Max: 9.62 | MIN: 9.4
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 8 - Decompression Speed (MB/s, More Is Better)
  GCC 8.5:             Avg: 3361.67 | SE +/- 2.28, N = 3 | Min: 3357.8 / Max: 3365.7
  GCC 9.4:             Avg: 3436.82 | SE +/- 2.27, N = 5 | Min: 3433.2 / Max: 3445.7
  GCC 10.3:            Avg: 3285.5  | SE +/- 3.06, N = 3 | Min: 3279.8 / Max: 3290.3
  GCC 11.1:            Avg: 3351.87 | SE +/- 5.37, N = 3 | Min: 3341.5 / Max: 3359.5
  GCC 12.0.0 20210701: Avg: 3332.73 | SE +/- 2.91, N = 3 | Min: 3328.5 / Max: 3338.3
1. (CC) gcc options: -O3 -march=native -pthread -lz
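When averaging throughput figures like these MB/s results across several workloads, a harmonic mean (one of the summary options offered on these result pages) is the appropriate average, since it correctly weights the slower cases. A minimal sketch using example throughput values:

```python
import statistics

# Example decompression throughputs in MB/s (illustrative values)
speeds_mb_s = [3361.7, 3436.8, 3285.5]
print(round(statistics.harmonic_mean(speeds_mb_s), 1))
```

The harmonic mean is always at or below the arithmetic mean; the gap widens as the individual results spread apart.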

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet," focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

Etcpak 0.7 - Configuration: DXT1 (Mpx/s, More Is Better)
  GCC 8.5:             Avg: 1445.23 | SE +/- 1.51, N = 3 | Min: 1442.22 / Max: 1446.92
  GCC 9.4:             Avg: 1450.56 | SE +/- 0.45, N = 3 | Min: 1450.06 / Max: 1451.47
  GCC 10.3:            Avg: 1484.23 | SE +/- 0.37, N = 3 | Min: 1483.77 / Max: 1484.97
  GCC 11.1:            Avg: 1468.97 | SE +/- 0.47, N = 3 | Min: 1468.16 / Max: 1469.78
  GCC 12.0.0 20210701: Avg: 1419.37 | SE +/- 1.30, N = 3 | Min: 1416.85 / Max: 1421.21
1. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

Stockfish

This is a test of Stockfish, an advanced open-source C++11 chess engine benchmark that can scale up to 512 CPU threads. Learn more via the OpenBenchmarking.org test page.

Stockfish 13 - Total Time (Nodes Per Second, More Is Better)
  GCC 8.5:             Avg: 51140012.87 | SE +/- 562135.06, N = 15 | Min: 47565086 / Max: 55438779
  GCC 9.4:             Avg: 50622551.67 | SE +/- 508293.85, N = 15 | Min: 47957719 / Max: 54966944
  GCC 10.3:            Avg: 49942276.33 | SE +/- 623930.99, N = 3  | Min: 48754719 / Max: 50867944
  GCC 11.1:            Avg: 52206963.33 | SE +/- 211973.29, N = 3  | Min: 51909352 / Max: 52617243
  GCC 12.0.0 20210701: Avg: 50734571    | SE +/- 432778.20, N = 8  | Min: 48112650 / Max: 52027553
1. (CXX) g++ options: -lgcov -m64 -lpthread -O3 -march=native -fno-exceptions -std=c++17 -pedantic -msse -msse3 -mpopcnt -mavx2 -mavx512f -mavx512bw -mavx512vnni -mavx512dq -mavx512vl -msse4.1 -mssse3 -msse2 -mbmi2 -flto -fprofile-use -fno-peel-loops -fno-tracer -flto=jobserver

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
GCC 8.5: 1.23239 (SE +/- 0.00683, N = 3; run min/max 1.22 / 1.24; MIN: 1.18)
GCC 9.4: 1.27991 (SE +/- 0.00348, N = 3; run min/max 1.27 / 1.28; MIN: 1.23)
GCC 10.3: 1.22703 (SE +/- 0.00554, N = 3; run min/max 1.22 / 1.23; MIN: 1.18)
GCC 11.1: 1.23012 (SE +/- 0.00400, N = 3; run min/max 1.22 / 1.24; MIN: 1.19)
GCC 12.0.0 20210701: 1.22496 (SE +/- 0.00255, N = 3; run min/max 1.22 / 1.23; MIN: 1.18)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.2 - Model: MobileNetV2_224 (ms, fewer is better):
GCC 8.5: 3.704 (SE +/- 0.069, N = 3; run min/max 3.57 / 3.77; MIN: 3.31 / MAX: 3.95)
GCC 9.4: 3.594 (SE +/- 0.040, N = 15; run min/max 3.3 / 3.79; MIN: 3.07 / MAX: 4.08)
GCC 10.3: 3.669 (SE +/- 0.090, N = 3; run min/max 3.55 / 3.85; MIN: 3.42 / MAX: 4.19)
GCC 11.1: 3.754 (SE +/- 0.018, N = 3; run min/max 3.72 / 3.78; MIN: 3.47 / MAX: 3.95)
GCC 12.0.0 20210701: 3.729 (SE +/- 0.058, N = 3; run min/max 3.61 / 3.79; MIN: 3.41 / MAX: 3.94)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
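The MB/s figures reported here are simply bytes processed divided by wall-clock time. A minimal sketch of that measurement in Python, using the standard library's zlib as a stand-in since CPython ships no zstd bindings (the `throughput_mb_s` helper and the sample data are illustrative, not part of the test profile):

```python
import time
import zlib

def throughput_mb_s(data: bytes, level: int) -> tuple:
    """Return (compress MB/s, decompress MB/s) for one pass over data."""
    t0 = time.perf_counter()
    compressed = zlib.compress(data, level)
    t1 = time.perf_counter()
    restored = zlib.decompress(compressed)
    t2 = time.perf_counter()
    assert restored == data  # sanity: round-trip must be lossless
    mb = len(data) / 1e6
    return mb / (t1 - t0), mb / (t2 - t1)

# ~10 MB of mildly repetitive data (illustrative stand-in for the disk image)
sample = bytes(range(256)) * 40_000
comp, decomp = throughput_mb_s(sample, level=8)
```

The real test uses the zstd CLI against a FreeBSD disk image, but the arithmetic behind the MB/s columns is the same.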

Zstd Compression 1.5.0 - Compression Level: 8, Long Mode - Decompression Speed (MB/s, more is better):
GCC 8.5: 3547.4 (SE +/- 5.55, N = 3; min 3538.2 / max 3557.4)
GCC 9.4: 3628.4 (SE +/- 2.52, N = 13; min 3606.6 / max 3640.4)
GCC 10.3: 3479.0 (SE +/- 3.49, N = 15; min 3444 / max 3489.4)
GCC 11.1: 3553.2 (SE +/- 3.35, N = 3; min 3546.9 / max 3558.3)
GCC 12.0.0 20210701: 3531.4 (SE +/- 3.04, N = 15; min 3506.5 / max 3544.9)
1. (CC) gcc options: -O3 -march=native -pthread -lz

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: VMAF Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better):
GCC 8.5: 306.08 (SE +/- 2.56, N = 13; min 276.54 / max 312.16)
GCC 9.4: 295.07 (SE +/- 4.16, N = 3; min 286.77 / max 299.68)
GCC 10.3: 299.97 (SE +/- 1.76, N = 14; min 277.88 / max 304.04)
GCC 11.1: 297.32 (SE +/- 2.83, N = 6; min 283.56 / max 302.02)
GCC 12.0.0 20210701: 293.71 (SE +/- 3.01, N = 5; min 284.24 / max 299.36)
1. (CC) gcc options: -O3 -fcommon -march=native -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20210525 - Target: CPU - Model: resnet50 (ms, fewer is better):
GCC 8.5: 18.33 (SE +/- 0.30, N = 3; run min/max 17.74 / 18.73; MIN: 17.58 / MAX: 24.57)
GCC 9.4: 17.59 (SE +/- 0.29, N = 3; run min/max 17.27 / 18.17; MIN: 17.07 / MAX: 18.69)
GCC 10.3: 17.87 (SE +/- 0.29, N = 3; run min/max 17.29 / 18.16; MIN: 17.16 / MAX: 18.62)
GCC 11.1: 17.77 (SE +/- 0.24, N = 3; run min/max 17.29 / 18.01; MIN: 17.07 / MAX: 28.68)
GCC 12.0.0 20210701: 17.74 (SE +/- 0.24, N = 3; run min/max 17.25 / 18; MIN: 17.09 / MAX: 18.96)
1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread -pthread

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Noise-Gaussian (Iterations Per Minute, more is better):
GCC 8.5: 390
GCC 9.4: 387 (SE +/- 0.33, N = 3; min 386 / max 387)
GCC 10.3: 403 (SE +/- 0.67, N = 3; min 402 / max 404)
GCC 11.1: 403 (SE +/- 0.33, N = 3; min 402 / max 403)
GCC 12.0.0 20210701: 403 (SE +/- 0.58, N = 3; min 402 / max 404)
1. (CC) gcc options: -fopenmp -O3 -march=native -pthread -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -lz -lm -lpthread

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.
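The samples/s unit below counts how many input samples pass through a FIR filter per second. A toy direct-form FIR filter in Python makes the workload concrete — it is naive and far slower than Liquid-DSP's vectorized kernels, and the `fir_filter` helper plus the 57-tap moving average are illustrative only (57 matches the benchmark's filter length, not its actual coefficients):

```python
import time

def fir_filter(taps, samples):
    """Direct-form FIR filter: y[n] = sum_k taps[k] * x[n - k]."""
    state = [0.0] * len(taps)   # delay line, most recent sample first
    out = []
    for x in samples:
        state.insert(0, x)
        state.pop()
        out.append(sum(t * s for t, s in zip(taps, state)))
    return out

# 57-tap moving average over a constant signal (illustrative workload)
taps = [1.0 / 57] * 57
signal = [1.0] * 10_000
t0 = time.perf_counter()
out = fir_filter(taps, signal)
samples_per_s = len(signal) / (time.perf_counter() - t0)
```

Once the filter has filled its delay line, a unit-gain moving average over a constant input settles at 1.0; the benchmark's samples/s figures come from the same samples-over-elapsed-time division.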

Liquid-DSP 2021.01.31 - Threads: 36 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better):
GCC 8.5: 916790000 (SE +/- 120554.28, N = 3; min 916550000 / max 916930000)
GCC 9.4: 921670000 (SE +/- 588132.64, N = 3; min 920980000 / max 922840000)
GCC 10.3: 940660000 (SE +/- 272213.15, N = 3; min 940120000 / max 940990000)
GCC 11.1: 954536667 (SE +/- 1013283.99, N = 3; min 953270000 / max 956540000)
GCC 12.0.0 20210701: 937553333 (SE +/- 1056729.76, N = 3; min 935440000 / max 938630000)
1. (CC) gcc options: -O3 -march=native -pthread -lm -lc -lliquid

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20210525 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better):
GCC 8.5: 4.72 (SE +/- 0.03, N = 3; run min/max 4.67 / 4.76; MIN: 4.49 / MAX: 7.39)
GCC 9.4: 4.57 (SE +/- 0.03, N = 3; run min/max 4.52 / 4.62; MIN: 4.37 / MAX: 9.05)
GCC 10.3: 4.64 (SE +/- 0.02, N = 3; run min/max 4.62 / 4.67; MIN: 4.36 / MAX: 10.01)
GCC 11.1: 4.66 (SE +/- 0.02, N = 3; run min/max 4.63 / 4.68; MIN: 4.46 / MAX: 12.92)
GCC 12.0.0 20210701: 4.54 (SE +/- 0.05, N = 3; run min/max 4.45 / 4.61; MIN: 4.37 / MAX: 11.47)
1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread -pthread

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.2 - Model: SqueezeNetV1.0 (ms, fewer is better):
GCC 8.5: 5.594 (SE +/- 0.056, N = 3; run min/max 5.5 / 5.69; MIN: 5.4 / MAX: 5.85)
GCC 9.4: 5.540 (SE +/- 0.039, N = 15; run min/max 5.29 / 5.75; MIN: 5.06 / MAX: 6.72)
GCC 10.3: 5.612 (SE +/- 0.083, N = 3; run min/max 5.45 / 5.73; MIN: 5.24 / MAX: 6.01)
GCC 11.1: 5.723 (SE +/- 0.020, N = 3; run min/max 5.69 / 5.75; MIN: 5.47 / MAX: 6.72)
GCC 12.0.0 20210701: 5.506 (SE +/- 0.079, N = 3; run min/max 5.36 / 5.62; MIN: 5.22 / MAX: 5.91)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Crypto++

Crypto++ is a C++ class library of cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.
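The "Keyed Algorithms" result aggregates throughput of keyed constructions such as HMACs and MACs. A rough sketch of how such a MiB/second figure is obtained, using Python's standard-library HMAC-SHA256 rather than Crypto++ itself (the all-zero key, 4 MiB payload, and algorithm choice are all illustrative assumptions):

```python
import hashlib
import hmac
import time

# Illustrative key and payload; the real test sweeps several keyed algorithms
key = b"\x00" * 32
msg = b"a" * (4 * 1024 * 1024)  # 4 MiB

t0 = time.perf_counter()
digest = hmac.new(key, msg, hashlib.sha256).digest()
elapsed = time.perf_counter() - t0

# Throughput in MiB/second, as in the table below
mib_per_s = (len(msg) / (1 << 20)) / elapsed
```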

Crypto++ 8.2 - Test: Keyed Algorithms (MiB/second, more is better):
GCC 8.5: 714.49 (SE +/- 0.10, N = 3; min 714.32 / max 714.66)
GCC 9.4: 719.76 (SE +/- 0.35, N = 3; min 719.07 / max 720.13)
GCC 10.3: 717.17 (SE +/- 0.48, N = 3; min 716.22 / max 717.81)
GCC 11.1: 692.99 (SE +/- 0.20, N = 3; min 692.58 / max 693.2)
GCC 12.0.0 20210701: 714.21 (SE +/- 0.24, N = 3; min 713.74 / max 714.56)
1. (CXX) g++ options: -O3 -march=native -fPIC -pthread -pipe

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless, Highest Compression (Encode Time - Seconds, fewer is better):
GCC 8.5: 37.70 (SE +/- 0.11, N = 3; min 37.48 / max 37.81)
GCC 9.4: 36.87 (SE +/- 0.13, N = 3; min 36.61 / max 37.02)
GCC 10.3: 37.87 (SE +/- 0.01, N = 3; min 37.86 / max 37.88)
GCC 11.1: 36.47 (SE +/- 0.00, N = 3; min 36.46 / max 36.48)
GCC 12.0.0 20210701: 37.16 (SE +/- 0.01, N = 3; min 37.15 / max 37.17)
1. (CC) gcc options: -fvisibility=hidden -O3 -march=native -pthread -lm -ljpeg -lpng16

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.9.0 - Video Input: Chimera 1080p 10-bit (FPS, more is better):
GCC 8.5: 215.16 (SE +/- 0.39, N = 3; run min/max 214.5 / 215.86; MIN: 151.62 / MAX: 411.26; extra flag: -lm)
GCC 9.4: 219.69 (SE +/- 0.34, N = 3; run min/max 219.26 / 220.36; MIN: 156.35 / MAX: 406.23; extra flag: -lm)
GCC 10.3: 222.27 (SE +/- 0.50, N = 3; run min/max 221.61 / 223.25; MIN: 157.09 / MAX: 436.51)
GCC 11.1: 223.06 (SE +/- 1.18, N = 3; run min/max 220.78 / 224.74; MIN: 157.45 / MAX: 397.96)
GCC 12.0.0 20210701: 221.94 (SE +/- 0.48, N = 3; run min/max 220.98 / 222.54; MIN: 157.38 / MAX: 404.98)
1. (CC) gcc options: -O3 -march=native -pthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20210525 - Target: CPU - Model: squeezenet_ssd (ms, fewer is better):
GCC 8.5: 15.63 (SE +/- 0.36, N = 3; run min/max 15.14 / 16.33; MIN: 15.02 / MAX: 16.85)
GCC 9.4: 15.08 (SE +/- 0.06, N = 3; run min/max 15.01 / 15.2; MIN: 14.88 / MAX: 21.62)
GCC 10.3: 15.26 (SE +/- 0.05, N = 3; run min/max 15.15 / 15.32; MIN: 14.97 / MAX: 18.92)
GCC 11.1: 15.23 (SE +/- 0.08, N = 3; run min/max 15.06 / 15.33; MIN: 14.88 / MAX: 17.07)
GCC 12.0.0 20210701: 15.28 (SE +/- 0.13, N = 3; run min/max 15.04 / 15.5; MIN: 14.89 / MAX: 16.1)
1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread -pthread

x265

This is a simple test of the x265 H.265/HEVC encoder run on the CPU with 1080p and 4K input options for video encode performance. Learn more via the OpenBenchmarking.org test page.

x265 3.4 - Video Input: Bosphorus 4K (Frames Per Second, more is better):
GCC 8.5: 21.53 (SE +/- 0.10, N = 3; min 21.4 / max 21.72)
GCC 9.4: 21.64 (SE +/- 0.12, N = 3; min 21.5 / max 21.88)
GCC 10.3: 21.84 (SE +/- 0.13, N = 3; min 21.59 / max 22)
GCC 11.1: 21.16 (SE +/- 0.11, N = 3; min 21.02 / max 21.39)
GCC 12.0.0 20210701: 21.08 (SE +/- 0.11, N = 3; min 20.92 / max 21.3)
1. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread -lrt -ldl -lnuma

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.9.0 - Video Input: Summer Nature 4K (FPS, more is better):
GCC 8.5: 197.46 (SE +/- 0.89, N = 3; run min/max 196.15 / 199.16; MIN: 150.4 / MAX: 226.05; extra flag: -lm)
GCC 9.4: 199.89 (SE +/- 1.65, N = 3; run min/max 196.73 / 202.32; MIN: 143.79 / MAX: 228.44; extra flag: -lm)
GCC 10.3: 195.15 (SE +/- 2.16, N = 3; run min/max 191.31 / 198.78; MIN: 149.2 / MAX: 222.59)
GCC 11.1: 192.95 (SE +/- 1.13, N = 3; run min/max 190.87 / 194.77; MIN: 132.83 / MAX: 217.9)
GCC 12.0.0 20210701: 194.06 (SE +/- 1.97, N = 6; run min/max 188.87 / 200.12; MIN: 131.48 / MAX: 225.93)
1. (CC) gcc options: -O3 -march=native -pthread

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.10.0 - Speed: Speed 0 - Input: Bosphorus 4K (Frames Per Second, more is better):
GCC 8.5: 4.75 (SE +/- 0.03, N = 3; min 4.71 / max 4.81)
GCC 9.4: 4.87 (SE +/- 0.04, N = 3; min 4.83 / max 4.94)
GCC 10.3: 4.84 (SE +/- 0.03, N = 3; min 4.8 / max 4.91)
GCC 11.1: 4.87 (SE +/- 0.04, N = 3; min 4.82 / max 4.95)
GCC 12.0.0 20210701: 4.92 (SE +/- 0.03, N = 3; min 4.85 / max 4.96)
1. (CXX) g++ options: -m64 -lm -lpthread -O3 -march=native -fPIC -U_FORTIFY_SOURCE -std=gnu++11

ViennaCL

ViennaCL is an open-source linear algebra library written in C++ with support for OpenCL and OpenMP. This test profile makes use of ViennaCL's built-in benchmarks. Learn more via the OpenBenchmarking.org test page.
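sAXPY, one of the BLAS kernels benchmarked here, computes y = a*x + y over single-precision vectors; its GB/s figure reflects memory traffic more than arithmetic. A naive Python sketch of the kernel and a bandwidth estimate (the three-accesses-per-element traffic model is a common convention, not something this test profile specifies):

```python
import array
import time

def saxpy(a, x, y):
    """BLAS sAXPY: y <- a*x + y, elementwise over single-precision vectors."""
    for i in range(len(x)):
        y[i] = a * x[i] + y[i]

n = 1_000_000
x = array.array("f", [1.0]) * n   # float32 vector of ones
y = array.array("f", [2.0]) * n   # float32 vector of twos

t0 = time.perf_counter()
saxpy(3.0, x, y)
elapsed = time.perf_counter() - t0

# Traffic model: read x, read y, write y = 3 float32 (4-byte) accesses per element
gb_per_s = 3 * 4 * n / elapsed / 1e9
```

A pure-Python loop will report a tiny fraction of the ~70 GB/s below; the point is only how the unit is derived.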

ViennaCL 1.7.1 - Test: CPU BLAS - sAXPY (GB/s, more is better):
GCC 8.5: 68.8 (SE +/- 1.48, N = 3; min 65.9 / max 70.7)
GCC 9.4: 71.2 (SE +/- 0.12, N = 3; min 71 / max 71.4)
GCC 10.3: 70.3 (SE +/- 0.30, N = 3; min 69.9 / max 70.9)
GCC 11.1: 70.7 (SE +/- 0.31, N = 3; min 70.3 / max 71.3)
GCC 12.0.0 20210701: 70.1 (SE +/- 0.35, N = 3; min 69.5 / max 70.7)
1. (CXX) g++ options: -O3 -march=native -fopenmp -rdynamic -lOpenCL

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: SqueezeNet v1.1 (ms, fewer is better):
GCC 8.5: 289.96 (SE +/- 0.07, N = 3; run min/max 289.82 / 290.04; MIN: 288.43 / MAX: 291.61)
GCC 9.4: 296.14 (SE +/- 0.07, N = 3; run min/max 296.07 / 296.28; MIN: 294.66 / MAX: 298.56)
GCC 10.3: 286.22 (SE +/- 0.10, N = 3; run min/max 286.01 / 286.33; MIN: 285.02 / MAX: 287.82)
GCC 11.1: 288.91 (SE +/- 0.63, N = 3; run min/max 288.09 / 290.15; MIN: 286.05 / MAX: 294.45)
GCC 12.0.0 20210701: 287.49 (SE +/- 0.19, N = 3; run min/max 287.18 / 287.84; MIN: 285.88 / MAX: 299.62)
1. (CXX) g++ options: -O3 -march=native -fopenmp -pthread -fvisibility=hidden -fvisibility=default -rdynamic -ldl

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.1 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K (Frames Per Second, more is better):
GCC 8.5: 4.46 (SE +/- 0.06, N = 3; min 4.35 / max 4.55)
GCC 9.4: 4.44 (SE +/- 0.05, N = 3; min 4.39 / max 4.53)
GCC 10.3: 4.48 (SE +/- 0.04, N = 3; min 4.41 / max 4.55)
GCC 11.1: 4.33 (SE +/- 0.04, N = 3; min 4.25 / max 4.38)
GCC 12.0.0 20210701: 4.40 (SE +/- 0.03, N = 10; min 4.28 / max 4.56)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

AOM AV1 3.1 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K (Frames Per Second, more is better):
GCC 8.5: 27.18 (SE +/- 0.04, N = 3; min 27.12 / max 27.27)
GCC 9.4: 27.24 (SE +/- 0.07, N = 3; min 27.1 / max 27.35)
GCC 10.3: 27.86 (SE +/- 0.15, N = 3; min 27.56 / max 28.04)
GCC 11.1: 28.07 (SE +/- 0.08, N = 3; min 27.98 / max 28.22)
GCC 12.0.0 20210701: 28.10 (SE +/- 0.04, N = 3; min 28.02 / max 28.16)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20210525 - Target: CPU - Model: regnety_400m (ms, fewer is better):
GCC 8.5: 13.77 (SE +/- 0.14, N = 3; run min/max 13.55 / 14.04; MIN: 12.93 / MAX: 14.62)
GCC 9.4: 13.87 (SE +/- 0.21, N = 3; run min/max 13.51 / 14.22; MIN: 13.18 / MAX: 15.01)
GCC 10.3: 13.63 (SE +/- 0.14, N = 3; run min/max 13.37 / 13.86; MIN: 13.01 / MAX: 14.57)
GCC 11.1: 13.93 (SE +/- 0.11, N = 3; run min/max 13.73 / 14.12; MIN: 13.17 / MAX: 14.48)
GCC 12.0.0 20210701: 13.48 (SE +/- 0.05, N = 3; run min/max 13.42 / 13.58; MIN: 13.11 / MAX: 14.04)
1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread -pthread

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet," focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

Etcpak 0.7 - Configuration: ETC1 + Dithering (Mpx/s, more is better):
GCC 8.5: 329.13 (SE +/- 2.72, N = 3; min 323.83 / max 332.87)
GCC 9.4: 336.11 (SE +/- 0.09, N = 3; min 335.96 / max 336.28)
GCC 10.3: 329.49 (SE +/- 0.38, N = 3; min 328.73 / max 329.89)
GCC 11.1: 327.33 (SE +/- 0.20, N = 3; min 326.94 / max 327.59)
GCC 12.0.0 20210701: 325.28 (SE +/- 0.05, N = 3; min 325.21 / max 325.38)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

ViennaCL

ViennaCL is an open-source linear algebra library written in C++ with support for OpenCL and OpenMP. This test profile makes use of ViennaCL's built-in benchmarks. Learn more via the OpenBenchmarking.org test page.

ViennaCL 1.7.1 - Test: CPU BLAS - dGEMV-N (GB/s, more is better):
GCC 8.5: 69.5 (SE +/- 0.12, N = 3; min 69.3 / max 69.7)
GCC 9.4: 71.8 (SE +/- 0.07, N = 3; min 71.7 / max 71.9)
GCC 10.3: 71.6 (SE +/- 0.03, N = 3; min 71.6 / max 71.7)
GCC 11.1: 71.7 (SE +/- 0.09, N = 3; min 71.6 / max 71.9)
GCC 12.0.0 20210701: 71.4 (SE +/- 0.31, N = 3; min 70.8 / max 71.8)
1. (CXX) g++ options: -O3 -march=native -fopenmp -rdynamic -lOpenCL

Opus Codec Encoding

Opus is an open, lossy audio codec designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.

Opus Codec Encoding 1.3.1 - WAV To Opus Encode (Seconds, fewer is better):
GCC 8.5: 8.283 (SE +/- 0.011, N = 5; min 8.27 / max 8.33)
GCC 9.4: 8.241 (SE +/- 0.021, N = 5; min 8.22 / max 8.32)
GCC 10.3: 8.283 (SE +/- 0.031, N = 5; min 8.23 / max 8.4)
GCC 11.1: 8.456 (SE +/- 0.013, N = 5; min 8.43 / max 8.5)
GCC 12.0.0 20210701: 8.186 (SE +/- 0.011, N = 5; min 8.17 / max 8.23)
1. (CXX) g++ options: -O3 -march=native -fvisibility=hidden -logg -lm

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 8 - Compression Speed (MB/s, more is better):
GCC 8.5: 425.7 (SE +/- 4.99, N = 3; min 417.1 / max 434.4)
GCC 9.4: 429.4 (SE +/- 4.23, N = 5; min 420.1 / max 443.6)
GCC 10.3: 424.0 (SE +/- 4.87, N = 3; min 414.4 / max 430.2)
GCC 11.1: 419.5 (SE +/- 5.64, N = 3; min 408.2 / max 425.4)
GCC 12.0.0 20210701: 432.7 (SE +/- 5.56, N = 3; min 421.6 / max 438.7)
1. (CC) gcc options: -O3 -march=native -pthread -lz

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better):
GCC 8.5: 2.94903 (SE +/- 0.01360, N = 3; run min/max 2.92 / 2.97; MIN: 2.85)
GCC 9.4: 2.86008 (SE +/- 0.01601, N = 3; run min/max 2.83 / 2.89; MIN: 2.77)
GCC 10.3: 2.93978 (SE +/- 0.01261, N = 3; run min/max 2.92 / 2.96; MIN: 2.83)
GCC 11.1: 2.93428 (SE +/- 0.01584, N = 3; run min/max 2.9 / 2.95; MIN: 2.83)
GCC 12.0.0 20210701: 2.94670 (SE +/- 0.01797, N = 3; run min/max 2.91 / 2.97; MIN: 2.84)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Tachyon

This is a test of the threaded Tachyon, a parallel ray-tracing system, measuring the time to ray-trace a sample scene. Learn more via the OpenBenchmarking.org test page.

Tachyon 0.99b6 - Total Time (Seconds, fewer is better):
GCC 8.5: 48.16 (SE +/- 0.08, N = 3; min 48.06 / max 48.32)
GCC 9.4: 47.88 (SE +/- 0.08, N = 3; min 47.79 / max 48.04)
GCC 10.3: 47.90 (SE +/- 0.08, N = 3; min 47.76 / max 48.03)
GCC 11.1: 47.87 (SE +/- 0.26, N = 3; min 47.56 / max 48.39)
GCC 12.0.0 20210701: 49.32 (SE +/- 0.18, N = 3; min 49.09 / max 49.67)
1. (CC) gcc options: -m64 -O3 -fomit-frame-pointer -ffast-math -ltachyon -lm -lpthread

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31 - Threads: 32 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better):
GCC 8.5: 924630000 (SE +/- 353836.12, N = 3; min 924190000 / max 925330000)
GCC 9.4: 930236667 (SE +/- 539269.05, N = 3; min 929160000 / max 930830000)
GCC 10.3: 939716667 (SE +/- 2904171.33, N = 3; min 935120000 / max 945090000)
GCC 11.1: 951170000 (SE +/- 4781007.56, N = 3; min 942480000 / max 958970000)
GCC 12.0.0 20210701: 944433333 (SE +/- 4623189.13, N = 3; min 937590000 / max 953240000)
1. (CC) gcc options: -O3 -march=native -pthread -lm -lc -lliquid

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better):
GCC 8.5: 310.79 (SE +/- 0.21, N = 3; min 310.42 / max 311.15)
GCC 9.4: 302.25 (SE +/- 1.42, N = 3; min 300.29 / max 305.01)
GCC 10.3: 306.89 (SE +/- 1.79, N = 3; min 303.52 / max 309.63)
GCC 11.1: 305.72 (SE +/- 2.35, N = 3; min 301.05 / max 308.58)
GCC 12.0.0 20210701: 303.54 (SE +/- 0.68, N = 3; min 302.5 / max 304.82)
1. (CC) gcc options: -O3 -fcommon -march=native -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better):
GCC 8.5: 939.92 (SE +/- 1.32, N = 3; run min/max 937.41 / 941.9; MIN: 933.27)
GCC 9.4: 961.38 (SE +/- 1.46, N = 3; run min/max 958.56 / 963.43; MIN: 955.17)
GCC 10.3: 938.22 (SE +/- 0.16, N = 3; run min/max 937.99 / 938.53; MIN: 930.52)
GCC 11.1: 936.88 (SE +/- 0.70, N = 3; run min/max 935.96 / 938.26; MIN: 931.54)
GCC 12.0.0 20210701: 935.38 (SE +/- 0.15, N = 3; run min/max 935.23 / 935.68; MIN: 931.25)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210525Target: CPU - Model: blazefaceGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107010.58731.17461.76192.34922.9365SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.07, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 32.562.542.612.552.54MIN: 2.5 / MAX: 3.31MIN: 2.45 / MAX: 3.32MIN: 2.47 / MAX: 3.3MIN: 2.47 / MAX: 3.17MIN: 2.46 / MAX: 3.121. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread -pthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210525Target: CPU - Model: blazefaceGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 20210701246810Min: 2.55 / Avg: 2.56 / Max: 2.57Min: 2.51 / Avg: 2.54 / Max: 2.57Min: 2.52 / Avg: 2.61 / Max: 2.74Min: 2.52 / Avg: 2.55 / Max: 2.57Min: 2.51 / Avg: 2.54 / Max: 2.581. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread -pthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210525Target: CPU - Model: mobilenetGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 2021070148121620SE +/- 0.02, N = 3SE +/- 0.04, N = 3SE +/- 0.06, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 314.1613.9413.8113.7913.81MIN: 13.93 / MAX: 14.76MIN: 13.68 / MAX: 22.27MIN: 13.51 / MAX: 20.36MIN: 13.61 / MAX: 14.23MIN: 13.64 / MAX: 14.541. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread -pthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210525Target: CPU - Model: mobilenetGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 2021070148121620Min: 14.12 / Avg: 14.16 / Max: 14.19Min: 13.9 / Avg: 13.94 / Max: 14.03Min: 13.69 / Avg: 13.81 / Max: 13.9Min: 13.77 / Avg: 13.79 / Max: 13.81Min: 13.79 / Avg: 13.81 / Max: 13.831. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread -pthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.1.2Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPUGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107010.40310.80621.20931.61242.0155SE +/- 0.00719, N = 3SE +/- 0.00782, N = 3SE +/- 0.00494, N = 3SE +/- 0.00508, N = 3SE +/- 0.00613, N = 31.763081.791501.755201.762641.74471MIN: 1.7MIN: 1.74MIN: 1.69MIN: 1.69MIN: 1.661. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.1.2Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPUGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 20210701246810Min: 1.75 / Avg: 1.76 / Max: 1.77Min: 1.78 / Avg: 1.79 / Max: 1.8Min: 1.75 / Avg: 1.76 / Max: 1.76Min: 1.75 / Avg: 1.76 / Max: 1.77Min: 1.73 / Avg: 1.74 / Max: 1.751. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgEncode Time - Seconds, Fewer Is BetterWebP Image Encode 1.1Encode Settings: Quality 100, LosslessGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 2021070148121620SE +/- 0.06, N = 3SE +/- 0.02, N = 3SE +/- 0.06, N = 3SE +/- 0.05, N = 3SE +/- 0.01, N = 317.2016.8217.2516.8016.891. (CC) gcc options: -fvisibility=hidden -O3 -march=native -pthread -lm -ljpeg -lpng16
OpenBenchmarking.orgEncode Time - Seconds, Fewer Is BetterWebP Image Encode 1.1Encode Settings: Quality 100, LosslessGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 2021070148121620Min: 17.08 / Avg: 17.2 / Max: 17.27Min: 16.81 / Avg: 16.82 / Max: 16.86Min: 17.14 / Avg: 17.25 / Max: 17.31Min: 16.69 / Avg: 16.8 / Max: 16.86Min: 16.88 / Avg: 16.89 / Max: 16.91. (CC) gcc options: -fvisibility=hidden -O3 -march=native -pthread -lm -ljpeg -lpng16

ViennaCL

ViennaCL is an open-source linear algebra library written in C++ and with support for OpenCL and OpenMP. This test profile makes use of ViennaCL's built-in benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgGB/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - sCOPYGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107011122334455SE +/- 0.27, N = 3SE +/- 0.17, N = 3SE +/- 0.12, N = 3SE +/- 0.07, N = 3SE +/- 0.09, N = 345.947.146.446.746.41. (CXX) g++ options: -O3 -march=native -fopenmp -rdynamic -lOpenCL
OpenBenchmarking.orgGB/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - sCOPYGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107011020304050Min: 45.4 / Avg: 45.93 / Max: 46.2Min: 46.8 / Avg: 47.13 / Max: 47.3Min: 46.2 / Avg: 46.43 / Max: 46.6Min: 46.6 / Avg: 46.73 / Max: 46.8Min: 46.2 / Avg: 46.37 / Max: 46.51. (CXX) g++ options: -O3 -march=native -fopenmp -rdynamic -lOpenCL

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.1.2Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPUGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107012004006008001000SE +/- 0.50, N = 3SE +/- 0.63, N = 3SE +/- 0.69, N = 3SE +/- 0.60, N = 3SE +/- 0.46, N = 3937.61960.13935.96938.04936.49MIN: 932.91MIN: 955.07MIN: 930.6MIN: 933.04MIN: 932.181. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.1.2Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPUGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107012004006008001000Min: 936.67 / Avg: 937.61 / Max: 938.36Min: 959.01 / Avg: 960.13 / Max: 961.18Min: 934.73 / Avg: 935.96 / Max: 937.11Min: 936.84 / Avg: 938.04 / Max: 938.73Min: 935.88 / Avg: 936.49 / Max: 937.391. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Timed HMMer Search

This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of the Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed HMMer Search 3.3.2Pfam Database SearchGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 20210701306090120150SE +/- 0.06, N = 3SE +/- 0.10, N = 3SE +/- 0.32, N = 3SE +/- 0.11, N = 3SE +/- 0.10, N = 3126.20126.51126.69126.62129.451. (CC) gcc options: -O3 -march=native -pthread -lhmmer -leasel -lm -lmpi
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed HMMer Search 3.3.2Pfam Database SearchGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 2021070120406080100Min: 126.08 / Avg: 126.2 / Max: 126.27Min: 126.32 / Avg: 126.51 / Max: 126.65Min: 126.34 / Avg: 126.69 / Max: 127.32Min: 126.4 / Avg: 126.62 / Max: 126.76Min: 129.27 / Avg: 129.45 / Max: 129.591. (CC) gcc options: -O3 -march=native -pthread -lhmmer -leasel -lm -lmpi
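Profile-HMM search, as timed above, spends its time in dynamic-programming recurrences over hidden states. As a toy illustration of that family of algorithms, here is the forward algorithm on a two-state HMM with binary observations — the core recurrence idea only, not HMMer's actual profile-HMM architecture or code:

```python
def forward(obs, start, trans, emit):
    """Forward algorithm: total probability of an observation sequence
    under an HMM, summing over all hidden-state paths."""
    states = range(len(start))
    # Initialize with start probabilities times the first emission
    alpha = [start[s] * emit[s][obs[0]] for s in states]
    for o in obs[1:]:
        # Each new alpha[s] sums over all predecessor states p
        alpha = [sum(alpha[p] * trans[p][s] for p in states) * emit[s][o]
                 for s in states]
    return sum(alpha)

# Toy 2-state HMM emitting symbols 0/1 (all values hypothetical)
start = [0.6, 0.4]
trans = [[0.7, 0.3], [0.4, 0.6]]
emit  = [[0.9, 0.1], [0.2, 0.8]]
print(forward([0, 1, 0], start, trans, emit))
```

The dynamic program evaluates the same quantity as brute-force enumeration over all state paths, but in time linear in the sequence length.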

Botan

Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMiB/s, More Is BetterBotan 2.17.3Test: KASUMIGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 2021070120406080100SE +/- 0.01, N = 3SE +/- 0.12, N = 3SE +/- 0.06, N = 3SE +/- 0.07, N = 3SE +/- 0.25, N = 399.72100.4198.20100.70100.611. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.orgMiB/s, More Is BetterBotan 2.17.3Test: KASUMIGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 2021070120406080100Min: 99.7 / Avg: 99.72 / Max: 99.73Min: 100.17 / Avg: 100.41 / Max: 100.55Min: 98.08 / Avg: 98.2 / Max: 98.26Min: 100.55 / Avg: 100.7 / Max: 100.79Min: 100.11 / Avg: 100.61 / Max: 100.871. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

Ngspice

Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterNgspice 34Circuit: C7552GCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 20210701306090120150SE +/- 1.35, N = 3SE +/- 1.54, N = 3SE +/- 1.51, N = 3SE +/- 0.26, N = 3SE +/- 1.05, N = 3126.20127.69126.88129.40128.321. (CC) gcc options: -O3 -march=native -fopenmp -lm -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lSM -lICE
OpenBenchmarking.orgSeconds, Fewer Is BetterNgspice 34Circuit: C7552GCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 2021070120406080100Min: 124.83 / Avg: 126.2 / Max: 128.9Min: 124.61 / Avg: 127.69 / Max: 129.32Min: 125.36 / Avg: 126.88 / Max: 129.89Min: 129.12 / Avg: 129.4 / Max: 129.91Min: 126.31 / Avg: 128.32 / Max: 129.851. (CC) gcc options: -O3 -march=native -fopenmp -lm -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lSM -lICE

LAME MP3 Encoding

LAME is an MP3 encoder licensed under the LGPL. This test measures the time required to encode a WAV file to MP3 format. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterLAME MP3 Encoding 3.100WAV To MP3GCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 20210701246810SE +/- 0.002, N = 3SE +/- 0.011, N = 3SE +/- 0.004, N = 3SE +/- 0.005, N = 3SE +/- 0.003, N = 38.7168.5258.7308.7328.5991. (CC) gcc options: -O3 -ffast-math -funroll-loops -fschedule-insns2 -fbranch-count-reg -fforce-addr -pipe -march=native -lm
OpenBenchmarking.orgSeconds, Fewer Is BetterLAME MP3 Encoding 3.100WAV To MP3GCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107013691215Min: 8.71 / Avg: 8.72 / Max: 8.72Min: 8.51 / Avg: 8.53 / Max: 8.55Min: 8.72 / Avg: 8.73 / Max: 8.74Min: 8.72 / Avg: 8.73 / Max: 8.74Min: 8.59 / Avg: 8.6 / Max: 8.61. (CC) gcc options: -O3 -ffast-math -funroll-loops -fschedule-insns2 -fbranch-count-reg -fforce-addr -pipe -march=native -lm

7-Zip Compression

This is a test of 7-Zip using p7zip with its integrated benchmark feature or upstream 7-Zip for the Windows x64 build. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMIPS, More Is Better7-Zip Compression 16.02Compress Speed TestGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 2021070120K40K60K80K100KSE +/- 69.57, N = 3SE +/- 343.09, N = 3SE +/- 46.23, N = 3SE +/- 304.44, N = 3SE +/- 228.39, N = 399711982989742698149984931. (CXX) g++ options: -pipe -lpthread
OpenBenchmarking.orgMIPS, More Is Better7-Zip Compression 16.02Compress Speed TestGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 2021070120K40K60K80K100KMin: 99573 / Avg: 99710.67 / Max: 99797Min: 97669 / Avg: 98298 / Max: 98850Min: 97344 / Avg: 97426 / Max: 97504Min: 97557 / Avg: 98148.67 / Max: 98569Min: 98199 / Avg: 98493.33 / Max: 989431. (CXX) g++ options: -pipe -lpthread

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 19, Long Mode - Compression SpeedGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107011020304050SE +/- 0.15, N = 3SE +/- 0.13, N = 3SE +/- 0.19, N = 3SE +/- 0.07, N = 3SE +/- 0.15, N = 343.343.543.944.343.91. (CC) gcc options: -O3 -march=native -pthread -lz
OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 19, Long Mode - Compression SpeedGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 20210701918273645Min: 43 / Avg: 43.27 / Max: 43.5Min: 43.4 / Avg: 43.53 / Max: 43.8Min: 43.5 / Avg: 43.87 / Max: 44.1Min: 44.2 / Avg: 44.27 / Max: 44.4Min: 43.6 / Avg: 43.87 / Max: 44.11. (CC) gcc options: -O3 -march=native -pthread -lz
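Zstd's level setting (19 here, with long mode's long-distance matching) trades compression speed for ratio. Python's zstd bindings are third-party, so the sketch below illustrates the same level-versus-speed tradeoff with the standard library's zlib instead — an analogy to the MB/s metric above, not the benchmark's actual code path:

```python
import time
import zlib

# Highly compressible sample payload (hypothetical stand-in for the disk image)
data = b"the quick brown fox jumps over the lazy dog " * 20000

for level in (1, 9):
    t0 = time.perf_counter()
    compressed = zlib.compress(data, level)
    elapsed = time.perf_counter() - t0
    # Throughput in MB/s, mirroring the metric used in the results above
    mbs = len(data) / elapsed / 1e6
    print(f"level {level}: {len(compressed)} bytes, {mbs:.0f} MB/s")

# The round trip must restore the original data exactly
assert zlib.decompress(zlib.compress(data, 9)) == data
```

Higher levels search harder for matches, so they generally shrink the output further at the cost of compression speed; decompression speed is largely unaffected by the level used.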

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.1.2Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPUGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107013691215SE +/- 0.01311, N = 3SE +/- 0.01517, N = 3SE +/- 0.01276, N = 3SE +/- 0.01164, N = 3SE +/- 0.00808, N = 39.347669.554559.350959.355509.34206MIN: 9.29MIN: 9.5MIN: 9.29MIN: 9.29MIN: 9.281. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.1.2Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPUGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107013691215Min: 9.33 / Avg: 9.35 / Max: 9.37Min: 9.54 / Avg: 9.55 / Max: 9.58Min: 9.34 / Avg: 9.35 / Max: 9.38Min: 9.34 / Avg: 9.36 / Max: 9.38Min: 9.33 / Avg: 9.34 / Max: 9.361. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet," focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMpx/s, More Is BetterEtcpak 0.7Configuration: ETC2GCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107014080120160200SE +/- 0.03, N = 3SE +/- 0.03, N = 3SE +/- 0.08, N = 3SE +/- 0.01, N = 3SE +/- 0.03, N = 3198.11199.10197.63194.76197.131. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread
OpenBenchmarking.orgMpx/s, More Is BetterEtcpak 0.7Configuration: ETC2GCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107014080120160200Min: 198.06 / Avg: 198.11 / Max: 198.16Min: 199.05 / Avg: 199.1 / Max: 199.14Min: 197.52 / Avg: 197.63 / Max: 197.78Min: 194.75 / Avg: 194.76 / Max: 194.78Min: 197.09 / Avg: 197.13 / Max: 197.181. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

PJSIP

PJSIP is a free and open source multimedia communication library written in the C language, implementing standards-based protocols such as SIP, SDP, RTP, STUN, TURN, and ICE. It combines the SIP signaling protocol with a rich multimedia framework and NAT traversal functionality into a high-level API that is portable and suitable for almost any type of system, ranging from desktops and embedded systems to mobile handsets. This test profile is making use of pjsip-perf with both the client and server on the same system. More details on the PJSIP benchmark at https://www.pjsip.org/high-performance-sip.htm Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgResponses Per Second, More Is BetterPJSIP 2.11Method: OPTIONS, StatelessGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 2021070130K60K90K120K150KSE +/- 578.10, N = 3SE +/- 1635.40, N = 4SE +/- 1006.09, N = 3SE +/- 946.90, N = 3SE +/- 734.59, N = 31353021359031382221364441379661. (CC) gcc options: -lstdc++ -lssl -lcrypto -lm -lrt -lpthread -O3 -march=native
OpenBenchmarking.orgResponses Per Second, More Is BetterPJSIP 2.11Method: OPTIONS, StatelessGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 2021070120K40K60K80K100KMin: 134535 / Avg: 135302.33 / Max: 136435Min: 131126 / Avg: 135903.25 / Max: 138547Min: 136262 / Avg: 138221.67 / Max: 139597Min: 135219 / Avg: 136443.67 / Max: 138307Min: 136711 / Avg: 137966 / Max: 1392551. (CC) gcc options: -lstdc++ -lssl -lcrypto -lm -lrt -lpthread -O3 -march=native

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210525Target: CPU - Model: alexnetGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107013691215SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.00, N = 3SE +/- 0.02, N = 3SE +/- 0.09, N = 39.059.159.189.088.99MIN: 8.96 / MAX: 19.48MIN: 9.08 / MAX: 9.59MIN: 9.11 / MAX: 9.74MIN: 9 / MAX: 11.81MIN: 8.73 / MAX: 9.391. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread -pthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210525Target: CPU - Model: alexnetGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107013691215Min: 9.02 / Avg: 9.05 / Max: 9.09Min: 9.14 / Avg: 9.15 / Max: 9.16Min: 9.17 / Avg: 9.18 / Max: 9.18Min: 9.06 / Avg: 9.08 / Max: 9.11Min: 8.81 / Avg: 8.99 / Max: 9.081. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread -pthread

PJSIP

PJSIP is a free and open source multimedia communication library written in the C language, implementing standards-based protocols such as SIP, SDP, RTP, STUN, TURN, and ICE. It combines the SIP signaling protocol with a rich multimedia framework and NAT traversal functionality into a high-level API that is portable and suitable for almost any type of system, ranging from desktops and embedded systems to mobile handsets. This test profile is making use of pjsip-perf with both the client and server on the same system. More details on the PJSIP benchmark at https://www.pjsip.org/high-performance-sip.htm Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgResponses Per Second, More Is BetterPJSIP 2.11Method: INVITEGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107017001400210028003500SE +/- 25.16, N = 15SE +/- 27.02, N = 3SE +/- 6.36, N = 3SE +/- 7.22, N = 3SE +/- 25.40, N = 15330732523281324033041. (CC) gcc options: -lstdc++ -lssl -lcrypto -lm -lrt -lpthread -O3 -march=native
OpenBenchmarking.orgResponses Per Second, More Is BetterPJSIP 2.11Method: INVITEGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107016001200180024003000Min: 3205 / Avg: 3307.4 / Max: 3527Min: 3223 / Avg: 3252 / Max: 3306Min: 3270 / Avg: 3280.67 / Max: 3292Min: 3226 / Avg: 3240.33 / Max: 3249Min: 3186 / Avg: 3304.13 / Max: 34951. (CC) gcc options: -lstdc++ -lssl -lcrypto -lm -lrt -lpthread -O3 -march=native

C-Ray

This is a test of C-Ray, a simple raytracer designed to test floating-point CPU performance. This test is multi-threaded (16 threads per core), will shoot 8 rays per pixel for anti-aliasing, and will generate a 1600 x 1200 image. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterC-Ray 1.1Total Time - 4K, 16 Rays Per PixelGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 20210701714212835SE +/- 0.03, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 329.8230.0430.4329.9629.971. (CC) gcc options: -lm -lpthread -O3 -march=native
OpenBenchmarking.orgSeconds, Fewer Is BetterC-Ray 1.1Total Time - 4K, 16 Rays Per PixelGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 20210701714212835Min: 29.78 / Avg: 29.82 / Max: 29.87Min: 30.02 / Avg: 30.04 / Max: 30.06Min: 30.42 / Avg: 30.43 / Max: 30.44Min: 29.95 / Avg: 29.96 / Max: 29.97Min: 29.96 / Avg: 29.97 / Max: 29.991. (CC) gcc options: -lm -lpthread -O3 -march=native
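The hot loop of a raytracer like C-Ray is the ray-sphere intersection test, which reduces to solving a quadratic for the hit distance along the ray. A minimal sketch of that test (illustrative only, not C-Ray's actual source):

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Return the nearest positive hit distance t along the ray, or None.
    Solves |origin + t*direction - center|^2 = radius^2 for t."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

# Unit sphere centered at z=5, ray shot straight down +z from the origin
print(ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # hits at t = 4.0
```

A renderer evaluates this test millions of times per frame (here, 8 rays per pixel over a 1600 x 1200 image), which is why it is such a direct probe of floating-point throughput.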

VOSK Speech Recognition Toolkit

VOSK is an open-source offline speech recognition API/toolkit. VOSK supports speech recognition in 17 languages and has a variety of models available and interfaces for different programming languages. This test profile times the speech-to-text process for a roughly three-minute audio recording. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterVOSK Speech Recognition Toolkit 0.3.21GCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 20210701510152025SE +/- 0.15, N = 3SE +/- 0.13, N = 3SE +/- 0.12, N = 3SE +/- 0.26, N = 3SE +/- 0.07, N = 320.7220.9920.7520.8920.57
OpenBenchmarking.orgSeconds, Fewer Is BetterVOSK Speech Recognition Toolkit 0.3.21GCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 20210701510152025Min: 20.44 / Avg: 20.72 / Max: 20.96Min: 20.78 / Avg: 20.99 / Max: 21.24Min: 20.61 / Avg: 20.75 / Max: 21Min: 20.56 / Avg: 20.89 / Max: 21.41Min: 20.49 / Avg: 20.57 / Max: 20.71

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.1Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4KGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 20210701510152025SE +/- 0.04, N = 3SE +/- 0.06, N = 3SE +/- 0.01, N = 3SE +/- 0.04, N = 3SE +/- 0.09, N = 319.4019.3419.2819.6419.671. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.1Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4KGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 20210701510152025Min: 19.35 / Avg: 19.4 / Max: 19.49Min: 19.22 / Avg: 19.34 / Max: 19.43Min: 19.25 / Avg: 19.28 / Max: 19.29Min: 19.57 / Avg: 19.64 / Max: 19.69Min: 19.56 / Avg: 19.67 / Max: 19.841. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Botan

Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMiB/s, More Is BetterBotan 2.17.3Test: KASUMI - DecryptGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 2021070120406080100SE +/- 0.02, N = 3SE +/- 0.03, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.07, N = 397.6498.5696.8897.0096.611. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.orgMiB/s, More Is BetterBotan 2.17.3Test: KASUMI - DecryptGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 2021070120406080100Min: 97.61 / Avg: 97.64 / Max: 97.66Min: 98.5 / Avg: 98.56 / Max: 98.61Min: 96.84 / Avg: 96.88 / Max: 96.92Min: 96.99 / Avg: 97 / Max: 97.03Min: 96.46 / Avg: 96.61 / Max: 96.691. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

Himeno Benchmark

The Himeno benchmark is a linear solver for the pressure Poisson equation using a point-Jacobi method. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMFLOPS, More Is BetterHimeno Benchmark 3.0Poisson Pressure SolverGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 2021070110002000300040005000SE +/- 0.61, N = 3SE +/- 5.70, N = 3SE +/- 13.08, N = 3SE +/- 0.64, N = 3SE +/- 2.82, N = 34522.734609.404538.664592.954580.191. (CC) gcc options: -O3 -march=native -mavx2
OpenBenchmarking.orgMFLOPS, More Is BetterHimeno Benchmark 3.0Poisson Pressure SolverGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107018001600240032004000Min: 4521.88 / Avg: 4522.73 / Max: 4523.92Min: 4599.03 / Avg: 4609.4 / Max: 4618.68Min: 4516.68 / Avg: 4538.66 / Max: 4561.95Min: 4591.75 / Avg: 4592.95 / Max: 4593.93Min: 4576.81 / Avg: 4580.19 / Max: 4585.781. (CC) gcc options: -O3 -march=native -mavx2
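A point-Jacobi sweep replaces each grid value with an average of its neighbors corrected by the source term, and repeating the sweep drives the residual of the discrete Poisson equation toward zero. A 1-D sketch of the method's core idea (Himeno itself uses a 3-D 19-point stencil; this is not the benchmark code):

```python
def jacobi_step(u, f, h):
    """One point-Jacobi sweep for u'' = f with fixed boundary values."""
    return [u[0]] + [
        0.5 * (u[i - 1] + u[i + 1] - h * h * f[i])
        for i in range(1, len(u) - 1)
    ] + [u[-1]]

def residual(u, f, h):
    """Max norm of the discrete residual (u'' - f) on interior points."""
    return max(abs((u[i - 1] - 2 * u[i] + u[i + 1]) / (h * h) - f[i])
               for i in range(1, len(u) - 1))

n = 33
h = 1.0 / (n - 1)
f = [1.0] * n            # constant source term
u = [0.0] * n            # zero initial guess, zero boundary values
r0 = residual(u, f, h)
for _ in range(200):
    u = jacobi_step(u, f, h)
print(residual(u, f, h) < r0)  # prints True: the residual shrinks
```

Each sweep is almost pure floating-point stencil arithmetic over the whole grid, which is why Himeno reports MFLOPS and is so sensitive to vectorization quality (note the `-mavx2` in the compile flags above).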

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210525Target: CPU - Model: vgg16GCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 20210701816243240SE +/- 0.37, N = 3SE +/- 0.49, N = 3SE +/- 0.47, N = 3SE +/- 0.52, N = 3SE +/- 0.53, N = 336.2036.0136.3536.5636.70MIN: 35.36 / MAX: 47.25MIN: 35.37 / MAX: 37.68MIN: 35.3 / MAX: 37.7MIN: 35.42 / MAX: 58.41MIN: 35.5 / MAX: 41.991. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread -pthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210525Target: CPU - Model: vgg16GCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 20210701816243240Min: 35.46 / Avg: 36.2 / Max: 36.61Min: 35.47 / Avg: 36.01 / Max: 36.99Min: 35.42 / Avg: 36.35 / Max: 36.89Min: 35.53 / Avg: 36.56 / Max: 37.15Min: 35.69 / Avg: 36.7 / Max: 37.511. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread -pthread

SecureMark

SecureMark is an objective, standardized benchmarking framework developed by EEMBC for measuring the efficiency of cryptographic processing solutions. SecureMark-TLS benchmarks Transport Layer Security performance with a focus on IoT/edge computing. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgmarks, More Is BetterSecureMark 1.0.4Benchmark: SecureMark-TLSGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 2021070160K120K180K240K300KSE +/- 106.27, N = 3SE +/- 247.29, N = 3SE +/- 95.66, N = 3SE +/- 68.80, N = 3SE +/- 101.43, N = 32598692634572634722595652645141. (CC) gcc options: -pedantic -O3
OpenBenchmarking.orgmarks, More Is BetterSecureMark 1.0.4Benchmark: SecureMark-TLSGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 2021070150K100K150K200K250KMin: 259657.89 / Avg: 259869.33 / Max: 259993.78Min: 263099.91 / Avg: 263456.8 / Max: 263931.78Min: 263292.84 / Avg: 263472.46 / Max: 263619.34Min: 259452.73 / Avg: 259564.65 / Max: 259689.94Min: 264324.59 / Avg: 264513.97 / Max: 264671.661. (CC) gcc options: -pedantic -O3

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgIterations Per Minute, More Is BetterGraphicsMagick 1.3.33Operation: EnhancedGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 20210701901802703604504244244274324291. (CC) gcc options: -fopenmp -O3 -march=native -pthread -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -lz -lm -lpthread

C-Blosc

A simple, compressed, fast and persistent data store library for C. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterC-Blosc 2.0Compressor: blosclzGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107013K6K9K12K15KSE +/- 11.42, N = 3SE +/- 18.15, N = 3SE +/- 21.15, N = 3SE +/- 69.56, N = 3SE +/- 37.20, N = 311800.711802.611713.411926.811889.41. (CC) gcc options: -std=gnu99 -O3 -pthread -lrt -lm
OpenBenchmarking.orgMB/s, More Is BetterC-Blosc 2.0Compressor: blosclzGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107012K4K6K8K10KMin: 11779.2 / Avg: 11800.73 / Max: 11818.1Min: 11773 / Avg: 11802.63 / Max: 11835.6Min: 11671.3 / Avg: 11713.43 / Max: 11737.7Min: 11817.2 / Avg: 11926.8 / Max: 12055.8Min: 11837.1 / Avg: 11889.43 / Max: 11961.41. (CC) gcc options: -std=gnu99 -O3 -pthread -lrt -lm

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 1.2Model: mobilenet-v1-1.0GCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107010.55731.11461.67192.22922.7865SE +/- 0.025, N = 3SE +/- 0.012, N = 15SE +/- 0.025, N = 3SE +/- 0.024, N = 3SE +/- 0.029, N = 32.4332.4552.4672.4772.477MIN: 2.32 / MAX: 2.62MIN: 2.23 / MAX: 3.16MIN: 2.32 / MAX: 2.68MIN: 2.3 / MAX: 2.65MIN: 2.3 / MAX: 2.741. (CXX) g++ options: -O3 -march=native -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 1.2Model: mobilenet-v1-1.0GCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 20210701246810Min: 2.41 / Avg: 2.43 / Max: 2.48Min: 2.4 / Avg: 2.46 / Max: 2.56Min: 2.44 / Avg: 2.47 / Max: 2.52Min: 2.43 / Avg: 2.48 / Max: 2.51Min: 2.45 / Avg: 2.48 / Max: 2.541. (CXX) g++ options: -O3 -march=native -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.1Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4KGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 20210701246810SE +/- 0.05, N = 3SE +/- 0.08, N = 15SE +/- 0.04, N = 3SE +/- 0.04, N = 3SE +/- 0.08, N = 127.517.437.387.517.491. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.1Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4KGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107013691215Min: 7.41 / Avg: 7.51 / Max: 7.59Min: 6.5 / Avg: 7.43 / Max: 7.61Min: 7.31 / Avg: 7.38 / Max: 7.44Min: 7.44 / Avg: 7.51 / Max: 7.58Min: 6.63 / Avg: 7.49 / Max: 7.61. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Crypto++

Crypto++ is a C++ class library of cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMiB/second, More Is BetterCrypto++ 8.2Test: Integer + Elliptic Curve Public Key AlgorithmsGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 2021070112002400360048006000SE +/- 1.33, N = 3SE +/- 6.16, N = 3SE +/- 1.85, N = 3SE +/- 1.99, N = 3SE +/- 5.19, N = 35519.145538.555503.085593.135532.471. (CXX) g++ options: -O3 -march=native -fPIC -pthread -pipe
OpenBenchmarking.orgMiB/second, More Is BetterCrypto++ 8.2Test: Integer + Elliptic Curve Public Key AlgorithmsGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 2021070110002000300040005000Min: 5516.77 / Avg: 5519.14 / Max: 5521.37Min: 5527.28 / Avg: 5538.55 / Max: 5548.49Min: 5500.3 / Avg: 5503.08 / Max: 5506.58Min: 5590.14 / Avg: 5593.13 / Max: 5596.91Min: 5522.66 / Avg: 5532.47 / Max: 5540.311. (CXX) g++ options: -O3 -march=native -fPIC -pthread -pipe

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.3Tuning: Visual Quality Optimized - Input: Bosphorus 1080pGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 2021070150100150200250SE +/- 1.34, N = 3SE +/- 1.22, N = 3SE +/- 3.33, N = 3SE +/- 3.10, N = 3SE +/- 1.58, N = 3247.45243.48246.75244.05244.561. (CC) gcc options: -O3 -fcommon -march=native -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.3Tuning: Visual Quality Optimized - Input: Bosphorus 1080pGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107014080120160200Min: 245.33 / Avg: 247.45 / Max: 249.94Min: 241.1 / Avg: 243.48 / Max: 245.1Min: 240.11 / Avg: 246.75 / Max: 250.39Min: 239.21 / Avg: 244.05 / Max: 249.83Min: 241.74 / Avg: 244.56 / Max: 247.211. (CC) gcc options: -O3 -fcommon -march=native -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 4K - Video Preset: Very FastGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 20210701510152025SE +/- 0.03, N = 3SE +/- 0.04, N = 3SE +/- 0.06, N = 3SE +/- 0.03, N = 3SE +/- 0.03, N = 321.3521.2021.0921.0121.151. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O3 -march=native -lpthread -lm -lrt
OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 4K - Video Preset: Very FastGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 20210701510152025Min: 21.3 / Avg: 21.35 / Max: 21.41Min: 21.13 / Avg: 21.2 / Max: 21.25Min: 21.02 / Avg: 21.09 / Max: 21.2Min: 20.96 / Avg: 21.01 / Max: 21.06Min: 21.11 / Avg: 21.15 / Max: 21.211. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O3 -march=native -lpthread -lm -lrt

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210525Target: CPU - Model: shufflenet-v2GCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107011.13632.27263.40894.54525.6815SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.04, N = 3SE +/- 0.02, N = 35.054.975.025.054.98MIN: 4.83 / MAX: 14.14MIN: 4.8 / MAX: 8.9MIN: 4.83 / MAX: 15.94MIN: 4.88 / MAX: 9.37MIN: 4.88 / MAX: 8.61. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread -pthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210525Target: CPU - Model: shufflenet-v2GCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 20210701246810Min: 5.01 / Avg: 5.05 / Max: 5.07Min: 4.95 / Avg: 4.97 / Max: 4.99Min: 5 / Avg: 5.02 / Max: 5.06Min: 4.98 / Avg: 5.05 / Max: 5.1Min: 4.95 / Avg: 4.98 / Max: 51. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread -pthread

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.
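The actual test runs SQLite's speedtest1 C program, but the shape of the workload (batched writes inside a transaction, timed end to end) can be sketched with Python's stdlib sqlite3 module; this is an illustrative analogue only, not the benchmark itself:

```python
import sqlite3
import time

# Illustrative analogue of a timed SQLite workload; speedtest1 itself is a
# C program exercising many more operation types than this insert loop.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")

start = time.perf_counter()
with conn:  # one transaction: bulk benchmarks batch inserts to avoid per-row fsync
    conn.executemany(
        "INSERT INTO t (val) VALUES (?)",
        ((f"row-{i}",) for i in range(100_000)),
    )
elapsed = time.perf_counter() - start

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(f"Inserted {count} rows in {elapsed:.3f} s")
```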

OpenBenchmarking.orgSeconds, Fewer Is BetterSQLite Speedtest 3.30Timed Time - Size 1,000GCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107011326395265SE +/- 0.22, N = 3SE +/- 0.09, N = 3SE +/- 0.19, N = 3SE +/- 0.10, N = 3SE +/- 0.11, N = 357.6157.0357.3057.4156.721. (CC) gcc options: -O3 -march=native -ldl -lz -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterSQLite Speedtest 3.30Timed Time - Size 1,000GCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107011122334455Min: 57.18 / Avg: 57.61 / Max: 57.87Min: 56.87 / Avg: 57.03 / Max: 57.16Min: 56.92 / Avg: 57.3 / Max: 57.53Min: 57.22 / Avg: 57.41 / Max: 57.53Min: 56.58 / Avg: 56.72 / Max: 56.931. (CC) gcc options: -O3 -march=native -ldl -lz -lpthread

FLAC Audio Encoding

This test times how long it takes to encode a sample WAV file to FLAC format five times. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterFLAC Audio Encoding 1.3.2WAV To FLACGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 20210701246810SE +/- 0.012, N = 5SE +/- 0.006, N = 5SE +/- 0.004, N = 5SE +/- 0.009, N = 5SE +/- 0.014, N = 58.4368.5008.3698.4118.3791. (CXX) g++ options: -O3 -march=native -fvisibility=hidden -logg -lm
OpenBenchmarking.orgSeconds, Fewer Is BetterFLAC Audio Encoding 1.3.2WAV To FLACGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107013691215Min: 8.41 / Avg: 8.44 / Max: 8.48Min: 8.49 / Avg: 8.5 / Max: 8.52Min: 8.36 / Avg: 8.37 / Max: 8.38Min: 8.38 / Avg: 8.41 / Max: 8.43Min: 8.33 / Avg: 8.38 / Max: 8.411. (CXX) g++ options: -O3 -march=native -fvisibility=hidden -logg -lm

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
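The MB/s figures reported here are simply bytes processed divided by wall-clock time. A minimal sketch of that throughput calculation, using Python's stdlib zlib as a stand-in (the test in this file times the real Zstd library at level 19, not zlib; the numbers below are illustrative only):

```python
import time
import zlib

# Compressible stand-in payload (~5.6 MB); the real test uses a FreeBSD disk image
data = b"FreeBSD disk image stand-in " * 200_000

start = time.perf_counter()
compressed = zlib.compress(data, level=9)  # zlib's highest level, analogous to zstd -19
elapsed = time.perf_counter() - start

# Throughput = uncompressed bytes handled per second of wall time
mb_per_s = (len(data) / 1e6) / elapsed
print(f"{len(data)/1e6:.1f} MB -> {len(compressed)/1e6:.2f} MB at {mb_per_s:.1f} MB/s")
```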

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 19 - Compression SpeedGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107011428425670SE +/- 0.19, N = 3SE +/- 0.50, N = 3SE +/- 0.56, N = 3SE +/- 0.50, N = 9SE +/- 0.47, N = 360.360.260.161.060.81. (CC) gcc options: -O3 -march=native -pthread -lz
OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 19 - Compression SpeedGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107011224364860Min: 60.1 / Avg: 60.33 / Max: 60.7Min: 59.5 / Avg: 60.23 / Max: 61.2Min: 59.4 / Avg: 60.1 / Max: 61.2Min: 59.3 / Avg: 60.96 / Max: 63.6Min: 60.2 / Avg: 60.77 / Max: 61.71. (CC) gcc options: -O3 -march=native -pthread -lz

libjpeg-turbo tjbench

tjbench is a JPEG decompression/compression benchmark that is part of libjpeg-turbo, a JPEG image codec library optimized for SIMD instructions on modern CPU architectures. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMegapixels/sec, More Is Betterlibjpeg-turbo tjbench 2.1.0Test: Decompression ThroughputGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 2021070150100150200250SE +/- 0.03, N = 3SE +/- 0.84, N = 3SE +/- 0.30, N = 3SE +/- 0.47, N = 3SE +/- 0.26, N = 3218.81220.62219.54218.62217.431. (CC) gcc options: -O3 -march=native -rdynamic
OpenBenchmarking.orgMegapixels/sec, More Is Betterlibjpeg-turbo tjbench 2.1.0Test: Decompression ThroughputGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107014080120160200Min: 218.77 / Avg: 218.81 / Max: 218.88Min: 219.3 / Avg: 220.62 / Max: 222.18Min: 218.97 / Avg: 219.54 / Max: 219.96Min: 217.72 / Avg: 218.62 / Max: 219.28Min: 216.93 / Avg: 217.43 / Max: 217.811. (CC) gcc options: -O3 -march=native -rdynamic

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. This test profile runs the CPU-based multi-threaded SVT-AV1 encoder against a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 0.8.7Encoder Mode: Preset 4 - Input: Bosphorus 4KGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107010.30580.61160.91741.22321.529SE +/- 0.005, N = 3SE +/- 0.001, N = 3SE +/- 0.003, N = 3SE +/- 0.003, N = 3SE +/- 0.002, N = 31.3401.3551.3591.3471.3481. (CXX) g++ options: -O3 -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie
OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 0.8.7Encoder Mode: Preset 4 - Input: Bosphorus 4KGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 20210701246810Min: 1.33 / Avg: 1.34 / Max: 1.35Min: 1.35 / Avg: 1.36 / Max: 1.36Min: 1.35 / Avg: 1.36 / Max: 1.36Min: 1.34 / Avg: 1.35 / Max: 1.35Min: 1.34 / Avg: 1.35 / Max: 1.351. (CXX) g++ options: -O3 -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 0.8.7Encoder Mode: Preset 8 - Input: Bosphorus 4KGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107013691215SE +/- 0.02, N = 3SE +/- 0.08, N = 3SE +/- 0.04, N = 3SE +/- 0.02, N = 3SE +/- 0.03, N = 311.9312.0212.0612.0911.971. (CXX) g++ options: -O3 -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie
OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 0.8.7Encoder Mode: Preset 8 - Input: Bosphorus 4KGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 2021070148121620Min: 11.89 / Avg: 11.92 / Max: 11.95Min: 11.9 / Avg: 12.02 / Max: 12.17Min: 11.98 / Avg: 12.06 / Max: 12.13Min: 12.07 / Avg: 12.09 / Max: 12.12Min: 11.92 / Avg: 11.97 / Max: 12.011. (CXX) g++ options: -O3 -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie

ViennaCL

ViennaCL is an open-source linear algebra library written in C++ with support for OpenCL and OpenMP. This test profile makes use of ViennaCL's built-in benchmarks. Learn more via the OpenBenchmarking.org test page.
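The ViennaCL test names are standard BLAS routines: xDOT is a dot product, xAXPY computes y = a*x + y, xCOPY is a vector copy, and xGEMV/xGEMM are matrix-vector and matrix-matrix products (the s/d prefix denotes single/double precision; -T marks a transposed operand). A pure-Python sketch of the two level-1 kernels, just to pin down the definitions:

```python
def daxpy(a, x, y):
    """y := a*x + y (BLAS dAXPY), returned as a new list here."""
    return [a * xi + yi for xi, yi in zip(x, y)]

def ddot(x, y):
    """Dot product of two vectors (BLAS dDOT)."""
    return sum(xi * yi for xi, yi in zip(x, y))

x = [1.0, 2.0, 3.0]
y = [4.0, 5.0, 6.0]
print(daxpy(2.0, x, y))  # [6.0, 9.0, 12.0]
print(ddot(x, y))        # 32.0
```

These kernels are memory-bandwidth bound, which is why ViennaCL reports them in GB/s rather than FLOPS.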

OpenBenchmarking.orgGB/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - sDOTGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 2021070120406080100SE +/- 0.42, N = 3SE +/- 0.26, N = 3SE +/- 0.33, N = 3SE +/- 0.43, N = 3SE +/- 0.49, N = 377.078.077.477.677.11. (CXX) g++ options: -O3 -march=native -fopenmp -rdynamic -lOpenCL
OpenBenchmarking.orgGB/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - sDOTGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107011530456075Min: 76.5 / Avg: 76.97 / Max: 77.8Min: 77.6 / Avg: 78 / Max: 78.5Min: 76.9 / Avg: 77.37 / Max: 78Min: 77 / Avg: 77.57 / Max: 78.4Min: 76.3 / Avg: 77.13 / Max: 781. (CXX) g++ options: -O3 -march=native -fopenmp -rdynamic -lOpenCL

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar was the winner of the 2016 ACM Open-Source Software Competition and is developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 4K - Video Preset: Ultra FastGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 20210701918273645SE +/- 0.07, N = 3SE +/- 0.07, N = 3SE +/- 0.15, N = 3SE +/- 0.11, N = 3SE +/- 0.08, N = 340.1340.1640.6440.3140.201. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O3 -march=native -lpthread -lm -lrt
OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 4K - Video Preset: Ultra FastGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 20210701816243240Min: 40.05 / Avg: 40.13 / Max: 40.27Min: 40.01 / Avg: 40.16 / Max: 40.24Min: 40.4 / Avg: 40.64 / Max: 40.92Min: 40.14 / Avg: 40.31 / Max: 40.52Min: 40.04 / Avg: 40.2 / Max: 40.31. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O3 -march=native -lpthread -lm -lrt

PJSIP

PJSIP is a free and open source multimedia communication library written in C implementing standards-based protocols such as SIP, SDP, RTP, STUN, TURN, and ICE. It combines the SIP signaling protocol with a rich multimedia framework and NAT traversal functionality into a high-level API that is portable and suitable for almost any type of system, from desktops and embedded systems to mobile handsets. This test profile makes use of pjsip-perf with both the client and server on the same system. More details on the PJSIP benchmark at https://www.pjsip.org/high-performance-sip.htm Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgResponses Per Second, More Is BetterPJSIP 2.11Method: OPTIONS, StatefulGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 2021070112002400360048006000SE +/- 16.26, N = 3SE +/- 24.67, N = 3SE +/- 8.67, N = 3SE +/- 53.69, N = 3SE +/- 23.13, N = 3580157295744576957631. (CC) gcc options: -lstdc++ -lssl -lcrypto -lm -lrt -lpthread -O3 -march=native
OpenBenchmarking.orgResponses Per Second, More Is BetterPJSIP 2.11Method: OPTIONS, StatefulGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 2021070110002000300040005000Min: 5780 / Avg: 5801 / Max: 5833Min: 5704 / Avg: 5728.67 / Max: 5778Min: 5729 / Avg: 5744.33 / Max: 5759Min: 5700 / Avg: 5769.33 / Max: 5875Min: 5718 / Avg: 5763.33 / Max: 57941. (CC) gcc options: -lstdc++ -lssl -lcrypto -lm -lrt -lpthread -O3 -march=native

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-HEVC 1.5.0Tuning: 1 - Input: Bosphorus 1080pGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107013691215SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 312.8812.9113.0412.9612.941. (CC) gcc options: -O3 -march=native -fPIE -fPIC -O2 -pie -rdynamic -lpthread -lrt
OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-HEVC 1.5.0Tuning: 1 - Input: Bosphorus 1080pGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 2021070148121620Min: 12.86 / Avg: 12.88 / Max: 12.9Min: 12.88 / Avg: 12.91 / Max: 12.93Min: 13.04 / Avg: 13.04 / Max: 13.04Min: 12.94 / Avg: 12.96 / Max: 12.98Min: 12.93 / Avg: 12.94 / Max: 12.951. (CC) gcc options: -O3 -march=native -fPIE -fPIC -O2 -pie -rdynamic -lpthread -lrt

Ngspice

Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterNgspice 34Circuit: C2670GCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 20210701306090120150SE +/- 0.81, N = 3SE +/- 0.71, N = 3SE +/- 1.13, N = 3SE +/- 0.92, N = 3SE +/- 0.83, N = 3134.35134.22133.91134.71135.431. (CC) gcc options: -O3 -march=native -fopenmp -lm -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lSM -lICE
OpenBenchmarking.orgSeconds, Fewer Is BetterNgspice 34Circuit: C2670GCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 20210701306090120150Min: 132.73 / Avg: 134.35 / Max: 135.3Min: 133.3 / Avg: 134.22 / Max: 135.61Min: 131.71 / Avg: 133.91 / Max: 135.43Min: 133.51 / Avg: 134.71 / Max: 136.53Min: 133.8 / Avg: 135.43 / Max: 136.531. (CC) gcc options: -O3 -march=native -fopenmp -lm -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lSM -lICE

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-HEVC 1.5.0Tuning: 10 - Input: Bosphorus 1080pGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 2021070180160240320400SE +/- 0.95, N = 3SE +/- 0.96, N = 3SE +/- 1.74, N = 3SE +/- 1.99, N = 3SE +/- 0.75, N = 3374.62376.89372.69375.65375.321. (CC) gcc options: -O3 -march=native -fPIE -fPIC -O2 -pie -rdynamic -lpthread -lrt
OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-HEVC 1.5.0Tuning: 10 - Input: Bosphorus 1080pGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 2021070170140210280350Min: 372.9 / Avg: 374.62 / Max: 376.18Min: 375.7 / Avg: 376.89 / Max: 378.79Min: 369.23 / Avg: 372.69 / Max: 374.77Min: 372.9 / Avg: 375.65 / Max: 379.51Min: 373.83 / Avg: 375.32 / Max: 376.181. (CC) gcc options: -O3 -march=native -fPIE -fPIC -O2 -pie -rdynamic -lpthread -lrt

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.1.2Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPUGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107011.25892.51783.77675.03566.2945SE +/- 0.02371, N = 3SE +/- 0.02285, N = 3SE +/- 0.01959, N = 3SE +/- 0.02335, N = 3SE +/- 0.02089, N = 35.540625.595335.533345.545445.53672MIN: 5.4MIN: 5.45MIN: 5.38MIN: 5.4MIN: 5.381. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.1.2Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPUGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 20210701246810Min: 5.49 / Avg: 5.54 / Max: 5.57Min: 5.55 / Avg: 5.6 / Max: 5.62Min: 5.49 / Avg: 5.53 / Max: 5.55Min: 5.5 / Avg: 5.55 / Max: 5.57Min: 5.49 / Avg: 5.54 / Max: 5.561. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.1.2Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPUGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107013691215SE +/- 0.02, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.02, N = 311.0210.9610.9210.9210.93MIN: 10.79MIN: 10.74MIN: 10.74MIN: 10.74MIN: 10.771. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.1.2Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPUGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107013691215Min: 10.99 / Avg: 11.02 / Max: 11.05Min: 10.91 / Avg: 10.96 / Max: 10.99Min: 10.9 / Avg: 10.92 / Max: 10.95Min: 10.89 / Avg: 10.92 / Max: 10.96Min: 10.9 / Avg: 10.93 / Max: 10.971. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-HEVC 1.5.0Tuning: 7 - Input: Bosphorus 1080pGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107014080120160200SE +/- 0.58, N = 3SE +/- 0.18, N = 3SE +/- 0.44, N = 3SE +/- 0.31, N = 3SE +/- 0.32, N = 3190.03190.62189.88191.02191.331. (CC) gcc options: -O3 -march=native -fPIE -fPIC -O2 -pie -rdynamic -lpthread -lrt
OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-HEVC 1.5.0Tuning: 7 - Input: Bosphorus 1080pGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107014080120160200Min: 189.45 / Avg: 190.03 / Max: 191.2Min: 190.36 / Avg: 190.62 / Max: 190.96Min: 189.04 / Avg: 189.88 / Max: 190.54Min: 190.66 / Avg: 191.02 / Max: 191.63Min: 190.72 / Avg: 191.33 / Max: 191.821. (CC) gcc options: -O3 -march=native -fPIE -fPIC -O2 -pie -rdynamic -lpthread -lrt

ViennaCL

ViennaCL is an open-source linear algebra library written in C++ with support for OpenCL and OpenMP. This test profile makes use of ViennaCL's built-in benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgGB/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - dGEMV-TGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 2021070120406080100SE +/- 0.25, N = 3SE +/- 0.10, N = 3SE +/- 0.07, N = 3SE +/- 0.38, N = 3SE +/- 0.09, N = 379.979.779.679.379.51. (CXX) g++ options: -O3 -march=native -fopenmp -rdynamic -lOpenCL
OpenBenchmarking.orgGB/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - dGEMV-TGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107011530456075Min: 79.4 / Avg: 79.9 / Max: 80.2Min: 79.6 / Avg: 79.7 / Max: 79.9Min: 79.5 / Avg: 79.57 / Max: 79.7Min: 78.5 / Avg: 79.27 / Max: 79.7Min: 79.4 / Avg: 79.53 / Max: 79.71. (CXX) g++ options: -O3 -march=native -fopenmp -rdynamic -lOpenCL

Botan

Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMiB/s, More Is BetterBotan 2.17.3Test: AES-256GCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107019001800270036004500SE +/- 0.87, N = 3SE +/- 0.82, N = 3SE +/- 4.08, N = 3SE +/- 2.67, N = 3SE +/- 7.30, N = 33998.633987.083999.253985.273972.021. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.orgMiB/s, More Is BetterBotan 2.17.3Test: AES-256GCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107017001400210028003500Min: 3997.16 / Avg: 3998.63 / Max: 4000.17Min: 3986.06 / Avg: 3987.08 / Max: 3988.71Min: 3991.13 / Avg: 3999.25 / Max: 4003.98Min: 3981.07 / Avg: 3985.27 / Max: 3990.24Min: 3961 / Avg: 3972.02 / Max: 3985.821. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

GnuPG

This test times how long it takes to encrypt a sample file using GnuPG. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterGnuPG 2.2.272.7GB Sample File EncryptionGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107011428425670SE +/- 0.17, N = 3SE +/- 0.19, N = 3SE +/- 0.36, N = 3SE +/- 0.23, N = 3SE +/- 0.56, N = 364.2464.2364.4264.2064.611. (CC) gcc options: -O3 -march=native
OpenBenchmarking.orgSeconds, Fewer Is BetterGnuPG 2.2.272.7GB Sample File EncryptionGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107011326395265Min: 64.06 / Avg: 64.24 / Max: 64.58Min: 64.01 / Avg: 64.23 / Max: 64.6Min: 64.03 / Avg: 64.42 / Max: 65.15Min: 63.9 / Avg: 64.2 / Max: 64.65Min: 64.01 / Avg: 64.61 / Max: 65.741. (CC) gcc options: -O3 -march=native

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterTNN 0.3Target: CPU - Model: DenseNetGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107018001600240032004000SE +/- 0.16, N = 3SE +/- 2.55, N = 3SE +/- 0.59, N = 3SE +/- 0.77, N = 3SE +/- 0.21, N = 33505.933527.683508.293508.393524.75MIN: 3487.54 / MAX: 3535.34MIN: 3508.67 / MAX: 3981.67MIN: 3489.27 / MAX: 3603.98MIN: 3486.98 / MAX: 3606.8MIN: 3509.67 / MAX: 3548.511. (CXX) g++ options: -O3 -march=native -fopenmp -pthread -fvisibility=hidden -fvisibility=default -rdynamic -ldl
OpenBenchmarking.orgms, Fewer Is BetterTNN 0.3Target: CPU - Model: DenseNetGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107016001200180024003000Min: 3505.72 / Avg: 3505.93 / Max: 3506.26Min: 3523.43 / Avg: 3527.68 / Max: 3532.25Min: 3507.21 / Avg: 3508.29 / Max: 3509.25Min: 3506.92 / Avg: 3508.39 / Max: 3509.5Min: 3524.32 / Avg: 3524.75 / Max: 3524.971. (CXX) g++ options: -O3 -march=native -fopenmp -pthread -fvisibility=hidden -fvisibility=default -rdynamic -ldl

ViennaCL

ViennaCL is an open-source linear algebra library written in C++ with support for OpenCL and OpenMP. This test profile makes use of ViennaCL's built-in benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgGB/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - dAXPYGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107011326395265SE +/- 0.00, N = 3SE +/- 0.03, N = 3SE +/- 0.03, N = 3SE +/- 0.00, N = 3SE +/- 0.12, N = 357.157.457.357.457.21. (CXX) g++ options: -O3 -march=native -fopenmp -rdynamic -lOpenCL
OpenBenchmarking.orgGB/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - dAXPYGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107011122334455Min: 57.1 / Avg: 57.1 / Max: 57.1Min: 57.4 / Avg: 57.43 / Max: 57.5Min: 57.3 / Avg: 57.33 / Max: 57.4Min: 57.4 / Avg: 57.4 / Max: 57.4Min: 57 / Avg: 57.23 / Max: 57.41. (CXX) g++ options: -O3 -march=native -fopenmp -rdynamic -lOpenCL

OpenBenchmarking.orgGB/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - dCOPYGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 20210701918273645SE +/- 0.00, N = 3SE +/- 0.15, N = 3SE +/- 0.03, N = 3SE +/- 0.00, N = 3SE +/- 0.03, N = 338.138.138.338.338.21. (CXX) g++ options: -O3 -march=native -fopenmp -rdynamic -lOpenCL
OpenBenchmarking.orgGB/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - dCOPYGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 20210701816243240Min: 38.1 / Avg: 38.1 / Max: 38.1Min: 37.8 / Avg: 38.1 / Max: 38.3Min: 38.2 / Avg: 38.27 / Max: 38.3Min: 38.3 / Avg: 38.3 / Max: 38.3Min: 38.1 / Avg: 38.17 / Max: 38.21. (CXX) g++ options: -O3 -march=native -fopenmp -rdynamic -lOpenCL

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.1.2Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPUGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107010.11890.23780.35670.47560.5945SE +/- 0.003001, N = 3SE +/- 0.003361, N = 3SE +/- 0.003324, N = 3SE +/- 0.003036, N = 3SE +/- 0.003225, N = 30.5284160.5267980.5260140.5281330.527939MIN: 0.5MIN: 0.5MIN: 0.5MIN: 0.5MIN: 0.51. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.1.2Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPUGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 20210701246810Min: 0.52 / Avg: 0.53 / Max: 0.53Min: 0.52 / Avg: 0.53 / Max: 0.53Min: 0.52 / Avg: 0.53 / Max: 0.53Min: 0.52 / Avg: 0.53 / Max: 0.53Min: 0.52 / Avg: 0.53 / Max: 0.531. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.1.2Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPUGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107010.10380.20760.31140.41520.519SE +/- 0.000631, N = 3SE +/- 0.001062, N = 3SE +/- 0.001585, N = 3SE +/- 0.000687, N = 3SE +/- 0.001427, N = 30.4614590.4594560.4604820.4599560.459660MIN: 0.45MIN: 0.45MIN: 0.45MIN: 0.45MIN: 0.451. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.1.2Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPUGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 2021070112345Min: 0.46 / Avg: 0.46 / Max: 0.46Min: 0.46 / Avg: 0.46 / Max: 0.46Min: 0.46 / Avg: 0.46 / Max: 0.46Min: 0.46 / Avg: 0.46 / Max: 0.46Min: 0.46 / Avg: 0.46 / Max: 0.461. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

WavPack Audio Encoding

This test times how long it takes to encode a sample WAV file to WavPack format with very high quality settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterWavPack Audio Encoding 5.3WAV To WavPackGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107013691215SE +/- 0.00, N = 5SE +/- 0.00, N = 5SE +/- 0.02, N = 5SE +/- 0.00, N = 5SE +/- 0.00, N = 513.3613.3813.3413.3513.331. (CXX) g++ options: -O3 -march=native -rdynamic
OpenBenchmarking.orgSeconds, Fewer Is BetterWavPack Audio Encoding 5.3WAV To WavPackGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 2021070148121620Min: 13.35 / Avg: 13.35 / Max: 13.37Min: 13.37 / Avg: 13.38 / Max: 13.38Min: 13.31 / Avg: 13.34 / Max: 13.41Min: 13.35 / Avg: 13.35 / Max: 13.36Min: 13.33 / Avg: 13.33 / Max: 13.341. (CXX) g++ options: -O3 -march=native -rdynamic

Botan

Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMiB/s, More Is BetterBotan 2.17.3Test: AES-256 - DecryptGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107019001800270036004500SE +/- 1.28, N = 3SE +/- 0.39, N = 3SE +/- 0.87, N = 3SE +/- 0.79, N = 3SE +/- 4.17, N = 33991.203993.893998.483995.233993.171. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.orgMiB/s, More Is BetterBotan 2.17.3Test: AES-256 - DecryptGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107017001400210028003500Min: 3989.71 / Avg: 3991.2 / Max: 3993.75Min: 3993.18 / Avg: 3993.89 / Max: 3994.55Min: 3997.15 / Avg: 3998.48 / Max: 4000.11Min: 3993.67 / Avg: 3995.23 / Max: 3996.13Min: 3984.84 / Avg: 3993.17 / Max: 3997.711. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

libgav1

Libgav1 is an AV1 decoder developed by Google for AV1 profile 0/1 compliance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is Betterlibgav1 0.16.3Video Input: Chimera 1080p 10-bitGCC 12.0.0 20210701510152025SE +/- 0.01, N = 321.321. (CXX) g++ options: -O3 -march=native -lpthread -lrt

OpenBenchmarking.orgFPS, More Is Betterlibgav1 0.16.3Video Input: Summer Nature 4KGCC 12.0.0 20210701714212835SE +/- 0.01, N = 328.241. (CXX) g++ options: -O3 -march=native -lpthread -lrt

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210525Target: CPU - Model: yolov4-tinyGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 20210701510152025SE +/- 0.25, N = 3SE +/- 0.19, N = 3SE +/- 0.17, N = 3SE +/- 0.10, N = 3SE +/- 1.86, N = 320.8421.3521.2621.3722.83MIN: 19.92 / MAX: 24.91MIN: 20.42 / MAX: 33.9MIN: 20 / MAX: 24.4MIN: 20.44 / MAX: 22.72MIN: 20.18 / MAX: 937.41. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread -pthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210525Target: CPU - Model: yolov4-tinyGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 20210701510152025Min: 20.47 / Avg: 20.84 / Max: 21.32Min: 21.14 / Avg: 21.35 / Max: 21.73Min: 21.02 / Avg: 21.26 / Max: 21.58Min: 21.2 / Avg: 21.37 / Max: 21.53Min: 20.97 / Avg: 22.83 / Max: 26.551. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread -pthread

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient and lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 1.2Model: squeezenetv1.1GCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107011.0372.0743.1114.1485.185SE +/- 0.036, N = 3SE +/- 0.061, N = 15SE +/- 0.162, N = 3SE +/- 0.007, N = 3SE +/- 0.149, N = 34.5644.4204.2714.6094.283MIN: 4.42 / MAX: 4.75MIN: 3.98 / MAX: 4.76MIN: 3.97 / MAX: 4.72MIN: 4.51 / MAX: 4.78MIN: 3.97 / MAX: 4.711. (CXX) g++ options: -O3 -march=native -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 1.2Model: squeezenetv1.1GCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 20210701246810Min: 4.49 / Avg: 4.56 / Max: 4.61Min: 4.08 / Avg: 4.42 / Max: 4.64Min: 4.08 / Avg: 4.27 / Max: 4.59Min: 4.6 / Avg: 4.61 / Max: 4.62Min: 4.06 / Avg: 4.28 / Max: 4.571. (CXX) g++ options: -O3 -march=native -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

ViennaCL

ViennaCL is an open-source linear algebra library written in C++ with support for OpenCL and OpenMP. This test profile makes use of ViennaCL's built-in benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgGB/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - dDOTGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107011428425670SE +/- 0.03, N = 3SE +/- 0.03, N = 3SE +/- 9.10, N = 3SE +/- 0.00, N = 3SE +/- 0.03, N = 363.663.754.763.763.71. (CXX) g++ options: -O3 -march=native -fopenmp -rdynamic -lOpenCL
OpenBenchmarking.orgGB/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - dDOTGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107011224364860Min: 63.5 / Avg: 63.57 / Max: 63.6Min: 63.6 / Avg: 63.67 / Max: 63.7Min: 36.5 / Avg: 54.7 / Max: 63.8Min: 63.7 / Avg: 63.7 / Max: 63.7Min: 63.6 / Avg: 63.67 / Max: 63.71. (CXX) g++ options: -O3 -march=native -fopenmp -rdynamic -lOpenCL

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.1.2Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPUGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 20210701246810SE +/- 0.02602, N = 3SE +/- 0.17046, N = 14SE +/- 0.03159, N = 3SE +/- 0.03513, N = 3SE +/- 0.03413, N = 37.890328.100387.901637.904777.91258MIN: 7.58MIN: 7.58MIN: 7.61MIN: 7.56MIN: 7.631. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.1.2Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPUGCC 8.5GCC 9.4GCC 10.3GCC 11.1GCC 12.0.0 202107013691215Min: 7.86 / Avg: 7.89 / Max: 7.94Min: 7.87 / Avg: 8.1 / Max: 10.31Min: 7.87 / Avg: 7.9 / Max: 7.96Min: 7.87 / Avg: 7.9 / Max: 7.98Min: 7.87 / Avg: 7.91 / Max: 7.981. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Geometric Mean Of All Test Results

Result Composite - Intel 10980XE GCC Compiler Benchmarks (Geometric Mean, more is better)

  GCC 8.5:              61.14
  GCC 9.4:              61.11
  GCC 10.3:             61.40
  GCC 11.1:             61.20
  GCC 12.0.0 20210701:  61.42
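The composite score is the geometric mean of the individual test results, which keeps one outlier test from dominating the average the way an arithmetic mean would. A minimal sketch of the computation (the input scores below are illustrative, not the 143 results in this file):

```python
import math

def geometric_mean(scores):
    """Geometric mean: the n-th root of the product of n values.

    Computed as exp(mean(log(x))), which is the numerically
    safer equivalent of multiplying all values together.
    """
    return math.exp(sum(math.log(s) for s in scores) / len(scores))

# Illustrative scores only; a real composite normalizes
# mixed units (GB/s, ms, FPS) before averaging.
print(geometric_mean([63.6, 7.89, 120.0]))
```

Because the inputs span different units and directions (more-is-better vs. fewer-is-better), result viewers normalize each test before folding it into the mean.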

143 Results Shown

GraphicsMagick
FinanceBench
Zstd Compression
eSpeak-NG Speech Engine
Botan:
  ChaCha20Poly1305
  ChaCha20Poly1305 - Decrypt
FinanceBench
GraphicsMagick
oneDNN
Smallpt
Mobile Neural Network
ViennaCL
Botan
ViennaCL:
  CPU BLAS - dGEMM-TN
  CPU BLAS - dGEMM-NT
  CPU BLAS - dGEMM-TT
Botan
GraphicsMagick
TNN
GraphicsMagick
Botan
Timed MrBayes Analysis
oneDNN
Botan
QuantLib
NCNN
Coremark
Zstd Compression
Botan:
  CAST-256 - Decrypt
  CAST-256
Gcrypt Library
Mobile Neural Network
NCNN
TNN
NCNN
Zstd Compression
GraphicsMagick
NCNN:
  CPU - resnet18
  CPU - mnasnet
VP9 libvpx Encoding
Mobile Neural Network
WebP Image Encode
Crypto++
oneDNN:
  Recurrent Neural Network Training - bf16bf16bf16 - CPU
  Recurrent Neural Network Training - u8s8f32 - CPU
  Deconvolution Batch shapes_1d - bf16bf16bf16 - CPU
Zstd Compression
Etcpak
Stockfish
oneDNN
Mobile Neural Network
Zstd Compression
SVT-VP9
NCNN
GraphicsMagick
Liquid-DSP
NCNN
Mobile Neural Network
Crypto++
WebP Image Encode
dav1d
NCNN
x265
dav1d
VP9 libvpx Encoding
ViennaCL
TNN
AOM AV1:
  Speed 6 Two-Pass - Bosphorus 4K
  Speed 9 Realtime - Bosphorus 4K
NCNN
Etcpak
ViennaCL
Opus Codec Encoding
Zstd Compression
oneDNN
Tachyon
Liquid-DSP
SVT-VP9
oneDNN
NCNN:
  CPU - blazeface
  CPU - mobilenet
oneDNN
WebP Image Encode
ViennaCL
oneDNN
Timed HMMer Search
Botan
Ngspice
LAME MP3 Encoding
7-Zip Compression
Zstd Compression
oneDNN
Etcpak
PJSIP
NCNN
PJSIP
C-Ray
VOSK Speech Recognition Toolkit
AOM AV1
Botan
Himeno Benchmark
NCNN
SecureMark
GraphicsMagick
C-Blosc
Mobile Neural Network
AOM AV1
Crypto++
SVT-VP9
Kvazaar
NCNN
SQLite Speedtest
FLAC Audio Encoding
Zstd Compression
libjpeg-turbo tjbench
SVT-AV1:
  Preset 4 - Bosphorus 4K
  Preset 8 - Bosphorus 4K
ViennaCL
Kvazaar
PJSIP
SVT-HEVC
Ngspice
SVT-HEVC
oneDNN:
  IP Shapes 1D - bf16bf16bf16 - CPU
  Deconvolution Batch shapes_3d - bf16bf16bf16 - CPU
SVT-HEVC
ViennaCL
Botan
GnuPG
TNN
ViennaCL:
  CPU BLAS - dAXPY
  CPU BLAS - dCOPY
oneDNN:
  IP Shapes 1D - u8s8f32 - CPU
  Deconvolution Batch shapes_1d - u8s8f32 - CPU
WavPack Audio Encoding
Botan
libgav1:
  Chimera 1080p 10-bit
  Summer Nature 4K
NCNN
Mobile Neural Network
ViennaCL
oneDNN
Geometric Mean Of All Test Results