Core i7 9750H EOY

Intel Core i7-9750H testing with a Notebook P95_96_97Ex Rx (1.07.13MIN29 BIOS) and Intel UHD 630 3GB on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2101014-HA-COREI797566
This comparison draws on tests from the following suites:

  Audio Encoding: 4 tests
  Bioinformatics: 2 tests
  Timed Code Compilation: 3 tests
  C/C++ Compiler Tests: 5 tests
  CPU Massive: 7 tests
  Creator Workloads: 6 tests
  Encoding: 4 tests
  HPC - High Performance Computing: 4 tests
  Machine Learning: 2 tests
  Multi-Core: 5 tests
  NVIDIA GPU Compute: 3 tests
  Programmer / Developer System Benchmarks: 5 tests
  Scientific Computing: 2 tests
  Server: 2 tests
  Server CPU Tests: 2 tests
  Single-Threaded: 2 tests
  Vulkan Compute: 3 tests

Test Runs:

  Run 1: December 31 2020, test duration 4 Hours, 52 Minutes
  Run 2: December 31 2020, test duration 4 Hours, 49 Minutes
  Run 3: January 01 2021, test duration 4 Hours, 52 Minutes
  Run 4: January 01 2021, test duration 4 Hours, 49 Minutes
  Average run duration: 4 Hours, 50 Minutes

Core i7 9750H EOY - OpenBenchmarking.org - Phoronix Test Suite

System Details:
  Processor: Intel Core i7-9750H @ 4.50GHz (6 Cores / 12 Threads)
  Motherboard: Notebook P95_96_97Ex Rx (1.07.13MIN29 BIOS)
  Chipset: Intel Cannon Lake PCH
  Memory: 32GB
  Disk: 1000GB Samsung SSD 970 EVO Plus 1TB
  Graphics: Intel UHD 630 3GB (1150MHz)
  Audio: Realtek ALC1220
  Network: Realtek RTL8111/8168/8411 + Intel-AC 9560
  OS: Ubuntu 20.04
  Kernel: 5.7.0-999-generic (x86_64) 20200530
  Desktop: GNOME Shell 3.36.4
  Display Server: X Server 1.20.8
  Display Driver: modesetting 1.20.8
  OpenGL: 4.6 Mesa 20.0.4
  Vulkan: 1.2.131
  Compiler: GCC 9.3.0
  File-System: ext4
  Screen Resolution: 1920x1080

Core I7 9750H EOY Benchmarks - System Logs:
  Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
  Scaling Governor: intel_pstate powersave
  CPU Microcode: 0xde
  Thermald: 1.9.1
  Security mitigations: itlb_multihit: KVM: Mitigation of Split huge pages + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes SMT vulnerable + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full generic retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling + tsx_async_abort: Not affected

Result Overview (Run 1 / Run 2 / Run 3 / Run 4, normalized results spanning roughly 100% to 105%). Tests covered: Monkey Audio Encoding, PHPBench, Coremark, Timed MAFFT Alignment, NCNN, SQLite Speedtest, Timed Eigen Compilation, Cryptsetup, Ogg Audio Encoding, BRL-CAD, Timed FFmpeg Compilation, VkResample, Build2, Unpacking Firefox, Opus Codec Encoding, oneDNN, Timed HMMer Search, VkFFT, WavPack Audio Encoding, CLOMP.
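The overview normalizes each test's result against the runs being compared, which is why the scale only spans 100% to 105%. When many normalized benchmark scores are aggregated into one figure, the geometric mean is the usual choice because it does not depend on which run is picked as the baseline. A minimal sketch in Python (the scores below are hypothetical, not taken from this result file):

```python
from math import prod

def geometric_mean(values):
    """Geometric mean: the n-th root of the product of n positive values."""
    assert all(v > 0 for v in values)
    return prod(values) ** (1 / len(values))

# Hypothetical normalized scores for one run (1.0 = fastest run per test).
normalized = [1.00, 1.03, 0.98, 1.05]
print(round(geometric_mean(normalized), 4))
```

Unlike the arithmetic mean, the geometric mean of ratios gives the same ranking regardless of the normalization baseline, which is why result aggregators tend to prefer it.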

[Flattened summary table listing every per-test result for Runs 1 through 4; the individual test results, with error statistics, follow below.]

Monkey Audio Encoding

This test times how long it takes to encode a sample WAV file to Monkey's Audio APE format. Learn more via the OpenBenchmarking.org test page.

Monkey Audio Encoding 3.99.6, WAV To APE (Seconds, Fewer Is Better):
  Run 1: 11.05 (SE +/- 0.04, N = 5; Min 10.93 / Avg 11.05 / Max 11.16)
  Run 2: 11.42 (SE +/- 0.04, N = 5; Min 11.32 / Avg 11.42 / Max 11.57)
  Run 3: 10.97 (SE +/- 0.02, N = 5; Min 10.92 / Avg 10.97 / Max 11.02)
  Run 4: 11.62 (SE +/- 0.09, N = 5; Min 11.35 / Avg 11.62 / Max 11.89)
  Compiler: (CXX) g++ options: -O3 -pedantic -rdynamic -lrt
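Each result is reported as an average together with "SE +/- x, N = y": the standard error of the mean over N repeated runs, i.e. the sample standard deviation divided by the square root of N. A small sketch of how such a figure is computed (the sample times here are hypothetical):

```python
from math import sqrt
from statistics import mean, stdev

def standard_error(samples):
    """Standard error of the mean: sample stdev divided by sqrt(N)."""
    return stdev(samples) / sqrt(len(samples))

# Hypothetical encode times (seconds) from five repeated runs.
times = [11.0, 11.1, 10.9, 11.2, 11.05]
print(f"Avg {mean(times):.2f}, SE +/- {standard_error(times):.2f}, N = {len(times)}")
```

A smaller SE relative to the average indicates the repeated runs agreed closely, so the reported mean is more trustworthy.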

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

PHPBench 0.8.1, PHP Benchmark Suite (Score, More Is Better):
  Run 1: 703856 (SE +/- 10003.74, N = 3; Min 684153 / Avg 703855.67 / Max 716720)
  Run 2: 724360 (SE +/- 3349.76, N = 3; Min 718130 / Avg 724359.67 / Max 729609)
  Run 3: 722538 (SE +/- 2965.05, N = 3; Min 717618 / Avg 722538 / Max 727865)
  Run 4: 745275 (SE +/- 1307.68, N = 3; Min 742784 / Avg 745274.67 / Max 747211)

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup, Twofish-XTS 512b Encryption (MiB/s, More Is Better):
  Run 1: 446.1 (SE +/- 2.54, N = 3; Min 441 / Avg 446.07 / Max 449)
  Run 2: 426.4 (SE +/- 11.67, N = 3; Min 412.1 / Avg 426.37 / Max 449.5)
  Run 3: 446.2 (SE +/- 3.92, N = 3; Min 438.4 / Avg 446.23 / Max 450.4)
  Run 4: 435.4 (SE +/- 10.19, N = 3; Min 416.2 / Avg 435.43 / Max 450.9)

Cryptsetup, Twofish-XTS 256b Encryption (MiB/s, More Is Better):
  Run 1: 442.5 (SE +/- 5.35, N = 3; Min 432.4 / Avg 442.53 / Max 450.6)
  Run 2: 430.8 (SE +/- 10.07, N = 3; Min 413.4 / Avg 430.8 / Max 448.3)
  Run 3: 445.2 (SE +/- 4.65, N = 3; Min 436 / Avg 445.17 / Max 451.1)
  Run 4: 433.6 (SE +/- 9.61, N = 3; Min 416.3 / Avg 433.57 / Max 449.5)

Cryptsetup, PBKDF2-sha512 (Iterations Per Second, More Is Better):
  Run 1: 1828912 (SE +/- 1062.67, N = 3; Min 1826787 / Avg 1828912.33 / Max 1829975)
  Run 2: 1778523 (SE +/- 15473.88, N = 3; Min 1747626 / Avg 1778523 / Max 1795506)
  Run 3: 1829986 (SE +/- 3199.33, N = 3; Min 1826787 / Avg 1829986.33 / Max 1836385)
  Run 4: 1831049 (SE +/- 2822.26, N = 3; Min 1826787 / Avg 1831049 / Max 1836385)
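The PBKDF2 figures count key-derivation iterations per second. The same primitive is available in Python's hashlib, which can give a rough feel for what is being measured (an illustration only; cryptsetup benchmarks its own C implementation, so the numbers are not comparable):

```python
import hashlib
import time

# Derive a key with PBKDF2-HMAC-SHA512, the primitive behind the
# cryptsetup PBKDF2-sha512 benchmark (illustrative parameters only).
ITERATIONS = 100_000
start = time.perf_counter()
key = hashlib.pbkdf2_hmac("sha512", b"passphrase", b"salt1234", ITERATIONS)
elapsed = time.perf_counter() - start
print(f"{len(key)}-byte key, ~{ITERATIONS / elapsed:,.0f} iterations/sec")
```

Higher iterations per second means the system can afford a higher iteration count for the same unlock latency, which is why cryptsetup calibrates PBKDF2 cost from this benchmark.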

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Deconvolution Batch shapes_1d, Data Type: u8s8f32, Engine: CPU (ms, Fewer Is Better):
  Run 1: 10.02868 (SE +/- 0.09774, N = 3; Min 9.83 / Avg 10.03 / Max 10.13; MIN: 7.81)
  Run 2: 9.86713 (SE +/- 0.03232, N = 3; Min 9.82 / Avg 9.87 / Max 9.93; MIN: 7.76)
  Run 3: 9.89771 (SE +/- 0.02224, N = 3; Min 9.87 / Avg 9.9 / Max 9.94; MIN: 7.78)
  Run 4: 10.15560 (SE +/- 0.02385, N = 3; Min 10.11 / Avg 10.16 / Max 10.19; MIN: 8.06)
  Compiler: (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup, PBKDF2-whirlpool (Iterations Per Second, More Is Better):
  Run 1: 780585 (SE +/- 1689.92, N = 3; Min 777875 / Avg 780584.67 / Max 783689)
  Run 2: 760573 (SE +/- 736.33, N = 3; Min 759837 / Avg 760573.33 / Max 762046)
  Run 3: 781744 (SE +/- 1028.82, N = 3; Min 780190 / Avg 781744 / Max 783689)
  Run 4: 764849 (SE +/- 18853.30, N = 3; Min 727167 / Avg 764849.33 / Max 784862)

Cryptsetup, Twofish-XTS 512b Decryption (MiB/s, More Is Better):
  Run 1: 440.5 (SE +/- 10.81, N = 3; Min 419 / Avg 440.5 / Max 453.2)
  Run 2: 445.5 (SE +/- 4.74, N = 3; Min 440.5 / Avg 445.53 / Max 455)
  Run 3: 448.4 (SE +/- 6.04, N = 3; Min 436.3 / Avg 448.37 / Max 454.7)
  Run 4: 436.4 (SE +/- 8.57, N = 3; Min 423.6 / Avg 436.43 / Max 452.7)

Coremark

This is a test of EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

Coremark 1.0, CoreMark Size 666 - Iterations Per Second (Iterations/Sec, More Is Better):
  Run 1: 213456.10 (SE +/- 2550.57, N = 3; Min 208387.6 / Avg 213456.1 / Max 216489.27)
  Run 2: 211631.58 (SE +/- 428.07, N = 3; Min 210846.9 / Avg 211631.58 / Max 212320.49)
  Run 3: 210367.52 (SE +/- 1572.50, N = 3; Min 207253.89 / Avg 210367.52 / Max 212307.96)
  Run 4: 215746.32 (SE +/- 1011.06, N = 3; Min 213732.3 / Avg 215746.32 / Max 216909.94)
  Compiler: (CC) gcc options: -O2 -lrt" -lrt

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup, AES-XTS 512b Encryption (MiB/s, More Is Better):
  Run 1: 2104.7 (SE +/- 7.18, N = 3; Min 2090.7 / Avg 2104.73 / Max 2114.4)
  Run 2: 2064.6 (SE +/- 38.22, N = 3; Min 1989.4 / Avg 2064.6 / Max 2114.1)
  Run 3: 2076.4 (SE +/- 46.45, N = 3; Min 1983.7 / Avg 2076.4 / Max 2127.9)
  Run 4: 2115.1 (SE +/- 7.16, N = 3; Min 2101.4 / Avg 2115.13 / Max 2125.5)

Timed MAFFT Alignment

This test performs an alignment of 100 pyruvate decarboxylase sequences. Learn more via the OpenBenchmarking.org test page.

Timed MAFFT Alignment 7.471, Multiple Sequence Alignment - LSU RNA (Seconds, Fewer Is Better):
  Run 1: 10.49 (SE +/- 0.05, N = 3; Min 10.44 / Avg 10.49 / Max 10.59)
  Run 2: 10.73 (SE +/- 0.11, N = 3; Min 10.51 / Avg 10.73 / Max 10.88)
  Run 3: 10.53 (SE +/- 0.05, N = 3; Min 10.44 / Avg 10.53 / Max 10.63)
  Run 4: 10.61 (SE +/- 0.16, N = 3; Min 10.31 / Avg 10.61 / Max 10.84)
  Compiler: (CC) gcc options: -std=c99 -O3 -lm -lpthread

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: Vulkan GPU, Model: resnet18 (ms, Fewer Is Better):
  Run 1: 18.55 (SE +/- 0.05, N = 3; Min 18.45 / Avg 18.55 / Max 18.61; MIN: 18.17 / MAX: 40.98)
  Run 2: 18.14 (SE +/- 0.60, N = 3; Min 16.95 / Avg 18.14 / Max 18.87; MIN: 16.78 / MAX: 30.17)
  Run 3: 18.51 (SE +/- 0.04, N = 3; Min 18.43 / Avg 18.51 / Max 18.58; MIN: 18.31 / MAX: 22.04)
  Run 4: 18.54 (SE +/- 0.05, N = 3; Min 18.44 / Avg 18.54 / Max 18.62; MIN: 18.33 / MAX: 21.98)
  Compiler: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: Vulkan GPU, Model: alexnet (ms, Fewer Is Better):
  Run 1: 15.42 (SE +/- 0.03, N = 3; Min 15.38 / Avg 15.42 / Max 15.49; MIN: 15.21 / MAX: 26.35)
  Run 2: 15.15 (SE +/- 0.42, N = 3; Min 14.33 / Avg 15.15 / Max 15.7; MIN: 14.28 / MAX: 24.62)
  Run 3: 15.45 (SE +/- 0.03, N = 3; Min 15.39 / Avg 15.45 / Max 15.51; MIN: 15.23 / MAX: 27.92)
  Run 4: 15.48 (SE +/- 0.04, N = 3; Min 15.41 / Avg 15.48 / Max 15.53; MIN: 15.27 / MAX: 18.81)
  Compiler: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Deconvolution Batch shapes_3d, Data Type: f32, Engine: CPU (ms, Fewer Is Better):
  Run 1: 12.80 (SE +/- 0.02, N = 3; Min 12.77 / Avg 12.8 / Max 12.84; MIN: 10.51)
  Run 2: 12.79 (SE +/- 0.02, N = 3; Min 12.74 / Avg 12.79 / Max 12.82; MIN: 10.49)
  Run 3: 12.82 (SE +/- 0.02, N = 3; Min 12.79 / Avg 12.82 / Max 12.86; MIN: 10.53)
  Run 4: 12.56 (SE +/- 0.15, N = 15; Min 10.84 / Avg 12.56 / Max 12.84; MIN: 10.48)
  Compiler: (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0, Harness: Deconvolution Batch shapes_1d, Data Type: f32, Engine: CPU (ms, Fewer Is Better):
  Run 1: 9.59491 (SE +/- 0.06881, N = 3; Min 9.46 / Avg 9.59 / Max 9.67; MIN: 8.55)
  Run 2: 9.73463 (SE +/- 0.01778, N = 3; Min 9.7 / Avg 9.73 / Max 9.76; MIN: 8.39)
  Run 3: 9.59132 (SE +/- 0.10858, N = 3; Min 9.38 / Avg 9.59 / Max 9.73; MIN: 8.53)
  Run 4: 9.53953 (SE +/- 0.07559, N = 3; Min 9.4 / Avg 9.54 / Max 9.67; MIN: 8.54)
  Compiler: (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup, Twofish-XTS 256b Decryption (MiB/s, More Is Better):
  Run 1: 442.0 (SE +/- 7.84, N = 3; Min 428 / Avg 442.33 / Max 455)
  Run 2: 442.3 (SE +/- 6.85, N = 3; Min 430.8 / Avg 442.3 / Max 454.5)
  Run 3: 447.3 (SE +/- 6.93, N = 3; Min 433.5 / Avg 447.33 / Max 455.1)
  Run 4: 438.5 (SE +/- 8.35, N = 3; Min 426.6 / Avg 438.5 / Max 454.6)

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: IP Shapes 1D, Data Type: f32, Engine: CPU (ms, Fewer Is Better):
  Run 1: 6.80166 (SE +/- 0.01210, N = 3; Min 6.78 / Avg 6.8 / Max 6.82; MIN: 5.74)
  Run 2: 6.92948 (SE +/- 0.01289, N = 3; Min 6.9 / Avg 6.93 / Max 6.95; MIN: 5.87)
  Run 3: 6.80328 (SE +/- 0.05391, N = 14; Min 6.11 / Avg 6.8 / Max 6.92; MIN: 5.3)
  Run 4: 6.79428 (SE +/- 0.01427, N = 3; Min 6.77 / Avg 6.79 / Max 6.82; MIN: 5.69)
  Compiler: (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: Vulkan GPU, Model: squeezenet_ssd (ms, Fewer Is Better):
  Run 1: 24.42 (SE +/- 0.11, N = 3; Min 24.24 / Avg 24.42 / Max 24.62; MIN: 24.15 / MAX: 35.82)
  Run 2: 24.79 (SE +/- 0.10, N = 3; Min 24.59 / Avg 24.79 / Max 24.94; MIN: 24.43 / MAX: 36)
  Run 3: 24.33 (SE +/- 0.10, N = 3; Min 24.18 / Avg 24.33 / Max 24.53; MIN: 23.89 / MAX: 35.37)
  Run 4: 24.38 (SE +/- 0.07, N = 3; Min 24.24 / Avg 24.38 / Max 24.47; MIN: 23.95 / MAX: 35.5)
  Compiler: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.

SQLite Speedtest 3.30, Timed Time - Size 1,000 (Seconds, Fewer Is Better):
  Run 1: 62.07 (SE +/- 0.23, N = 3; Min 61.67 / Avg 62.07 / Max 62.45)
  Run 2: 62.69 (SE +/- 0.11, N = 3; Min 62.47 / Avg 62.69 / Max 62.8)
  Run 3: 61.71 (SE +/- 0.09, N = 3; Min 61.57 / Avg 61.71 / Max 61.89)
  Run 4: 62.39 (SE +/- 0.05, N = 3; Min 62.34 / Avg 62.39 / Max 62.49)
  Compiler: (CC) gcc options: -O2 -ldl -lz -lpthread

Timed Eigen Compilation

This test times how long it takes to build all Eigen examples. The Eigen examples are compiled serially. Eigen is a C++ template library for linear algebra. Learn more via the OpenBenchmarking.org test page.

Timed Eigen Compilation 3.3.9, Time To Compile (Seconds, Fewer Is Better):
  Run 1: 74.23 (SE +/- 0.03, N = 3; Min 74.17 / Avg 74.23 / Max 74.28)
  Run 2: 75.39 (SE +/- 0.01, N = 3; Min 75.38 / Avg 75.39 / Max 75.42)
  Run 3: 75.01 (SE +/- 0.08, N = 3; Min 74.91 / Avg 75.01 / Max 75.17)
  Run 4: 74.26 (SE +/- 0.03, N = 3; Min 74.2 / Avg 74.26 / Max 74.32)

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: IP Shapes 3D, Data Type: f32, Engine: CPU (ms, Fewer Is Better):
  Run 1: 9.77402 (SE +/- 0.02638, N = 3; Min 9.72 / Avg 9.77 / Max 9.81; MIN: 9.6)
  Run 2: 9.79701 (SE +/- 0.01052, N = 3; Min 9.78 / Avg 9.8 / Max 9.81; MIN: 9.59)
  Run 3: 9.79708 (SE +/- 0.01833, N = 3; Min 9.76 / Avg 9.8 / Max 9.82; MIN: 9.63)
  Run 4: 9.91124 (SE +/- 0.00660, N = 3; Min 9.9 / Avg 9.91 / Max 9.92; MIN: 9.71)
  Compiler: (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: Vulkan GPU, Model: mobilenet (ms, Fewer Is Better):
  Run 1: 23.43 (SE +/- 0.06, N = 3; Min 23.31 / Avg 23.43 / Max 23.53; MIN: 21.91 / MAX: 34.62)
  Run 2: 23.66 (SE +/- 0.25, N = 3; Min 23.18 / Avg 23.66 / Max 24; MIN: 22.8 / MAX: 37.21)
  Run 3: 23.39 (SE +/- 0.08, N = 3; Min 23.26 / Avg 23.39 / Max 23.54; MIN: 22.81 / MAX: 35.57)
  Run 4: 23.34 (SE +/- 0.11, N = 3; Min 23.13 / Avg 23.34 / Max 23.5; MIN: 22.75 / MAX: 36.5)
  Compiler: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

VkResample

VkResample is a Vulkan-based image upscaling library based on VkFFT. The sample input file is upscaling a 4K image to 8K using Vulkan-based GPU acceleration. Learn more via the OpenBenchmarking.org test page.

VkResample 1.0, Upscale: 2x, Precision: Double (ms, Fewer Is Better):
  Run 1: 1008.56 (SE +/- 4.15, N = 3; Min 1001.96 / Avg 1008.56 / Max 1016.22)
  Run 2: 1001.20 (SE +/- 0.13, N = 3; Min 1000.94 / Avg 1001.2 / Max 1001.36)
  Run 3: 1005.07 (SE +/- 2.73, N = 3; Min 1000.38 / Avg 1005.07 / Max 1009.85)
  Run 4: 1014.45 (SE +/- 0.88, N = 3; Min 1013.5 / Avg 1014.45 / Max 1016.2)
  Compiler: (CXX) g++ options: -O3 -pthread

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Recurrent Neural Network Inference, Data Type: f32, Engine: CPU (ms, Fewer Is Better):
  Run 1: 3427.62 (SE +/- 7.89, N = 3; Min 3411.91 / Avg 3427.62 / Max 3436.76; MIN: 3392.93)
  Run 2: 3454.22 (SE +/- 23.16, N = 3; Min 3407.98 / Avg 3454.22 / Max 3479.69; MIN: 3391.1)
  Run 3: 3410.08 (SE +/- 9.95, N = 3; Min 3395.1 / Avg 3410.08 / Max 3428.91; MIN: 3353.35)
  Run 4: 3445.59 (SE +/- 5.51, N = 3; Min 3436.22 / Avg 3445.59 / Max 3455.3; MIN: 3407.83)
  Compiler: (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0, Harness: Matrix Multiply Batch Shapes Transformer, Data Type: u8s8f32, Engine: CPU (ms, Fewer Is Better):
  Run 1: 5.73586 (SE +/- 0.01607, N = 3; Min 5.7 / Avg 5.74 / Max 5.76; MIN: 4.84)
  Run 2: 5.77450 (SE +/- 0.01018, N = 3; Min 5.76 / Avg 5.77 / Max 5.79; MIN: 4.85)
  Run 3: 5.80983 (SE +/- 0.00559, N = 3; Min 5.8 / Avg 5.81 / Max 5.82; MIN: 4.89)
  Run 4: 5.78522 (SE +/- 0.00883, N = 3; Min 5.77 / Avg 5.79 / Max 5.8; MIN: 4.9)
  Compiler: (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Ogg Audio Encoding

This test times how long it takes to encode a sample WAV file to Ogg format using the reference Xiph.org tools/libraries. Learn more via the OpenBenchmarking.org test page.

Ogg Audio Encoding 1.3.4, WAV To Ogg (Seconds, Fewer Is Better):
  Run 1: 20.19 (SE +/- 0.01, N = 3; Min 20.17 / Avg 20.19 / Max 20.22)
  Run 2: 20.23 (SE +/- 0.07, N = 3; Min 20.16 / Avg 20.23 / Max 20.36)
  Run 3: 20.17 (SE +/- 0.02, N = 3; Min 20.15 / Avg 20.17 / Max 20.21)
  Run 4: 20.42 (SE +/- 0.26, N = 3; Min 20.16 / Avg 20.42 / Max 20.94)
  Compiler: (CC) gcc options: -O2 -ffast-math -fsigned-char

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: Vulkan GPU, Model: yolov4-tiny (ms, Fewer Is Better):
  Run 1: 33.51 (SE +/- 0.06, N = 3; Min 33.39 / Avg 33.51 / Max 33.61; MIN: 33.23 / MAX: 44.41)
  Run 2: 33.86 (SE +/- 0.34, N = 3; Min 33.2 / Avg 33.86 / Max 34.34; MIN: 32.71 / MAX: 44.51)
  Run 3: 33.47 (SE +/- 0.08, N = 3; Min 33.32 / Avg 33.47 / Max 33.58; MIN: 33.21 / MAX: 44)
  Run 4: 33.45 (SE +/- 0.09, N = 3; Min 33.28 / Avg 33.45 / Max 33.56; MIN: 32.66 / MAX: 46.08)
  Compiler: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with a built-in benchmark mode; version 7.30.8 was tested here. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.30.8, VGR Performance Metric (More Is Better):
  Run 1: 59947
  Run 2: 60253
  Run 3: 59569
  Run 4: 59541
  Compiler: (CXX) g++ options: -std=c++11 -pipe -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -rdynamic -lSM -lICE -lXi -lGLU -lGL -lGLdispatch -lX11 -lXext -lXrender -lpthread -ldl -luuid -lm

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Recurrent Neural Network Training, Data Type: bf16bf16bf16, Engine: CPU (ms, Fewer Is Better):
  Run 1: 6403.93 (SE +/- 18.98, N = 3; Min 6366.87 / Avg 6403.93 / Max 6429.55; MIN: 6326.05)
  Run 2: 6328.70 (SE +/- 13.20, N = 3; Min 6309.66 / Avg 6328.7 / Max 6354.07; MIN: 6288.87)
  Run 3: 6369.95 (SE +/- 9.29, N = 3; Min 6352.28 / Avg 6369.95 / Max 6383.73; MIN: 6301.48)
  Run 4: 6377.37 (SE +/- 17.19, N = 3; Min 6350.91 / Avg 6377.37 / Max 6409.6; MIN: 6330.45)
  Compiler: (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  Run 1: 5.75257  (SE +/- 0.00709, N = 3; spread 5.74 - 5.77; MIN: 4.73)
  Run 2: 5.78370  (SE +/- 0.05534, N = 12; spread 5.36 - 6.19; MIN: 4.72)
  Run 3: 5.82049  (SE +/- 0.00647, N = 3; spread 5.81 - 5.83; MIN: 4.73)
  Run 4: 5.78336  (SE +/- 0.00722, N = 3; spread 5.77 - 5.79; MIN: 4.8)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  Run 1: 3457.32  (SE +/- 20.71, N = 3; spread 3416.08 - 3481.28; MIN: 3398.06)
  Run 2: 3418.99  (SE +/- 10.03, N = 3; spread 3408.59 - 3439.04; MIN: 3391.87)
  Run 3: 3440.22  (SE +/- 4.57, N = 3; spread 3433.63 - 3448.99; MIN: 3388.24)
  Run 4: 3444.93  (SE +/- 1.10, N = 3; spread 3443.45 - 3447.08; MIN: 3415.6)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, fewer is better)
  Run 1: 6341.74  (SE +/- 26.71, N = 3; spread 6290.62 - 6380.71; MIN: 6254.75)
  Run 2: 6404.44  (SE +/- 12.16, N = 3; spread 6380.5 - 6420.08; MIN: 6340.77)
  Run 3: 6368.25  (SE +/- 7.79, N = 3; spread 6359.52 - 6383.78; MIN: 6328.69)
  Run 4: 6382.78  (SE +/- 20.81, N = 3; spread 6345.1 - 6416.91; MIN: 6299.02)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.
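The MiB/s figures below are simply bytes processed divided by elapsed time. A self-contained sketch of that throughput calculation, using zlib's CRC32 as a stand-in workload (cryptsetup itself benchmarks real ciphers such as AES-XTS and Serpent-XTS):

```python
import time
import zlib

# 8 MiB of data; zlib.crc32 is only a stand-in for a cipher kernel.
data = b"\x00" * (8 * 1024 * 1024)

start = time.perf_counter()
checksum = zlib.crc32(data)
elapsed = time.perf_counter() - start

mib_per_s = (len(data) / (1024 * 1024)) / elapsed
print(f"Processed 8 MiB in {elapsed:.6f} s -> {mib_per_s:.1f} MiB/s")
```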

Cryptsetup - Serpent-XTS 256b Decryption (MiB/s, more is better)
  Run 1: 828.0  (SE +/- 0.81, N = 3; spread 826.5 - 829.3)
  Run 2: 824.4  (SE +/- 2.80, N = 3; spread 820.9 - 829.9)
  Run 3: 830.8  (SE +/- 1.31, N = 3; spread 829.2 - 833.4)
  Run 4: 822.9  (SE +/- 6.17, N = 3; spread 810.8 - 831.1)

Cryptsetup - Serpent-XTS 512b Decryption (MiB/s, more is better)
  Run 1: 830.4  (SE +/- 1.59, N = 3; spread 827.3 - 832.5)
  Run 2: 823.5  (SE +/- 4.94, N = 3; spread 813.7 - 829.3)
  Run 3: 830.9  (SE +/- 0.97, N = 3; spread 829.6 - 832.8)
  Run 4: 824.8  (SE +/- 4.45, N = 3; spread 816.1 - 830.8)

Cryptsetup - Serpent-XTS 256b Encryption (MiB/s, more is better)
  Run 1: 811.0  (SE +/- 3.15, N = 3; spread 804.8 - 815.2)
  Run 2: 807.6  (SE +/- 3.18, N = 3; spread 801.2 - 810.9)
  Run 3: 814.8  (SE +/- 0.85, N = 3; spread 813.5 - 816.4)
  Run 4: 809.6  (SE +/- 4.68, N = 3; spread 800.6 - 816.3)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: Vulkan GPU - Model: vgg16 (ms, fewer is better)
  Run 1: 69.78  (SE +/- 0.11, N = 3; spread 69.62 - 69.98; MIN: 69.46 / MAX: 81.23)
  Run 2: 69.43  (SE +/- 0.91, N = 3; spread 67.62 - 70.34; MIN: 67.3 / MAX: 104.68)
  Run 3: 69.86  (SE +/- 0.13, N = 3; spread 69.73 - 70.12; MIN: 69.45 / MAX: 91.5)
  Run 4: 70.02  (SE +/- 0.02, N = 3; spread 69.98 - 70.05; MIN: 69.53 / MAX: 93.37)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU - Model: resnet50 (ms, fewer is better)
  Run 1: 37.71  (SE +/- 0.11, N = 3; spread 37.49 - 37.87; MIN: 36.79 / MAX: 49.82)
  Run 2: 37.98  (SE +/- 0.29, N = 3; spread 37.4 - 38.28; MIN: 33.68 / MAX: 49.65)
  Run 3: 37.66  (SE +/- 0.20, N = 3; spread 37.34 - 38.04; MIN: 36.88 / MAX: 61.06)
  Run 4: 37.73  (SE +/- 0.10, N = 3; spread 37.61 - 37.92; MIN: 36.87 / MAX: 42.94)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

Timed FFmpeg Compilation 4.2.2 - Time To Compile (Seconds, fewer is better)
  Run 1: 101.28  (SE +/- 0.16, N = 3; spread 101.02 - 101.59)
  Run 2: 101.73  (SE +/- 0.18, N = 3; spread 101.5 - 102.09)
  Run 3: 102.08  (SE +/- 0.40, N = 3; spread 101.34 - 102.69)
  Run 4: 102.12  (SE +/- 0.27, N = 3; spread 101.62 - 102.53)
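Run-to-run variation here is well under one percent. When many such tests are aggregated into a single score per run (as the "Show Overall Geometric Mean" view does), a geometric mean of normalized results is the usual choice, since it treats a 2x win and a 2x loss symmetrically. A sketch with hypothetical per-test ratios:

```python
from statistics import geometric_mean  # Python 3.8+

# Hypothetical per-test ratios for one run vs. a baseline run
# (values > 1.0 mean this run was faster on that test).
ratios = [1.008, 0.996, 1.004, 1.001]

overall = geometric_mean(ratios)
print(f"Overall geometric mean: {overall:.4f}")
```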

oneDNN

oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  Run 1: 3421.60  (SE +/- 14.67, N = 3; spread 3395.11 - 3445.75; MIN: 3374.62)
  Run 2: 3443.81  (SE +/- 8.62, N = 3; spread 3427.62 - 3457.03; MIN: 3400.63)
  Run 3: 3424.77  (SE +/- 10.73, N = 3; spread 3412.61 - 3446.17; MIN: 3395.01)
  Run 4: 3415.63  (SE +/- 3.92, N = 3; spread 3409.2 - 3422.73; MIN: 3390.01)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

NCNN

NCNN 20201218 - Target: CPU - Model: mobilenet (ms, fewer is better)
  Run 1: 23.17  (SE +/- 0.06, N = 3; spread 23.08 - 23.29; MIN: 22.8 / MAX: 26.58)
  Run 2: 23.05  (SE +/- 0.02, N = 3; spread 23.02 - 23.09; MIN: 22.78 / MAX: 26.55)
  Run 3: 23.13  (SE +/- 0.02, N = 3; spread 23.08 - 23.15; MIN: 22.78 / MAX: 26.54)
  Run 4: 23.24  (SE +/- 0.08, N = 3; spread 23.09 - 23.36; MIN: 22.8 / MAX: 34.1)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: resnet18 (ms, fewer is better)
  Run 1: 18.02  (SE +/- 0.54, N = 3; spread 16.95 - 18.59; MIN: 16.77 / MAX: 28.42)
  Run 2: 17.91  (SE +/- 0.54, N = 3; spread 16.82 - 18.46; MIN: 16.71 / MAX: 23.14)
  Run 3: 18.05  (SE +/- 0.48, N = 3; spread 17.09 - 18.57; MIN: 16.77 / MAX: 29.51)
  Run 4: 17.99  (SE +/- 0.51, N = 3; spread 16.97 - 18.55; MIN: 16.77 / MAX: 20.61)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

VkResample

VkResample is a Vulkan-based image upscaling library based on VkFFT. The sample workload upscales a 4K image to 8K using Vulkan-based GPU acceleration. Learn more via the OpenBenchmarking.org test page.

VkResample 1.0 - Upscale: 2x - Precision: Single (ms, fewer is better)
  Run 1: 461.63  (SE +/- 0.13, N = 3; spread 461.43 - 461.87)
  Run 2: 464.40  (SE +/- 0.33, N = 3; spread 463.78 - 464.9)
  Run 3: 463.05  (SE +/- 0.19, N = 3; spread 462.84 - 463.43)
  Run 4: 464.96  (SE +/- 0.28, N = 3; spread 464.47 - 465.44)
  1. (CXX) g++ options: -O3 -pthread

Cryptsetup

Cryptsetup - Serpent-XTS 512b Encryption (MiB/s, more is better)
  Run 1: 815.1  (SE +/- 1.12, N = 3; spread 813 - 816.8)
  Run 2: 812.7  (SE +/- 1.43, N = 3; spread 811.3 - 815.6)
  Run 3: 813.8  (SE +/- 0.48, N = 3; spread 812.8 - 814.3)
  Run 4: 809.3  (SE +/- 3.44, N = 3; spread 802.4 - 813.1)

NCNN

NCNN 20201218 - Target: CPU - Model: yolov4-tiny (ms, fewer is better)
  Run 1: 33.37  (SE +/- 0.12, N = 3; spread 33.14 - 33.56; MIN: 32.64 / MAX: 55.85)
  Run 2: 33.19  (SE +/- 0.06, N = 3; spread 33.08 - 33.3; MIN: 32.66 / MAX: 46.16)
  Run 3: 33.21  (SE +/- 0.02, N = 3; spread 33.17 - 33.24; MIN: 32.65 / MAX: 40.32)
  Run 4: 33.42  (SE +/- 0.15, N = 3; spread 33.17 - 33.69; MIN: 32.68 / MAX: 44.68)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

oneDNN

oneDNN 2.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, fewer is better)
  Run 1: 4.10014  (SE +/- 0.00387, N = 3; spread 4.1 - 4.11; MIN: 4.01)
  Run 2: 4.11267  (SE +/- 0.00554, N = 3; spread 4.1 - 4.12; MIN: 4.01)
  Run 3: 4.10713  (SE +/- 0.01007, N = 3; spread 4.09 - 4.12; MIN: 4.01)
  Run 4: 4.08784  (SE +/- 0.00557, N = 3; spread 4.08 - 4.1; MIN: 4)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better)
  Run 1: 17.29  (SE +/- 0.04, N = 3; spread 17.22 - 17.34; MIN: 17.1)
  Run 2: 17.27  (SE +/- 0.02, N = 3; spread 17.24 - 17.29; MIN: 17.11)
  Run 3: 17.23  (SE +/- 0.02, N = 3; spread 17.22 - 17.27; MIN: 17.11)
  Run 4: 17.33  (SE +/- 0.05, N = 3; spread 17.28 - 17.44; MIN: 17.13)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

NCNN

NCNN 20201218 - Target: CPU - Model: vgg16 (ms, fewer is better)
  Run 1: 69.52  (SE +/- 0.11, N = 3; spread 69.39 - 69.75; MIN: 67.25 / MAX: 80.64)
  Run 2: 69.19  (SE +/- 0.15, N = 3; spread 68.89 - 69.37; MIN: 67.17 / MAX: 78.76)
  Run 3: 69.48  (SE +/- 0.03, N = 3; spread 69.43 - 69.52; MIN: 67.44 / MAX: 83.1)
  Run 4: 69.34  (SE +/- 0.63, N = 3; spread 68.08 - 69.98; MIN: 67.44 / MAX: 82.12)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: regnety_400m (ms, fewer is better)
  Run 1: 16.18  (SE +/- 0.54, N = 3; spread 15.1 - 16.73; MIN: 14.99 / MAX: 20.99)
  Run 2: 16.12  (SE +/- 0.49, N = 3; spread 15.15 - 16.63; MIN: 15.04 / MAX: 28.1)
  Run 3: 16.19  (SE +/- 0.54, N = 3; spread 15.11 - 16.77; MIN: 14.99 / MAX: 19.01)
  Run 4: 16.16  (SE +/- 0.44, N = 3; spread 15.28 - 16.62; MIN: 15.08 / MAX: 27.26)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

oneDNN

oneDNN 2.0 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  Run 1: 6362.64  (SE +/- 12.24, N = 3; spread 6339.48 - 6381.09; MIN: 6320.02)
  Run 2: 6382.08  (SE +/- 25.22, N = 3; spread 6344.02 - 6429.77; MIN: 6294.61)
  Run 3: 6383.52  (SE +/- 38.17, N = 3; spread 6321.25 - 6452.91; MIN: 6259.53)
  Run 4: 6389.88  (SE +/- 19.20, N = 3; spread 6354.31 - 6420.19; MIN: 6310.82)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Cryptsetup

Cryptsetup - AES-XTS 512b Decryption (MiB/s, more is better)
  Run 1: 2078.3  (SE +/- 7.85, N = 3; spread 2062.6 - 2086.2)
  Run 2: 2074.2  (SE +/- 4.43, N = 3; spread 2068.7 - 2083)
  Run 3: 2082.9  (SE +/- 2.92, N = 3; spread 2079.6 - 2088.7)
  Run 4: 2078.9  (SE +/- 8.81, N = 3; spread 2061.5 - 2089.9)

oneDNN

oneDNN 2.0 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  Run 1: 2.77079  (SE +/- 0.00676, N = 3; spread 2.76 - 2.78; MIN: 2.62)
  Run 2: 2.76864  (SE +/- 0.00726, N = 3; spread 2.75 - 2.78; MIN: 2.61)
  Run 3: 2.77604  (SE +/- 0.00532, N = 3; spread 2.77 - 2.79; MIN: 2.62)
  Run 4: 2.76446  (SE +/- 0.00419, N = 3; spread 2.76 - 2.77; MIN: 2.61)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Unpacking Firefox

This simple test profile measures how long it takes to extract the .tar.xz source package of the Mozilla Firefox Web Browser. Learn more via the OpenBenchmarking.org test page.
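The same idea can be sketched with Python's tarfile module; this builds a tiny synthetic .tar.xz in memory (a stand-in for the Firefox source tarball, which is not assumed to be present) and times its extraction:

```python
import io
import tarfile
import time

# Build a small synthetic .tar.xz in memory.
payload = b"x" * 100_000
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:xz") as tar:
    info = tarfile.TarInfo(name="payload.bin")
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))
buf.seek(0)

# Time the extraction, analogous to timing `tar -xJf`.
start = time.perf_counter()
with tarfile.open(fileobj=buf, mode="r:xz") as tar:
    member = tar.getmembers()[0]
    extracted = tar.extractfile(member).read()
elapsed = time.perf_counter() - start

print(f"Extracted {len(extracted)} bytes in {elapsed:.4f} s")
```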

Unpacking Firefox 84.0 - Extracting: firefox-84.0.source.tar.xz (Seconds, fewer is better)
  Run 1: 18.88  (SE +/- 0.06, N = 4; spread 18.82 - 19.07)
  Run 2: 18.87  (SE +/- 0.02, N = 4; spread 18.83 - 18.94)
  Run 3: 18.93  (SE +/- 0.06, N = 4; spread 18.82 - 19.09)
  Run 4: 18.86  (SE +/- 0.03, N = 4; spread 18.8 - 18.93)

Cryptsetup

Cryptsetup - AES-XTS 256b Encryption (MiB/s, more is better)
  Run 1: 2359.8  (SE +/- 9.61, N = 3; spread 2347.3 - 2378.7)
  Run 2: 2358.8  (SE +/- 9.33, N = 3; spread 2343.1 - 2375.4)
  Run 3: 2366.9  (SE +/- 5.73, N = 3; spread 2355.5 - 2373.6)
  Run 4: 2357.4  (SE +/- 9.78, N = 3; spread 2346.9 - 2376.9)

NCNN

NCNN 20201218 - Target: CPU - Model: squeezenet_ssd (ms, fewer is better)
  Run 1: 24.22  (SE +/- 0.04, N = 3; spread 24.15 - 24.28; MIN: 23.95 / MAX: 34.27)
  Run 2: 24.16  (SE +/- 0.02, N = 3; spread 24.12 - 24.19; MIN: 24.01 / MAX: 27.38)
  Run 3: 24.24  (SE +/- 0.01, N = 3; spread 24.23 - 24.25; MIN: 24 / MAX: 27.39)
  Run 4: 24.23  (SE +/- 0.04, N = 3; spread 24.18 - 24.3; MIN: 24.02 / MAX: 26.28)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code with Cargo-like functionality. Learn more via the OpenBenchmarking.org test page.

Build2 0.13 - Time To Compile (Seconds, fewer is better)
  Run 1: 214.41  (SE +/- 0.35, N = 3; spread 213.89 - 215.08)
  Run 2: 214.84  (SE +/- 0.50, N = 3; spread 213.99 - 215.72)
  Run 3: 214.18  (SE +/- 0.25, N = 3; spread 213.72 - 214.59)
  Run 4: 214.86  (SE +/- 0.31, N = 3; spread 214.27 - 215.35)

Opus Codec Encoding

Opus is an open, lossy audio codec designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.

Opus Codec Encoding 1.3.1 - WAV To Opus Encode (Seconds, fewer is better)
  Run 1: 8.574  (SE +/- 0.002, N = 5; spread 8.57 - 8.58)
  Run 2: 8.574  (SE +/- 0.003, N = 5; spread 8.57 - 8.58)
  Run 3: 8.563  (SE +/- 0.003, N = 5; spread 8.55 - 8.57)
  Run 4: 8.590  (SE +/- 0.011, N = 5; spread 8.57 - 8.63)
  1. (CXX) g++ options: -fvisibility=hidden -logg -lm

Cryptsetup

Cryptsetup - AES-XTS 256b Decryption (MiB/s, more is better)
  Run 1: 2307.6  (SE +/- 8.91, N = 3; spread 2298.1 - 2325.4)
  Run 2: 2312.2  (SE +/- 6.84, N = 3; spread 2300.3 - 2324)
  Run 3: 2314.6  (SE +/- 2.97, N = 3; spread 2310.9 - 2320.5)
  Run 4: 2312.5  (SE +/- 7.50, N = 3; spread 2297.5 - 2320.7)

NCNN

NCNN 20201218 - Target: CPU - Model: resnet50 (ms, fewer is better)
  Run 1: 37.46  (SE +/- 0.02, N = 3; spread 37.42 - 37.49; MIN: 36.82 / MAX: 40.77)
  Run 2: 37.43  (SE +/- 0.00, N = 3; spread 37.43 - 37.44; MIN: 36.5 / MAX: 51.46)
  Run 3: 37.54  (SE +/- 0.01, N = 3; spread 37.51 - 37.55; MIN: 36.85 / MAX: 49.57)
  Run 4: 37.48  (SE +/- 0.05, N = 3; spread 37.4 - 37.57; MIN: 36.87 / MAX: 50.24)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

oneDNN

oneDNN 2.0 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  Run 1: 3.22676  (SE +/- 0.00484, N = 3; spread 3.22 - 3.23; MIN: 2.62)
  Run 2: 3.23231  (SE +/- 0.01078, N = 3; spread 3.21 - 3.25; MIN: 2.63)
  Run 3: 3.22945  (SE +/- 0.00749, N = 3; spread 3.21 - 3.24; MIN: 2.62)
  Run 4: 3.22357  (SE +/- 0.00785, N = 3; spread 3.21 - 3.24; MIN: 2.61)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

NCNN

NCNN 20201218 - Target: CPU - Model: alexnet (ms, fewer is better)
  Run 1: 15.01  (SE +/- 0.38, N = 3; spread 14.26 - 15.47; MIN: 14.18 / MAX: 15.71)
  Run 2: 14.99  (SE +/- 0.35, N = 3; spread 14.29 - 15.38; MIN: 14.24 / MAX: 24.74)
  Run 3: 15.00  (SE +/- 0.35, N = 3; spread 14.3 - 15.36; MIN: 14.24 / MAX: 18.29)
  Run 4: 15.03  (SE +/- 0.36, N = 3; spread 14.3 - 15.4; MIN: 14.24 / MAX: 25.99)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

VkFFT

VkFFT is a Fast Fourier Transform (FFT) library that is GPU-accelerated via the Vulkan API. The VkFFT benchmark measures FFT performance across many different sizes before returning an overall benchmark score. Learn more via the OpenBenchmarking.org test page.

VkFFT 1.1.1 (Benchmark Score, more is better)
  Run 1: 1321
  Run 2: 1321  (SE +/- 0.58, N = 3; spread 1320 - 1322)
  Run 3: 1319  (SE +/- 0.67, N = 3; spread 1318 - 1320)
  Run 4: 1322  (SE +/- 0.88, N = 3; spread 1321 - 1324)
  1. (CXX) g++ options: -O3 -pthread

Timed HMMer Search

This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of the Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.

Timed HMMer Search 3.3.1 - Pfam Database Search (Seconds, fewer is better)
  Run 1: 117.23  (SE +/- 0.03, N = 3; spread 117.18 - 117.29)
  Run 2: 117.14  (SE +/- 0.04, N = 3; spread 117.07 - 117.19)
  Run 3: 117.18  (SE +/- 0.03, N = 3; spread 117.12 - 117.22)
  Run 4: 117.40  (SE +/- 0.14, N = 3; spread 117.12 - 117.59)
  1. (CC) gcc options: -O3 -pthread -lhmmer -leasel -lm

WavPack Audio Encoding

This test times how long it takes to encode a sample WAV file to WavPack format with very high quality settings. Learn more via the OpenBenchmarking.org test page.

WavPack Audio Encoding 5.3 - WAV To WavPack (Seconds, fewer is better)
  Run 1: 14.51  (SE +/- 0.00, N = 5; spread 14.49 - 14.51)
  Run 2: 14.50  (SE +/- 0.00, N = 5; spread 14.49 - 14.51)
  Run 3: 14.53  (SE +/- 0.02, N = 5; spread 14.49 - 14.62)
  Run 4: 14.51  (SE +/- 0.00, N = 5; spread 14.5 - 14.52)
  1. (CXX) g++ options: -rdynamic

oneDNN

oneDNN 2.0 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  Run 1: 16.11  (SE +/- 0.04, N = 3; spread 16.05 - 16.18; MIN: 15.86)
  Run 2: 16.10  (SE +/- 0.02, N = 3; spread 16.07 - 16.14; MIN: 15.78)
  Run 3: 16.10  (SE +/- 0.02, N = 3; spread 16.07 - 16.13; MIN: 15.79)
  Run 4: 16.11  (SE +/- 0.02, N = 3; spread 16.07 - 16.14; MIN: 15.78)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

CLOMP

CLOMP is the C version of the Livermore OpenMP benchmark, developed to measure OpenMP overheads and other threading-related performance impacts in order to influence future system designs. This test profile configuration measures the OpenMP static-schedule speed-up across all available CPU cores, using the recommended test configuration. Learn more via the OpenBenchmarking.org test page.
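The flat 2.7x speed-up below on a 6-core CPU can be put in perspective with Amdahl's law. As a rough, illustrative model only (it ignores memory-bandwidth limits, SMT effects, and OpenMP overhead), a ~2.7x speed-up on 6 workers corresponds to roughly three quarters of the runtime being parallelizable:

```python
def amdahl_speedup(p, n):
    """Ideal speedup under Amdahl's law: parallel fraction p on n workers."""
    return 1.0 / ((1.0 - p) + p / n)

# Sweep a few parallel fractions for a 6-core machine.
for p in (0.5, 0.75, 0.9, 1.0):
    print(f"parallel fraction {p:.2f}: {amdahl_speedup(p, 6):.2f}x on 6 cores")
```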

CLOMP 1.2 - Static OMP Speedup (Speedup, more is better)
  Run 1: 2.7  (SE +/- 0.00, N = 3)
  Run 2: 2.7  (SE +/- 0.00, N = 3)
  Run 3: 2.7  (SE +/- 0.00, N = 3)
  Run 4: 2.7  (SE +/- 0.00, N = 3)
  1. (CC) gcc options: -fopenmp -O3 -lm

NCNN

NCNN 20201218 - Target: Vulkan GPU - Model: regnety_400m (ms, fewer is better)
  Run 1: 16.71  (SE +/- 0.13, N = 3; spread 16.53 - 16.96; MIN: 16.43 / MAX: 19.77)
  Run 2: 16.28  (SE +/- 0.59, N = 3; spread 15.1 - 16.99; MIN: 14.98 / MAX: 19.89)
  Run 3: 16.61  (SE +/- 0.10, N = 3; spread 16.46 - 16.8; MIN: 16.26 / MAX: 20.92)
  Run 4: 16.76  (SE +/- 0.09, N = 3; spread 16.61 - 16.93; MIN: 16.44 / MAX: 19.53)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU - Model: googlenet (ms, fewer is better)
  Run 1: 18.80  (SE +/- 0.08, N = 3; spread 18.68 - 18.94; MIN: 18.57 / MAX: 22.25)
  Run 2: 18.15  (SE +/- 0.81, N = 3; spread 16.54 - 19.16; MIN: 16.4 / MAX: 21.94)
  Run 3: 18.82  (SE +/- 0.05, N = 3; spread 18.76 - 18.91; MIN: 17.95 / MAX: 30.43)
  Run 4: 18.86  (SE +/- 0.06, N = 3; spread 18.75 - 18.92; MIN: 17.89 / MAX: 30.52)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU - Model: blazeface (ms, fewer is better)
  Run 1: 2.40  (SE +/- 0.01, N = 3; spread 2.38 - 2.42; MIN: 2.34 / MAX: 3.15)
  Run 2: 2.30  (SE +/- 0.12, N = 3; spread 2.07 - 2.45; MIN: 2.03 / MAX: 3.94)
  Run 3: 2.38  (SE +/- 0.01, N = 3; spread 2.36 - 2.4; MIN: 2.33 / MAX: 2.46)
  Run 4: 2.41  (SE +/- 0.01, N = 3; spread 2.39 - 2.42; MIN: 2.35 / MAX: 2.48)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU - Model: efficientnet-b0 (ms, fewer is better)
  Run 1: 8.69  (SE +/- 0.04, N = 3; spread 8.61 - 8.76; MIN: 8.49 / MAX: 9.23)
  Run 2: 8.39  (SE +/- 0.39, N = 3; spread 7.62 - 8.82; MIN: 7.55 / MAX: 20.09)
  Run 3: 8.65  (SE +/- 0.03, N = 3; spread 8.61 - 8.7; MIN: 8.49 / MAX: 12.07)
  Run 4: 8.72  (SE +/- 0.05, N = 3; spread 8.63 - 8.8; MIN: 8.4 / MAX: 21.56)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU - Model: mnasnet (ms, fewer is better)
  Run 1: 5.51  (SE +/- 0.05, N = 3; spread 5.45 - 5.6; MIN: 5.16 / MAX: 19.28)
  Run 2: 5.24  (SE +/- 0.30, N = 3; spread 4.64 - 5.62; MIN: 4.59 / MAX: 7.39)
  Run 3: 5.36  (SE +/- 0.12, N = 3; spread 5.12 - 5.5; MIN: 4.65 / MAX: 9.92)
  Run 4: 5.49  (SE +/- 0.02, N = 3; spread 5.44 - 5.51; MIN: 5.29 / MAX: 7.23)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU - Model: shufflenet-v2 (ms, fewer is better; N = 3 per run)
  Run    Avg    SE      Sample Min / Max   Observed MIN / MAX
  Run 1  7.83   0.04    7.79 / 7.9         7.61 / 9.86
  Run 2  7.39   0.41    6.6 / 8            6.41 / 21.93
  Run 3  7.41   0.44    6.52 / 7.88        6.45 / 19.52
  Run 4  7.83   0.05    7.72 / 7.89        7.55 / 11.43
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better; N = 3 per run)
  Run    Avg    SE      Sample Min / Max   Observed MIN / MAX
  Run 1  5.36   0.02    5.34 / 5.41        5.24 / 7.47
  Run 2  4.98   0.24    4.71 / 5.45        4.63 / 7.4
  Run 3  5.12   0.22    4.69 / 5.36        4.61 / 6.87
  Run 4  5.37   0.03    5.32 / 5.4         5.24 / 7.07
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better; N = 3 per run)
  Run    Avg    SE      Sample Min / Max   Observed MIN / MAX
  Run 1  6.55   0.06    6.46 / 6.66        6.32 / 9.95
  Run 2  6.10   0.26    5.79 / 6.62        5.68 / 8.75
  Run 3  6.30   0.21    5.87 / 6.54        5.74 / 9.41
  Run 4  6.56   0.05    6.46 / 6.62        6.33 / 10.24
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: googlenet (ms, fewer is better; N = 3 per run)
  Run    Avg    SE      Sample Min / Max   Observed MIN / MAX
  Run 1  18.07  0.77    16.53 / 18.87      16.36 / 29.81
  Run 2  18.02  0.76    16.5 / 18.79       16.35 / 32.15
  Run 3  18.13  0.69    16.75 / 18.83      16.51 / 22.3
  Run 4  18.01  0.70    16.6 / 18.71       16.42 / 22.07
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: blazeface (ms, fewer is better; N = 3 per run)
  Run    Avg    SE      Sample Min / Max   Observed MIN / MAX
  Run 1  2.26   0.11    2.04 / 2.39        1.99 / 2.47
  Run 2  2.27   0.12    2.04 / 2.41        1.99 / 12.5
  Run 3  2.26   0.11    2.04 / 2.37        1.99 / 2.47
  Run 4  2.29   0.12    2.05 / 2.43        1.99 / 13.86
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: efficientnet-b0 (ms, fewer is better; N = 3 per run)
  Run    Avg    SE      Sample Min / Max   Observed MIN / MAX
  Run 1  8.35   0.36    7.64 / 8.73        7.58 / 10.85
  Run 2  8.33   0.34    7.66 / 8.68        7.59 / 12.2
  Run 3  8.38   0.35    7.68 / 8.75        7.57 / 12.25
  Run 4  8.31   0.33    7.64 / 8.65        7.57 / 10.61
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: mnasnet (ms, fewer is better; N = 3 per run)
  Run    Avg    SE      Sample Min / Max   Observed MIN / MAX
  Run 1  5.27   0.28    4.71 / 5.63        4.62 / 9.13
  Run 2  5.24   0.29    4.67 / 5.56        4.6 / 7.38
  Run 3  5.29   0.33    4.64 / 5.66        4.57 / 17.96
  Run 4  5.21   0.27    4.66 / 5.49        4.62 / 7.42
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: shufflenet-v2 (ms, fewer is better; N = 3 per run)
  Run    Avg    SE      Sample Min / Max   Observed MIN / MAX
  Run 1  7.17   0.35    6.47 / 7.62        6.36 / 13.96
  Run 2  7.21   0.37    6.47 / 7.6         6.38 / 12.35
  Run 3  7.18   0.35    6.48 / 7.56        6.43 / 20.32
  Run 4  7.23   0.39    6.47 / 7.79        6.39 / 9.61
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better; N = 3 per run)
  Run    Avg    SE      Sample Min / Max   Observed MIN / MAX
  Run 1  4.75   0.02    4.71 / 4.79        4.65 / 6.65
  Run 2  4.72   0.02    4.67 / 4.75        4.6 / 7.3
  Run 3  4.76   0.05    4.7 / 4.85         4.65 / 6.29
  Run 4  4.91   0.21    4.69 / 5.33        4.62 / 7.08
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better; N = 3 per run)
  Run    Avg    SE      Sample Min / Max   Observed MIN / MAX
  Run 1  5.86   0.01    5.84 / 5.89        5.72 / 7.36
  Run 2  5.88   0.02    5.85 / 5.9         5.67 / 17.36
  Run 3  5.82   0.02    5.8 / 5.86         5.67 / 8.89
  Run 4  6.07   0.23    5.81 / 6.53        5.67 / 17.34
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

80 Results Shown

Monkey Audio Encoding
PHPBench
Cryptsetup:
  Twofish-XTS 512b Encryption
  Twofish-XTS 256b Encryption
  PBKDF2-sha512
oneDNN
Cryptsetup:
  PBKDF2-whirlpool
  Twofish-XTS 512b Decryption
Coremark
Cryptsetup
Timed MAFFT Alignment
NCNN:
  Vulkan GPU - resnet18
  Vulkan GPU - alexnet
oneDNN:
  Deconvolution Batch shapes_3d - f32 - CPU
  Deconvolution Batch shapes_1d - f32 - CPU
Cryptsetup
oneDNN
NCNN
SQLite Speedtest
Timed Eigen Compilation
oneDNN
NCNN
VkResample
oneDNN:
  Recurrent Neural Network Inference - f32 - CPU
  Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU
Ogg Audio Encoding
NCNN
BRL-CAD
oneDNN:
  Recurrent Neural Network Training - bf16bf16bf16 - CPU
  Deconvolution Batch shapes_3d - u8s8f32 - CPU
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU
  Recurrent Neural Network Training - f32 - CPU
Cryptsetup:
  Serpent-XTS 256b Decryption
  Serpent-XTS 512b Decryption
  Serpent-XTS 256b Encryption
NCNN:
  Vulkan GPU - vgg16
  Vulkan GPU - resnet50
Timed FFmpeg Compilation
oneDNN
NCNN:
  CPU - mobilenet
  CPU - resnet18
VkResample
Cryptsetup
NCNN
oneDNN:
  Matrix Multiply Batch Shapes Transformer - f32 - CPU
  Convolution Batch Shapes Auto - f32 - CPU
NCNN:
  CPU - vgg16
  CPU - regnety_400m
oneDNN
Cryptsetup
oneDNN
Unpacking Firefox
Cryptsetup
NCNN
Build2
Opus Codec Encoding
Cryptsetup
NCNN
oneDNN
NCNN
VkFFT
Timed HMMer Search
WavPack Audio Encoding
oneDNN
CLOMP
NCNN:
  Vulkan GPU - regnety_400m
  Vulkan GPU - googlenet
  Vulkan GPU - blazeface
  Vulkan GPU - efficientnet-b0
  Vulkan GPU - mnasnet
  Vulkan GPU - shufflenet-v2
  Vulkan GPU-v3-v3 - mobilenet-v3
  Vulkan GPU-v2-v2 - mobilenet-v2
  CPU - googlenet
  CPU - blazeface
  CPU - efficientnet-b0
  CPU - mnasnet
  CPU - shufflenet-v2
  CPU-v3-v3 - mobilenet-v3
  CPU-v2-v2 - mobilenet-v2