xeon-platinum-8380-2p-smoke-run

2 x Intel Xeon Platinum 8380 testing with an Intel M50CYP2SB2U motherboard (SE5C6200.86B.0022.D08.2103221623 BIOS) and ASPEED graphics on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2105012-IB-XEONPLATI04
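The comparison can also be driven from a script. A minimal Python sketch, assuming `phoronix-test-suite` is installed and on PATH (the result ID is the one published with this page):

```python
# Minimal sketch: reproduce this result file by invoking the Phoronix Test
# Suite from Python. Assumes `phoronix-test-suite` is installed and on PATH.
import shlex

RESULT_ID = "2105012-IB-XEONPLATI04"  # OpenBenchmarking.org ID for this result file
cmd = shlex.split(f"phoronix-test-suite benchmark {RESULT_ID}")
print(cmd)
# Uncomment to actually run the comparison (interactive and long-running):
# import subprocess
# subprocess.run(cmd, check=True)
```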
This result file covers the following test categories: AV1 (2 tests), Timed Code Compilation (6 tests), C/C++ Compiler Tests (6 tests), CPU Massive (11 tests), Creator Workloads (13 tests), Cryptography (3 tests), Encoding (4 tests), Game Development (5 tests), HPC - High Performance Computing (4 tests), Imaging (2 tests), Machine Learning (2 tests), Molecular Dynamics (2 tests), Multi-Core (16 tests), NVIDIA GPU Compute (3 tests), OpenMPI Tests (2 tests), Programmer / Developer System Benchmarks (6 tests), Python Tests (5 tests), Renderers (2 tests), Scientific Computing (2 tests), Software Defined Radio (4 tests), Server CPU Tests (11 tests), Single-Threaded (3 tests), Texture Compression (4 tests), Video Encoding (4 tests), and Common Workstation Benchmarks (2 tests).


Result Runs

Identifier   Date Run   Test Duration
r1           April 28   1 Day, 1 Minute
r1a          April 29   11 Hours, 50 Minutes
r2           April 29   1 Minute
r2a          April 29   1 Hour, 9 Minutes
r2b          April 29   18 Hours, 2 Minutes
r3           April 30   17 Hours, 57 Minutes
r4           April 30   17 Hours, 55 Minutes
r5           May 01     46 Minutes


xeon-platinum-8380-2p-smoke-run - System Details

Processor: 2 x Intel Xeon Platinum 8380 @ 3.40GHz (80 Cores / 160 Threads)
Motherboard: Intel M50CYP2SB2U (SE5C6200.86B.0022.D08.2103221623 BIOS)
Chipset: Intel Device 0998
Memory: 16 x 32 GB DDR4-3200MT/s Hynix HMA84GR7CJR4N-XN
Disk: 2 x 7682GB INTEL SSDPF2KX076TZ + 2 x 800GB INTEL SSDPF21Q800GB + 3841GB Micron_9300_MTFDHAL3T8TDP + 960GB INTEL SSDSC2KG96
Graphics: ASPEED
Monitor: VE228
Network: 2 x Intel X710 for 10GBASE-T + 2 x Intel E810-C for QSFP
OS: Ubuntu 20.04
Kernel: 5.11.0-051100-generic (x86_64)
Desktop: GNOME Shell 3.36.4
Display Server: X Server 1.20.8
Compiler: GCC 9.3.0
File-System: ext4
Screen Resolution: 1920x1080 / 1024x768 (varies by run)

Kernel Details: Transparent Huge Pages: madvise

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details: Scaling Governor: intel_pstate performance for r1, r1a, and r2; intel_pstate powersave for r2a, r2b, r3, r4, and r5. CPU Microcode: 0xd000270 for all runs.

Python Details: Python 2.7.18 + Python 3.8.5

Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected


oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
r1:   7.49467 (SE +/- 0.02080, N = 3; min 7.47 / max 7.54; reported MIN: 6.98)
r1a:  7.50059 (SE +/- 0.01835, N = 3; min 7.46 / max 7.52; reported MIN: 6.91)
r2b: 28.40230 (SE +/- 0.31773, N = 13; min 27.46 / max 31.89; reported MIN: 14.66)
r3:  28.18150 (SE +/- 0.30585, N = 15; min 26.85 / max 31.91; reported MIN: 14.34)
r4:  28.46130 (SE +/- 0.38629, N = 12; min 26.94 / max 32.39; reported MIN: 14.76)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
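Each result in this file is reported as a mean with a standard error ("SE +/- x, N = n") over n trials, plus the min/max trial values. A small sketch of how those statistics are derived; the trial values here are illustrative, not taken from the result file:

```python
# Sketch of the "SE +/- x, N = n" statistic attached to each result:
# standard error = sample standard deviation / sqrt(number of trials).
import math

def summarize(samples):
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)  # sample variance
    se = math.sqrt(var) / math.sqrt(n)                     # standard error of the mean
    return mean, se, min(samples), max(samples)

# Hypothetical trial times in ms (illustrative only):
mean, se, lo, hi = summarize([7.47, 7.50, 7.54])
print(f"{mean:.2f} (SE +/- {se:.3f}, N = 3; min {lo} / max {hi})")
```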

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.0 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
r1a: 125.25 (SE +/- 0.82, N = 15; min 119.84 / max 130.13)
r2b:  43.26 (SE +/- 0.49, N = 3; min 42.47 / max 44.15)
r3:   43.42 (SE +/- 0.31, N = 15; min 40.6 / max 44.65)
r4:   42.37 (SE +/- 0.28, N = 3; min 41.84 / max 42.77)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
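Per the Processor Details, runs r1/r1a/r2 used the intel_pstate performance governor while the later runs used powersave, and the spread between those two groups dominates many results. An illustrative calculation using the Speed 9 Realtime 1080p averages reported above (the grouping is taken from this page's Processor Details; the cause of the gap is not established here):

```python
# Illustration: size of the gap between the performance-governor run (r1a)
# and the powersave-governor runs (r2b, r3, r4) for AOM AV1 Speed 9 Realtime,
# Bosphorus 1080p, using the average FPS values reported above.
performance_fps = 125.25               # r1a (intel_pstate performance)
powersave_fps = [43.26, 43.42, 42.37]  # r2b, r3, r4 (intel_pstate powersave)

mean_powersave = sum(powersave_fps) / len(powersave_fps)
ratio = performance_fps / mean_powersave
print(f"performance-governor run is ~{ratio:.1f}x the powersave average")
```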

AOM AV1 3.0 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
r1a: 103.92 (SE +/- 1.01, N = 15; min 94.84 / max 110.81)
r2b:  36.20 (SE +/- 0.19, N = 3; min 35.82 / max 36.4)
r3:   36.06 (SE +/- 0.26, N = 3; min 35.73 / max 36.56)
r4:   36.35 (SE +/- 0.27, N = 3; min 35.96 / max 36.86)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

AOM AV1 3.0 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
r1a: 21.25 (SE +/- 0.17, N = 3; min 20.96 / max 21.56)
r2b:  7.45 (SE +/- 0.01, N = 3; min 7.44 / max 7.46)
r3:   7.38 (SE +/- 0.06, N = 3; min 7.26 / max 7.48)
r4:   7.43 (SE +/- 0.05, N = 3; min 7.35 / max 7.52)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

AOM AV1 3.0 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
r1a: 28.66 (SE +/- 0.06, N = 3; min 28.58 / max 28.77)
r2b: 10.39 (SE +/- 0.03, N = 3; min 10.33 / max 10.43)
r3:  10.39 (SE +/- 0.01, N = 3; min 10.38 / max 10.41)
r4:  10.54 (SE +/- 0.05, N = 3; min 10.49 / max 10.64)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

AOM AV1 3.0 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better)
r1:  15.09 (SE +/- 0.05, N = 3; min 14.99 / max 15.15)
r1a: 15.19 (SE +/- 0.03, N = 3; min 15.13 / max 15.25)
r2b:  5.97 (SE +/- 0.06, N = 3; min 5.89 / max 6.08)
r3:   5.97 (SE +/- 0.07, N = 12; min 5.3 / max 6.17)
r4:   6.00 (SE +/- 0.01, N = 3; min 5.98 / max 6.01)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

AOM AV1 3.0 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better)
r1:  29.20 (SE +/- 0.19, N = 3; min 28.96 / max 29.58)
r1a: 28.99 (SE +/- 0.29, N = 5; min 28.36 / max 29.86)
r2b: 12.03 (SE +/- 0.08, N = 15; min 11.36 / max 12.45)
r3:  11.94 (SE +/- 0.12, N = 15; min 10.66 / max 12.53)
r4:  12.10 (SE +/- 0.17, N = 3; min 11.79 / max 12.38)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

AOM AV1 3.0 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K (Frames Per Second, More Is Better)
r1:  7.37 (SE +/- 0.09, N = 15; min 6.58 / max 7.82)
r1a: 7.55 (SE +/- 0.06, N = 3; min 7.43 / max 7.64)
r2b: 3.22 (SE +/- 0.03, N = 9; min 3.07 / max 3.35)
r3:  3.20 (SE +/- 0.04, N = 3; min 3.13 / max 3.27)
r4:  3.23 (SE +/- 0.03, N = 5; min 3.13 / max 3.32)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

AOM AV1 3.0 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better)
r1:  33.07 (SE +/- 0.28, N = 3; min 32.72 / max 33.62)
r1a: 32.51 (SE +/- 0.28, N = 3; min 32.17 / max 33.06)
r2b: 14.30 (SE +/- 0.15, N = 15; min 13.19 / max 15.14)
r3:  14.06 (SE +/- 0.18, N = 4; min 13.55 / max 14.36)
r4:  14.73 (SE +/- 0.08, N = 3; min 14.65 / max 14.88)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
r1:  401.29 (SE +/- 1.44, N = 3; min 398.74 / max 403.74)
r1a: 408.24 (SE +/- 0.66, N = 3; min 407.03 / max 409.32)
r2b: 182.17 (SE +/- 0.90, N = 3; min 181 / max 183.94)
r3:  181.52 (SE +/- 2.25, N = 3; min 178.55 / max 185.94)
r4:  179.13 (SE +/- 0.47, N = 3; min 178.64 / max 180.07)
1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 10 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
r1:  499.23 (SE +/- 3.80, N = 3; min 494.64 / max 506.76)
r1a: 493.51 (SE +/- 4.78, N = 3; min 486.22 / max 502.51)
r2b: 234.51 (SE +/- 2.64, N = 4; min 231.3 / max 242.33)
r3:  234.39 (SE +/- 1.80, N = 10; min 229.62 / max 249.38)
r4:  233.96 (SE +/- 1.14, N = 3; min 231.75 / max 235.57)
1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

Intel Memory Latency Checker

Intel Memory Latency Checker (MLC) is a binary-only system memory bandwidth and latency benchmark. Learn more via the OpenBenchmarking.org test page.

Intel Memory Latency Checker - Test: Idle Latency (ns, Fewer Is Better)
r1:  35.1 (SE +/- 0.10, N = 3; min 35 / max 35.3)
r1a: 33.0 (SE +/- 0.39, N = 3; min 32.5 / max 33.8)
r2:  67.5 (SE +/- 0.09, N = 3; min 67.3 / max 67.6)
r2a: 32.5 (SE +/- 0.28, N = 8; min 31.2 / max 33.7)
r3:  67.6 (SE +/- 0.12, N = 3; min 67.4 / max 67.8)
r4:  67.8 (SE +/- 0.12, N = 3; min 67.6 / max 68)
r5:  68.1 (SE +/- 0.09, N = 3; min 67.9 / max 68.2)

AOM AV1


AOM AV1 3.0 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
r1a: 6.89 (SE +/- 0.02, N = 3; min 6.85 / max 6.91)
r2b: 3.30 (SE +/- 0.03, N = 3; min 3.26 / max 3.36)
r3:  3.36 (SE +/- 0.04, N = 5; min 3.23 / max 3.45)
r4:  3.36 (SE +/- 0.01, N = 3; min 3.34 / max 3.38)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

AOM AV1 3.0 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K (Frames Per Second, More Is Better)
r1a: 4.17 (SE +/- 0.03, N = 3; min 4.13 / max 4.24)
r2b: 2.01 (SE +/- 0.03, N = 3; min 1.98 / max 2.06)
r3:  2.05 (SE +/- 0.02, N = 9; min 1.97 / max 2.11)
r4:  2.10 (SE +/- 0.01, N = 3; min 2.09 / max 2.11)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

SVT-VP9


SVT-VP9 0.3 - Tuning: Visual Quality Optimized - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
r1:  327.87 (SE +/- 1.20, N = 3; min 325.69 / max 329.83)
r1a: 329.53 (SE +/- 1.10, N = 3; min 327.66 / max 331.48)
r2b: 164.32 (SE +/- 1.13, N = 3; min 162.07 / max 165.46)
r3:  164.51 (SE +/- 1.63, N = 3; min 161.35 / max 166.76)
r4:  162.21 (SE +/- 1.59, N = 3; min 159.05 / max 164.12)
1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

SVT-HEVC


SVT-HEVC 1.5.0 - Tuning: 7 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
r1:  290.67 (SE +/- 1.68, N = 3; min 287.36 / max 292.83)
r1a: 288.99 (SE +/- 1.37, N = 3; min 287.22 / max 291.69)
r2b: 158.16 (SE +/- 1.76, N = 5; min 154.64 / max 163.67)
r3:  157.83 (SE +/- 1.64, N = 3; min 154.6 / max 159.91)
r4:  156.26 (SE +/- 1.22, N = 3; min 154.92 / max 158.69)
1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

Timed Erlang/OTP Compilation

This test times how long it takes to compile Erlang/OTP. Erlang is a programming language and run-time for massively scalable soft real-time systems with high availability requirements. Learn more via the OpenBenchmarking.org test page.

Timed Erlang/OTP Compilation 23.2 - Time To Compile (Seconds, Fewer Is Better)
r1:  114.55 (SE +/- 0.18, N = 3; min 114.19 / max 114.74)
r1a: 113.80 (SE +/- 0.37, N = 3; min 113.19 / max 114.48)
r2b: 191.75 (SE +/- 1.08, N = 3; min 189.6 / max 192.94)
r3:  192.25 (SE +/- 0.31, N = 3; min 191.89 / max 192.87)
r4:  193.84 (SE +/- 1.56, N = 3; min 191.34 / max 196.71)

AOM AV1


AOM AV1 3.0 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
r1a: 0.51 (SE +/- 0.00, N = 3; min 0.5 / max 0.51)
r2b: 0.32 (SE +/- 0.00, N = 3; min 0.32 / max 0.33)
r3:  0.33 (SE +/- 0.00, N = 3; min 0.33 / max 0.33)
r4:  0.33 (SE +/- 0.00, N = 3; min 0.33 / max 0.33)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

LuxCoreRender

LuxCoreRender is an open-source 3D physically based renderer formerly known as LuxRender. LuxCoreRender supports CPU-based rendering as well as GPU acceleration via OpenCL, NVIDIA CUDA, and NVIDIA OptiX interfaces. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.5 - Scene: LuxCore Benchmark - Acceleration: CPU (M samples/sec, More Is Better)
r1:  7.84 (SE +/- 0.05, N = 3; min 7.79 / max 7.94; reported MIN: 3.44 / MAX: 9.2)
r1a: 8.04 (SE +/- 0.01, N = 3; min 8.01 / max 8.05; reported MIN: 3.51 / MAX: 9.33)
r2b: 5.84 (SE +/- 0.02, N = 3; min 5.8 / max 5.87; reported MIN: 1.16 / MAX: 7.97)
r3:  5.92 (SE +/- 0.01, N = 3; min 5.9 / max 5.95; reported MIN: 1.15 / MAX: 7.98)
r4:  5.87 (SE +/- 0.03, N = 3; min 5.82 / max 5.92; reported MIN: 1.15 / MAX: 7.95)

AOM AV1


AOM AV1 3.0 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K (Frames Per Second, More Is Better)
r1a: 0.19 (SE +/- 0.00, N = 5; min 0.19 / max 0.2)
r2b: 0.14 (SE +/- 0.00, N = 12; min 0.14 / max 0.15)
r3:  0.15 (SE +/- 0.00, N = 3; min 0.15 / max 0.15)
r4:  0.14 (SE +/- 0.00, N = 3; min 0.14 / max 0.14)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

SVT-HEVC


SVT-HEVC 1.5.0 - Tuning: 1 - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
r1:  36.91 (SE +/- 0.29, N = 3; min 36.42 / max 37.42)
r1a: 37.34 (SE +/- 0.24, N = 3; min 37.03 / max 37.81)
r2b: 27.80 (SE +/- 0.09, N = 3; min 27.64 / max 27.95)
r3:  28.22 (SE +/- 0.14, N = 3; min 27.94 / max 28.42)
r4:  28.01 (SE +/- 0.31, N = 3; min 27.41 / max 28.48)
1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

LuxCoreRender

LuxCoreRender is an open-source 3D physically based renderer formerly known as LuxRender. LuxCoreRender supports CPU-based rendering as well as GPU acceleration via OpenCL, NVIDIA CUDA, and NVIDIA OptiX interfaces. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.5 - Scene: Danish Mood - Acceleration: CPU (M samples/sec; more is better)
  r1  : 7.42 | SE +/- 0.08, N = 3 | min/avg/max 7.27 / 7.42 / 7.55 | MIN 3.2 / MAX 8.74
  r1a : 7.55 | SE +/- 0.10, N = 3 | min/avg/max 7.37 / 7.55 / 7.7 | MIN 3.28 / MAX 8.86
  r2b : 5.73 | SE +/- 0.04, N = 3 | min/avg/max 5.66 / 5.73 / 5.8 | MIN 1.3 / MAX 7.65
  r3  : 5.65 | SE +/- 0.07, N = 3 | min/avg/max 5.52 / 5.65 / 5.74 | MIN 1.24 / MAX 7.63
  r4  : 5.68 | SE +/- 0.04, N = 3 | min/avg/max 5.61 / 5.68 / 5.76 | MIN 1.26 / MAX 7.6

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite difference high-performance code for solving the incompressible Navier-Stokes equations, plus as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.

Xcompact3d Incompact3d 2021-03-11 - Input: input.i3d 129 Cells Per Direction (Seconds; fewer is better)
  r1  : 2.74370996 | SE +/- 0.00774937, N = 3 | min/avg/max 2.73 / 2.74 / 2.76
  r1a : 2.73859096 | SE +/- 0.01532048, N = 3 | min/avg/max 2.71 / 2.74 / 2.76
  r2b : 3.02281992 | SE +/- 0.02799890, N = 3 | min/avg/max 2.97 / 3.02 / 3.06
  r3  : 3.56592774 | SE +/- 0.03072276, N = 15 | min/avg/max 3.42 / 3.57 / 3.89
  r4  : 3.57278153 | SE +/- 0.02850005, N = 15 | min/avg/max 3.42 / 3.57 / 3.79
Compiler notes: (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
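Since these are time-based results (fewer seconds is better), the slowdown of the later runs can be expressed as a relative change against a baseline; a small sketch using the r1 and r3 averages from the 129-cell result above:

```python
def percent_change(baseline, value):
    """Relative change versus a baseline, in percent; positive means slower for time-based results."""
    return (value - baseline) / baseline * 100.0

# r1 vs. r3 averages for input.i3d 129 Cells Per Direction:
r1, r3 = 2.74370996, 3.56592774
print(round(percent_change(r1, r3), 1))  # -> 30.0, i.e. r3 is roughly 30% slower than r1
```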

Xcompact3d Incompact3d 2021-03-11 - Input: input.i3d 193 Cells Per Direction (Seconds; fewer is better)
  r1  : 11.36 | SE +/- 0.02, N = 3 | min/avg/max 11.31 / 11.36 / 11.4
  r1a : 11.27 | SE +/- 0.03, N = 3 | min/avg/max 11.22 / 11.27 / 11.32
  r2b : 11.56 | SE +/- 0.04, N = 3 | min/avg/max 11.48 / 11.56 / 11.61
  r3  : 14.60 | SE +/- 0.03, N = 3 | min/avg/max 14.55 / 14.6 / 14.64
  r4  : 14.66 | SE +/- 0.02, N = 3 | min/avg/max 14.63 / 14.66 / 14.69
Compiler notes: (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

Xcompact3d Incompact3d 2021-03-11 - Input: X3D-benchmarking input.i3d (Seconds; fewer is better)
  r1  : 313.92 | SE +/- 0.46, N = 3 | min/avg/max 313.03 / 313.92 / 314.59
  r1a : 311.96 | SE +/- 0.12, N = 3 | min/avg/max 311.74 / 311.96 / 312.14
  r2b : 307.62 | SE +/- 2.73, N = 9 | min/avg/max 298.59 / 307.62 / 315.81
  r3  : 386.39 | SE +/- 4.39, N = 9 | min/avg/max 379.12 / 386.39 / 413.18
  r4  : 389.70 | SE +/- 3.91, N = 9 | min/avg/max 379.73 / 389.7 / 405.58
Compiler notes: (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.9.0 - Encoder Speed: 6 (Seconds; fewer is better)
  r1  : 13.25 | SE +/- 0.05, N = 3 | min/avg/max 13.18 / 13.25 / 13.35
  r1a : 13.33 | SE +/- 0.08, N = 3 | min/avg/max 13.2 / 13.33 / 13.48
  r2b : 16.07 | SE +/- 0.23, N = 3 | min/avg/max 15.72 / 16.07 / 16.49
  r3  : 16.62 | SE +/- 0.13, N = 15 | min/avg/max 15.73 / 16.61 / 17.78
  r4  : 16.21 | SE +/- 0.12, N = 15 | min/avg/max 15.26 / 16.21 / 16.89
Compiler notes: (CXX) g++ options: -O3 -fPIC -lm

libavif avifenc 0.9.0 - Encoder Speed: 6, Lossless (Seconds; fewer is better)
  r1  : 32.11 | SE +/- 0.04, N = 3 | min/avg/max 32.05 / 32.11 / 32.19
  r1a : 31.62 | SE +/- 0.09, N = 3 | min/avg/max 31.48 / 31.62 / 31.8
  r2b : 38.40 | SE +/- 0.24, N = 3 | min/avg/max 37.93 / 38.4 / 38.7
  r3  : 38.59 | SE +/- 0.35, N = 3 | min/avg/max 38.18 / 38.59 / 39.29
  r4  : 38.51 | SE +/- 0.36, N = 6 | min/avg/max 37.14 / 38.51 / 39.73
Compiler notes: (CXX) g++ options: -O3 -fPIC -lm

libavif avifenc 0.9.0 - Encoder Speed: 2 (Seconds; fewer is better)
  r1  : 31.54 | SE +/- 0.10, N = 3 | min/avg/max 31.41 / 31.54 / 31.73
  r1a : 31.48 | SE +/- 0.04, N = 3 | min/avg/max 31.4 / 31.48 / 31.55
  r2b : 38.37 | SE +/- 0.40, N = 3 | min/avg/max 37.65 / 38.37 / 39.03
  r3  : 38.31 | SE +/- 0.20, N = 3 | min/avg/max 37.93 / 38.31 / 38.6
  r4  : 37.80 | SE +/- 0.08, N = 3 | min/avg/max 37.64 / 37.8 / 37.91
Compiler notes: (CXX) g++ options: -O3 -fPIC -lm

LuaRadio

LuaRadio is a lightweight software-defined radio (SDR) framework built atop LuaJIT. LuaRadio provides a suite of source, sink, and processing blocks, with a simple API for defining flow graphs, running flow graphs, creating blocks, and creating data types. Learn more via the OpenBenchmarking.org test page.

LuaRadio 0.9.1 - Test: Complex Phase (MiB/s; more is better)
  r1  : 546.8 | SE +/- 0.25, N = 3 | min/avg/max 546.3 / 546.8 / 547.1
  r1a : 548.2 | SE +/- 0.71, N = 3 | min/avg/max 546.9 / 548.23 / 549.3
  r2b : 458.7 | SE +/- 3.61, N = 9 | min/avg/max 443.6 / 458.66 / 470.8
  r3  : 458.2 | SE +/- 4.31, N = 6 | min/avg/max 443.3 / 458.2 / 470.9
  r4  : 452.7 | SE +/- 4.50, N = 6 | min/avg/max 440.1 / 452.67 / 470

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.9.0 - Encoder Speed: 10, Lossless (Seconds; fewer is better)
  r1  : 8.852 | SE +/- 0.036, N = 3 | min/avg/max 8.81 / 8.85 / 8.93
  r1a : 8.812 | SE +/- 0.016, N = 3 | min/avg/max 8.79 / 8.81 / 8.84
  r2b : 10.282 | SE +/- 0.154, N = 15 | min/avg/max 9.34 / 10.28 / 11.29
  r3  : 10.088 | SE +/- 0.130, N = 15 | min/avg/max 9.23 / 10.09 / 10.99
  r4  : 10.208 | SE +/- 0.157, N = 15 | min/avg/max 9.23 / 10.21 / 11
Compiler notes: (CXX) g++ options: -O3 -fPIC -lm

Timed Wasmer Compilation

This test times how long it takes to compile Wasmer. Wasmer is written in the Rust programming language and is a WebAssembly runtime implementation that supports WASI and Emscripten. This test profile builds Wasmer with the Cranelift and Singlepass compiler features enabled. Learn more via the OpenBenchmarking.org test page.

Timed Wasmer Compilation 1.0.2 - Time To Compile (Seconds; fewer is better)
  r1  : 62.16 | SE +/- 0.22, N = 3 | min/avg/max 61.82 / 62.16 / 62.56
  r1a : 61.93 | SE +/- 0.62, N = 3 | min/avg/max 61.03 / 61.93 / 63.12
  r2b : 71.93 | SE +/- 0.42, N = 3 | min/avg/max 71.24 / 71.93 / 72.69
  r3  : 71.13 | SE +/- 0.66, N = 7 | min/avg/max 68.8 / 71.13 / 73.79
  r4  : 70.76 | SE +/- 0.51, N = 3 | min/avg/max 70.14 / 70.76 / 71.77
Compiler notes: (CC) gcc options: -m64 -pie -nodefaultlibs -ldl -lrt -lpthread -lgcc_s -lc -lm -lutil

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.10.20 - Time To Compile (Seconds; fewer is better)
  r1  : 24.38 | SE +/- 0.30, N = 4 | min/avg/max 24.04 / 24.38 / 25.28
  r1a : 24.36 | SE +/- 0.28, N = 4 | min/avg/max 24.01 / 24.36 / 25.2
  r2b : 28.00 | SE +/- 0.32, N = 14 | min/avg/max 27.09 / 28 / 31.92
  r3  : 28.02 | SE +/- 0.41, N = 14 | min/avg/max 27.22 / 28.02 / 33.23
  r4  : 28.09 | SE +/- 0.37, N = 14 | min/avg/max 27.27 / 28.09 / 32.82

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.9.0 - Encoder Speed: 0 (Seconds; fewer is better)
  r1  : 57.98 | SE +/- 0.21, N = 3 | min/avg/max 57.76 / 57.98 / 58.4
  r1a : 57.71 | SE +/- 0.24, N = 3 | min/avg/max 57.22 / 57.71 / 57.98
  r2b : 64.97 | SE +/- 0.22, N = 3 | min/avg/max 64.55 / 64.97 / 65.27
  r3  : 65.96 | SE +/- 0.20, N = 3 | min/avg/max 65.74 / 65.96 / 66.37
  r4  : 65.89 | SE +/- 0.68, N = 3 | min/avg/max 64.59 / 65.89 / 66.88
Compiler notes: (CXX) g++ options: -O3 -fPIC -lm

LuaRadio

LuaRadio is a lightweight software-defined radio (SDR) framework built atop LuaJIT. LuaRadio provides a suite of source, sink, and processing blocks, with a simple API for defining flow graphs, running flow graphs, creating blocks, and creating data types. Learn more via the OpenBenchmarking.org test page.

LuaRadio 0.9.1 - Test: FM Deemphasis Filter (MiB/s; more is better)
  r1  : 410.0 | SE +/- 0.21, N = 3 | min/avg/max 409.6 / 410 / 410.3
  r1a : 409.6 | SE +/- 1.40, N = 3 | min/avg/max 406.8 / 409.6 / 411.1
  r2b : 370.1 | SE +/- 5.30, N = 9 | min/avg/max 338.9 / 370.12 / 387.2
  r3  : 370.3 | SE +/- 4.83, N = 6 | min/avg/max 353.2 / 370.33 / 387.5
  r4  : 368.0 | SE +/- 1.19, N = 6 | min/avg/max 363.2 / 367.98 / 370.2

Timed Node.js Compilation

This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built on the Chrome V8 JavaScript engine, while Node.js itself is written in C/C++. Learn more via the OpenBenchmarking.org test page.

Timed Node.js Compilation 15.11 - Time To Compile (Seconds; fewer is better)
  r1  : 101.10 | SE +/- 0.27, N = 3 | min/avg/max 100.61 / 101.1 / 101.55
  r1a : 100.45 | SE +/- 0.29, N = 3 | min/avg/max 100.12 / 100.45 / 101.02
  r2b : 110.93 | SE +/- 0.50, N = 3 | min/avg/max 110.03 / 110.93 / 111.75
  r3  : 111.79 | SE +/- 0.68, N = 3 | min/avg/max 110.54 / 111.79 / 112.86
  r4  : 111.67 | SE +/- 0.78, N = 3 | min/avg/max 110.44 / 111.67 / 113.1

Xmrig

XMRig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is set up to measure the XMRig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

Xmrig 6.12.1 - Variant: Monero - Hash Count: 1M (H/s; more is better)
  r1  : 19299.5 | SE +/- 23.28, N = 3 | min/avg/max 19253 / 19299.5 / 19324.8
  r1a : 19452.0 | SE +/- 20.55, N = 3 | min/avg/max 19416.3 / 19452 / 19487.5
  r2b : 19311.1 | SE +/- 151.73, N = 3 | min/avg/max 19032 / 19311.13 / 19553.8
  r3  : 20652.9 | SE +/- 245.77, N = 3 | min/avg/max 20230.2 / 20652.93 / 21081.5
  r4  : 20574.6 | SE +/- 243.31, N = 15 | min/avg/max 19131.8 / 20574.62 / 22183.8
Compiler notes: (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Timed Mesa Compilation

This test profile times how long it takes to compile Mesa with Meson/Ninja. To minimize build dependencies and avoid versioning conflicts, this test is just the core Mesa build without LLVM or the extra Gallium3D/Mesa drivers enabled. Learn more via the OpenBenchmarking.org test page.

Timed Mesa Compilation 21.0 - Time To Compile (Seconds; fewer is better)
  r1  : 20.95 | SE +/- 0.02, N = 3 | min/avg/max 20.92 / 20.95 / 21
  r1a : 20.38 | SE +/- 0.12, N = 3 | min/avg/max 20.24 / 20.38 / 20.63
  r2b : 21.58 | SE +/- 0.04, N = 3 | min/avg/max 21.51 / 21.57 / 21.63
  r3  : 21.37 | SE +/- 0.15, N = 3 | min/avg/max 21.14 / 21.37 / 21.64
  r4  : 21.31 | SE +/- 0.11, N = 3 | min/avg/max 21.11 / 21.31 / 21.5

LuxCoreRender

LuxCoreRender is an open-source 3D physically based renderer formerly known as LuxRender. LuxCoreRender supports CPU-based rendering as well as GPU acceleration via OpenCL, NVIDIA CUDA, and NVIDIA OptiX interfaces. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.5 - Scene: DLSC - Acceleration: CPU (M samples/sec; more is better)
  r1  : 9.70 | SE +/- 0.09, N = 3 | min/avg/max 9.6 / 9.7 / 9.88 | MIN 8.98 / MAX 12.22
  r1a : 9.61 | SE +/- 0.09, N = 15 | min/avg/max 8.62 / 9.61 / 10.06 | MIN 8 / MAX 12.27
  r2b : 9.27 | SE +/- 0.08, N = 15 | min/avg/max 8.74 / 9.27 / 9.85 | MIN 8.31 / MAX 11.98
  r3  : 9.24 | SE +/- 0.10, N = 3 | min/avg/max 9.05 / 9.24 / 9.34 | MIN 8.74 / MAX 11.37
  r4  : 9.25 | SE +/- 0.09, N = 3 | min/avg/max 9.1 / 9.25 / 9.42 | MIN 8.59 / MAX 11.4

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 12.0 - Build System: Unix Makefiles (Seconds; fewer is better)
  r1  : 216.32 | SE +/- 0.91, N = 3 | min/avg/max 214.93 / 216.32 / 218.03
  r1a : 215.76 | SE +/- 0.80, N = 3 | min/avg/max 214.28 / 215.76 / 217.02
  r2b : 226.44 | SE +/- 0.77, N = 3 | min/avg/max 225.34 / 226.44 / 227.92
  r3  : 226.20 | SE +/- 1.24, N = 3 | min/avg/max 224.41 / 226.2 / 228.59
  r4  : 224.29 | SE +/- 0.43, N = 3 | min/avg/max 223.45 / 224.29 / 224.89

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.1.3 - Model: mobilenet-v1-1.0 (ms; fewer is better)
  r2b : 3.213 | SE +/- 0.089, N = 3 | min/avg/max 3.05 / 3.21 / 3.35 | MIN 2.8 / MAX 6.7
  r4  : 3.362 | SE +/- 0.021, N = 12 | min/avg/max 3.24 / 3.36 / 3.49 | MIN 2.98 / MAX 6.66
Compiler notes: (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31 - Threads: 1 - Buffer Length: 256 - Filter Length: 57 (samples/s; more is better)
  r1  : 57792000 | SE +/- 173700.89, N = 3 | min/avg/max 57448000 / 57792000 / 58006000
  r2b : 56230333 | SE +/- 613156.95, N = 3 | min/avg/max 55442000 / 56230333.33 / 57438000
  r3  : 57197667 | SE +/- 550708.74, N = 3 | min/avg/max 56105000 / 57197666.67 / 57864000
  r4  : 55251667 | SE +/- 534784.17, N = 3 | min/avg/max 54673000 / 55251666.67 / 56320000
Compiler notes: (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Xmrig

XMRig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is set up to measure the XMRig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

Xmrig 6.12.1 - Variant: Wownero - Hash Count: 1M (H/s; more is better)
  r1  : 48051.5 | SE +/- 425.40, N = 7 | min/avg/max 45934.8 / 48051.54 / 48904.5
  r1a : 50166.1 | SE +/- 588.34, N = 3 | min/avg/max 49000.4 / 50166.07 / 50888
  r2b : 49908.3 | SE +/- 238.38, N = 3 | min/avg/max 49431.5 / 49908.27 / 50147.9
  r3  : 49813.4 | SE +/- 358.18, N = 3 | min/avg/max 49123.2 / 49813.43 / 50324.6
  r4  : 49937.3 | SE +/- 235.04, N = 3 | min/avg/max 49566.3 / 49937.3 / 50372.8
Compiler notes: (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

srsLTE

srsLTE is an open-source LTE software radio suite created by Software Radio Systems (SRS). srsLTE can be used for building your own software-defined radio (SDR) LTE mobile network. Learn more via the OpenBenchmarking.org test page.

srsLTE 20.10.1 - Test: PHY_DL_Test (UE Mb/s; more is better)
  r1  : 76.9 | SE +/- 0.76, N = 3 | min/avg/max 75.7 / 76.9 / 78.3
  r1a : 77.3 | SE +/- 1.16, N = 3 | min/avg/max 75.4 / 77.33 / 79.4
  r2b : 75.0 | SE +/- 0.38, N = 3 | min/avg/max 74.3 / 75 / 75.6
  r3  : 76.1 | SE +/- 1.14, N = 3 | min/avg/max 74.3 / 76.07 / 78.2
  r4  : 78.3 | SE +/- 0.62, N = 3 | min/avg/max 77.1 / 78.27 / 79.2
Compiler notes: (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -rdynamic -lpthread -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lm -lfftw3f

toyBrot Fractal Generator

ToyBrot is a Mandelbrot fractal generator supporting C++ threads/tasks, OpenMP, Intel Threaded Building Blocks (TBB), and other targets. Learn more via the OpenBenchmarking.org test page.

toyBrot Fractal Generator 2020-11-18 - Implementation: C++ Tasks (ms; fewer is better)
  r1  : 7879 | SE +/- 43.45, N = 3 | min/avg/max 7811 / 7879.33 / 7960
  r1a : 7724 | SE +/- 80.44, N = 4 | min/avg/max 7499 / 7724.25 / 7878
  r2b : 8050 | SE +/- 102.03, N = 3 | min/avg/max 7924 / 8050 / 8252
  r3  : 8048 | SE +/- 93.55, N = 4 | min/avg/max 7815 / 8048.25 / 8267
  r4  : 8037 | SE +/- 85.46, N = 4 | min/avg/max 7901 / 8037.25 / 8287
Compiler notes: (CXX) g++ options: -O3 -lpthread -lm -lgcc -lgcc_s -lc

Stockfish

This is a test of Stockfish, an advanced open-source C++11 chess engine benchmark that can scale up to 512 CPU threads. Learn more via the OpenBenchmarking.org test page.

Stockfish 13 - Total Time (Nodes Per Second; more is better)
  r1  : 181644819 | SE +/- 1585265.68, N = 15 | min/avg/max 170087434 / 181644818.6 / 191465307
  r1a : 186263552 | SE +/- 2404481.41, N = 3 | min/avg/max 181497022 / 186263551.67 / 189198848
  r2b : 181554218 | SE +/- 1982639.48, N = 3 | min/avg/max 177852020 / 181554218 / 184635307
  r3  : 189214499 | SE +/- 1924842.52, N = 3 | min/avg/max 185384757 / 189214499 / 191468285
  r4  : 186013261 | SE +/- 2183262.34, N = 4 | min/avg/max 180868607 / 186013260.75 / 191549896
Compiler notes: (CXX) g++ options: -lgcov -m64 -lpthread -fno-exceptions -std=c++17 -fprofile-use -fno-peel-loops -fno-tracer -pedantic -O3 -msse -msse3 -mpopcnt -mavx2 -mavx512f -mavx512bw -mavx512vnni -mavx512dq -mavx512vl -msse4.1 -mssse3 -msse2 -mbmi2 -flto -flto=jobserver

VOSK Speech Recognition Toolkit

VOSK is an open-source offline speech recognition API/toolkit. VOSK supports speech recognition in 17 languages and has a variety of models available and interfaces for different programming languages. This test profile times the speech-to-text process for a roughly three minute audio recording. Learn more via the OpenBenchmarking.org test page.

VOSK Speech Recognition Toolkit 0.3.21 (Seconds; fewer is better)
  r1  : 35.92 | SE +/- 0.32, N = 3 | min/avg/max 35.54 / 35.92 / 36.55
  r1a : 35.01 | SE +/- 0.29, N = 8 | min/avg/max 33.96 / 35.01 / 36.86
  r2b : 36.42 | SE +/- 0.43, N = 3 | min/avg/max 35.67 / 36.42 / 37.14
  r3  : 35.58 | SE +/- 0.43, N = 3 | min/avg/max 34.85 / 35.58 / 36.33
  r4  : 35.50 | SE +/- 0.32, N = 3 | min/avg/max 34.92 / 35.5 / 36.01

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better)
  r1  : 0.877815 | SE +/- 0.006225, N = 3 | min/avg/max 0.87 / 0.88 / 0.89 | MIN 0.82
  r1a : 0.879137 | SE +/- 0.003986, N = 3 | min/avg/max 0.87 / 0.88 / 0.89 | MIN 0.83
  r2b : 0.869978 | SE +/- 0.004902, N = 3 | min/avg/max 0.86 / 0.87 / 0.88 | MIN 0.82
  r3  : 0.901823 | SE +/- 0.006631, N = 3 | min/avg/max 0.89 / 0.9 / 0.91 | MIN 0.84
  r4  : 0.875421 | SE +/- 0.005244, N = 3 | min/avg/max 0.87 / 0.88 / 0.88 | MIN 0.82
Compiler notes: (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31 - Threads: 16 - Buffer Length: 256 - Filter Length: 57 (samples/s; more is better)
  r1  : 885320000 | SE +/- 691953.76, N = 3 | min/avg/max 884540000 / 885320000 / 886700000
  r1a : 890273333 | SE +/- 669162.00, N = 3 | min/avg/max 888940000 / 890273333.33 / 891040000
  r2b : 862890000 | SE +/- 3620722.76, N = 3 | min/avg/max 855690000 / 862890000 / 867160000
  r3  : 865410000 | SE +/- 859903.10, N = 3 | min/avg/max 864120000 / 865410000 / 867040000
  r4  : 860046667 | SE +/- 10609570.10, N = 3 | min/avg/max 841040000 / 860046666.67 / 877720000
Compiler notes: (CC) gcc options: -O3 -pthread -lm -lc -lliquid

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms; fewer is better)
  r1  : 0.918568 | SE +/- 0.002101, N = 3 | min/avg/max 0.91 / 0.92 / 0.92 | MIN 0.85
  r1a : 0.912279 | SE +/- 0.002111, N = 3 | min/avg/max 0.91 / 0.91 / 0.91 | MIN 0.86
  r2b : 0.943624 | SE +/- 0.011253, N = 3 | min/avg/max 0.92 / 0.94 / 0.96 | MIN 0.86
  r3  : 0.936941 | SE +/- 0.007264, N = 3 | min/avg/max 0.93 / 0.94 / 0.95 | MIN 0.85
  r4  : 0.940714 | SE +/- 0.008450, N = 3 | min/avg/max 0.93 / 0.94 / 0.96 | MIN 0.86
Compiler notes: (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

LuxCoreRender

LuxCoreRender is an open-source 3D physically based renderer formerly known as LuxRender. LuxCoreRender supports CPU-based rendering as well as GPU acceleration via OpenCL, NVIDIA CUDA, and NVIDIA OptiX interfaces. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.5 - Scene: Orange Juice - Acceleration: CPU (M samples/sec; more is better)
  r1  : 14.36 | SE +/- 0.13, N = 3 | min/avg/max 14.15 / 14.36 / 14.59 | MIN 11.58 / MAX 19.44
  r1a : 14.26 | SE +/- 0.21, N = 3 | min/avg/max 14.04 / 14.26 / 14.67 | MIN 11.6 / MAX 19.3
  r2b : 14.28 | SE +/- 0.18, N = 3 | min/avg/max 13.92 / 14.28 / 14.51 | MIN 11.93 / MAX 17.73
  r3  : 13.89 | SE +/- 0.12, N = 15 | min/avg/max 13.37 / 13.89 / 14.86 | MIN 11.08 / MAX 17.77
  r4  : 13.94 | SE +/- 0.13, N = 15 | min/avg/max 13.31 / 13.94 / 15.02 | MIN 11.06 / MAX 17.84

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31 - Threads: 8 - Buffer Length: 256 - Filter Length: 57 (samples/s; more is better)
  r1  : 441953333 | SE +/- 422150.58, N = 3 | min/avg/max 441150000 / 441953333.33 / 442580000
  r2b : 428100000 | SE +/- 2458908.97, N = 3 | min/avg/max 423390000 / 428100000 / 431680000
  r3  : 432170000 | SE +/- 1240739.03, N = 3 | min/avg/max 429860000 / 432170000 / 434110000
  r4  : 432013333 | SE +/- 2739929.03, N = 3 | min/avg/max 426620000 / 432013333.33 / 435550000
Compiler notes: (CC) gcc options: -O3 -pthread -lm -lc -lliquid

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms; fewer is better)
  r1  : 1.10991 | SE +/- 0.00274, N = 3 | min/avg/max 1.11 / 1.11 / 1.12 | MIN 1.02
  r1a : 1.12224 | SE +/- 0.00124, N = 3 | min/avg/max 1.12 / 1.12 / 1.12 | MIN 1.02
  r2b : 1.11874 | SE +/- 0.00330, N = 3 | min/avg/max 1.11 / 1.12 / 1.13 | MIN 1.02
  r3  : 1.14578 | SE +/- 0.00975, N = 3 | min/avg/max 1.13 / 1.15 / 1.16 | MIN 1.04
  r4  : 1.11811 | SE +/- 0.01182, N = 3 | min/avg/max 1.1 / 1.12 / 1.14 | MIN 1.02
Compiler notes: (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

toyBrot Fractal Generator

ToyBrot is a Mandelbrot fractal generator supporting C++ threads/tasks, OpenMP, Intel Threaded Building Blocks (TBB), and other targets. Learn more via the OpenBenchmarking.org test page.

toyBrot Fractal Generator 2020-11-18 - Implementation: C++ Threads (ms; fewer is better)
  r1  : 7018 | SE +/- 49.12, N = 3 | min/avg/max 6956 / 7018 / 7115
  r1a : 6980 | SE +/- 29.96, N = 3 | min/avg/max 6938 / 6980 / 7038
  r2b : 7149 | SE +/- 89.67, N = 3 | min/avg/max 7057 / 7148.67 / 7328
  r3  : 7203 | SE +/- 98.76, N = 3 | min/avg/max 7071 / 7202.67 / 7396
  r4  : 7141 | SE +/- 76.94, N = 4 | min/avg/max 7008 / 7141.25 / 7363
Compiler notes: (CXX) g++ options: -O3 -lpthread -lm -lgcc -lgcc_s -lc

HammerDB - MariaDB

This is a MariaDB MySQL database server benchmark making use of the HammerDB benchmarking / load testing tool. Learn more via the OpenBenchmarking.org test page.

HammerDB - MariaDB 10.5.9 - Virtual Users: 64 - Warehouses: 500 (New Orders Per Minute; more is better)
  r1  : 64298 | SE +/- 620.04, N = 3 | min/avg/max 63421 / 64298.33 / 65496
  r1a : 62311 | SE +/- 730.55, N = 9 | min/avg/max 58815 / 62311.11 / 65734
Compiler notes: (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lbz2 -lsnappy -ldl -lz -lrt

HammerDB - MariaDB 10.5.9 - Virtual Users: 64 - Warehouses: 500 (Transactions Per Minute; more is better)
  r1  : 194684 | SE +/- 2149.33, N = 3 | min/avg/max 191710 / 194684 / 198859
  r1a : 188761 | SE +/- 2084.32, N = 9 | min/avg/max 178525 / 188761.44 / 198282
Compiler notes: (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lbz2 -lsnappy -ldl -lz -lrt

GNU GMP GMPbench

GMPbench is a test of the GNU Multiple Precision Arithmetic (GMP) Library. GMPbench is a single-threaded integer benchmark that leverages the GMP library to stress the CPU with widening integer multiplication. Learn more via the OpenBenchmarking.org test page.

GNU GMP GMPbench 6.2.1 - Total Time (GMPbench Score, More Is Better)
  r1: 4642.1   r1a: 4642.8   r2b: 4524.5   r3: 4504.5   r4: 4525.7
  Compiled with: gcc -O3 -fomit-frame-pointer -lm
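GMPbench's core operation, widening integer multiplication, produces a product twice as wide as its operands. GMP does this with limb-level mpn routines in C; Python's arbitrary-precision integers can sketch only the semantics, not the performance:

```python
def widening_mul(a: int, b: int, bits: int) -> int:
    """Multiply two bits-wide operands into a (2 * bits)-wide product.
    Illustrative only: GMP performs this at the machine-limb level."""
    assert a < (1 << bits) and b < (1 << bits)
    product = a * b
    assert product < (1 << (2 * bits))  # the result always fits in twice the width
    return product

# Worst-case 64-bit operands: (2^64 - 1)^2 needs the full 128-bit result
p = widening_mul((1 << 64) - 1, (1 << 64) - 1, 64)
```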

libjpeg-turbo tjbench

tjbench is a JPEG decompression/compression benchmark that is part of libjpeg-turbo, a JPEG image codec library optimized for SIMD instructions on modern CPU architectures. Learn more via the OpenBenchmarking.org test page.

libjpeg-turbo tjbench 2.1.0 - Test: Decompression Throughput (Megapixels/sec, More Is Better)
  r1:  161.63 (SE +/- 0.15, N = 3; min 161.4 / max 161.9)
  r1a: 156.97 (SE +/- 0.39, N = 3; min 156.22 / max 157.53)
  r2b: 160.26 (SE +/- 0.07, N = 3; min 160.19 / max 160.4)
  r3:  159.19 (SE +/- 1.04, N = 3; min 157.79 / max 161.23)
  r4:  159.24 (SE +/- 0.47, N = 3; min 158.6 / max 160.15)
  Compiled with: gcc -O3 -rdynamic

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and, before that, MKL-DNN, prior to being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  r1:  0.398282 (SE +/- 0.001135, N = 3; run min/max 0.4 / 0.4; MIN: 0.37)
  r1a: 0.395588 (SE +/- 0.001124, N = 3; run min/max 0.39 / 0.4; MIN: 0.36)
  r2b: 0.403409 (SE +/- 0.004259, N = 4; run min/max 0.4 / 0.42; MIN: 0.36)
  r3:  0.406877 (SE +/- 0.003204, N = 10; run min/max 0.4 / 0.43; MIN: 0.37)
  r4:  0.402919 (SE +/- 0.002415, N = 14; run min/max 0.39 / 0.43; MIN: 0.36)
  Compiled with: g++ -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

LuaRadio

LuaRadio is a lightweight software-defined radio (SDR) framework built atop LuaJIT. LuaRadio provides a suite of source, sink, and processing blocks, with a simple API for defining flow graphs, running flow graphs, creating blocks, and creating data types. Learn more via the OpenBenchmarking.org test page.

LuaRadio 0.9.1 - Test: Hilbert Transform (MiB/s, More Is Better)
  r1:  80.3 (SE +/- 0.00, N = 3; min 80.3 / max 80.3)
  r1a: 80.3 (SE +/- 0.00, N = 3; min 80.3 / max 80.3)
  r2b: 78.21 (SE +/- 0.41, N = 9; min 76.1 / max 79.7)
  r3:  78.15 (SE +/- 0.47, N = 6; min 76.5 / max 79.7)
  r4:  78.4 (SE +/- 0.61, N = 6; min 75.8 / max 79.7)
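A Hilbert transform block of the kind benchmarked here is typically realized as an antisymmetric FIR filter whose ideal coefficients are 2/(pi*n) at odd offsets n from the center and zero at even offsets. A stdlib-only sketch of those coefficients (the tap count of 129 and the lack of a window are illustrative choices, not LuaRadio's actual parameters):

```python
import math

def hilbert_taps(num_taps: int) -> list:
    """Ideal (unwindowed) FIR Hilbert transformer coefficients:
    zero at even offsets from the center, 2/(pi*n) at odd offsets n."""
    assert num_taps % 2 == 1, "use an odd tap count so the center tap is defined"
    center = num_taps // 2
    taps = []
    for i in range(num_taps):
        n = i - center
        taps.append(0.0 if n % 2 == 0 else 2.0 / (math.pi * n))
    return taps

taps = hilbert_taps(129)
```

In practice the taps would be multiplied by a window function before use to control ripple.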

oneDNN


oneDNN 2.1.2 - Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
  r1:  1.80046 (SE +/- 0.00580, N = 3; run min/max 1.79 / 1.81; MIN: 1.68)
  r1a: 1.79881 (SE +/- 0.00121, N = 3; run min/max 1.8 / 1.8; MIN: 1.69)
  r2b: 1.81774 (SE +/- 0.01382, N = 3; run min/max 1.8 / 1.85; MIN: 1.69)
  r3:  1.84339 (SE +/- 0.02043, N = 3; run min/max 1.81 / 1.88; MIN: 1.67)
  r4:  1.81913 (SE +/- 0.00968, N = 3; run min/max 1.8 / 1.83; MIN: 1.68)
  Compiled with: g++ -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

toyBrot Fractal Generator

ToyBrot is a Mandelbrot fractal generator supporting C++ threads/tasks, OpenMP, Intel Threaded Building Blocks (TBB), and other targets. Learn more via the OpenBenchmarking.org test page.

toyBrot Fractal Generator 2020-11-18 - Implementation: TBB (ms, Fewer Is Better)
  r1:  6850.4 (SE +/- 59.06, N = 15; min 6609 / max 7469)
  r1a: 6964.33 (SE +/- 80.68, N = 3; min 6817 / max 7095)
  r2b: 6984.4 (SE +/- 73.83, N = 15; min 6631 / max 7714)
  r3:  7002.87 (SE +/- 69.20, N = 15; min 6633 / max 7464)
  r4:  7016.13 (SE +/- 81.70, N = 15; min 6704 / max 7860)
  Compiled with: g++ -O3 -lpthread -lm -lgcc -lgcc_s -lc
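All of the toyBrot implementations (C++ threads, TBB, OpenMP) parallelize the same per-pixel escape-time loop over independent rows. A hedged Python sketch of that decomposition (the 64x64 resolution, iteration cap, and complex-plane bounds are illustrative, not toyBrot's actual settings; Python threads also will not scale here due to the GIL, so only the row-parallel structure mirrors the benchmark):

```python
from concurrent.futures import ThreadPoolExecutor

MAX_ITER = 256

def escape_time(cx: float, cy: float) -> int:
    """Iterations until z = z^2 + c escapes |z| > 2, capped at MAX_ITER."""
    zx = zy = 0.0
    for i in range(MAX_ITER):
        zx, zy = zx * zx - zy * zy + cx, 2.0 * zx * zy + cy
        if zx * zx + zy * zy > 4.0:
            return i
    return MAX_ITER

def render_row(y: int, width: int = 64, height: int = 64) -> list:
    """One scanline over the region [-2, 1] x [-1.5, 1.5]."""
    cy = -1.5 + 3.0 * y / (height - 1)
    return [escape_time(-2.0 + 3.0 * x / (width - 1), cy) for x in range(width)]

# Rows are independent, so they map cleanly onto a worker pool
with ThreadPoolExecutor() as pool:
    image = list(pool.map(render_row, range(64)))
```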

oneDNN


oneDNN 2.1.2 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  r1:  1.21594 (SE +/- 0.01080, N = 15; run min/max 1.14 / 1.26; MIN: 0.84)
  r1a: 1.22278 (SE +/- 0.01126, N = 15; run min/max 1.16 / 1.3; MIN: 0.85)
  r2b: 1.23796 (SE +/- 0.01174, N = 15; run min/max 1.16 / 1.33; MIN: 0.87)
  r3:  1.24508 (SE +/- 0.01066, N = 15; run min/max 1.18 / 1.34; MIN: 0.89)
  r4:  1.24116 (SE +/- 0.00891, N = 15; run min/max 1.18 / 1.33; MIN: 0.85)
  Compiled with: g++ -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31 - Threads: 32 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better)
  r1:  1735100000 (SE +/- 3951371.07, N = 3; min 1730100000 / max 1742900000)
  r1a: 1736800000 (SE +/- 2515949.13, N = 3; min 1732300000 / max 1741000000)
  r2b: 1699333333 (SE +/- 10121648.97, N = 3; min 1679100000 / max 1710000000)
  r3:  1704500000 (no SE or min/max reported)
  r4:  1697500000 (SE +/- 6582552.70, N = 3; min 1689200000 / max 1710500000)
  Compiled with: gcc -O3 -pthread -lm -lc -lliquid
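The "Filter Length: 57" in the Liquid-DSP configurations is the number of FIR taps convolved against the sample stream; the samples/s figure is how fast that inner loop runs. A stdlib-only sketch of the direct-form FIR computation (liquid-dsp itself does this in vectorized C; the moving-average taps below are an arbitrary illustrative choice):

```python
def fir_filter(taps: list, samples: list) -> list:
    """Direct-form FIR: each output is the dot product of the taps with
    the most recent len(taps) input samples (zero-padded history)."""
    out = []
    history = [0.0] * len(taps)
    for x in samples:
        history = [x] + history[:-1]
        out.append(sum(t * h for t, h in zip(taps, history)))
    return out

taps = [1.0 / 57] * 57          # 57-tap moving average, matching the filter length above
impulse = [1.0] + [0.0] * 99
response = fir_filter(taps, impulse)   # impulse response reproduces the taps
```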

MariaDB

This is a MariaDB MySQL database server benchmark making use of mysqlslap. Learn more via the OpenBenchmarking.org test page.

MariaDB 10.5.2 - Clients: 4 (Queries Per Second, More Is Better)
  r2b: 1614.09 (SE +/- 16.07, N = 3; min 1582.28 / max 1633.99)
  r3:  1579.84 (SE +/- 7.20, N = 3; min 1567.4 / max 1592.36)
  Compiled with: g++ -fPIC -pie -fstack-protector -O2 -shared -lpthread -lsnappy -ldl -lz -lrt

Liquid-DSP


Liquid-DSP 2021.01.31 - Threads: 4 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better)
  r1:  217643333.33 (SE +/- 1090112.12, N = 3; min 215470000 / max 218880000)
  r2b: 213203333.33 (SE +/- 824809.74, N = 3; min 211610000 / max 214370000)
  r3:  215343333.33 (SE +/- 1663583.82, N = 3; min 213630000 / max 218670000)
  r4:  216773333.33 (SE +/- 1956802.95, N = 3; min 212860000 / max 218770000)
  Compiled with: gcc -O3 -pthread -lm -lc -lliquid

Intel Memory Latency Checker

Intel Memory Latency Checker (MLC) is a binary-only system memory bandwidth and latency benchmark. Learn more via the OpenBenchmarking.org test page.

Intel Memory Latency Checker - Test: Peak Injection Bandwidth - 1:1 Reads-Writes (MB/s, More Is Better)
  r1:  442422.27 (SE +/- 1187.16, N = 3; min 440684.8 / max 444692.4)
  r1a: 442843.2 (SE +/- 148.63, N = 3; min 442548.7 / max 443025.4)
  r2a: 442144.17 (SE +/- 212.40, N = 3; min 441727.6 / max 442424.5)
  r2b: 440454.7 (SE +/- 314.54, N = 3; min 439978.3 / max 441048.7)
  r3:  449554.07 (SE +/- 138.13, N = 3; min 449395.8 / max 449829.3)
  r4:  446395.97 (SE +/- 1601.80, N = 3; min 443202.1 / max 448208.9)
  r5:  448800.13 (SE +/- 847.23, N = 3; min 447252.4 / max 450171.3)
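Intel MLC is binary-only, but the quantity it reports is simply bytes moved per unit time. A rough, hedged stdlib sketch of the same idea (interpreter overhead puts the absolute number far below MLC's hardware-level injection bandwidth; the buffer size and repeat count are arbitrary):

```python
import time

def copy_bandwidth_mb_s(size_mb: int = 64, repeats: int = 4) -> float:
    """Estimate memory-copy bandwidth by timing bytearray slice copies.
    Counts read + write traffic, loosely mirroring a 1:1 reads-writes mix."""
    src = bytearray(size_mb * 1024 * 1024)
    start = time.perf_counter()
    for _ in range(repeats):
        dst = src[:]          # one full read of src plus one full write of dst
    elapsed = time.perf_counter() - start
    total_mb = 2 * size_mb * repeats   # read + write bytes, in MB
    return total_mb / elapsed

bw = copy_bandwidth_mb_s()
```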

oneDNN


oneDNN 2.1.2 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  r1:  801.41 (SE +/- 7.46, N = 3; run min/max 790.81 / 815.81; MIN: 767.38)
  r1a: 804.32 (SE +/- 4.49, N = 3; run min/max 795.37 / 809.32; MIN: 765.37)
  r2b: 808.29 (SE +/- 9.76, N = 3; run min/max 796.57 / 827.68; MIN: 767.97)
  r3:  796.69 (SE +/- 1.09, N = 3; run min/max 794.92 / 798.67; MIN: 771.28)
  r4:  792.30 (SE +/- 2.67, N = 3; run min/max 786.96 / 794.99; MIN: 763.96)
  Compiled with: g++ -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 12.0 - Build System: Ninja (Seconds, Fewer Is Better)
  r1:  145.72 (SE +/- 0.52, N = 3; min 144.8 / max 146.6)
  r1a: 145.55 (SE +/- 0.75, N = 3; min 144.52 / max 147.01)
  r2b: 148.48 (SE +/- 1.12, N = 3; min 146.58 / max 150.47)
  r3:  147.16 (SE +/- 0.32, N = 3; min 146.55 / max 147.64)
  r4:  146.91 (SE +/- 0.56, N = 3; min 145.96 / max 147.9)

oneDNN


oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
  r1:  3.57247 (SE +/- 0.00924, N = 3; run min/max 3.56 / 3.59; MIN: 3.53)
  r1a: 3.57662 (SE +/- 0.00795, N = 3; run min/max 3.56 / 3.59; MIN: 3.5)
  r2b: 3.64232 (SE +/- 0.05421, N = 14; run min/max 3.57 / 4.35; MIN: 3.51)
  r3:  3.64033 (SE +/- 0.05675, N = 14; run min/max 3.57 / 4.38; MIN: 3.47)
  r4:  3.64319 (SE +/- 0.05617, N = 14; run min/max 3.57 / 4.37; MIN: 3.5)
  Compiled with: g++ -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Liquid-DSP


Liquid-DSP 2021.01.31 - Threads: 2 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better)
  r1:  110713333.33 (SE +/- 729984.78, N = 3; min 109770000 / max 112150000)
  r2b: 110173333.33 (SE +/- 907677.13, N = 3; min 109050000 / max 111970000)
  r3:  111510000 (SE +/- 430348.70, N = 3; min 110650000 / max 111970000)
  r4:  109430000 (SE +/- 132035.35, N = 3; min 109170000 / max 109600000)
  Compiled with: gcc -O3 -pthread -lm -lc -lliquid

Liquid-DSP 2021.01.31 - Threads: 128 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better)
  r1:  3415933333.33 (SE +/- 8088331.79, N = 3; min 3405700000 / max 3431900000)
  r1a: 3352733333.33 (SE +/- 38975091.76, N = 3; min 3302200000 / max 3429400000)
  r2b: 3400066666.67 (SE +/- 14312737.14, N = 3; min 3374500000 / max 3424000000)
  r3:  3411000000 (SE +/- 6896617.53, N = 3; min 3397500000 / max 3420200000)
  r4:  3398800000 (SE +/- 16537936.19, N = 3; min 3379400000 / max 3431700000)
  Compiled with: gcc -O3 -pthread -lm -lc -lliquid

KTX-Software toktx

This is a benchmark of The Khronos Group's KTX-Software library and tools. KTX-Software provides the "toktx" tool for creating image textures in the KTX container format. This benchmark times how long it takes to convert a reference PNG sample input to the KTX 2.0 format with various settings. Learn more via the OpenBenchmarking.org test page.

KTX-Software toktx 4.0 - Settings: UASTC 3 (Seconds, Fewer Is Better)
  r2b: 5.664 (SE +/- 0.053, N = 15; min 5.52 / max 6.01)
  r4:  5.562 (SE +/- 0.008, N = 3; min 5.55 / max 5.57)

oneDNN


oneDNN 2.1.2 - Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
  r1:  2.07944 (SE +/- 0.00138, N = 3; run min/max 2.08 / 2.08; MIN: 2.03)
  r1a: 2.08532 (SE +/- 0.00168, N = 3; run min/max 2.08 / 2.09; MIN: 2.03)
  r2b: 2.11712 (SE +/- 0.01980, N = 3; run min/max 2.1 / 2.16; MIN: 2.03)
  r3:  2.10841 (SE +/- 0.01943, N = 3; run min/max 2.09 / 2.15; MIN: 2.03)
  r4:  2.10837 (SE +/- 0.01801, N = 3; run min/max 2.08 / 2.14; MIN: 2.03)
  Compiled with: g++ -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

toyBrot Fractal Generator


toyBrot Fractal Generator 2020-11-18 - Implementation: OpenMP (ms, Fewer Is Better)
  r1:  7318 (SE +/- 5.13, N = 3; min 7308 / max 7325)
  r1a: 7308.33 (SE +/- 0.88, N = 3; min 7307 / max 7310)
  r2b: 7412 (SE +/- 101.59, N = 3; min 7281 / max 7612)
  r3:  7439 (SE +/- 85.45, N = 4; min 7314 / max 7691)
  r4:  7428.5 (SE +/- 91.12, N = 4; min 7313 / max 7700)
  Compiled with: g++ -O3 -lpthread -lm -lgcc -lgcc_s -lc

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.1.3 - Model: inception-v3 (ms, Fewer Is Better)
  r2b: 53.07 (SE +/- 1.54, N = 3; run min/max 50.05 / 55.13; MIN: 49.59 / MAX: 69.62)
  r4:  52.23 (SE +/- 0.75, N = 12; run min/max 48.24 / 55.45; MIN: 47.47 / MAX: 94.69)
  Compiled with: g++ -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

oneDNN


oneDNN 2.1.2 - Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
  r1:  2.96135 (SE +/- 0.00128, N = 3; run min/max 2.96 / 2.96; MIN: 2.84)
  r1a: 2.96857 (SE +/- 0.00276, N = 3; run min/max 2.96 / 2.97; MIN: 2.84)
  r2b: 3.00464 (SE +/- 0.02287, N = 13; run min/max 2.97 / 3.28; MIN: 2.84)
  r3:  3.00929 (SE +/- 0.02478, N = 14; run min/max 2.97 / 3.33; MIN: 2.84)
  r4:  3.00907 (SE +/- 0.02449, N = 14; run min/max 2.97 / 3.33; MIN: 2.84)
  Compiled with: g++ -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

MariaDB


MariaDB 10.5.2 - Clients: 128 (Queries Per Second, More Is Better)
  r2b: 191.7 (SE +/- 0.65, N = 3; min 190.77 / max 192.94)
  r3:  189.47 (SE +/- 0.35, N = 3; min 188.82 / max 190.01)
  Compiled with: g++ -fPIC -pie -fstack-protector -O2 -shared -lpthread -lsnappy -ldl -lz -lrt

oneDNN


oneDNN 2.1.2 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
  r1:  0.593042 (SE +/- 0.001703, N = 3; run min/max 0.59 / 0.6; MIN: 0.56)
  r1a: 0.595661 (SE +/- 0.000780, N = 3; run min/max 0.59 / 0.6; MIN: 0.56)
  r2b: 0.602122 (SE +/- 0.004180, N = 3; run min/max 0.6 / 0.61; MIN: 0.56)
  r3:  0.602314 (SE +/- 0.004400, N = 3; run min/max 0.6 / 0.61; MIN: 0.56)
  r4:  0.602038 (SE +/- 0.003648, N = 3; run min/max 0.6 / 0.61; MIN: 0.56)
  Compiled with: g++ -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

KTX-Software toktx


KTX-Software toktx 4.0 - Settings: Zstd Compression 19 (Seconds, Fewer Is Better)
  r2b: 19.78 (SE +/- 0.22, N = 3; min 19.57 / max 20.21)
  r4:  20.08 (SE +/- 0.20, N = 3; min 19.88 / max 20.48)

oneDNN


oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  r1:  0.864164 (SE +/- 0.002419, N = 3; run min/max 0.86 / 0.87; MIN: 0.84)
  r1a: 0.863214 (SE +/- 0.002055, N = 3; run min/max 0.86 / 0.87; MIN: 0.84)
  r2b: 0.874080 (SE +/- 0.008361, N = 14; run min/max 0.86 / 0.98; MIN: 0.83)
  r3:  0.874968 (SE +/- 0.007890, N = 14; run min/max 0.86 / 0.98; MIN: 0.84)
  r4:  0.876227 (SE +/- 0.007461, N = 14; run min/max 0.86 / 0.97; MIN: 0.84)
  Compiled with: g++ -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  r1:  0.215115 (SE +/- 0.000867, N = 3; run min/max 0.21 / 0.22; MIN: 0.19)
  r1a: 0.213643 (SE +/- 0.000781, N = 3; run min/max 0.21 / 0.21; MIN: 0.19)
  r2b: 0.216806 (SE +/- 0.001893, N = 8; run min/max 0.21 / 0.23; MIN: 0.19)
  r3:  0.216586 (SE +/- 0.002019, N = 7; run min/max 0.21 / 0.23; MIN: 0.19)
  r4:  0.215085 (SE +/- 0.001544, N = 12; run min/max 0.21 / 0.23; MIN: 0.19)
  Compiled with: g++ -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
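The Matrix Multiply Batch Shapes Transformer harness times batched matrix multiplication as it occurs in Transformer layers. As a minimal semantic reference (oneDNN's actual kernels are blocked, vectorized, and run in reduced precision; this naive triple loop only fixes what is being computed):

```python
def matmul(a, b):
    """C[i][j] = sum_k A[i][k] * B[k][j] for an (m x k) by (k x n) product."""
    m, k, n = len(a), len(b), len(b[0])
    assert all(len(row) == k for row in a), "inner dimensions must match"
    return [[sum(a[i][p] * b[p][j] for p in range(k)) for j in range(n)]
            for i in range(m)]

c = matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]])   # -> [[19, 22], [43, 50]]
```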

srsLTE

srsLTE is an open-source LTE software radio suite created by Software Radio Systems (SRS). srsLTE can be used for building your own software-defined radio (SDR) LTE mobile network. Learn more via the OpenBenchmarking.org test page.

srsLTE 20.10.1 - Test: PHY_DL_Test (eNb Mb/s, More Is Better)
  r1:  183.43 (SE +/- 1.15, N = 3; min 181.2 / max 185)
  r1a: 184.2 (SE +/- 0.36, N = 3; min 183.5 / max 184.7)
  r2b: 181.63 (SE +/- 1.23, N = 3; min 179.2 / max 183.2)
  r3:  181.63 (SE +/- 2.42, N = 3; min 176.8 / max 184.2)
  r4:  183.67 (SE +/- 0.58, N = 3; min 182.9 / max 184.8)
  Compiled with: g++ -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -rdynamic -lpthread -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lm -lfftw3f

Botan

Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.

Botan 2.17.3 - Test: AES-256 (MiB/s, More Is Better)
  r1:  5669.70 (SE +/- 0.92, N = 3; min 5667.87 / max 5670.65)
  r1a: 5670.81 (SE +/- 0.28, N = 3; min 5670.53 / max 5671.36)
  r2b: 5606.97 (SE +/- 55.60, N = 3; min 5495.8 / max 5665.15)
  r3:  5593.37 (SE +/- 42.23, N = 3; min 5514.87 / max 5659.62)
  r4:  5612.00 (SE +/- 51.03, N = 3; min 5509.94 / max 5663.78)
  Compiled with: g++ -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
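The Botan results are throughput figures: bytes processed divided by elapsed time, expressed in MiB/s. Python's stdlib has no AES primitive, so this hedged sketch times SHA-256 from hashlib purely as a stand-in to show how such a figure is derived (the stand-in algorithm, buffer size, and round count are illustrative assumptions, not Botan's method):

```python
import hashlib
import time

def throughput_mib_s(data: bytes, rounds: int = 8) -> float:
    """MiB/s achieved hashing `data` `rounds` times with SHA-256,
    i.e. total bytes processed divided by elapsed wall time."""
    start = time.perf_counter()
    for _ in range(rounds):
        hashlib.sha256(data).digest()
    elapsed = time.perf_counter() - start
    return (len(data) * rounds) / (1024 * 1024) / elapsed

rate = throughput_mib_s(bytes(4 * 1024 * 1024))   # 4 MiB zero-filled buffer
```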

oneDNN


oneDNN 2.1.2 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  r1:  0.239989 (SE +/- 0.000856, N = 3; run min/max 0.24 / 0.24; MIN: 0.22)
  r1a: 0.240122 (SE +/- 0.000662, N = 3; run min/max 0.24 / 0.24; MIN: 0.23)
  r2b: 0.243026 (SE +/- 0.003187, N = 3; run min/max 0.24 / 0.25; MIN: 0.22)
  r3:  0.243308 (SE +/- 0.002507, N = 5; run min/max 0.24 / 0.25; MIN: 0.22)
  r4:  0.242450 (SE +/- 0.002245, N = 7; run min/max 0.24 / 0.26; MIN: 0.22)
  Compiled with: g++ -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Botan


Botan 2.17.3 - Test: KASUMI (MiB/s, More Is Better)
  r1:  77.29 (SE +/- 0.02, N = 3; min 77.25 / max 77.31)
  r1a: 77.31 (SE +/- 0.04, N = 3; min 77.25 / max 77.39)
  r2b: 76.29 (SE +/- 1.01, N = 3; min 74.26 / max 77.38)
  r3:  76.41 (SE +/- 0.77, N = 3; min 74.87 / max 77.21)
  r4:  76.40 (SE +/- 0.87, N = 3; min 74.67 / max 77.41)
  Compiled with: g++ -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.13 - Settings: UASTC Level 2 (Seconds, Fewer Is Better)
  r2b: 13.98 (SE +/- 0.18, N = 3; min 13.79 / max 14.34)
  r4:  14.16 (SE +/- 0.15, N = 3; min 13.85 / max 14.34)
  Compiled with: g++ -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

Botan


Botan 2.17.3 - Test: CAST-256 (MiB/s, More Is Better)
  r1:  115.97 (SE +/- 0.01, N = 3; min 115.96 / max 115.99)
  r1a: 115.97 (SE +/- 0.01, N = 3; min 115.96 / max 116)
  r2b: 114.66 (SE +/- 1.15, N = 3; min 112.36 / max 115.83)
  r3:  114.52 (SE +/- 1.33, N = 3; min 111.86 / max 115.85)
  r4:  114.65 (SE +/- 1.17, N = 3; min 112.3 / max 115.83)
  Compiled with: g++ -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

Botan 2.17.3 - Test: ChaCha20Poly1305 (MiB/s, More Is Better)
  r1:  623.49 (SE +/- 0.03, N = 3; min 623.44 / max 623.53)
  r1a: 623.20 (SE +/- 0.17, N = 3; min 622.89 / max 623.46)
  r2b: 615.81 (SE +/- 3.48, N = 3; min 612.18 / max 622.77)
  r3:  616.50 (SE +/- 3.19, N = 3; min 613.05 / max 622.88)
  r4:  619.64 (SE +/- 2.98, N = 3; min 613.67 / max 622.78)
  Compiled with: g++ -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31, Threads: 64 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better)
r1:  3267133333 (SE +/- 5206513.02, N = 3; Min 3261100000 / Max 3277500000)
r1a: 3263700000 (SE +/- 2150193.79, N = 3; Min 3260800000 / Max 3267900000)
r2b: 3227433333 (SE +/- 17049079.48, N = 3; Min 3198700000 / Max 3257700000)
r3:  3232700000 (SE +/- 14893734.70, N = 3; Min 3203800000 / Max 3253400000)
r4:  3245666667 (SE +/- 12876378.03, N = 3; Min 3220800000 / Max 3263900000)
1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
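The parameters of this profile (buffer length 256, filter length 57) describe multi-threaded FIR filtering work. As an illustration only (Liquid-DSP's own firfilt kernels are heavily optimized C, not this), a minimal pure-Python sketch of what a single 57-tap direct-form FIR filter does to one 256-sample buffer:

```python
def fir_filter(taps, samples):
    """Direct-form FIR: y[n] = sum_k taps[k] * x[n-k], with zero-padded history."""
    out = []
    for n in range(len(samples)):
        acc = 0.0
        for k, h in enumerate(taps):
            if n - k >= 0:
                acc += h * samples[n - k]
        out.append(acc)
    return out

# A 57-tap moving-average filter over a 256-sample all-ones buffer,
# matching the benchmark's buffer/filter lengths.
taps = [1.0 / 57] * 57
buf = [1.0] * 256
y = fir_filter(taps, buf)
print(round(y[0], 4), round(y[56], 4))  # 0.0175 1.0
```

The first output sample only sees one tap of history; from sample 57 onward the filter operates on a full window, which is why the output ramps up to 1.0.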

Botan

Botan 2.17.3, Test: ChaCha20Poly1305 - Decrypt (MiB/s, more is better)
r1:  619.46 (SE +/- 0.40, N = 3; Min 619.03 / Max 620.26)
r1a: 619.54 (SE +/- 0.57, N = 3; Min 618.44 / Max 620.33)
r2b: 612.44 (SE +/- 3.49, N = 3; Min 608.87 / Max 619.41)
r3:  612.15 (SE +/- 3.74, N = 3; Min 607.38 / Max 619.53)
r4:  615.98 (SE +/- 2.81, N = 3; Min 610.4 / Max 619.37)
1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

SecureMark

SecureMark is an objective, standardized benchmarking framework developed by EEMBC for measuring the efficiency of cryptographic processing solutions. SecureMark-TLS benchmarks Transport Layer Security performance with a focus on IoT/edge computing. Learn more via the OpenBenchmarking.org test page.

SecureMark 1.0.4, Benchmark: SecureMark-TLS (marks, more is better)
r1:  225412 (SE +/- 234.37, N = 3; Min 225033.83 / Max 225841.06)
r1a: 225366 (SE +/- 236.12, N = 3; Min 224893.89 / Max 225612.48)
r2b: 225343 (SE +/- 84.15, N = 3; Min 225255.45 / Max 225511.25)
r3:  225291 (SE +/- 267.95, N = 3; Min 224806.05 / Max 225730.83)
r4:  222747 (SE +/- 2769.20, N = 3; Min 217215.56 / Max 225749.55)
1. (CC) gcc options: -pedantic -O3

Botan

Botan 2.17.3, Test: Blowfish (MiB/s, more is better)
r1:  363.04 (SE +/- 0.56, N = 3; Min 361.92 / Max 363.61)
r1a: 363.62 (SE +/- 0.05, N = 3; Min 363.53 / Max 363.69)
r2b: 362.93 (SE +/- 0.11, N = 3; Min 362.8 / Max 363.15)
r3:  359.45 (SE +/- 3.73, N = 3; Min 352 / Max 363.27)
r4:  359.57 (SE +/- 3.51, N = 3; Min 352.55 / Max 363.26)
1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

Google Draco

Draco is a library developed by Google for compressing/decompressing 3D geometric meshes and point clouds. This test profile uses Artec3D PLY models as the sample 3D model inputs for Draco compression/decompression. Learn more via the OpenBenchmarking.org test page.

Google Draco 1.4.1, Model: Church Facade (ms, fewer is better)
r2b: 7001 (SE +/- 20.01, N = 3; Min 6980 / Max 7041)
r4:  7082 (SE +/- 3.33, N = 3; Min 7075 / Max 7085)
1. (CXX) g++ options: -O3
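Part of what a mesh compressor like Draco does is quantize floating-point vertex coordinates onto a fixed integer grid (the number of quantization bits is an encoder setting). A hypothetical sketch of that idea, not Draco's actual code, with all names invented for illustration:

```python
def quantize(values, bits):
    """Map floats in [lo, hi] onto a (2**bits - 1)-step integer grid."""
    lo, hi = min(values), max(values)
    scale = (2 ** bits - 1) / (hi - lo) if hi > lo else 0.0
    return [round((v - lo) * scale) for v in values], lo, scale

def dequantize(q, lo, scale):
    """Invert the grid mapping; precision lost is bounded by half a grid step."""
    return [lo + (qi / scale if scale else 0.0) for qi in q]

coords = [0.0, 0.125, 0.5, 0.999, 1.0]
q, lo, scale = quantize(coords, 10)  # 10-bit grid: 1023 steps across the range
back = dequantize(q, lo, scale)
err = max(abs(a - b) for a, b in zip(coords, back))
print(q[0], q[-1], err < 1.0 / 1023)  # 0 1023 True
```

Round-to-nearest keeps the reconstruction error under half a grid step, so fewer bits trade precision for smaller encoded integers.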

Botan

Botan 2.17.3, Test: Twofish (MiB/s, more is better)
r1:  289.13 (SE +/- 0.14, N = 3; Min 288.88 / Max 289.35)
r1a: 288.85 (SE +/- 0.14, N = 3; Min 288.58 / Max 289.02)
r2b: 288.56 (SE +/- 0.11, N = 3; Min 288.41 / Max 288.78)
r3:  286.18 (SE +/- 2.66, N = 3; Min 280.85 / Max 288.95)
r4:  286.00 (SE +/- 2.83, N = 3; Min 280.34 / Max 288.85)
1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
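The result viewer can condense a comparison into an overall geometric mean. A sketch of that summary over the five Botan throughput averages on this page, comparing r1 against r3 (values copied from the result tables; the viewer's exact aggregation may differ):

```python
import math

def geomean(xs):
    # Geometric mean: exp of the arithmetic mean of the logs.
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# Botan MiB/s averages: CAST-256, ChaCha20Poly1305, ChaCha20Poly1305 Decrypt,
# Blowfish, Twofish.
r1 = [115.97, 623.49, 619.46, 363.04, 289.13]
r3 = [114.52, 616.50, 612.15, 359.45, 286.18]
print(geomean(r1) / geomean(r3))  # > 1.0 means r1 was faster overall
```

Here the ratio comes out at roughly 1.01, i.e. r1 is about 1% faster than r3 across these five Botan tests.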

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2, Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
r1:  0.338327 (SE +/- 0.000853, N = 3; Min 0.34 / Max 0.34; MIN: 0.3)
r1a: 0.341663 (SE +/- 0.002562, N = 3; Min 0.34 / Max 0.35; MIN: 0.31)
r2b: 0.341893 (SE +/- 0.003448, N = 5; Min 0.33 / Max 0.35; MIN: 0.3)
r3:  0.341955 (SE +/- 0.003372, N = 6; Min 0.34 / Max 0.36; MIN: 0.31)
r4:  0.340243 (SE +/- 0.004121, N = 3; Min 0.33 / Max 0.35; MIN: 0.3)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Liquid-DSP

Liquid-DSP 2021.01.31, Threads: 160 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better)
r1:  3144800000 (SE +/- 17047384.94, N = 3; Min 3110800000 / Max 3164000000)
r1a: 3162066667 (SE +/- 2062630.47, N = 3; Min 3158000000 / Max 3164700000)
r2b: 3131866667 (SE +/- 14685858.66, N = 3; Min 3113500000 / Max 3160900000)
r3:  3143300000 (SE +/- 14901789.60, N = 3; Min 3113500000 / Max 3158600000)
r4:  3140266667 (SE +/- 16411005.79, N = 3; Min 3107500000 / Max 3158300000)
1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

oneDNN

oneDNN 2.1.2, Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better)
r1:  447.97 (SE +/- 0.58, N = 3; Min 447.38 / Max 449.14; MIN: 433.22)
r1a: 447.31 (SE +/- 0.90, N = 3; Min 445.85 / Max 448.94; MIN: 432.33)
r2b: 446.39 (SE +/- 0.78, N = 3; Min 445.04 / Max 447.76; MIN: 432.04)
r3:  450.65 (SE +/- 2.40, N = 3; Min 446.45 / Max 454.76; MIN: 432.96)
r4:  446.54 (SE +/- 1.10, N = 3; Min 444.35 / Max 447.76; MIN: 429.71)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2, Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, fewer is better)
r1:  1.24809 (SE +/- 0.00180, N = 3; Min 1.24 / Max 1.25; MIN: 1.2)
r1a: 1.25267 (SE +/- 0.01592, N = 15; Min 1.23 / Max 1.47; MIN: 1.19)
r2b: 1.25313 (SE +/- 0.00964, N = 3; Min 1.24 / Max 1.27; MIN: 1.2)
r3:  1.24176 (SE +/- 0.01211, N = 3; Min 1.22 / Max 1.26; MIN: 1.18)
r4:  1.24222 (SE +/- 0.01282, N = 3; Min 1.23 / Max 1.27; MIN: 1.19)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2, Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
r1:  3.53026 (SE +/- 0.00193, N = 3; Min 3.53 / Max 3.53; MIN: 3.38)
r1a: 3.54367 (SE +/- 0.00732, N = 3; Min 3.53 / Max 3.56; MIN: 3.38)
r2b: 3.53121 (SE +/- 0.00854, N = 3; Min 3.52 / Max 3.55; MIN: 3.37)
r3:  3.56224 (SE +/- 0.01280, N = 3; Min 3.54 / Max 3.59; MIN: 3.39)
r4:  3.54783 (SE +/- 0.00650, N = 3; Min 3.54 / Max 3.56; MIN: 3.37)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Helsing

Helsing is an open-source POSIX vampire number generator. This test profile measures the time it takes to generate vampire numbers within varying digit ranges. Learn more via the OpenBenchmarking.org test page.

Helsing 1.0-beta, Digit Range: 14 digit (Seconds, fewer is better)
r1:  77.87
r1a: 78.16
r2b: 78.33
r3:  78.08
r4:  78.54
1. (CC) gcc options: -O2 -pthread -lcrypto
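For context on the workload: a vampire number is a 2k-digit number that factors into two k-digit "fangs" whose digits, taken together, are a permutation of the number's own digits (with at most one fang ending in zero). A small sketch of the defining check, purely illustrative since Helsing's generator is far more optimized:

```python
def is_vampire(n):
    """True if n is a vampire number (e.g. 1260 = 21 x 60)."""
    s = str(n)
    if len(s) % 2:
        return False  # vampire numbers have an even digit count
    k = len(s) // 2
    for a in range(10 ** (k - 1), int(n ** 0.5) + 1):
        if n % a:
            continue
        b = n // a
        if len(str(b)) != k:
            continue  # both fangs must have exactly k digits
        if str(a)[-1] == "0" and str(b)[-1] == "0":
            continue  # at most one fang may end in zero
        if sorted(str(a) + str(b)) == sorted(s):
            return True
    return False

print([n for n in range(1000, 2000) if is_vampire(n)])  # [1260, 1395, 1435, 1530, 1827]
```

The benchmark's 14-digit range means searching fang pairs among 7-digit factors, which is why run times reach into the tens of seconds.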

oneDNN

oneDNN 2.1.2, Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
r1:  445.14 (SE +/- 0.58, N = 3; Min 444.55 / Max 446.3; MIN: 431.52)
r1a: 446.94 (SE +/- 1.79, N = 3; Min 443.6 / Max 449.74; MIN: 430.47)
r2b: 447.29 (SE +/- 0.65, N = 3; Min 446.24 / Max 448.47; MIN: 433.06)
r3:  447.14 (SE +/- 1.24, N = 3; Min 445.31 / Max 449.5; MIN: 432.42)
r4:  448.91 (SE +/- 3.51, N = 3; Min 445.15 / Max 455.92; MIN: 431.33)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL, NVIDIA OptiX, and NVIDIA CUDA is supported. Learn more via the OpenBenchmarking.org test page.