xeon-platinum-8380-2p-smoke-run

2 x Intel Xeon Platinum 8380 testing with an Intel M50CYP2SB2U (SE5C6200.86B.0022.D08.2103221623 BIOS) and ASPEED graphics on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2105012-IB-XEONPLATI04

Test suites represented in this result file:

AV1: 2 Tests
Timed Code Compilation: 6 Tests
C/C++ Compiler Tests: 6 Tests
CPU Massive: 11 Tests
Creator Workloads: 13 Tests
Cryptography: 3 Tests
Encoding: 4 Tests
Game Development: 5 Tests
HPC - High Performance Computing: 4 Tests
Imaging: 2 Tests
Machine Learning: 2 Tests
Molecular Dynamics: 2 Tests
Multi-Core: 16 Tests
NVIDIA GPU Compute: 3 Tests
OpenMPI Tests: 2 Tests
Programmer / Developer System Benchmarks: 6 Tests
Python Tests: 5 Tests
Renderers: 2 Tests
Scientific Computing: 2 Tests
Software Defined Radio: 4 Tests
Server CPU Tests: 11 Tests
Single-Threaded: 3 Tests
Texture Compression: 4 Tests
Video Encoding: 4 Tests
Common Workstation Benchmarks: 2 Tests


Run Management

Run    Date           Test Duration
r1     April 28 2021  1 Day, 1 Minute
r1a    April 29 2021  11 Hours, 50 Minutes
r2     April 29 2021  1 Minute
r2a    April 29 2021  1 Hour, 9 Minutes
r2b    April 29 2021  18 Hours, 2 Minutes
r3     April 30 2021  17 Hours, 57 Minutes
r4     April 30 2021  17 Hours, 55 Minutes
r5     May 01 2021    46 Minutes



xeon-platinum-8380-2p-smoke-run - System Configuration (runs r1 through r5)

Processor: 2 x Intel Xeon Platinum 8380 @ 3.40GHz (80 Cores / 160 Threads)
Motherboard: Intel M50CYP2SB2U (SE5C6200.86B.0022.D08.2103221623 BIOS)
Chipset: Intel Device 0998
Memory: 16 x 32 GB DDR4-3200MT/s Hynix HMA84GR7CJR4N-XN
Disk: 2 x 7682GB INTEL SSDPF2KX076TZ + 2 x 800GB INTEL SSDPF21Q800GB + 3841GB Micron_9300_MTFDHAL3T8TDP + 960GB INTEL SSDSC2KG96
Graphics: ASPEED
Monitor: VE228
Network: 2 x Intel X710 for 10GBASE-T + 2 x Intel E810-C for QSFP
OS: Ubuntu 20.04
Kernel: 5.11.0-051100-generic (x86_64)
Desktop: GNOME Shell 3.36.4
Display Server: X Server 1.20.8
Compiler: GCC 9.3.0
File-System: ext4
Screen Resolution: 1920x1080 / 1024x768

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: intel_pstate performance (r1, r1a, r2); intel_pstate powersave (r2a, r2b, r3, r4, r5); CPU Microcode: 0xd000270 (all runs)
Python Details: Python 2.7.18 + Python 3.8.5
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

[xeon-platinum-8380-2p-smoke-run results overview: this section of the original page is a condensed side-by-side table of every result for runs r1 through r5, covering oneDNN, AOM AV1, SVT-VP9, SVT-HEVC, Intel Memory Latency Checker, timed code compilation (Erlang, Wasmer, Linux kernel, Node.js, Mesa, LLVM), LuxCoreRender, Xcompact3d Incompact3d, avifenc, LuaRadio, GNU Radio, Liquid-DSP, XMRig, srsLTE, Stockfish, VOSK, HammerDB MariaDB, GMPbench, tjbench, mysqlslap, Botan, Basis Universal, ASTC Encoder, toktx, Blender, sysbench, CP2K, ViennaCL, Draco, Helsing, toyBrot, MNN, and SecureMark-TLS, among others. The table did not survive text extraction; individual per-test results follow below.]

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, fewer is better):
  r4:  28.46130 (SE +/- 0.38629, N = 12; Min: 26.94 / Max: 32.39; MIN: 14.76)
  r3:  28.18150 (SE +/- 0.30585, N = 15; Min: 26.85 / Max: 31.91; MIN: 14.34)
  r2b: 28.40230 (SE +/- 0.31773, N = 13; Min: 27.46 / Max: 31.89; MIN: 14.66)
  r1a: 7.50059 (SE +/- 0.01835, N = 3; Min: 7.46 / Max: 7.52; MIN: 6.91)
  r1:  7.49467 (SE +/- 0.02080, N = 3; Min: 7.47 / Max: 7.54; MIN: 6.98)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
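As a quick sanity check on the spread above, a small script can quantify the gap between the run averages. The grouping below assumes the intel_pstate performance/powersave governor split recorded in the Processor Details is the relevant difference between these runs; the values are the oneDNN Deconvolution Batch shapes_1d f32 averages reported above (ms, lower is better).

```python
# Ratio of the powersave-governor run averages (r2b, r3, r4) to the
# performance-governor run averages (r1, r1a) for this oneDNN harness.
from statistics import mean

performance_ms = {"r1": 7.49467, "r1a": 7.50059}
powersave_ms = {"r2b": 28.40230, "r3": 28.18150, "r4": 28.46130}

slowdown = mean(powersave_ms.values()) / mean(performance_ms.values())
print(f"powersave runs averaged {slowdown:.2f}x the performance-run time")
```

On these numbers the powersave runs come out roughly 3.8x slower, the largest governor-correlated swing anywhere in this result file.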

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.0 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p (Frames Per Second, more is better):
  r4:  42.37 (SE +/- 0.28, N = 3; Min: 41.84 / Max: 42.77)
  r3:  43.42 (SE +/- 0.31, N = 15; Min: 40.6 / Max: 44.65)
  r2b: 43.26 (SE +/- 0.49, N = 3; Min: 42.47 / Max: 44.15)
  r1a: 125.25 (SE +/- 0.82, N = 15; Min: 119.84 / Max: 130.13)
  (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

AOM AV1 3.0 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p (Frames Per Second, more is better):
  r4:  36.35 (SE +/- 0.27, N = 3; Min: 35.96 / Max: 36.86)
  r3:  36.06 (SE +/- 0.26, N = 3; Min: 35.73 / Max: 36.56)
  r2b: 36.20 (SE +/- 0.19, N = 3; Min: 35.82 / Max: 36.4)
  r1a: 103.92 (SE +/- 1.01, N = 15; Min: 94.84 / Max: 110.81)
  (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

AOM AV1 3.0 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, more is better):
  r4:  7.43 (SE +/- 0.05, N = 3; Min: 7.35 / Max: 7.52)
  r3:  7.38 (SE +/- 0.06, N = 3; Min: 7.26 / Max: 7.48)
  r2b: 7.45 (SE +/- 0.01, N = 3; Min: 7.44 / Max: 7.46)
  r1a: 21.25 (SE +/- 0.17, N = 3; Min: 20.96 / Max: 21.56)
  (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

AOM AV1 3.0 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p (Frames Per Second, more is better):
  r4:  10.54 (SE +/- 0.05, N = 3; Min: 10.49 / Max: 10.64)
  r3:  10.39 (SE +/- 0.01, N = 3; Min: 10.38 / Max: 10.41)
  r2b: 10.39 (SE +/- 0.03, N = 3; Min: 10.33 / Max: 10.43)
  r1a: 28.66 (SE +/- 0.06, N = 3; Min: 28.58 / Max: 28.77)
  (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

AOM AV1 3.0 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K (Frames Per Second, more is better):
  r4:  6.00 (SE +/- 0.01, N = 3; Min: 5.98 / Max: 6.01)
  r3:  5.97 (SE +/- 0.07, N = 12; Min: 5.3 / Max: 6.17)
  r2b: 5.97 (SE +/- 0.06, N = 3; Min: 5.89 / Max: 6.08)
  r1a: 15.19 (SE +/- 0.03, N = 3; Min: 15.13 / Max: 15.25)
  r1:  15.09 (SE +/- 0.05, N = 3; Min: 14.99 / Max: 15.15)
  (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

AOM AV1 3.0 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K (Frames Per Second, more is better):
  r4:  12.10 (SE +/- 0.17, N = 3; Min: 11.79 / Max: 12.38)
  r3:  11.94 (SE +/- 0.12, N = 15; Min: 10.66 / Max: 12.53)
  r2b: 12.03 (SE +/- 0.08, N = 15; Min: 11.36 / Max: 12.45)
  r1a: 28.99 (SE +/- 0.29, N = 5; Min: 28.36 / Max: 29.86)
  r1:  29.20 (SE +/- 0.19, N = 3; Min: 28.96 / Max: 29.58)
  (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

AOM AV1 3.0 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K (Frames Per Second, more is better):
  r4:  3.23 (SE +/- 0.03, N = 5; Min: 3.13 / Max: 3.32)
  r3:  3.20 (SE +/- 0.04, N = 3; Min: 3.13 / Max: 3.27)
  r2b: 3.22 (SE +/- 0.03, N = 9; Min: 3.07 / Max: 3.35)
  r1a: 7.55 (SE +/- 0.06, N = 3; Min: 7.43 / Max: 7.64)
  r1:  7.37 (SE +/- 0.09, N = 15; Min: 6.58 / Max: 7.82)
  (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

AOM AV1 3.0 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K (Frames Per Second, more is better):
  r4:  14.73 (SE +/- 0.08, N = 3; Min: 14.65 / Max: 14.88)
  r3:  14.06 (SE +/- 0.18, N = 4; Min: 13.55 / Max: 14.36)
  r2b: 14.30 (SE +/- 0.15, N = 15; Min: 13.19 / Max: 15.14)
  r1a: 32.51 (SE +/- 0.28, N = 3; Min: 32.17 / Max: 33.06)
  r1:  33.07 (SE +/- 0.28, N = 3; Min: 32.72 / Max: 33.62)
  (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
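To summarize the AOM AV1 spread, a short script can compute the per-mode speedup of the fastest run (r1a) over r4, using the average frame rates reported above; the numbers are taken directly from this page.

```python
# Per-mode speedup of r1a over r4 for AOM AV1 3.0 (FPS, higher is better).
fps = {  # mode: (r4, r1a)
    "Speed 9 Realtime 1080p": (42.37, 125.25),
    "Speed 8 Realtime 1080p": (36.35, 103.92),
    "Speed 6 Two-Pass 1080p": (7.43, 21.25),
    "Speed 6 Realtime 1080p": (10.54, 28.66),
    "Speed 6 Realtime 4K": (6.00, 15.19),
    "Speed 8 Realtime 4K": (12.10, 28.99),
    "Speed 6 Two-Pass 4K": (3.23, 7.55),
    "Speed 9 Realtime 4K": (14.73, 32.51),
}
speedups = {mode: fast / slow for mode, (slow, fast) in fps.items()}
for mode, s in sorted(speedups.items(), key=lambda kv: -kv[1]):
    print(f"{mode}: {s:.2f}x")
```

Every encoder mode lands in roughly the 2.2x to 3.0x range, with the Speed 9 Realtime 1080p preset showing the largest gap.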

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better):
  r4:  179.13 (SE +/- 0.47, N = 3; Min: 178.64 / Max: 180.07)
  r3:  181.52 (SE +/- 2.25, N = 3; Min: 178.55 / Max: 185.94)
  r2b: 182.17 (SE +/- 0.90, N = 3; Min: 181 / Max: 183.94)
  r1a: 408.24 (SE +/- 0.66, N = 3; Min: 407.03 / Max: 409.32)
  r1:  401.29 (SE +/- 1.44, N = 3; Min: 398.74 / Max: 403.74)
  (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 10 - Input: Bosphorus 1080p (Frames Per Second, more is better):
  r4:  233.96 (SE +/- 1.14, N = 3; Min: 231.75 / Max: 235.57)
  r3:  234.39 (SE +/- 1.80, N = 10; Min: 229.62 / Max: 249.38)
  r2b: 234.51 (SE +/- 2.64, N = 4; Min: 231.3 / Max: 242.33)
  r1a: 493.51 (SE +/- 4.78, N = 3; Min: 486.22 / Max: 502.51)
  r1:  499.23 (SE +/- 3.80, N = 3; Min: 494.64 / Max: 506.76)
  (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

Intel Memory Latency Checker

Intel Memory Latency Checker (MLC) is a binary-only system memory bandwidth and latency benchmark. Learn more via the OpenBenchmarking.org test page.

Intel Memory Latency Checker - Test: Idle Latency (ns, fewer is better):
  r5:  68.1 (SE +/- 0.09, N = 3; Min: 67.9 / Max: 68.2)
  r4:  67.8 (SE +/- 0.12, N = 3; Min: 67.6 / Max: 68)
  r3:  67.6 (SE +/- 0.12, N = 3; Min: 67.4 / Max: 67.8)
  r2a: 32.5 (SE +/- 0.28, N = 8; Min: 31.2 / Max: 33.7)
  r2:  67.5 (SE +/- 0.09, N = 3; Min: 67.3 / Max: 67.6)
  r1a: 33.0 (SE +/- 0.39, N = 3; Min: 32.5 / Max: 33.8)
  r1:  35.1 (SE +/- 0.10, N = 3; Min: 35 / Max: 35.3)
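The "SE +/-" figures throughout this page are standard errors of the mean: the sample standard deviation divided by sqrt(N). A minimal sketch of that calculation, using three hypothetical raw samples reconstructed from the r5 Min/Avg/Max values above (the actual per-iteration samples are not published, so the middle value is an assumption):

```python
# Standard error of the mean as reported by the result viewer:
# SE = stdev(samples) / sqrt(N).
from math import sqrt
from statistics import mean, stdev

samples_ns = [67.9, 68.1, 68.2]  # hypothetical raw latencies for r5
se = stdev(samples_ns) / sqrt(len(samples_ns))
print(f"Avg: {mean(samples_ns):.2f} ns, SE +/- {se:.2f}")
```

With these reconstructed samples the script reproduces the reported r5 line (Avg 68.07 ns, SE +/- 0.09).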

AOM AV1


AOM AV1 3.0 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, more is better):
  r4:  3.36 (SE +/- 0.01, N = 3; Min: 3.34 / Max: 3.38)
  r3:  3.36 (SE +/- 0.04, N = 5; Min: 3.23 / Max: 3.45)
  r2b: 3.30 (SE +/- 0.03, N = 3; Min: 3.26 / Max: 3.36)
  r1a: 6.89 (SE +/- 0.02, N = 3; Min: 6.85 / Max: 6.91)
  (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

AOM AV1 3.0 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K (Frames Per Second, more is better):
  r4:  2.10 (SE +/- 0.01, N = 3; Min: 2.09 / Max: 2.11)
  r3:  2.05 (SE +/- 0.02, N = 9; Min: 1.97 / Max: 2.11)
  r2b: 2.01 (SE +/- 0.03, N = 3; Min: 1.98 / Max: 2.06)
  r1a: 4.17 (SE +/- 0.03, N = 3; Min: 4.13 / Max: 4.24)
  (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

SVT-VP9


SVT-VP9 0.3 - Tuning: Visual Quality Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better):
  r4:  162.21 (SE +/- 1.59, N = 3; Min: 159.05 / Max: 164.12)
  r3:  164.51 (SE +/- 1.63, N = 3; Min: 161.35 / Max: 166.76)
  r2b: 164.32 (SE +/- 1.13, N = 3; Min: 162.07 / Max: 165.46)
  r1a: 329.53 (SE +/- 1.10, N = 3; Min: 327.66 / Max: 331.48)
  r1:  327.87 (SE +/- 1.20, N = 3; Min: 325.69 / Max: 329.83)
  (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

SVT-HEVC


SVT-HEVC 1.5.0 - Tuning: 7 - Input: Bosphorus 1080p (Frames Per Second, more is better):
  r4:  156.26 (SE +/- 1.22, N = 3; Min: 154.92 / Max: 158.69)
  r3:  157.83 (SE +/- 1.64, N = 3; Min: 154.6 / Max: 159.91)
  r2b: 158.16 (SE +/- 1.76, N = 5; Min: 154.64 / Max: 163.67)
  r1a: 288.99 (SE +/- 1.37, N = 3; Min: 287.22 / Max: 291.69)
  r1:  290.67 (SE +/- 1.68, N = 3; Min: 287.36 / Max: 292.83)
  (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

Timed Erlang/OTP Compilation

This test times how long it takes to compile Erlang/OTP. Erlang is a programming language and run-time for massively scalable soft real-time systems with high availability requirements. Learn more via the OpenBenchmarking.org test page.

Timed Erlang/OTP Compilation 23.2 - Time To Compile (Seconds, fewer is better):
  r4:  193.84 (SE +/- 1.56, N = 3; Min: 191.34 / Max: 196.71)
  r3:  192.25 (SE +/- 0.31, N = 3; Min: 191.89 / Max: 192.87)
  r2b: 191.75 (SE +/- 1.08, N = 3; Min: 189.6 / Max: 192.94)
  r1a: 113.80 (SE +/- 0.37, N = 3; Min: 113.19 / Max: 114.48)
  r1:  114.55 (SE +/- 0.18, N = 3; Min: 114.19 / Max: 114.74)
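Put another way, the compile-time gap above can be expressed as wall-clock time saved per build; a minimal sketch comparing the r2b and r1a averages from this page:

```python
# Wall-clock time saved per Erlang/OTP build between the slower r2b
# average (191.75 s) and the faster r1a average (113.80 s).
saved_s = 191.75 - 113.80
print(f"{saved_s:.2f} s saved per build ({saved_s / 191.75:.0%} less time)")
```

That works out to roughly 78 seconds, or about 41% less build time, per compilation.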

AOM AV1


AOM AV1 3.0 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, more is better):
  r4:  0.33 (SE +/- 0.00, N = 3; Min: 0.33 / Max: 0.33)
  r3:  0.33 (SE +/- 0.00, N = 3; Min: 0.33 / Max: 0.33)
  r2b: 0.32 (SE +/- 0.00, N = 3; Min: 0.32 / Max: 0.33)
  r1a: 0.51 (SE +/- 0.00, N = 3; Min: 0.5 / Max: 0.51)
  (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

LuxCoreRender

LuxCoreRender is an open-source 3D physically based renderer formerly known as LuxRender. LuxCoreRender supports CPU-based rendering as well as GPU acceleration via OpenCL, NVIDIA CUDA, and NVIDIA OptiX interfaces. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.5 - Scene: LuxCore Benchmark - Acceleration: CPU (M samples/sec, more is better):
  r4:  5.87 (SE +/- 0.03, N = 3; Min: 5.82 / Max: 5.92; MIN: 1.15 / MAX: 7.95)
  r3:  5.92 (SE +/- 0.01, N = 3; Min: 5.9 / Max: 5.95; MIN: 1.15 / MAX: 7.98)
  r2b: 5.84 (SE +/- 0.02, N = 3; Min: 5.8 / Max: 5.87; MIN: 1.16 / MAX: 7.97)
  r1a: 8.04 (SE +/- 0.01, N = 3; Min: 8.01 / Max: 8.05; MIN: 3.51 / MAX: 9.33)
  r1:  7.84 (SE +/- 0.05, N = 3; Min: 7.79 / Max: 7.94; MIN: 3.44 / MAX: 9.2)

AOM AV1


AOM AV1 3.0 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K (Frames Per Second, more is better):
  r4:  0.14 (SE +/- 0.00, N = 3; Min: 0.14 / Max: 0.14)
  r3:  0.15 (SE +/- 0.00, N = 3; Min: 0.15 / Max: 0.15)
  r2b: 0.14 (SE +/- 0.00, N = 12; Min: 0.14 / Max: 0.15)
  r1a: 0.19 (SE +/- 0.00, N = 5; Min: 0.19 / Max: 0.2)
  (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

SVT-HEVC


SVT-HEVC 1.5.0, Tuning: 1 - Input: Bosphorus 1080p (Frames Per Second; more is better)
  r4:  28.01 (SE +/- 0.31, N = 3; min 27.41 / max 28.48)
  r3:  28.22 (SE +/- 0.14, N = 3; min 27.94 / max 28.42)
  r2b: 27.80 (SE +/- 0.09, N = 3; min 27.64 / max 27.95)
  r1a: 37.34 (SE +/- 0.24, N = 3; min 37.03 / max 37.81)
  r1:  36.91 (SE +/- 0.29, N = 3; min 36.42 / max 37.42)
(CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

LuxCoreRender


LuxCoreRender 2.5, Scene: Danish Mood - Acceleration: CPU (M samples/sec; more is better)
  r4:  5.68 (SE +/- 0.04, N = 3; min 5.61 / max 5.76)
  r3:  5.65 (SE +/- 0.07, N = 3; min 5.52 / max 5.74)
  r2b: 5.73 (SE +/- 0.04, N = 3; min 5.66 / max 5.80)
  r1a: 7.55 (SE +/- 0.10, N = 3; min 7.37 / max 7.70)
  r1:  7.42 (SE +/- 0.08, N = 3; min 7.27 / max 7.55)

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference, high-performance code for solving the incompressible Navier-Stokes equations together with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.
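As a rough illustration of the method family behind this code (not Incompact3d itself, which uses high-order compact schemes over MPI), a central finite difference approximates a second derivative from neighboring samples:

```python
# Minimal sketch: second-order central finite difference for f''(x),
# the basic building block of finite-difference flow solvers.
import math

def second_derivative(f, x, h=1e-4):
    """Approximate f''(x) with a central difference of spacing h."""
    return (f(x - h) - 2.0 * f(x) + f(x + h)) / (h * h)

# d^2/dx^2 sin(x) = -sin(x); check the approximation at x = 1.0.
approx = second_derivative(math.sin, 1.0)
exact = -math.sin(1.0)
print(abs(approx - exact) < 1e-5)  # True
```

Halving h quarters the truncation error (the scheme is second order), which is why grid resolution (129 vs. 193 cells per direction below) drives run time so strongly.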

Xcompact3d Incompact3d 2021-03-11, Input: input.i3d 129 Cells Per Direction (Seconds; fewer is better)
  r4:  3.57 (SE +/- 0.03, N = 15; min 3.42 / max 3.79)
  r3:  3.57 (SE +/- 0.03, N = 15; min 3.42 / max 3.89)
  r2b: 3.02 (SE +/- 0.03, N = 3; min 2.97 / max 3.06)
  r1a: 2.74 (SE +/- 0.02, N = 3; min 2.71 / max 2.76)
  r1:  2.74 (SE +/- 0.01, N = 3; min 2.73 / max 2.76)
(F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

Xcompact3d Incompact3d 2021-03-11, Input: input.i3d 193 Cells Per Direction (Seconds; fewer is better)
  r4:  14.66 (SE +/- 0.02, N = 3; min 14.63 / max 14.69)
  r3:  14.60 (SE +/- 0.03, N = 3; min 14.55 / max 14.64)
  r2b: 11.56 (SE +/- 0.04, N = 3; min 11.48 / max 11.61)
  r1a: 11.27 (SE +/- 0.03, N = 3; min 11.22 / max 11.32)
  r1:  11.36 (SE +/- 0.02, N = 3; min 11.31 / max 11.40)
(F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

Xcompact3d Incompact3d 2021-03-11, Input: X3D-benchmarking input.i3d (Seconds; fewer is better)
  r4:  389.70 (SE +/- 3.91, N = 9; min 379.73 / max 405.58)
  r3:  386.39 (SE +/- 4.39, N = 9; min 379.12 / max 413.18)
  r2b: 307.62 (SE +/- 2.73, N = 9; min 298.59 / max 315.81)
  r1a: 311.96 (SE +/- 0.12, N = 3; min 311.74 / max 312.14)
  r1:  313.92 (SE +/- 0.46, N = 3; min 313.03 / max 314.59)
(F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.9.0, Encoder Speed: 6 (Seconds; fewer is better)
  r4:  16.21 (SE +/- 0.12, N = 15; min 15.26 / max 16.89)
  r3:  16.62 (SE +/- 0.13, N = 15; min 15.73 / max 17.78)
  r2b: 16.07 (SE +/- 0.23, N = 3; min 15.72 / max 16.49)
  r1a: 13.33 (SE +/- 0.08, N = 3; min 13.20 / max 13.48)
  r1:  13.25 (SE +/- 0.05, N = 3; min 13.18 / max 13.35)
(CXX) g++ options: -O3 -fPIC -lm

libavif avifenc 0.9.0, Encoder Speed: 6, Lossless (Seconds; fewer is better)
  r4:  38.51 (SE +/- 0.36, N = 6; min 37.14 / max 39.73)
  r3:  38.59 (SE +/- 0.35, N = 3; min 38.18 / max 39.29)
  r2b: 38.40 (SE +/- 0.24, N = 3; min 37.93 / max 38.70)
  r1a: 31.62 (SE +/- 0.09, N = 3; min 31.48 / max 31.80)
  r1:  32.11 (SE +/- 0.04, N = 3; min 32.05 / max 32.19)
(CXX) g++ options: -O3 -fPIC -lm

libavif avifenc 0.9.0, Encoder Speed: 2 (Seconds; fewer is better)
  r4:  37.80 (SE +/- 0.08, N = 3; min 37.64 / max 37.91)
  r3:  38.31 (SE +/- 0.20, N = 3; min 37.93 / max 38.60)
  r2b: 38.37 (SE +/- 0.40, N = 3; min 37.65 / max 39.03)
  r1a: 31.48 (SE +/- 0.04, N = 3; min 31.40 / max 31.55)
  r1:  31.54 (SE +/- 0.10, N = 3; min 31.41 / max 31.73)
(CXX) g++ options: -O3 -fPIC -lm

LuaRadio

LuaRadio is a lightweight software-defined radio (SDR) framework built atop LuaJIT. LuaRadio provides a suite of source, sink, and processing blocks, with a simple API for defining flow graphs, running flow graphs, creating blocks, and creating data types. Learn more via the OpenBenchmarking.org test page.
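The source/sink/processing-block pattern the description mentions can be sketched generically; the classes and names below are hypothetical illustrations in Python, not LuaRadio's actual Lua API:

```python
# Hypothetical sketch of an SDR-style flow graph: a source block feeds a
# processing block, which feeds a sink block.

class SignalSource:
    """Source block: yields a fixed list of samples."""
    def __init__(self, samples):
        self.samples = samples
    def run(self):
        yield from self.samples

class GainBlock:
    """Processing block: multiplies each sample by a constant gain."""
    def __init__(self, gain):
        self.gain = gain
    def run(self, upstream):
        for s in upstream:
            yield s * self.gain

class ListSink:
    """Sink block: collects the processed samples."""
    def __init__(self):
        self.collected = []
    def run(self, upstream):
        self.collected.extend(upstream)

# Wire the flow graph source -> gain -> sink, then run it.
source = SignalSource([1.0, 2.0, 3.0])
gain = GainBlock(0.5)
sink = ListSink()
sink.run(gain.run(source.run()))
print(sink.collected)  # [0.5, 1.0, 1.5]
```

The MiB/s figures below measure how fast individual blocks like these can stream samples through such a graph.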

LuaRadio 0.9.1, Test: Complex Phase (MiB/s; more is better)
  r4:  452.7 (SE +/- 4.50, N = 6; min 440.1 / max 470.0)
  r3:  458.2 (SE +/- 4.31, N = 6; min 443.3 / max 470.9)
  r2b: 458.7 (SE +/- 3.61, N = 9; min 443.6 / max 470.8)
  r1a: 548.2 (SE +/- 0.71, N = 3; min 546.9 / max 549.3)
  r1:  546.8 (SE +/- 0.25, N = 3; min 546.3 / max 547.1)

libavif avifenc


libavif avifenc 0.9.0, Encoder Speed: 10, Lossless (Seconds; fewer is better)
  r4:  10.208 (SE +/- 0.157, N = 15; min 9.23 / max 11.00)
  r3:  10.088 (SE +/- 0.130, N = 15; min 9.23 / max 10.99)
  r2b: 10.282 (SE +/- 0.154, N = 15; min 9.34 / max 11.29)
  r1a: 8.812 (SE +/- 0.016, N = 3; min 8.79 / max 8.84)
  r1:  8.852 (SE +/- 0.036, N = 3; min 8.81 / max 8.93)
(CXX) g++ options: -O3 -fPIC -lm

Timed Wasmer Compilation

This test times how long it takes to compile Wasmer. Wasmer is written in the Rust programming language and is a WebAssembly runtime implementation that supports WASI and Emscripten. This test profile builds Wasmer with the Cranelift and Singlepass compiler features enabled. Learn more via the OpenBenchmarking.org test page.

Timed Wasmer Compilation 1.0.2, Time To Compile (Seconds; fewer is better)
  r4:  70.76 (SE +/- 0.51, N = 3; min 70.14 / max 71.77)
  r3:  71.13 (SE +/- 0.66, N = 7; min 68.80 / max 73.79)
  r2b: 71.93 (SE +/- 0.42, N = 3; min 71.24 / max 72.69)
  r1a: 61.93 (SE +/- 0.62, N = 3; min 61.03 / max 63.12)
  r1:  62.16 (SE +/- 0.22, N = 3; min 61.82 / max 62.56)
(CC) gcc options: -m64 -pie -nodefaultlibs -ldl -lrt -lpthread -lgcc_s -lc -lm -lutil

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.10.20, Time To Compile (Seconds; fewer is better)
  r4:  28.09 (SE +/- 0.37, N = 14; min 27.27 / max 32.82)
  r3:  28.02 (SE +/- 0.41, N = 14; min 27.22 / max 33.23)
  r2b: 28.00 (SE +/- 0.32, N = 14; min 27.09 / max 31.92)
  r1a: 24.36 (SE +/- 0.28, N = 4; min 24.01 / max 25.20)
  r1:  24.38 (SE +/- 0.30, N = 4; min 24.04 / max 25.28)

libavif avifenc


libavif avifenc 0.9.0, Encoder Speed: 0 (Seconds; fewer is better)
  r4:  65.89 (SE +/- 0.68, N = 3; min 64.59 / max 66.88)
  r3:  65.96 (SE +/- 0.20, N = 3; min 65.74 / max 66.37)
  r2b: 64.97 (SE +/- 0.22, N = 3; min 64.55 / max 65.27)
  r1a: 57.71 (SE +/- 0.24, N = 3; min 57.22 / max 57.98)
  r1:  57.98 (SE +/- 0.21, N = 3; min 57.76 / max 58.40)
(CXX) g++ options: -O3 -fPIC -lm

LuaRadio


LuaRadio 0.9.1, Test: FM Deemphasis Filter (MiB/s; more is better)
  r4:  368.0 (SE +/- 1.19, N = 6; min 363.2 / max 370.2)
  r3:  370.3 (SE +/- 4.83, N = 6; min 353.2 / max 387.5)
  r2b: 370.1 (SE +/- 5.30, N = 9; min 338.9 / max 387.2)
  r1a: 409.6 (SE +/- 1.40, N = 3; min 406.8 / max 411.1)
  r1:  410.0 (SE +/- 0.21, N = 3; min 409.6 / max 410.3)

Timed Node.js Compilation

This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built on the Chrome V8 JavaScript engine, while Node.js itself is written in C/C++. Learn more via the OpenBenchmarking.org test page.

Timed Node.js Compilation 15.11, Time To Compile (Seconds; fewer is better)
  r4:  111.67 (SE +/- 0.78, N = 3; min 110.44 / max 113.10)
  r3:  111.79 (SE +/- 0.68, N = 3; min 110.54 / max 112.86)
  r2b: 110.93 (SE +/- 0.50, N = 3; min 110.03 / max 111.75)
  r1a: 100.45 (SE +/- 0.29, N = 3; min 100.12 / max 101.02)
  r1:  101.10 (SE +/- 0.27, N = 3; min 100.61 / max 101.55)

Xmrig

Xmrig is an open-source, cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight, and AstroBWT. This test profile is set up to measure the XMRig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

Xmrig 6.12.1, Variant: Monero - Hash Count: 1M (H/s; more is better)
  r4:  20574.6 (SE +/- 243.31, N = 15; min 19131.8 / max 22183.8)
  r3:  20652.9 (SE +/- 245.77, N = 3; min 20230.2 / max 21081.5)
  r2b: 19311.1 (SE +/- 151.73, N = 3; min 19032.0 / max 19553.8)
  r1a: 19452.0 (SE +/- 20.55, N = 3; min 19416.3 / max 19487.5)
  r1:  19299.5 (SE +/- 23.28, N = 3; min 19253.0 / max 19324.8)
(CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Timed Mesa Compilation

This test profile times how long it takes to compile Mesa with Meson/Ninja. To minimize build dependencies and avoid versioning conflicts, this test is just the core Mesa build without LLVM or the extra Gallium3D/Mesa drivers enabled. Learn more via the OpenBenchmarking.org test page.

Timed Mesa Compilation 21.0, Time To Compile (Seconds; fewer is better)
  r4:  21.31 (SE +/- 0.11, N = 3; min 21.11 / max 21.50)
  r3:  21.37 (SE +/- 0.15, N = 3; min 21.14 / max 21.64)
  r2b: 21.58 (SE +/- 0.04, N = 3; min 21.51 / max 21.63)
  r1a: 20.38 (SE +/- 0.12, N = 3; min 20.24 / max 20.63)
  r1:  20.95 (SE +/- 0.02, N = 3; min 20.92 / max 21.00)

LuxCoreRender


LuxCoreRender 2.5, Scene: DLSC - Acceleration: CPU (M samples/sec; more is better)
  r4:  9.25 (SE +/- 0.09, N = 3; min 9.10 / max 9.42)
  r3:  9.24 (SE +/- 0.10, N = 3; min 9.05 / max 9.34)
  r2b: 9.27 (SE +/- 0.08, N = 15; min 8.74 / max 9.85)
  r1a: 9.61 (SE +/- 0.09, N = 15; min 8.62 / max 10.06)
  r1:  9.70 (SE +/- 0.09, N = 3; min 9.60 / max 9.88)

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 12.0, Build System: Unix Makefiles (Seconds; fewer is better)
  r4:  224.29 (SE +/- 0.43, N = 3; min 223.45 / max 224.89)
  r3:  226.20 (SE +/- 1.24, N = 3; min 224.41 / max 228.59)
  r2b: 226.44 (SE +/- 0.77, N = 3; min 225.34 / max 227.92)
  r1a: 215.76 (SE +/- 0.80, N = 3; min 214.28 / max 217.02)
  r1:  216.32 (SE +/- 0.91, N = 3; min 214.93 / max 218.03)

Mobile Neural Network

MNN, the Mobile Neural Network, is a highly efficient and lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.1.3, Model: mobilenet-v1-1.0 (ms; fewer is better)
  r4:  3.362 (SE +/- 0.021, N = 12; min 3.24 / max 3.49)
  r2b: 3.213 (SE +/- 0.089, N = 3; min 3.05 / max 3.35)
(CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.
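The "Filter Length: 57" in the configurations below refers to the number of FIR filter taps; a direct-form FIR filter is a dot product over a sliding window, so per-sample cost grows with filter length. Liquid-DSP's C implementation is heavily optimized; this Python sketch only illustrates the operation being benchmarked:

```python
# Minimal sketch of a direct-form FIR filter: each output sample is the
# dot product of the taps with the most recent len(taps) input samples.

def fir_filter(taps, samples):
    """Filter samples with the given taps (causal, zero initial state)."""
    out = []
    history = [0.0] * len(taps)
    for x in samples:
        history = [x] + history[:-1]          # shift the delay line
        out.append(sum(t * h for t, h in zip(taps, history)))
    return out

# A 4-tap moving average applied to a unit step ramps up and settles at 1.0.
taps = [0.25] * 4
result = fir_filter(taps, [1.0] * 8)
print(result[:4])  # [0.25, 0.5, 0.75, 1.0]
```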

Liquid-DSP 2021.01.31, Threads: 1 - Buffer Length: 256 - Filter Length: 57 (samples/s; more is better)
  r4:  55251667 (SE +/- 534784, N = 3; min 54673000 / max 56320000)
  r3:  57197667 (SE +/- 550709, N = 3; min 56105000 / max 57864000)
  r2b: 56230333 (SE +/- 613157, N = 3; min 55442000 / max 57438000)
  r1:  57792000 (SE +/- 173701, N = 3; min 57448000 / max 58006000)
(CC) gcc options: -O3 -pthread -lm -lc -lliquid

Xmrig


Xmrig 6.12.1, Variant: Wownero - Hash Count: 1M (H/s; more is better)
  r4:  49937.3 (SE +/- 235.04, N = 3; min 49566.3 / max 50372.8)
  r3:  49813.4 (SE +/- 358.18, N = 3; min 49123.2 / max 50324.6)
  r2b: 49908.3 (SE +/- 238.38, N = 3; min 49431.5 / max 50147.9)
  r1a: 50166.1 (SE +/- 588.34, N = 3; min 49000.4 / max 50888.0)
  r1:  48051.5 (SE +/- 425.40, N = 7; min 45934.8 / max 48904.5)
(CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

srsLTE

srsLTE is an open-source LTE software radio suite created by Software Radio Systems (SRS). srsLTE can be used for building your own software-defined radio (SDR) LTE mobile network. Learn more via the OpenBenchmarking.org test page.

srsLTE 20.10.1, Test: PHY_DL_Test (UE Mb/s; more is better)
  r4:  78.3 (SE +/- 0.62, N = 3; min 77.1 / max 79.2)
  r3:  76.1 (SE +/- 1.14, N = 3; min 74.3 / max 78.2)
  r2b: 75.0 (SE +/- 0.38, N = 3; min 74.3 / max 75.6)
  r1a: 77.3 (SE +/- 1.16, N = 3; min 75.4 / max 79.4)
  r1:  76.9 (SE +/- 0.76, N = 3; min 75.7 / max 78.3)
(CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -rdynamic -lpthread -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lm -lfftw3f

toyBrot Fractal Generator

ToyBrot is a Mandelbrot fractal generator supporting C++ threads/tasks, OpenMP, Intel Threaded Building Blocks (TBB), and other targets. Learn more via the OpenBenchmarking.org test page.
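At the heart of any Mandelbrot generator is the escape-time iteration: for each pixel c, iterate z = z^2 + c and count steps until |z| exceeds 2. A minimal sketch (toyBrot's actual C++ implementations parallelize this loop across pixels):

```python
# Escape-time iteration for the Mandelbrot set: the per-pixel work that
# toyBrot distributes across threads/tasks.

def escape_time(c, max_iter=256):
    """Return iterations until divergence, or max_iter if c stays bounded."""
    z = 0j
    for i in range(max_iter):
        if abs(z) > 2.0:
            return i
        z = z * z + c
    return max_iter

print(escape_time(0j))      # 256  (the origin is in the set)
print(escape_time(2 + 0j))  # 2    (far outside; escapes almost immediately)
```

Because the iteration count varies wildly between pixels, the workload is embarrassingly parallel but load-imbalanced, which is what makes it a good test of threading runtimes.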

toyBrot Fractal Generator 2020-11-18, Implementation: C++ Tasks (ms; fewer is better)
  r4:  8037 (SE +/- 85.46, N = 4; min 7901 / max 8287)
  r3:  8048 (SE +/- 93.55, N = 4; min 7815 / max 8267)
  r2b: 8050 (SE +/- 102.03, N = 3; min 7924 / max 8252)
  r1a: 7724 (SE +/- 80.44, N = 4; min 7499 / max 7878)
  r1:  7879 (SE +/- 43.45, N = 3; min 7811 / max 7960)
(CXX) g++ options: -O3 -lpthread -lm -lgcc -lgcc_s -lc

Stockfish

This is a test of Stockfish, an advanced open-source C++11 chess engine that can scale up to 512 CPU threads. Learn more via the OpenBenchmarking.org test page.

Stockfish 13, Total Time (Nodes Per Second; more is better)
  r4:  186013261 (SE +/- 2183262, N = 4; min 180868607 / max 191549896)
  r3:  189214499 (SE +/- 1924843, N = 3; min 185384757 / max 191468285)
  r2b: 181554218 (SE +/- 1982639, N = 3; min 177852020 / max 184635307)
  r1a: 186263552 (SE +/- 2404481, N = 3; min 181497022 / max 189198848)
  r1:  181644819 (SE +/- 1585266, N = 15; min 170087434 / max 191465307)
(CXX) g++ options: -lgcov -m64 -lpthread -fno-exceptions -std=c++17 -fprofile-use -fno-peel-loops -fno-tracer -pedantic -O3 -msse -msse3 -mpopcnt -mavx2 -mavx512f -mavx512bw -mavx512vnni -mavx512dq -mavx512vl -msse4.1 -mssse3 -msse2 -mbmi2 -flto -flto=jobserver

VOSK Speech Recognition Toolkit

VOSK is an open-source offline speech recognition API/toolkit. VOSK supports speech recognition in 17 languages and has a variety of models available and interfaces for different programming languages. This test profile times the speech-to-text process for a roughly three minute audio recording. Learn more via the OpenBenchmarking.org test page.

VOSK Speech Recognition Toolkit 0.3.21 (Seconds; fewer is better)
  r4:  35.50 (SE +/- 0.32, N = 3; min 34.92 / max 36.01)
  r3:  35.58 (SE +/- 0.43, N = 3; min 34.85 / max 36.33)
  r2b: 36.42 (SE +/- 0.43, N = 3; min 35.67 / max 37.14)
  r1a: 35.01 (SE +/- 0.29, N = 8; min 33.96 / max 36.86)
  r1:  35.92 (SE +/- 0.32, N = 3; min 35.54 / max 36.55)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2, Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better)
  r4:  0.875421 (SE +/- 0.005244, N = 3; min 0.87 / max 0.88)
  r3:  0.901823 (SE +/- 0.006631, N = 3; min 0.89 / max 0.91)
  r2b: 0.869978 (SE +/- 0.004902, N = 3; min 0.86 / max 0.88)
  r1a: 0.879137 (SE +/- 0.003986, N = 3; min 0.87 / max 0.89)
  r1:  0.877815 (SE +/- 0.006225, N = 3; min 0.87 / max 0.89)
(CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Liquid-DSP


Liquid-DSP 2021.01.31, Threads: 16 - Buffer Length: 256 - Filter Length: 57 (samples/s; more is better)
  r4:  860046667 (SE +/- 10609570, N = 3; min 841040000 / max 877720000)
  r3:  865410000 (SE +/- 859903, N = 3; min 864120000 / max 867040000)
  r2b: 862890000 (SE +/- 3620723, N = 3; min 855690000 / max 867160000)
  r1a: 890273333 (SE +/- 669162, N = 3; min 888940000 / max 891040000)
  r1:  885320000 (SE +/- 691954, N = 3; min 884540000 / max 886700000)
(CC) gcc options: -O3 -pthread -lm -lc -lliquid

oneDNN


oneDNN 2.1.2, Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms; fewer is better)
  r4:  0.940714 (SE +/- 0.008450, N = 3; min 0.93 / max 0.96)
  r3:  0.936941 (SE +/- 0.007264, N = 3; min 0.93 / max 0.95)
  r2b: 0.943624 (SE +/- 0.011253, N = 3; min 0.92 / max 0.96)
  r1a: 0.912279 (SE +/- 0.002111, N = 3; min 0.91 / max 0.91)
  r1:  0.918568 (SE +/- 0.002101, N = 3; min 0.91 / max 0.92)
(CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

LuxCoreRender


LuxCoreRender 2.5, Scene: Orange Juice - Acceleration: CPU (M samples/sec; more is better)
  r4:  13.94 (SE +/- 0.13, N = 15; min 13.31 / max 15.02)
  r3:  13.89 (SE +/- 0.12, N = 15; min 13.37 / max 14.86)
  r2b: 14.28 (SE +/- 0.18, N = 3; min 13.92 / max 14.51)
  r1a: 14.26 (SE +/- 0.21, N = 3; min 14.04 / max 14.67)
  r1:  14.36 (SE +/- 0.13, N = 3; min 14.15 / max 14.59)

Liquid-DSP


Liquid-DSP 2021.01.31, Threads: 8 - Buffer Length: 256 - Filter Length: 57 (samples/s; more is better)
  r4:  432013333 (SE +/- 2739929, N = 3; min 426620000 / max 435550000)
  r3:  432170000 (SE +/- 1240739, N = 3; min 429860000 / max 434110000)
  r2b: 428100000 (SE +/- 2458909, N = 3; min 423390000 / max 431680000)
  r1:  441953333 (SE +/- 422151, N = 3; min 441150000 / max 442580000)
(CC) gcc options: -O3 -pthread -lm -lc -lliquid

oneDNN


oneDNN 2.1.2, Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms; fewer is better)
  r4:  1.11811 (SE +/- 0.01182, N = 3; min 1.10 / max 1.14)
  r3:  1.14578 (SE +/- 0.00975, N = 3; min 1.13 / max 1.16)
  r2b: 1.11874 (SE +/- 0.00330, N = 3; min 1.11 / max 1.13)
  r1a: 1.12224 (SE +/- 0.00124, N = 3; min 1.12 / max 1.12)
  r1:  1.10991 (SE +/- 0.00274, N = 3; min 1.11 / max 1.12)
(CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

toyBrot Fractal Generator


toyBrot Fractal Generator 2020-11-18, Implementation: C++ Threads (ms; fewer is better)
  r4:  7141 (SE +/- 76.94, N = 4; min 7008 / max 7363)
  r3:  7203 (SE +/- 98.76, N = 3; min 7071 / max 7396)
  r2b: 7149 (SE +/- 89.67, N = 3; min 7057 / max 7328)
  r1a: 6980 (SE +/- 29.96, N = 3; min 6938 / max 7038)
  r1:  7018 (SE +/- 49.12, N = 3; min 6956 / max 7115)
(CXX) g++ options: -O3 -lpthread -lm -lgcc -lgcc_s -lc

HammerDB - MariaDB

This is a MariaDB MySQL database server benchmark making use of the HammerDB benchmarking / load testing tool. Learn more via the OpenBenchmarking.org test page.

HammerDB - MariaDB 10.5.9, Virtual Users: 64 - Warehouses: 500 (New Orders Per Minute; more is better)
  r1a: 62311 (SE +/- 730.55, N = 9; min 58815 / max 65734)
  r1:  64298 (SE +/- 620.04, N = 3; min 63421 / max 65496)
(CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lbz2 -lsnappy -ldl -lz -lrt

HammerDB - MariaDB 10.5.9, Virtual Users: 64 - Warehouses: 500 (Transactions Per Minute; more is better)
  r1a: 188761 (SE +/- 2084.32, N = 9; min 178525 / max 198282)
  r1:  194684 (SE +/- 2149.33, N = 3; min 191710 / max 198859)
(CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lbz2 -lsnappy -ldl -lz -lrt

GNU GMP GMPbench

GMPbench is a test of the GNU Multiple Precision Arithmetic (GMP) Library. GMPbench is a single-threaded integer benchmark that leverages the GMP library to stress the CPU with widening integer multiplication. Learn more via the OpenBenchmarking.org test page.
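The "widening" in widening integer multiplication means the product of two n-bit operands occupies roughly 2n bits. The same effect can be illustrated with Python's arbitrary-precision integers in a minimal, hedged sketch (this is not GMPbench itself, and the bit width and iteration count are arbitrary choices for illustration):

```python
import time

def widening_multiply_score(bits: int = 4096, iters: int = 2000) -> float:
    """Rough analogue of a widening-multiply stress loop: multiplying two
    n-bit integers yields a product about 2n bits wide."""
    a = (1 << bits) - 1          # n-bit operand, all ones
    b = (1 << bits) - 3
    start = time.perf_counter()
    for _ in range(iters):
        p = a * b                # product is ~2*bits wide
    elapsed = time.perf_counter() - start
    assert p.bit_length() == 2 * bits   # the "widening" property
    return iters / elapsed       # multiplications per second

print(f"{widening_multiply_score():.0f} widening multiplies/sec")
```

GMP's tuned assembly kernels are far faster than CPython's generic bignum code, but the stressed operation is the same.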

GNU GMP GMPbench 6.2.1, Total Time (GMPbench Score, more is better)

| Run | Score  |
|-----|--------|
| r4  | 4525.7 |
| r3  | 4504.5 |
| r2b | 4524.5 |
| r1a | 4642.8 |
| r1  | 4642.1 |

1. (CC) gcc options: -O3 -fomit-frame-pointer -lm

libjpeg-turbo tjbench

tjbench is a JPEG decompression/compression benchmark that is part of libjpeg-turbo, a JPEG image codec library optimized for SIMD instructions on modern CPU architectures. Learn more via the OpenBenchmarking.org test page.

libjpeg-turbo tjbench 2.1.0, Test: Decompression Throughput (Megapixels/sec, more is better)

| Run | Result | SE    | N | Min    | Max    |
|-----|--------|-------|---|--------|--------|
| r4  | 159.24 | ±0.47 | 3 | 158.60 | 160.15 |
| r3  | 159.19 | ±1.04 | 3 | 157.79 | 161.23 |
| r2b | 160.26 | ±0.07 | 3 | 160.19 | 160.40 |
| r1a | 156.97 | ±0.39 | 3 | 156.22 | 157.53 |
| r1  | 161.63 | ±0.15 | 3 | 161.40 | 161.90 |

1. (CC) gcc options: -O3 -rdynamic

oneDNN

This test exercises Intel oneDNN, an Intel-optimized library for deep neural networks, via its built-in benchdnn harness. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and, before that, MKL-DNN, prior to being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2, Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)

| Run | Result   | SE        | N  | Min  | Max  |
|-----|----------|-----------|----|------|------|
| r4  | 0.402919 | ±0.002415 | 14 | 0.39 | 0.43 |
| r3  | 0.406877 | ±0.003204 | 10 | 0.40 | 0.43 |
| r2b | 0.403409 | ±0.004259 | 4  | 0.40 | 0.42 |
| r1a | 0.395588 | ±0.001124 | 3  | 0.39 | 0.40 |
| r1  | 0.398282 | ±0.001135 | 3  | 0.40 | 0.40 |

oneDNN-reported minimum latency: r4 0.36 / r3 0.37 / r2b 0.36 / r1a 0.36 / r1 0.37
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

LuaRadio

LuaRadio is a lightweight software-defined radio (SDR) framework built atop LuaJIT. LuaRadio provides a suite of source, sink, and processing blocks, with a simple API for defining flow graphs, running flow graphs, creating blocks, and creating data types. Learn more via the OpenBenchmarking.org test page.
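The source -> processing -> sink flow-graph structure that LuaRadio exposes can be sketched in a few lines of Python. This is a hypothetical mini flow graph for illustration only, not LuaRadio's actual Lua API (the names `source`, `gain`, `sink`, and `run_flowgraph` are invented here):

```python
from typing import Callable, Iterable, Iterator

def source(n: int) -> Iterator[float]:
    """Source block: generates a stream of samples."""
    for i in range(n):
        yield float(i)

def gain(factor: float) -> Callable[[Iterable[float]], Iterator[float]]:
    """Processing block: scales each sample by a constant factor."""
    def block(stream: Iterable[float]) -> Iterator[float]:
        for x in stream:
            yield x * factor
    return block

def sink(stream: Iterable[float]) -> list:
    """Sink block: consumes the stream."""
    return list(stream)

def run_flowgraph(n: int, blocks) -> list:
    """Wire source -> blocks -> sink and run the graph to completion."""
    stream: Iterable[float] = source(n)
    for b in blocks:
        stream = b(stream)
    return sink(stream)

print(run_flowgraph(4, [gain(2.0)]))  # [0.0, 2.0, 4.0, 6.0]
```

Real SDR frameworks run each block concurrently over buffered sample streams; the composition pattern is the same.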

LuaRadio 0.9.1, Test: Hilbert Transform (MiB/s, more is better)

| Run | Result | SE    | N | Min  | Max  |
|-----|--------|-------|---|------|------|
| r4  | 78.4   | ±0.61 | 6 | 75.8 | 79.7 |
| r3  | 78.2   | ±0.47 | 6 | 76.5 | 79.7 |
| r2b | 78.2   | ±0.41 | 9 | 76.1 | 79.7 |
| r1a | 80.3   | ±0.00 | 3 | 80.3 | 80.3 |
| r1  | 80.3   | ±0.00 | 3 | 80.3 | 80.3 |

oneDNN


oneDNN 2.1.2, Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)

| Run | Result  | SE       | N | Min  | Max  |
|-----|---------|----------|---|------|------|
| r4  | 1.81913 | ±0.00968 | 3 | 1.80 | 1.83 |
| r3  | 1.84339 | ±0.02043 | 3 | 1.81 | 1.88 |
| r2b | 1.81774 | ±0.01382 | 3 | 1.80 | 1.85 |
| r1a | 1.79881 | ±0.00121 | 3 | 1.80 | 1.80 |
| r1  | 1.80046 | ±0.00580 | 3 | 1.79 | 1.81 |

oneDNN-reported minimum latency: r4 1.68 / r3 1.67 / r2b 1.69 / r1a 1.69 / r1 1.68
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

toyBrot Fractal Generator

ToyBrot is a Mandelbrot fractal generator supporting C++ threads/tasks, OpenMP, Intel Threaded Building Blocks (TBB), and other targets. Learn more via the OpenBenchmarking.org test page.
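The workload toyBrot benchmarks, an escape-time Mandelbrot render split across worker threads, can be sketched briefly. This is an illustrative Python stand-in for the idea, not toyBrot's C++ code; image size and iteration cap are arbitrary:

```python
from concurrent.futures import ThreadPoolExecutor

def escape_iterations(c: complex, max_iter: int = 50) -> int:
    """Classic escape-time kernel: iterate z = z*z + c until |z| > 2."""
    z = 0j
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return i
    return max_iter  # point did not escape; treated as inside the set

def mandelbrot_row(y: float, width: int = 8) -> list:
    """One scanline of the image, over real parts in [-2, 1)."""
    return [escape_iterations(complex(-2.0 + 3.0 * x / width, y))
            for x in range(width)]

def render(height: int = 8) -> list:
    """Parallelise per row with a thread pool, as the C++ Threads and
    TBB toyBrot targets parallelise their work units."""
    ys = [-1.2 + 2.4 * j / height for j in range(height)]
    with ThreadPoolExecutor() as pool:
        return list(pool.map(mandelbrot_row, ys))

image = render()
print(len(image), len(image[0]))  # 8 8
```

Because each pixel is independent, the generator scales well with core count, which is why it appears in many-core comparisons like this one.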

toyBrot Fractal Generator 2020-11-18, Implementation: TBB (ms, fewer is better)

| Run | Result | SE     | N  | Min  | Max  |
|-----|--------|--------|----|------|------|
| r4  | 7016   | ±81.70 | 15 | 6704 | 7860 |
| r3  | 7003   | ±69.20 | 15 | 6633 | 7464 |
| r2b | 6984   | ±73.83 | 15 | 6631 | 7714 |
| r1a | 6964   | ±80.68 | 3  | 6817 | 7095 |
| r1  | 6850   | ±59.06 | 15 | 6609 | 7469 |

1. (CXX) g++ options: -O3 -lpthread -lm -lgcc -lgcc_s -lc

oneDNN


oneDNN 2.1.2, Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)

| Run | Result  | SE       | N  | Min  | Max  |
|-----|---------|----------|----|------|------|
| r4  | 1.24116 | ±0.00891 | 15 | 1.18 | 1.33 |
| r3  | 1.24508 | ±0.01066 | 15 | 1.18 | 1.34 |
| r2b | 1.23796 | ±0.01174 | 15 | 1.16 | 1.33 |
| r1a | 1.22278 | ±0.01126 | 15 | 1.16 | 1.30 |
| r1  | 1.21594 | ±0.01080 | 15 | 1.14 | 1.26 |

oneDNN-reported minimum latency: r4 0.85 / r3 0.89 / r2b 0.87 / r1a 0.85 / r1 0.84
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.
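The Liquid-DSP results below report FIR-filter throughput in samples per second for a given filter length and buffer length. The measurement shape can be sketched with a direct-form FIR in pure Python; this is an illustrative stand-in, not Liquid-DSP's SIMD-optimized implementation, and the moving-average taps are an arbitrary choice:

```python
import time

def fir_filter(taps: list, buf: list) -> list:
    """Direct-form FIR: each output sample is the dot product of the
    taps with the most recent len(taps) input samples."""
    out = []
    hist = [0.0] * len(taps)
    for x in buf:
        hist = [x] + hist[:-1]                      # shift the delay line
        out.append(sum(t * h for t, h in zip(taps, hist)))
    return out

taps = [1.0 / 57] * 57      # filter length 57, as in the test profile
buf = [1.0] * 256           # buffer length 256, as in the test profile

start = time.perf_counter()
y = fir_filter(taps, buf)
elapsed = time.perf_counter() - start
print(f"{len(buf) / elapsed:.0f} samples/s")
```

With all-ones input and moving-average taps, the output settles to 1.0 once the delay line fills, which gives a quick sanity check on the kernel.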

Liquid-DSP 2021.01.31, Threads: 32 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better)

| Run | Result     | SE          | N | Min        | Max        |
|-----|------------|-------------|---|------------|------------|
| r4  | 1697500000 | ±6582552.70 | 3 | 1689200000 | 1710500000 |
| r3  | 1704500000 | –           | – | –          | –          |
| r2b | 1699333333 | ±10121648.97| 3 | 1679100000 | 1710000000 |
| r1a | 1736800000 | ±2515949.13 | 3 | 1732300000 | 1741000000 |
| r1  | 1735100000 | ±3951371.07 | 3 | 1730100000 | 1742900000 |

1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

MariaDB

This is a MariaDB MySQL database server benchmark making use of mysqlslap. Learn more via the OpenBenchmarking.org test page.
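mysqlslap's headline metric is queries per second under an emulated client load. The measurement pattern (prime a table, fire point queries in a loop, divide query count by elapsed time) can be sketched against an in-memory SQLite database; SQLite here is only a self-contained stand-in for MariaDB, and this single-client sketch ignores the concurrency dimension the real test varies:

```python
import sqlite3
import time

# Prime an in-memory database with a small table of rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO t (val) VALUES (?)",
                 [(f"row-{i}",) for i in range(1000)])

# Fire point queries in a tight loop and report queries per second.
queries = 5000
start = time.perf_counter()
for i in range(queries):
    conn.execute("SELECT val FROM t WHERE id = ?",
                 (i % 1000 + 1,)).fetchone()
qps = queries / (time.perf_counter() - start)
print(f"{qps:.0f} queries/sec")
```

The MariaDB numbers below additionally reflect network round-trips, locking, and the storage engine, none of which this sketch models.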

MariaDB 10.5.2, Clients: 4 (Queries Per Second, more is better)

| Run | Result | SE     | N | Min     | Max     |
|-----|--------|--------|---|---------|---------|
| r3  | 1580   | ±7.20  | 3 | 1567.40 | 1592.36 |
| r2b | 1614   | ±16.07 | 3 | 1582.28 | 1633.99 |

1. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lsnappy -ldl -lz -lrt

Liquid-DSP


Liquid-DSP 2021.01.31, Threads: 4 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better)

| Run | Result    | SE          | N | Min       | Max       |
|-----|-----------|-------------|---|-----------|-----------|
| r4  | 216773333 | ±1956802.95 | 3 | 212860000 | 218770000 |
| r3  | 215343333 | ±1663583.82 | 3 | 213630000 | 218670000 |
| r2b | 213203333 | ±824809.74  | 3 | 211610000 | 214370000 |
| r1  | 217643333 | ±1090112.12 | 3 | 215470000 | 218880000 |

1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Intel Memory Latency Checker

Intel Memory Latency Checker (MLC) is a binary-only system memory bandwidth and latency benchmark. Learn more via the OpenBenchmarking.org test page.
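MLC's bandwidth figures are bytes moved per second under carefully tuned load/store kernels. A crude analogue of the measurement itself, timing a large in-memory copy and converting to MB/s, can be sketched in Python; this exercises the interpreter's memcpy path rather than the hardware limits MLC's kernels reach, so the absolute number is not comparable:

```python
import time

def copy_bandwidth_mb_s(size_mb: int = 64, passes: int = 4) -> float:
    """Time repeated full copies of a buffer and report MB/s moved."""
    src = bytearray(size_mb * 1024 * 1024)
    start = time.perf_counter()
    for _ in range(passes):
        dst = bytes(src)          # one full read + write of the buffer
    elapsed = time.perf_counter() - start
    assert len(dst) == len(src)
    return passes * size_mb / elapsed

print(f"{copy_bandwidth_mb_s():.0f} MB/s")
```

MLC additionally controls the read/write mix (the 1:1 reads-writes case below), core placement, and prefetcher behavior, none of which a simple copy loop can express.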

Intel Memory Latency Checker, Test: Peak Injection Bandwidth - 1:1 Reads-Writes (MB/s, more is better)

| Run | Result   | SE       | N | Min      | Max      |
|-----|----------|----------|---|----------|----------|
| r5  | 448800.1 | ±847.23  | 3 | 447252.4 | 450171.3 |
| r4  | 446396.0 | ±1601.80 | 3 | 443202.1 | 448208.9 |
| r3  | 449554.1 | ±138.13  | 3 | 449395.8 | 449829.3 |
| r2b | 440454.7 | ±314.54  | 3 | 439978.3 | 441048.7 |
| r2a | 442144.2 | ±212.40  | 3 | 441727.6 | 442424.5 |
| r1a | 442843.2 | ±148.63  | 3 | 442548.7 | 443025.4 |
| r1  | 442422.3 | ±1187.16 | 3 | 440684.8 | 444692.4 |

oneDNN


oneDNN 2.1.2, Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, fewer is better)

| Run | Result | SE    | N | Min    | Max    |
|-----|--------|-------|---|--------|--------|
| r4  | 792.30 | ±2.67 | 3 | 786.96 | 794.99 |
| r3  | 796.69 | ±1.09 | 3 | 794.92 | 798.67 |
| r2b | 808.29 | ±9.76 | 3 | 796.57 | 827.68 |
| r1a | 804.32 | ±4.49 | 3 | 795.37 | 809.32 |
| r1  | 801.41 | ±7.46 | 3 | 790.81 | 815.81 |

oneDNN-reported minimum latency: r4 763.96 / r3 771.28 / r2b 767.97 / r1a 765.37 / r1 767.38
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 12.0, Build System: Ninja (Seconds, fewer is better)

| Run | Result | SE    | N | Min    | Max    |
|-----|--------|-------|---|--------|--------|
| r4  | 146.91 | ±0.56 | 3 | 145.96 | 147.90 |
| r3  | 147.16 | ±0.32 | 3 | 146.55 | 147.64 |
| r2b | 148.48 | ±1.12 | 3 | 146.58 | 150.47 |
| r1a | 145.55 | ±0.75 | 3 | 144.52 | 147.01 |
| r1  | 145.72 | ±0.52 | 3 | 144.80 | 146.60 |

oneDNN


oneDNN 2.1.2, Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)

| Run | Result  | SE       | N  | Min  | Max  |
|-----|---------|----------|----|------|------|
| r4  | 3.64319 | ±0.05617 | 14 | 3.57 | 4.37 |
| r3  | 3.64033 | ±0.05675 | 14 | 3.57 | 4.38 |
| r2b | 3.64232 | ±0.05421 | 14 | 3.57 | 4.35 |
| r1a | 3.57662 | ±0.00795 | 3  | 3.56 | 3.59 |
| r1  | 3.57247 | ±0.00924 | 3  | 3.56 | 3.59 |

oneDNN-reported minimum latency: r4 3.5 / r3 3.47 / r2b 3.51 / r1a 3.5 / r1 3.53
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Liquid-DSP


Liquid-DSP 2021.01.31, Threads: 2 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better)

| Run | Result    | SE         | N | Min       | Max       |
|-----|-----------|------------|---|-----------|-----------|
| r4  | 109430000 | ±132035.35 | 3 | 109170000 | 109600000 |
| r3  | 111510000 | ±430348.70 | 3 | 110650000 | 111970000 |
| r2b | 110173333 | ±907677.13 | 3 | 109050000 | 111970000 |
| r1  | 110713333 | ±729984.78 | 3 | 109770000 | 112150000 |

1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP 2021.01.31, Threads: 128 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better)

| Run | Result     | SE           | N | Min        | Max        |
|-----|------------|--------------|---|------------|------------|
| r4  | 3398800000 | ±16537936.19 | 3 | 3379400000 | 3431700000 |
| r3  | 3411000000 | ±6896617.53  | 3 | 3397500000 | 3420200000 |
| r2b | 3400066667 | ±14312737.14 | 3 | 3374500000 | 3424000000 |
| r1a | 3352733333 | ±38975091.76 | 3 | 3302200000 | 3429400000 |
| r1  | 3415933333 | ±8088331.79  | 3 | 3405700000 | 3431900000 |

1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

KTX-Software toktx

This is a benchmark of The Khronos Group's KTX-Software library and tools. KTX-Software provides the "toktx" tool for creating image textures in the KTX container format. This benchmark times how long it takes to convert a reference PNG sample input to the KTX 2.0 format with various settings. Learn more via the OpenBenchmarking.org test page.

KTX-Software toktx 4.0, Settings: UASTC 3 (Seconds, fewer is better)

| Run | Result | SE     | N  | Min  | Max  |
|-----|--------|--------|----|------|------|
| r4  | 5.562  | ±0.008 | 3  | 5.55 | 5.57 |
| r2b | 5.664  | ±0.053 | 15 | 5.52 | 6.01 |

oneDNN


oneDNN 2.1.2, Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)

| Run | Result  | SE       | N | Min  | Max  |
|-----|---------|----------|---|------|------|
| r4  | 2.10837 | ±0.01801 | 3 | 2.08 | 2.14 |
| r3  | 2.10841 | ±0.01943 | 3 | 2.09 | 2.15 |
| r2b | 2.11712 | ±0.01980 | 3 | 2.10 | 2.16 |
| r1a | 2.08532 | ±0.00168 | 3 | 2.08 | 2.09 |
| r1  | 2.07944 | ±0.00138 | 3 | 2.08 | 2.08 |

oneDNN-reported minimum latency: 2.03 for all runs
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

toyBrot Fractal Generator


toyBrot Fractal Generator 2020-11-18, Implementation: OpenMP (ms, fewer is better)

| Run | Result | SE      | N | Min  | Max  |
|-----|--------|---------|---|------|------|
| r4  | 7429   | ±91.12  | 4 | 7313 | 7700 |
| r3  | 7439   | ±85.45  | 4 | 7314 | 7691 |
| r2b | 7412   | ±101.59 | 3 | 7281 | 7612 |
| r1a | 7308   | ±0.88   | 3 | 7307 | 7310 |
| r1  | 7318   | ±5.13   | 3 | 7308 | 7325 |

1. (CXX) g++ options: -O3 -lpthread -lm -lgcc -lgcc_s -lc

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.1.3, Model: inception-v3 (ms, fewer is better)

| Run | Result | SE    | N  | Min   | Max   |
|-----|--------|-------|----|-------|-------|
| r4  | 52.23  | ±0.75 | 12 | 48.24 | 55.45 |
| r2b | 53.07  | ±1.54 | 3  | 50.05 | 55.13 |

MNN-reported min/max latency: r4 47.47/94.69, r2b 49.59/69.62
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

oneDNN


oneDNN 2.1.2, Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)

| Run | Result  | SE       | N  | Min  | Max  |
|-----|---------|----------|----|------|------|
| r4  | 3.00907 | ±0.02449 | 14 | 2.97 | 3.33 |
| r3  | 3.00929 | ±0.02478 | 14 | 2.97 | 3.33 |
| r2b | 3.00464 | ±0.02287 | 13 | 2.97 | 3.28 |
| r1a | 2.96857 | ±0.00276 | 3  | 2.96 | 2.97 |
| r1  | 2.96135 | ±0.00128 | 3  | 2.96 | 2.96 |

oneDNN-reported minimum latency: 2.84 for all runs
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

MariaDB


MariaDB 10.5.2, Clients: 128 (Queries Per Second, more is better)

| Run | Result | SE    | N | Min    | Max    |
|-----|--------|-------|---|--------|--------|
| r3  | 189    | ±0.35 | 3 | 188.82 | 190.01 |
| r2b | 192    | ±0.65 | 3 | 190.77 | 192.94 |

1. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lsnappy -ldl -lz -lrt

oneDNN


oneDNN 2.1.2, Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)

| Run | Result   | SE        | N | Min  | Max  |
|-----|----------|-----------|---|------|------|
| r4  | 0.602038 | ±0.003648 | 3 | 0.60 | 0.61 |
| r3  | 0.602314 | ±0.004400 | 3 | 0.60 | 0.61 |
| r2b | 0.602122 | ±0.004180 | 3 | 0.60 | 0.61 |
| r1a | 0.595661 | ±0.000780 | 3 | 0.59 | 0.60 |
| r1  | 0.593042 | ±0.001703 | 3 | 0.59 | 0.60 |

oneDNN-reported minimum latency: 0.56 for all runs
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

KTX-Software toktx


KTX-Software toktx 4.0, Settings: Zstd Compression 19 (Seconds, fewer is better)

| Run | Result | SE    | N | Min   | Max   |
|-----|--------|-------|---|-------|-------|
| r4  | 20.08  | ±0.20 | 3 | 19.88 | 20.48 |
| r2b | 19.78  | ±0.22 | 3 | 19.57 | 20.21 |

oneDNN


oneDNN 2.1.2, Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better)

| Run | Result   | SE        | N  | Min  | Max  |
|-----|----------|-----------|----|------|------|
| r4  | 0.876227 | ±0.007461 | 14 | 0.86 | 0.97 |
| r3  | 0.874968 | ±0.007890 | 14 | 0.86 | 0.98 |
| r2b | 0.874080 | ±0.008361 | 14 | 0.86 | 0.98 |
| r1a | 0.863214 | ±0.002055 | 3  | 0.86 | 0.87 |
| r1  | 0.864164 | ±0.002419 | 3  | 0.86 | 0.87 |

oneDNN-reported minimum latency: r4 0.84 / r3 0.84 / r2b 0.83 / r1a 0.84 / r1 0.84
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2, Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)

| Run | Result   | SE        | N  | Min  | Max  |
|-----|----------|-----------|----|------|------|
| r4  | 0.215085 | ±0.001544 | 12 | 0.21 | 0.23 |
| r3  | 0.216586 | ±0.002019 | 7  | 0.21 | 0.23 |
| r2b | 0.216806 | ±0.001893 | 8  | 0.21 | 0.23 |
| r1a | 0.213643 | ±0.000781 | 3  | 0.21 | 0.21 |
| r1  | 0.215115 | ±0.000867 | 3  | 0.21 | 0.22 |

oneDNN-reported minimum latency: 0.19 for all runs
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

srsLTE

srsLTE is an open-source LTE software radio suite created by Software Radio Systems (SRS). srsLTE can be used for building your own software-defined radio (SDR) LTE mobile network. Learn more via the OpenBenchmarking.org test page.

srsLTE 20.10.1, Test: PHY_DL_Test (eNb Mb/s, more is better)

| Run | Result | SE    | N | Min   | Max   |
|-----|--------|-------|---|-------|-------|
| r4  | 183.7  | ±0.58 | 3 | 182.9 | 184.8 |
| r3  | 181.6  | ±2.42 | 3 | 176.8 | 184.2 |
| r2b | 181.6  | ±1.23 | 3 | 179.2 | 183.2 |
| r1a | 184.2  | ±0.36 | 3 | 183.5 | 184.7 |
| r1  | 183.4  | ±1.15 | 3 | 181.2 | 185.0 |

1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -rdynamic -lpthread -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lm -lfftw3f

Botan

Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.
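The Botan figures below are MiB processed per second for each primitive. The measurement pattern (fill a buffer, run the primitive over it repeatedly, divide bytes by elapsed time) can be sketched with SHA-256 from Python's hashlib as a stand-in, since the standard library ships no AES or ChaCha20 implementation; this is illustrative only and measures a different primitive than the tables below:

```python
import hashlib
import time

def throughput_mib_s(data_mib: int = 16) -> float:
    """Process data_mib one-MiB buffers through the primitive and
    report MiB per second, the same metric Botan's speed test uses."""
    buf = b"\x00" * (1024 * 1024)
    start = time.perf_counter()
    for _ in range(data_mib):
        hashlib.sha256(buf).digest()
    return data_mib / (time.perf_counter() - start)

print(f"SHA-256 (stand-in): {throughput_mib_s():.0f} MiB/s")
```

Botan's numbers for AES-256 benefit heavily from AES-NI, which is why the cipher throughputs below sit far above software-only primitives like KASUMI or CAST-256.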

Botan 2.17.3, Test: AES-256 (MiB/s, more is better)

| Run | Result  | SE     | N | Min     | Max     |
|-----|---------|--------|---|---------|---------|
| r4  | 5612.00 | ±51.03 | 3 | 5509.94 | 5663.78 |
| r3  | 5593.37 | ±42.23 | 3 | 5514.87 | 5659.62 |
| r2b | 5606.97 | ±55.60 | 3 | 5495.80 | 5665.15 |
| r1a | 5670.81 | ±0.28  | 3 | 5670.53 | 5671.36 |
| r1  | 5669.70 | ±0.92  | 3 | 5667.87 | 5670.65 |

1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

oneDNN


oneDNN 2.1.2, Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, fewer is better)

| Run | Result   | SE        | N | Min  | Max  |
|-----|----------|-----------|---|------|------|
| r4  | 0.242450 | ±0.002245 | 7 | 0.24 | 0.26 |
| r3  | 0.243308 | ±0.002507 | 5 | 0.24 | 0.25 |
| r2b | 0.243026 | ±0.003187 | 3 | 0.24 | 0.25 |
| r1a | 0.240122 | ±0.000662 | 3 | 0.24 | 0.24 |
| r1  | 0.239989 | ±0.000856 | 3 | 0.24 | 0.24 |

oneDNN-reported minimum latency: r4 0.22 / r3 0.22 / r2b 0.22 / r1a 0.23 / r1 0.22
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Botan


Botan 2.17.3, Test: KASUMI (MiB/s, more is better)

| Run | Result | SE    | N | Min   | Max   |
|-----|--------|-------|---|-------|-------|
| r4  | 76.40  | ±0.87 | 3 | 74.67 | 77.41 |
| r3  | 76.41  | ±0.77 | 3 | 74.87 | 77.21 |
| r2b | 76.29  | ±1.01 | 3 | 74.26 | 77.38 |
| r1a | 77.31  | ±0.04 | 3 | 77.25 | 77.39 |
| r1  | 77.29  | ±0.02 | 3 | 77.25 | 77.31 |

1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.13, Settings: UASTC Level 2 (Seconds, fewer is better)

| Run | Result | SE    | N | Min   | Max   |
|-----|--------|-------|---|-------|-------|
| r4  | 14.16  | ±0.15 | 3 | 13.85 | 14.34 |
| r2b | 13.98  | ±0.18 | 3 | 13.79 | 14.34 |

1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

Botan


Botan 2.17.3, Test: CAST-256 (MiB/s, more is better)

| Run | Result | SE    | N | Min    | Max    |
|-----|--------|-------|---|--------|--------|
| r4  | 114.65 | ±1.17 | 3 | 112.30 | 115.83 |
| r3  | 114.52 | ±1.33 | 3 | 111.86 | 115.85 |
| r2b | 114.66 | ±1.15 | 3 | 112.36 | 115.83 |
| r1a | 115.97 | ±0.01 | 3 | 115.96 | 116.00 |
| r1  | 115.97 | ±0.01 | 3 | 115.96 | 115.99 |

1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

Botan 2.17.3, Test: ChaCha20Poly1305 (MiB/s, more is better)

| Run | Result | SE    | N | Min    | Max    |
|-----|--------|-------|---|--------|--------|
| r4  | 619.64 | ±2.98 | 3 | 613.67 | 622.78 |
| r3  | 616.50 | ±3.19 | 3 | 613.05 | 622.88 |
| r2b | 615.81 | ±3.48 | 3 | 612.18 | 622.77 |
| r1a | 623.20 | ±0.17 | 3 | 622.89 | 623.46 |
| r1  | 623.49 | ±0.03 | 3 | 623.44 | 623.53 |

1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31, Threads: 64 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better)
r4:  3245666667 (SE +/- 12876378.03, N = 3; Min 3220800000 / Max 3263900000)
r3:  3232700000 (SE +/- 14893734.70, N = 3; Min 3203800000 / Max 3253400000)
r2b: 3227433333 (SE +/- 17049079.48, N = 3; Min 3198700000 / Max 3257700000)
r1a: 3263700000 (SE +/- 2150193.79, N = 3; Min 3260800000 / Max 3267900000)
r1:  3267133333 (SE +/- 5206513.02, N = 3; Min 3261100000 / Max 3277500000)
1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Botan

Botan 2.17.3, Test: ChaCha20Poly1305 - Decrypt (MiB/s, More Is Better)
r4:  615.98 (SE +/- 2.81, N = 3; Min 610.4 / Max 619.37)
r3:  612.15 (SE +/- 3.74, N = 3; Min 607.38 / Max 619.53)
r2b: 612.44 (SE +/- 3.49, N = 3; Min 608.87 / Max 619.41)
r1a: 619.54 (SE +/- 0.57, N = 3; Min 618.44 / Max 620.33)
r1:  619.46 (SE +/- 0.40, N = 3; Min 619.03 / Max 620.26)
1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

SecureMark

SecureMark is an objective, standardized benchmarking framework developed by EEMBC for measuring the efficiency of cryptographic processing solutions. SecureMark-TLS benchmarks Transport Layer Security performance with a focus on IoT/edge computing. Learn more via the OpenBenchmarking.org test page.

SecureMark 1.0.4, Benchmark: SecureMark-TLS (marks, More Is Better)
r4:  222747 (SE +/- 2769.20, N = 3; Min 217215.56 / Max 225749.55)
r3:  225291 (SE +/- 267.95, N = 3; Min 224806.05 / Max 225730.83)
r2b: 225343 (SE +/- 84.15, N = 3; Min 225255.45 / Max 225511.25)
r1a: 225366 (SE +/- 236.12, N = 3; Min 224893.89 / Max 225612.48)
r1:  225412 (SE +/- 234.37, N = 3; Min 225033.83 / Max 225841.06)
1. (CC) gcc options: -pedantic -O3
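SecureMark-TLS scores aggregate the cost of primitives used in TLS handshakes and record protection, such as SHA-256 hashing and HMAC. A rough stdlib-only illustration of timing one such primitive (this is not the EEMBC harness, and the key/payload values are made up):

```python
import hashlib
import hmac
import time

def time_sha256(payload: bytes, iterations: int = 10_000) -> float:
    """Return approximate SHA-256 throughput in MiB/s over `payload`."""
    start = time.perf_counter()
    for _ in range(iterations):
        hashlib.sha256(payload).digest()
    elapsed = time.perf_counter() - start
    return (len(payload) * iterations) / (1024 * 1024) / elapsed

msg = b"\x00" * 1024                     # hypothetical 1 KiB record
# HMAC-SHA256 as used for record integrity in older TLS cipher suites.
tag = hmac.new(b"hypothetical-key", msg, hashlib.sha256).hexdigest()
print(f"SHA-256: {time_sha256(msg):.1f} MiB/s, HMAC tag {tag[:16]}...")
```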

Botan

Botan 2.17.3, Test: Blowfish (MiB/s, More Is Better)
r4:  359.57 (SE +/- 3.51, N = 3; Min 352.55 / Max 363.26)
r3:  359.45 (SE +/- 3.73, N = 3; Min 352 / Max 363.27)
r2b: 362.93 (SE +/- 0.11, N = 3; Min 362.8 / Max 363.15)
r1a: 363.62 (SE +/- 0.05, N = 3; Min 363.53 / Max 363.69)
r1:  363.04 (SE +/- 0.56, N = 3; Min 361.92 / Max 363.61)
1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

Google Draco

Draco is a library developed by Google for compressing/decompressing 3D geometric meshes and point clouds. This test profile uses some Artec3D PLY models as the sample 3D model input formats for Draco compression/decompression. Learn more via the OpenBenchmarking.org test page.

Google Draco 1.4.1, Model: Church Facade (ms, Fewer Is Better)
r4:  7082 (SE +/- 3.33, N = 3; Min 7075 / Max 7085)
r2b: 7001 (SE +/- 20.01, N = 3; Min 6980 / Max 7041)
1. (CXX) g++ options: -O3

Botan

Botan 2.17.3, Test: Twofish (MiB/s, More Is Better)
r4:  286.00 (SE +/- 2.83, N = 3; Min 280.34 / Max 288.85)
r3:  286.18 (SE +/- 2.66, N = 3; Min 280.85 / Max 288.95)
r2b: 288.56 (SE +/- 0.11, N = 3; Min 288.41 / Max 288.78)
r1a: 288.85 (SE +/- 0.14, N = 3; Min 288.58 / Max 289.02)
r1:  289.13 (SE +/- 0.14, N = 3; Min 288.88 / Max 289.35)
1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn benchmarking functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2, Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
r4:  0.340243 (SE +/- 0.004121, N = 3; Min 0.33 / Max 0.35; MIN: 0.3)
r3:  0.341955 (SE +/- 0.003372, N = 6; Min 0.34 / Max 0.36; MIN: 0.31)
r2b: 0.341893 (SE +/- 0.003448, N = 5; Min 0.33 / Max 0.35; MIN: 0.3)
r1a: 0.341663 (SE +/- 0.002562, N = 3; Min 0.34 / Max 0.35; MIN: 0.31)
r1:  0.338327 (SE +/- 0.000853, N = 3; Min 0.34 / Max 0.34; MIN: 0.3)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Liquid-DSP

Liquid-DSP 2021.01.31, Threads: 160 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better)
r4:  3140266667 (SE +/- 16411005.79, N = 3; Min 3107500000 / Max 3158300000)
r3:  3143300000 (SE +/- 14901789.60, N = 3; Min 3113500000 / Max 3158600000)
r2b: 3131866667 (SE +/- 14685858.66, N = 3; Min 3113500000 / Max 3160900000)
r1a: 3162066667 (SE +/- 2062630.47, N = 3; Min 3158000000 / Max 3164700000)
r1:  3144800000 (SE +/- 17047384.94, N = 3; Min 3110800000 / Max 3164000000)
1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

oneDNN

oneDNN 2.1.2, Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
r4:  446.54 (SE +/- 1.10, N = 3; Min 444.35 / Max 447.76; MIN: 429.71)
r3:  450.65 (SE +/- 2.40, N = 3; Min 446.45 / Max 454.76; MIN: 432.96)
r2b: 446.39 (SE +/- 0.78, N = 3; Min 445.04 / Max 447.76; MIN: 432.04)
r1a: 447.31 (SE +/- 0.90, N = 3; Min 445.85 / Max 448.94; MIN: 432.33)
r1:  447.97 (SE +/- 0.58, N = 3; Min 447.38 / Max 449.14; MIN: 433.22)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2, Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
r4:  1.24222 (SE +/- 0.01282, N = 3; Min 1.23 / Max 1.27; MIN: 1.19)
r3:  1.24176 (SE +/- 0.01211, N = 3; Min 1.22 / Max 1.26; MIN: 1.18)
r2b: 1.25313 (SE +/- 0.00964, N = 3; Min 1.24 / Max 1.27; MIN: 1.2)
r1a: 1.25267 (SE +/- 0.01592, N = 15; Min 1.23 / Max 1.47; MIN: 1.19)
r1:  1.24809 (SE +/- 0.00180, N = 3; Min 1.24 / Max 1.25; MIN: 1.2)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2, Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
r4:  3.54783 (SE +/- 0.00650, N = 3; Min 3.54 / Max 3.56; MIN: 3.37)
r3:  3.56224 (SE +/- 0.01280, N = 3; Min 3.54 / Max 3.59; MIN: 3.39)
r2b: 3.53121 (SE +/- 0.00854, N = 3; Min 3.52 / Max 3.55; MIN: 3.37)
r1a: 3.54367 (SE +/- 0.00732, N = 3; Min 3.53 / Max 3.56; MIN: 3.38)
r1:  3.53026 (SE +/- 0.00193, N = 3; Min 3.53 / Max 3.53; MIN: 3.38)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Helsing

Helsing is an open-source POSIX vampire number generator. This test profile measures the time it takes to generate vampire numbers across varying digit ranges. Learn more via the OpenBenchmarking.org test page.

Helsing 1.0-beta, Digit Range: 14 digit (Seconds, Fewer Is Better)
r4: 78.54 | r3: 78.08 | r2b: 78.33 | r1a: 78.16 | r1: 77.87
1. (CC) gcc options: -O2 -pthread -lcrypto
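For context, a vampire number is a 2n-digit number that factors into two n-digit "fangs" whose combined digits are a permutation of its own, with at most one fang ending in zero; for example, 1260 = 21 x 60. A small naive checker (nothing like Helsing's optimized C implementation):

```python
def is_vampire(v: int) -> bool:
    """Return True if v is a vampire number (e.g. 1260 = 21 x 60)."""
    s = sorted(str(v))
    digits = len(str(v))
    if digits % 2:          # vampire numbers have an even digit count
        return False
    half = digits // 2
    lo, hi = 10 ** (half - 1), 10 ** half
    for a in range(lo, int(v ** 0.5) + 1):
        if v % a:
            continue
        b = v // a
        # Both fangs must have `half` digits; at most one may end in zero.
        if lo <= b < hi and not (a % 10 == 0 and b % 10 == 0):
            if sorted(str(a) + str(b)) == s:   # digit permutation check
                return True
    return False
```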

oneDNN

oneDNN 2.1.2, Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
r4:  448.91 (SE +/- 3.51, N = 3; Min 445.15 / Max 455.92; MIN: 431.33)
r3:  447.14 (SE +/- 1.24, N = 3; Min 445.31 / Max 449.5; MIN: 432.42)
r2b: 447.29 (SE +/- 0.65, N = 3; Min 446.24 / Max 448.47; MIN: 433.06)
r1a: 446.94 (SE +/- 1.79, N = 3; Min 443.6 / Max 449.74; MIN: 430.47)
r1:  445.14 (SE +/- 0.58, N = 3; Min 444.55 / Max 446.3; MIN: 431.52)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL, NVIDIA OptiX, and NVIDIA CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.92, Blend File: Fishy Cat - Compute: CPU-Only (Seconds, Fewer Is Better)
r4:  46.73 (SE +/- 0.25, N = 3; Min 46.26 / Max 47.1)
r2b: 46.38 (SE +/- 0.15, N = 3; Min 46.18 / Max 46.68)

Google Draco

Google Draco 1.4.1, Model: Lion (ms, Fewer Is Better)
r4:  6170 (SE +/- 21.15, N = 3; Min 6132 / Max 6205)
r2b: 6126 (SE +/- 25.21, N = 3; Min 6098 / Max 6176)
1. (CXX) g++ options: -O3

Blender

Blender 2.92, Blend File: Classroom - Compute: CPU-Only (Seconds, Fewer Is Better)
r4:  72.29 (SE +/- 0.13, N = 3; Min 72.1 / Max 72.55)
r2b: 71.78 (SE +/- 0.08, N = 3; Min 71.69 / Max 71.94)

Intel Memory Latency Checker

Intel Memory Latency Checker (MLC) is a binary-only system memory bandwidth and latency benchmark. Learn more via the OpenBenchmarking.org test page.

Intel Memory Latency Checker, Test: Max Bandwidth - 1:1 Reads-Writes (MB/s, More Is Better)
r5:  440205.22 (SE +/- 1051.98, N = 3; Min 438546.05 / Max 442155.22)
r4:  440315.41 (SE +/- 2322.32, N = 3; Min 437170.92 / Max 444848)
r3:  440939.22 (SE +/- 276.68, N = 3; Min 440649.44 / Max 441492.37)
r2b: 441732.77 (SE +/- 3117.58, N = 3; Min 438582.49 / Max 447967.82)
r2a: 442460.05 (SE +/- 1844.14, N = 3; Min 439839.35 / Max 446017.96)
r1a: 441408.09 (SE +/- 1093.30, N = 3; Min 439871.94 / Max 443523.79)
r1:  439496.74 (SE +/- 821.19, N = 3; Min 438156.11 / Max 440988.69)

Intel Memory Latency Checker, Test: Peak Injection Bandwidth - 2:1 Reads-Writes (MB/s, More Is Better)
r5:  458830.6 (SE +/- 12.06, N = 3; Min 458808.5 / Max 458850)
r4:  458941.9 (SE +/- 36.24, N = 3; Min 458890.3 / Max 459011.8)
r3:  457190.5 (SE +/- 73.04, N = 3; Min 457044.4 / Max 457264.9)
r2b: 459309.8 (SE +/- 64.32, N = 3; Min 459181.6 / Max 459383.4)
r2a: 456408.6 (SE +/- 115.55, N = 3; Min 456227.2 / Max 456623.3)
r1a: 456260.3 (SE +/- 130.28, N = 3; Min 456008.8 / Max 456445)
r1:  459038.6 (SE +/- 274.15, N = 3; Min 458513.2 / Max 459437.1)

Intel Memory Latency Checker, Test: Max Bandwidth - 2:1 Reads-Writes (MB/s, More Is Better)
r5:  458756.46 (SE +/- 53.22, N = 3; Min 458664.93 / Max 458849.29)
r4:  458790.96 (SE +/- 8.60, N = 3; Min 458781.09 / Max 458808.09)
r3:  457141.24 (SE +/- 89.89, N = 3; Min 456963.3 / Max 457252.41)
r2b: 459226.53 (SE +/- 51.02, N = 3; Min 459165.58 / Max 459327.88)
r2a: 456545.88 (SE +/- 54.98, N = 3; Min 456488.94 / Max 456655.82)
r1a: 456629.89 (SE +/- 129.26, N = 3; Min 456468.59 / Max 456885.51)
r1:  459455.38 (SE +/- 33.49, N = 3; Min 459412.04 / Max 459521.28)

srsLTE

srsLTE is an open-source LTE software radio suite created by Software Radio Systems (SRS). srsLTE can be used for building your own software-defined radio (SDR) based LTE mobile network. Learn more via the OpenBenchmarking.org test page.

srsLTE 20.10.1, Test: OFDM_Test (Samples / Second, More Is Better)
r4:  120666667 (SE +/- 233333.33, N = 3; Min 120300000 / Max 121100000)
r3:  120833333 (SE +/- 600925.21, N = 3; Min 120000000 / Max 122000000)
r2b: 120733333 (SE +/- 366666.67, N = 3; Min 120000000 / Max 121100000)
r1a: 120133333 (SE +/- 240370.09, N = 3; Min 119800000 / Max 120600000)
r1:  120300000 (SE +/- 611010.09, N = 3; Min 119100000 / Max 121100000)
1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -mavx512f -mavx512cd -mavx512bw -mavx512dq -rdynamic -lpthread -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lm -lfftw3f
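The OFDM_Test workload stresses the (I)FFT at the heart of LTE's OFDM modulation: data symbols are placed on subcarriers, transformed to the time domain, and a cyclic prefix is prepended. A toy stdlib-only round trip, using a naive O(N^2) DFT rather than anything like srsLTE's optimized FFTW path:

```python
import cmath

def idft(freq):
    """Inverse DFT: subcarrier symbols -> time-domain samples."""
    n = len(freq)
    return [sum(x * cmath.exp(2j * cmath.pi * k * t / n) for k, x in enumerate(freq)) / n
            for t in range(n)]

def dft(samples):
    """Forward DFT: time-domain samples -> subcarrier symbols."""
    n = len(samples)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * t / n) for t, x in enumerate(samples))
            for k in range(n)]

def ofdm_modulate(symbols, cp_len=4):
    time_samples = idft(symbols)
    return time_samples[-cp_len:] + time_samples  # prepend cyclic prefix

def ofdm_demodulate(samples, n, cp_len=4):
    return dft(samples[cp_len:cp_len + n])        # drop CP, transform back

tx = [1+0j, -1+0j, 1+0j, 1+0j, -1+0j, -1+0j, 1+0j, -1+0j]  # BPSK on 8 subcarriers
rx = ofdm_demodulate(ofdm_modulate(tx), len(tx))
assert all(abs(a - b) < 1e-9 for a, b in zip(tx, rx))       # lossless round trip
```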

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.4, Preset: Medium (Seconds, Fewer Is Better)
r4:  7.1472 (SE +/- 0.0290, N = 3; Min 7.09 / Max 7.19)
r2b: 7.1887 (SE +/- 0.0906, N = 15; Min 6.6 / Max 7.54)
1. (CXX) g++ options: -O3 -flto -pthread

Intel Memory Latency Checker

Intel Memory Latency Checker, Test: Peak Injection Bandwidth - All Reads (MB/s, More Is Better)
r5:  357722.7 (SE +/- 23.85, N = 3; Min 357695 / Max 357770.2)
r4:  358110.5 (SE +/- 26.62, N = 3; Min 358080.2 / Max 358163.6)
r3:  358463.7 (SE +/- 24.95, N = 3; Min 358434.2 / Max 358513.3)
r2b: 357742.9 (SE +/- 14.54, N = 3; Min 357725.9 / Max 357771.8)
r2a: 358269.7 (SE +/- 37.47, N = 3; Min 358209 / Max 358338.1)
r1a: 358385.5 (SE +/- 14.58, N = 3; Min 358364.2 / Max 358413.4)
r1:  356476.2 (SE +/- 709.43, N = 3; Min 355081.8 / Max 357400.6)

oneDNN

oneDNN 2.1.2, Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
r4:  447.96 (SE +/- 2.63, N = 3; Min 443.52 / Max 452.63; MIN: 429.99)
r3:  446.92 (SE +/- 0.04, N = 3; Min 446.88 / Max 446.99; MIN: 433.64)
r2b: 447.70 (SE +/- 1.13, N = 3; Min 445.83 / Max 449.74; MIN: 433.04)
r1a: 447.44 (SE +/- 2.18, N = 3; Min 444.15 / Max 451.57; MIN: 429.4)
r1:  445.52 (SE +/- 0.85, N = 3; Min 444.57 / Max 447.22; MIN: 431.18)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Univeral assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.13, Settings: ETC1S (Seconds, Fewer Is Better)
r4:  34.42 (SE +/- 0.42, N = 3; Min 33.96 / Max 35.26)
r2b: 34.24 (SE +/- 0.21, N = 3; Min 34.01 / Max 34.65)
1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

MariaDB

This is a MariaDB MySQL database server benchmark making use of mysqlslap. Learn more via the OpenBenchmarking.org test page.

MariaDB 10.5.2, Clients: 8 (Queries Per Second, More Is Better)
r3:  1420 (SE +/- 3.56, N = 3; Min 1414.43 / Max 1426.53)
r2b: 1413 (SE +/- 10.97, N = 3; Min 1398.6 / Max 1434.72)
1. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lsnappy -ldl -lz -lrt

Blender

Blender 2.92, Blend File: BMW27 - Compute: CPU-Only (Seconds, Fewer Is Better)
r4:  29.69 (SE +/- 0.32, N = 3; Min 29.06 / Max 30.08)
r2b: 29.56 (SE +/- 0.08, N = 3; Min 29.41 / Max 29.66)

Intel Memory Latency Checker

Intel Memory Latency Checker, Test: Peak Injection Bandwidth - 3:1 Reads-Writes (MB/s, More Is Better)
r5:  425508.1 (SE +/- 23.30, N = 3; Min 425461.6 / Max 425533.9)
r4:  425822.1 (SE +/- 23.30, N = 3; Min 425783 / Max 425863.6)
r3:  424904.5 (SE +/- 88.34, N = 3; Min 424736.4 / Max 425035.7)
r2b: 425925.6 (SE +/- 25.04, N = 3; Min 425885.8 / Max 425971.8)
r2a: 424077.3 (SE +/- 236.99, N = 3; Min 423613.6 / Max 424394.2)
r1a: 424096.6 (SE +/- 94.95, N = 3; Min 423974.4 / Max 424283.6)
r1:  425933.7 (SE +/- 163.24, N = 3; Min 425637.5 / Max 426200.7)

oneDNN

oneDNN 2.1.2, Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
r4:  792.05 (SE +/- 1.96, N = 3; Min 789.46 / Max 795.9; MIN: 765.9)
r3:  793.08 (SE +/- 2.18, N = 3; Min 788.83 / Max 796.04; MIN: 768.2)
r2b: 789.84 (SE +/- 1.48, N = 3; Min 787.56 / Max 792.61; MIN: 767.03)
r1a: 791.93 (SE +/- 3.65, N = 3; Min 787 / Max 799.06; MIN: 765.01)
r1:  792.83 (SE +/- 2.07, N = 3; Min 790.29 / Max 796.93; MIN: 763.76)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Intel Memory Latency Checker

Intel Memory Latency Checker, Test: Max Bandwidth - 3:1 Reads-Writes (MB/s, More Is Better)
r5:  425467.51 (SE +/- 133.64, N = 3; Min 425332.11 / Max 425734.77)
r4:  425848.09 (SE +/- 67.02, N = 3; Min 425752.89 / Max 425977.41)
r3:  424925.84 (SE +/- 109.66, N = 3; Min 424737.3 / Max 425117.15)
r2b: 425997.22 (SE +/- 71.38, N = 3; Min 425881.17 / Max 426127.26)
r2a: 424818.83 (SE +/- 392.90, N = 3; Min 424373.25 / Max 425602.16)
r1a: 424612.62 (SE +/- 465.24, N = 3; Min 424032.6 / Max 425532.74)
r1:  426148.96 (SE +/- 105.41, N = 3; Min 426030.23 / Max 426359.2)

Sysbench

This is a benchmark of Sysbench with the built-in CPU and memory sub-tests. Sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. Learn more via the OpenBenchmarking.org test page.

Sysbench 1.0.20, Test: RAM / Memory (MiB/sec, More Is Better)
r4:  12553.44 (SE +/- 118.72, N = 15; Min 11372.15 / Max 13371.45)
r2b: 12510.56 (SE +/- 125.16, N = 15; Min 11672.1 / Max 13464.17)
1. (CC) gcc options: -pthread -O2 -funroll-loops -rdynamic -ldl -laio -lm
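Sysbench's memory test streams block reads/writes over buffers from multiple threads and reports aggregate throughput. A crude Python analogue using `bytearray` slice writes (interpreter overhead and the GIL dominate, so the absolute MiB/s is not comparable to sysbench's C implementation):

```python
import threading
import time

def writer(buf: bytearray, chunk: bytes, reps: int) -> None:
    """Sequentially overwrite `buf` with `chunk`-sized blocks, `reps` times."""
    n = len(chunk)
    for _ in range(reps):
        for off in range(0, len(buf) - n + 1, n):
            buf[off:off + n] = chunk  # sequential block write

def run(threads: int = 4, buf_mib: int = 8, reps: int = 4) -> float:
    """Return aggregate write throughput in MiB/s across worker threads."""
    bufs = [bytearray(buf_mib * 1024 * 1024) for _ in range(threads)]
    chunk = b"\xab" * 4096
    workers = [threading.Thread(target=writer, args=(b, chunk, reps)) for b in bufs]
    start = time.perf_counter()
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    elapsed = time.perf_counter() - start
    return threads * buf_mib * reps / elapsed  # MiB written per second

print(f"{run():.0f} MiB/s (illustrative only)")
```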

Intel Memory Latency Checker

Intel Memory Latency Checker, Test: Max Bandwidth - All Reads (MB/s, More Is Better)
r5:  357550.82 (SE +/- 46.23, N = 3; Min 357493.95 / Max 357642.39)
r4:  357925.98 (SE +/- 83.70, N = 3; Min 357816.22 / Max 358090.32)
r3:  358268.00 (SE +/- 59.61, N = 3; Min 358152.59 / Max 358351.58)
r2b: 357774.43 (SE +/- 83.63, N = 3; Min 357667.94 / Max 357939.38)
r2a: 358456.09 (SE +/- 107.35, N = 3; Min 358285.28 / Max 358654.14)
r1a: 358364.56 (SE +/- 142.76, N = 3; Min 358214.22 / Max 358649.94)
r1:  357285.28 (SE +/- 67.01, N = 3; Min 357217.71 / Max 357419.31)

Botan

Botan 2.17.3, Test: CAST-256 - Decrypt (MiB/s, More Is Better)
r4:  116.07 (SE +/- 0.01, N = 3; Min 116.05 / Max 116.09)
r3:  115.72 (SE +/- 0.35, N = 3; Min 115.02 / Max 116.09)
r2b: 116.08 (SE +/- 0.01, N = 3; Min 116.07 / Max 116.09)
r1a: 116.07 (SE +/- 0.01, N = 3; Min 116.05 / Max 116.1)
r1:  116.07 (SE +/- 0.01, N = 3; Min 116.06 / Max 116.09)
1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

MariaDB

MariaDB 10.5.2, Clients: 64 (Queries Per Second, More Is Better)
r3:  404 (SE +/- 0.16, N = 3; Min 403.39 / Max 403.88)
r2b: 403 (SE +/- 0.62, N = 3; Min 401.28 / Max 403.23)
1. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lsnappy -ldl -lz -lrt

Botan

Botan 2.17.3, Test: AES-256 - Decrypt (MiB/s, More Is Better)
r4:  5650.14 (SE +/- 12.66, N = 3; Min 5624.84 / Max 5663.53)
r3:  5662.34 (SE +/- 1.10, N = 3; Min 5660.15 / Max 5663.66)
r2b: 5662.76 (SE +/- 0.94, N = 3; Min 5660.9 / Max 5663.88)
r1a: 5663.61 (SE +/- 0.12, N = 3; Min 5663.43 / Max 5663.83)
r1:  5663.06 (SE +/- 1.20, N = 3; Min 5660.81 / Max 5664.89)
1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

MariaDB

OpenBenchmarking.orgQueries Per Second, More Is BetterMariaDB 10.5.2Clients: 32r3r2b2004006008001000SE +/- 1.83, N = 3SE +/- 0.26, N = 38878851. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lsnappy -ldl -lz -lrt
OpenBenchmarking.orgQueries Per Second, More Is BetterMariaDB 10.5.2Clients: 32r3r2b160320480640800Min: 883.39 / Avg: 886.79 / Max: 889.68Min: 884.17 / Avg: 884.69 / Max: 884.961. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lsnappy -ldl -lz -lrt

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBasis Universal 1.13Settings: UASTC Level 0r4r2b3691215SE +/- 0.08, N = 3SE +/- 0.08, N = 1511.2311.251. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterBasis Universal 1.13Settings: UASTC Level 0r4r2b3691215Min: 11.09 / Avg: 11.23 / Max: 11.36Min: 10.87 / Avg: 11.25 / Max: 11.851. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterASTC Encoder 2.4Preset: Thoroughr4r2b3691215SE +/- 0.0879, N = 7SE +/- 0.0796, N = 89.30919.29071. (CXX) g++ options: -O3 -flto -pthread
OpenBenchmarking.orgSeconds, Fewer Is BetterASTC Encoder 2.4Preset: Thoroughr4r2b3691215Min: 9.19 / Avg: 9.31 / Max: 9.83Min: 9.19 / Avg: 9.29 / Max: 9.841. (CXX) g++ options: -O3 -flto -pthread

KTX-Software toktx

This is a benchmark of The Khronos Group's KTX-Software library and tools. KTX-Software provides "toktx" for converting/creating textures in the KTX container format. This benchmark times how long it takes to convert a reference PNG sample input to the KTX 2.0 format with various settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterKTX-Software toktx 4.0Settings: UASTC 4 + Zstd Compression 19r4r2b1326395265SE +/- 0.74, N = 3SE +/- 0.68, N = 456.7756.66
OpenBenchmarking.orgSeconds, Fewer Is BetterKTX-Software toktx 4.0Settings: UASTC 4 + Zstd Compression 19r4r2b1122334455Min: 55.3 / Avg: 56.77 / Max: 57.65Min: 55.31 / Avg: 56.66 / Max: 58.06

OpenBenchmarking.orgSeconds, Fewer Is BetterKTX-Software toktx 4.0Settings: UASTC 3 + Zstd Compression 19r4r2b3691215SE +/- 0.11, N = 5SE +/- 0.06, N = 310.0310.01
OpenBenchmarking.orgSeconds, Fewer Is BetterKTX-Software toktx 4.0Settings: UASTC 3 + Zstd Compression 19r4r2b3691215Min: 9.85 / Avg: 10.03 / Max: 10.47Min: 9.91 / Avg: 10.01 / Max: 10.13

Intel Memory Latency Checker

Intel Memory Latency Checker (MLC) is a binary-only system memory bandwidth and latency benchmark. Learn more via the OpenBenchmarking.org test page.
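The Stream-Triad-like tests below measure sustained bandwidth with a triad kernel (a[i] = b[i] + scalar * c[i]). MLC itself is binary-only, but the access pattern it mimics can be sketched as follows; the array size, timing method, and bytes-moved accounting here are the usual STREAM conventions, not MLC's internals:

```python
import time

def triad(n=1_000_000, scalar=3.0):
    """STREAM-Triad-style kernel: a[i] = b[i] + scalar * c[i]."""
    b = [1.0] * n
    c = [2.0] * n
    start = time.perf_counter()
    a = [b[i] + scalar * c[i] for i in range(n)]
    elapsed = time.perf_counter() - start
    # Triad touches three 8-byte elements per iteration:
    # one load each from b and c, plus one store to a.
    bytes_moved = 3 * 8 * n
    return a, bytes_moved / elapsed / 1e6  # MB/s

a, mb_s = triad()
print(f"~{mb_s:.0f} MB/s (pure-Python estimate, far below MLC's figures)")
```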

OpenBenchmarking.orgMB/s, More Is BetterIntel Memory Latency CheckerTest: Max Bandwidth - Stream-Triad Liker5r4r3r2br2ar1ar170K140K210K280K350KSE +/- 22.58, N = 3SE +/- 7.71, N = 3SE +/- 50.80, N = 3SE +/- 50.20, N = 3SE +/- 53.08, N = 3SE +/- 11.61, N = 3SE +/- 25.05, N = 3325312.30325314.62325218.50325409.99325260.41325184.58325766.94
OpenBenchmarking.orgMB/s, More Is BetterIntel Memory Latency CheckerTest: Max Bandwidth - Stream-Triad Liker5r4r3r2br2ar1ar160K120K180K240K300KMin: 325268.79 / Avg: 325312.3 / Max: 325344.54Min: 325301.67 / Avg: 325314.62 / Max: 325328.34Min: 325137.42 / Avg: 325218.5 / Max: 325312.05Min: 325313.79 / Avg: 325409.99 / Max: 325482.96Min: 325202.46 / Avg: 325260.41 / Max: 325366.41Min: 325161.37 / Avg: 325184.58 / Max: 325196.57Min: 325717.03 / Avg: 325766.94 / Max: 325795.68

OpenBenchmarking.orgMB/s, More Is BetterIntel Memory Latency CheckerTest: Peak Injection Bandwidth - Stream-Triad Liker5r4r3r2br2ar1ar170K140K210K280K350KSE +/- 55.81, N = 3SE +/- 60.42, N = 3SE +/- 32.03, N = 3SE +/- 12.95, N = 3SE +/- 34.05, N = 3SE +/- 38.10, N = 3SE +/- 177.93, N = 3324234.5324112.8324227.4324209.8323826.9323924.2324377.2
OpenBenchmarking.orgMB/s, More Is BetterIntel Memory Latency CheckerTest: Peak Injection Bandwidth - Stream-Triad Liker5r4r3r2br2ar1ar160K120K180K240K300KMin: 324154.8 / Avg: 324234.47 / Max: 324342Min: 324038.6 / Avg: 324112.8 / Max: 324232.5Min: 324174.2 / Avg: 324227.4 / Max: 324284.9Min: 324192.6 / Avg: 324209.83 / Max: 324235.2Min: 323774.4 / Avg: 323826.87 / Max: 323890.7Min: 323850 / Avg: 323924.17 / Max: 323976.4Min: 324023.4 / Avg: 324377.17 / Max: 324587.4

MariaDB

OpenBenchmarking.orgQueries Per Second, More Is BetterMariaDB 10.5.2Clients: 16r3r2b30060090012001500SE +/- 3.49, N = 3SE +/- 1.85, N = 3126212641. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lsnappy -ldl -lz -lrt
OpenBenchmarking.orgQueries Per Second, More Is BetterMariaDB 10.5.2Clients: 16r3r2b2004006008001000Min: 1257.86 / Avg: 1262.11 / Max: 1269.04Min: 1261.03 / Avg: 1264.23 / Max: 1267.431. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lsnappy -ldl -lz -lrt

Botan

OpenBenchmarking.orgMiB/s, More Is BetterBotan 2.17.3Test: Twofish - Decryptr4r3r2br1ar160120180240300SE +/- 0.04, N = 3SE +/- 0.06, N = 3SE +/- 0.12, N = 3SE +/- 0.11, N = 3SE +/- 0.14, N = 3292.61292.83292.40292.37292.741. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.orgMiB/s, More Is BetterBotan 2.17.3Test: Twofish - Decryptr4r3r2br1ar150100150200250Min: 292.53 / Avg: 292.61 / Max: 292.67Min: 292.7 / Avg: 292.83 / Max: 292.89Min: 292.21 / Avg: 292.4 / Max: 292.63Min: 292.19 / Avg: 292.37 / Max: 292.58Min: 292.46 / Avg: 292.74 / Max: 292.931. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

Basis Universal

OpenBenchmarking.orgSeconds, Fewer Is BetterBasis Universal 1.13Settings: UASTC Level 3r4r2b48121620SE +/- 0.01, N = 3SE +/- 0.02, N = 317.1917.161. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterBasis Universal 1.13Settings: UASTC Level 3r4r2b48121620Min: 17.17 / Avg: 17.19 / Max: 17.21Min: 17.14 / Avg: 17.16 / Max: 17.21. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL, NVIDIA OptiX, and NVIDIA CUDA is supported. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 2.92Blend File: Pabellon Barcelona - Compute: CPU-Onlyr4r2b20406080100SE +/- 0.28, N = 3SE +/- 0.08, N = 388.6888.57
OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 2.92Blend File: Pabellon Barcelona - Compute: CPU-Onlyr4r2b20406080100Min: 88.22 / Avg: 88.68 / Max: 89.18Min: 88.41 / Avg: 88.57 / Max: 88.65

HammerDB - MariaDB

This is a MariaDB MySQL database server benchmark making use of the HammerDB benchmarking / load testing tool. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNew Orders Per Minute, More Is BetterHammerDB - MariaDB 10.5.9Virtual Users: 128 - Warehouses: 500r1ar112K24K36K48K60KSE +/- 484.29, N = 9SE +/- 891.59, N = 957242571901. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lbz2 -lsnappy -ldl -lz -lrt
OpenBenchmarking.orgNew Orders Per Minute, More Is BetterHammerDB - MariaDB 10.5.9Virtual Users: 128 - Warehouses: 500r1ar110K20K30K40K50KMin: 54404 / Avg: 57242 / Max: 59279Min: 51821 / Avg: 57189.78 / Max: 604071. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lbz2 -lsnappy -ldl -lz -lrt

ASTC Encoder

OpenBenchmarking.orgSeconds, Fewer Is BetterASTC Encoder 2.4Preset: Exhaustiver4r2b48121620SE +/- 0.02, N = 3SE +/- 0.00, N = 316.3716.361. (CXX) g++ options: -O3 -flto -pthread
OpenBenchmarking.orgSeconds, Fewer Is BetterASTC Encoder 2.4Preset: Exhaustiver4r2b48121620Min: 16.34 / Avg: 16.37 / Max: 16.4Min: 16.36 / Avg: 16.36 / Max: 16.361. (CXX) g++ options: -O3 -flto -pthread

Botan

OpenBenchmarking.orgMiB/s, More Is BetterBotan 2.17.3Test: KASUMI - Decryptr4r3r2br1ar120406080100SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.03, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 374.2974.3174.2874.2974.321. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.orgMiB/s, More Is BetterBotan 2.17.3Test: KASUMI - Decryptr4r3r2br1ar11428425670Min: 74.25 / Avg: 74.29 / Max: 74.33Min: 74.29 / Avg: 74.31 / Max: 74.33Min: 74.21 / Avg: 74.28 / Max: 74.32Min: 74.27 / Avg: 74.29 / Max: 74.32Min: 74.29 / Avg: 74.32 / Max: 74.341. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 1.1.3Model: SqueezeNetV1.0r4r2b246810SE +/- 0.078, N = 12SE +/- 0.002, N = 37.1707.174MIN: 6.38 / MAX: 9.97MIN: 6.95 / MAX: 7.881. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 1.1.3Model: SqueezeNetV1.0r4r2b3691215Min: 6.45 / Avg: 7.17 / Max: 7.49Min: 7.17 / Avg: 7.17 / Max: 7.181. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Blender

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 2.92Blend File: Barbershop - Compute: CPU-Onlyr4r2b20406080100SE +/- 0.59, N = 3SE +/- 0.18, N = 3109.96110.02
OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 2.92Blend File: Barbershop - Compute: CPU-Onlyr4r2b20406080100Min: 109.01 / Avg: 109.96 / Max: 111.04Min: 109.69 / Avg: 110.02 / Max: 110.3

Botan

OpenBenchmarking.orgMiB/s, More Is BetterBotan 2.17.3Test: Blowfish - Decryptr4r3r2br1ar180160240320400SE +/- 0.07, N = 3SE +/- 0.03, N = 3SE +/- 0.04, N = 3SE +/- 0.06, N = 3SE +/- 0.05, N = 3363.28363.31363.20363.33363.261. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.orgMiB/s, More Is BetterBotan 2.17.3Test: Blowfish - Decryptr4r3r2br1ar160120180240300Min: 363.14 / Avg: 363.28 / Max: 363.39Min: 363.26 / Avg: 363.31 / Max: 363.37Min: 363.13 / Avg: 363.2 / Max: 363.27Min: 363.21 / Avg: 363.33 / Max: 363.43Min: 363.19 / Avg: 363.26 / Max: 363.351. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

HammerDB - MariaDB

OpenBenchmarking.orgTransactions Per Minute, More Is BetterHammerDB - MariaDB 10.5.9Virtual Users: 128 - Warehouses: 500r1ar140K80K120K160K200KSE +/- 1389.03, N = 9SE +/- 2691.06, N = 91732281732881. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lbz2 -lsnappy -ldl -lz -lrt
OpenBenchmarking.orgTransactions Per Minute, More Is BetterHammerDB - MariaDB 10.5.9Virtual Users: 128 - Warehouses: 500r1ar130K60K90K120K150KMin: 165020 / Avg: 173228 / Max: 178979Min: 156968 / Avg: 173288.44 / Max: 1834391. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lbz2 -lsnappy -ldl -lz -lrt
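A quick consistency check on the HammerDB figures: new orders are a roughly fixed fraction of the TPC-C-style transaction mix, so NOPM should track TPM at an approximately constant ratio across runs. Using the 128-user / 500-warehouse averages reported here:

```python
# (TPM, NOPM) average pairs from the 128-user / 500-warehouse results.
results = {"r1a": (173228, 57242), "r1": (173288, 57190)}

for run, (tpm, nopm) in results.items():
    # Both runs land near a 0.33 new-orders-per-transaction ratio.
    print(run, round(nopm / tpm, 3))
```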

Sysbench

This is a benchmark of Sysbench with the built-in CPU and memory sub-tests. Sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. Learn more via the OpenBenchmarking.org test page.
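Sysbench's CPU test reports events per second, where each event verifies the primality of integers up to a limit (`--cpu-max-prime`, 10000 by default) by trial division. A rough sketch of that per-event work, assuming the trial-division approach sysbench documents (the exact loop bounds in sysbench may differ slightly):

```python
def cpu_event(max_prime=10_000):
    """One sysbench-style CPU event: trial-division primality
    check for every integer from 3 up to max_prime."""
    n_primes = 0
    for c in range(3, max_prime + 1):
        t = 2
        while t * t <= c:
            if c % t == 0:
                break
            t += 1
        else:
            # No divisor found: c is prime.
            n_primes += 1
    return n_primes

print(cpu_event(100))  # -> 24 (primes in 3..100)
```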

OpenBenchmarking.orgEvents Per Second, More Is BetterSysbench 1.0.20Test: CPUr4r2b50K100K150K200K250KSE +/- 269.51, N = 3SE +/- 247.29, N = 3214241.34214210.831. (CC) gcc options: -pthread -O2 -funroll-loops -rdynamic -ldl -laio -lm
OpenBenchmarking.orgEvents Per Second, More Is BetterSysbench 1.0.20Test: CPUr4r2b40K80K120K160K200KMin: 213702.33 / Avg: 214241.34 / Max: 214513.12Min: 213716.29 / Avg: 214210.83 / Max: 214464.031. (CC) gcc options: -pthread -O2 -funroll-loops -rdynamic -ldl -laio -lm

MariaDB

OpenBenchmarking.orgQueries Per Second, More Is BetterMariaDB 10.5.2Clients: 512r2b4080120160200SE +/- 0.87, N = 31661. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lsnappy -ldl -lz -lrt

OpenBenchmarking.orgQueries Per Second, More Is BetterMariaDB 10.5.2Clients: 256r2b4080120160200SE +/- 0.22, N = 31601. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lsnappy -ldl -lz -lrt

CP2K Molecular Dynamics

CP2K is an open-source molecular dynamics software package focused on quantum chemistry and solid-state physics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterCP2K Molecular Dynamics 8.1Input: Fayalite-FISTr2a300600900120015001374.66

HammerDB - MariaDB

OpenBenchmarking.orgTransactions Per Minute, More Is BetterHammerDB - MariaDB 10.5.9Virtual Users: 128 - Warehouses: 250r140K80K120K160K200KSE +/- 2616.54, N = 91678091. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lbz2 -lsnappy -ldl -lz -lrt

OpenBenchmarking.orgNew Orders Per Minute, More Is BetterHammerDB - MariaDB 10.5.9Virtual Users: 128 - Warehouses: 250r112K24K36K48K60KSE +/- 857.30, N = 9554151. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lbz2 -lsnappy -ldl -lz -lrt

OpenBenchmarking.orgTransactions Per Minute, More Is BetterHammerDB - MariaDB 10.5.9Virtual Users: 64 - Warehouses: 250r140K80K120K160K200KSE +/- 2831.11, N = 91913971. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lbz2 -lsnappy -ldl -lz -lrt

OpenBenchmarking.orgNew Orders Per Minute, More Is BetterHammerDB - MariaDB 10.5.9Virtual Users: 64 - Warehouses: 250r114K28K42K56K70KSE +/- 937.55, N = 9632791. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lbz2 -lsnappy -ldl -lz -lrt

OpenBenchmarking.orgTransactions Per Minute, More Is BetterHammerDB - MariaDB 10.5.9Virtual Users: 32 - Warehouses: 500r140K80K120K160K200KSE +/- 2885.40, N = 92084191. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lbz2 -lsnappy -ldl -lz -lrt

OpenBenchmarking.orgNew Orders Per Minute, More Is BetterHammerDB - MariaDB 10.5.9Virtual Users: 32 - Warehouses: 500r115K30K45K60K75KSE +/- 921.11, N = 9688181. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lbz2 -lsnappy -ldl -lz -lrt

OpenBenchmarking.orgTransactions Per Minute, More Is BetterHammerDB - MariaDB 10.5.9Virtual Users: 32 - Warehouses: 250r140K80K120K160K200KSE +/- 3390.81, N = 92092541. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lbz2 -lsnappy -ldl -lz -lrt

OpenBenchmarking.orgNew Orders Per Minute, More Is BetterHammerDB - MariaDB 10.5.9Virtual Users: 32 - Warehouses: 250r115K30K45K60K75KSE +/- 1078.76, N = 9690541. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lbz2 -lsnappy -ldl -lz -lrt

OpenBenchmarking.orgTransactions Per Minute, More Is BetterHammerDB - MariaDB 10.5.9Virtual Users: 16 - Warehouses: 500r140K80K120K160K200KSE +/- 3159.46, N = 91952581. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lbz2 -lsnappy -ldl -lz -lrt

OpenBenchmarking.orgNew Orders Per Minute, More Is BetterHammerDB - MariaDB 10.5.9Virtual Users: 16 - Warehouses: 500r114K28K42K56K70KSE +/- 1031.07, N = 9644771. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lbz2 -lsnappy -ldl -lz -lrt

OpenBenchmarking.orgTransactions Per Minute, More Is BetterHammerDB - MariaDB 10.5.9Virtual Users: 16 - Warehouses: 250r140K80K120K160K200KSE +/- 2649.02, N = 31929131. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lbz2 -lsnappy -ldl -lz -lrt

OpenBenchmarking.orgNew Orders Per Minute, More Is BetterHammerDB - MariaDB 10.5.9Virtual Users: 16 - Warehouses: 250r114K28K42K56K70KSE +/- 880.35, N = 3637571. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lbz2 -lsnappy -ldl -lz -lrt

OpenBenchmarking.orgTransactions Per Minute, More Is BetterHammerDB - MariaDB 10.5.9Virtual Users: 8 - Warehouses: 500r160K120K180K240K300KSE +/- 2338.98, N = 32859841. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lbz2 -lsnappy -ldl -lz -lrt

OpenBenchmarking.orgNew Orders Per Minute, More Is BetterHammerDB - MariaDB 10.5.9Virtual Users: 8 - Warehouses: 500r120K40K60K80K100KSE +/- 693.36, N = 3943791. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lbz2 -lsnappy -ldl -lz -lrt

OpenBenchmarking.orgTransactions Per Minute, More Is BetterHammerDB - MariaDB 10.5.9Virtual Users: 8 - Warehouses: 250r160K120K180K240K300KSE +/- 2006.72, N = 32900821. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lbz2 -lsnappy -ldl -lz -lrt

OpenBenchmarking.orgNew Orders Per Minute, More Is BetterHammerDB - MariaDB 10.5.9Virtual Users: 8 - Warehouses: 250r120K40K60K80K100KSE +/- 675.05, N = 3957681. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lbz2 -lsnappy -ldl -lz -lrt

Mobile Neural Network

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 1.1.3Model: MobileNetV2_224r4r2b0.92251.8452.76753.694.6125SE +/- 0.135, N = 12SE +/- 0.333, N = 34.1004.078MIN: 2.97 / MAX: 12.98MIN: 2.9 / MAX: 13.171. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 1.1.3Model: MobileNetV2_224r4r2b246810Min: 3.47 / Avg: 4.1 / Max: 4.74Min: 3.59 / Avg: 4.08 / Max: 4.721. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 1.1.3Model: resnet-v2-50r4r2b1122334455SE +/- 1.07, N = 12SE +/- 2.59, N = 348.0448.73MIN: 42.13 / MAX: 145.2MIN: 43.19 / MAX: 69.591. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 1.1.3Model: resnet-v2-50r4r2b1020304050Min: 42.77 / Avg: 48.04 / Max: 53.25Min: 43.55 / Avg: 48.73 / Max: 51.371. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

KTX-Software toktx

OpenBenchmarking.orgSeconds, Fewer Is BetterKTX-Software toktx 4.0Settings: Zstd Compression 9r4r2b0.83181.66362.49543.32724.159SE +/- 0.064, N = 15SE +/- 0.003, N = 33.6973.470
OpenBenchmarking.orgSeconds, Fewer Is BetterKTX-Software toktx 4.0Settings: Zstd Compression 9r4r2b246810Min: 3.54 / Avg: 3.7 / Max: 4.15Min: 3.47 / Avg: 3.47 / Max: 3.48

MariaDB

OpenBenchmarking.orgQueries Per Second, More Is BetterMariaDB 10.5.2Clients: 1r3r2b7001400210028003500SE +/- 61.33, N = 12SE +/- 73.97, N = 15345833361. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lsnappy -ldl -lz -lrt
OpenBenchmarking.orgQueries Per Second, More Is BetterMariaDB 10.5.2Clients: 1r3r2b6001200180024003000Min: 2949.85 / Avg: 3458.14 / Max: 3690.04Min: 2898.55 / Avg: 3335.86 / Max: 38611. (CXX) g++ options: -fPIC -pie -fstack-protector -O2 -shared -lpthread -lsnappy -ldl -lz -lrt

ViennaCL

ViennaCL is an open-source linear algebra library written in C++ with support for OpenCL and OpenMP. This test profile makes use of ViennaCL's built-in benchmarks. Learn more via the OpenBenchmarking.org test page.
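For the dGEMM results below, GFLOPs/s follows the usual dense matrix-multiply accounting of 2·N³ floating-point operations for an N×N×N product (one multiply and one add per inner-loop step). A small helper showing the conversion; ViennaCL's actual benchmark matrix size is not stated here, so N and the timing are illustrative:

```python
def dgemm_gflops(n, seconds):
    """GFLOPs/s for an n x n x n double-precision matrix multiply:
    n^3 multiply-add pairs -> 2 * n^3 floating-point operations."""
    return 2.0 * n**3 / seconds / 1e9

# Illustrative: a 2048^3 DGEMM finishing in 0.25 s.
print(round(dgemm_gflops(2048, 0.25), 1))  # -> 68.7
```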

OpenBenchmarking.orgGFLOPs/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - dGEMM-TTr4r3r2br1ar120406080100SE +/- 2.94, N = 15SE +/- 2.33, N = 15SE +/- 1.75, N = 15SE +/- 0.90, N = 3SE +/- 1.45, N = 1363.761.754.777.276.31. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL
OpenBenchmarking.orgGFLOPs/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - dGEMM-TTr4r3r2br1ar11530456075Min: 44.1 / Avg: 63.72 / Max: 80.5Min: 45.2 / Avg: 61.65 / Max: 77Min: 41.2 / Avg: 54.75 / Max: 71.6Min: 75.4 / Avg: 77.2 / Max: 78.2Min: 67.4 / Avg: 76.31 / Max: 81.71. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL

OpenBenchmarking.orgGFLOPs/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - dGEMM-TNr4r3r2br1ar120406080100SE +/- 2.43, N = 14SE +/- 1.88, N = 15SE +/- 2.02, N = 15SE +/- 0.69, N = 3SE +/- 1.67, N = 1367.666.962.377.476.01. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL
OpenBenchmarking.orgGFLOPs/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - dGEMM-TNr4r3r2br1ar11530456075Min: 54.9 / Avg: 67.6 / Max: 79.1Min: 53.5 / Avg: 66.94 / Max: 76.6Min: 54.7 / Avg: 62.33 / Max: 76.7Min: 76.2 / Avg: 77.37 / Max: 78.6Min: 61.2 / Avg: 75.97 / Max: 81.21. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL

OpenBenchmarking.orgGFLOPs/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - dGEMM-NTr4r3r2br1ar120406080100SE +/- 1.98, N = 15SE +/- 1.99, N = 15SE +/- 1.14, N = 15SE +/- 1.01, N = 3SE +/- 1.88, N = 1372.468.959.876.875.61. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL
OpenBenchmarking.orgGFLOPs/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - dGEMM-NTr4r3r2br1ar11530456075Min: 55.8 / Avg: 72.36 / Max: 82.5Min: 55.4 / Avg: 68.89 / Max: 76.8Min: 54 / Avg: 59.8 / Max: 70.2Min: 74.9 / Avg: 76.83 / Max: 78.3Min: 61.8 / Avg: 75.65 / Max: 811. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL

OpenBenchmarking.orgGFLOPs/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - dGEMM-NNr4r3r2br1ar11632486480SE +/- 1.95, N = 15SE +/- 2.18, N = 15SE +/- 2.06, N = 15SE +/- 3.11, N = 3SE +/- 1.42, N = 1470.866.461.972.373.51. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL
OpenBenchmarking.orgGFLOPs/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - dGEMM-NNr4r3r2br1ar11428425670Min: 55.9 / Avg: 70.78 / Max: 81Min: 53.6 / Avg: 66.42 / Max: 80.2Min: 51.6 / Avg: 61.92 / Max: 73.9Min: 66.2 / Avg: 72.27 / Max: 76.5Min: 63.8 / Avg: 73.54 / Max: 78.71. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL

OpenBenchmarking.orgGB/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - dGEMV-Tr4r3r2br1ar1160320480640800SE +/- 3.20, N = 15SE +/- 2.02, N = 15SE +/- 27.49, N = 15SE +/- 5.04, N = 3SE +/- 2.46, N = 13647.0647.0389.9319.0719.01. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL
OpenBenchmarking.orgGB/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - dGEMV-Tr4r3r2br1ar1130260390520650Min: 626 / Avg: 647.33 / Max: 671Min: 637 / Avg: 647.27 / Max: 662Min: 25.3 / Avg: 389.89 / Max: 442Min: 309 / Avg: 318.67 / Max: 326Min: 700 / Avg: 719.23 / Max: 7291. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL

OpenBenchmarking.orgGB/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - dGEMV-Nr4r3r2br1ar11632486480SE +/- 0.25, N = 15SE +/- 3.93, N = 15SE +/- 3.75, N = 15SE +/- 2.90, N = 3SE +/- 0.36, N = 1470.264.362.363.672.31. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL
OpenBenchmarking.orgGB/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - dGEMV-Nr4r3r2br1ar11428425670Min: 68.5 / Avg: 70.21 / Max: 72Min: 11.1 / Avg: 64.31 / Max: 72Min: 10 / Avg: 62.34 / Max: 67.6Min: 57.8 / Avg: 63.6 / Max: 66.7Min: 69 / Avg: 72.29 / Max: 741. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL

OpenBenchmarking.orgGB/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - dDOTr4r3r2br1ar1160320480640800SE +/- 2.76, N = 15SE +/- 50.57, N = 15SE +/- 34.40, N = 14SE +/- 34.44, N = 3SE +/- 6.43, N = 14765.00713.47447.65371.00720.001. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL
OpenBenchmarking.orgGB/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - dDOTr4r3r2br1ar1130260390520650Min: 747 / Avg: 765.2 / Max: 779Min: 7.03 / Avg: 713.47 / Max: 778Min: 7.11 / Avg: 447.65 / Max: 503Min: 323 / Avg: 371.33 / Max: 438Min: 643 / Avg: 720 / Max: 7421. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL

OpenBenchmarking.orgGB/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - dAXPYr4r3r2br1ar12004006008001000SE +/- 5.62, N = 15SE +/- 82.34, N = 15SE +/- 40.80, N = 15SE +/- 23.02, N = 3SE +/- 20.63, N = 141158.01024.2507.1392.01058.01. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL
OpenBenchmarking.orgGB/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - dAXPYr4r3r2br1ar12004006008001000Min: 1120 / Avg: 1158 / Max: 1200Min: 28.1 / Avg: 1024.21 / Max: 1170Min: 19.8 / Avg: 507.05 / Max: 597Min: 349 / Avg: 391.67 / Max: 428Min: 879 / Avg: 1058.21 / Max: 11101. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL

OpenBenchmarking.orgGB/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - dCOPYr4r3r2br1ar12004006008001000SE +/- 9.73, N = 15SE +/- 26.97, N = 15SE +/- 35.11, N = 15SE +/- 29.90, N = 3SE +/- 25.47, N = 14936.0913.0422.2335.0843.01. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL
OpenBenchmarking.orgGB/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - dCOPYr4r3r2br1ar1160320480640800Min: 836 / Avg: 936.2 / Max: 981Min: 581 / Avg: 913.27 / Max: 996Min: 13.5 / Avg: 422.23 / Max: 527Min: 276 / Avg: 334.67 / Max: 374Min: 523 / Avg: 842.71 / Max: 8911. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL

OpenBenchmarking.orgGB/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - sDOTr4r3r2br1ar1130260390520650SE +/- 2.45, N = 15SE +/- 2.55, N = 15SE +/- 5.60, N = 15SE +/- 11.67, N = 3SE +/- 2.34, N = 145355323492776201. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL
OpenBenchmarking.orgGB/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - sDOTr4r3r2br1ar1110220330440550Min: 523 / Avg: 535.33 / Max: 556Min: 508 / Avg: 531.93 / Max: 552Min: 286 / Avg: 348.87 / Max: 383Min: 259 / Avg: 277.33 / Max: 299Min: 602 / Avg: 620 / Max: 6311. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL

OpenBenchmarking.orgGB/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - sAXPYr4r3r2br1ar12004006008001000SE +/- 11.35, N = 15SE +/- 8.11, N = 15SE +/- 10.36, N = 15SE +/- 15.25, N = 3SE +/- 6.62, N = 1485586247437010031. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL
OpenBenchmarking.orgGB/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - sAXPYr4r3r2br1ar12004006008001000Min: 770 / Avg: 854.93 / Max: 904Min: 788 / Avg: 861.8 / Max: 902Min: 374 / Avg: 474.4 / Max: 518Min: 347 / Avg: 370.33 / Max: 399Min: 927 / Avg: 1002.86 / Max: 10301. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL

OpenBenchmarking.orgGB/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - sCOPYr4r3r2br1ar1400800120016002000SE +/- 54.62, N = 15SE +/- 51.32, N = 15SE +/- 22.07, N = 15SE +/- 4.10, N = 3SE +/- 16.63, N = 141167113569150418341. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL
OpenBenchmarking.orgGB/s, More Is BetterViennaCL 1.7.1Test: CPU BLAS - sCOPYr4r3r2br1ar130060090012001500Min: 1020 / Avg: 1166.67 / Max: 1590Min: 964 / Avg: 1134.93 / Max: 1580Min: 479 / Avg: 691.33 / Max: 777Min: 498 / Avg: 504.33 / Max: 512Min: 1630 / Avg: 1833.57 / Max: 18801. (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.1.2Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPUr4r3r2br1ar12004006008001000SE +/- 16.86, N = 14SE +/- 0.83, N = 3SE +/- 0.61, N = 3SE +/- 1.56, N = 3SE +/- 7.01, N = 3811.94793.92791.70793.36804.39MIN: 761.61MIN: 769MIN: 769.61MIN: 765.14MIN: 763.491. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.1.2Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPUr4r3r2br1ar1140280420560700Min: 787.01 / Avg: 811.94 / Max: 1029.24Min: 792.87 / Avg: 793.92 / Max: 795.56Min: 790.49 / Avg: 791.69 / Max: 792.48Min: 790.63 / Avg: 793.36 / Max: 796.05Min: 796.8 / Avg: 804.39 / Max: 818.391. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2, Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)

    Run   N    Avg       SE +/-    Min   Max   MIN (reported)
    r4    15   0.217941  0.004970  0.20  0.28  0.19
    r3    15   0.218349  0.003384  0.20  0.26  0.19
    r2b   15   0.210324  0.004449  0.19  0.27  0.18
    r1a   3    0.210728  0.001109  0.21  0.21  0.20
    r1    15   0.210919  0.002205  0.19  0.23  0.19

    1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

libavif avifenc

This test uses the AOMedia libavif library to encode a JPEG image to the AV1 Image File Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.9.0, Encoder Speed: 10 (seconds, fewer is better)

    Run   N    Avg    SE +/-  Min   Max
    r4    15   6.746  0.130   5.73  7.41
    r3    15   6.597  0.145   5.74  7.31
    r2b   15   6.656  0.116   5.81  7.35
    r1a   3    5.505  0.014   5.49  5.53
    r1    3    5.477  0.038   5.40  5.52

    1. (CXX) g++ options: -O3 -fPIC -lm
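For a fewer-is-better metric like encode time, normalizing each run against the fastest makes the spread easier to read. A small sketch using the average encode times from the avifenc results above:

```python
# Average encode times (seconds) from the avifenc Speed 10 results above.
avg = {"r4": 6.746, "r3": 6.597, "r2b": 6.656, "r1a": 5.505, "r1": 5.477}

baseline = avg["r1"]  # fastest run serves as the baseline
for run, t in sorted(avg.items(), key=lambda kv: kv[1]):
    print(f"{run}: {t:.3f}s ({t / baseline:.2f}x vs r1)")
# r4 comes out roughly 1.23x (about 23%) slower than r1.
```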

SVT-VP9

This is a test of SVT-VP9, the Intel Open Visual Cloud Scalable Video Technology CPU-based, multi-threaded video encoder for the VP9 format, run against a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3, Tuning: VMAF Optimized - Input: Bosphorus 1080p (frames per second, more is better)

    Run   N    Avg     SE +/-  Min     Max
    r4    3    184.07  0.65    183.24  185.35
    r3    3    185.53  1.57    183.55  188.63
    r2b   12   182.26  4.05    139.70  195.76
    r1a   12   393.46  16.03   218.10  414.74
    r1    12   386.29  15.40   219.38  415.71

    1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
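The gap between the early and late runs here is one of the largest in the result file. A minimal sketch quantifying it from the average frame rates above (note the large SE on r1a/r1, so the exact ratio carries some uncertainty):

```python
# Average frame rates (FPS) from the SVT-VP9 VMAF Optimized 1080p results above.
avg_fps = {"r4": 184.07, "r3": 185.53, "r2b": 182.26, "r1a": 393.46, "r1": 386.29}

# Ratio of the fastest run to the slowest run.
speedup = avg_fps["r1a"] / avg_fps["r2b"]
print(f"r1a averaged {speedup:.2f}x the frame rate of r2b")  # ~2.16x
```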

LuxCoreRender

LuxCoreRender is an open-source 3D physically based renderer formerly known as LuxRender. LuxCoreRender supports CPU-based rendering as well as GPU acceleration via OpenCL, NVIDIA CUDA, and NVIDIA OptiX interfaces. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.5, Scene: Rainbow Colors and Prism - Acceleration: CPU (M samples/sec, more is better)

    Run   N    Avg    SE +/-  Min    Max    MIN / MAX (reported)
    r4    12   14.79  0.79    11.32  20.88  9.85 / 20.95
    r3    12   16.47  1.13    10.45  21.31  10.39 / 21.43
    r2b   13   13.42  0.87    10.54  21.07  8.28 / 21.15
    r1a   15   13.34  0.47    10.34  17.25  10.32 / 17.45
    r1    15   17.04  1.05    11.62  22.01  11.27 / 22.05

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

GNU Radio 3.8.1.0, Test: Hilbert Transform (MiB/s, more is better)

    Run   N   Avg    SE +/-  Min    Max
    r4    9   373.8  24.71   275.0  451.5
    r3    9   408.0  17.46   315.8  448.0
    r2b   3   357.4  47.90   282.9  446.8
    r1a   3   459.1  1.66    455.9  461.5
    r1    3   459.3  2.02    455.7  462.7

GNU Radio 3.8.1.0, Test: FM Deemphasis Filter (MiB/s, more is better)

    Run   N   Avg    SE +/-  Min    Max
    r4    9   622.0  32.02   464.2  702.8
    r3    9   621.0  31.57   470.6  701.7
    r2b   3   645.8  53.33   539.2  701.3
    r1a   3   727.4  1.04    725.7  729.3
    r1    3   734.0  1.94    730.2  736.5

GNU Radio 3.8.1.0, Test: IIR Filter (MiB/s, more is better)

    Run   N   Avg    SE +/-  Min    Max
    r4    9   487.7  25.67   388.3  588.1
    r3    9   487.4  26.49   367.6  587.4
    r2b   3   498.2  45.07   452.9  588.3
    r1a   3   609.5  0.46    608.7  610.3
    r1    3   610.6  0.38    609.9  611.2

GNU Radio 3.8.1.0, Test: FIR Filter (MiB/s, more is better)

    Run   N   Avg    SE +/-  Min    Max
    r4    9   515.6  11.25   439.9  572.8
    r3    9   502.0  16.19   398.9  569.3
    r2b   3   470.0  44.41   381.4  519.4
    r1a   3   604.8  0.20    604.5  605.2
    r1    3   603.0  1.45    601.2  605.9

GNU Radio 3.8.1.0, Test: Signal Source (Cosine) (MiB/s, more is better)

    Run   N   Avg     SE +/-  Min     Max
    r4    9   1619.2  82.03   1367.9  1875.3
    r3    9   1723.9  72.44   1360.9  1869.3
    r2b   3   1684.4  168.17  1348.1  1853.4
    r1a   3   2175.3  2.24    2170.8  2177.7
    r1    3   2183.5  0.93    2181.7  2184.8

GNU Radio 3.8.1.0, Test: Five Back to Back FIR Filters (MiB/s, more is better)

    Run   N   Avg     SE +/-  Min     Max
    r4    9   487.9   48.36   121.1   611.0
    r3    9   580.5   39.63   306.0   666.8
    r2b   3   111.2   1.12    109.1   112.9
    r1a   3   1015.2  2.30    1010.6  1017.9
    r1    3   1024.3  2.54    1019.3  1027.7
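To summarize a run across all six GNU Radio tests, a geometric mean of the per-test throughputs (the same aggregate the result viewer's "Show Overall Geometric Mean" option computes) is a reasonable choice. A sketch comparing r1 against r4 using the average values above:

```python
from statistics import geometric_mean

# Average throughputs (MiB/s) for the six GNU Radio tests above, in the order:
# Hilbert, FM Deemphasis, IIR, FIR, Signal Source (Cosine), Five Back to Back FIR.
r1 = [459.3, 734.0, 610.6, 603.0, 2183.5, 1024.3]
r4 = [373.8, 622.0, 487.7, 515.6, 1619.2, 487.9]

ratio = geometric_mean(r1) / geometric_mean(r4)
print(f"r1 outperforms r4 by {ratio:.2f}x overall (geometric mean)")  # ~1.35x
```

A geometric mean is preferred over an arithmetic mean here so that the high-throughput Signal Source test does not dominate the aggregate.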

LuaRadio

LuaRadio is a lightweight software-defined radio (SDR) framework built atop LuaJIT. LuaRadio provides a suite of source, sink, and processing blocks, with a simple API for defining flow graphs, running flow graphs, creating blocks, and creating data types. Learn more via the OpenBenchmarking.org test page.

LuaRadio 0.9.1, Test: Five Back to Back FIR Filters (MiB/s, more is better)

    Run   N   Avg     SE +/-  Min     Max
    r4    6   706.1   73.21   355.0   828.9
    r3    6   662.8   74.31   349.0   810.0
    r2b   9   804.5   22.87   702.2   897.9
    r1a   3   1094.5  0.62    1093.6  1095.7
    r1    3   1094.8  2.24    1090.5  1098.1

188 Results Shown

oneDNN
AOM AV1:
  Speed 9 Realtime - Bosphorus 1080p
  Speed 8 Realtime - Bosphorus 1080p
  Speed 6 Two-Pass - Bosphorus 1080p
  Speed 6 Realtime - Bosphorus 1080p
  Speed 6 Realtime - Bosphorus 4K
  Speed 8 Realtime - Bosphorus 4K
  Speed 6 Two-Pass - Bosphorus 4K
  Speed 9 Realtime - Bosphorus 4K
SVT-VP9
SVT-HEVC
Intel Memory Latency Checker
AOM AV1:
  Speed 4 Two-Pass - Bosphorus 1080p
  Speed 4 Two-Pass - Bosphorus 4K
SVT-VP9
SVT-HEVC
Timed Erlang/OTP Compilation
AOM AV1
LuxCoreRender
AOM AV1
SVT-HEVC
LuxCoreRender
Xcompact3d Incompact3d:
  input.i3d 129 Cells Per Direction
  input.i3d 193 Cells Per Direction
  X3D-benchmarking input.i3d
libavif avifenc:
  6
  6, Lossless
  2
LuaRadio
libavif avifenc
Timed Wasmer Compilation
Timed Linux Kernel Compilation
libavif avifenc
LuaRadio
Timed Node.js Compilation
Xmrig
Timed Mesa Compilation
LuxCoreRender
Timed LLVM Compilation
Mobile Neural Network
Liquid-DSP
Xmrig
srsLTE
toyBrot Fractal Generator
Stockfish
VOSK Speech Recognition Toolkit
oneDNN
Liquid-DSP
oneDNN
LuxCoreRender
Liquid-DSP
oneDNN
toyBrot Fractal Generator
HammerDB - MariaDB:
  64 - 500:
    New Orders Per Minute
    Transactions Per Minute
GNU GMP GMPbench
libjpeg-turbo tjbench
oneDNN
LuaRadio
oneDNN
toyBrot Fractal Generator
oneDNN
Liquid-DSP
MariaDB
Liquid-DSP
Intel Memory Latency Checker
oneDNN
Timed LLVM Compilation
oneDNN
Liquid-DSP:
  2 - 256 - 57
  128 - 256 - 57
KTX-Software toktx
oneDNN
toyBrot Fractal Generator
Mobile Neural Network
oneDNN
MariaDB
oneDNN
KTX-Software toktx
oneDNN:
  Deconvolution Batch shapes_3d - f32 - CPU
  Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU
srsLTE
Botan
oneDNN
Botan
Basis Universal
Botan:
  CAST-256
  ChaCha20Poly1305
Liquid-DSP
Botan
SecureMark
Botan
Google Draco
Botan
oneDNN
Liquid-DSP
oneDNN:
  Recurrent Neural Network Inference - f32 - CPU
  IP Shapes 3D - f32 - CPU
  Deconvolution Batch shapes_1d - bf16bf16bf16 - CPU
Helsing
oneDNN
Blender
Google Draco
Blender
Intel Memory Latency Checker:
  Max Bandwidth - 1:1 Reads-Writes
  Peak Injection Bandwidth - 2:1 Reads-Writes
  Max Bandwidth - 2:1 Reads-Writes
srsLTE
ASTC Encoder
Intel Memory Latency Checker
oneDNN
Basis Universal
MariaDB
Blender
Intel Memory Latency Checker
oneDNN
Intel Memory Latency Checker
Sysbench
Intel Memory Latency Checker
Botan
MariaDB
Botan
MariaDB
Basis Universal
ASTC Encoder
KTX-Software toktx:
  UASTC 4 + Zstd Compression 19
  UASTC 3 + Zstd Compression 19
Intel Memory Latency Checker:
  Max Bandwidth - Stream-Triad Like
  Peak Injection Bandwidth - Stream-Triad Like
MariaDB
Botan
Basis Universal
Blender
HammerDB - MariaDB
ASTC Encoder
Botan
Mobile Neural Network
Blender
Botan
HammerDB - MariaDB
Sysbench
MariaDB:
  512
  256
CP2K Molecular Dynamics
HammerDB - MariaDB:
  128 - 250:
    Transactions Per Minute
    New Orders Per Minute
  64 - 250:
    Transactions Per Minute
    New Orders Per Minute
  32 - 500:
    Transactions Per Minute
    New Orders Per Minute
  32 - 250:
    Transactions Per Minute
    New Orders Per Minute
  16 - 500:
    Transactions Per Minute
    New Orders Per Minute
  16 - 250:
    Transactions Per Minute
    New Orders Per Minute
  8 - 500:
    Transactions Per Minute
    New Orders Per Minute
  8 - 250:
    Transactions Per Minute
    New Orders Per Minute
Mobile Neural Network:
  MobileNetV2_224
  resnet-v2-50
KTX-Software toktx
MariaDB
ViennaCL:
  CPU BLAS - dGEMM-TT
  CPU BLAS - dGEMM-TN
  CPU BLAS - dGEMM-NT
  CPU BLAS - dGEMM-NN
  CPU BLAS - dGEMV-T
  CPU BLAS - dGEMV-N
  CPU BLAS - dDOT
  CPU BLAS - dAXPY
  CPU BLAS - dCOPY
  CPU BLAS - sDOT
  CPU BLAS - sAXPY
  CPU BLAS - sCOPY
oneDNN:
  Recurrent Neural Network Training - bf16bf16bf16 - CPU
  Deconvolution Batch shapes_3d - u8s8f32 - CPU
libavif avifenc
SVT-VP9
LuxCoreRender
GNU Radio:
  Hilbert Transform
  FM Deemphasis Filter
  IIR Filter
  FIR Filter
  Signal Source (Cosine)
  Five Back to Back FIR Filters
LuaRadio