TR 2990WX 2020

AMD Ryzen Threadripper 2990WX 32-Core testing with an ASUS ROG ZENITH EXTREME (1701 BIOS) and Gigabyte AMD Radeon RX 470/480/570/570X/580/580X/590 4GB on Ubuntu 20.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2012260-HA-TR2990WX254

Test categories represented in this result file:

Audio Encoding 3 Tests
Bioinformatics 2 Tests
Chess Test Suite 3 Tests
Timed Code Compilation 3 Tests
C/C++ Compiler Tests 12 Tests
Compression Tests 2 Tests
CPU Massive 15 Tests
Creator Workloads 14 Tests
Database Test Suite 2 Tests
Encoding 6 Tests
Fortran Tests 2 Tests
Game Development 3 Tests
HPC - High Performance Computing 9 Tests
Machine Learning 4 Tests
Molecular Dynamics 2 Tests
MPI Benchmarks 3 Tests
Multi-Core 15 Tests
NVIDIA GPU Compute 8 Tests
Intel oneAPI 2 Tests
OpenMPI Tests 3 Tests
Programmer / Developer System Benchmarks 8 Tests
Python Tests 2 Tests
Scientific Computing 5 Tests
Server 5 Tests
Server CPU Tests 8 Tests
Single-Threaded 4 Tests
Texture Compression 3 Tests
Video Encoding 3 Tests
Vulkan Compute 6 Tests
Common Workstation Benchmarks 2 Tests

Result Identifier    Date Run              Test Duration
1                    December 23 2020      1 Day, 3 Hours, 19 Minutes
2                    December 24 2020      1 Day, 2 Hours, 17 Minutes
3                    December 24 2020      22 Hours, 30 Minutes
Average                                    1 Day, 1 Hour, 22 Minutes
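The Average row is simply the mean of the three run durations. A minimal Python check of that arithmetic, converting the durations above to minutes:

    # Quick check that the Average row matches the three run durations above.
    durations_min = [27 * 60 + 19, 26 * 60 + 17, 22 * 60 + 30]   # runs 1-3, in minutes

    avg = sum(durations_min) // len(durations_min)               # 1522 minutes
    days, rem = divmod(avg, 24 * 60)
    hours, minutes = divmod(rem, 60)
    print(f"{days} Day, {hours} Hour, {minutes} Minutes")        # -> 1 Day, 1 Hour, 22 Minutes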


TR 2990WX 2020 - System Details (common to runs 1, 2, and 3)

Processor: AMD Ryzen Threadripper 2990WX 32-Core @ 3.00GHz (32 Cores / 64 Threads)
Motherboard: ASUS ROG ZENITH EXTREME (1701 BIOS)
Chipset: AMD 17h
Memory: 32GB
Disk: Samsung SSD 970 EVO 500GB + 250GB Western Digital WDS250G2X0C-00L350
Graphics: Gigabyte AMD Radeon RX 470/480/570/570X/580/580X/590 4GB (1244/1750MHz)
Audio: Realtek ALC1220
Monitor: LG Ultra HD
Network: Intel I211 + Qualcomm Atheros QCA6174 802.11ac + Wilocity Wil6200 802.11ad
OS: Ubuntu 20.10
Kernel: 5.8.0-33-generic (x86_64)
Desktop: GNOME Shell 3.38.1
Display Server: X Server 1.20.9
Display Driver: modesetting 1.20.9
OpenGL: 4.6 Mesa 20.2.1 (LLVM 11.0.0)
Vulkan: 1.2.131
Compiler: GCC 10.2.0
File-System: ext4
Screen Resolution: 1920x1080

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq ondemand (Boost: Enabled) - CPU Microcode: 0x800820d
Graphics Details: GLAMOR
Java Details: OpenJDK Runtime Environment (build 11.0.9.1+1-Ubuntu-0ubuntu1.20.10)
Python Details: Python 3.8.6
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional STIBP: disabled RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite; relative performance of runs 1, 2, and 3, roughly 100% to 137%), covering: Timed FFmpeg Compilation, Zstd Compression, Stockfish, Redis, x265, HPC Challenge, Timed Clash Compilation, LAMMPS Molecular Dynamics Simulator, NCNN, VKMark, Node.js V8 Web Tooling Benchmark, asmFish, yquake2, Betsy GPU Compressor, Timed HMMer Search, Embree, GROMACS, LZ4 Compression, simdjson, oneDNN, eSpeak-NG Speech Engine, IndigoBench, Timed MAFFT Alignment, Numpy Benchmark, Crafty, Opus Codec Encoding, Libplacebo, VkFFT, SQLite Speedtest, CLOMP, ASTC Encoder, Waifu2x-NCNN Vulkan, Kvazaar, Timed Eigen Compilation, Monkey Audio Encoding, rav1e, Coremark, VkResample, Basis Universal.
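The overview chart expresses each run's result on every test as a relative percentage. A minimal Python sketch of that kind of normalization, assuming each test is scaled so the slowest run scores 100% and a per-run geometric mean summarizes the relative scores (the exact method used for the OpenBenchmarking.org overview may differ); the two example tests use values from this result file:

    # Relative-performance normalization sketch: scale each test so the worst
    # run is 100%, then summarize each run with a geometric mean.
    from math import prod

    # (test, higher_is_better, {run: result}) -- values taken from this result file
    results = [
        ("Zstd Compression 19 (MB/s)",    True,  {1: 33.9, 2: 42.0, 3: 36.2}),
        ("Timed Clash Compilation (sec)", False, {1: 463.22, 2: 477.08, 3: 485.49}),
    ]

    def relative_scores(values, higher_is_better):
        """Scale one test's results so the worst run is 100% and better runs exceed it."""
        if higher_is_better:
            worst = min(values.values())
            return {run: 100.0 * v / worst for run, v in values.items()}
        worst = max(values.values())
        return {run: 100.0 * worst / v for run, v in values.items()}

    per_test = [relative_scores(vals, hib) for _, hib, vals in results]
    for run in sorted(per_test[0]):
        scores = [t[run] for t in per_test]
        geo_mean = prod(scores) ** (1.0 / len(scores))
        print(f"Run {run}: geometric mean of relative scores = {geo_mean:.1f}%")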

[Flattened summary table of the raw per-test results for runs 1-3 (not reliably recoverable from this extraction); the per-test sections below report the individual results.]

HPC Challenge

HPC Challenge (HPCC) is a cluster-focused benchmark consisting of the HPL Linpack TPP benchmark, DGEMM, STREAM, PTRANS, RandomAccess, FFT, and communication bandwidth and latency. This HPC Challenge test profile attempts to ship with standard yet versatile configuration/input files though they can be modified. Learn more via the OpenBenchmarking.org test page.

HPC Challenge 1.5.0 - Test / Class: G-HPL (GFLOPS, more is better)
  Run 1: 54.60  (SE +/- 0.14, N = 3; Min 54.42 / Max 54.87)
  Run 2: 52.78  (SE +/- 0.10, N = 3; Min 52.58 / Max 52.9)
  Run 3: 54.31  (SE +/- 0.24, N = 3; Min 53.86 / Max 54.71)
  1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
  2. ATLAS + Open MPI 4.0.3
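Each entry reports the average of N trials along with a standard error and the min/max trial values. A minimal Python sketch of those statistics, assuming the SE shown is the sample standard deviation divided by the square root of N; the three trial values below are chosen to roughly reproduce Run 1 above (only its min, max, and average are published, so the middle value is a hypothetical filler):

    # Per-run statistics as displayed above: average, standard error, min, max.
    # Assumes SE = sample standard deviation / sqrt(N). The middle trial value
    # is hypothetical; only Run 1's min, max, and average are reported.
    from statistics import mean, stdev

    trials = [54.42, 54.52, 54.87]    # G-HPL GFLOPS, N = 3

    n = len(trials)
    avg = mean(trials)
    se = stdev(trials) / n ** 0.5     # standard error of the mean
    print(f"Avg: {avg:.2f}  SE +/- {se:.2f}, N = {n}  "
          f"Min: {min(trials):.2f} / Max: {max(trials):.2f}")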

Timed Clash Compilation

Build the clash-lang Haskell to VHDL/Verilog/SystemVerilog compiler with GHC 8.10.1. Learn more via the OpenBenchmarking.org test page.

Timed Clash Compilation - Time To Compile (Seconds, fewer is better)
  Run 1: 463.22  (SE +/- 13.67, N = 9; Min 353.97 / Max 480.16)
  Run 2: 477.08  (SE +/- 4.96, N = 9; Min 438.83 / Max 487.42)
  Run 3: 485.49  (SE +/- 0.58, N = 3; Min 484.68 / Max 486.6)

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12 - Settings: UASTC Level 2 + RDO Post-Processing (Seconds, fewer is better)
  Run 1: 651.21  (SE +/- 1.61, N = 3; Min 649.44 / Max 654.43)
  Run 2: 647.92  (SE +/- 0.39, N = 3; Min 647.25 / Max 648.62)
  Run 3: 647.09  (SE +/- 0.07, N = 3; Min 646.95 / Max 647.19)
  1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: regnety_400m (ms, fewer is better)
  Run 1: 102.93  (SE +/- 1.12, N = 12; Min 98.71 / Max 110.86)  MIN: 90.68 / MAX: 1519.43
  Run 2: 101.47  (SE +/- 1.29, N = 12; Min 93.08 / Max 108.92)  MIN: 90.53 / MAX: 2458.71
  Run 3: 103.89  (SE +/- 3.17, N = 12; Min 93.09 / Max 133.79)  MIN: 90.75 / MAX: 3380.17
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: squeezenet_ssd (ms, fewer is better)
  Run 1: 41.89  (SE +/- 1.53, N = 12; Min 35.72 / Max 56.24)  MIN: 31.38 / MAX: 429.4
  Run 2: 40.80  (SE +/- 1.74, N = 12; Min 35.37 / Max 55.4)  MIN: 31.26 / MAX: 438.66
  Run 3: 39.13  (SE +/- 1.90, N = 12; Min 33.43 / Max 55)  MIN: 32.02 / MAX: 459.23
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: yolov4-tiny (ms, fewer is better)
  Run 1: 50.19  (SE +/- 0.68, N = 12; Min 46.73 / Max 54)  MIN: 39.36 / MAX: 224.81
  Run 2: 49.69  (SE +/- 1.04, N = 12; Min 45.2 / Max 55.89)  MIN: 39.24 / MAX: 224.21
  Run 3: 49.25  (SE +/- 0.86, N = 12; Min 43.98 / Max 53.66)  MIN: 39.85 / MAX: 267.24
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: resnet50 (ms, fewer is better)
  Run 1: 79.53  (SE +/- 5.46, N = 12; Min 54.57 / Max 116.34)  MIN: 38.42 / MAX: 565.15
  Run 2: 75.03  (SE +/- 2.78, N = 12; Min 59.08 / Max 89.58)  MIN: 38.58 / MAX: 559.97
  Run 3: 73.65  (SE +/- 4.79, N = 12; Min 51.3 / Max 104.91)  MIN: 39.25 / MAX: 638.31
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: alexnet (ms, fewer is better)
  Run 1: 37.72  (SE +/- 1.89, N = 12; Min 28.54 / Max 49.75)  MIN: 15.66 / MAX: 106.73
  Run 2: 34.34  (SE +/- 2.63, N = 12; Min 23.34 / Max 54.32)  MIN: 15.18 / MAX: 103.77
  Run 3: 35.69  (SE +/- 1.62, N = 12; Min 28.04 / Max 48.28)  MIN: 16.27 / MAX: 106.47
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: resnet18 (ms, fewer is better)
  Run 1: 59.91  (SE +/- 4.97, N = 12; Min 37.62 / Max 97.89)  MIN: 23.15 / MAX: 227.75
  Run 2: 54.29  (SE +/- 5.28, N = 12; Min 34.52 / Max 98.07)  MIN: 22 / MAX: 230.1
  Run 3: 64.49  (SE +/- 4.57, N = 12; Min 43.76 / Max 92.25)  MIN: 21.25 / MAX: 228.14
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: vgg16 (ms, fewer is better)
  Run 1: 102.38  (SE +/- 3.95, N = 12; Min 82.96 / Max 124.7)  MIN: 62.04 / MAX: 220.23
  Run 2: 92.98  (SE +/- 1.61, N = 12; Min 81.44 / Max 101.44)  MIN: 61.58 / MAX: 223.7
  Run 3: 95.61  (SE +/- 2.05, N = 12; Min 86.14 / Max 110.23)  MIN: 64.1 / MAX: 227.91
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: googlenet (ms, fewer is better)
  Run 1: 47.87  (SE +/- 2.92, N = 12; Min 35.6 / Max 64.59)  MIN: 27.73 / MAX: 542.74
  Run 2: 42.04  (SE +/- 2.39, N = 12; Min 32.5 / Max 55.39)  MIN: 28.93 / MAX: 517.65
  Run 3: 42.95  (SE +/- 3.13, N = 12; Min 30.59 / Max 65.05)  MIN: 27.93 / MAX: 530.03
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: blazeface (ms, fewer is better)
  Run 1: 6.83  (SE +/- 0.16, N = 12; Min 6.17 / Max 8.38)  MIN: 6.11 / MAX: 191.61
  Run 2: 6.67  (SE +/- 0.10, N = 12; Min 6.16 / Max 7.6)  MIN: 6.12 / MAX: 175.66
  Run 3: 7.65  (SE +/- 0.60, N = 12; Min 6.43 / Max 13.49)  MIN: 6.15 / MAX: 211.21
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: efficientnet-b0 (ms, fewer is better)
  Run 1: 20.01  (SE +/- 0.67, N = 12; Min 17.54 / Max 26.43)  MIN: 17.38 / MAX: 456.25
  Run 2: 20.92  (SE +/- 1.79, N = 12; Min 17.46 / Max 40.42)  MIN: 17.12 / MAX: 465.23
  Run 3: 18.97  (SE +/- 0.37, N = 12; Min 17.69 / Max 22.79)  MIN: 17.3 / MAX: 414.55
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: mnasnet (ms, fewer is better)
  Run 1: 13.60  (SE +/- 0.16, N = 12; Min 12.91 / Max 14.92)  MIN: 12.42 / MAX: 189.74
  Run 2: 14.50  (SE +/- 0.64, N = 12; Min 12.62 / Max 19.92)  MIN: 12.55 / MAX: 348.4
  Run 3: 14.17  (SE +/- 0.31, N = 12; Min 13.05 / Max 17.07)  MIN: 12.78 / MAX: 347.15
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: shufflenet-v2 (ms, fewer is better)
  Run 1: 15.28  (SE +/- 0.85, N = 12; Min 13.74 / Max 24.49)  MIN: 13.6 / MAX: 309.56
  Run 2: 14.42  (SE +/- 0.15, N = 12; Min 13.25 / Max 14.86)  MIN: 13.2 / MAX: 104.86
  Run 3: 15.30  (SE +/- 0.64, N = 12; Min 12.97 / Max 21.93)  MIN: 12.9 / MAX: 306.7
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better)
  Run 1: 14.44  (SE +/- 0.50, N = 12; Min 13.09 / Max 19.14)  MIN: 12.49 / MAX: 358.11
  Run 2: 15.70  (SE +/- 1.39, N = 12; Min 13.56 / Max 30.85)  MIN: 12.6 / MAX: 382.98
  Run 3: 15.21  (SE +/- 1.31, N = 12; Min 12.92 / Max 29.47)  MIN: 12.52 / MAX: 378.12
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better)
  Run 1: 15.67  (SE +/- 0.44, N = 12; Min 14.28 / Max 19.41)  MIN: 13.15 / MAX: 357.88
  Run 2: 15.60  (SE +/- 0.34, N = 12; Min 14.51 / Max 18.42)  MIN: 13.33 / MAX: 383.7
  Run 3: 15.78  (SE +/- 0.73, N = 12; Min 14.23 / Max 23.59)  MIN: 13.2 / MAX: 388.68
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: mobilenet (ms, fewer is better)
  Run 1: 37.24  (SE +/- 1.43, N = 12; Min 31.88 / Max 50.8)  MIN: 29.37 / MAX: 419.38
  Run 2: 34.56  (SE +/- 0.55, N = 12; Min 31.61 / Max 37.06)  MIN: 30.05 / MAX: 404.5
  Run 3: 34.91  (SE +/- 0.99, N = 12; Min 31.7 / Max 41.14)  MIN: 29.38 / MAX: 419.39
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 29Oct2020 - Model: 20k Atoms (ns/day, more is better)
  Run 1: 15.50  (SE +/- 0.21, N = 3; Min 15.26 / Max 15.91)
  Run 2: 15.42  (SE +/- 0.02, N = 3; Min 15.39 / Max 15.45)
  Run 3: 15.19  (SE +/- 0.21, N = 4; Min 14.75 / Max 15.66)
  1. (CXX) g++ options: -O3 -pthread -lm

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: Vulkan GPU - Model: regnety_400m (ms, fewer is better)
  Run 1: 101.01  (SE +/- 1.78, N = 12; Min 93.46 / Max 116.32)  MIN: 90.72 / MAX: 1587.31
  Run 2: 100.26  (SE +/- 2.13, N = 12; Min 91.44 / Max 118.17)  MIN: 90.99 / MAX: 1833.25
  Run 3: 101.27  (SE +/- 2.24, N = 9; Min 90.57 / Max 112.67)  MIN: 89.99 / MAX: 2458.9
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU - Model: squeezenet_ssd (ms, fewer is better)
  Run 1: 41.28  (SE +/- 1.34, N = 12; Min 35.72 / Max 52.74)  MIN: 31.56 / MAX: 448.33
  Run 2: 40.01  (SE +/- 1.39, N = 12; Min 33.31 / Max 50.79)  MIN: 31.52 / MAX: 514.49
  Run 3: 39.14  (SE +/- 1.29, N = 9; Min 33.91 / Max 45.24)  MIN: 31.19 / MAX: 435.95
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU - Model: yolov4-tiny (ms, fewer is better)
  Run 1: 50.25  (SE +/- 1.15, N = 12; Min 45.3 / Max 56.69)  MIN: 39.88 / MAX: 213.87
  Run 2: 50.57  (SE +/- 1.28, N = 12; Min 45.52 / Max 58.38)  MIN: 39.57 / MAX: 214.93
  Run 3: 48.23  (SE +/- 1.42, N = 9; Min 44.56 / Max 56.72)  MIN: 39.54 / MAX: 230.79
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU - Model: resnet50 (ms, fewer is better)
  Run 1: 72.06  (SE +/- 3.10, N = 12; Min 54.86 / Max 89.94)  MIN: 39.39 / MAX: 562.21
  Run 2: 73.86  (SE +/- 5.68, N = 12; Min 46.53 / Max 113.78)  MIN: 40.4 / MAX: 546.35
  Run 3: 70.92  (SE +/- 6.02, N = 9; Min 52.57 / Max 104.38)  MIN: 39.2 / MAX: 557.13
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU - Model: alexnet (ms, fewer is better)
  Run 1: 33.56  (SE +/- 1.15, N = 12; Min 28.3 / Max 39.83)  MIN: 15.36 / MAX: 104.32
  Run 2: 32.39  (SE +/- 1.09, N = 12; Min 27.22 / Max 38.31)  MIN: 17.56 / MAX: 91.67
  Run 3: 35.93  (SE +/- 1.85, N = 9; Min 27.58 / Max 44.37)  MIN: 17.55 / MAX: 96.42
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU - Model: resnet18 (ms, fewer is better)
  Run 1: 55.89  (SE +/- 3.22, N = 12; Min 38.58 / Max 71.75)  MIN: 23.8 / MAX: 226.22
  Run 2: 53.19  (SE +/- 5.95, N = 12; Min 31.13 / Max 97.49)  MIN: 21.74 / MAX: 222.51
  Run 3: 56.35  (SE +/- 3.64, N = 9; Min 42.95 / Max 74.3)  MIN: 22.77 / MAX: 219.04
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU - Model: vgg16 (ms, fewer is better)
  Run 1: 103.41  (SE +/- 2.14, N = 12; Min 90.1 / Max 113.51)  MIN: 63 / MAX: 216.59
  Run 2: 100.73  (SE +/- 2.43, N = 12; Min 89.9 / Max 116.96)  MIN: 65.03 / MAX: 221.49
  Run 3: 100.17  (SE +/- 2.35, N = 9; Min 91.79 / Max 113.88)  MIN: 63.35 / MAX: 242.13
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU - Model: googlenet (ms, fewer is better)
  Run 1: 41.08  (SE +/- 2.33, N = 12; Min 32 / Max 61.6)  MIN: 28.69 / MAX: 513.63
  Run 2: 38.90  (SE +/- 1.77, N = 12; Min 31.95 / Max 52.86)  MIN: 28.91 / MAX: 532.2
  Run 3: 38.32  (SE +/- 1.78, N = 9; Min 31.28 / Max 46.53)  MIN: 28.17 / MAX: 505.55
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU - Model: blazeface (ms, fewer is better)
  Run 1: 9.82  (SE +/- 1.94, N = 12; Min 6.27 / Max 27.33)  MIN: 6.19 / MAX: 229.58
  Run 2: 7.40  (SE +/- 0.76, N = 12; Min 6.18 / Max 15.61)  MIN: 6.14 / MAX: 215.25
  Run 3: 7.09  (SE +/- 0.51, N = 9; Min 6.2 / Max 11.15)  MIN: 6.15 / MAX: 204.55
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU - Model: efficientnet-b0 (ms, fewer is better)
  Run 1: 19.14  (SE +/- 0.25, N = 12; Min 18.03 / Max 21.31)  MIN: 17.51 / MAX: 352.05
  Run 2: 19.28  (SE +/- 0.70, N = 12; Min 17.12 / Max 24.99)  MIN: 16.96 / MAX: 430.49
  Run 3: 20.49  (SE +/- 0.66, N = 9; Min 18.21 / Max 23.96)  MIN: 17.45 / MAX: 438.66
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU - Model: mnasnet (ms, fewer is better)
  Run 1: 15.64  (SE +/- 0.70, N = 11; Min 13.12 / Max 21.84)  MIN: 12.3 / MAX: 352.98
  Run 2: 16.26  (SE +/- 1.06, N = 12; Min 12.69 / Max 23.77)  MIN: 12.28 / MAX: 390.46
  Run 3: 13.77  (SE +/- 0.35, N = 9; Min 12.5 / Max 15.93)  MIN: 12.4 / MAX: 378.94
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU - Model: shufflenet-v2 (ms, fewer is better)
  Run 1: 14.92  (SE +/- 0.51, N = 12; Min 13.59 / Max 20.28)  MIN: 13.15 / MAX: 283.76
  Run 2: 14.65  (SE +/- 0.14, N = 11; Min 13.6 / Max 15.65)  MIN: 13.43 / MAX: 114.64
  Run 3: 15.75  (SE +/- 0.93, N = 9; Min 14.04 / Max 22.51)  MIN: 12.97 / MAX: 295.26
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better)
  Run 1: 14.58  (SE +/- 0.55, N = 12; Min 13.14 / Max 18.79)  MIN: 12.63 / MAX: 382.37
  Run 2: 14.22  (SE +/- 0.34, N = 12; Min 12.88 / Max 17.39)  MIN: 12.46 / MAX: 387.96
  Run 3: 15.95  (SE +/- 1.16, N = 9; Min 13.38 / Max 23.14)  MIN: 12.47 / MAX: 359.98
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better)
  Run 1: 15.59  (SE +/- 0.35, N = 12; Min 14.19 / Max 17.76)  MIN: 12.98 / MAX: 359.55
  Run 2: 16.30  (SE +/- 0.63, N = 12; Min 14.46 / Max 22.46)  MIN: 13.31 / MAX: 389.24
  Run 3: 14.97  (SE +/- 0.24, N = 9; Min 13.77 / Max 16.03)  MIN: 13.4 / MAX: 357.21
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU - Model: mobilenet (ms, fewer is better)
  Run 1: 37.30  (SE +/- 1.11, N = 12; Min 31.1 / Max 44.52)  MIN: 29.33 / MAX: 427.64
  Run 2: 33.78  (SE +/- 0.72, N = 12; Min 30.89 / Max 37.31)  MIN: 29.45 / MAX: 408.23
  Run 3: 34.39  (SE +/- 1.11, N = 9; Min 31.32 / Max 41.48)  MIN: 29.33 / MAX: 412.06
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

VkFFT

VkFFT is a Fast Fourier Transform (FFT) library that is GPU accelerated by means of the Vulkan API. The VkFFT benchmark measures FFT performance across many different sizes before returning an overall benchmark score. Learn more via the OpenBenchmarking.org test page.

VkFFT 1.1.1 (Benchmark Score, more is better)
  Run 1: 9412  (SE +/- 13.38, N = 3; Min 9385 / Max 9427)
  Run 2: 9451  (SE +/- 49.17, N = 3; Min 9401 / Max 9549)
  Run 3: 9417  (SE +/- 6.74, N = 3; Min 9407 / Max 9430)
  1. (CXX) g++ options: -O3 -pthread

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2 - Device AI Score (Score, more is better)
  Run 1: 2270
  Run 2: 2317

AI Benchmark Alpha 0.1.2 - Device Training Score (Score, more is better)
  Run 1: 966
  Run 2: 998

AI Benchmark Alpha 0.1.2 - Device Inference Score (Score, more is better)
  Run 1: 1304
  Run 2: 1319

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  Run 1: 13991.1  (SE +/- 48.11, N = 3; Min 13897.4 / Max 14056.9)  MIN: 13812.5
  Run 2: 13295.0  (SE +/- 123.89, N = 10; Min 12562.5 / Max 13712.1)  MIN: 12213.4
  Run 3: 13669.2  (SE +/- 122.96, N = 11; Min 12920.4 / Max 14079.4)  MIN: 12754.3
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Timed HMMer Search

This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of the Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.

Timed HMMer Search 3.3.1 - Pfam Database Search (Seconds, fewer is better)
  Run 1: 163.94  (SE +/- 1.51, N = 10; Min 161.33 / Max 177.44)
  Run 2: 162.56  (SE +/- 0.23, N = 3; Min 162.15 / Max 162.93)
  Run 3: 161.63  (SE +/- 0.19, N = 3; Min 161.25 / Max 161.9)
  1. (CC) gcc options: -O3 -pthread -lhmmer -leasel -lm

Numpy Benchmark

This is a test to measure general NumPy performance. Learn more via the OpenBenchmarking.org test page.

Numpy Benchmark (Score, more is better)
  Run 1: 296.80  (SE +/- 0.54, N = 3; Min 296.01 / Max 297.83)
  Run 2: 298.31  (SE +/- 0.29, N = 3; Min 297.96 / Max 298.88)
  Run 3: 295.79  (SE +/- 0.70, N = 3; Min 295.01 / Max 297.19)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  Run 1: 13826.2  (SE +/- 141.29, N = 3; Min 13578 / Max 14067.3)  MIN: 13450.2
  Run 2: 13783.3  (SE +/- 53.39, N = 3; Min 13708.3 / Max 13886.6)  MIN: 13567.9
  Run 3: 13668.4  (SE +/- 128.38, N = 12; Min 12737.4 / Max 14124.6)  MIN: 12362.1
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with a built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.30.8 - VGR Performance Metric (VGR Performance Metric, more is better)
  Run 1: 286847
  Run 2: 291013
  1. (CXX) g++ options: -std=c++11 -pipe -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -rdynamic -lSM -lICE -lXi -lGLU -lGL -lGLdispatch -lX11 -lXext -lXrender -lpthread -ldl -luuid -lm

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer - Model: Asian Dragon Obj (Frames Per Second, more is better)
  Run 1: 18.24  (SE +/- 0.25, N = 15; Min 16.79 / Max 19.84)  MIN: 16.42 / MAX: 21.1
  Run 2: 19.30  (SE +/- 0.37, N = 15; Min 17.02 / Max 21.61)  MIN: 16.68 / MAX: 22.31
  Run 3: 19.40  (SE +/- 0.44, N = 12; Min 17.06 / Max 21.16)  MIN: 16.59 / MAX: 22.2

Embree 3.9.0 - Binary: Pathtracer ISPC - Model: Asian Dragon Obj (Frames Per Second, more is better)
  Run 1: 17.68  (SE +/- 0.39, N = 12; Min 15.88 / Max 20.16)  MIN: 15.66 / MAX: 20.6
  Run 2: 17.24  (SE +/- 0.39, N = 12; Min 15.13 / Max 19.07)  MIN: 14.95 / MAX: 20.29
  Run 3: 17.40  (SE +/- 0.30, N = 15; Min 15.28 / Max 19.47)  MIN: 15.12 / MAX: 19.79

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.5 - Compression Level: 19 (MB/s, more is better)
  Run 1: 33.9  (SE +/- 0.37, N = 15; Min 30.8 / Max 36.9)
  Run 2: 42.0  (SE +/- 0.34, N = 3; Min 41.3 / Max 42.4)
  Run 3: 36.2  (SE +/- 0.39, N = 3; Min 35.4 / Max 36.7)
  1. (CC) gcc options: -O3 -pthread -lz -llzma

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 9 - Decompression Speed (MB/s, more is better)
  Run 1: 9106.9  (SE +/- 20.12, N = 15; Min 9050.8 / Max 9295.6)
  Run 2: 9217.2  (SE +/- 4.46, N = 3; Min 9210.8 / Max 9225.8)
  Run 3: 9130.5  (SE +/- 18.60, N = 3; Min 9096.3 / Max 9160.3)
  1. (CC) gcc options: -O3

LZ4 Compression 1.9.3 - Compression Level: 9 - Compression Speed (MB/s, more is better)
  Run 1: 46.60  (SE +/- 0.43, N = 15; Min 44.95 / Max 51.77)
  Run 2: 47.55  (SE +/- 0.77, N = 3; Min 46.02 / Max 48.34)
  Run 3: 45.94  (SE +/- 0.01, N = 3; Min 45.92 / Max 45.96)
  1. (CC) gcc options: -O3

asmFish

This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.

asmFish 2018-07-23 - 1024 Hash Memory, 26 Depth (Nodes/second, more is better)
  Run 1: 73804001  (SE +/- 738089.20, N = 3; Min 73057589 / Max 75280148)
  Run 2: 74895187  (SE +/- 126998.98, N = 3; Min 74646847 / Max 75065526)
  Run 3: 72630737  (SE +/- 633798.86, N = 3; Min 71875840 / Max 73890059)

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.5 - Compression Level: 3 (MB/s, more is better)
  Run 1: 3606.7  (SE +/- 88.27, N = 12; Min 3193.9 / Max 4358.4)
  Run 2: 3798.9  (SE +/- 129.73, N = 12; Min 3246.3 / Max 4463.7)
  Run 3: 3781.2  (SE +/- 107.86, N = 15; Min 3105.3 / Max 4481.4)
  1. (CC) gcc options: -O3 -pthread -lz -llzma

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer - Model: Asian Dragon (Frames Per Second, more is better)
  Run 1: 21.96  (SE +/- 0.31, N = 15; Min 19.3 / Max 23.51)  MIN: 19.03 / MAX: 24.66
  Run 2: 22.50  (SE +/- 0.25, N = 15; Min 21.13 / Max 24.53)  MIN: 20.59 / MAX: 25.14
  Run 3: 22.27  (SE +/- 0.26, N = 15; Min 20.94 / Max 24.71)  MIN: 20.37 / MAX: 25.26

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, fewer is better)
  Run 1: 13861.2  (SE +/- 110.93, N = 3; Min 13639.4 / Max 13975.6)  MIN: 13490.9
  Run 2: 13529.0  (SE +/- 161.18, N = 3; Min 13361.4 / Max 13851.3)  MIN: 13190.1
  Run 3: 13753.9  (SE +/- 132.58, N = 3; Min 13489.9 / Max 13907.6)  MIN: 12649.5
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  Run 1: 3802.52  (SE +/- 10.92, N = 3; Min 3782.3 / Max 3819.76)  MIN: 3624.28
  Run 2: 3821.24  (SE +/- 46.19, N = 3; Min 3773.63 / Max 3913.6)  MIN: 3746.24
  Run 3: 3711.96  (SE +/- 38.84, N = 8; Min 3516.6 / Max 3795.73)  MIN: 3503.46
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

VKMark

VKMark is a collection of Vulkan tests/benchmarks. Learn more via the OpenBenchmarking.org test page.

VKMark 2020-05-21 - Resolution: 1280 x 1024 (VKMark Score, more is better)
  Run 1: 5850  (SE +/- 2.19, N = 3; Min 5846 / Max 5853)
  Run 2: 5644  (SE +/- 5.90, N = 3; Min 5632 / Max 5651)
  Run 3: 5640  (SE +/- 2.33, N = 3; Min 5638 / Max 5645)
  1. (CXX) g++ options: -ldl -pipe -std=c++14 -fPIC -MD -MQ -MF

VKMark 2020-05-21 - Resolution: 1920 x 1080 (VKMark Score, more is better)
  Run 1: 4384  (SE +/- 2.96, N = 3; Min 4380 / Max 4390)
  Run 2: 4267  (SE +/- 2.65, N = 3; Min 4262 / Max 4271)
  Run 3: 4268  (SE +/- 2.85, N = 3; Min 4262 / Max 4271)
  1. (CXX) g++ options: -ldl -pipe -std=c++14 -fPIC -MD -MQ -MF

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 - Acceleration: CPU - Scene: Bedroom (M samples/s, more is better)
  Run 1: 5.036  (SE +/- 0.025, N = 3; Min 4.99 / Max 5.07)
  Run 2: 4.994  (SE +/- 0.054, N = 12; Min 4.4 / Max 5.07)
  Run 3: 5.057  (SE +/- 0.019, N = 3; Min 5.02 / Max 5.09)

Node.js V8 Web Tooling Benchmark

Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, such as Babel, TypeScript, and Babylon. This test profile can test the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.

Node.js V8 Web Tooling Benchmark (runs/s, more is better)
  Run 1: 8.47  (SE +/- 0.03, N = 3; Min 8.43 / Max 8.52)
  Run 2: 8.25  (SE +/- 0.12, N = 4; Min 7.9 / Max 8.4)
  Run 3: 8.21  (SE +/- 0.04, N = 3; Min 8.17 / Max 8.3)
  1. Nodejs v12.18.2

GROMACS

This test runs the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package on the CPU with the water_GMX50 data. Learn more via the OpenBenchmarking.org test page.

GROMACS 2020.3 - Water Benchmark (Ns Per Day, more is better)
  Run 1: 1.834  (SE +/- 0.004, N = 3; Min 1.83 / Max 1.84)
  Run 2: 1.858  (SE +/- 0.004, N = 3; Min 1.85 / Max 1.87)
  Run 3: 1.837  (SE +/- 0.003, N = 2; Min 1.83 / Max 1.84)
  1. (CXX) g++ options: -O3 -pthread -lrt -lpthread -lm

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code with Cargo-like features. Learn more via the OpenBenchmarking.org test page.

Build2 0.13 - Time To Compile (Seconds, fewer is better)
  Run 1: 96.20  (SE +/- 0.24, N = 3; Min 95.77 / Max 96.59)
  Run 2: 96.34  (SE +/- 0.21, N = 3; Min 95.97 / Max 96.68)

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second, more is better)
  Run 1: 21.32  (SE +/- 0.09, N = 3; Min 21.2 / Max 21.48)  MIN: 20.36 / MAX: 22.38
  Run 2: 21.67  (SE +/- 0.26, N = 15; Min 19.96 / Max 23)  MIN: 19.41 / MAX: 23.55
  Run 3: 21.56  (SE +/- 0.19, N = 15; Min 20.37 / Max 23.22)  MIN: 19.33 / MAX: 23.88

Timed Eigen Compilation

This test times how long it takes to build all Eigen examples. The Eigen examples are compiled serially. Eigen is a C++ template library for linear algebra. Learn more via the OpenBenchmarking.org test page.

Timed Eigen Compilation 3.3.9 - Time To Compile (Seconds, fewer is better)
  Run 1: 93.25  (SE +/- 0.06, N = 3; Min 93.13 / Max 93.33)
  Run 2: 92.98  (SE +/- 0.04, N = 3; Min 92.94 / Max 93.06)
  Run 3: 93.06  (SE +/- 0.04, N = 3; Min 93 / Max 93.14)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  Run 1: 3808.57  (SE +/- 3.43, N = 3; Min 3803.03 / Max 3814.85)  MIN: 3705.89
  Run 2: 3780.47  (SE +/- 13.99, N = 3; Min 3758.2 / Max 3806.26)  MIN: 3731.84
  Run 3: 3777.65  (SE +/- 10.39, N = 3; Min 3758.53 / Max 3794.27)  MIN: 3644.35
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better)
  Run 1: 3749.18  (SE +/- 16.86, N = 3; Min 3724.08 / Max 3781.22)  MIN: 3631.66
  Run 2: 3782.36  (SE +/- 8.72, N = 3; Min 3767.21 / Max 3797.43)  MIN: 3756.39
  Run 3: 3783.40  (SE +/- 8.76, N = 3; Min 3769.15 / Max 3799.35)  MIN: 3758.06
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0 - Preset: Exhaustive (Seconds, fewer is better)
  Run 1: 73.36  (SE +/- 0.09, N = 3; Min 73.18 / Max 73.49)
  Run 2: 73.14  (SE +/- 0.12, N = 3; Min 72.9 / Max 73.28)
  Run 3: 73.10  (SE +/- 0.10, N = 3; Min 72.9 / Max 73.22)
  1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  Run 1: 3.69657  (SE +/- 0.07703, N = 15; Min 3.41 / Max 4.17)  MIN: 3.25
  Run 2: 3.76164  (SE +/- 0.07933, N = 15; Min 3.41 / Max 4.19)  MIN: 3.26
  Run 3: 3.41699  (SE +/- 0.01146, N = 3; Min 3.4 / Max 3.44)  MIN: 3.22
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.

SQLite Speedtest 3.30 - Timed Time - Size 1,000 (Seconds, fewer is better)
  Run 1: 67.21  (SE +/- 0.47, N = 3; Min 66.65 / Max 68.13)
  Run 2: 67.46  (SE +/- 0.25, N = 3; Min 67.16 / Max 67.95)
  Run 3: 67.25  (SE +/- 0.13, N = 3; Min 67.01 / Max 67.45)
  1. (CC) gcc options: -O2 -ldl -lz -lpthread

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 3 - Decompression Speed (MB/s, more is better)
  Run 1: 9128.2  (SE +/- 64.14, N = 3; Min 9032.9 / Max 9250.2)
  Run 2: 9191.9  (SE +/- 19.51, N = 3; Min 9154.2 / Max 9219.5)
  Run 3: 9188.3  (SE +/- 57.74, N = 3; Min 9075.2 / Max 9265)
  1. (CC) gcc options: -O3

LZ4 Compression 1.9.3 - Compression Level: 3 - Compression Speed (MB/s, more is better)
  Run 1: 47.46  (SE +/- 0.54, N = 3; Min 46.54 / Max 48.41)
  Run 2: 47.12  (SE +/- 0.25, N = 3; Min 46.76 / Max 47.61)
  Run 3: 46.87  (SE +/- 0.29, N = 3; Min 46.35 / Max 47.37)
  1. (CC) gcc options: -O3

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 Alpha - Speed: 5 (Frames Per Second, more is better)
  Run 1: 0.954  (SE +/- 0.002, N = 3; Min 0.95 / Max 0.96)
  Run 2: 0.951  (SE +/- 0.002, N = 3; Min 0.95 / Max 0.95)
  Run 3: 0.953  (SE +/- 0.004, N = 3; Min 0.95 / Max 0.96)

rav1e 0.4 Alpha - Speed: 1 (Frames Per Second, more is better)
  Run 1: 0.315  (SE +/- 0.001, N = 3; Min 0.31 / Max 0.32)
  Run 2: 0.320  (SE +/- 0.000, N = 3; Min 0.32 / Max 0.32)
  Run 3: 0.318  (SE +/- 0.001, N = 3; Min 0.32 / Max 0.32)

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 - Acceleration: CPU - Scene: Supercar (M samples/s, more is better)
  Run 1: 11.04  (SE +/- 0.01, N = 3; Min 11.02 / Max 11.05)
  Run 2: 10.96  (SE +/- 0.05, N = 3; Min 10.88 / Max 11.05)
  Run 3: 11.04  (SE +/- 0.03, N = 3; Min 10.98 / Max 11.09)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, fewer is better)
  Run 1: 1.65767  (SE +/- 0.05725, N = 15; Min 1.36 / Max 1.96)  MIN: 1.11
  Run 2: 2.09567  (SE +/- 0.05017, N = 15; Min 1.58 / Max 2.26)  MIN: 1.4
  Run 3: 2.00984  (SE +/- 0.03417, N = 15; Min 1.79 / Max 2.23)  MIN: 1.39
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer - Model: Crown (Frames Per Second, more is better)
  Run 1: 24.02  (SE +/- 0.33, N = 15; Min 20.13 / Max 25.1)  MIN: 19.33 / MAX: 25.81
  Run 2: 24.62  (SE +/- 0.22, N = 3; Min 24.3 / Max 25.04)  MIN: 23.95 / MAX: 25.61
  Run 3: 24.36  (SE +/- 0.31, N = 3; Min 23.82 / Max 24.9)  MIN: 23 / MAX: 25.59

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 4K - Video Preset: Slow1233691215SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 310.2910.3810.321. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 4K - Video Preset: Slow1233691215Min: 10.29 / Avg: 10.29 / Max: 10.3Min: 10.36 / Avg: 10.38 / Max: 10.4Min: 10.3 / Avg: 10.32 / Max: 10.341. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
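For context, parsing with simdjson's DOM API (the 0.x-era interface used here) looks roughly like the sketch below; twitter.json and the statuses/id fields are placeholder inputs borrowed from simdjson's own examples, not files from this result set.

#include "simdjson.h"
#include <cstdint>
#include <iostream>

int main() {
    simdjson::dom::parser parser;
    // Load and parse a JSON document; throws on error (exceptions are enabled by default).
    simdjson::dom::element doc = parser.load("twitter.json");
    simdjson::dom::array statuses = doc["statuses"];
    for (simdjson::dom::object tweet : statuses) {
        uint64_t id = tweet["id"];      // typed access into the parsed DOM
        std::cout << id << "\n";
    }
}

The throughput tests below feed differently shaped documents (large arrays, tweet collections, etc.) through the same parser.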

OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.7.1Throughput Test: LargeRandom1230.090.180.270.360.45SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 30.390.390.401. (CXX) g++ options: -O3 -pthread
OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.7.1Throughput Test: LargeRandom12312345Min: 0.39 / Avg: 0.39 / Max: 0.39Min: 0.39 / Avg: 0.39 / Max: 0.4Min: 0.39 / Avg: 0.4 / Max: 0.41. (CXX) g++ options: -O3 -pthread

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 4K - Video Preset: Medium1233691215SE +/- 0.00, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 310.4710.5610.531. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 4K - Video Preset: Medium1233691215Min: 10.46 / Avg: 10.47 / Max: 10.47Min: 10.53 / Avg: 10.56 / Max: 10.6Min: 10.51 / Avg: 10.53 / Max: 10.541. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.7.1Throughput Test: PartialTweets1230.11480.22960.34440.45920.574SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.00, N = 30.510.500.511. (CXX) g++ options: -O3 -pthread
OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.7.1Throughput Test: PartialTweets123246810Min: 0.5 / Avg: 0.51 / Max: 0.51Min: 0.49 / Avg: 0.5 / Max: 0.51Min: 0.51 / Avg: 0.51 / Max: 0.511. (CXX) g++ options: -O3 -pthread

OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.7.1Throughput Test: DistinctUserID1230.1170.2340.3510.4680.585SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 30.520.520.521. (CXX) g++ options: -O3 -pthread
OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.7.1Throughput Test: DistinctUserID123246810Min: 0.52 / Avg: 0.52 / Max: 0.52Min: 0.52 / Avg: 0.52 / Max: 0.52Min: 0.52 / Avg: 0.52 / Max: 0.521. (CXX) g++ options: -O3 -pthread

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.
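The Redis numbers here come from the redis-benchmark client bundled with Redis. Purely as a point of reference, a minimal C/C++ client issuing the same kind of SET/GET commands through hiredis (assuming hiredis is installed and a server is listening on 127.0.0.1:6379) looks like this:

#include <hiredis/hiredis.h>
#include <cstdio>

int main() {
    redisContext* c = redisConnect("127.0.0.1", 6379);     // assumes a local Redis server
    if (c == nullptr || c->err) { std::fprintf(stderr, "connection failed\n"); return 1; }

    redisReply* r = (redisReply*)redisCommand(c, "SET %s %s", "key:1", "hello");
    freeReplyObject(r);

    r = (redisReply*)redisCommand(c, "GET %s", "key:1");
    if (r != nullptr && r->type == REDIS_REPLY_STRING)
        std::printf("GET key:1 -> %s\n", r->str);
    freeReplyObject(r);

    redisFree(c);
}

Link with -lhiredis; redis-benchmark simply pipelines large volumes of such commands and reports the sustained requests per second shown below.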

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: GET123500K1000K1500K2000K2500KSE +/- 31653.62, N = 15SE +/- 31097.88, N = 15SE +/- 27660.05, N = 152201882.032099666.522091130.671. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: GET123400K800K1200K1600K2000KMin: 1949879.12 / Avg: 2201882.03 / Max: 2421307.5Min: 1957009.88 / Avg: 2099666.52 / Max: 2331748.25Min: 1938170.62 / Avg: 2091130.67 / Max: 2315555.51. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

CLOMP

CLOMP is the C version of the Livermore OpenMP benchmark developed to measure OpenMP overheads and other performance impacts due to threading in order to influence future system designs. This particular test profile configuration is currently set to look at the OpenMP static schedule speed-up across all available CPU cores using the recommended test configuration. Learn more via the OpenBenchmarking.org test page.
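CLOMP's own kernels are more elaborate, but the quantity it reports is essentially the ratio of serial runtime to OpenMP static-schedule runtime. A minimal hand-rolled sketch of that measurement (not CLOMP itself) is:

#include <omp.h>
#include <vector>
#include <cstdio>

int main() {
    const int n = 1 << 24;
    std::vector<double> a(n, 1.0), b(n, 2.0);

    double t0 = omp_get_wtime();
    for (int i = 0; i < n; ++i) a[i] += 0.5 * b[i];        // serial reference pass
    double t_serial = omp_get_wtime() - t0;

    t0 = omp_get_wtime();
#pragma omp parallel for schedule(static)
    for (int i = 0; i < n; ++i) a[i] += 0.5 * b[i];        // statically scheduled OpenMP pass
    double t_omp = omp_get_wtime() - t0;

    std::printf("static OMP speedup ~= %.1f\n", t_serial / t_omp);
}

Compile with -fopenmp; on a 32-core/64-thread part the achievable speedup is bounded by memory bandwidth as much as by thread count.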

OpenBenchmarking.orgSpeedup, More Is BetterCLOMP 1.2Static OMP Speedup1231224364860SE +/- 0.57, N = 3SE +/- 0.20, N = 2SE +/- 0.68, N = 351.751.651.81. (CC) gcc options: -fopenmp -O3 -lm
OpenBenchmarking.orgSpeedup, More Is BetterCLOMP 1.2Static OMP Speedup1231020304050Min: 50.6 / Avg: 51.73 / Max: 52.4Min: 51.4 / Avg: 51.6 / Max: 51.8Min: 50.5 / Avg: 51.8 / Max: 52.81. (CC) gcc options: -fopenmp -O3 -lm

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.7.1Throughput Test: Kostya1230.0990.1980.2970.3960.495SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 30.440.440.441. (CXX) g++ options: -O3 -pthread
OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.7.1Throughput Test: Kostya12312345Min: 0.43 / Avg: 0.44 / Max: 0.44Min: 0.44 / Avg: 0.44 / Max: 0.44Min: 0.43 / Avg: 0.44 / Max: 0.441. (CXX) g++ options: -O3 -pthread

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBasis Universal 1.12Settings: ETC1S1231122334455SE +/- 0.22, N = 3SE +/- 0.10, N = 3SE +/- 0.24, N = 347.6047.6048.161. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterBasis Universal 1.12Settings: ETC1S1231020304050Min: 47.29 / Avg: 47.6 / Max: 48.04Min: 47.47 / Avg: 47.6 / Max: 47.79Min: 47.69 / Avg: 48.16 / Max: 48.441. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is Betterrav1e 0.4 AlphaSpeed: 61230.28670.57340.86011.14681.4335SE +/- 0.001, N = 3SE +/- 0.002, N = 3SE +/- 0.002, N = 31.2701.2731.274
OpenBenchmarking.orgFrames Per Second, More Is Betterrav1e 0.4 AlphaSpeed: 6123246810Min: 1.27 / Avg: 1.27 / Max: 1.27Min: 1.27 / Avg: 1.27 / Max: 1.28Min: 1.27 / Avg: 1.27 / Max: 1.28

Stockfish

This is a test of Stockfish, an advanced C++11 chess benchmark that can scale up to 128 CPU cores. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNodes Per Second, More Is BetterStockfish 12Total Time12312M24M36M48M60MSE +/- 796222.48, N = 3SE +/- 577141.31, N = 3SE +/- 749500.39, N = 34982125455404772549840121. (CXX) g++ options: -m64 -lpthread -fno-exceptions -std=c++17 -pedantic -O3 -msse -msse3 -mpopcnt -msse4.1 -mssse3 -msse2 -flto -flto=jobserver
OpenBenchmarking.orgNodes Per Second, More Is BetterStockfish 12Total Time12310M20M30M40M50MMin: 48265452 / Avg: 49821253.67 / Max: 50893300Min: 54278952 / Avg: 55404771.67 / Max: 56188303Min: 53503876 / Avg: 54984012 / Max: 559293861. (CXX) g++ options: -m64 -lpthread -fno-exceptions -std=c++17 -pedantic -O3 -msse -msse3 -mpopcnt -msse4.1 -mssse3 -msse2 -flto -flto=jobserver

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: LPOP123500K1000K1500K2000K2500KSE +/- 44065.81, N = 15SE +/- 132410.87, N = 12SE +/- 139985.77, N = 122431710.352058658.411998586.161. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: LPOP123400K800K1200K1600K2000KMin: 2202713.75 / Avg: 2431710.35 / Max: 2717913Min: 1291989.62 / Avg: 2058658.41 / Max: 2639366.75Min: 1312503.88 / Avg: 1998586.16 / Max: 2577814.251. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

eSpeak-NG Speech Engine

This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BettereSpeak-NG Speech Engine 20200907Text-To-Speech Synthesis123714212835SE +/- 0.06, N = 4SE +/- 0.17, N = 4SE +/- 0.07, N = 430.8531.1531.031. (CC) gcc options: -O2 -std=c99
OpenBenchmarking.orgSeconds, Fewer Is BettereSpeak-NG Speech Engine 20200907Text-To-Speech Synthesis123714212835Min: 30.68 / Avg: 30.85 / Max: 30.97Min: 30.8 / Avg: 31.15 / Max: 31.48Min: 30.85 / Avg: 31.03 / Max: 31.191. (CC) gcc options: -O2 -std=c99

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: SADD123400K800K1200K1600K2000KSE +/- 20792.00, N = 15SE +/- 25792.40, N = 15SE +/- 16528.72, N = 31892562.211826296.851974193.461. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: SADD123300K600K900K1200K1500KMin: 1751425.62 / Avg: 1892562.21 / Max: 2012394.38Min: 1647815.5 / Avg: 1826296.85 / Max: 1949629.62Min: 1941809.75 / Avg: 1974193.46 / Max: 1996135.751. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options to measure H.265 video encode performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is Betterx265 3.4Video Input: Bosphorus 4K12348121620SE +/- 0.03, N = 3SE +/- 0.05, N = 3SE +/- 0.06, N = 316.6215.2616.551. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.orgFrames Per Second, More Is Betterx265 3.4Video Input: Bosphorus 4K12348121620Min: 16.56 / Avg: 16.62 / Max: 16.68Min: 15.16 / Avg: 15.26 / Max: 15.33Min: 16.44 / Avg: 16.55 / Max: 16.651. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed FFmpeg Compilation 4.2.2Time To Compile1231020304050SE +/- 0.18, N = 3SE +/- 0.15, N = 3SE +/- 0.18, N = 346.0830.9232.70
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed FFmpeg Compilation 4.2.2Time To Compile123918273645Min: 45.84 / Avg: 46.08 / Max: 46.43Min: 30.65 / Avg: 30.92 / Max: 31.16Min: 32.5 / Avg: 32.7 / Max: 33.07

Libplacebo

Libplacebo is a multimedia rendering library based on the core rendering code of the MPV player. The libplacebo benchmark relies on the Vulkan API and tests various primitives. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterLibplacebo 2.72.2Test: av1_grain_lap123100200300400500SE +/- 3.42, N = 3SE +/- 3.25, N = 3SE +/- 1.25, N = 3448.24441.37446.381. (CXX) g++ options: -lm -lglslang -lHLSL -lOGLCompiler -lOSDependent -lSPIRV -lSPVRemapper -lSPIRV-Tools -lSPIRV-Tools-opt -lpthread -pthread -pipe -std=c++11 -fvisibility=hidden -fPIC -MD -MQ -MF
OpenBenchmarking.orgFPS, More Is BetterLibplacebo 2.72.2Test: av1_grain_lap12380160240320400Min: 442.38 / Avg: 448.24 / Max: 454.24Min: 435.15 / Avg: 441.37 / Max: 446.14Min: 444.79 / Avg: 446.38 / Max: 448.851. (CXX) g++ options: -lm -lglslang -lHLSL -lOGLCompiler -lOSDependent -lSPIRV -lSPVRemapper -lSPIRV-Tools -lSPIRV-Tools-opt -lpthread -pthread -pipe -std=c++11 -fvisibility=hidden -fPIC -MD -MQ -MF

OpenBenchmarking.orgFPS, More Is BetterLibplacebo 2.72.2Test: hdr_peakdetect123400800120016002000SE +/- 0.03, N = 3SE +/- 0.18, N = 3SE +/- 0.10, N = 31762.981762.781762.551. (CXX) g++ options: -lm -lglslang -lHLSL -lOGLCompiler -lOSDependent -lSPIRV -lSPVRemapper -lSPIRV-Tools -lSPIRV-Tools-opt -lpthread -pthread -pipe -std=c++11 -fvisibility=hidden -fPIC -MD -MQ -MF
OpenBenchmarking.orgFPS, More Is BetterLibplacebo 2.72.2Test: hdr_peakdetect12330060090012001500Min: 1762.92 / Avg: 1762.98 / Max: 1763.02Min: 1762.41 / Avg: 1762.78 / Max: 1762.98Min: 1762.39 / Avg: 1762.55 / Max: 1762.731. (CXX) g++ options: -lm -lglslang -lHLSL -lOGLCompiler -lOSDependent -lSPIRV -lSPVRemapper -lSPIRV-Tools -lSPIRV-Tools-opt -lpthread -pthread -pipe -std=c++11 -fvisibility=hidden -fPIC -MD -MQ -MF

OpenBenchmarking.orgFPS, More Is BetterLibplacebo 2.72.2Test: polar_nocompute12360120180240300SE +/- 0.83, N = 3SE +/- 0.53, N = 3SE +/- 0.38, N = 3268.72268.36267.901. (CXX) g++ options: -lm -lglslang -lHLSL -lOGLCompiler -lOSDependent -lSPIRV -lSPVRemapper -lSPIRV-Tools -lSPIRV-Tools-opt -lpthread -pthread -pipe -std=c++11 -fvisibility=hidden -fPIC -MD -MQ -MF
OpenBenchmarking.orgFPS, More Is BetterLibplacebo 2.72.2Test: polar_nocompute12350100150200250Min: 267.46 / Avg: 268.72 / Max: 270.28Min: 267.57 / Avg: 268.36 / Max: 269.36Min: 267.48 / Avg: 267.9 / Max: 268.651. (CXX) g++ options: -lm -lglslang -lHLSL -lOGLCompiler -lOSDependent -lSPIRV -lSPVRemapper -lSPIRV-Tools -lSPIRV-Tools-opt -lpthread -pthread -pipe -std=c++11 -fvisibility=hidden -fPIC -MD -MQ -MF

OpenBenchmarking.orgFPS, More Is BetterLibplacebo 2.72.2Test: deband_heavy1234080120160200SE +/- 0.36, N = 3SE +/- 0.37, N = 3SE +/- 0.29, N = 3192.54192.43192.221. (CXX) g++ options: -lm -lglslang -lHLSL -lOGLCompiler -lOSDependent -lSPIRV -lSPVRemapper -lSPIRV-Tools -lSPIRV-Tools-opt -lpthread -pthread -pipe -std=c++11 -fvisibility=hidden -fPIC -MD -MQ -MF
OpenBenchmarking.orgFPS, More Is BetterLibplacebo 2.72.2Test: deband_heavy1234080120160200Min: 191.85 / Avg: 192.54 / Max: 193.09Min: 191.74 / Avg: 192.43 / Max: 193.01Min: 191.66 / Avg: 192.22 / Max: 192.631. (CXX) g++ options: -lm -lglslang -lHLSL -lOGLCompiler -lOSDependent -lSPIRV -lSPVRemapper -lSPIRV-Tools -lSPIRV-Tools-opt -lpthread -pthread -pipe -std=c++11 -fvisibility=hidden -fPIC -MD -MQ -MF

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgScore, More Is BetterPHPBench 0.8.1PHP Benchmark Suite12120K240K360K480K600KSE +/- 2547.76, N = 3SE +/- 145.08, N = 3577229579360
OpenBenchmarking.orgScore, More Is BetterPHPBench 0.8.1PHP Benchmark Suite12100K200K300K400K500KMin: 572245 / Avg: 577228.67 / Max: 580640Min: 579082 / Avg: 579360 / Max: 579571

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.
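The test drives the lz4 command-line tool at levels 1, 3 and 9 against an Ubuntu ISO; for orientation, the underlying fast-path C API (level 9 instead goes through the separate LZ4 HC functions) can be exercised with a sketch like this, using a synthetic buffer rather than the ISO:

#include <lz4.h>
#include <string>
#include <vector>
#include <cstdio>

int main() {
    std::string src(1 << 20, 'x');                                   // synthetic 1 MiB input
    std::vector<char> dst(LZ4_compressBound((int)src.size()));

    int csize = LZ4_compress_default(src.data(), dst.data(),
                                     (int)src.size(), (int)dst.size());
    std::vector<char> out(src.size());
    int dsize = LZ4_decompress_safe(dst.data(), out.data(),
                                    csize, (int)out.size());

    std::printf("compressed %zu -> %d bytes, decompressed back to %d bytes\n",
                src.size(), csize, dsize);
}

Link with -llz4; as the results below show, decompression throughput dwarfs compression throughput at the higher levels.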

OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 1 - Decompression Speed1232K4K6K8K10KSE +/- 34.06, N = 3SE +/- 9.81, N = 3SE +/- 47.16, N = 39506.29647.99540.01. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 1 - Decompression Speed1232K4K6K8K10KMin: 9461.2 / Avg: 9506.2 / Max: 9573Min: 9633.9 / Avg: 9647.9 / Max: 9666.8Min: 9486.3 / Avg: 9540 / Max: 96341. (CC) gcc options: -O3

OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 1 - Compression Speed1232K4K6K8K10KSE +/- 65.90, N = 3SE +/- 79.09, N = 3SE +/- 51.76, N = 38518.048619.848508.641. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 1 - Compression Speed12315003000450060007500Min: 8414.82 / Avg: 8518.04 / Max: 8640.63Min: 8476.64 / Avg: 8619.84 / Max: 8749.61Min: 8425.56 / Avg: 8508.64 / Max: 8603.661. (CC) gcc options: -O3

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is Betterrav1e 0.4 AlphaSpeed: 101230.64331.28661.92992.57323.2165SE +/- 0.006, N = 3SE +/- 0.018, N = 3SE +/- 0.012, N = 32.8592.8302.853
OpenBenchmarking.orgFrames Per Second, More Is Betterrav1e 0.4 AlphaSpeed: 10123246810Min: 2.85 / Avg: 2.86 / Max: 2.87Min: 2.8 / Avg: 2.83 / Max: 2.85Min: 2.84 / Avg: 2.85 / Max: 2.88

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU1233691215SE +/- 0.19, N = 12SE +/- 0.15, N = 15SE +/- 0.01, N = 311.4711.2110.93MIN: 10.94MIN: 10.67MIN: 10.811. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU1233691215Min: 11.04 / Avg: 11.47 / Max: 12.55Min: 10.76 / Avg: 11.21 / Max: 12.3Min: 10.91 / Avg: 10.93 / Max: 10.951. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Betsy GPU Compressor

Betsy is an open-source GPU compressor supporting various GPU compression techniques. Betsy is written in GLSL for Vulkan/OpenGL (compute shader) support for GPU-based texture compression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBetsy GPU Compressor 1.1 BetaCodec: ETC2 RGB - Quality: Highest1233691215SE +/- 0.24, N = 15SE +/- 0.02, N = 3SE +/- 0.04, N = 312.7012.5412.551. (CXX) g++ options: -O3 -O2 -lpthread -ldl
OpenBenchmarking.orgSeconds, Fewer Is BetterBetsy GPU Compressor 1.1 BetaCodec: ETC2 RGB - Quality: Highest12348121620Min: 12.44 / Avg: 12.7 / Max: 15.99Min: 12.52 / Avg: 12.54 / Max: 12.57Min: 12.51 / Avg: 12.55 / Max: 12.631. (CXX) g++ options: -O3 -O2 -lpthread -ldl

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU1230.33520.67041.00561.34081.676SE +/- 0.01123, N = 15SE +/- 0.00191, N = 3SE +/- 0.00013, N = 31.469281.485491.48957MIN: 1.34MIN: 1.45MIN: 1.441. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU123246810Min: 1.38 / Avg: 1.47 / Max: 1.5Min: 1.48 / Avg: 1.49 / Max: 1.49Min: 1.49 / Avg: 1.49 / Max: 1.491. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 3.9.0Binary: Pathtracer ISPC - Model: Crown123510152025SE +/- 0.21, N = 3SE +/- 0.38, N = 3SE +/- 0.22, N = 322.7322.2422.65MIN: 21.97 / MAX: 23.66MIN: 20.82 / MAX: 23.48MIN: 21.43 / MAX: 23.62
OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 3.9.0Binary: Pathtracer ISPC - Model: Crown123510152025Min: 22.32 / Avg: 22.73 / Max: 22.95Min: 21.48 / Avg: 22.24 / Max: 22.65Min: 22.39 / Avg: 22.65 / Max: 23.09

Crafty

This is a performance test of Crafty, an advanced open-source chess engine. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNodes Per Second, More Is BetterCrafty 25.2Elapsed Time1231.6M3.2M4.8M6.4M8MSE +/- 7194.80, N = 3SE +/- 6808.24, N = 3SE +/- 5337.04, N = 37329531737137473667011. (CC) gcc options: -pthread -lstdc++ -fprofile-use -lm
OpenBenchmarking.orgNodes Per Second, More Is BetterCrafty 25.2Elapsed Time1231.3M2.6M3.9M5.2M6.5MMin: 7320001 / Avg: 7329531 / Max: 7343633Min: 7358167 / Avg: 7371374.33 / Max: 7380847Min: 7356257 / Avg: 7366701 / Max: 73738321. (CC) gcc options: -pthread -lstdc++ -fprofile-use -lm

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgIterations/Sec, More Is BetterCoremark 1.0CoreMark Size 666 - Iterations Per Second123200K400K600K800K1000KSE +/- 2731.02, N = 3SE +/- 686.93, N = 3SE +/- 1713.53, N = 31146059.291148000.041146667.411. (CC) gcc options: -O2 -lrt" -lrt
OpenBenchmarking.orgIterations/Sec, More Is BetterCoremark 1.0CoreMark Size 666 - Iterations Per Second123200K400K600K800K1000KMin: 1141684.88 / Avg: 1146059.29 / Max: 1151079.14Min: 1146747.89 / Avg: 1148000.04 / Max: 1149115.72Min: 1143878.46 / Avg: 1146667.41 / Max: 1149786.661. (CC) gcc options: -O2 -lrt" -lrt

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 4K - Video Preset: Very Fast123612182430SE +/- 0.03, N = 3SE +/- 0.07, N = 3SE +/- 0.04, N = 323.5423.5723.611. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 4K - Video Preset: Very Fast123612182430Min: 23.48 / Avg: 23.54 / Max: 23.59Min: 23.44 / Avg: 23.57 / Max: 23.68Min: 23.52 / Avg: 23.61 / Max: 23.651. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBasis Universal 1.12Settings: UASTC Level 3123612182430SE +/- 0.06, N = 3SE +/- 0.01, N = 3SE +/- 0.03, N = 325.2325.2225.141. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterBasis Universal 1.12Settings: UASTC Level 3123612182430Min: 25.17 / Avg: 25.23 / Max: 25.34Min: 25.2 / Avg: 25.22 / Max: 25.23Min: 25.1 / Avg: 25.14 / Max: 25.211. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

Betsy GPU Compressor

Betsy is an open-source GPU compressor supporting various GPU compression techniques. Betsy is written in GLSL for Vulkan/OpenGL (compute shader) support for GPU-based texture compression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBetsy GPU Compressor 1.1 BetaCodec: ETC1 - Quality: Highest1233691215SE +/- 0.23, N = 15SE +/- 0.01, N = 3SE +/- 0.01, N = 310.7510.5210.531. (CXX) g++ options: -O3 -O2 -lpthread -ldl
OpenBenchmarking.orgSeconds, Fewer Is BetterBetsy GPU Compressor 1.1 BetaCodec: ETC1 - Quality: Highest1233691215Min: 10.49 / Avg: 10.75 / Max: 14.01Min: 10.5 / Avg: 10.52 / Max: 10.55Min: 10.51 / Avg: 10.53 / Max: 10.561. (CXX) g++ options: -O3 -O2 -lpthread -ldl

Monkey Audio Encoding

This test times how long it takes to encode a sample WAV file to Monkey's Audio APE format. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterMonkey Audio Encoding 3.99.6WAV To APE12348121620SE +/- 0.06, N = 5SE +/- 0.06, N = 5SE +/- 0.06, N = 514.0314.0114.041. (CXX) g++ options: -O3 -pedantic -rdynamic -lrt
OpenBenchmarking.orgSeconds, Fewer Is BetterMonkey Audio Encoding 3.99.6WAV To APE12348121620Min: 13.94 / Avg: 14.03 / Max: 14.26Min: 13.91 / Avg: 14.01 / Max: 14.24Min: 13.97 / Avg: 14.04 / Max: 14.281. (CXX) g++ options: -O3 -pedantic -rdynamic -lrt

Sunflow Rendering System

This test runs benchmarks of the Sunflow Rendering System. The Sunflow Rendering System is an open-source render engine for photo-realistic image synthesis with a ray-tracing core. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterSunflow Rendering System 0.07.2Global Illumination + Image Synthesis120.17750.3550.53250.710.8875SE +/- 0.004, N = 3SE +/- 0.011, N = 150.7890.731MIN: 0.57 / MAX: 1.63MIN: 0.49 / MAX: 1.57
OpenBenchmarking.orgSeconds, Fewer Is BetterSunflow Rendering System 0.07.2Global Illumination + Image Synthesis12246810Min: 0.78 / Avg: 0.79 / Max: 0.8Min: 0.66 / Avg: 0.73 / Max: 0.78

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 1080p - Video Preset: Slow123612182430SE +/- 0.04, N = 3SE +/- 0.07, N = 3SE +/- 0.18, N = 326.9927.1326.921. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 1080p - Video Preset: Slow123612182430Min: 26.9 / Avg: 26.99 / Max: 27.03Min: 27.06 / Avg: 27.13 / Max: 27.26Min: 26.57 / Avg: 26.92 / Max: 27.11. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

WavPack Audio Encoding

This test times how long it takes to encode a sample WAV file to WavPack format with very high quality settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterWavPack Audio Encoding 5.3WAV To WavPack123691215SE +/- 0.01, N = 5SE +/- 0.01, N = 513.2013.221. (CXX) g++ options: -rdynamic
OpenBenchmarking.orgSeconds, Fewer Is BetterWavPack Audio Encoding 5.3WAV To WavPack1248121620Min: 13.18 / Avg: 13.2 / Max: 13.21Min: 13.19 / Avg: 13.21 / Max: 13.261. (CXX) g++ options: -rdynamic

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 1080p - Video Preset: Medium123714212835SE +/- 0.07, N = 3SE +/- 0.06, N = 3SE +/- 0.15, N = 327.6927.8427.781. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 1080p - Video Preset: Medium123612182430Min: 27.56 / Avg: 27.69 / Max: 27.77Min: 27.74 / Avg: 27.84 / Max: 27.93Min: 27.49 / Avg: 27.78 / Max: 27.941. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU1230.67891.35782.03672.71563.3945SE +/- 0.02534, N = 3SE +/- 0.01913, N = 3SE +/- 0.01010, N = 32.995593.017213.01645MIN: 2.82MIN: 2.84MIN: 2.851. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU123246810Min: 2.95 / Avg: 3 / Max: 3.03Min: 2.98 / Avg: 3.02 / Max: 3.05Min: 3 / Avg: 3.02 / Max: 3.041. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

VkResample

VkResample is a Vulkan-based image upscaling library based on VkFFT. The sample input upscales a 4K image to 8K using Vulkan-based GPU acceleration. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterVkResample 1.0Upscale: 2x - Precision: Single1231122334455SE +/- 0.04, N = 3SE +/- 0.02, N = 3SE +/- 0.02, N = 349.0149.0049.021. (CXX) g++ options: -O3 -pthread
OpenBenchmarking.orgms, Fewer Is BetterVkResample 1.0Upscale: 2x - Precision: Single1231020304050Min: 48.97 / Avg: 49.01 / Max: 49.09Min: 48.96 / Avg: 49 / Max: 49.03Min: 48.99 / Avg: 49.02 / Max: 49.061. (CXX) g++ options: -O3 -pthread

Timed MAFFT Alignment

This test performs an alignment of 100 pyruvate decarboxylase sequences. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed MAFFT Alignment 7.471Multiple Sequence Alignment - LSU RNA1233691215SE +/- 0.04, N = 3SE +/- 0.15, N = 6SE +/- 0.12, N = 312.1512.2612.161. (CC) gcc options: -std=c99 -O3 -lm -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed MAFFT Alignment 7.471Multiple Sequence Alignment - LSU RNA12348121620Min: 12.08 / Avg: 12.15 / Max: 12.23Min: 12.01 / Avg: 12.26 / Max: 12.84Min: 11.99 / Avg: 12.16 / Max: 12.391. (CC) gcc options: -std=c99 -O3 -lm -lpthread

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBasis Universal 1.12Settings: UASTC Level 212348121620SE +/- 0.01, N = 3SE +/- 0.04, N = 3SE +/- 0.01, N = 315.7915.7915.761. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterBasis Universal 1.12Settings: UASTC Level 212348121620Min: 15.76 / Avg: 15.78 / Max: 15.81Min: 15.73 / Avg: 15.79 / Max: 15.86Min: 15.73 / Avg: 15.76 / Max: 15.781. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 4K - Video Preset: Ultra Fast123918273645SE +/- 0.28, N = 3SE +/- 0.15, N = 3SE +/- 0.03, N = 339.6839.6739.391. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 4K - Video Preset: Ultra Fast123816243240Min: 39.26 / Avg: 39.68 / Max: 40.21Min: 39.42 / Avg: 39.67 / Max: 39.93Min: 39.33 / Avg: 39.39 / Max: 39.431. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU123246810SE +/- 0.05437, N = 3SE +/- 0.08570, N = 3SE +/- 0.09143, N = 36.277156.250116.21623MIN: 5.63MIN: 5.72MIN: 5.671. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU1233691215Min: 6.2 / Avg: 6.28 / Max: 6.38Min: 6.14 / Avg: 6.25 / Max: 6.42Min: 6.11 / Avg: 6.22 / Max: 6.41. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU1230.47840.95681.43521.91362.392SE +/- 0.00488, N = 3SE +/- 0.00369, N = 3SE +/- 0.00796, N = 32.124082.108932.12611MIN: 2.04MIN: 2.03MIN: 2.051. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU123246810Min: 2.12 / Avg: 2.12 / Max: 2.13Min: 2.1 / Avg: 2.11 / Max: 2.12Min: 2.11 / Avg: 2.13 / Max: 2.141. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options to measure H.265 video encode performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is Betterx265 3.4Video Input: Bosphorus 1080p123918273645SE +/- 0.23, N = 3SE +/- 0.05, N = 3SE +/- 0.04, N = 341.0240.3641.061. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.orgFrames Per Second, More Is Betterx265 3.4Video Input: Bosphorus 1080p123918273645Min: 40.76 / Avg: 41.02 / Max: 41.49Min: 40.3 / Avg: 40.36 / Max: 40.47Min: 41.02 / Avg: 41.06 / Max: 41.141. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterASTC Encoder 2.0Preset: Thorough1233691215SE +/- 0.08, N = 3SE +/- 0.05, N = 3SE +/- 0.01, N = 39.549.499.451. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterASTC Encoder 2.0Preset: Thorough1233691215Min: 9.45 / Avg: 9.54 / Max: 9.69Min: 9.43 / Avg: 9.49 / Max: 9.59Min: 9.43 / Avg: 9.45 / Max: 9.461. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

Opus Codec Encoding

Opus is an open audio codec. Opus is a lossy audio compression format designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.
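The benchmark times the opusenc tool from Opus-Tools on a WAV file; underneath, encoding a single frame with libopus itself looks roughly like the following hedged sketch, which feeds a silent 20 ms stereo frame rather than the test's actual input:

#include <opus/opus.h>
#include <vector>
#include <cstdio>

int main() {
    int err = 0;
    OpusEncoder* enc = opus_encoder_create(48000, 2, OPUS_APPLICATION_AUDIO, &err);
    if (err != OPUS_OK) return 1;

    const int frame_size = 960;                                  // 20 ms at 48 kHz
    std::vector<opus_int16> pcm(frame_size * 2, 0);              // one silent stereo frame
    unsigned char packet[4000];

    opus_int32 bytes = opus_encode(enc, pcm.data(), frame_size, packet, sizeof(packet));
    std::printf("encoded one frame into %d bytes\n", (int)bytes);

    opus_encoder_destroy(enc);
}

Link with -lopus; the full test simply repeats this frame-by-frame over the sample WAV file and reports the elapsed wall time.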

OpenBenchmarking.orgSeconds, Fewer Is BetterOpus Codec Encoding 1.3.1WAV To Opus Encode123246810SE +/- 0.012, N = 5SE +/- 0.012, N = 5SE +/- 0.021, N = 57.7537.7957.7741. (CXX) g++ options: -fvisibility=hidden -logg -lm
OpenBenchmarking.orgSeconds, Fewer Is BetterOpus Codec Encoding 1.3.1WAV To Opus Encode1233691215Min: 7.73 / Avg: 7.75 / Max: 7.8Min: 7.75 / Avg: 7.79 / Max: 7.82Min: 7.73 / Avg: 7.77 / Max: 7.851. (CXX) g++ options: -fvisibility=hidden -logg -lm

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: LPUSH123300K600K900K1200K1500KSE +/- 18400.26, N = 3SE +/- 19080.84, N = 4SE +/- 13973.47, N = 31368828.881340079.291366164.671. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: LPUSH123200K400K600K800K1000KMin: 1332111.88 / Avg: 1368828.88 / Max: 1389333.25Min: 1302208.38 / Avg: 1340079.29 / Max: 1375824Min: 1338988 / Avg: 1366164.67 / Max: 1385396.121. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: SET123400K800K1200K1600K2000KSE +/- 19958.53, N = 3SE +/- 23714.82, N = 4SE +/- 22941.42, N = 31675529.381654163.601577289.791. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: SET123300K600K900K1200K1500KMin: 1652892.5 / Avg: 1675529.38 / Max: 1715320.75Min: 1585090.38 / Avg: 1654163.6 / Max: 1686826.38Min: 1531785.62 / Avg: 1577289.79 / Max: 1605136.381. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Waifu2x-NCNN Vulkan

Waifu2x-NCNN is an NCNN neural network implementation of the Waifu2x converter project, accelerated using the Vulkan API. NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. This test profile times how long it takes to increase the resolution of a sample image with Vulkan. Learn more via the OpenBenchmarking.org test page.
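The test itself runs the waifu2x-ncnn-vulkan binary; purely as an illustration of the NCNN inference flow it builds on, a minimal sketch is shown below. The model file names and the "data"/"output" blob names are placeholders, not the real waifu2x network definitions.

#include "net.h"            // ncnn; include path depends on how ncnn is installed
#include <vector>
#include <cstdio>

int main() {
    ncnn::Net net;
    net.opt.use_vulkan_compute = true;                 // GPU inference, as in this test profile

    // Placeholder model and blob names -- not the actual waifu2x network files.
    net.load_param("model.param");
    net.load_model("model.bin");

    const int w = 200, h = 200;
    std::vector<unsigned char> rgb(w * h * 3, 128);    // dummy grey image
    ncnn::Mat in = ncnn::Mat::from_pixels(rgb.data(), ncnn::Mat::PIXEL_RGB, w, h);

    ncnn::Extractor ex = net.create_extractor();
    ex.input("data", in);                              // placeholder input blob name
    ncnn::Mat out;
    ex.extract("output", out);                         // placeholder output blob name
    std::printf("output blob: %d x %d x %d\n", out.w, out.h, out.c);
}

The waifu2x binary adds tiling, pre/post-processing and the trained super-resolution weights on top of this basic load-param/extract flow.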

OpenBenchmarking.orgSeconds, Fewer Is BetterWaifu2x-NCNN Vulkan 20200818Scale: 2x - Denoise: 3 - TAA: Yes1233691215SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.01, N = 310.0410.0610.07
OpenBenchmarking.orgSeconds, Fewer Is BetterWaifu2x-NCNN Vulkan 20200818Scale: 2x - Denoise: 3 - TAA: Yes1233691215Min: 10.03 / Avg: 10.04 / Max: 10.04Min: 10.05 / Avg: 10.05 / Max: 10.06Min: 10.07 / Avg: 10.07 / Max: 10.09

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 1080p - Video Preset: Very Fast1231428425670SE +/- 0.21, N = 3SE +/- 0.20, N = 3SE +/- 0.26, N = 360.6960.6960.791. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 1080p - Video Preset: Very Fast1231224364860Min: 60.29 / Avg: 60.69 / Max: 61.01Min: 60.48 / Avg: 60.69 / Max: 61.09Min: 60.28 / Avg: 60.79 / Max: 61.121. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU1230.78991.57982.36973.15963.9495SE +/- 0.02421, N = 3SE +/- 0.04824, N = 3SE +/- 0.03734, N = 33.453703.490643.51087MIN: 1.93MIN: 2.01MIN: 21. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU123246810Min: 3.41 / Avg: 3.45 / Max: 3.48Min: 3.39 / Avg: 3.49 / Max: 3.54Min: 3.44 / Avg: 3.51 / Max: 3.561. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBasis Universal 1.12Settings: UASTC Level 0123246810SE +/- 0.057, N = 3SE +/- 0.021, N = 3SE +/- 0.019, N = 37.5957.6267.6001. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterBasis Universal 1.12Settings: UASTC Level 01233691215Min: 7.53 / Avg: 7.59 / Max: 7.71Min: 7.6 / Avg: 7.63 / Max: 7.67Min: 7.57 / Avg: 7.6 / Max: 7.641. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgns/day, More Is BetterLAMMPS Molecular Dynamics Simulator 29Oct2020Model: Rhodopsin Protein1233691215SE +/- 0.36, N = 12SE +/- 0.30, N = 15SE +/- 0.15, N = 313.4012.9112.551. (CXX) g++ options: -O3 -pthread -lm
OpenBenchmarking.orgns/day, More Is BetterLAMMPS Molecular Dynamics Simulator 29Oct2020Model: Rhodopsin Protein12348121620Min: 11.8 / Avg: 13.4 / Max: 15.28Min: 10.94 / Avg: 12.91 / Max: 14.91Min: 12.35 / Avg: 12.55 / Max: 12.851. (CXX) g++ options: -O3 -pthread -lm

yquake2

This is a test of Yamagi Quake II. Yamagi Quake II is an enhanced client for id Software's Quake II with focus on offline and coop gameplay. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is Betteryquake2 7.45Renderer: Software CPU - Resolution: 1920 x 108012320406080100SE +/- 1.02, N = 3SE +/- 1.01, N = 3SE +/- 0.93, N = 3106.6107.1106.21. (CC) gcc options: -lm -ldl -rdynamic -shared -lSDL2 -O2 -pipe -fomit-frame-pointer -std=gnu99 -fno-strict-aliasing -fwrapv -fvisibility=hidden -MMD -mfpmath=sse -fPIC
OpenBenchmarking.orgFrames Per Second, More Is Betteryquake2 7.45Renderer: Software CPU - Resolution: 1920 x 108012320406080100Min: 105.5 / Avg: 106.57 / Max: 108.6Min: 105.1 / Avg: 107.1 / Max: 108.3Min: 104.5 / Avg: 106.23 / Max: 107.71. (CC) gcc options: -lm -ldl -rdynamic -shared -lSDL2 -O2 -pipe -fomit-frame-pointer -std=gnu99 -fno-strict-aliasing -fwrapv -fvisibility=hidden -MMD -mfpmath=sse -fPIC

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterASTC Encoder 2.0Preset: Medium123246810SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 36.346.326.351. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterASTC Encoder 2.0Preset: Medium1233691215Min: 6.31 / Avg: 6.34 / Max: 6.38Min: 6.31 / Avg: 6.32 / Max: 6.33Min: 6.32 / Avg: 6.35 / Max: 6.391. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU123612182430SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 325.1625.1025.10MIN: 24.12MIN: 24.22MIN: 23.921. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU123612182430Min: 25.15 / Avg: 25.16 / Max: 25.18Min: 25.09 / Avg: 25.1 / Max: 25.12Min: 25.06 / Avg: 25.1 / Max: 25.141. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU123510152025SE +/- 0.13, N = 3SE +/- 0.04, N = 3SE +/- 0.01, N = 319.9720.1520.25MIN: 14.86MIN: 19.19MIN: 19.161. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU123510152025Min: 19.85 / Avg: 19.97 / Max: 20.22Min: 20.12 / Avg: 20.15 / Max: 20.23Min: 20.25 / Avg: 20.25 / Max: 20.271. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterASTC Encoder 2.0Preset: Fast1231.17452.3493.52354.6985.8725SE +/- 0.03, N = 3SE +/- 0.01, N = 3SE +/- 0.00, N = 35.225.225.201. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterASTC Encoder 2.0Preset: Fast123246810Min: 5.18 / Avg: 5.22 / Max: 5.28Min: 5.21 / Avg: 5.22 / Max: 5.23Min: 5.2 / Avg: 5.2 / Max: 5.211. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 1080p - Video Preset: Ultra Fast123306090120150SE +/- 0.15, N = 3SE +/- 0.66, N = 3SE +/- 0.26, N = 3116.80116.09116.681. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 1080p - Video Preset: Ultra Fast12320406080100Min: 116.6 / Avg: 116.8 / Max: 117.1Min: 114.91 / Avg: 116.09 / Max: 117.18Min: 116.27 / Avg: 116.68 / Max: 117.171. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

yquake2

This is a test of Yamagi Quake II. Yamagi Quake II is an enhanced client for id Software's Quake II with focus on offline and coop gameplay. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is Betteryquake2 7.45Renderer: OpenGL 1.x - Resolution: 1920 x 1080123150300450600750SE +/- 4.44, N = 3SE +/- 6.33, N = 8SE +/- 5.54, N = 15683.8619.9623.81. (CC) gcc options: -lm -ldl -rdynamic -shared -lSDL2 -O2 -pipe -fomit-frame-pointer -std=gnu99 -fno-strict-aliasing -fwrapv -fvisibility=hidden -MMD -mfpmath=sse -fPIC
OpenBenchmarking.orgFrames Per Second, More Is Betteryquake2 7.45Renderer: OpenGL 1.x - Resolution: 1920 x 1080123120240360480600Min: 675.2 / Avg: 683.8 / Max: 690Min: 599.6 / Avg: 619.89 / Max: 654.2Min: 591.2 / Avg: 623.79 / Max: 660.41. (CC) gcc options: -lm -ldl -rdynamic -shared -lSDL2 -O2 -pipe -fomit-frame-pointer -std=gnu99 -fno-strict-aliasing -fwrapv -fvisibility=hidden -MMD -mfpmath=sse -fPIC

Waifu2x-NCNN Vulkan

Waifu2x-NCNN is an NCNN neural network implementation of the Waifu2x converter project, accelerated using the Vulkan API. NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. This test profile times how long it takes to increase the resolution of a sample image with Vulkan. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterWaifu2x-NCNN Vulkan 20200818Scale: 2x - Denoise: 3 - TAA: No10.46170.92341.38511.84682.30852.052

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU1230.78051.5612.34153.1223.9025SE +/- 0.00942, N = 3SE +/- 0.00155, N = 3SE +/- 0.02456, N = 33.437273.449803.46909MIN: 3.34MIN: 3.36MIN: 3.351. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU123246810Min: 3.43 / Avg: 3.44 / Max: 3.46Min: 3.45 / Avg: 3.45 / Max: 3.45Min: 3.43 / Avg: 3.47 / Max: 3.521. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU1231.33622.67244.00865.34486.681SE +/- 0.01022, N = 3SE +/- 0.00211, N = 3SE +/- 0.00240, N = 35.926045.938565.92300MIN: 5.7MIN: 5.71MIN: 5.51. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU123246810Min: 5.91 / Avg: 5.93 / Max: 5.95Min: 5.94 / Avg: 5.94 / Max: 5.94Min: 5.92 / Avg: 5.92 / Max: 5.931. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

yquake2

This is a test of Yamagi Quake II. Yamagi Quake II is an enhanced client for id Software's Quake II with focus on offline and coop gameplay. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is Betteryquake2 7.45Renderer: OpenGL 3.x - Resolution: 1920 x 10801232004006008001000SE +/- 12.14, N = 3SE +/- 13.22, N = 3SE +/- 10.37, N = 3949.7972.2972.51. (CC) gcc options: -lm -ldl -rdynamic -shared -lSDL2 -O2 -pipe -fomit-frame-pointer -std=gnu99 -fno-strict-aliasing -fwrapv -fvisibility=hidden -MMD -mfpmath=sse -fPIC
OpenBenchmarking.orgFrames Per Second, More Is Betteryquake2 7.45Renderer: OpenGL 3.x - Resolution: 1920 x 10801232004006008001000Min: 934.9 / Avg: 949.73 / Max: 973.8Min: 946.1 / Avg: 972.17 / Max: 989Min: 951.8 / Avg: 972.53 / Max: 982.91. (CC) gcc options: -lm -ldl -rdynamic -shared -lSDL2 -O2 -pipe -fomit-frame-pointer -std=gnu99 -fno-strict-aliasing -fwrapv -fvisibility=hidden -MMD -mfpmath=sse -fPIC

HPC Challenge

HPC Challenge (HPCC) is a cluster-focused benchmark consisting of the HPL Linpack TPP benchmark, DGEMM, STREAM, PTRANS, RandomAccess, FFT, and communication bandwidth and latency. This HPC Challenge test profile attempts to ship with standard yet versatile configuration/input files though they can be modified. Learn more via the OpenBenchmarking.org test page.
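HPCC's ping-pong bandwidth and latency figures come from its own MPI kernels; a minimal stand-alone MPI ping-pong sketch (not HPCC code) that measures the same quantity between two ranks could look like this:

#include <mpi.h>
#include <vector>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1 << 20;                   // 1 MiB message
    const int reps = 100;
    std::vector<char> buf(n, 0);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < reps; ++i) {
        if (rank == 0) {
            MPI_Send(buf.data(), n, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf.data(), n, MPI_BYTE, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf.data(), n, MPI_BYTE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf.data(), n, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
        }
    }
    double dt = MPI_Wtime() - t0;
    if (rank == 0)
        std::printf("~%.0f MB/s ping-pong bandwidth\n", (2.0 * reps * n) / dt / 1e6);

    MPI_Finalize();
}

Build with mpic++ and run with at least two ranks (e.g. mpirun -np 2); HPCC additionally sweeps message sizes and ring patterns, which is where the Random Ring results below come from.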

OpenBenchmarking.orgMB/s, More Is BetterHPC Challenge 1.5.0Test / Class: Max Ping Pong Bandwidth1232K4K6K8K10KSE +/- 250.74, N = 3SE +/- 181.12, N = 3SE +/- 206.43, N = 310862.0410831.9710972.491. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops2. ATLAS + Open MPI 4.0.3
OpenBenchmarking.orgMB/s, More Is BetterHPC Challenge 1.5.0Test / Class: Max Ping Pong Bandwidth1232K4K6K8K10KMin: 10440.76 / Avg: 10862.04 / Max: 11308.28Min: 10520.14 / Avg: 10831.97 / Max: 11147.53Min: 10584.76 / Avg: 10972.49 / Max: 11289.171. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops2. ATLAS + Open MPI 4.0.3

OpenBenchmarking.orgGB/s, More Is BetterHPC Challenge 1.5.0Test / Class: Random Ring Bandwidth1230.13360.26720.40080.53440.668SE +/- 0.00203, N = 3SE +/- 0.01337, N = 3SE +/- 0.00691, N = 30.571200.583000.593851. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops2. ATLAS + Open MPI 4.0.3
OpenBenchmarking.orgGB/s, More Is BetterHPC Challenge 1.5.0Test / Class: Random Ring Bandwidth123246810Min: 0.57 / Avg: 0.57 / Max: 0.57Min: 0.56 / Avg: 0.58 / Max: 0.61Min: 0.59 / Avg: 0.59 / Max: 0.611. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops2. ATLAS + Open MPI 4.0.3

OpenBenchmarking.orgusecs, Fewer Is BetterHPC Challenge 1.5.0Test / Class: Random Ring Latency1230.34580.69161.03741.38321.729SE +/- 0.00435, N = 3SE +/- 0.00474, N = 3SE +/- 0.00358, N = 31.536811.534531.535091. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops2. ATLAS + Open MPI 4.0.3
OpenBenchmarking.orgusecs, Fewer Is BetterHPC Challenge 1.5.0Test / Class: Random Ring Latency123246810Min: 1.53 / Avg: 1.54 / Max: 1.54Min: 1.53 / Avg: 1.53 / Max: 1.54Min: 1.53 / Avg: 1.54 / Max: 1.541. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops2. ATLAS + Open MPI 4.0.3

OpenBenchmarking.orgGUP/s, More Is BetterHPC Challenge 1.5.0Test / Class: G-Random Access1230.00640.01280.01920.02560.032SE +/- 0.00038, N = 3SE +/- 0.00067, N = 3SE +/- 0.00018, N = 30.028190.027980.028661. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops2. ATLAS + Open MPI 4.0.3
OpenBenchmarking.orgGUP/s, More Is BetterHPC Challenge 1.5.0Test / Class: G-Random Access12312345Min: 0.03 / Avg: 0.03 / Max: 0.03Min: 0.03 / Avg: 0.03 / Max: 0.03Min: 0.03 / Avg: 0.03 / Max: 0.031. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops2. ATLAS + Open MPI 4.0.3

OpenBenchmarking.orgGB/s, More Is BetterHPC Challenge 1.5.0Test / Class: EP-STREAM Triad1230.30110.60220.90331.20441.5055SE +/- 0.05676, N = 3SE +/- 0.03714, N = 3SE +/- 0.07867, N = 31.309661.307711.338201. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops2. ATLAS + Open MPI 4.0.3
OpenBenchmarking.orgGB/s, More Is BetterHPC Challenge 1.5.0Test / Class: EP-STREAM Triad123246810Min: 1.2 / Avg: 1.31 / Max: 1.38Min: 1.25 / Avg: 1.31 / Max: 1.38Min: 1.19 / Avg: 1.34 / Max: 1.461. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops2. ATLAS + Open MPI 4.0.3

OpenBenchmarking.orgGB/s, More Is BetterHPC Challenge 1.5.0Test / Class: G-Ptrans1230.97671.95342.93013.90684.8835SE +/- 0.25728, N = 3SE +/- 0.03755, N = 3SE +/- 0.08946, N = 34.001644.341064.174281. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops2. ATLAS + Open MPI 4.0.3
OpenBenchmarking.orgGB/s, More Is BetterHPC Challenge 1.5.0Test / Class: G-Ptrans123246810Min: 3.49 / Avg: 4 / Max: 4.33Min: 4.27 / Avg: 4.34 / Max: 4.4Min: 4.05 / Avg: 4.17 / Max: 4.351. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops2. ATLAS + Open MPI 4.0.3

OpenBenchmarking.orgGFLOPS, More Is BetterHPC Challenge 1.5.0Test / Class: EP-DGEMM1233691215SE +/- 0.86774, N = 3SE +/- 0.14154, N = 3SE +/- 0.34471, N = 312.617278.8030313.034171. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops2. ATLAS + Open MPI 4.0.3
OpenBenchmarking.orgGFLOPS, More Is BetterHPC Challenge 1.5.0Test / Class: EP-DGEMM12348121620Min: 10.88 / Avg: 12.62 / Max: 13.55Min: 8.52 / Avg: 8.8 / Max: 8.95Min: 12.42 / Avg: 13.03 / Max: 13.611. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops2. ATLAS + Open MPI 4.0.3

OpenBenchmarking.orgGFLOPS, More Is BetterHPC Challenge 1.5.0Test / Class: G-Ffte1233691215SE +/- 0.04480, N = 3SE +/- 0.17781, N = 3SE +/- 0.14488, N = 310.046699.740839.676461. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops2. ATLAS + Open MPI 4.0.3
OpenBenchmarking.orgGFLOPS, More Is BetterHPC Challenge 1.5.0Test / Class: G-Ffte1233691215Min: 9.98 / Avg: 10.05 / Max: 10.13Min: 9.5 / Avg: 9.74 / Max: 10.09Min: 9.42 / Avg: 9.68 / Max: 9.921. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops2. ATLAS + Open MPI 4.0.3

147 Results Shown

HPC Challenge
Timed Clash Compilation
Basis Universal
NCNN:
  CPU - regnety_400m
  CPU - squeezenet_ssd
  CPU - yolov4-tiny
  CPU - resnet50
  CPU - alexnet
  CPU - resnet18
  CPU - vgg16
  CPU - googlenet
  CPU - blazeface
  CPU - efficientnet-b0
  CPU - mnasnet
  CPU - shufflenet-v2
  CPU-v3-v3 - mobilenet-v3
  CPU-v2-v2 - mobilenet-v2
  CPU - mobilenet
LAMMPS Molecular Dynamics Simulator
NCNN:
  Vulkan GPU - regnety_400m
  Vulkan GPU - squeezenet_ssd
  Vulkan GPU - yolov4-tiny
  Vulkan GPU - resnet50
  Vulkan GPU - alexnet
  Vulkan GPU - resnet18
  Vulkan GPU - vgg16
  Vulkan GPU - googlenet
  Vulkan GPU - blazeface
  Vulkan GPU - efficientnet-b0
  Vulkan GPU - mnasnet
  Vulkan GPU - shufflenet-v2
  Vulkan GPU-v3-v3 - mobilenet-v3
  Vulkan GPU-v2-v2 - mobilenet-v2
  Vulkan GPU - mobilenet
VkFFT
AI Benchmark Alpha:
  Device AI Score
  Device Training Score
  Device Inference Score
oneDNN
Timed HMMer Search
Numpy Benchmark
oneDNN
BRL-CAD
Embree:
  Pathtracer - Asian Dragon Obj
  Pathtracer ISPC - Asian Dragon Obj
Zstd Compression
LZ4 Compression:
  9 - Decompression Speed
  9 - Compression Speed
asmFish
Zstd Compression
Embree
oneDNN:
  Recurrent Neural Network Training - f32 - CPU
  Recurrent Neural Network Inference - u8s8f32 - CPU
VKMark:
  1280 x 1024
  1920 x 1080
IndigoBench
Node.js V8 Web Tooling Benchmark
GROMACS
Build2
Embree
Timed Eigen Compilation
oneDNN:
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU
  Recurrent Neural Network Inference - f32 - CPU
ASTC Encoder
oneDNN
SQLite Speedtest
LZ4 Compression:
  3 - Decompression Speed
  3 - Compression Speed
rav1e:
  5
  1
IndigoBench
oneDNN
Embree
Kvazaar
simdjson
Kvazaar
simdjson:
  PartialTweets
  DistinctUserID
Redis
CLOMP
simdjson
Basis Universal
rav1e
Stockfish
Redis
eSpeak-NG Speech Engine
Redis
x265
Timed FFmpeg Compilation
Libplacebo:
  av1_grain_lap
  hdr_peakdetect
  polar_nocompute
  deband_heavy
PHPBench
LZ4 Compression:
  1 - Decompression Speed
  1 - Compression Speed
rav1e
oneDNN
Betsy GPU Compressor
oneDNN
Embree
Crafty
Coremark
Kvazaar
Basis Universal
Betsy GPU Compressor
Monkey Audio Encoding
Sunflow Rendering System
Kvazaar
WavPack Audio Encoding
Kvazaar
oneDNN
VkResample
Timed MAFFT Alignment
Basis Universal
Kvazaar
oneDNN:
  IP Shapes 1D - f32 - CPU
  IP Shapes 1D - u8s8f32 - CPU
x265
ASTC Encoder
Opus Codec Encoding
Redis:
  LPUSH
  SET
Waifu2x-NCNN Vulkan
Kvazaar
oneDNN
Basis Universal
LAMMPS Molecular Dynamics Simulator
yquake2
ASTC Encoder
oneDNN:
  Convolution Batch Shapes Auto - u8s8f32 - CPU
  Convolution Batch Shapes Auto - f32 - CPU
ASTC Encoder
Kvazaar
yquake2
Waifu2x-NCNN Vulkan
oneDNN:
  Deconvolution Batch shapes_3d - u8s8f32 - CPU
  Deconvolution Batch shapes_3d - f32 - CPU
yquake2
HPC Challenge:
  Max Ping Pong Bandwidth
  Rand Ring Bandwidth
  Rand Ring Latency
  G-Rand Access
  EP-STREAM Triad
  G-Ptrans
  EP-DGEMM
  G-Ffte