3100 compare

AMD Ryzen 3 3100 4-Core testing with a ASUS ROG CROSSHAIR VIII HERO (2702 BIOS) and AMD Radeon RX 56/64 8GB on Ubuntu 20.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2011296-HA-3100COMPA40

Test categories represented in this result file:

AV1: 3 Tests
BLAS (Basic Linear Algebra Sub-Routine) Tests: 3 Tests
Chess Test Suite: 4 Tests
Timed Code Compilation: 2 Tests
C/C++ Compiler Tests: 11 Tests
Compression Tests: 2 Tests
CPU Massive: 24 Tests
Creator Workloads: 20 Tests
Encoding: 5 Tests
Fortran Tests: 5 Tests
Game Development: 4 Tests
HPC - High Performance Computing: 20 Tests
Imaging: 4 Tests
Machine Learning: 11 Tests
Molecular Dynamics: 4 Tests
MPI Benchmarks: 4 Tests
Multi-Core: 17 Tests
NVIDIA GPU Compute: 9 Tests
OCR: 2 Tests
Intel oneAPI: 2 Tests
OpenMPI Tests: 4 Tests
Programmer / Developer System Benchmarks: 6 Tests
Python: 4 Tests
Renderers: 2 Tests
Scientific Computing: 9 Tests
Server: 3 Tests
Server CPU Tests: 16 Tests
Single-Threaded: 7 Tests
Speech: 2 Tests
Telephony: 2 Tests
Texture Compression: 3 Tests
Video Encoding: 5 Tests
Vulkan Compute: 4 Tests
Common Workstation Benchmarks: 3 Tests


Run Management

Highlight
Result
Hide
Result
Result
Identifier
View Logs
Performance Per
Dollar
Date
Run
  Test
  Duration
1
November 27 2020
  13 Hours, 56 Minutes
2
November 28 2020
  13 Hours, 20 Minutes
3
November 28 2020
  13 Hours, 35 Minutes
Invert Hiding All Results Option
  13 Hours, 37 Minutes



3100 compare - System Details (Result Identifiers 1, 2, 3)

Processor: AMD Ryzen 3 3100 4-Core @ 3.60GHz (4 Cores / 8 Threads)
Motherboard: ASUS ROG CROSSHAIR VIII HERO (2702 BIOS)
Chipset: AMD Starship/Matisse
Memory: 16GB
Disk: 1000GB Sabrent Rocket 4.0 1TB
Graphics: AMD Radeon RX 56/64 8GB (1590/800MHz)
Audio: AMD Vega 10 HDMI Audio
Monitor: LG Ultra HD
Network: Realtek RTL8125 2.5GbE + Intel I211
OS: Ubuntu 20.10
Kernel: 5.8.0-29-generic (x86_64)
Desktop: GNOME Shell 3.38.1
Display Server: X Server 1.20.9
Display Driver: amdgpu 19.1.0
OpenGL: 4.6 Mesa 20.2.1 (LLVM 11.0.0)
Vulkan: 1.2.131
Compiler: GCC 10.2.0
File-System: ext4
Screen Resolution: 1920x1080

OpenBenchmarking.org notes:
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq ondemand - CPU Microcode: 0x8701021
Graphics Details: GLAMOR
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional STIBP: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Java Details: 2, 3: OpenJDK Runtime Environment (build 11.0.9.1+1-Ubuntu-0ubuntu1.20.10)
Python Details: 2, 3: Python 3.8.6

Result Overview (Phoronix Test Suite): relative performance comparison of runs 1, 2, and 3 (scale roughly 100% to 113%) across the following test suites: LeelaChessZero, Redis, DDraceNetwork, Crafty, Betsy GPU Compressor, oneDNN, LAMMPS Molecular Dynamics Simulator, Sunflow Rendering System, Stockfish, GROMACS, AOM AV1, FFTE, BRL-CAD, Hugin, RawTherapee, libavif avifenc, Mlpack Benchmark, Geekbench, asmFish, rav1e, Darktable, x265, Timed Linux Kernel Compilation, Zstd Compression, AI Benchmark Alpha, Embree, yquake2, NCNN, MPV, LZ4 Compression, GLmark2, Dolfyn, IndigoBench, Waifu2x-NCNN Vulkan, Blender, OCRMyPDF, Numpy Benchmark, NAMD, Kvazaar, TNN, VkFFT, Tesseract OCR, eSpeak-NG Speech Engine, Basis Universal, TensorFlow Lite, Timed HMMer Search, Hierarchical INTegration, ASTC Encoder, PyPerformance, PHPBench, Mobile Neural Network, OpenSSL, RNNoise, Monte Carlo Simulations of Ionised Nebulae, Timed LLVM Compilation, Caffe, and PyBench.

Condensed results table: individual results for runs 1, 2, and 3 across all tests; the detailed per-test breakdowns are given in the sections below.

Betsy GPU Compressor

Betsy is an open-source GPU texture compressor implementing a variety of compression techniques. It is written in GLSL, with Vulkan/OpenGL compute-shader support, so the texture compression runs on the GPU. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org results, Betsy GPU Compressor 1.1 Beta (Seconds, fewer is better):
  Codec: ETC1 - Quality: Highest -> 1: 5.799 (SE +/- 0.224, N = 15), 2: 5.608 (SE +/- 0.062, N = 3), 3: 5.602 (SE +/- 0.064, N = 3)
  Codec: ETC2 RGB - Quality: Highest -> 1: 6.688 (SE +/- 0.213, N = 15), 2: 6.476 (SE +/- 0.003, N = 3), 3: 6.470 (SE +/- 0.003, N = 3)
  1. (CXX) g++ options: -O3 -O2 -lpthread -ldl

Waifu2x-NCNN Vulkan

Waifu2x-NCNN is an NCNN neural network implementation of the Waifu2x converter project and accelerated using the Vulkan API. NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. This test profile times how long it takes to increase the resolution of a sample image with Vulkan. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org results, Waifu2x-NCNN Vulkan 20200818, Scale: 2x - Denoise: 3 - TAA: Yes (Seconds, fewer is better):
  1: 7.158 (SE +/- 0.002, N = 3), 2: 7.184 (SE +/- 0.007, N = 3), 3: 7.183 (SE +/- 0.008, N = 3)

VkFFT

VkFFT is a Fast Fourier Transform (FFT) library that is GPU-accelerated by means of the Vulkan API. The VkFFT benchmark measures FFT performance across many different transform sizes before returning an overall benchmark score. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org results, VkFFT 2020-09-29 (Benchmark Score, more is better):
  1: 18432 (SE +/- 80.13, N = 3), 2: 18479 (SE +/- 18.75, N = 3), 3: 18478 (SE +/- 15.18, N = 3)

DDraceNetwork

This is a test of DDraceNetwork, an open-source cooperative platformer. OpenGL 3.3 is used for rendering, with fallbacks for older OpenGL versions. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org results, DDraceNetwork 15.2.3, Resolution: 1920 x 1080 - Mode: Fullscreen - Renderer: OpenGL 3.3 - Zoom: Default:
  Demo: RaiNyMore2 (Frames Per Second, more is better) -> 1: 236.14 (SE +/- 9.66, N = 15), 2: 214.08 (SE +/- 13.98, N = 15), 3: 203.50 (SE +/- 13.89, N = 12)
  Demo: Multeasymap (Frames Per Second, more is better) -> 1: 334.97 (SE +/- 1.03, N = 3), 2: 331.87 (SE +/- 1.65, N = 3), 3: 333.51 (SE +/- 0.66, N = 3)
  Demo: Multeasymap - Total Frame Time (Milliseconds, fewer is better) -> 1: Min 2.07 / Avg 2.98 / Max 7.54, 2: Min 2.09 / Avg 3.02 / Max 7.16, 3: Min 2.02 / Avg 3 / Max 10.7
  1. (CXX) g++ options: -O3 -rdynamic -lcrypto -lz -lrt -lpthread -lcurl -lfreetype -lSDL2 -lwavpack -lopusfile -lopus -logg -lGL -lX11 -lnotify -lgdk_pixbuf-2.0 -lgio-2.0 -lgobject-2.0 -lglib-2.0

yquake2

This is a test of Yamagi Quake II. Yamagi Quake II is an enhanced client for id Software's Quake II with focus on offline and coop gameplay. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org results, yquake2 7.45, Resolution: 1920 x 1080 (Frames Per Second, more is better):
  Renderer: OpenGL 1.x -> 1: 737.4 (SE +/- 8.88, N = 3), 2: 747.0 (SE +/- 9.93, N = 3), 3: 730.4 (SE +/- 1.13, N = 3)
  Renderer: OpenGL 3.x -> 1: 977.8 (SE +/- 1.84, N = 3), 2: 977.8 (SE +/- 0.50, N = 3), 3: 975.8 (SE +/- 2.82, N = 3)
  Renderer: Software CPU -> 1: 110.9 (SE +/- 0.48, N = 3), 2: 109.5 (SE +/- 0.30, N = 3), 3: 110.4 (SE +/- 0.31, N = 3)
  1. (CC) gcc options: -lm -ldl -rdynamic -shared -lSDL2 -O2 -pipe -fomit-frame-pointer -std=gnu99 -fno-strict-aliasing -fwrapv -fvisibility=hidden -MMD -mfpmath=sse -fPIC

GLmark2

This is a test of Linaro's glmark2 port, currently using the X11 OpenGL 2.0 target. GLmark2 is a basic OpenGL benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org results, GLmark2 2020.04, Resolution: 1920 x 1080 (Score, more is better):
  1: 6530, 2: 6547, 3: 6559

HPC Challenge

HPC Challenge (HPCC) is a cluster-focused benchmark consisting of the HPL Linpack TPP benchmark, DGEMM, STREAM, PTRANS, RandomAccess, FFT, and communication bandwidth and latency. This HPC Challenge test profile attempts to ship with standard yet versatile configuration/input files though they can be modified. Learn more via the OpenBenchmarking.org test page.
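As an aside, the EP-STREAM Triad component is essentially a memory-bandwidth kernel. The Python/NumPy sketch below is purely illustrative of the Triad operation (a[i] = b[i] + scalar * c[i]); the array length and scalar are arbitrary assumptions, and it is not the HPCC implementation, which is MPI-based C code.

import time
import numpy as np

n = 10_000_000            # arbitrary array length chosen for illustration
scalar = 3.0              # arbitrary scalar
b = np.random.rand(n)
c = np.random.rand(n)

start = time.perf_counter()
a = b + scalar * c        # STREAM Triad kernel: a[i] = b[i] + scalar * c[i]
elapsed = time.perf_counter() - start

# Triad touches three double-precision arrays (two reads, one write) per pass.
gigabytes_moved = 3 * n * 8 / 1e9
print(f"Approximate Triad bandwidth: {gigabytes_moved / elapsed:.2f} GB/s")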

OpenBenchmarking.org results, HPC Challenge 1.5.0 (run 1 only):
  G-HPL: 120.91 GFLOPS (more is better)
  G-Ffte: 3.30686 GFLOPS (more is better)
  EP-DGEMM: 55.51 GFLOPS (more is better)
  G-Ptrans: 0.65366 GB/s (more is better)
  EP-STREAM Triad: 6.74179 GB/s (more is better)
  G-Random Access: 0.02004 GUP/s (more is better)
  Random Ring Latency: 0.39400 usecs (fewer is better)
  Random Ring Bandwidth: 4.96131 GB/s (more is better)
  Max Ping Pong Bandwidth: 14400.77 MB/s (more is better)
  1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops 2. ATLAS + Open MPI 4.0.3

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org results, LeelaChessZero 0.26 (Nodes Per Second, more is better):
  Backend: BLAS -> 1: 775 (SE +/- 7.81, N = 3), 2: 730 (SE +/- 10.34, N = 4), 3: 649 (SE +/- 7.68, N = 9)
  Backend: Eigen -> 1: 730 (SE +/- 4.41, N = 3), 2: 688 (SE +/- 9.06, N = 5), 3: 638 (SE +/- 7.37, N = 9)
  1. (CXX) g++ options: -flto -pthread

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org results, NAMD 2.14, ATPase Simulation - 327,506 Atoms (days/ns, fewer is better):
  1: 4.49500 (SE +/- 0.00243, N = 3), 2: 4.48209 (SE +/- 0.00085, N = 3), 3: 4.48308 (SE +/- 0.00219, N = 3)

Dolfyn

Dolfyn is a Computational Fluid Dynamics (CFD) code using modern numerical simulation techniques. The Dolfyn test profile measures the execution time of the computational fluid dynamics demos bundled with Dolfyn. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org results, Dolfyn 0.527, Computational Fluid Dynamics (Seconds, fewer is better):
  1: 17.76 (SE +/- 0.01, N = 3), 2: 17.70 (SE +/- 0.05, N = 3), 3: 17.68 (SE +/- 0.02, N = 3)

FFTE

FFTE is a package by Daisuke Takahashi to compute Discrete Fourier Transforms of 1-, 2- and 3- dimensional sequences of length (2^p)*(3^q)*(5^r). Learn more via the OpenBenchmarking.org test page.
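Because FFTE only handles lengths of the form (2^p)*(3^q)*(5^r), a quick way to check whether a given size qualifies is to strip out those prime factors. The small Python sketch below is not part of FFTE itself; it simply illustrates the rule for the N=256 case used by this test.

def is_ffte_length(n: int) -> bool:
    # A valid FFTE length factors completely into 2s, 3s, and 5s.
    if n < 1:
        return False
    for factor in (2, 3, 5):
        while n % factor == 0:
            n //= factor
    return n == 1

print(is_ffte_length(256))  # True: 256 = 2^8 (the size used in this test profile)
print(is_ffte_length(360))  # True: 360 = 2^3 * 3^2 * 5
print(is_ffte_length(257))  # False: 257 is prime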

OpenBenchmarking.org results, FFTE 7.0, N=256, 3D Complex FFT Routine (MFLOPS, more is better):
  1: 25405.99 (SE +/- 31.78, N = 3), 2: 25808.66 (SE +/- 19.99, N = 3), 3: 25835.65 (SE +/- 17.94, N = 3)
  1. (F9X) gfortran options: -O3 -fomit-frame-pointer -fopenmp

Timed HMMer Search

This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of the Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org results, Timed HMMer Search 3.3.1, Pfam Database Search (Seconds, fewer is better):
  1: 111.35 (SE +/- 0.15, N = 3), 2: 111.30 (SE +/- 0.11, N = 3), 3: 111.49 (SE +/- 0.23, N = 3)
  1. (CC) gcc options: -O3 -pthread -lhmmer -leasel -lm

Monte Carlo Simulations of Ionised Nebulae

MOCASSIN (Monte Carlo Simulations of Ionised Nebulae) is a fully 3D or 2D photoionisation and dust radiative transfer code which employs a Monte Carlo approach to the transfer of radiation through media of arbitrary geometry and density distribution. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org results, Monte Carlo Simulations of Ionised Nebulae 2019-03-24, Input: Dust 2D tau100.0 (Seconds, fewer is better):
  1: 288 (SE +/- 0.58, N = 3), 2: 288 (SE +/- 0.67, N = 3), 3: 288 (SE +/- 0.88, N = 3)
  1. (F9X) gfortran options: -cpp -Jsource/ -ffree-line-length-0 -lm -std=legacy -O3 -O2 -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent -levent_pthreads -lutil -lrt -lz

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org results, LAMMPS Molecular Dynamics Simulator 29Oct2020, Model: Rhodopsin Protein (ns/day, more is better):
  1: 3.249 (SE +/- 0.052, N = 15), 2: 3.329 (SE +/- 0.029, N = 3), 3: 3.340 (SE +/- 0.021, N = 3)
  1. (CXX) g++ options: -O3 -pthread -lm

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.
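For a rough idea of what is being timed, the sketch below uses the third-party python-lz4 bindings (an assumption; the benchmark itself drives the LZ4 C library) to compress and decompress a file at the same levels tested here. The input path is hypothetical.

import time
import lz4.frame  # third-party package: pip install lz4

with open("sample.iso", "rb") as f:   # hypothetical input file
    data = f.read()

for level in (1, 3, 9):
    t0 = time.perf_counter()
    compressed = lz4.frame.compress(data, compression_level=level)
    t1 = time.perf_counter()
    lz4.frame.decompress(compressed)
    t2 = time.perf_counter()
    mb = len(data) / 1e6
    print(f"level {level}: compress {mb / (t1 - t0):.0f} MB/s, "
          f"decompress {mb / (t2 - t1):.0f} MB/s")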

OpenBenchmarking.org results, LZ4 Compression 1.9.3 (MB/s, more is better):
  Level 1 - Compression Speed -> 1: 9763.10 (SE +/- 45.71, N = 3), 2: 10001.81 (SE +/- 99.34, N = 3), 3: 9959.06 (SE +/- 105.59, N = 3)
  Level 1 - Decompression Speed -> 1: 11116.3 (SE +/- 46.13, N = 3), 2: 11146.3 (SE +/- 12.36, N = 3), 3: 11202.6 (SE +/- 11.80, N = 3)
  Level 3 - Compression Speed -> 1: 53.56 (SE +/- 0.84, N = 3), 2: 53.60 (SE +/- 0.85, N = 3), 3: 52.55 (SE +/- 0.28, N = 3)
  Level 3 - Decompression Speed -> 1: 10496.6 (SE +/- 12.91, N = 3), 2: 10635.3 (SE +/- 18.42, N = 3), 3: 10588.0 (SE +/- 22.07, N = 3)
  Level 9 - Compression Speed -> 1: 52.50 (SE +/- 0.72, N = 3), 2: 51.20 (SE +/- 0.27, N = 3), 3: 51.45 (SE +/- 0.03, N = 3)
  Level 9 - Decompression Speed -> 1: 10575.6 (SE +/- 23.62, N = 3), 2: 10711.5 (SE +/- 8.30, N = 3), 3: 10653.0 (SE +/- 26.11, N = 3)
  1. (CC) gcc options: -O3

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.
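The same idea applies here; a minimal sketch using the third-party zstandard Python bindings (an assumption; the benchmark drives the zstd library directly) at the two levels tested:

import time
import zstandard  # third-party package: pip install zstandard

with open("sample.iso", "rb") as f:   # hypothetical input file
    data = f.read()

for level in (3, 19):
    compressor = zstandard.ZstdCompressor(level=level)
    start = time.perf_counter()
    compressor.compress(data)
    elapsed = time.perf_counter() - start
    print(f"level {level}: {len(data) / 1e6 / elapsed:.0f} MB/s")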

OpenBenchmarking.org results, Zstd Compression 1.4.5 (MB/s, more is better):
  Compression Level: 3 -> 1: 3687.7 (SE +/- 19.56, N = 3), 2: 3632.0 (SE +/- 37.93, N = 3), 3: 3687.7 (SE +/- 19.08, N = 3)
  Compression Level: 19 -> 1: 21.1 (SE +/- 0.00, N = 3), 2: 21.2 (SE +/- 0.00, N = 3), 3: 21.2 (SE +/- 0.00, N = 3)
  1. (CC) gcc options: -O3 -pthread -lz -llzma

Crafty

This is a performance test of Crafty, an advanced open-source chess engine. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org results, Crafty 25.2, Elapsed Time (Nodes Per Second, more is better):
  1: 7738146 (SE +/- 11851.34, N = 3), 2: 7573497 (SE +/- 103135.62, N = 3), 3: 7394699 (SE +/- 94283.61, N = 3)
  1. (CC) gcc options: -pthread -lstdc++ -fprofile-use -lm

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org results, oneDNN 1.5, Data Type: f32 - Engine: CPU (ms, fewer is better):
  Harness: IP Batch 1D -> 1: 7.43267 (SE +/- 0.03162, N = 3), 2: 8.42072 (SE +/- 0.12292, N = 3), 3: 8.81065 (SE +/- 0.12077, N = 15)
  Harness: IP Batch All -> 1: 96.74 (SE +/- 0.07, N = 3), 2: 99.74 (SE +/- 0.13, N = 3), 3: 99.97 (SE +/- 0.89, N = 15)
  Harness: Convolution Batch Shapes Auto -> 1: 20.65 (SE +/- 0.01, N = 3), 2: 19.20 (SE +/- 0.01, N = 3), 3: 19.41 (SE +/- 0.02, N = 3)
  Harness: Deconvolution Batch deconv_1d -> 1: 8.05221 (SE +/- 0.01249, N = 3), 2: 9.74749 (SE +/- 0.12846, N = 3), 3: 9.82517 (SE +/- 0.17039, N = 15)
  Harness: Deconvolution Batch deconv_3d -> 1: 12.03 (SE +/- 0.02, N = 3), 2: 13.18 (SE +/- 0.17, N = 3), 3: 12.89 (SE +/- 0.02, N = 3)
  Harness: Recurrent Neural Network Training -> 1: 503.70 (SE +/- 2.86, N = 3), 2: 491.11 (SE +/- 2.10, N = 3), 3: 462.80 (SE +/- 3.41, N = 3)
  Harness: Recurrent Neural Network Inference -> 1: 249.37 (SE +/- 0.60, N = 3), 2: 262.37 (SE +/- 3.42, N = 3), 3: 247.16 (SE +/- 1.11, N = 3)
  Harness: Matrix Multiply Batch Shapes Transformer -> 1: 4.73057 (SE +/- 0.00490, N = 3), 2: 4.03910 (SE +/- 0.01879, N = 3), 3: 4.06739 (SE +/- 0.00436, N = 3)
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org results, AOM AV1 2.0 (Frames Per Second, more is better):
  Encoder Mode: Speed 0 Two-Pass -> 1: 0.26, 2: 0.25, 3: 0.25 (SE +/- 0.00, N = 3 each)
  Encoder Mode: Speed 4 Two-Pass -> 1: 2.18, 2: 2.17, 3: 2.18 (SE +/- 0.00, N = 3 each)
  Encoder Mode: Speed 6 Realtime -> 1: 16.93 (SE +/- 0.02, N = 3), 2: 16.66 (SE +/- 0.07, N = 3), 3: 16.91 (SE +/- 0.03, N = 3)
  Encoder Mode: Speed 6 Two-Pass -> 1: 3.43, 2: 3.42, 3: 3.43 (SE +/- 0.00, N = 3 each)
  Encoder Mode: Speed 8 Realtime -> 1: 37.09 (SE +/- 0.08, N = 3), 2: 36.18 (SE +/- 0.53, N = 3), 3: 37.07 (SE +/- 0.08, N = 3)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org results, Embree 3.9.0 (Frames Per Second, more is better):
  Binary: Pathtracer - Model: Crown -> 1: 5.3673 (SE +/- 0.0401, N = 3), 2: 5.3310 (SE +/- 0.0092, N = 3), 3: 5.3162 (SE +/- 0.0526, N = 3)
  Binary: Pathtracer ISPC - Model: Crown -> 1: 5.1754 (SE +/- 0.0016, N = 3), 2: 5.1563 (SE +/- 0.0124, N = 3), 3: 5.1464 (SE +/- 0.0320, N = 3)
  Binary: Pathtracer - Model: Asian Dragon -> 1: 6.2294 (SE +/- 0.0315, N = 3), 2: 6.1430 (SE +/- 0.0103, N = 3), 3: 6.2800 (SE +/- 0.0267, N = 3)
  Binary: Pathtracer ISPC - Model: Asian Dragon -> 1: 6.1185 (SE +/- 0.0222, N = 3), 2: 6.1354 (SE +/- 0.0098, N = 3), 3: 6.1401 (SE +/- 0.0025, N = 3)

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org results, Kvazaar 2.0 (Frames Per Second, more is better):
  Bosphorus 4K - Medium -> 1: 2.91, 2: 2.92, 3: 2.92 (SE +/- 0.00, N = 3 each)
  Bosphorus 1080p - Medium -> 1: 12.72, 2: 12.75, 3: 12.76 (SE +/- 0.01, N = 3 each)
  Bosphorus 4K - Very Fast -> 1: 8.03, 2: 8.03, 3: 8.01 (SE +/- 0.01, N = 3 each)
  Bosphorus 4K - Ultra Fast -> 1: 14.55 (SE +/- 0.02, N = 3), 2: 14.63 (SE +/- 0.01, N = 3), 3: 14.60 (SE +/- 0.02, N = 3)
  Bosphorus 1080p - Very Fast -> 1: 32.34 (SE +/- 0.05, N = 3), 2: 32.38 (SE +/- 0.02, N = 3), 3: 32.38 (SE +/- 0.04, N = 3)
  Bosphorus 1080p - Ultra Fast -> 1: 57.37 (SE +/- 0.03, N = 3), 2: 57.57 (SE +/- 0.03, N = 3), 3: 57.47 (SE +/- 0.11, N = 3)
  1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org results, rav1e 0.4 Alpha (Frames Per Second, more is better):
  Speed: 1 -> 1: 0.372 (SE +/- 0.004, N = 3), 2: 0.365 (SE +/- 0.005, N = 3), 3: 0.361 (SE +/- 0.006, N = 3)
  Speed: 5 -> 1: 1.050 (SE +/- 0.007, N = 3), 2: 1.047 (SE +/- 0.004, N = 3), 3: 1.058 (SE +/- 0.000, N = 3)
  Speed: 6 -> 1: 1.404 (SE +/- 0.007, N = 3), 2: 1.370 (SE +/- 0.006, N = 3), 3: 1.393 (SE +/- 0.011, N = 3)
  Speed: 10 -> 1: 3.041 (SE +/- 0.003, N = 3), 2: 3.068 (SE +/- 0.030, N = 3), 3: 3.087 (SE +/- 0.026, N = 3)

x265

This is a simple test of the x265 encoder run on the CPU, using 1080p and 4K inputs to measure H.265 video encode performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org results, x265 3.4 (Frames Per Second, more is better):
  Video Input: Bosphorus 4K -> 1: 7.77 (SE +/- 0.05, N = 3), 2: 7.78 (SE +/- 0.08, N = 3), 3: 7.75 (SE +/- 0.05, N = 3)
  Video Input: Bosphorus 1080p -> 1: 34.71 (SE +/- 0.05, N = 3), 2: 35.12 (SE +/- 0.20, N = 3), 3: 34.66 (SE +/- 0.22, N = 3)
  1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

Stockfish

This is a test of Stockfish, an advanced C++11 chess benchmark that can scale up to 128 CPU cores. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org results, Stockfish 12, Total Time (Nodes Per Second, more is better):
  1: 9840561 (SE +/- 111801.84, N = 6), 2: 10050125 (SE +/- 45923.84, N = 3), 3: 9864045 (SE +/- 124228.11, N = 3)
  1. (CXX) g++ options: -m64 -lpthread -fno-exceptions -std=c++17 -pedantic -O3 -msse -msse3 -mpopcnt -msse4.1 -mssse3 -msse2 -flto -flto=jobserver

asmFish

This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org results, asmFish 2018-07-23, 1024 Hash Memory, 26 Depth (Nodes/second, more is better):
  1: 13797562 (SE +/- 23080.42, N = 3), 2: 13866946 (SE +/- 160984.48, N = 3), 3: 13730556 (SE +/- 105834.72, N = 3)

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org results, libavif avifenc 0.7.3 (Seconds, fewer is better):
  Encoder Speed: 0 -> 1: 146.26 (SE +/- 0.63, N = 3), 2: 149.26 (SE +/- 0.61, N = 3), 3: 145.65 (SE +/- 0.42, N = 3)
  Encoder Speed: 2 -> 1: 86.46 (SE +/- 0.23, N = 3), 2: 88.21 (SE +/- 0.07, N = 3), 3: 86.29 (SE +/- 0.06, N = 3)
  Encoder Speed: 8 -> 1: 6.631 (SE +/- 0.004, N = 3), 2: 6.575 (SE +/- 0.005, N = 3), 3: 6.584 (SE +/- 0.001, N = 3)
  Encoder Speed: 10 -> 1: 6.100 (SE +/- 0.015, N = 3), 2: 6.037 (SE +/- 0.019, N = 3), 3: 6.064 (SE +/- 0.017, N = 3)
  1. (CXX) g++ options: -O3 -fPIC

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org results, Timed Linux Kernel Compilation 5.4, Time To Compile (Seconds, fewer is better):
  1: 159.30 (SE +/- 0.52, N = 3), 2: 160.56 (SE +/- 1.04, N = 3), 3: 160.15 (SE +/- 0.59, N = 3)

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org results, Timed LLVM Compilation 10.0, Time To Compile (Seconds, fewer is better):
  1: 1215.50 (SE +/- 1.95, N = 3), 2: 1214.94 (SE +/- 1.32, N = 3), 3: 1216.09 (SE +/- 0.35, N = 3)

Numpy Benchmark

This is a test of general NumPy performance. Learn more via the OpenBenchmarking.org test page.
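The benchmark reports a composite score across a set of NumPy operations. As a purely illustrative sketch (not the benchmark's own kernels), timing one BLAS-backed NumPy operation looks like this:

import time
import numpy as np

a = np.random.rand(1000, 1000)
b = np.random.rand(1000, 1000)

start = time.perf_counter()
for _ in range(10):
    np.dot(a, b)              # dense 1000x1000 matrix multiply
elapsed = time.perf_counter() - start
print(f"10 matrix multiplies: {elapsed:.3f} s")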

OpenBenchmarking.org results, Numpy Benchmark (Score, more is better):
  1: 321.42 (SE +/- 0.41, N = 3), 2: 320.60 (SE +/- 0.23, N = 3), 3: 321.61 (SE +/- 0.63, N = 3)

eSpeak-NG Speech Engine

This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.

eSpeak-NG Speech Engine 20200907 - Text-To-Speech Synthesis - Seconds, Fewer Is Better
  1: 30.99    2: 30.94    3: 30.96
  1. (CC) gcc options: -O2 -std=c99
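The measurement is essentially the wall-clock time for espeak-ng to synthesize a long text file to a WAV file. A minimal sketch is below; the input file name is a placeholder for the Project Gutenberg text the profile reads.

    import subprocess
    import time

    # Synthesize a text file to WAV with the system espeak-ng binary and time it.
    # "outline_of_science.txt" is a placeholder for the book text used by the test.
    start = time.perf_counter()
    subprocess.run(["espeak-ng", "-f", "outline_of_science.txt", "-w", "speech.wav"],
                   check=True)
    print(f"Text-To-Speech Synthesis: {time.perf_counter() - start:.2f} seconds")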

RNNoise

RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26 minute long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.

RNNoise 2020-06-28 - Seconds, Fewer Is Better
  1: 20.02    2: 20.02    3: 20.03
  1. (CC) gcc options: -O2 -pedantic -fvisibility=hidden

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test measures the RSA 4096-bit performance of OpenSSL. Learn more via the OpenBenchmarking.org test page.

OpenSSL 1.1.1 - RSA 4096-bit Performance - Signs Per Second, More Is Better
  1: 1145.0    2: 1144.3    3: 1144.7
  1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl
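The figure reported here is RSA 4096-bit signatures per second from OpenSSL's built-in speed testing. For illustration only, the Python sketch below measures the same quantity with the cryptography package instead of the openssl binary, so the absolute numbers will not match.

    import time
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # Generate a 4096-bit RSA key and count completed signatures over ~5 seconds.
    # (Older releases of the cryptography package also require a backend argument.)
    key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
    message = b"benchmark payload"

    count = 0
    start = time.perf_counter()
    while time.perf_counter() - start < 5.0:
        key.sign(message, padding.PKCS1v15(), hashes.SHA256())
        count += 1
    elapsed = time.perf_counter() - start
    print(f"{count / elapsed:.1f} signs per second (RSA 4096)")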

MPV

MPV is an open-source, cross-platform media player. This test profile measures the frame rate that can be achieved when playback is decoupled from the display (desynchronized mode). Learn more via the OpenBenchmarking.org test page.

MPV - FPS, More Is Better
  Video Input: Big Buck Bunny Sunflower 4K - Decode: Software Only       1: 454.91     2: 457.51     3: 456.91
  Video Input: Big Buck Bunny Sunflower 1080p - Decode: Software Only    1: 1300.12    2: 1305.29    3: 1304.01
  1. mpv 0.32.0

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing on the CPU with the water_GMX50 data. Learn more via the OpenBenchmarking.org test page.

GROMACS 2020.3 - Water Benchmark - Ns Per Day, More Is Better
  1: 0.542    2: 0.550    3: 0.552
  1. (CXX) g++ options: -O3 -pthread -lrt -lpthread -lm

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Microseconds, Fewer Is Better
  Model: SqueezeNet             1: 387017     2: 386977     3: 387009
  Model: Inception V4           1: 5572917    2: 5574923    3: 5571717
  Model: NASNet Mobile          1: 280959     2: 279167     3: 279158
  Model: Mobilenet Float        1: 259621     2: 259600     3: 259575
  Model: Mobilenet Quant        1: 266426     2: 266319     3: 266346
  Model: Inception ResNet V2    1: 5048647    2: 5046933    3: 5048433
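Each figure above is an average inference time for one model. A minimal sketch of such a measurement with the TensorFlow Lite Python interpreter is shown below; the model path is a placeholder, and the actual test profile drives the native TensorFlow Lite benchmark tool rather than Python.

    import time
    import numpy as np
    import tensorflow as tf

    # "squeezenet.tflite" is a placeholder path to a converted model file.
    interpreter = tf.lite.Interpreter(model_path="squeezenet.tflite")
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]

    # Feed random data of the right shape/dtype and time repeated invocations.
    data = np.random.random_sample(tuple(inp["shape"])).astype(inp["dtype"])
    interpreter.set_tensor(inp["index"], data)

    runs = 50
    start = time.perf_counter()
    for _ in range(runs):
        interpreter.invoke()
    avg_us = (time.perf_counter() - start) / runs * 1e6
    print(f"average inference time: {avg_us:.0f} microseconds")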

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0 - Seconds, Fewer Is Better
  Preset: Fast          1: 7.39      2: 7.39      3: 7.37
  Preset: Medium        1: 8.30      2: 8.30      3: 8.30
  Preset: Thorough      1: 52.39     2: 52.46     3: 52.40
  Preset: Exhaustive    1: 425.23    2: 425.58    3: 425.39
  1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12 - Seconds, Fewer Is Better
  Settings: ETC1S                                  1: 62.80     2: 62.69     3: 62.77
  Settings: UASTC Level 0                          1: 9.297     2: 9.267     3: 9.267
  Settings: UASTC Level 2                          1: 58.59     2: 58.63     3: 58.62
  Settings: UASTC Level 3                          1: 114.27    2: 114.19    3: 114.26
  Settings: UASTC Level 2 + RDO Post-Processing    1: 702.91    2: 701.85    3: 704.13
  1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.2.1 - Seconds, Fewer Is Better
  Test: Boat - Acceleration: CPU-only           1: 12.62    2: 12.48    3: 12.51
  Test: Masskrug - Acceleration: CPU-only       1: 8.019    2: 7.963    3: 8.026
  Test: Server Rack - Acceleration: CPU-only    1: 0.230    2: 0.229    3: 0.234
  Test: Server Room - Acceleration: CPU-only    1: 6.318    2: 6.274    3: 6.292

Hugin

Hugin is an open-source, cross-platform panorama photo stitcher software package. This test profile times how long it takes to run the assistant and panorama photo stitching on a set of images. Learn more via the OpenBenchmarking.org test page.

Hugin - Panorama Photo Assistant + Stitching Time - Seconds, Fewer Is Better
  1: 62.00    2: 61.97    3: 62.71

OCRMyPDF

OCRMyPDF adds an optical character recognition (OCR) text layer to scanned PDF files, producing new PDFs with text that is selectable/searchable/copy-paste capable. OCRMyPDF leverages the Tesseract OCR engine and is written in Python. Learn more via the OpenBenchmarking.org test page.

OCRMyPDF 10.3.1+dfsg - Processing 60 Page PDF Document - Seconds, Fewer Is Better
  1: 32.16    2: 32.23    3: 32.26
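OCRMyPDF also exposes a Python API, so the timed operation can be sketched as below; the input file name is a placeholder for the 60-page document the profile processes.

    import time
    import ocrmypdf

    # "scan.pdf" is a placeholder input; the test processes a 60-page document.
    start = time.perf_counter()
    ocrmypdf.ocr("scan.pdf", "scan-ocr.pdf", deskew=True)
    print(f"processed in {time.perf_counter() - start:.2f} seconds")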

RawTherapee

RawTherapee is a cross-platform, open-source multi-threaded RAW image processing program. Learn more via the OpenBenchmarking.org test page.

RawTherapee - Total Benchmark Time - Seconds, Fewer Is Better
  1: 79.96    2: 79.42    3: 80.35
  1. RawTherapee, version 5.8, command line.

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9 - Requests Per Second, More Is Better
  Test: LPOP     1: 2624535.48    2: 2159676.18    3: 1559670.96
  Test: SADD     1: 1951987.09    2: 1947660.53    3: 2004338.25
  Test: LPUSH    1: 1437805.00    2: 1413366.45    3: 1459398.53
  Test: GET      1: 2310604.62    2: 2095708.19    3: 2135888.90
  Test: SET      1: 1725875.75    2: 1754885.38    3: 1695298.03
  1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
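The Redis figures come from a redis-benchmark style load generator. The Python sketch below exercises the same five commands with the redis-py client against a local server; as a single synchronous connection it illustrates the operations rather than reproducing the throughput shown above.

    import time
    import redis

    r = redis.Redis(host="localhost", port=6379)

    def rate(label, op, n=100_000):
        # Issue n operations and report the achieved requests per second.
        start = time.perf_counter()
        for i in range(n):
            op(i)
        print(f"{label}: {n / (time.perf_counter() - start):,.0f} requests per second")

    rate("SET",   lambda i: r.set(f"key:{i}", "value"))
    rate("GET",   lambda i: r.get(f"key:{i}"))
    rate("LPUSH", lambda i: r.lpush("mylist", i))
    rate("LPOP",  lambda i: r.lpop("mylist"))
    rate("SADD",  lambda i: r.sadd("myset", i))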

Caffe

This is a benchmark of the Caffe deep learning framework and currently supports the AlexNet and GoogleNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

Caffe 2020-02-13 - Milli-Seconds, Fewer Is Better
  Model: AlexNet - Acceleration: CPU - Iterations: 100      1: 49082     2: 49047     3: 48962
  Model: GoogleNet - Acceleration: CPU - Iterations: 100    1: 120634    2: 120679    3: 120291
  1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - ms, Fewer Is Better
  Model: SqueezeNetV1.0      1: 8.853    2: 8.848    3: 8.874
  Model: resnet-v2-50        1: 34.93    2: 35.00    3: 35.17
  Model: MobileNetV2_224     1: 4.937    2: 4.936    3: 4.910
  Model: mobilenet-v1-1.0    1: 5.819    2: 5.824    3: 5.811
  Model: inception-v3        1: 42.57    2: 42.44    3: 42.55
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - ms, Fewer Is Better
  Target: CPU - Model: squeezenet                   1: 21.87    2: 21.67    3: 21.83
  Target: CPU - Model: mobilenet                    1: 23.82    2: 23.71    3: 23.71
  Target: CPU-v2-v2 - Model: mobilenet-v2           1: 6.66     2: 6.64     3: 6.68
  Target: CPU-v3-v3 - Model: mobilenet-v3           1: 6.18     2: 6.11     3: 6.25
  Target: CPU - Model: shufflenet-v2                1: 4.79     2: 4.82     3: 4.84
  Target: CPU - Model: mnasnet                      1: 6.32     2: 6.27     3: 6.30
  Target: CPU - Model: efficientnet-b0              1: 9.21     2: 9.10     3: 9.16
  Target: CPU - Model: blazeface                    1: 1.95     2: 1.92     3: 1.92
  Target: CPU - Model: googlenet                    1: 19.42    2: 19.30    3: 19.51
  Target: CPU - Model: vgg16                        1: 72.57    2: 72.10    3: 72.72
  Target: CPU - Model: resnet18                     1: 17.83    2: 17.75    3: 18.00
  Target: CPU - Model: alexnet                      1: 18.95    2: 18.57    3: 18.81
  Target: CPU - Model: resnet50                     1: 37.32    2: 37.05    3: 37.31
  Target: CPU - Model: yolov4-tiny                  1: 32.96    2: 32.84    3: 33.20
  Target: Vulkan GPU - Model: squeezenet            1: 4.84     2: 4.85     3: 4.85
  Target: Vulkan GPU - Model: mobilenet             1: 8.28     2: 8.25     3: 8.27
  Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2    1: 2.57     2: 2.58     3: 2.58
  Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3    1: 3.57     2: 3.58     3: 3.58
  Target: Vulkan GPU - Model: shufflenet-v2         1: 2.32     2: 2.33     3: 2.32
  Target: Vulkan GPU - Model: mnasnet               1: 2.78     2: 2.76     3: 2.76
  Target: Vulkan GPU - Model: efficientnet-b0       1: 9.33     2: 9.34     3: 9.36
  Target: Vulkan GPU - Model: blazeface             1: 0.91     2: 0.88     3: 0.87
  Target: Vulkan GPU - Model: googlenet             1: 5.74     2: 5.70     3: 5.69
  Target: Vulkan GPU - Model: vgg16                 1: 10.82    2: 10.71    3: 10.61
  Target: Vulkan GPU - Model: resnet18              1: 2.16     2: 2.17     3: 2.17
  Target: Vulkan GPU - Model: alexnet               1: 4.09     2: 4.10     3: 4.09
  Target: Vulkan GPU - Model: resnet50              1: 6.10     2: 6.13     3: 6.09
  Target: Vulkan GPU - Model: yolov4-tiny           1: 11.26    2: 11.29    3: 11.30
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3 - ms, Fewer Is Better
  Target: CPU - Model: MobileNet v2       1: 263.82    2: 264.30    3: 264.12
  Target: CPU - Model: SqueezeNet v1.1    1: 260.26    2: 260.45    3: 260.44
  1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 - M samples/s, More Is Better
  Acceleration: CPU - Scene: Bedroom     1: 0.930    2: 0.935    3: 0.931
  Acceleration: CPU - Scene: Supercar    1: 1.954    2: 1.958    3: 1.958

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.90 - Blend File: BMW27 - Compute: CPU-Only - Seconds, Fewer Is Better
  1: 323.46    2: 324.80    3: 325.17
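The result is the wall-clock time to render the BMW27 scene on the CPU. A rough sketch of that measurement, driving Blender's standard background-render options from Python, is below; the .blend path is a placeholder for the downloaded demo file.

    import subprocess
    import time

    # Render frame 1 of the BMW27 demo scene in background mode (-b) and time it.
    # "bmw27_cpu.blend" is a placeholder path to the demo file.
    start = time.perf_counter()
    subprocess.run(["blender", "-b", "bmw27_cpu.blend", "-f", "1"], check=True)
    print(f"render time: {time.perf_counter() - start:.2f} seconds")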

PyBench

This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate as to Python's average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.

PyBench 2018-02-16 - Total For Average Test Times - Milliseconds, Fewer Is Better
  1: 1020    2: 1017    3: 1021

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Milliseconds, Fewer Is Better
  Benchmark: go                    1: 249     2: 248     3: 248
  Benchmark: 2to3                  1: 320     2: 321     3: 321
  Benchmark: chaos                 1: 111     2: 111     3: 111
  Benchmark: float                 1: 117     2: 116     3: 117
  Benchmark: nbody                 1: 117     2: 117     3: 117
  Benchmark: pathlib               1: 17.7    2: 17.7    3: 17.8
  Benchmark: raytrace              1: 480     2: 477     3: 476
  Benchmark: json_loads            1: 27.2    2: 27.3    3: 27.3
  Benchmark: crypto_pyaes          1: 116     2: 116     3: 116
  Benchmark: regex_compile         1: 169     2: 170     3: 168
  Benchmark: python_startup        1: 8.02    2: 8.08    3: 8.13
  Benchmark: django_template       1: 46.9    2: 46.7    3: 46.8
  Benchmark: pickle_pure_python    1: 440     2: 439     3: 438
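Each PyPerformance entry is the time for one small, self-contained workload. The sketch below shows the general shape of such a micro-benchmark using the standard-library timeit module; the kernel is a toy stand-in, not the suite's actual float benchmark.

    import timeit

    def float_kernel():
        # Toy floating-point loop, loosely in the spirit of the "float" benchmark.
        x = 0.0
        for i in range(100_000):
            x += (i * 0.5) ** 0.5
        return x

    # Best of five repeats, reported per call in milliseconds.
    best = min(timeit.repeat(float_kernel, number=10, repeat=5)) / 10
    print(f"float kernel: {best * 1000:.1f} ms per run")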

Hierarchical INTegration

This test runs the U.S. Department of Energy's Ames Laboratory Hierarchical INTegration (HINT) benchmark. Learn more via the OpenBenchmarking.org test page.

Hierarchical INTegration 1.0 - Test: FLOAT - QUIPs, More Is Better
  1: 340004918.54    2: 339696862.30    3: 339643088.46
  1. (CC) gcc options: -O3 -march=native -lm

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2 - Score, More Is Better
  Device Inference Score    1: 755     2: 761     3: 763
  Device Training Score     1: 788     2: 788     3: 790
  Device AI Score           1: 1543    2: 1549    3: 1553
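These scores are produced by the ai-benchmark Python package on top of TensorFlow. Assuming both are installed, a run can typically be started as sketched below; treat the exact API as an assumption taken from the package's documented usage.

    from ai_benchmark import AIBenchmark

    # run() executes the inference and training workloads and reports the
    # Device Inference / Training / AI scores as it completes.
    results = AIBenchmark().run()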

Geekbench

This is a benchmark of Geekbench 5 Pro. The test profile automates the execution of Geekbench 5 under the Phoronix Test Suite, assuming you have a valid license key for Geekbench 5 Pro. This test will not work without a valid license key for Geekbench Pro. Learn more via the OpenBenchmarking.org test page.

Geekbench 5 - Score, More Is Better
  Test: GPU Vulkan         1: 36246    2: 36793    3: 36985
  Test: CPU Multi Core     1: 5227     2: 5248     3: 5256
  Test: CPU Single Core    1: 1223     2: 1228     3: 1228

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

PHPBench 0.8.1 - PHP Benchmark Suite - Score, More Is Better
  1: 604079    2: 604512    3: 604322

Mlpack Benchmark

Mlpack provides benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

Mlpack Benchmark - Seconds, Fewer Is Better
  Benchmark: scikit_ica                      1: 49.12    2: 49.12    3: 49.14
  Benchmark: scikit_qda                      1: 74.11    2: 76.91    3: 75.29
  Benchmark: scikit_svm                      1: 21.43    2: 21.44    3: 21.52
  Benchmark: scikit_linearridgeregression    1: 3.19     2: 3.20     3: 3.22

Sunflow Rendering System

This test runs benchmarks of the Sunflow Rendering System. The Sunflow Rendering System is an open-source render engine for photo-realistic image synthesis with a ray-tracing core. Learn more via the OpenBenchmarking.org test page.

Sunflow Rendering System 0.07.2 - Global Illumination + Image Synthesis - Seconds, Fewer Is Better
  1: 1.948    2: 1.965    3: 1.996

Tesseract OCR

Tesseract-OCR is the open-source optical character recognition (OCR) engine for the conversion of text within images to raw text output. This test profile relies upon a system-supplied Tesseract installation. Learn more via the OpenBenchmarking.org test page.

Tesseract OCR 4.1.1 - Time To OCR 7 Images - Seconds, Fewer Is Better
  1: 24.57    2: 24.51    3: 24.51
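The test simply times the system tesseract binary over a fixed set of images. A rough Python sketch of that loop is below; the image directory is a placeholder for the seven images used by the profile.

    import glob
    import subprocess
    import time

    # OCR every PNG in a directory with the system tesseract binary and time the total.
    # "images/*.png" is a placeholder for the seven-image set used by the test.
    start = time.perf_counter()
    for i, image in enumerate(sorted(glob.glob("images/*.png"))):
        subprocess.run(["tesseract", image, f"out_{i}"], check=True)
    print(f"Time To OCR: {time.perf_counter() - start:.2f} seconds")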

Kripke

Kripke is a simple, scalable, 3D Sn deterministic particle transport code. Its primary purpose is to research how data layout, programming paradigms and architectures affect the implementation and performance of Sn transport. Kripke is developed by LLNL. Learn more via the OpenBenchmarking.org test page.

Kripke 1.2.4 - Throughput FoM, More Is Better
  1: 6926419    2: 6434795
  1. (CXX) g++ options: -O3 -fopenmp

BRL-CAD

BRL-CAD 7.28.0 is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.30.8 - VGR Performance Metric, More Is Better
  1: 57419    2: 56730    3: 56960
  1. (CXX) g++ options: -std=c++11 -pipe -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -rdynamic -lSM -lICE -lXi -lGLU -lGL -lGLdispatch -lX11 -lXext -lXrender -lpthread -ldl -luuid -lm

178 Results Shown

Betsy GPU Compressor:
  ETC1 - Highest
  ETC2 RGB - Highest
Waifu2x-NCNN Vulkan
VkFFT
DDraceNetwork:
  1920 x 1080 - Fullscreen - OpenGL 3.3 - Default - RaiNyMore2
  1920 x 1080 - Fullscreen - OpenGL 3.3 - Default - Multeasymap
DDraceNetwork
yquake2:
  OpenGL 1.x - 1920 x 1080
  OpenGL 3.x - 1920 x 1080
  Software CPU - 1920 x 1080
GLmark2
HPC Challenge:
  G-HPL
  G-Ffte
  EP-DGEMM
  G-Ptrans
  EP-STREAM Triad
  G-Rand Access
  Rand Ring Latency
  Rand Ring Bandwidth
  Max Ping Pong Bandwidth
LeelaChessZero:
  BLAS
  Eigen
NAMD
Dolfyn
FFTE
Timed HMMer Search
Monte Carlo Simulations of Ionised Nebulae
LAMMPS Molecular Dynamics Simulator
LZ4 Compression:
  1 - Compression Speed
  1 - Decompression Speed
  3 - Compression Speed
  3 - Decompression Speed
  9 - Compression Speed
  9 - Decompression Speed
Zstd Compression:
  3
  19
Crafty
oneDNN:
  IP Batch 1D - f32 - CPU
  IP Batch All - f32 - CPU
  Convolution Batch Shapes Auto - f32 - CPU
  Deconvolution Batch deconv_1d - f32 - CPU
  Deconvolution Batch deconv_3d - f32 - CPU
  Recurrent Neural Network Training - f32 - CPU
  Recurrent Neural Network Inference - f32 - CPU
  Matrix Multiply Batch Shapes Transformer - f32 - CPU
AOM AV1:
  Speed 0 Two-Pass
  Speed 4 Two-Pass
  Speed 6 Realtime
  Speed 6 Two-Pass
  Speed 8 Realtime
Embree:
  Pathtracer - Crown
  Pathtracer ISPC - Crown
  Pathtracer - Asian Dragon
  Pathtracer ISPC - Asian Dragon
Kvazaar:
  Bosphorus 4K - Medium
  Bosphorus 1080p - Medium
  Bosphorus 4K - Very Fast
  Bosphorus 4K - Ultra Fast
  Bosphorus 1080p - Very Fast
  Bosphorus 1080p - Ultra Fast
rav1e:
  1
  5
  6
  10
x265:
  Bosphorus 4K
  Bosphorus 1080p
Stockfish
asmFish
libavif avifenc:
  0
  2
  8
  10
Timed Linux Kernel Compilation
Timed LLVM Compilation
Numpy Benchmark
eSpeak-NG Speech Engine
RNNoise
OpenSSL
MPV:
  Big Buck Bunny Sunflower 4K - Software Only
  Big Buck Bunny Sunflower 1080p - Software Only
GROMACS
TensorFlow Lite:
  SqueezeNet
  Inception V4
  NASNet Mobile
  Mobilenet Float
  Mobilenet Quant
  Inception ResNet V2
ASTC Encoder:
  Fast
  Medium
  Thorough
  Exhaustive
Basis Universal:
  ETC1S
  UASTC Level 0
  UASTC Level 2
  UASTC Level 3
  UASTC Level 2 + RDO Post-Processing
Darktable:
  Boat - CPU-only
  Masskrug - CPU-only
  Server Rack - CPU-only
  Server Room - CPU-only
Hugin
OCRMyPDF
RawTherapee
Redis:
  LPOP
  SADD
  LPUSH
  GET
  SET
Caffe:
  AlexNet - CPU - 100
  GoogleNet - CPU - 100
Mobile Neural Network:
  SqueezeNetV1.0
  resnet-v2-50
  MobileNetV2_224
  mobilenet-v1-1.0
  inception-v3
NCNN:
  CPU - squeezenet
  CPU - mobilenet
  CPU-v2-v2 - mobilenet-v2
  CPU-v3-v3 - mobilenet-v3
  CPU - shufflenet-v2
  CPU - mnasnet
  CPU - efficientnet-b0
  CPU - blazeface
  CPU - googlenet
  CPU - vgg16
  CPU - resnet18
  CPU - alexnet
  CPU - resnet50
  CPU - yolov4-tiny
  Vulkan GPU - squeezenet
  Vulkan GPU - mobilenet
  Vulkan GPU-v2-v2 - mobilenet-v2
  Vulkan GPU-v3-v3 - mobilenet-v3
  Vulkan GPU - shufflenet-v2
  Vulkan GPU - mnasnet
  Vulkan GPU - efficientnet-b0
  Vulkan GPU - blazeface
  Vulkan GPU - googlenet
  Vulkan GPU - vgg16
  Vulkan GPU - resnet18
  Vulkan GPU - alexnet
  Vulkan GPU - resnet50
  Vulkan GPU - yolov4-tiny
TNN:
  CPU - MobileNet v2
  CPU - SqueezeNet v1.1
IndigoBench:
  CPU - Bedroom
  CPU - Supercar
Blender
PyBench
PyPerformance:
  go
  2to3
  chaos
  float
  nbody
  pathlib
  raytrace
  json_loads
  crypto_pyaes
  regex_compile
  python_startup
  django_template
  pickle_pure_python
Hierarchical INTegration
AI Benchmark Alpha:
  Device Inference Score
  Device Training Score
  Device AI Score
Geekbench:
  GPU Vulkan
  CPU Multi Core
  CPU Single Core
PHPBench
Mlpack Benchmark:
  scikit_ica
  scikit_qda
  scikit_svm
  scikit_linearridgeregression
Sunflow Rendering System
Tesseract OCR
Kripke
BRL-CAD