HP Zbook

Intel Core i9-10885H testing with an HP 8736 (S91 Ver. 01.02.01 BIOS) and NVIDIA Quadro RTX 5000 with Max-Q Design 16GB on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2101076-HA-HPZBOOK6247
Test categories represented in this result file:

Audio Encoding (2 tests)
AV1 (2 tests)
Bioinformatics (2 tests)
BLAS (Basic Linear Algebra Sub-Routine) (2 tests)
Chess Test Suite (4 tests)
Timed Code Compilation (4 tests)
C/C++ Compiler Tests (13 tests)
Compression Tests (2 tests)
CPU Massive (23 tests)
Creator Workloads (22 tests)
Database Test Suite (3 tests)
Encoding (4 tests)
Fortran Tests (2 tests)
Game Development (4 tests)
HPC - High Performance Computing (19 tests)
Imaging (5 tests)
Common Kernel Benchmarks (2 tests)
Machine Learning (12 tests)
Molecular Dynamics (2 tests)
MPI Benchmarks (3 tests)
Multi-Core (22 tests)
NVIDIA GPU Compute (24 tests)
Intel oneAPI (3 tests)
OpenCL (6 tests)
OpenGL Demos Test Suite (2 tests)
OpenMPI Tests (4 tests)
Productivity (2 tests)
Programmer / Developer System Benchmarks (10 tests)
Python Tests (4 tests)
Renderers (2 tests)
Scientific Computing (5 tests)
Server (6 tests)
Server CPU Tests (11 tests)
Single-Threaded (6 tests)
Speech (3 tests)
Telephony (3 tests)
Texture Compression (3 tests)
Unigine Test Suite (2 tests)
Video Encoding (2 tests)
Vulkan Compute (6 tests)
Common Workstation Benchmarks (3 tests)

Run dates and durations:
r1: January 04 2021 - Test Duration: 21 Hours, 19 Minutes
r2: January 05 2021 - Test Duration: 21 Hours, 8 Minutes
r3: January 06 2021 - Test Duration: 20 Hours, 49 Minutes
Average Test Duration: 21 Hours, 5 Minutes



HP Zbook Benchmarks - OpenBenchmarking.org - Phoronix Test Suite

Processor: Intel Core i9-10885H @ 5.30GHz (8 Cores / 16 Threads)
Motherboard: HP 8736 (S91 Ver. 01.02.01 BIOS)
Chipset: Intel Comet Lake PCH
Memory: 32GB
Disk: 2048GB KXG50PNV2T04 KIOXIA
Graphics: NVIDIA Quadro RTX 5000 with Max-Q Design 16GB (600/6000MHz)
Audio: Intel Comet Lake PCH cAVS
Network: Intel Wi-Fi 6 AX201
OS: Ubuntu 20.04
Kernel: 5.6.0-1034-oem (x86_64)
Desktop: GNOME Shell 3.36.4
Display Server: X Server 1.20.8
Display Driver: NVIDIA 450.80.02
OpenGL: 4.6.0
OpenCL: OpenCL 1.2 CUDA 11.0.228
Vulkan: 1.2.133
Compiler: GCC 9.3.0 + CUDA 10.1
File-System: ext4
Screen Resolution: 1920x1080

System Logs:
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Disk mount options: NONE / errors=remount-ro,relatime,rw / Block Size: 4096
- Processor: Scaling Governor: intel_pstate powersave; CPU Microcode: 0xe0; Thermald 1.9.1
- Graphics: GPU Compute Cores: 3072
- Python: 3.8.3
- Security: itlb_multihit: KVM: Mitigation of Split huge pages + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

[Result Overview graph: r1/r2/r3 relative performance, 100% to 136%, across CLOMP, DDraceNetwork, Redis, ViennaCL, TNN, eSpeak-NG Speech Engine, RNNoise, NCNN, Monkey Audio Encoding, LuxCoreRender OpenCL, Stockfish, NeatBench, LeelaChessZero, Timed Eigen Compilation, SQLite Speedtest, Waifu2x-NCNN Vulkan, Warsow, asmFish, Betsy GPU Compressor, Rodinia, GROMACS, Cryptsetup, Timed MAFFT Alignment, Blender, PHPBench, Hashcat, Node.js V8 Web Tooling Benchmark, Crafty, ASTC Encoder, Mobile Neural Network, PlaidML, ArrayFire, Unpacking Firefox, VkFFT, LZ4 Compression, GraphicsMagick, Unigine Superposition, Numpy Benchmark, NAMD CUDA, VkResample, LAMMPS Molecular Dynamics Simulator, Unigine Heaven, simdjson, OpenVINO, RedShift Demo, Timed Linux Kernel Compilation, rav1e, dav1d, LevelDB, Build2, RawTherapee, cl-mem, Basis Universal, RealSR-NCNN, Inkscape, TensorFlow Lite, MandelGPU, BRL-CAD, DeepSpeech, Embree, Opus Codec Encoding, clpeak, Coremark, Zstd Compression, Timed FFmpeg Compilation, High Performance Conjugate Gradient, GEGL, AI Benchmark Alpha, OctaneBench, Timed HMMer Search, FAHBench, oneDNN, yquake2, Darktable, IndigoBench, FinanceBench]
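The overview chart expresses each test as a percentage relative to the slowest run. When many such normalized results are condensed into one overall score, the conventional choice is a geometric mean rather than an arithmetic one. A minimal sketch with hypothetical ratios:

```python
from math import prod

def geometric_mean(ratios):
    """Overall score for one run: geometric mean of its per-test
    performance ratios (each normalized against a baseline run)."""
    return prod(ratios) ** (1.0 / len(ratios))

# Hypothetical ratios for one run across four tests, where 1.12
# means 12% faster than the baseline run:
overall = geometric_mean([1.00, 1.12, 1.24, 1.36])
```

Unlike an arithmetic mean, the geometric mean is not dominated by a single test with an outsized relative swing, which is why it is the usual summary statistic for cross-test comparisons.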

[Detailed per-test result table for r1/r2/r3: the individual test results are reproduced in the sections below; the full sortable table is available via the OpenBenchmarking.org result file 2101076-HA-HPZBOOK6247.]

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.90 - Blend File: Barbershop - Compute: NVIDIA OptiX (Seconds, Fewer Is Better)
r1: 1192.96 (SE +/- 0.44, N = 3; Min: 1192.4 / Avg: 1192.96 / Max: 1193.83)
r2: 1190.05 (SE +/- 0.85, N = 3; Min: 1188.59 / Avg: 1190.05 / Max: 1191.52)
r3: 1192.80 (SE +/- 2.01, N = 3; Min: 1190.73 / Avg: 1192.8 / Max: 1196.81)
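Each result above is the average of several runs, reported with its standard error (SE) and the observed Min/Max. A minimal sketch of how such an SE is derived; the three per-run samples here are reconstructed from the r1 Min/Avg/Max figures, so treat them as illustrative:

```python
from math import sqrt
from statistics import mean, stdev

def standard_error(samples):
    """Standard error of the mean: sample standard deviation / sqrt(N)."""
    return stdev(samples) / sqrt(len(samples))

# Three per-run render times (seconds), consistent with the r1 row:
runs = [1192.4, 1192.65, 1193.83]
print(round(mean(runs), 2), round(standard_error(runs), 2))  # 1192.96 0.44
```

The SE shrinks with the square root of the run count, which is why noisier tests in this file (e.g. DDraceNetwork) are run up to 15 times while stable ones stop at 3.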

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12 - Settings: UASTC Level 2 + RDO Post-Processing (Seconds, Fewer Is Better)
r1: 840.35 (SE +/- 0.74, N = 3; Min: 838.87 / Avg: 840.35 / Max: 841.16)
r2: 840.32 (SE +/- 0.35, N = 3; Min: 839.66 / Avg: 840.32 / Max: 840.82)
r3: 841.23 (SE +/- 0.62, N = 3; Min: 840.38 / Avg: 841.23 / Max: 842.43)
1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

Blender


Blender 2.90 - Blend File: Barbershop - Compute: CUDA (Seconds, Fewer Is Better)
r1: 734.81 (SE +/- 0.24, N = 3; Min: 734.55 / Avg: 734.81 / Max: 735.3)
r2: 731.67 (SE +/- 0.26, N = 3; Min: 731.15 / Avg: 731.67 / Max: 732.01)
r3: 733.02 (SE +/- 0.41, N = 3; Min: 732.29 / Avg: 733.02 / Max: 733.72)

Blender 2.90 - Blend File: Pabellon Barcelona - Compute: CUDA (Seconds, Fewer Is Better)
r1: 608.80 (SE +/- 0.04, N = 3; Min: 608.75 / Avg: 608.8 / Max: 608.87)
r2: 609.56 (SE +/- 0.02, N = 3; Min: 609.53 / Avg: 609.56 / Max: 609.61)
r3: 608.62 (SE +/- 0.06, N = 3; Min: 608.5 / Avg: 608.62 / Max: 608.69)

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: inception-v3 (ms, Fewer Is Better)
r1: 62.57 (SE +/- 0.15, N = 10; MIN: 60.82 / MAX: 96.05; Min: 62.13 / Avg: 62.57 / Max: 63.62)
r2: 63.18 (SE +/- 0.18, N = 11; MIN: 61.02 / MAX: 104.39; Min: 62.38 / Avg: 63.18 / Max: 64.17)
r3: 63.56 (SE +/- 0.22, N = 10; MIN: 60.92 / MAX: 102.85; Min: 62.36 / Avg: 63.56 / Max: 64.46)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2020-09-17 - Model: mobilenet-v1-1.0 (ms, Fewer Is Better)
r1: 10.65 (SE +/- 0.01, N = 10; MIN: 10.33 / MAX: 34.53; Min: 10.61 / Avg: 10.65 / Max: 10.71)
r2: 10.68 (SE +/- 0.01, N = 11; MIN: 10.35 / MAX: 33.35; Min: 10.6 / Avg: 10.67 / Max: 10.74)
r3: 10.66 (SE +/- 0.01, N = 10; MIN: 10.33 / MAX: 32.25; Min: 10.61 / Avg: 10.66 / Max: 10.71)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2020-09-17 - Model: MobileNetV2_224 (ms, Fewer Is Better)
r1: 5.239 (SE +/- 0.210, N = 10; MIN: 3.19 / MAX: 26.27; Min: 3.35 / Avg: 5.24 / Max: 5.51)
r2: 5.291 (SE +/- 0.185, N = 11; MIN: 3.3 / MAX: 27.38; Min: 3.44 / Avg: 5.29 / Max: 5.52)
r3: 5.285 (SE +/- 0.209, N = 10; MIN: 3.27 / MAX: 26.82; Min: 3.41 / Avg: 5.28 / Max: 5.55)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2020-09-17 - Model: resnet-v2-50 (ms, Fewer Is Better)
r1: 58.16 (SE +/- 0.40, N = 10; MIN: 36.86 / MAX: 81.73; Min: 54.63 / Avg: 58.16 / Max: 59.25)
r2: 58.53 (SE +/- 0.35, N = 11; MIN: 37.33 / MAX: 83.74; Min: 55.14 / Avg: 58.53 / Max: 59.4)
r3: 58.79 (SE +/- 0.40, N = 10; MIN: 36.87 / MAX: 85.77; Min: 55.26 / Avg: 58.79 / Max: 59.59)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2020-09-17 - Model: SqueezeNetV1.0 (ms, Fewer Is Better)
r1: 8.899 (SE +/- 0.373, N = 10; MIN: 4.96 / MAX: 31.21; Min: 5.55 / Avg: 8.9 / Max: 9.33)
r2: 8.982 (SE +/- 0.316, N = 11; MIN: 5.05 / MAX: 31.35; Min: 5.82 / Avg: 8.98 / Max: 9.37)
r3: 8.944 (SE +/- 0.373, N = 10; MIN: 5.01 / MAX: 31.89; Min: 5.6 / Avg: 8.94 / Max: 9.41)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2 - Device AI Score (Score, More Is Better)
r1: 1546 / r2: 1544 / r3: 1544

AI Benchmark Alpha 0.1.2 - Device Training Score (Score, More Is Better)
r1: 816 / r2: 814 / r3: 814

AI Benchmark Alpha 0.1.2 - Device Inference Score (Score, More Is Better)
r1: 730 / r2: 730 / r3: 730

RedShift Demo

This is a test of MAXON's RedShift demo build that currently requires NVIDIA GPU acceleration. Learn more via the OpenBenchmarking.org test page.

RedShift Demo 3.0 (Seconds, Fewer Is Better)
r1: 461 (SE +/- 0.88, N = 3; Min: 459 / Avg: 460.67 / Max: 462)
r2: 460 (SE +/- 0.33, N = 3; Min: 459 / Avg: 459.67 / Max: 460)
r3: 459

DDraceNetwork

This is a test of DDraceNetwork, an open-source cooperative platformer. OpenGL 3.3 is used for rendering, with fallbacks for older OpenGL versions. Learn more via the OpenBenchmarking.org test page.

DDraceNetwork 15.2.3 - Resolution: 1920 x 1080 - Mode: Fullscreen - Renderer: OpenGL 3.0 - Zoom: Default - Demo: RaiNyMore2 (Frames Per Second, More Is Better)
r1: 170.36 (SE +/- 9.09, N = 15; MIN: 2.43 / MAX: 499.5; Min: 100.65 / Avg: 170.36 / Max: 236.21)
r2: 169.30 (SE +/- 9.59, N = 15; MIN: 2.38 / MAX: 499.5; Min: 58.09 / Avg: 169.3 / Max: 213.5)
r3: 151.49 (SE +/- 11.09, N = 15; MIN: 2.37 / MAX: 499.75; Min: 49.88 / Avg: 151.49 / Max: 245.38)
1. (CXX) g++ options: -O3 -rdynamic -lcrypto -lz -lrt -lpthread -lcurl -lfreetype -lSDL2 -lwavpack -lopusfile -lopus -logg -lGL -lX11 -lnotify -lgdk_pixbuf-2.0 -lgio-2.0 -lgobject-2.0 -lglib-2.0

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0 - Preset: Exhaustive (Seconds, Fewer Is Better)
r1: 447.99 (SE +/- 0.52, N = 3; Min: 446.98 / Avg: 447.99 / Max: 448.68)
r2: 449.37 (SE +/- 0.81, N = 3; Min: 447.77 / Avg: 449.37 / Max: 450.39)
r3: 449.90 (SE +/- 0.54, N = 3; Min: 448.84 / Avg: 449.9 / Max: 450.55)
1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine driven by neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.26 - Backend: OpenCL (Nodes Per Second, More Is Better)
r1: 13277 (SE +/- 160.45, N = 3; Min: 13017 / Avg: 13277.33 / Max: 13570)
r2: 13173 (SE +/- 176.76, N = 3; Min: 12986 / Avg: 13172.67 / Max: 13526)
r3: 13416 (SE +/- 44.68, N = 3; Min: 13327 / Avg: 13416.33 / Max: 13463)
1. (CXX) g++ options: -flto -pthread

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with a built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.30.8 - VGR Performance Metric (More Is Better)
r1: 63909 / r2: 63822 / r3: 64033
1. (CXX) g++ options: -std=c++11 -pipe -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -rdynamic -lSM -lICE -lXi -lGLU -lGL -lGLdispatch -lX11 -lXext -lXrender -lpthread -ldl -luuid -lm

VkFFT

VkFFT is a Fast Fourier Transform (FFT) library that is GPU accelerated by means of the Vulkan API. The VkFFT benchmark measures FFT performance across many different sizes before returning an overall benchmark score. Learn more via the OpenBenchmarking.org test page.

VkFFT 1.1.1 (Benchmark Score, More Is Better)
r1: 25820 (SE +/- 62.93, N = 3; Min: 25695 / Avg: 25819.67 / Max: 25897)
r2: 25647 (SE +/- 58.68, N = 3; Min: 25578 / Avg: 25647.33 / Max: 25764)
r3: 25683 (SE +/- 108.37, N = 3; Min: 25467 / Avg: 25683.33 / Max: 25803)
1. (CXX) g++ options: -O3 -pthread

DDraceNetwork


DDraceNetwork 15.2.3 - Resolution: 1920 x 1080 - Mode: Fullscreen - Renderer: OpenGL 3.3 - Zoom: Default - Demo: RaiNyMore2 (Frames Per Second, More Is Better)
r1: 158.21 (MIN: 7.02 / MAX: 449.03)
r2: 100.58 (SE +/- 13.14, N = 12; MIN: 6.72 / MAX: 493.34; Min: 33.8 / Avg: 100.58 / Max: 173.28)
r3: 130.66 (SE +/- 9.86, N = 15; MIN: 6.67 / MAX: 498.75; Min: 35.77 / Avg: 130.66 / Max: 175.13)
1. (CXX) g++ options: -O3 -rdynamic -lcrypto -lz -lrt -lpthread -lcurl -lfreetype -lSDL2 -lwavpack -lopusfile -lopus -logg -lGL -lX11 -lnotify -lgdk_pixbuf-2.0 -lgio-2.0 -lgobject-2.0 -lglib-2.0

GROMACS

This test profile runs the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package on the CPU with the water_GMX50 data. Learn more via the OpenBenchmarking.org test page.

GROMACS 2020.3 - Water Benchmark (Ns Per Day, More Is Better)
r1: 0.617 (SE +/- 0.003, N = 3; Min: 0.61 / Avg: 0.62 / Max: 0.62)
r2: 0.610 (SE +/- 0.004, N = 3; Min: 0.6 / Avg: 0.61 / Max: 0.62)
r3: 0.614 (SE +/- 0.002, N = 3; Min: 0.61 / Avg: 0.61 / Max: 0.62)
1. (CXX) g++ options: -O3 -pthread -lrt -lpthread -lm

asmFish

This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.

asmFish 2018-07-23 - 1024 Hash Memory, 26 Depth (Nodes/second, More Is Better)
r1: 15984719 (SE +/- 174263.56, N = 3; Min: 15637111 / Avg: 15984719 / Max: 16180429)
r2: 15974611 (SE +/- 148124.86, N = 3; Min: 15824449 / Avg: 15974610.67 / Max: 16270851)
r3: 16180674 (SE +/- 142852.80, N = 3; Min: 15900882 / Avg: 16180674.33 / Max: 16370650)

Stockfish

This is a test of Stockfish, an advanced C++11 chess benchmark that can scale up to 128 CPU cores. Learn more via the OpenBenchmarking.org test page.

Stockfish 12 - Total Time (Nodes Per Second, More Is Better)
r1: 9703133 [SE +/- 85083.98, N = 8] (Min: 9395707 / Avg: 9703132.63 / Max: 10044545)
r2: 9839292 [SE +/- 85742.14, N = 3] (Min: 9669517 / Avg: 9839292.33 / Max: 9945094)
r3: 9629353 [SE +/- 67987.28, N = 12] (Min: 9347765 / Avg: 9629353.25 / Max: 10037550)
1. (CXX) g++ options: -m64 -lpthread -fno-exceptions -std=c++17 -pedantic -O3 -msse -msse3 -mpopcnt -msse4.1 -mssse3 -msse2 -flto -flto=jobserver

Unigine Heaven

This test calculates the average frame-rate within the Heaven demo for the Unigine engine. This engine is extremely demanding on the system's graphics card. Learn more via the OpenBenchmarking.org test page.

Unigine Heaven 4.0 - Resolution: 1920 x 1080 - Mode: Fullscreen - Renderer: OpenGL (Frames Per Second, More Is Better)
r1: 139.13 [SE +/- 0.71, N = 3] (Min: 138.03 / Avg: 139.13 / Max: 140.46)
r2: 139.91 [SE +/- 0.96, N = 3] (Min: 138.64 / Avg: 139.91 / Max: 141.78)
r3: 139.18 [SE +/- 0.56, N = 3] (Min: 138.62 / Avg: 139.18 / Max: 140.31)

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.90 - Blend File: Classroom - Compute: CUDA (Seconds, Fewer Is Better)
r1: 250.78 [SE +/- 0.03, N = 3] (Min: 250.75 / Avg: 250.78 / Max: 250.83)
r2: 251.90 [SE +/- 0.04, N = 3] (Min: 251.85 / Avg: 251.9 / Max: 251.98)
r3: 251.80 [SE +/- 0.05, N = 3] (Min: 251.7 / Avg: 251.8 / Max: 251.86)

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.1 - Video Input: Chimera 1080p 10-bit (FPS, More Is Better)
r1: 86.08 [SE +/- 0.99, N = 4] (Min: 85.06 / Avg: 86.08 / Max: 89.03; MIN: 54.34 / MAX: 256.39)
r2: 85.83 [SE +/- 1.05, N = 4] (Min: 84.56 / Avg: 85.83 / Max: 88.98; MIN: 54.27 / MAX: 257.58)
r3: 85.95 [SE +/- 1.03, N = 4] (Min: 84.71 / Avg: 85.95 / Max: 89.02; MIN: 54.21 / MAX: 255.72)
1. (CC) gcc options: -pthread

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code that offers Cargo-like features. Learn more via the OpenBenchmarking.org test page.

Build2 0.13 - Time To Compile (Seconds, Fewer Is Better)
r1: 210.05 [SE +/- 0.40, N = 3] (Min: 209.3 / Avg: 210.05 / Max: 210.64)
r2: 210.71 [SE +/- 0.49, N = 3] (Min: 210.02 / Avg: 210.71 / Max: 211.65)
r3: 210.95 [SE +/- 0.85, N = 3] (Min: 209.69 / Avg: 210.94 / Max: 212.55)

Numpy Benchmark

This is a test to obtain the general Numpy performance. Learn more via the OpenBenchmarking.org test page.

Numpy Benchmark (Score, More Is Better)
r1: 419.58 [SE +/- 1.54, N = 3] (Min: 416.5 / Avg: 419.58 / Max: 421.32)
r2: 419.36 [SE +/- 0.84, N = 3] (Min: 418.14 / Avg: 419.36 / Max: 420.96)
r3: 417.03 [SE +/- 0.70, N = 3] (Min: 415.63 / Avg: 417.03 / Max: 417.83)

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.90 - Blend File: Pabellon Barcelona - Compute: NVIDIA OptiX (Seconds, Fewer Is Better)
r1: 196.21 [SE +/- 0.02, N = 3] (Min: 196.18 / Avg: 196.21 / Max: 196.24)
r2: 196.28 [SE +/- 0.03, N = 3] (Min: 196.25 / Avg: 196.28 / Max: 196.34)
r3: 196.41 [SE +/- 0.08, N = 3] (Min: 196.26 / Avg: 196.41 / Max: 196.51)

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient benchmark, a newer scientific benchmark from Sandia National Labs focused on supercomputer testing with modern real-world workloads, in contrast to HPCC. Learn more via the OpenBenchmarking.org test page.
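For context on what the benchmark's name refers to: conjugate gradient is an iterative solver for symmetric positive definite linear systems. The sketch below is a minimal, pure-Python illustration of the plain (unpreconditioned) algorithm on a hand-made 2x2 system; HPCG itself runs a preconditioned CG over a large 3D sparse grid, which this does not reproduce.

```python
# Minimal conjugate gradient sketch (illustrative only; not the HPCG kernel).

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def matvec(A, x):
    return [dot(row, x) for row in A]

def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Solve A x = b for symmetric positive definite A."""
    x = [0.0] * len(b)
    r = list(b)             # residual r = b - A x (x = 0 initially)
    p = list(r)             # initial search direction
    rs_old = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rs_old / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs_old) * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x

# 2x2 SPD system: [[4, 1], [1, 3]] x = [1, 2]  ->  x = [1/11, 7/11]
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)
```

On an n-dimensional SPD system, exact arithmetic CG converges in at most n iterations; HPCG's GFLOP/s figure measures how fast the memory-bound sparse kernels inside each iteration run.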

High Performance Conjugate Gradient 3.1 (GFLOP/s, More Is Better)
r1: 3.96177 [SE +/- 0.00082, N = 3] (Min: 3.96 / Avg: 3.96 / Max: 3.96)
r2: 3.96068 [SE +/- 0.00692, N = 3] (Min: 3.95 / Avg: 3.96 / Max: 3.97)
r3: 3.95457 [SE +/- 0.01196, N = 3] (Min: 3.94 / Avg: 3.95 / Max: 3.98)
1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -pthread -lmpi_cxx -lmpi

Unigine Superposition

This test calculates the average frame-rate within the Superposition demo for the Unigine engine, released in 2017. This engine is extremely demanding on the system's graphics card. Learn more via the OpenBenchmarking.org test page.

Unigine Superposition 1.0 - Resolution: 1920 x 1080 - Mode: Fullscreen - Quality: Ultra - Renderer: OpenGL (Frames Per Second, More Is Better)
r1: 25.1 [SE +/- 0.06, N = 3] (Min: 25 / Avg: 25.1 / Max: 25.2; MAX: 29.3)
r2: 25.4 [SE +/- 0.03, N = 3] (Min: 25.3 / Avg: 25.37 / Max: 25.4; MAX: 29.4)
r3: 25.3 [SE +/- 0.03, N = 3] (Min: 25.2 / Avg: 25.27 / Max: 25.3; MAX: 29.7)

Unigine Superposition 1.0 - Resolution: 1920 x 1080 - Mode: Fullscreen - Quality: High - Renderer: OpenGL (Frames Per Second, More Is Better)
r1: 65.9 [SE +/- 0.19, N = 3] (Min: 65.7 / Avg: 65.93 / Max: 66.3; MAX: 81.6)
r2: 66.5 [SE +/- 0.12, N = 3] (Min: 66.3 / Avg: 66.47 / Max: 66.7; MAX: 80.8)
r3: 66.2 [SE +/- 0.09, N = 3] (Min: 66.1 / Avg: 66.23 / Max: 66.4; MAX: 80.3)

Unigine Superposition 1.0 - Resolution: 1920 x 1080 - Mode: Fullscreen - Quality: Medium - Renderer: OpenGL (Frames Per Second, More Is Better)
r1: 90.4 [SE +/- 0.15, N = 3] (Min: 90.2 / Avg: 90.4 / Max: 90.7; MAX: 114.5)
r2: 90.6 [SE +/- 0.15, N = 3] (Min: 90.4 / Avg: 90.63 / Max: 90.9; MAX: 114.4)
r3: 90.5 [SE +/- 0.15, N = 3] (Min: 90.3 / Avg: 90.5 / Max: 90.8; MAX: 113)

Unigine Superposition 1.0 - Resolution: 1920 x 1080 - Mode: Fullscreen - Quality: Low - Renderer: OpenGL (Frames Per Second, More Is Better)
r1: 177.7 [SE +/- 0.23, N = 3] (Min: 177.3 / Avg: 177.73 / Max: 178.1; MAX: 260.1)
r2: 178.1 [SE +/- 0.71, N = 3] (Min: 176.8 / Avg: 178.13 / Max: 179.2; MAX: 259.4)
r3: 177.4 [SE +/- 0.52, N = 3] (Min: 176.4 / Avg: 177.43 / Max: 178.1; MAX: 263.9)

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Swirl (Iterations Per Minute, More Is Better)
r1: 207 [SE +/- 1.72, N = 8] (Min: 205 / Avg: 207 / Max: 219)
r2: 207 [SE +/- 1.60, N = 10] (Min: 204 / Avg: 206.7 / Max: 221)
r3: 207 [SE +/- 1.72, N = 8] (Min: 205 / Avg: 207 / Max: 219)
1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

CLOMP

CLOMP is the C version of the Livermore OpenMP benchmark developed to measure OpenMP overheads and other performance impacts due to threading in order to influence future system designs. This particular test profile configuration is currently set to look at the OpenMP static schedule speed-up across all available CPU cores using the recommended test configuration. Learn more via the OpenBenchmarking.org test page.
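The speedup figure CLOMP reports can be read as plain arithmetic: parallel speedup is the serial time divided by the parallel time, and dividing that by the thread count gives parallel efficiency. A small sketch with hypothetical timings (the 8.0 s / 2.2 s numbers below are made up for illustration, not taken from this run):

```python
# Sketch of the OpenMP speedup metric CLOMP reports.

def omp_speedup(serial_seconds, parallel_seconds):
    """Speedup = serial time / parallel time."""
    return serial_seconds / parallel_seconds

def efficiency(speedup, num_threads):
    """Parallel efficiency: fraction of ideal linear scaling achieved."""
    return speedup / num_threads

# Hypothetical numbers: a loop taking 8.0 s serially and 2.2 s on 8 threads
# yields ~3.64x speedup, in the same ballpark as the ~3.7 measured here.
s = omp_speedup(8.0, 2.2)
e = efficiency(s, 8)     # ~0.45, i.e. about 45% of ideal 8x scaling
```

A speedup well below the core count, as seen in these results, indicates the OpenMP overheads and memory effects the benchmark is designed to expose.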

CLOMP 1.2 - Static OMP Speedup (Speedup, More Is Better)
r1: 3.7 [SE +/- 0.03, N = 3] (Min: 3.7 / Avg: 3.73 / Max: 3.8)
r2: 2.5 [SE +/- 0.03, N = 15] (Min: 2.4 / Avg: 2.51 / Max: 2.8)
r3: 3.6 [SE +/- 0.03, N = 15] (Min: 3.5 / Avg: 3.62 / Max: 3.8)
1. (CC) gcc options: -fopenmp -O3 -lm

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.90 - Blend File: Fishy Cat - Compute: CUDA (Seconds, Fewer Is Better)
r1: 168.87 [SE +/- 0.10, N = 3] (Min: 168.73 / Avg: 168.87 / Max: 169.06)
r2: 167.96 [SE +/- 0.11, N = 3] (Min: 167.85 / Avg: 167.96 / Max: 168.18)
r3: 168.08 [SE +/- 0.05, N = 3] (Min: 167.98 / Avg: 168.08 / Max: 168.14)

Warsow

This is a benchmark of Warsow, a popular open-source first-person shooter. This game uses the QFusion engine. Learn more via the OpenBenchmarking.org test page.

Warsow 2.5 Beta - Resolution: 1920 x 1080 (Frames Per Second, More Is Better)
r1: 955.6 [SE +/- 13.76, N = 12] (Min: 804.8 / Avg: 955.63 / Max: 975.4)
r2: 967.9 [SE +/- 1.46, N = 3] (Min: 965.6 / Avg: 967.87 / Max: 970.6)
r3: 968.6 [SE +/- 1.81, N = 3] (Min: 966.1 / Avg: 968.57 / Max: 972.1)

LuxCoreRender OpenCL

LuxCoreRender is an open-source physically based renderer. This test profile is focused on running LuxCoreRender on OpenCL accelerators/GPUs; the alternative luxcorerender test profile targets CPU execution and uses a different set of tests. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender OpenCL 2.3 - Scene: LuxCore Benchmark (M samples/sec, More Is Better)
r1: 2.26 [SE +/- 0.04, N = 12] (Min: 1.77 / Avg: 2.26 / Max: 2.35; MIN: 0.14 / MAX: 2.63)
r2: 2.31 [SE +/- 0.01, N = 3] (Min: 2.3 / Avg: 2.31 / Max: 2.32; MIN: 0.27 / MAX: 2.63)
r3: 2.29 [SE +/- 0.01, N = 3] (Min: 2.27 / Avg: 2.29 / Max: 2.31; MIN: 0.27 / MAX: 2.64)

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
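An "average inference time" figure of this kind is typically produced by warming up, then timing repeated invocations and averaging. The sketch below shows that measurement pattern with a placeholder workload standing in for the model call; it is not the TensorFlow Lite benchmark harness itself.

```python
import time

# Sketch of measuring average per-call latency in microseconds:
# warm up first, then time a batch of calls and divide.

def average_inference_us(run_once, warmup=3, iterations=10):
    for _ in range(warmup):          # warm caches/allocators before timing
        run_once()
    start = time.perf_counter()
    for _ in range(iterations):
        run_once()
    elapsed = time.perf_counter() - start
    return elapsed / iterations * 1e6    # seconds -> microseconds per call

fake_model = lambda: sum(i * i for i in range(1000))  # placeholder workload
avg_us = average_inference_us(fake_model)
```

`time.perf_counter` is used rather than `time.time` because it is a monotonic high-resolution clock suited to interval measurement.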

TensorFlow Lite 2020-08-23 - Model: Inception ResNet V2 (Microseconds, Fewer Is Better)
r1: 4660197 [SE +/- 8775.31, N = 3] (Min: 4642680 / Avg: 4660196.67 / Max: 4669900)
r2: 4670567 [SE +/- 8796.49, N = 3] (Min: 4653140 / Avg: 4670566.67 / Max: 4681370)
r3: 4677473 [SE +/- 8398.83, N = 3] (Min: 4660680 / Avg: 4677473.33 / Max: 4686200)

TensorFlow Lite 2020-08-23 - Model: Inception V4 (Microseconds, Fewer Is Better)
r1: 5163190 [SE +/- 5618.75, N = 3] (Min: 5152020 / Avg: 5163190 / Max: 5169840)
r2: 5168183 [SE +/- 7685.69, N = 3] (Min: 5153100 / Avg: 5168183.33 / Max: 5178290)
r3: 5178263 [SE +/- 8609.77, N = 3] (Min: 5163010 / Avg: 5178263.33 / Max: 5192810)

LuxCoreRender OpenCL

LuxCoreRender is an open-source physically based renderer. This test profile is focused on running LuxCoreRender on OpenCL accelerators/GPUs; the alternative luxcorerender test profile targets CPU execution and uses a different set of tests. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender OpenCL 2.3 - Scene: Food (M samples/sec, More Is Better)
r1: 1.27 [SE +/- 0.04, N = 12] (Min: 0.85 / Avg: 1.27 / Max: 1.32; MIN: 0.13 / MAX: 1.57)
r2: 1.32 [SE +/- 0.01, N = 3] (Min: 1.31 / Avg: 1.32 / Max: 1.33; MIN: 0.29 / MAX: 1.57)
r3: 1.30 [SE +/- 0.02, N = 3] (Min: 1.27 / Avg: 1.3 / Max: 1.32; MIN: 0.26 / MAX: 1.57)

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.4 - Time To Compile (Seconds, Fewer Is Better)
r1: 151.66 [SE +/- 0.33, N = 3] (Min: 151.32 / Avg: 151.66 / Max: 152.31)
r2: 152.21 [SE +/- 0.24, N = 3] (Min: 151.82 / Avg: 152.21 / Max: 152.65)
r3: 151.48 [SE +/- 0.75, N = 3] (Min: 150.39 / Avg: 151.48 / Max: 152.91)

OctaneBench

OctaneBench is a test of the OctaneRender on the GPU and requires the use of NVIDIA CUDA. Learn more via the OpenBenchmarking.org test page.

OctaneBench 2020.1 - Total Score (Score, More Is Better)
r1: 189.09
r2: 189.10
r3: 189.32

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1 - Model: Person Detection 0106 FP32 - Device: CPU (ms, Fewer Is Better)
r1: 5069.44 [SE +/- 15.43, N = 3] (Min: 5050.47 / Avg: 5069.44 / Max: 5100)
r2: 5079.89 [SE +/- 9.68, N = 9] (Min: 5023.61 / Avg: 5079.89 / Max: 5111.12)
r3: 5073.09 [SE +/- 14.45, N = 5] (Min: 5034.75 / Avg: 5073.09 / Max: 5109.15)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

OpenVINO 2021.1 - Model: Person Detection 0106 FP32 - Device: CPU (FPS, More Is Better)
r1: 0.79 [SE +/- 0.01, N = 3] (Min: 0.78 / Avg: 0.79 / Max: 0.8)
r2: 0.79 [SE +/- 0.01, N = 9] (Min: 0.78 / Avg: 0.79 / Max: 0.84)
r3: 0.79 [SE +/- 0.01, N = 5] (Min: 0.78 / Avg: 0.79 / Max: 0.82)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.1 - Video Input: Chimera 1080p (FPS, More Is Better)
r1: 489.84 [SE +/- 5.73, N = 14] (Min: 480.78 / Avg: 489.84 / Max: 564.13; MIN: 317.1 / MAX: 898.12)
r2: 486.46 [SE +/- 3.02, N = 14] (Min: 479.98 / Avg: 486.46 / Max: 525.02; MIN: 316.37 / MAX: 900.57)
r3: 487.57 [SE +/- 3.24, N = 13] (Min: 481.02 / Avg: 487.57 / Max: 525.93; MIN: 316.7 / MAX: 911.47)
1. (CC) gcc options: -pthread

LuxCoreRender OpenCL

LuxCoreRender is an open-source physically based renderer. This test profile is focused on running LuxCoreRender on OpenCL accelerators/GPUs; the alternative luxcorerender test profile targets CPU execution and uses a different set of tests. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender OpenCL 2.3 - Scene: DLSC (M samples/sec, More Is Better)
r1: 2.70 [SE +/- 0.06, N = 12] (Min: 2.08 / Avg: 2.7 / Max: 2.76; MIN: 0.69 / MAX: 2.81)
r2: 2.77 [SE +/- 0.00, N = 3] (Min: 2.77 / Avg: 2.77 / Max: 2.77; MIN: 2.57 / MAX: 2.84)
r3: 2.76 [SE +/- 0.00, N = 3] (Min: 2.75 / Avg: 2.76 / Max: 2.76; MIN: 2.56 / MAX: 2.84)

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer ISPC - Model: Crown (Frames Per Second, More Is Better)
r1: 7.0735 [SE +/- 0.0830, N = 3] (Min: 6.96 / Avg: 7.07 / Max: 7.24; MIN: 6.66 / MAX: 12.73)
r2: 6.9794 [SE +/- 0.0728, N = 4] (Min: 6.85 / Avg: 6.98 / Max: 7.19; MIN: 6.57 / MAX: 12.32)
r3: 6.9976 [SE +/- 0.0756, N = 5] (Min: 6.84 / Avg: 7 / Max: 7.29; MIN: 6.56 / MAX: 12.56)

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.90 - Blend File: Classroom - Compute: NVIDIA OptiX (Seconds, Fewer Is Better)
r1: 116.76 [SE +/- 0.13, N = 3] (Min: 116.56 / Avg: 116.76 / Max: 117.01)
r2: 116.15 [SE +/- 0.23, N = 3] (Min: 115.74 / Avg: 116.15 / Max: 116.53)
r3: 116.26 [SE +/- 0.13, N = 3] (Min: 116.01 / Avg: 116.26 / Max: 116.39)

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12 - Settings: UASTC Level 3 (Seconds, Fewer Is Better)
r1: 110.84 [SE +/- 0.55, N = 3] (Min: 109.74 / Avg: 110.84 / Max: 111.45)
r2: 110.93 [SE +/- 0.55, N = 3] (Min: 109.83 / Avg: 110.93 / Max: 111.58)
r3: 111.04 [SE +/- 0.53, N = 3] (Min: 109.97 / Avg: 111.04 / Max: 111.6)
1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

FAHBench

FAHBench is a Folding@Home benchmark on the GPU. Learn more via the OpenBenchmarking.org test page.

FAHBench 2.3.2 (Ns Per Day, More Is Better)
r1: 186.46 [SE +/- 0.23, N = 3] (Min: 186.1 / Avg: 186.46 / Max: 186.88)
r2: 186.48 [SE +/- 0.14, N = 3] (Min: 186.26 / Avg: 186.48 / Max: 186.75)
r3: 186.62 [SE +/- 0.11, N = 3] (Min: 186.4 / Avg: 186.62 / Max: 186.79)

Timed HMMer Search

This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of the Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.
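The dynamic programming at the core of such HMM searches can be illustrated with the forward algorithm, which sums the probability of an observed sequence over all state paths. The toy two-state model below (states, transition, and emission probabilities are all invented for illustration) is vastly simpler than a Pfam profile HMM, but the recurrence is the same shape:

```python
# Toy forward algorithm sketch: P(sequence) under a tiny hand-made HMM.

def forward(obs, states, start_p, trans_p, emit_p):
    """Total probability of the observation sequence, via dynamic programming."""
    prev = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for symbol in obs[1:]:
        prev = {s: emit_p[s][symbol] * sum(prev[t] * trans_p[t][s]
                                           for t in states)
                for s in states}
    return sum(prev.values())

states = ("match", "insert")
start_p = {"match": 0.7, "insert": 0.3}
trans_p = {"match": {"match": 0.8, "insert": 0.2},
           "insert": {"match": 0.4, "insert": 0.6}}
emit_p = {"match": {"A": 0.9, "G": 0.1},
          "insert": {"A": 0.2, "G": 0.8}}

p = forward("AG", states, start_p, trans_p, emit_p)
```

Real profile HMM search additionally works in log space and scores sequences against a null model, which this sketch omits.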

Timed HMMer Search 3.3.1 - Pfam Database Search (Seconds, Fewer Is Better)
r1: 105.53 [SE +/- 0.06, N = 3] (Min: 105.45 / Avg: 105.53 / Max: 105.65)
r2: 105.57 [SE +/- 0.04, N = 3] (Min: 105.52 / Avg: 105.57 / Max: 105.65)
r3: 105.51 [SE +/- 0.02, N = 3] (Min: 105.47 / Avg: 105.5 / Max: 105.54)
1. (CC) gcc options: -O3 -pthread -lhmmer -leasel -lm

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer - Model: Crown (Frames Per Second, More Is Better)
r1: 6.0806 [SE +/- 0.0766, N = 3] (Min: 5.99 / Avg: 6.08 / Max: 6.23; MIN: 5.86 / MAX: 11.02)
r2: 6.0989 [SE +/- 0.0737, N = 3] (Min: 6.02 / Avg: 6.1 / Max: 6.25; MIN: 5.88 / MAX: 10.98)
r3: 6.0641 [SE +/- 0.0667, N = 3] (Min: 6 / Avg: 6.06 / Max: 6.2; MIN: 5.86 / MAX: 10.95)

RealSR-NCNN

RealSR-NCNN is an NCNN neural network implementation of the RealSR project and accelerated using the Vulkan API. RealSR is the Real-World Super Resolution via Kernel Estimation and Noise Injection. NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. This test profile times how long it takes to increase the resolution of a sample image by a scale of 4x with Vulkan. Learn more via the OpenBenchmarking.org test page.
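For a sense of what "4x scale" means here: the trivial baseline that super-resolution networks improve on is plain nearest-neighbour upscaling, which just replicates each pixel. The sketch below shows that baseline on a toy 2x2 "image" (a list of rows); RealSR's neural network instead predicts plausible high-frequency detail, which this does not attempt.

```python
# Nearest-neighbour 4x upscale sketch (the naive baseline, not RealSR).

def upscale_nearest(image, scale=4):
    """Replicate each pixel into a scale x scale block."""
    out = []
    for row in image:
        wide = [px for px in row for _ in range(scale)]  # widen the row
        out.extend([list(wide) for _ in range(scale)])   # repeat it scale times
    return out

tiny = [[0, 255],
        [255, 0]]
big = upscale_nearest(tiny)   # 2x2 -> 8x8
```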

RealSR-NCNN 20200818 - Scale: 4x - TAA: Yes (Seconds, Fewer Is Better)
r1: 99.81 [SE +/- 0.31, N = 3] (Min: 99.36 / Avg: 99.81 / Max: 100.4)
r2: 100.62 [SE +/- 0.48, N = 3] (Min: 99.72 / Avg: 100.62 / Max: 101.34)
r3: 100.75 [SE +/- 0.35, N = 3] (Min: 100.16 / Avg: 100.75 / Max: 101.36)

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

Timed FFmpeg Compilation 4.2.2 - Time To Compile (Seconds, Fewer Is Better)
r1: 100.26 [SE +/- 0.78, N = 3] (Min: 98.99 / Avg: 100.26 / Max: 101.69)
r2: 100.40 [SE +/- 0.39, N = 3] (Min: 99.62 / Avg: 100.4 / Max: 100.81)
r3: 100.20 [SE +/- 0.30, N = 3] (Min: 99.61 / Avg: 100.2 / Max: 100.57)

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
r1: 7140.50 [SE +/- 2.95, N = 3] (Min: 7137.36 / Avg: 7140.5 / Max: 7146.4; MIN: 7021.68)
r2: 7159.42 [SE +/- 4.70, N = 3] (Min: 7150.93 / Avg: 7159.42 / Max: 7167.17; MIN: 7041.4)
r3: 7151.58 [SE +/- 6.73, N = 3] (Min: 7138.44 / Avg: 7151.58 / Max: 7160.66; MIN: 7027.2)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
r1: 7155.41 [SE +/- 12.55, N = 3] (Min: 7141.09 / Avg: 7155.41 / Max: 7180.43; MIN: 7025.22)
r2: 7159.48 [SE +/- 1.75, N = 3] (Min: 7156.72 / Avg: 7159.48 / Max: 7162.72; MIN: 7040.61)
r3: 7169.03 [SE +/- 6.55, N = 3] (Min: 7160.71 / Avg: 7169.03 / Max: 7181.96; MIN: 7046.49)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
r1: 7144.23 [SE +/- 3.89, N = 3] (Min: 7137.39 / Avg: 7144.23 / Max: 7150.85; MIN: 7028.46)
r2: 7154.66 [SE +/- 0.92, N = 3] (Min: 7152.86 / Avg: 7154.66 / Max: 7155.92; MIN: 7035.88)
r3: 7147.09 [SE +/- 2.23, N = 3] (Min: 7142.78 / Avg: 7147.09 / Max: 7150.25; MIN: 7033.98)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.90 - Blend File: BMW27 - Compute: NVIDIA OptiX (Seconds, Fewer Is Better)
r1: 41.47 [SE +/- 3.33, N = 15] (Min: 38.09 / Avg: 41.47 / Max: 88.04)
r2: 38.07 [SE +/- 0.02, N = 3] (Min: 38.04 / Avg: 38.07 / Max: 38.11)
r3: 38.07 [SE +/- 0.05, N = 3] (Min: 38.01 / Avg: 38.07 / Max: 38.17)

Blender 2.90 - Blend File: BMW27 - Compute: CUDA (Seconds, Fewer Is Better)
r1: 91.00 [SE +/- 0.14, N = 3] (Min: 90.74 / Avg: 91 / Max: 91.2)
r2: 90.82 [SE +/- 0.16, N = 3] (Min: 90.49 / Avg: 90.82 / Max: 91.01)
r3: 90.93 [SE +/- 0.10, N = 3] (Min: 90.78 / Avg: 90.93 / Max: 91.12)

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.
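The MB/s figure is simply the uncompressed input size divided by the compression time. A sketch of that metric using `zlib` from the Python standard library (chosen only because stdlib Python has no zstd binding; the benchmark itself runs zstd at level 19 on an Ubuntu ISO):

```python
import time
import zlib

# Sketch of a compression-throughput metric: MB of input processed per
# second, plus the achieved compression ratio.

def compress_mbps(data, level=9):
    start = time.perf_counter()
    compressed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    ratio = len(data) / len(compressed)          # >1 means it shrank
    mbps = len(data) / (1024 * 1024) / elapsed   # input MB per second
    return mbps, ratio

payload = b"phoronix " * 200_000   # ~1.8 MB of highly compressible data
mbps, ratio = compress_mbps(payload)
```

Note that throughput at high compression levels trades directly against ratio, which is why zstd level 19 here runs at tens of MB/s rather than the hundreds its fast levels achieve.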

Zstd Compression 1.4.5 - Compression Level: 19 (MB/s, More Is Better)
r1: 28.8 [SE +/- 0.07, N = 3] (Min: 28.7 / Avg: 28.83 / Max: 28.9)
r2: 28.7 [SE +/- 0.06, N = 3] (Min: 28.6 / Avg: 28.7 / Max: 28.8)
r3: 28.8 [SE +/- 0.06, N = 3] (Min: 28.7 / Avg: 28.8 / Max: 28.9)
1. (CC) gcc options: -O3 -pthread -lz -llzma

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Cartoon (Seconds, Fewer Is Better)
r1: 86.79 [SE +/- 0.12, N = 3] (Min: 86.54 / Avg: 86.79 / Max: 86.92)
r2: 87.32 [SE +/- 0.19, N = 3] (Min: 86.95 / Avg: 87.32 / Max: 87.58)
r3: 86.99 [SE +/- 0.09, N = 3] (Min: 86.83 / Avg: 86.99 / Max: 87.11)

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1 - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (ms, Fewer Is Better)
r1: 1.17 [SE +/- 0.00, N = 3] (Min: 1.17 / Avg: 1.17 / Max: 1.18)
r2: 1.19 [SE +/- 0.00, N = 4] (Min: 1.19 / Avg: 1.19 / Max: 1.19)
r3: 1.19 [SE +/- 0.00, N = 6] (Min: 1.18 / Avg: 1.19 / Max: 1.19)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

OpenVINO 2021.1 - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (FPS, More Is Better)
r1: 3442.78 [SE +/- 33.67, N = 3] (Min: 3404.42 / Avg: 3442.78 / Max: 3509.89)
r2: 3403.45 [SE +/- 38.35, N = 4] (Min: 3358.96 / Avg: 3403.45 / Max: 3517.97)
r3: 3405.92 [SE +/- 34.05, N = 6] (Min: 3365.53 / Avg: 3405.92 / Max: 3575.77)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
r1: 3795.02 [SE +/- 2.45, N = 3] (Min: 3790.14 / Avg: 3795.02 / Max: 3797.77; MIN: 3682.24)
r2: 3797.05 [SE +/- 2.65, N = 3] (Min: 3792.15 / Avg: 3797.05 / Max: 3801.24; MIN: 3673.18)
r3: 3797.72 [SE +/- 3.77, N = 3] (Min: 3791 / Avg: 3797.72 / Max: 3804.04; MIN: 3684.19)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
r1: 3795.81 [SE +/- 6.76, N = 3] (Min: 3784.22 / Avg: 3795.81 / Max: 3807.62; MIN: 3687.23)
r2: 3800.41 [SE +/- 4.34, N = 3] (Min: 3792.58 / Avg: 3800.41 / Max: 3807.57; MIN: 3681.23)
r3: 3798.12 [SE +/- 3.22, N = 3] (Min: 3791.7 / Avg: 3798.12 / Max: 3801.78; MIN: 3685.27)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
r1: 3797.32 [SE +/- 1.61, N = 3] (Min: 3794.41 / Avg: 3797.32 / Max: 3799.97; MIN: 3686.53)
r2: 3799.45 [SE +/- 1.20, N = 3] (Min: 3797.39 / Avg: 3799.45 / Max: 3801.56; MIN: 3692.97)
r3: 3792.87 [SE +/- 1.33, N = 3] (Min: 3790.24 / Avg: 3792.87 / Max: 3794.53; MIN: 3672.83)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: regnety_400m (ms, Fewer Is Better)
r1: 19.16 [SE +/- 0.06, N = 3] (Min: 19.1 / Avg: 19.16 / Max: 19.29; MIN: 18.07 / MAX: 22.36)
r2: 18.91 [SE +/- 0.24, N = 3] (Min: 18.44 / Avg: 18.91 / Max: 19.24; MIN: 13.5 / MAX: 30.63)
r3: 19.38 [SE +/- 0.10, N = 3] (Min: 19.19 / Avg: 19.38 / Max: 19.53; MIN: 14.45 / MAX: 42.2)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: squeezenet_ssdr1r2r3714212835SE +/- 0.14, N = 3SE +/- 0.03, N = 3SE +/- 0.05, N = 327.6427.5127.63MIN: 27 / MAX: 40.14MIN: 26.93 / MAX: 43.6MIN: 27.02 / MAX: 46.561. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: squeezenet_ssdr1r2r3612182430Min: 27.45 / Avg: 27.64 / Max: 27.91Min: 27.45 / Avg: 27.51 / Max: 27.54Min: 27.55 / Avg: 27.63 / Max: 27.711. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: yolov4-tinyr1r2r3816243240SE +/- 0.48, N = 3SE +/- 0.05, N = 3SE +/- 0.02, N = 335.9535.5935.66MIN: 34.4 / MAX: 55.63MIN: 34.42 / MAX: 51.24MIN: 34.45 / MAX: 49.151. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: yolov4-tinyr1r2r3816243240Min: 35.43 / Avg: 35.95 / Max: 36.9Min: 35.5 / Avg: 35.59 / Max: 35.64Min: 35.64 / Avg: 35.66 / Max: 35.71. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: resnet50r1r2r3918273645SE +/- 0.51, N = 3SE +/- 0.03, N = 3SE +/- 0.05, N = 337.8137.3037.22MIN: 34.04 / MAX: 52.8MIN: 33.91 / MAX: 56.28MIN: 33.9 / MAX: 52.841. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: resnet50r1r2r3816243240Min: 37.26 / Avg: 37.81 / Max: 38.82Min: 37.25 / Avg: 37.3 / Max: 37.35Min: 37.15 / Avg: 37.22 / Max: 37.321. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: alexnetr1r2r348121620SE +/- 0.08, N = 3SE +/- 0.04, N = 3SE +/- 0.03, N = 315.5015.4615.49MIN: 14.41 / MAX: 55.15MIN: 14.35 / MAX: 27.24MIN: 14.41 / MAX: 24.831. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: alexnetr1r2r348121620Min: 15.4 / Avg: 15.5 / Max: 15.65Min: 15.42 / Avg: 15.46 / Max: 15.54Min: 15.44 / Avg: 15.49 / Max: 15.521. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: resnet18r1r2r3510152025SE +/- 0.02, N = 3SE +/- 0.04, N = 3SE +/- 0.02, N = 318.6218.7118.66MIN: 17.08 / MAX: 32.57MIN: 17.06 / MAX: 33.58MIN: 17.05 / MAX: 30.941. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: resnet18r1r2r3510152025Min: 18.58 / Avg: 18.62 / Max: 18.66Min: 18.62 / Avg: 18.71 / Max: 18.77Min: 18.61 / Avg: 18.66 / Max: 18.681. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: vgg16r1r2r31632486480SE +/- 0.20, N = 3SE +/- 0.03, N = 3SE +/- 0.12, N = 372.0971.9171.86MIN: 70.5 / MAX: 88.28MIN: 70.43 / MAX: 92.47MIN: 70.48 / MAX: 881. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: vgg16r1r2r31428425670Min: 71.87 / Avg: 72.09 / Max: 72.5Min: 71.85 / Avg: 71.91 / Max: 71.94Min: 71.64 / Avg: 71.86 / Max: 72.061. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: googlenetr1r2r3510152025SE +/- 0.01, N = 3SE +/- 0.08, N = 3SE +/- 0.06, N = 319.9820.0120.21MIN: 18.95 / MAX: 23.24MIN: 18.96 / MAX: 24.67MIN: 19.11 / MAX: 32.71. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: googlenetr1r2r3510152025Min: 19.96 / Avg: 19.98 / Max: 20.01Min: 19.92 / Avg: 20.01 / Max: 20.16Min: 20.1 / Avg: 20.21 / Max: 20.281. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: blazefacer1r2r30.5851.171.7552.342.925SE +/- 0.00, N = 3SE +/- 0.05, N = 3SE +/- 0.02, N = 32.542.602.57MIN: 2.35 / MAX: 2.74MIN: 2.45 / MAX: 10.37MIN: 2.45 / MAX: 2.831. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: blazefacer1r2r3246810Min: 2.53 / Avg: 2.54 / Max: 2.54Min: 2.54 / Avg: 2.6 / Max: 2.7Min: 2.55 / Avg: 2.57 / Max: 2.621. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: efficientnet-b0r1r2r33691215SE +/- 0.05, N = 3SE +/- 0.96, N = 3SE +/- 0.96, N = 310.009.059.06MIN: 9.46 / MAX: 24.32MIN: 6.99 / MAX: 21.76MIN: 7.04 / MAX: 12.381. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: efficientnet-b0r1r2r33691215Min: 9.92 / Avg: 10 / Max: 10.1Min: 7.14 / Avg: 9.05 / Max: 10.07Min: 7.14 / Avg: 9.06 / Max: 10.051. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: mnasnetr1r2r3246810SE +/- 0.02, N = 3SE +/- 0.75, N = 3SE +/- 0.74, N = 36.675.965.96MIN: 5.99 / MAX: 21.18MIN: 4.32 / MAX: 14.32MIN: 4.33 / MAX: 28.211. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: mnasnetr1r2r33691215Min: 6.65 / Avg: 6.67 / Max: 6.7Min: 4.45 / Avg: 5.96 / Max: 6.79Min: 4.48 / Avg: 5.96 / Max: 6.731. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: shufflenet-v2r1r2r3246810SE +/- 0.03, N = 3SE +/- 0.94, N = 3SE +/- 0.95, N = 37.936.957.03MIN: 7.52 / MAX: 16.61MIN: 5.01 / MAX: 9.68MIN: 5.04 / MAX: 20.641. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: shufflenet-v2r1r2r33691215Min: 7.87 / Avg: 7.93 / Max: 7.97Min: 5.08 / Avg: 6.95 / Max: 7.91Min: 5.13 / Avg: 7.03 / Max: 8.051. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU-v3-v3 - Model: mobilenet-v3r1r2r31.30732.61463.92195.22926.5365SE +/- 0.65, N = 3SE +/- 0.65, N = 3SE +/- 0.62, N = 35.745.815.81MIN: 4.3 / MAX: 7.75MIN: 4.43 / MAX: 17.76MIN: 4.48 / MAX: 10.591. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU-v3-v3 - Model: mobilenet-v3r1r2r3246810Min: 4.43 / Avg: 5.74 / Max: 6.4Min: 4.52 / Avg: 5.81 / Max: 6.52Min: 4.56 / Avg: 5.81 / Max: 6.451. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU-v2-v2 - Model: mobilenet-v2r1r2r3246810SE +/- 0.67, N = 3SE +/- 0.73, N = 3SE +/- 0.73, N = 37.317.227.23MIN: 5.51 / MAX: 16.43MIN: 5.54 / MAX: 12.03MIN: 5.55 / MAX: 12.31. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU-v2-v2 - Model: mobilenet-v2r1r2r33691215Min: 5.97 / Avg: 7.31 / Max: 8.02Min: 5.76 / Avg: 7.22 / Max: 7.99Min: 5.77 / Avg: 7.23 / Max: 7.971. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: mobilenetr1r2r3612182430SE +/- 0.17, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 326.6226.6326.53MIN: 25.69 / MAX: 38.05MIN: 25.7 / MAX: 41.21MIN: 25.78 / MAX: 41.251. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: mobilenetr1r2r3612182430Min: 26.44 / Avg: 26.62 / Max: 26.95Min: 26.62 / Avg: 26.63 / Max: 26.64Min: 26.5 / Avg: 26.53 / Max: 26.561. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 (ms, fewer is better). Per run: average (SE +/- standard error, N = trials; Min/Max of the trial averages; MIN/MAX sample times observed within a run). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Target: Vulkan GPU - Model: regnety_400m
  r1: 19.16 (SE +/- 0.09, N = 3; Min: 19.05 / Max: 19.33; MIN: 17.94 / MAX: 21.24)
  r2: 17.15 (SE +/- 1.83, N = 3; Min: 13.49 / Max: 19.03; MIN: 13.30 / MAX: 38.12)
  r3: 17.60 (SE +/- 1.77, N = 3; Min: 14.07 / Max: 19.42; MIN: 13.79 / MAX: 32.97)

Target: Vulkan GPU - Model: squeezenet_ssd
  r1: 27.58 (SE +/- 0.02, N = 3; Min: 27.56 / Max: 27.61; MIN: 26.94 / MAX: 43.23)
  r2: 27.52 (SE +/- 0.02, N = 3; Min: 27.49 / Max: 27.56; MIN: 26.95 / MAX: 42.60)
  r3: 27.55 (SE +/- 0.02, N = 3; Min: 27.51 / Max: 27.59; MIN: 26.92 / MAX: 41.99)

Target: Vulkan GPU - Model: yolov4-tiny
  r1: 35.52 (SE +/- 0.03, N = 3; Min: 35.46 / Max: 35.56; MIN: 34.38 / MAX: 51.44)
  r2: 35.51 (SE +/- 0.01, N = 3; Min: 35.49 / Max: 35.53; MIN: 33.05 / MAX: 50.05)
  r3: 35.53 (SE +/- 0.05, N = 3; Min: 35.47 / Max: 35.62; MIN: 32.99 / MAX: 52.01)

Target: Vulkan GPU - Model: resnet50
  r1: 37.25 (SE +/- 0.03, N = 3; Min: 37.21 / Max: 37.30; MIN: 34.07 / MAX: 48.19)
  r2: 37.34 (SE +/- 0.06, N = 3; Min: 37.26 / Max: 37.45; MIN: 33.97 / MAX: 56.32)
  r3: 37.26 (SE +/- 0.01, N = 3; Min: 37.25 / Max: 37.27; MIN: 33.79 / MAX: 52.48)

Target: Vulkan GPU - Model: alexnet
  r1: 15.44 (SE +/- 0.03, N = 3; Min: 15.38 / Max: 15.49; MIN: 14.41 / MAX: 26.42)
  r2: 15.53 (SE +/- 0.04, N = 3; Min: 15.46 / Max: 15.61; MIN: 14.41 / MAX: 25.62)
  r3: 15.50 (SE +/- 0.05, N = 3; Min: 15.41 / Max: 15.58; MIN: 14.41 / MAX: 26.23)

Target: Vulkan GPU - Model: resnet18
  r1: 18.62 (SE +/- 0.00, N = 3; Min: 18.62 / Max: 18.63; MIN: 17.13 / MAX: 20.97)
  r2: 18.33 (SE +/- 0.34, N = 3; Min: 17.65 / Max: 18.75; MIN: 14.43 / MAX: 32.39)
  r3: 18.38 (SE +/- 0.27, N = 3; Min: 17.84 / Max: 18.68; MIN: 14.40 / MAX: 32.57)

Target: Vulkan GPU - Model: vgg16
  r1: 71.96 (SE +/- 0.05, N = 3; Min: 71.90 / Max: 72.06; MIN: 70.52 / MAX: 88.30)
  r2: 71.82 (SE +/- 0.04, N = 3; Min: 71.75 / Max: 71.88; MIN: 70.37 / MAX: 86.67)
  r3: 71.86 (SE +/- 0.02, N = 3; Min: 71.82 / Max: 71.88; MIN: 70.40 / MAX: 88.50)

Target: Vulkan GPU - Model: googlenet
  r1: 20.05 (SE +/- 0.06, N = 3; Min: 19.94 / Max: 20.13; MIN: 18.94 / MAX: 32.96)
  r2: 18.20 (SE +/- 1.77, N = 3; Min: 14.67 / Max: 20.01; MIN: 14.26 / MAX: 31.74)
  r3: 18.26 (SE +/- 1.84, N = 3; Min: 14.59 / Max: 20.19; MIN: 14.28 / MAX: 36.09)

Target: Vulkan GPU - Model: blazeface
  r1: 2.55 (SE +/- 0.02, N = 3; Min: 2.53 / Max: 2.58; MIN: 2.43 / MAX: 2.76)
  r2: 2.29 (SE +/- 0.26, N = 3; Min: 1.76 / Max: 2.56; MIN: 1.68 / MAX: 8.91)
  r3: 2.29 (SE +/- 0.25, N = 3; Min: 1.79 / Max: 2.54; MIN: 1.69 / MAX: 12.73)

Target: Vulkan GPU - Model: efficientnet-b0
  r1: 10.01 (SE +/- 0.10, N = 3; Min: 9.90 / Max: 10.21; MIN: 9.44 / MAX: 29.57)
  r2: 9.02 (SE +/- 0.95, N = 3; Min: 7.11 / Max: 10.01; MIN: 7.00 / MAX: 19.29)
  r3: 8.99 (SE +/- 0.94, N = 3; Min: 7.11 / Max: 9.93; MIN: 6.99 / MAX: 13.79)

Target: Vulkan GPU - Model: mnasnet
  r1: 6.63 (SE +/- 0.00, N = 3; Min: 6.63 / Max: 6.64; MIN: 6.21 / MAX: 8.85)
  r2: 5.86 (SE +/- 0.71, N = 3; Min: 4.44 / Max: 6.59; MIN: 4.30 / MAX: 15.47)
  r3: 5.91 (SE +/- 0.76, N = 3; Min: 4.39 / Max: 6.69; MIN: 4.32 / MAX: 7.94)

Target: Vulkan GPU - Model: shufflenet-v2
  r1: 7.92 (SE +/- 0.07, N = 3; Min: 7.84 / Max: 8.07; MIN: 7.27 / MAX: 20.30)
  r2: 6.98 (SE +/- 0.96, N = 3; Min: 5.06 / Max: 8.02; MIN: 4.98 / MAX: 27.09)
  r3: 7.05 (SE +/- 0.93, N = 3; Min: 5.19 / Max: 8.04; MIN: 5.04 / MAX: 20.37)

Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3
  r1: 5.74 (SE +/- 0.62, N = 3; Min: 4.51 / Max: 6.38; MIN: 4.43 / MAX: 9.64)
  r2: 5.73 (SE +/- 0.65, N = 3; Min: 4.43 / Max: 6.42; MIN: 4.33 / MAX: 10.47)
  r3: 5.81 (SE +/- 0.64, N = 3; Min: 4.53 / Max: 6.47; MIN: 4.41 / MAX: 25.12)

Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2
  r1: 7.23 (SE +/- 0.74, N = 3; Min: 5.76 / Max: 7.97; MIN: 5.54 / MAX: 9.59)
  r2: 7.22 (SE +/- 0.79, N = 3; Min: 5.65 / Max: 8.04; MIN: 5.41 / MAX: 20.72)
  r3: 7.19 (SE +/- 0.73, N = 3; Min: 5.73 / Max: 7.93; MIN: 5.52 / MAX: 9.67)

Target: Vulkan GPU - Model: mobilenet
  r1: 26.52 (SE +/- 0.02, N = 3; Min: 26.48 / Max: 26.56; MIN: 25.69 / MAX: 43.81)
  r2: 26.53 (SE +/- 0.02, N = 3; Min: 26.50 / Max: 26.56; MIN: 25.76 / MAX: 43.91)
  r3: 26.51 (SE +/- 0.07, N = 3; Min: 26.43 / Max: 26.65; MIN: 25.69 / MAX: 45.35)
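The viewer options at the top of this file can hide "noisy" results and results with little change or spread. One simple spread measure is the range of the per-run averages relative to their mean; a sketch applied to two NCNN Vulkan GPU results above (the thresholds in the comments are illustrative choices, not what OpenBenchmarking.org uses):

```python
def relative_spread(run_means):
    """(max - min) / mean of the per-run averages, as a fraction."""
    lo, hi = min(run_means), max(run_means)
    return (hi - lo) / (sum(run_means) / len(run_means))

# Per-run averages taken from the NCNN Vulkan GPU results above
vgg16 = [71.96, 71.82, 71.86]         # very tight: about 0.2% spread
regnety_400m = [19.16, 17.15, 17.60]  # noisy: about 11% spread
print(f"vgg16: {relative_spread(vgg16):.1%}")
print(f"regnety_400m: {relative_spread(regnety_400m):.1%}")
```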

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer - Model: Asian Dragon (frames per second, more is better)
  r1: 7.5555 (SE +/- 0.0643, N = 3; Min: 7.43 / Max: 7.65; MIN: 7.18 / MAX: 12.55)
  r2: 7.5656 (SE +/- 0.0719, N = 3; Min: 7.43 / Max: 7.68; MIN: 7.18 / MAX: 12.51)
  r3: 7.5496 (SE +/- 0.0754, N = 3; Min: 7.41 / Max: 7.67; MIN: 7.19 / MAX: 12.66)

RawTherapee

RawTherapee is a cross-platform, open-source multi-threaded RAW image processing program. Learn more via the OpenBenchmarking.org test page.

RawTherapee - Total Benchmark Time (seconds, fewer is better)
  r1: 80.59 (SE +/- 0.53, N = 3; Min: 79.53 / Max: 81.18)
  r2: 80.93 (SE +/- 0.46, N = 3; Min: 80.02 / Max: 81.42)
  r3: 80.71 (SE +/- 0.45, N = 3; Min: 79.81 / Max: 81.17)
  1. RawTherapee, version 5.8, command line.

OpenVINO

This is a test of Intel OpenVINO, a toolkit for deploying neural network models, using its built-in benchmarking support to measure the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1 - Model: Age Gender Recognition Retail 0013 FP32 - Device: CPU (ms, fewer is better)
  r1: 1.21 (SE +/- 0.00, N = 3; Min: 1.21 / Max: 1.21)
  r2: 1.23 (SE +/- 0.00, N = 5; Min: 1.22 / Max: 1.23)
  r3: 1.22 (SE +/- 0.00, N = 4; Min: 1.21 / Max: 1.22)

OpenVINO 2021.1 - Model: Age Gender Recognition Retail 0013 FP32 - Device: CPU (FPS, more is better)
  r1: 3363.55 (SE +/- 35.01, N = 3; Min: 3323.61 / Max: 3433.33)
  r2: 3307.53 (SE +/- 33.23, N = 5; Min: 3264.07 / Max: 3439.40)
  r3: 3347.93 (SE +/- 40.89, N = 4; Min: 3290.23 / Max: 3468.96)

OpenVINO 2021.1 - Model: Face Detection 0106 FP32 - Device: CPU (ms, fewer is better)
  r1: 3202.53 (SE +/- 2.58, N = 3; Min: 3197.84 / Max: 3206.73)
  r2: 3207.35 (SE +/- 1.22, N = 4; Min: 3204.86 / Max: 3210.72)
  r3: 3212.10 (SE +/- 2.51, N = 3; Min: 3208.09 / Max: 3216.72)

OpenVINO 2021.1 - Model: Face Detection 0106 FP32 - Device: CPU (FPS, more is better)
  r1: 1.26 (SE +/- 0.01, N = 3; Min: 1.25 / Max: 1.28)
  r2: 1.27 (SE +/- 0.02, N = 4; Min: 1.25 / Max: 1.31)
  r3: 1.27 (SE +/- 0.02, N = 3; Min: 1.25 / Max: 1.30)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread
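For these OpenVINO results, the reported latency and throughput are consistent with several inference requests being processed concurrently: with an assumed four parallel streams, FPS is roughly streams * 1000 / latency_ms. The stream count below is an assumption for illustration, not a value reported in this file:

```python
def approx_fps(latency_ms, streams=4):
    """Rough throughput from per-request latency, assuming `streams`
    inference requests in flight concurrently (assumed, not reported)."""
    return streams * 1000.0 / latency_ms

# Face Detection 0106 FP32: ~3205 ms latency, reported ~1.26 FPS
print(round(approx_fps(3205.0), 2))  # 1.25
# Age Gender Recognition FP32: ~1.22 ms latency, reported ~3300 FPS
print(round(approx_fps(1.22)))       # 3279
```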

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.1 - Video Input: Summer Nature 4K (FPS, more is better)
  r1: 112.75 (SE +/- 1.06, N = 6; Min: 111.52 / Max: 118.05; MIN: 99.69 / MAX: 158.99)
  r2: 112.03 (SE +/- 1.08, N = 6; Min: 110.75 / Max: 117.42; MIN: 99.17 / MAX: 157.08)
  r3: 112.65 (SE +/- 1.07, N = 6; Min: 111.46 / Max: 118.01; MIN: 99.62 / MAX: 158.58)
  1. (CC) gcc options: -pthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit for deploying neural network models, using its built-in benchmarking support to measure the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1 - Model: Person Detection 0106 FP16 - Device: CPU (ms, fewer is better)
  r1: 4961.99 (SE +/- 4.97, N = 3; Min: 4952.08 / Max: 4967.73)
  r2: 4978.25 (SE +/- 19.24, N = 3; Min: 4946.18 / Max: 5012.71)
  r3: 5006.34 (SE +/- 4.20, N = 3; Min: 4998.61 / Max: 5013.05)

OpenVINO 2021.1 - Model: Person Detection 0106 FP16 - Device: CPU (FPS, more is better)
  r1: 0.80 (SE +/- 0.00, N = 3; Min: 0.80 / Max: 0.81)
  r2: 0.80 (SE +/- 0.01, N = 3; Min: 0.79 / Max: 0.82)
  r3: 0.80 (SE +/- 0.01, N = 3; Min: 0.79 / Max: 0.82)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

Timed Eigen Compilation

This test times how long it takes to build all Eigen examples. The Eigen examples are compiled serially. Eigen is a C++ template library for linear algebra. Learn more via the OpenBenchmarking.org test page.

Timed Eigen Compilation 3.3.9 - Time To Compile (seconds, fewer is better)
  r1: 68.74 (SE +/- 0.16, N = 3; Min: 68.43 / Max: 68.92)
  r2: 67.54 (SE +/- 0.30, N = 3; Min: 67.03 / Max: 68.07)
  r3: 68.70 (SE +/- 0.22, N = 3; Min: 68.29 / Max: 69.04)

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 9 - Decompression Speed (MB/s, more is better)
  r1: 9679.8 (SE +/- 1.80, N = 5; Min: 9673.3 / Max: 9684.2)
  r2: 9664.8 (SE +/- 15.38, N = 3; Min: 9649.2 / Max: 9695.6)
  r3: 9695.2 (SE +/- 0.78, N = 3; Min: 9693.9 / Max: 9696.6)

LZ4 Compression 1.9.3 - Compression Level: 9 - Compression Speed (MB/s, more is better)
  r1: 55.72 (SE +/- 0.59, N = 5; Min: 53.88 / Max: 57.22)
  r2: 56.07 (SE +/- 0.36, N = 3; Min: 55.68 / Max: 56.79)
  r3: 57.01 (SE +/- 0.66, N = 3; Min: 55.70 / Max: 57.78)
  1. (CC) gcc options: -O3
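The asymmetry in these LZ4 results is stark: level-9 compression runs at roughly 56 MB/s while decompression exceeds 9600 MB/s. A quick arithmetic sketch of what that means for a large file (the 2700 MB size is illustrative, not the size of the actual test file):

```python
def seconds_to_process(size_mb, rate_mb_per_s):
    """Time to stream size_mb megabytes at a fixed rate in MB/s."""
    return size_mb / rate_mb_per_s

size_mb = 2700.0  # illustrative file size (assumed, not from the test)
print(f"compress:   {seconds_to_process(size_mb, 56.0):.2f} s")    # ~48 s
print(f"decompress: {seconds_to_process(size_mb, 9680.0):.2f} s")  # ~0.28 s
```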

OpenVINO

This is a test of Intel OpenVINO, a toolkit for deploying neural network models, using its built-in benchmarking support to measure the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1 - Model: Face Detection 0106 FP16 - Device: CPU (ms, fewer is better)
  r1: 3165.24 (SE +/- 4.35, N = 3; Min: 3156.88 / Max: 3171.48)
  r2: 3166.57 (SE +/- 3.88, N = 3; Min: 3159.64 / Max: 3173.05)
  r3: 3164.51 (SE +/- 7.78, N = 3; Min: 3148.95 / Max: 3172.29)

OpenVINO 2021.1 - Model: Face Detection 0106 FP16 - Device: CPU (FPS, more is better)
  r1: 1.28 (SE +/- 0.01, N = 3; Min: 1.27 / Max: 1.31)
  r2: 1.28 (SE +/- 0.01, N = 3; Min: 1.27 / Max: 1.30)
  r3: 1.28 (SE +/- 0.01, N = 3; Min: 1.26 / Max: 1.30)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer ISPC - Model: Asian Dragon (frames per second, more is better)
  r1: 9.1343 (SE +/- 0.0822, N = 3; Min: 9.03 / Max: 9.30; MIN: 8.81 / MAX: 15.06)
  r2: 9.2596 (SE +/- 0.0236, N = 3; Min: 9.22 / Max: 9.30; MIN: 8.82 / MAX: 14.99)
  r3: 9.1967 (SE +/- 0.1308, N = 3; Min: 9.05 / Max: 9.46; MIN: 8.85 / MAX: 15.00)

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 3 - Decompression Speed (MB/s, more is better)
  r1: 9676.3 (SE +/- 1.84, N = 5; Min: 9671.0 / Max: 9682.3)
  r2: 9653.7 (SE +/- 16.28, N = 3; Min: 9637.2 / Max: 9686.3)
  r3: 9685.2 (SE +/- 0.67, N = 3; Min: 9684.2 / Max: 9686.5)

LZ4 Compression 1.9.3 - Compression Level: 3 - Compression Speed (MB/s, more is better)
  r1: 57.88 (SE +/- 0.61, N = 5; Min: 55.45 / Max: 58.56)
  r2: 57.36 (SE +/- 0.58, N = 3; Min: 56.55 / Max: 58.48)
  r3: 58.89 (SE +/- 0.48, N = 3; Min: 57.93 / Max: 59.39)
  1. (CC) gcc options: -O3

Node.js V8 Web Tooling Benchmark

Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, such as Babel, TypeScript, and Babylon. This test profile can test the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.

Node.js V8 Web Tooling Benchmark (runs/s, more is better)
  r1: 13.06 (SE +/- 0.14, N = 3)
  r2: 13.17 (SE +/- 0.11, N = 3)
  r3: 13.18 (SE +/- 0.11, N = 3)
  1. Nodejs v10.19.0
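The viewer options at the top of this file include overall geometric means. The geometric mean is the usual way to summarize normalized benchmark scores because it treats a 2x gain and a 2x loss symmetrically, which the arithmetic mean does not. A minimal sketch with hypothetical normalized scores:

```python
import math

def geometric_mean(values):
    """nth root of the product, computed in log space for numerical stability."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical normalized scores (1.0 = baseline); not taken from this file
scores = [2.0, 0.5, 1.0]
print(round(geometric_mean(scores), 6))  # 1.0 -- a 2x win and a 2x loss cancel out
```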