HP Zbook

Intel Core i9-10885H testing with an HP 8736 (S91 Ver. 01.02.01 BIOS) and NVIDIA Quadro RTX 5000 with Max-Q Design 16GB on Ubuntu 20.04, via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2101076-HA-HPZBOOK6247

Run Identifiers

r1: January 04 2021 (Test Duration: 21 Hours, 19 Minutes)
r2: January 05 2021 (Test Duration: 21 Hours, 8 Minutes)
r3: January 06 2021 (Test Duration: 20 Hours, 49 Minutes)


HP Zbook Benchmarks - System Details

Processor: Intel Core i9-10885H @ 5.30GHz (8 Cores / 16 Threads)
Motherboard: HP 8736 (S91 Ver. 01.02.01 BIOS)
Chipset: Intel Comet Lake PCH
Memory: 32GB
Disk: 2048GB KXG50PNV2T04 KIOXIA
Graphics: NVIDIA Quadro RTX 5000 with Max-Q Design 16GB (600/6000MHz)
Audio: Intel Comet Lake PCH cAVS
Network: Intel Wi-Fi 6 AX201
OS: Ubuntu 20.04
Kernel: 5.6.0-1034-oem (x86_64)
Desktop: GNOME Shell 3.36.4
Display Server: X Server 1.20.8
Display Driver: NVIDIA 450.80.02
OpenGL: 4.6.0
OpenCL: OpenCL 1.2 CUDA 11.0.228
Vulkan: 1.2.133
Compiler: GCC 9.3.0 + CUDA 10.1
File-System: ext4
Screen Resolution: 1920x1080

System Logs:
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Disk mount options: NONE / errors=remount-ro,relatime,rw / Block Size: 4096
- Scaling Governor: intel_pstate powersave
- CPU Microcode: 0xe0
- Thermald 1.9.1
- GPU Compute Cores: 3072
- Python 3.8.3
- Security mitigations: itlb_multihit: KVM: Mitigation of Split huge pages + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Result Overview: relative performance of r1, r2, and r3 normalized across all tests (scale 100% to 136%), spanning CLOMP, DDraceNetwork, Redis, ViennaCL, TNN, eSpeak-NG, RNNoise, NCNN, and the remaining benchmarks detailed in the individual test sections below.
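The overview compares runs with normalized, aggregated scores. A minimal sketch of the kind of summary statistic such aggregators typically use, a geometric mean over normalized per-test scores (the values below are illustrative, not taken from this result file):

```python
from statistics import geometric_mean

# Hypothetical normalized scores for one run across three tests
# (1.0 = slowest run on that test; higher is better).
normalized = [1.00, 1.12, 1.24]

# The geometric mean damps the influence of any single outlier test,
# which is why result aggregators prefer it over the arithmetic mean.
overall = geometric_mean(normalized)
print(round(overall, 3))
```

Swapping one score for a much larger value shifts the geometric mean far less than it would an arithmetic mean, which keeps one lopsided benchmark from dominating the overall figure.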

Detailed results table: per-test r1/r2/r3 values for every benchmark in this comparison (CLOMP, TNN, Redis, ViennaCL, eSpeak-NG, RNNoise, ASTC Encoder, PlaidML, and the rest). The same data is presented per test, with run statistics, in the sections that follow.

DDraceNetwork

DDraceNetwork 15.2.3 - Resolution: 1920 x 1080 - Mode: Fullscreen - Renderer: OpenGL 3.3 - Zoom: Default - Demo: Multeasymap - Total Frame Time (Milliseconds, Fewer Is Better)
r1: Min 2 / Avg 2.3 / Max 10.06
r2: Min 2 / Avg 2.32 / Max 5.18
r3: Min 2 / Avg 2.32 / Max 8.68
Compiler notes: (CXX) g++ options: -O3 -rdynamic -lcrypto -lz -lrt -lpthread -lcurl -lfreetype -lSDL2 -lwavpack -lopusfile -lopus -logg -lGL -lX11 -lnotify -lgdk_pixbuf-2.0 -lgio-2.0 -lgobject-2.0 -lglib-2.0

DDraceNetwork 15.2.3 - Resolution: 1920 x 1080 - Mode: Fullscreen - Renderer: OpenGL 3.0 - Zoom: Default - Demo: Multeasymap - Total Frame Time (Milliseconds, Fewer Is Better)
r1: Min 2 / Avg 2.43 / Max 6.55
r2: Min 2 / Avg 2.46 / Max 6.5
r3: Min 2 / Avg 2.39 / Max 7.28
Compiler notes: (CXX) g++ options: -O3 -rdynamic -lcrypto -lz -lrt -lpthread -lcurl -lfreetype -lSDL2 -lwavpack -lopusfile -lopus -logg -lGL -lX11 -lnotify -lgdk_pixbuf-2.0 -lgio-2.0 -lgobject-2.0 -lglib-2.0

CLOMP

CLOMP is the C version of the Livermore OpenMP benchmark developed to measure OpenMP overheads and other performance impacts due to threading in order to influence future system designs. This particular test profile configuration is currently set to look at the OpenMP static schedule speed-up across all available CPU cores using the recommended test configuration. Learn more via the OpenBenchmarking.org test page.

CLOMP 1.2 - Static OMP Speedup (Speedup, More Is Better)
r1: 3.7 (SE +/- 0.03, N = 3; Min 3.7 / Avg 3.73 / Max 3.8)
r2: 2.5 (SE +/- 0.03, N = 15; Min 2.4 / Avg 2.51 / Max 2.8)
r3: 3.6 (SE +/- 0.03, N = 15; Min 3.5 / Avg 3.62 / Max 3.8)
Compiler notes: (CC) gcc options: -fopenmp -O3 -lm
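Each result in this file is reported as an average with a standard error over N runs, plus the min/max samples. A small sketch of how those summary numbers are derived; the sample values are chosen to mirror the r1 CLOMP run, and the exact averaging Phoronix applies internally is an assumption here:

```python
from math import sqrt
from statistics import mean, stdev

# Three hypothetical speedup samples, similar to the r1 CLOMP run (N = 3).
samples = [3.7, 3.7, 3.8]

avg = mean(samples)                        # the reported bar value
se = stdev(samples) / sqrt(len(samples))   # the "SE +/-" shown per run
print(f"Min: {min(samples)} / Avg: {round(avg, 2)} / Max: {max(samples)}")
print(f"SE +/- {round(se, 2)}, N = {len(samples)}")
```

With these samples the sketch reproduces the r1 line above: Avg 3.73 with SE +/- 0.03 over N = 3.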

TNN

TNN is an open-source deep learning reasoning framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3 - Target: CPU - Model: MobileNet v2 (ms, Fewer Is Better)
r1: 321.42 (SE +/- 2.78, N = 8; Min 302.03 / Max 324.98; MIN 300.42 / MAX 371.06)
r2: 295.55 (SE +/- 0.81, N = 3; Min 293.98 / Max 296.7; MIN 292.39 / MAX 306.56)
r3: 299.40 (SE +/- 0.36, N = 3; Min 298.68 / Max 299.8; MIN 297.92 / MAX 315.55)
Compiler notes: (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9 - Test: GET (Requests Per Second, More Is Better)
r1: 3248596.08 (SE +/- 41615.25, N = 3; Min 3165367 / Max 3290631.75)
r2: 3012560.83 (SE +/- 13828.40, N = 3; Min 2985361 / Max 3030496.75)
r3: 3009326.75 (SE +/- 8077.93, N = 3; Min 2994107.75 / Max 3021631.5)
Compiler notes: (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

ViennaCL

ViennaCL is an open-source linear algebra library written in C++ and with support for OpenCL and OpenMP. This test profile uses ViennaCL OpenCL support and runs the included computational benchmark. Learn more via the OpenBenchmarking.org test page.

ViennaCL 1.4.2 - OpenCL LU Factorization (GFLOPS, More Is Better)
r1: 68.29 (SE +/- 0.36, N = 3; Min 67.59 / Max 68.8)
r2: 64.23 (SE +/- 0.08, N = 3; Min 64.12 / Max 64.38)
r3: 65.92 (SE +/- 0.44, N = 3; Min 65.06 / Max 66.48)
Compiler notes: (CXX) g++ options: -rdynamic -lOpenCL
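The LU factorization score is a GFLOPS figure: the operation count of the factorization divided by wall time. For an n x n dense LU decomposition the standard count is about 2n^3/3 floating-point operations, so the conversion looks like the sketch below (the matrix size and elapsed time are made-up values, not from this run):

```python
# Hypothetical: a 4096 x 4096 dense LU factorization that took 0.67 s.
n = 4096
elapsed_s = 0.67

flops = (2 / 3) * n ** 3          # standard operation count for dense LU
gflops = flops / elapsed_s / 1e9  # convert to billions of ops per second
print(f"{gflops:.2f} GFLOPS")
```

With those illustrative inputs the figure lands in the same ballpark as the measured r1 result, which is only a sanity check that the unit conversion is right, not a reproduction of the benchmark.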

eSpeak-NG Speech Engine

This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.

eSpeak-NG Speech Engine 20200907 - Text-To-Speech Synthesis (Seconds, Fewer Is Better)
r1: 26.47 (SE +/- 0.29, N = 4; Min 25.63 / Max 26.9)
r2: 27.18 (SE +/- 0.12, N = 4; Min 26.98 / Max 27.41)
r3: 27.71 (SE +/- 0.04, N = 4; Min 27.66 / Max 27.83)
Compiler notes: (CC) gcc options: -O2 -std=c99 -lpthread -lm

RNNoise

RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26 minute long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.

RNNoise 2020-06-28 (Seconds, Fewer Is Better)
r1: 22.08 (SE +/- 0.06, N = 3; Min 21.98 / Max 22.18)
r2: 21.32 (SE +/- 0.04, N = 3; Min 21.26 / Max 21.39)
r3: 22.04 (SE +/- 0.02, N = 3; Min 22.01 / Max 22.06)
Compiler notes: (CC) gcc options: -O2 -pedantic -fvisibility=hidden -lm

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0 - Preset: Fast (Seconds, Fewer Is Better)
r1: 5.44 (SE +/- 0.05, N = 3; Min 5.37 / Max 5.53)
r2: 5.59 (SE +/- 0.08, N = 3; Min 5.44 / Max 5.7)
r3: 5.63 (SE +/- 0.04, N = 12; Min 5.34 / Max 5.75)
Compiler notes: (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

PlaidML

This test profile uses PlaidML deep learning framework developed by Intel for offering up various benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML - FP16: No - Mode: Inference - Network: IMDB LSTM - Device: OpenCL (FPS, More Is Better)
r1: 463.34 (SE +/- 0.36, N = 3; Min 462.71 / Max 463.96)
r2: 477.39 (SE +/- 1.92, N = 3; Min 473.94 / Max 480.56)
r3: 478.73 (SE +/- 2.86, N = 3; Min 474.97 / Max 484.34)

Monkey Audio Encoding

This test times how long it takes to encode a sample WAV file to Monkey's Audio APE format. Learn more via the OpenBenchmarking.org test page.

Monkey Audio Encoding 3.99.6 - WAV To APE (Seconds, Fewer Is Better)
r1: 10.51 (SE +/- 0.03, N = 5; Min 10.42 / Max 10.57)
r2: 10.86 (SE +/- 0.04, N = 5; Min 10.79 / Max 11)
r3: 10.59 (SE +/- 0.01, N = 5; Min 10.56 / Max 10.64)
Compiler notes: (CXX) g++ options: -O3 -pedantic -rdynamic -lrt

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Rotate (Iterations Per Minute, More Is Better)
r1: 902 (SE +/- 2.52, N = 3; Min 899 / Max 907)
r2: 875 (SE +/- 3.18, N = 3; Min 871 / Avg 874.67 / Max 881)
r3: 900 (SE +/- 1.86, N = 3; Min 898 / Avg 900.33 / Max 904)
Compiler notes: (CC) gcc options: -fopenmp -O2 -pthread -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - PBKDF2-sha512 (Iterations Per Second, More Is Better)
r1: 1919349 (SE +/- 7117.07, N = 3; Min 1906501 / Avg 1919349.33 / Max 1931079)
r2: 1943008 (SE +/- 1201.00, N = 3; Min 1941807 / Max 1945410)
r3: 1886103 (SE +/- 12877.64, N = 3; Min 1865793 / Max 1909974)
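The cryptsetup benchmark reports how many PBKDF2-SHA-512 iterations the CPU sustains per second when deriving a key. A rough Python-level analogue of that measurement using only the standard library; the passphrase, salt, and iteration count are illustrative assumptions, and the throughput will be well below cryptsetup's optimized C implementation:

```python
import hashlib
import time

password = b"benchmark-passphrase"  # hypothetical inputs, not from cryptsetup
salt = b"0123456789abcdef"
iterations = 100_000

start = time.perf_counter()
# Derive a 64-byte key with PBKDF2-HMAC-SHA-512, as LUKS key derivation does.
key = hashlib.pbkdf2_hmac("sha512", password, salt, iterations, dklen=64)
elapsed = time.perf_counter() - start

# Iterations per second, analogous in spirit to the cryptsetup figures above.
print(f"{iterations / elapsed:,.0f} iterations/sec (derived key: {len(key)} bytes)")
```

The same idea is what cryptsetup uses to pick an iteration count at `luksFormat` time: it measures the host's PBKDF2 rate and scales iterations so unlocking takes a fixed wall-clock budget.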

TNN


TNN 0.2.3 - Target: CPU - Model: SqueezeNet v1.1 (ms, Fewer Is Better)
r1: 272.91 (SE +/- 1.46, N = 3; Min 270 / Max 274.58; MIN 264.43 / MAX 277.05)
r2: 264.95 (SE +/- 0.11, N = 3; Min 264.72 / Max 265.09; MIN 264.07 / MAX 268.01)
r3: 272.68 (SE +/- 0.12, N = 3; Min 272.52 / Max 272.92; MIN 271.53 / MAX 277.6)
Compiler notes: (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 3 - Compression Speed (MB/s, More Is Better)
r1: 57.88 (SE +/- 0.61, N = 5; Min 55.45 / Max 58.56)
r2: 57.36 (SE +/- 0.58, N = 3; Min 56.55 / Max 58.48)
r3: 58.89 (SE +/- 0.48, N = 3; Min 57.93 / Max 59.39)
Compiler notes: (CC) gcc options: -O3
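The LZ4 numbers above are throughput figures: megabytes of input consumed per second at a given compression level. LZ4 has no binding in the Python standard library, so the sketch below times zlib instead, purely to illustrate how an MB/s compression-speed figure is computed; the input data and levels are arbitrary choices, not the benchmark's Ubuntu ISO workload:

```python
import time
import zlib

# Arbitrary compressible input (~8.8 MB of repetitive text).
data = b"the quick brown fox jumps over the lazy dog " * 200_000

for level in (3, 9):
    start = time.perf_counter()
    compressed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    mb_per_s = (len(data) / 1_000_000) / elapsed  # input MB per second
    ratio = len(data) / len(compressed)
    print(f"level {level}: {mb_per_s:,.1f} MB/s, ratio {ratio:.1f}x")
```

As with the LZ4 results here, higher levels generally trade compression speed for a better ratio, which is why levels 3 and 9 are reported separately.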

LevelDB

LevelDB is a key-value storage library developed by Google that supports making use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.22 - Benchmark: Hot Read (Microseconds Per Op, Fewer Is Better)
r1: 6.946 (SE +/- 0.013, N = 3; Min 6.93 / Max 6.97)
r2: 7.099 (SE +/- 0.075, N = 3; Min 6.97 / Max 7.23)
r3: 7.128 (SE +/- 0.049, N = 3; Min 7.07 / Max 7.23)
Compiler notes: (CXX) g++ options: -O3 -lsnappy -lpthread

Redis


Redis 6.0.9 - Test: LPUSH (Requests Per Second, More Is Better)
r1: 2041750.08 (SE +/- 25221.07, N = 3; Min 1992350.75 / Max 2075286.38)
r2: 2094056.31 (SE +/- 21753.96, N = 4; Min 2045251.62 / Max 2150537.5)
r3: 2083566.29 (SE +/- 8925.21, N = 3; Min 2066446.25 / Max 2096503.12)
Compiler notes: (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: regnety_400m (ms, Fewer Is Better)
r1: 19.16 (SE +/- 0.06, N = 3; Min 19.1 / Max 19.29; MIN 18.07 / MAX 22.36)
r2: 18.91 (SE +/- 0.24, N = 3; Min 18.44 / Max 19.24; MIN 13.5 / MAX 30.63)
r3: 19.38 (SE +/- 0.10, N = 3; Min 19.19 / Max 19.53; MIN 14.45 / MAX 42.2)
Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Redis


Redis 6.0.9 - Test: SET (Requests Per Second, More Is Better)
r1: 2375800.25 (SE +/- 17218.21, N = 3; Min 2341995.5 / Max 2398388.5)
r2: 2413657.00 (SE +/- 3903.32, N = 3; Min 2409638.5 / Max 2421462.5)
r3: 2433543.80 (SE +/- 6859.51, N = 3; Min 2421617.5 / Max 2445379)
Compiler notes: (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Cryptsetup


Cryptsetup - PBKDF2-whirlpool (Iterations Per Second, More Is Better)
r1: 816282 (SE +/- 4903.32, N = 3; Min 809086 / Avg 816282.33 / Max 825650)
r2: 830020 (SE +/- 2314.28, N = 3; Min 825650 / Max 833526)
r3: 810352 (SE +/- 2497.33, N = 3; Min 805357 / Max 812849)

NCNN


NCNN 20201218 - Target: CPU - Model: blazeface (ms, Fewer Is Better)
r1: 2.54 (SE +/- 0.00, N = 3; Min 2.53 / Max 2.54; MIN 2.35 / MAX 2.74)
r2: 2.60 (SE +/- 0.05, N = 3; Min 2.54 / Max 2.7; MIN 2.45 / MAX 10.37)
r3: 2.57 (SE +/- 0.02, N = 3; Min 2.55 / Max 2.62; MIN 2.45 / MAX 2.83)
Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

LZ4 Compression


LZ4 Compression 1.9.3 - Compression Level: 9 - Compression Speed (MB/s, More Is Better)
  r1: 55.72 (SE +/- 0.59, N = 5; Min: 53.88 / Max: 57.22)
  r2: 56.07 (SE +/- 0.36, N = 3; Min: 55.68 / Max: 56.79)
  r3: 57.01 (SE +/- 0.66, N = 3; Min: 55.7 / Max: 57.78)
  (CC) gcc options: -O3

Stockfish

This is a test of Stockfish, an advanced C++11 chess benchmark that can scale up to 128 CPU cores. Learn more via the OpenBenchmarking.org test page.

Stockfish 12 - Total Time (Nodes Per Second, More Is Better)
  r1: 9703133 (SE +/- 85083.98, N = 8; Min: 9395707 / Max: 10044545)
  r2: 9839292 (SE +/- 85742.14, N = 3; Min: 9669517 / Max: 9945094)
  r3: 9629353 (SE +/- 67987.28, N = 12; Min: 9347765 / Max: 10037550)
  (CXX) g++ options: -m64 -lpthread -fno-exceptions -std=c++17 -pedantic -O3 -msse -msse3 -mpopcnt -msse4.1 -mssse3 -msse2 -flto -flto=jobserver

Hashcat

Hashcat is an open-source, advanced password recovery tool supporting GPU acceleration with OpenCL, NVIDIA CUDA, and Radeon ROCm. Learn more via the OpenBenchmarking.org test page.

Hashcat 6.1.1 - Benchmark: 7-Zip (H/s, More Is Better)
  r1: 373667 (SE +/- 1589.90, N = 3; Min: 370500 / Max: 375500)
  r2: 370400 (SE +/- 1858.31, N = 3; Min: 366800 / Max: 373000)
  r3: 366433 (SE +/- 3670.30, N = 3; Min: 359300 / Max: 371500)

LevelDB

LevelDB is a key-value storage library developed by Google that supports making use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.22 - Benchmark: Fill Sync (Microseconds Per Op, Fewer Is Better)
  r1: 3361.78 (SE +/- 33.91, N = 3; Min: 3311.59 / Max: 3426.38)
  r2: 3424.92 (SE +/- 25.98, N = 3; Min: 3387.41 / Max: 3474.82)
  r3: 3386.08 (SE +/- 60.32, N = 3; Min: 3301.23 / Max: 3502.79)
  (CXX) g++ options: -O3 -lsnappy -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  r1: 2.72558 (SE +/- 0.00400, N = 3; Min: 2.72 / Max: 2.73) MIN: 2.54
  r2: 2.77670 (SE +/- 0.01530, N = 3; Min: 2.76 / Max: 2.81) MIN: 2.56
  r3: 2.74874 (SE +/- 0.00352, N = 3; Min: 2.74 / Max: 2.75) MIN: 2.54
  (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - AES-XTS 256b Encryption (MiB/s, More Is Better)
  r1: 4005.6 (SE +/- 1.66, N = 3; Min: 4002.3 / Max: 4007.5)
  r2: 4080.5 (SE +/- 25.91, N = 3; Min: 4030 / Max: 4115.8)
  r3: 4023.0 (SE +/- 20.10, N = 3; Min: 3996.1 / Max: 4062.3)

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.26 - Backend: OpenCL (Nodes Per Second, More Is Better)
  r1: 13277 (SE +/- 160.45, N = 3; Min: 13017 / Max: 13570)
  r2: 13173 (SE +/- 176.76, N = 3; Min: 12986 / Max: 13526)
  r3: 13416 (SE +/- 44.68, N = 3; Min: 13327 / Max: 13463)
  (CXX) g++ options: -flto -pthread

Timed Eigen Compilation

This test times how long it takes to build all Eigen examples. The Eigen examples are compiled serially. Eigen is a C++ template library for linear algebra. Learn more via the OpenBenchmarking.org test page.

Timed Eigen Compilation 3.3.9 - Time To Compile (Seconds, Fewer Is Better)
  r1: 68.74 (SE +/- 0.16, N = 3; Min: 68.43 / Max: 68.92)
  r2: 67.54 (SE +/- 0.30, N = 3; Min: 67.03 / Max: 68.07)
  r3: 68.70 (SE +/- 0.22, N = 3; Min: 68.29 / Max: 69.04)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  r1: 7.16575 (SE +/- 0.05152, N = 3; Min: 7.06 / Max: 7.22) MIN: 5.58
  r2: 7.04404 (SE +/- 0.11582, N = 12; Min: 5.78 / Max: 7.26) MIN: 4.11
  r3: 7.14574 (SE +/- 0.02993, N = 3; Min: 7.09 / Max: 7.18) MIN: 5.45
  (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit built around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1 - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (ms, Fewer Is Better)
  r1: 1.17 (SE +/- 0.00, N = 3; Min: 1.17 / Max: 1.18)
  r2: 1.19 (SE +/- 0.00, N = 4; Min: 1.19 / Max: 1.19)
  r3: 1.19 (SE +/- 0.00, N = 6; Min: 1.18 / Max: 1.19)
  (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

OpenVINO 2021.1 - Model: Age Gender Recognition Retail 0013 FP32 - Device: CPU (FPS, More Is Better)
  r1: 3363.55 (SE +/- 35.01, N = 3; Min: 3323.61 / Max: 3433.33)
  r2: 3307.53 (SE +/- 33.23, N = 5; Min: 3264.07 / Max: 3439.4)
  r3: 3347.93 (SE +/- 40.89, N = 4; Min: 3290.23 / Max: 3468.96)
  (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

OpenVINO 2021.1 - Model: Age Gender Recognition Retail 0013 FP32 - Device: CPU (ms, Fewer Is Better)
  r1: 1.21 (SE +/- 0.00, N = 3; Min: 1.21 / Max: 1.21)
  r2: 1.23 (SE +/- 0.00, N = 5; Min: 1.22 / Max: 1.23)
  r3: 1.22 (SE +/- 0.00, N = 4; Min: 1.21 / Max: 1.22)
  (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: inception-v3 (ms, Fewer Is Better)
  r1: 62.57 (SE +/- 0.15, N = 10; Min: 62.13 / Max: 63.62) MIN: 60.82 / MAX: 96.05
  r2: 63.18 (SE +/- 0.18, N = 11; Min: 62.38 / Max: 64.17) MIN: 61.02 / MAX: 104.39
  r3: 63.56 (SE +/- 0.22, N = 10; Min: 62.36 / Max: 64.46) MIN: 60.92 / MAX: 102.85
  (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: resnet50 (ms, Fewer Is Better)
  r1: 37.81 (SE +/- 0.51, N = 3; Min: 37.26 / Max: 38.82) MIN: 34.04 / MAX: 52.8
  r2: 37.30 (SE +/- 0.03, N = 3; Min: 37.25 / Max: 37.35) MIN: 33.91 / MAX: 56.28
  r3: 37.22 (SE +/- 0.05, N = 3; Min: 37.15 / Max: 37.32) MIN: 33.9 / MAX: 52.84
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU - Model: resnet18 (ms, Fewer Is Better)
  r1: 18.62 (SE +/- 0.00, N = 3; Min: 18.62 / Max: 18.63) MIN: 17.13 / MAX: 20.97
  r2: 18.33 (SE +/- 0.34, N = 3; Min: 17.65 / Max: 18.75) MIN: 14.43 / MAX: 32.39
  r3: 18.38 (SE +/- 0.27, N = 3; Min: 17.84 / Max: 18.68) MIN: 14.4 / MAX: 32.57
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.
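speedtest1 runs a fixed series of SQL workloads against one database and reports the total elapsed time. The shape of such a timed run can be sketched with Python's built-in sqlite3 module — an illustration of the measurement, not the speedtest1 workload itself:

```python
import sqlite3
import time

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1 (a INTEGER PRIMARY KEY, b INTEGER, c TEXT)")

start = time.perf_counter()
with con:  # one transaction, as speedtest1 batches its statements
    con.executemany(
        "INSERT INTO t1 (b, c) VALUES (?, ?)",
        ((i % 1000, f"row-{i}") for i in range(100_000)),
    )
rows = con.execute("SELECT count(*) FROM t1 WHERE b < 500").fetchone()[0]
elapsed = time.perf_counter() - start

print(f"{rows} matching rows, workload took {elapsed:.3f} seconds")
```

Batching the inserts in a single transaction matters: committing per-statement would dominate the timing with fsync overhead, as the LevelDB Fill Sync numbers above illustrate.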

SQLite Speedtest 3.30 - Timed Time - Size 1,000 (Seconds, Fewer Is Better)
  r1: 49.55 (SE +/- 0.25, N = 3; Min: 49.05 / Max: 49.84)
  r2: 50.27 (SE +/- 0.17, N = 3; Min: 49.94 / Max: 50.49)
  r3: 50.26 (SE +/- 0.13, N = 3; Min: 50 / Max: 50.41)
  (CC) gcc options: -O2 -ldl -lz -lpthread

Betsy GPU Compressor

Betsy is an open-source GPU texture compressor implementing various compression techniques. Betsy is written in GLSL for Vulkan/OpenGL (compute shader) support for GPU-based texture compression. Learn more via the OpenBenchmarking.org test page.

Betsy GPU Compressor 1.1 Beta - Codec: ETC2 RGB - Quality: Highest (Seconds, Fewer Is Better)
  r1: 8.016 (SE +/- 0.064, N = 13; Min: 7.92 / Max: 8.78)
  r2: 7.912 (SE +/- 0.018, N = 3; Min: 7.89 / Max: 7.95)
  r3: 7.903 (SE +/- 0.023, N = 3; Min: 7.87 / Max: 7.95)
  (CXX) g++ options: -O3 -O2 -lpthread -ldl

clpeak

Clpeak is designed to test the peak capabilities of OpenCL devices. Learn more via the OpenBenchmarking.org test page.

clpeak - OpenCL Test: Single-Precision Float (GFLOPS, More Is Better)
  r1: 5940.64 (SE +/- 83.30, N = 15; Min: 5486.06 / Max: 6368.48)
  r2: 5858.32 (SE +/- 64.05, N = 3; Min: 5747.86 / Max: 5969.74)
  r3: 5892.70 (SE +/- 47.53, N = 3; Min: 5804.48 / Max: 5967.49)
  (CXX) g++ options: -O3 -rdynamic -lOpenCL

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Sharpen (Iterations Per Minute, More Is Better)
  r1: 72 (SE +/- 0.33, N = 3; Min: 72 / Max: 73)
  r2: 72 (SE +/- 0.58, N = 3; Min: 71 / Max: 73)
  r3: 73 (SE +/- 0.67, N = 3; Min: 72 / Max: 74)
  (CC) gcc options: -fopenmp -O2 -pthread -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - AES-XTS 512b Encryption (MiB/s, More Is Better)
  r1: 3346.8 (SE +/- 3.15, N = 3; Min: 3341.1 / Max: 3352)
  r2: 3381.9 (SE +/- 15.69, N = 3; Min: 3354.1 / Max: 3408.4)
  r3: 3336.0 (SE +/- 25.61, N = 3; Min: 3294.7 / Max: 3382.9)

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second, More Is Better)
  r1: 9.1343 (SE +/- 0.0822, N = 3; Min: 9.03 / Max: 9.3) MIN: 8.81 / MAX: 15.06
  r2: 9.2596 (SE +/- 0.0236, N = 3; Min: 9.22 / Max: 9.3) MIN: 8.82 / MAX: 14.99
  r3: 9.1967 (SE +/- 0.1308, N = 3; Min: 9.05 / Max: 9.46) MIN: 8.85 / MAX: 15

Waifu2x-NCNN Vulkan

Waifu2x-NCNN is an NCNN neural network implementation of the Waifu2x converter project and accelerated using the Vulkan API. NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. This test profile times how long it takes to increase the resolution of a sample image with Vulkan. Learn more via the OpenBenchmarking.org test page.

Waifu2x-NCNN Vulkan 20200818 - Scale: 2x - Denoise: 3 - TAA: Yes (Seconds, Fewer Is Better)
  r1: 6.020 (SE +/- 0.004, N = 3; Min: 6.02 / Max: 6.03)
  r2: 6.102 (SE +/- 0.007, N = 3; Min: 6.09 / Max: 6.11)
  r3: 6.093 (SE +/- 0.011, N = 3; Min: 6.07 / Max: 6.11)

Warsow

This is a benchmark of Warsow, a popular open-source first-person shooter. This game uses the QFusion engine. Learn more via the OpenBenchmarking.org test page.

Warsow 2.5 Beta - Resolution: 1920 x 1080 (Frames Per Second, More Is Better)
  r1: 955.6 (SE +/- 13.76, N = 12; Min: 804.8 / Max: 975.4)
  r2: 967.9 (SE +/- 1.46, N = 3; Min: 965.6 / Max: 970.6)
  r3: 968.6 (SE +/- 1.81, N = 3; Min: 966.1 / Max: 972.1)

DDraceNetwork

This is a test of DDraceNetwork, an open-source cooperative platformer. OpenGL 3.3 is used for rendering, with fallbacks for older OpenGL versions. Learn more via the OpenBenchmarking.org test page.

DDraceNetwork 15.2.3 - Resolution: 1920 x 1080 - Mode: Fullscreen - Renderer: OpenGL 3.3 - Zoom: Default - Demo: Multeasymap (Frames Per Second, More Is Better)
  r1: 435.20 (SE +/- 0.25, N = 3; Min: 434.75 / Max: 435.59) MIN: 99.45 / MAX: 499.75
  r2: 429.37 (SE +/- 2.73, N = 3; Min: 423.91 / Max: 432.24) MIN: 112.88 / MAX: 499.75
  r3: 434.24 (SE +/- 2.45, N = 3; Min: 431.6 / Max: 439.13) MIN: 115.25 / MAX: 499.75
  (CXX) g++ options: -O3 -rdynamic -lcrypto -lz -lrt -lpthread -lcurl -lfreetype -lSDL2 -lwavpack -lopusfile -lopus -logg -lGL -lX11 -lnotify -lgdk_pixbuf-2.0 -lgio-2.0 -lgobject-2.0 -lglib-2.0

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer ISPC - Model: Crown (Frames Per Second, More Is Better)
  r1: 7.0735 (SE +/- 0.0830, N = 3; Min: 6.96 / Max: 7.24) MIN: 6.66 / MAX: 12.73
  r2: 6.9794 (SE +/- 0.0728, N = 4; Min: 6.85 / Max: 7.19) MIN: 6.57 / MAX: 12.32
  r3: 6.9976 (SE +/- 0.0756, N = 5; Min: 6.84 / Max: 7.29) MIN: 6.56 / MAX: 12.56

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
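The simdjson scores below are parse throughput in GB/s: input bytes divided by parse time. The measurement itself is straightforward to sketch with the standard json module (which is far slower than simdjson; this only illustrates how the GB/s figure is computed, over a synthetic tweets-like document):

```python
import json
import time

# Synthetic tweets-like document, loosely in the spirit of the PartialTweets input
doc = json.dumps(
    {"statuses": [{"id": i, "text": "x" * 80, "user": {"id": i}}
                  for i in range(10_000)]}
).encode()

start = time.perf_counter()
parsed = json.loads(doc)
elapsed = time.perf_counter() - start

print(f"parsed {len(doc) / 1e6:.1f} MB at {len(doc) / elapsed / 1e9:.3f} GB/s")
```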

simdjson 0.7.1 - Throughput Test: Kostya (GB/s, More Is Better)
  r1: 0.76 (SE +/- 0.00, N = 3; Min: 0.75 / Max: 0.76)
  r2: 0.75 (SE +/- 0.00, N = 3; Min: 0.75 / Max: 0.75)
  r3: 0.75 (SE +/- 0.00, N = 3; Min: 0.75 / Max: 0.76)
  (CXX) g++ options: -O3 -pthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  r1: 12.47 (SE +/- 0.01, N = 3; Min: 12.45 / Max: 12.5) MIN: 12.08
  r2: 12.44 (SE +/- 0.04, N = 3; Min: 12.37 / Max: 12.49) MIN: 12.09
  r3: 12.61 (SE +/- 0.03, N = 3; Min: 12.55 / Max: 12.66) MIN: 12.2
  (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - AES-XTS 256b Decryption (MiB/s, More Is Better)
  r1: 4002.4 (SE +/- 4.92, N = 3; Min: 3992.6 / Max: 4007.8)
  r2: 4055.1 (SE +/- 17.20, N = 3; Min: 4023 / Max: 4081.9)
  r3: 4026.9 (SE +/- 15.07, N = 3; Min: 4002.8 / Max: 4054.6)

asmFish

This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.

asmFish 2018-07-23 - 1024 Hash Memory, 26 Depth (Nodes/second, More Is Better)
  r1: 15984719 (SE +/- 174263.56, N = 3; Min: 15637111 / Max: 16180429)
  r2: 15974611 (SE +/- 148124.86, N = 3; Min: 15824449 / Max: 16270851)
  r3: 16180674 (SE +/- 142852.80, N = 3; Min: 15900882 / Max: 16370650)

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenCL Particle Filter (Seconds, Fewer Is Better)
  r1: 7.115 (SE +/- 0.065, N = 3; Min: 7.04 / Max: 7.24)
  r2: 7.055 (SE +/- 0.013, N = 3; Min: 7.03 / Max: 7.08)
  r3: 7.027 (SE +/- 0.016, N = 3; Min: 7 / Max: 7.06)
  (CXX) g++ options: -O2 -lOpenCL

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9 - Test: SADD (Requests Per Second, More Is Better)
  r1: 2660539.42 (SE +/- 28020.60, N = 3; Min: 2604500 / Max: 2688946.25)
  r2: 2628039.25 (SE +/- 23332.27, N = 15; Min: 2415536.25 / Max: 2717391.25)
  r3: 2634908.83 (SE +/- 27994.25, N = 3; Min: 2584310 / Max: 2680965.25)
  (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - AES-XTS 512b Decryption (MiB/s, More Is Better)
  r1: 3348.3 (SE +/- 1.21, N = 3; Min: 3346.5 / Max: 3350.6)
  r2: 3388.5 (SE +/- 10.03, N = 3; Min: 3370.3 / Max: 3404.9)
  r3: 3362.9 (SE +/- 13.02, N = 3; Min: 3340.4 / Max: 3385.5)

Unigine Superposition

This test calculates the average frame-rate within the Superposition demo for the Unigine engine, released in 2017. This engine is extremely demanding on the system's graphics card. Learn more via the OpenBenchmarking.org test page.

Unigine Superposition 1.0 - Resolution: 1920 x 1080 - Mode: Fullscreen - Quality: Ultra - Renderer: OpenGL (Frames Per Second, More Is Better)
  r1: 25.1 (SE +/- 0.06, N = 3; Min: 25 / Max: 25.2) MAX: 29.3
  r2: 25.4 (SE +/- 0.03, N = 3; Min: 25.3 / Max: 25.4) MAX: 29.4
  r3: 25.3 (SE +/- 0.03, N = 3; Min: 25.2 / Max: 25.3) MAX: 29.7

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 0.7.1 - Throughput Test: PartialTweets (GB/s, More Is Better)
  r1: 0.86 (SE +/- 0.00, N = 3; Min: 0.86 / Max: 0.87)
  r2: 0.87 (SE +/- 0.01, N = 3; Min: 0.86 / Max: 0.89)
  r3: 0.86 (SE +/- 0.00, N = 3; Min: 0.86 / Max: 0.86)
  (CXX) g++ options: -O3 -pthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit built around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1 - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (FPS, More Is Better)
  r1: 3442.78 (SE +/- 33.67, N = 3; Min: 3404.42 / Max: 3509.89)
  r2: 3403.45 (SE +/- 38.35, N = 4; Min: 3358.96 / Max: 3517.97)
  r3: 3405.92 (SE +/- 34.05, N = 6; Min: 3365.53 / Max: 3575.77)
  (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: googlenet (ms, Fewer Is Better)
  r1: 19.98 (SE +/- 0.01, N = 3; Min: 19.96 / Max: 20.01) MIN: 18.95 / MAX: 23.24
  r2: 20.01 (SE +/- 0.08, N = 3; Min: 19.92 / Max: 20.16) MIN: 18.96 / MAX: 24.67
  r3: 20.21 (SE +/- 0.06, N = 3; Min: 20.1 / Max: 20.28) MIN: 19.11 / MAX: 32.7
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

GROMACS

This is a test of the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package on the CPU with the water_GMX50 data. Learn more via the OpenBenchmarking.org test page.

GROMACS 2020.3 - Water Benchmark (Ns Per Day, More Is Better)
  r1: 0.617 (SE +/- 0.003, N = 3; Min: 0.61 / Max: 0.62)
  r2: 0.610 (SE +/- 0.004, N = 3; Min: 0.6 / Max: 0.62)
  r3: 0.614 (SE +/- 0.002, N = 3; Min: 0.61 / Max: 0.62)
  (CXX) g++ options: -O3 -pthread -lrt -lpthread -lm

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 0.7.1 - Throughput Test: DistinctUserID (GB/s, More Is Better)
  r1: 0.89 (SE +/- 0.00, N = 3; Min: 0.88 / Max: 0.89)
  r2: 0.88 (SE +/- 0.00, N = 3; Min: 0.88 / Max: 0.89)
  r3: 0.88 (SE +/- 0.00, N = 3; Min: 0.88 / Max: 0.88)
  (CXX) g++ options: -O3 -pthread

Betsy GPU Compressor

Betsy is an open-source GPU texture compressor implementing various compression techniques. Betsy is written in GLSL for Vulkan/OpenGL (compute shader) support for GPU-based texture compression. Learn more via the OpenBenchmarking.org test page.

Betsy GPU Compressor 1.1 Beta - Codec: ETC1 - Quality: Highest (Seconds, Fewer Is Better)
  r1: 5.854 (SE +/- 0.068, N = 12; Min: 5.74 / Max: 6.6)
  r2: 5.789 (SE +/- 0.008, N = 3; Min: 5.77 / Max: 5.8)
  r3: 5.792 (SE +/- 0.024, N = 3; Min: 5.75 / Max: 5.83)
  (CXX) g++ options: -O3 -O2 -lpthread -ldl

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - Twofish-XTS 256b Encryption (MiB/s, More Is Better)
  r1: 482.0 (SE +/- 0.75, N = 3; Min: 480.8 / Max: 483.4)
  r2: 487.4 (SE +/- 1.08, N = 3; Min: 485.2 / Max: 488.5)
  r3: 483.6 (SE +/- 2.51, N = 3; Min: 479.3 / Max: 488)

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Reflect (Seconds, Fewer Is Better)
  r1: 28.18 (SE +/- 0.29, N = 3; Min: 27.6 / Max: 28.5)
  r2: 28.50 (SE +/- 0.30, N = 3; Min: 27.89 / Max: 28.81)
  r3: 28.31 (SE +/- 0.22, N = 3; Min: 27.87 / Max: 28.6)

Hashcat

Hashcat is an open-source, advanced password recovery tool supporting GPU acceleration with OpenCL, NVIDIA CUDA, and Radeon ROCm. Learn more via the OpenBenchmarking.org test page.

Hashcat 6.1.1 - Benchmark: TrueCrypt RIPEMD160 + XTS (H/s, More Is Better)
  r1: 301233 (SE +/- 1322.04, N = 3; Min: 299400 / Max: 303800)
  r2: 301433 (SE +/- 851.14, N = 3; Min: 300300 / Max: 303100)
  r3: 298133 (SE +/- 545.69, N = 3; Min: 297400 / Max: 299200)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  r1: 8.96782 (SE +/- 0.04374, N = 3; Min: 8.88 / Max: 9.01) MIN: 8.14
  r2: 9.00692 (SE +/- 0.01715, N = 3; Min: 8.98 / Max: 9.03) MIN: 8.15
  r3: 9.06628 (SE +/- 0.11418, N = 3; Min: 8.91 / Max: 9.29) MIN: 8
  (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: resnet-v2-50 (ms, Fewer Is Better)
  r1: 58.16 (SE +/- 0.40, N = 10; Min: 54.63 / Max: 59.25) MIN: 36.86 / MAX: 81.73
  r2: 58.53 (SE +/- 0.35, N = 11; Min: 55.14 / Max: 59.4) MIN: 37.33 / MAX: 83.74
  r3: 58.79 (SE +/- 0.40, N = 10; Min: 55.26 / Max: 59.59) MIN: 36.87 / MAX: 85.77
  (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Timed MAFFT Alignment

This test performs an alignment of 100 pyruvate decarboxylase sequences. Learn more via the OpenBenchmarking.org test page.

Timed MAFFT Alignment 7.471 - Multiple Sequence Alignment - LSU RNA (Seconds, fewer is better)
r1: 10.50 (SE +/- 0.08, N = 12; Min 9.63 / Avg 10.5 / Max 10.73)
r2: 10.56 (SE +/- 0.10, N = 15; Min 9.28 / Avg 10.56 / Max 11.13)
r3: 10.61 (SE +/- 0.10, N = 14; Min 9.43 / Avg 10.61 / Max 11.02)
1. (CC) gcc options: -std=c99 -O3 -lm -lpthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: yolov4-tiny (ms, fewer is better)
r1: 35.95 (SE +/- 0.48, N = 3; Min 35.43 / Avg 35.95 / Max 36.9; MIN 34.4 / MAX 55.63)
r2: 35.59 (SE +/- 0.05, N = 3; Min 35.5 / Avg 35.59 / Max 35.64; MIN 34.42 / MAX 51.24)
r3: 35.66 (SE +/- 0.02, N = 3; Min 35.64 / Avg 35.66 / Max 35.7; MIN 34.45 / MAX 49.15)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

PHPBench 0.8.1 - PHP Benchmark Suite (Score, more is better)
r1: 837911 (SE +/- 4346.11, N = 3; Min 830669 / Avg 837910.67 / Max 845695)
r2: 832417 (SE +/- 2600.83, N = 3; Min 829341 / Avg 832417.33 / Max 837588)
r3: 829705 (SE +/- 587.84, N = 3; Min 828740 / Avg 829704.67 / Max 830769)

RealSR-NCNN

RealSR-NCNN is an NCNN neural network implementation of the RealSR project, accelerated using the Vulkan API. RealSR is Real-World Super Resolution via Kernel Estimation and Noise Injection. NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. This test profile times how long it takes to increase the resolution of a sample image by a scale of 4x with Vulkan. Learn more via the OpenBenchmarking.org test page.

RealSR-NCNN 20200818 - Scale: 4x - TAA: Yes (Seconds, fewer is better)
r1: 99.81 (SE +/- 0.31, N = 3; Min 99.36 / Avg 99.81 / Max 100.4)
r2: 100.62 (SE +/- 0.48, N = 3; Min 99.72 / Avg 100.62 / Max 101.34)
r3: 100.75 (SE +/- 0.35, N = 3; Min 100.16 / Avg 100.75 / Max 101.36)

VkResample

VkResample is a Vulkan-based image upscaling library based on VkFFT. The sample input file is upscaling a 4K image to 8K using Vulkan-based GPU acceleration. Learn more via the OpenBenchmarking.org test page.

VkResample 1.0 - Upscale: 2x - Precision: Single (ms, fewer is better)
r1: 24.99 (SE +/- 0.02, N = 3; Min 24.96 / Avg 24.99 / Max 25.02)
r2: 25.19 (SE +/- 0.05, N = 3; Min 25.1 / Avg 25.19 / Max 25.26)
r3: 25.23 (SE +/- 0.08, N = 3; Min 25.09 / Avg 25.23 / Max 25.37)
1. (CXX) g++ options: -O3 -pthread

Node.js V8 Web Tooling Benchmark

Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, such as Babel, TypeScript, and Babylon. This test profile can test the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.

Node.js V8 Web Tooling Benchmark (runs/s, more is better)
r1: 13.06 (SE +/- 0.14, N = 3; Min 12.81 / Avg 13.06 / Max 13.29)
r2: 13.17 (SE +/- 0.11, N = 3; Min 12.95 / Avg 13.17 / Max 13.32)
r3: 13.18 (SE +/- 0.11, N = 3; Min 12.96 / Avg 13.18 / Max 13.32)
1. Nodejs v10.19.0

Crafty

This is a performance test of Crafty, an advanced open-source chess engine. Learn more via the OpenBenchmarking.org test page.

Crafty 25.2 - Elapsed Time (Nodes Per Second, more is better)
r1: 9497414 (SE +/- 45086.65, N = 3; Min 9412343 / Avg 9497413.67 / Max 9565846)
r2: 9584148 (SE +/- 7176.35, N = 3; Min 9570591 / Avg 9584147.67 / Max 9595008)
r3: 9560012 (SE +/- 16578.83, N = 3; Min 9532041 / Avg 9560012.33 / Max 9589418)
1. (CC) gcc options: -pthread -lstdc++ -fprofile-use -lm

Unigine Superposition

This test calculates the average frame-rate within the Superposition demo for the Unigine engine, released in 2017. This engine is extremely demanding on the system's graphics card. Learn more via the OpenBenchmarking.org test page.

Unigine Superposition 1.0 - Resolution: 1920 x 1080 - Mode: Fullscreen - Quality: High - Renderer: OpenGL (Frames Per Second, more is better)
r1: 65.9 (SE +/- 0.19, N = 3; Min 65.7 / Avg 65.93 / Max 66.3; MAX 81.6)
r2: 66.5 (SE +/- 0.12, N = 3; Min 66.3 / Avg 66.47 / Max 66.7; MAX 80.8)
r3: 66.2 (SE +/- 0.09, N = 3; Min 66.1 / Avg 66.23 / Max 66.4; MAX 80.3)

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1 - Model: Person Detection 0106 FP16 - Device: CPU (ms, fewer is better)
r1: 4961.99 (SE +/- 4.97, N = 3; Min 4952.08 / Avg 4961.99 / Max 4967.73)
r2: 4978.25 (SE +/- 19.24, N = 3; Min 4946.18 / Avg 4978.25 / Max 5012.71)
r3: 5006.34 (SE +/- 4.20, N = 3; Min 4998.61 / Avg 5006.34 / Max 5013.05)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12 - Settings: UASTC Level 0 (Seconds, fewer is better)
r1: 7.288 (SE +/- 0.079, N = 3; Min 7.21 / Avg 7.29 / Max 7.45)
r2: 7.345 (SE +/- 0.061, N = 3; Min 7.27 / Avg 7.34 / Max 7.47)
r3: 7.353 (SE +/- 0.095, N = 3; Min 7.26 / Avg 7.35 / Max 7.54)
1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - Serpent-XTS 512b Encryption (MiB/s, more is better)
r1: 878.0 (SE +/- 0.83, N = 3; Min 876.4 / Avg 878 / Max 879.2)
r2: 882.1 (SE +/- 0.87, N = 3; Min 880.6 / Avg 882.07 / Max 883.6)
r3: 874.4 (SE +/- 4.25, N = 3; Min 866.8 / Avg 874.4 / Max 881.5)

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Crop (Seconds, fewer is better)
r1: 8.900 (SE +/- 0.065, N = 11; Min 8.4 / Avg 8.9 / Max 9.11)
r2: 8.839 (SE +/- 0.073, N = 9; Min 8.36 / Avg 8.84 / Max 9.03)
r3: 8.826 (SE +/- 0.077, N = 8; Min 8.36 / Avg 8.83 / Max 8.99)

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - Serpent-XTS 256b Encryption (MiB/s, more is better)
r1: 874.1 (SE +/- 0.92, N = 3; Min 873.1 / Avg 874.07 / Max 875.9)
r2: 881.4 (SE +/- 1.25, N = 3; Min 879.1 / Avg 881.37 / Max 883.4)
r3: 874.1 (SE +/- 2.67, N = 3; Min 869.1 / Avg 874.13 / Max 878.2)

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1 - Model: Face Detection 0106 FP32 - Device: CPU (FPS, more is better)
r1: 1.26 (SE +/- 0.01, N = 3; Min 1.25 / Avg 1.26 / Max 1.28)
r2: 1.27 (SE +/- 0.02, N = 4; Min 1.25 / Avg 1.27 / Max 1.31)
r3: 1.27 (SE +/- 0.02, N = 3; Min 1.25 / Avg 1.27 / Max 1.3)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - Twofish-XTS 256b Decryption (MiB/s, more is better)
r1: 482.5 (SE +/- 0.34, N = 3; Min 482.1 / Avg 482.53 / Max 483.2)
r2: 486.3 (SE +/- 1.43, N = 3; Min 483.4 / Avg 486.27 / Max 487.8)
r3: 483.0 (SE +/- 2.21, N = 3; Min 479.7 / Avg 483 / Max 487.2)

Cryptsetup - Serpent-XTS 512b Decryption (MiB/s, more is better)
r1: 871.7 (SE +/- 1.28, N = 3; Min 869.6 / Avg 871.67 / Max 874)
r2: 878.1 (SE +/- 1.17, N = 3; Min 875.9 / Avg 878.07 / Max 879.9)
r3: 873.5 (SE +/- 4.24, N = 3; Min 865.4 / Avg 873.53 / Max 879.7)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
r1: 18.01 (SE +/- 0.02, N = 3; Min 17.97 / Avg 18.01 / Max 18.05; MIN 17.22)
r2: 17.90 (SE +/- 0.08, N = 3; Min 17.75 / Avg 17.9 / Max 18; MIN 17.18)
r3: 18.03 (SE +/- 0.02, N = 3; Min 18 / Avg 18.03 / Max 18.07; MIN 17.24)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: NASNet Mobile (Microseconds, fewer is better)
r1: 302594 (SE +/- 3140.84, N = 3; Min 296316 / Avg 302594.33 / Max 305911)
r2: 304756 (SE +/- 2025.87, N = 3; Min 300704 / Avg 304755.67 / Max 306802)
r3: 304079 (SE +/- 1284.72, N = 3; Min 301523 / Avg 304079.33 / Max 305582)

ArrayFire

ArrayFire is a GPU and CPU numeric processing library; this test uses the built-in CPU and OpenCL ArrayFire benchmarks. Learn more via the OpenBenchmarking.org test page.

ArrayFire 3.7 - Test: Conjugate Gradient OpenCL (ms, fewer is better)
r1: 2.549 (SE +/- 0.015, N = 3; Min 2.52 / Avg 2.55 / Max 2.58)
r2: 2.531 (SE +/- 0.022, N = 3; Min 2.51 / Avg 2.53 / Max 2.57)
r3: 2.548 (SE +/- 0.018, N = 3; Min 2.52 / Avg 2.55 / Max 2.58)
1. (CXX) g++ options: -rdynamic

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - Twofish-XTS 512b Encryption (MiB/s, more is better)
r1: 483.0 (SE +/- 0.30, N = 2; Min 482.7 / Avg 483 / Max 483.3)
r2: 486.4 (SE +/- 0.97, N = 3; Min 484.9 / Avg 486.37 / Max 488.2)
r3: 483.8 (SE +/- 2.12, N = 3; Min 480.6 / Avg 483.8 / Max 487.8)

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.1 - Video Input: Chimera 1080p (FPS, more is better)
r1: 489.84 (SE +/- 5.73, N = 14; Min 480.78 / Avg 489.84 / Max 564.13; MIN 317.1 / MAX 898.12)
r2: 486.46 (SE +/- 3.02, N = 14; Min 479.98 / Avg 486.46 / Max 525.02; MIN 316.37 / MAX 900.57)
r3: 487.57 (SE +/- 3.24, N = 13; Min 481.02 / Avg 487.57 / Max 525.93; MIN 316.7 / MAX 911.47)
1. (CC) gcc options: -pthread

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Noise-Gaussian (Iterations Per Minute, more is better)
r1: 146 (SE +/- 1.33, N = 3; Min 145 / Avg 146.33 / Max 149)
r2: 147 (SE +/- 1.00, N = 3; Min 146 / Avg 147 / Max 149)
r3: 147 (SE +/- 1.20, N = 3; Min 145 / Avg 146.67 / Max 149)
1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

Unpacking Firefox

This simple test profile measures how long it takes to extract the .tar.xz source package of the Mozilla Firefox Web Browser. Learn more via the OpenBenchmarking.org test page.

Unpacking Firefox 84.0 - Extracting: firefox-84.0.source.tar.xz (Seconds, fewer is better)
r1: 16.03 (SE +/- 0.08, N = 4; Min 15.84 / Avg 16.03 / Max 16.22)
r2: 16.14 (SE +/- 0.09, N = 4; Min 15.92 / Avg 16.14 / Max 16.36)
r3: 16.10 (SE +/- 0.14, N = 4; Min 15.83 / Avg 16.1 / Max 16.38)
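This test simply times a .tar.xz extraction end to end. The same measurement pattern can be sketched with Python's standard library, here against a small synthetic in-memory archive rather than the actual Firefox source tarball:

```python
import io
import tarfile
import time

def build_archive(payload: bytes) -> bytes:
    # Create an in-memory .tar.xz containing one file (a stand-in for the real tarball)
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:xz") as tar:
        info = tarfile.TarInfo(name="sample.bin")
        info.size = len(payload)
        tar.addfile(info, io.BytesIO(payload))
    return buf.getvalue()

def timed_extract(archive: bytes):
    # Decompress and read every member, returning contents and elapsed seconds
    start = time.perf_counter()
    contents = {}
    with tarfile.open(fileobj=io.BytesIO(archive), mode="r:xz") as tar:
        for member in tar:
            contents[member.name] = tar.extractfile(member).read()
    return contents, time.perf_counter() - start

archive = build_archive(b"x" * 1_000_000)
contents, elapsed = timed_extract(archive)
print(f"{len(contents['sample.bin'])} bytes in {elapsed:.3f}s")
```

The real test extracts to disk, so filesystem write speed matters as much as xz decompression speed; this sketch isolates only the decompression side.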

VkFFT

VkFFT is a Fast Fourier Transform (FFT) Library that is GPU accelerated by means of the Vulkan API. The VkFFT benchmark runs FFT performance differences of many different sizes before returning an overall benchmark score. Learn more via the OpenBenchmarking.org test page.

VkFFT 1.1.1 (Benchmark Score, more is better)
r1: 25820 (SE +/- 62.93, N = 3; Min 25695 / Avg 25819.67 / Max 25897)
r2: 25647 (SE +/- 58.68, N = 3; Min 25578 / Avg 25647.33 / Max 25764)
r3: 25683 (SE +/- 108.37, N = 3; Min 25467 / Avg 25683.33 / Max 25803)
1. (CXX) g++ options: -O3 -pthread

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Tile Glass (Seconds, fewer is better)
r1: 28.24 (SE +/- 0.36, N = 3; Min 27.53 / Avg 28.24 / Max 28.65)
r2: 28.24 (SE +/- 0.27, N = 3; Min 27.7 / Avg 28.24 / Max 28.54)
r3: 28.06 (SE +/- 0.39, N = 3; Min 27.27 / Avg 28.05 / Max 28.45)

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0 - Preset: Thorough (Seconds, fewer is better)
r1: 54.29 (SE +/- 0.54, N = 3; Min 53.22 / Avg 54.29 / Max 54.86)
r2: 54.38 (SE +/- 0.54, N = 3; Min 53.3 / Avg 54.38 / Max 54.95)
r3: 54.65 (SE +/- 0.42, N = 3; Min 53.81 / Avg 54.65 / Max 55.08)
1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Scale (Seconds, fewer is better)
r1: 6.954 (SE +/- 0.055, N = 12; Min 6.36 / Avg 6.95 / Max 7.07)
r2: 6.973 (SE +/- 0.059, N = 13; Min 6.29 / Avg 6.97 / Max 7.17)
r3: 7.000 (SE +/- 0.056, N = 14; Min 6.3 / Avg 7 / Max 7.13)

clpeak

Clpeak is designed to test the peak capabilities of OpenCL devices. Learn more via the OpenBenchmarking.org test page.

clpeak - OpenCL Test: Integer Compute INT (GIOPS, more is better)
r1: 5504.35 (SE +/- 71.93, N = 15; Min 5155.92 / Avg 5504.35 / Max 6102.47)
r2: 5519.39 (SE +/- 81.08, N = 15; Min 5125.85 / Avg 5519.39 / Max 6080.53)
r3: 5540.44 (SE +/- 81.16, N = 15; Min 5145.44 / Avg 5540.44 / Max 6060.17)
1. (CXX) g++ options: -O3 -rdynamic -lOpenCL

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - Serpent-XTS 256b Decryption (MiB/s, more is better)
r1: 872.3 (SE +/- 1.62, N = 3; Min 869.5 / Avg 872.27 / Max 875.1)
r2: 876.6 (SE +/- 1.50, N = 3; Min 874 / Avg 876.57 / Max 879.2)
r3: 870.9 (SE +/- 4.03, N = 3; Min 862.8 / Avg 870.87 / Max 875)

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.1 - Video Input: Summer Nature 4K (FPS, more is better)
r1: 112.75 (SE +/- 1.06, N = 6; Min 111.52 / Avg 112.75 / Max 118.05; MIN 99.69 / MAX 158.99)
r2: 112.03 (SE +/- 1.08, N = 6; Min 110.75 / Avg 112.03 / Max 117.42; MIN 99.17 / MAX 157.08)
r3: 112.65 (SE +/- 1.07, N = 6; Min 111.46 / Avg 112.65 / Max 118.01; MIN 99.62 / MAX 158.58)
1. (CC) gcc options: -pthread

cl-mem

A basic OpenCL memory benchmark. Learn more via the OpenBenchmarking.org test page.

cl-mem 2017-01-13 - Benchmark: Copy (GB/s, more is better)
r1: 236.6 (SE +/- 0.22, N = 3; Min 236.3 / Avg 236.57 / Max 237)
r2: 235.4 (SE +/- 0.24, N = 3; Min 234.9 / Avg 235.37 / Max 235.7)
r3: 235.1 (SE +/- 0.27, N = 3; Min 234.6 / Avg 235.13 / Max 235.5)
1. (CC) gcc options: -O2 -flto -lOpenCL

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - Twofish-XTS 512b Decryption (MiB/s, more is better)
r1: 482.7 (SE +/- 0.10, N = 3; Min 482.5 / Avg 482.7 / Max 482.8)
r2: 485.7 (SE +/- 1.44, N = 3; Min 482.9 / Avg 485.73 / Max 487.6)
r3: 483.0 (SE +/- 2.34, N = 3; Min 479 / Avg 482.97 / Max 487.1)

Hashcat

Hashcat is an open-source, advanced password recovery tool supporting GPU acceleration with OpenCL, NVIDIA CUDA, and Radeon ROCm. Learn more via the OpenBenchmarking.org test page.

Hashcat 6.1.1 - Benchmark: SHA-512 (H/s, more is better)
r1: 1023100000 (SE +/- 11546345.54, N = 15; Min 1008500000 / Avg 1023100000 / Max 1184500000)
r2: 1020000000 (SE +/- 2594224.35, N = 3; Min 1016300000 / Avg 1020000000 / Max 1025000000)
r3: 1016800000 (SE +/- 1852025.92, N = 3; Min 1014000000 / Avg 1016800000 / Max 1020300000)
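Hashcat's H/s unit is simply hash computations per second. As a point of reference for the metric (not for the numbers above, which come from GPU-accelerated kernels), a toy CPU-side counter using Python's hashlib:

```python
import hashlib
import time

def hashes_per_second(algo: str = "sha512", duration: float = 0.2) -> float:
    # Hash a short fixed message in a loop and report completions per second
    msg = b"benchmark"
    deadline = time.perf_counter() + duration
    count = 0
    while time.perf_counter() < deadline:
        hashlib.new(algo, msg).digest()
        count += 1
    return count / duration

print(f"{hashes_per_second():,.0f} H/s")
```

A single CPU thread hashing this way lands orders of magnitude below the ~1 GH/s SHA-512 figures the Quadro RTX 5000 reports, which is the point of GPU-accelerated password recovery.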

Numpy Benchmark

This is a test to obtain the general Numpy performance. Learn more via the OpenBenchmarking.org test page.

Numpy Benchmark (Score, more is better)
r1: 419.58 (SE +/- 1.54, N = 3; Min 416.5 / Avg 419.58 / Max 421.32)
r2: 419.36 (SE +/- 0.84, N = 3; Min 418.14 / Avg 419.36 / Max 420.96)
r3: 417.03 (SE +/- 0.70, N = 3; Min 415.63 / Avg 417.03 / Max 417.83)

NAMD CUDA

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. This version of the NAMD test profile uses CUDA GPU acceleration. Learn more via the OpenBenchmarking.org test page.

NAMD CUDA 2.14 - ATPase Simulation - 327,506 Atoms (days/ns, fewer is better)
r1: 0.22103 (SE +/- 0.00131, N = 3; Min 0.22 / Avg 0.22 / Max 0.22)
r2: 0.22238 (SE +/- 0.00245, N = 5; Min 0.21 / Avg 0.22 / Max 0.23)
r3: 0.22171 (SE +/- 0.00272, N = 4; Min 0.21 / Avg 0.22 / Max 0.23)

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Cartoon (Seconds, fewer is better)
r1: 86.79 (SE +/- 0.12, N = 3; Min 86.54 / Avg 86.79 / Max 86.92)
r2: 87.32 (SE +/- 0.19, N = 3; Min 86.95 / Avg 87.32 / Max 87.58)
r3: 86.99 (SE +/- 0.09, N = 3; Min 86.83 / Avg 86.99 / Max 87.11)

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 1 - Compression Speed (MB/s, more is better)
r1: 8120.67 (SE +/- 6.52, N = 3; Min 8108.7 / Avg 8120.67 / Max 8131.14)
r2: 8127.78 (SE +/- 4.75, N = 3; Min 8119.19 / Avg 8127.78 / Max 8135.59)
r3: 8079.18 (SE +/- 11.24, N = 3; Min 8056.72 / Avg 8079.18 / Max 8091.25)
1. (CC) gcc options: -O3
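The MB/s figure is just input bytes divided by compression time. LZ4 is not in the Python standard library, so the sketch below uses zlib at level 1 purely to illustrate the measurement pattern; the numbers it produces are for zlib, not LZ4:

```python
import time
import zlib

def compression_speed_mbps(data: bytes, level: int = 1) -> float:
    # One compression pass; throughput = input size / elapsed wall time
    start = time.perf_counter()
    zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    return len(data) / elapsed / 1e6

sample = b"the quick brown fox " * 500_000  # ~10 MB of compressible input
print(f"{compression_speed_mbps(sample):.1f} MB/s")
```

Like the PTS test, throughput depends heavily on how compressible the input is, which is why a fixed reference file (an Ubuntu ISO) is used for the official numbers.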

Hashcat

Hashcat is an open-source, advanced password recovery tool supporting GPU acceleration with OpenCL, NVIDIA CUDA, and Radeon ROCm. Learn more via the OpenBenchmarking.org test page.

Hashcat 6.1.1 - Benchmark: SHA1 (H/s, more is better)
r1: 8585766667 (SE +/- 31347213.24, N = 3; Min 8549600000 / Avg 8585766666.67 / Max 8648200000)
r2: 8544500000 (SE +/- 17380832.35, N = 3; Min 8524300000 / Avg 8544500000 / Max 8579100000)
r3: 8535333333 (SE +/- 18653000.95, N = 3; Min 8507800000 / Avg 8535333333.33 / Max 8570900000)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: Vulkan GPU - Model: alexnet (ms, fewer is better)
r1: 15.44 (SE +/- 0.03, N = 3; Min 15.38 / Avg 15.44 / Max 15.49; MIN 14.41 / MAX 26.42)
r2: 15.53 (SE +/- 0.04, N = 3; Min 15.46 / Avg 15.53 / Max 15.61; MIN 14.41 / MAX 25.62)
r3: 15.50 (SE +/- 0.05, N = 3; Min 15.41 / Avg 15.5 / Max 15.58; MIN 14.41 / MAX 26.23)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer - Model: Crown (Frames Per Second, more is better)
r1: 6.0806 (SE +/- 0.0766, N = 3; Min 5.99 / Avg 6.08 / Max 6.23; MIN 5.86 / MAX 11.02)
r2: 6.0989 (SE +/- 0.0737, N = 3; Min 6.02 / Avg 6.1 / Max 6.25; MIN 5.88 / MAX 10.98)
r3: 6.0641 (SE +/- 0.0667, N = 3; Min 6 / Avg 6.06 / Max 6.2; MIN 5.86 / MAX 10.95)

Hashcat

Hashcat is an open-source, advanced password recovery tool supporting GPU acceleration with OpenCL, NVIDIA CUDA, and Radeon ROCm. Learn more via the OpenBenchmarking.org test page.

Hashcat 6.1.1 - Benchmark: MD5 (H/s, more is better)
r1: 24334866667 (SE +/- 110495102.96, N = 3; Min 24212300000 / Avg 24334866666.67 / Max 24555400000)
r2: 24260200000 (SE +/- 81107726.72, N = 3; Min 24121500000 / Avg 24260200000 / Max 24402400000)
r3: 24196900000 (SE +/- 49256167.13, N = 3; Min 24146300000 / Avg 24196900000 / Max 24295400000)

Unigine Heaven

This test calculates the average frame-rate within the Heaven demo for the Unigine engine. This engine is extremely demanding on the system's graphics card. Learn more via the OpenBenchmarking.org test page.

Unigine Heaven 4.0 - Resolution: 1920 x 1080 - Mode: Fullscreen - Renderer: OpenGL (Frames Per Second, more is better)
r1: 139.13 (SE +/- 0.71, N = 3; Min 138.03 / Avg 139.13 / Max 140.46)
r2: 139.91 (SE +/- 0.96, N = 3; Min 138.64 / Avg 139.91 / Max 141.78)
r3: 139.18 (SE +/- 0.56, N = 3; Min 138.62 / Avg 139.18 / Max 140.31)

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.90 - Blend File: Fishy Cat - Compute: CUDA (Seconds, fewer is better)
r1: 168.87 (SE +/- 0.10, N = 3; Min 168.73 / Avg 168.87 / Max 169.06)
r2: 167.96 (SE +/- 0.11, N = 3; Min 167.85 / Avg 167.96 / Max 168.18)
r3: 168.08 (SE +/- 0.05, N = 3; Min 167.98 / Avg 168.08 / Max 168.14)

LevelDB

LevelDB is a key-value storage library developed by Google that supports Snappy data compression along with other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.22 - Benchmark: Sequential Fill (MB/s, more is better)
r1: 37.5 (SE +/- 0.44, N = 4; Min 37 / Avg 37.5 / Max 38.8)
r2: 37.4 (SE +/- 0.46, N = 4; Min 36.8 / Avg 37.43 / Max 38.8)
r3: 37.3 (SE +/- 0.39, N = 5; Min 36.8 / Avg 37.34 / Max 38.9)
1. (CXX) g++ options: -O3 -lsnappy -lpthread
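The sequential-fill workload writes consecutive keys and reports write throughput. A rough stand-in for the measurement shape using the standard library's dbm.dumb backend (not LevelDB, no Snappy, and a far slower on-disk format, so only the pattern carries over):

```python
import dbm.dumb
import os
import tempfile
import time

def sequential_fill_mbps(n_records: int = 2000, value_size: int = 512) -> float:
    # Write n_records sequential keys, then report value bytes written per second
    value = b"v" * value_size
    with tempfile.TemporaryDirectory() as tmp:
        start = time.perf_counter()
        db = dbm.dumb.open(os.path.join(tmp, "bench"), "c")
        for i in range(n_records):
            db[f"key{i:08d}".encode()] = value
        db.close()
        elapsed = time.perf_counter() - start
    return n_records * value_size / elapsed / 1e6

print(f"{sequential_fill_mbps():.1f} MB/s")
```

LevelDB's own db_bench additionally batches writes and exercises its log-structured merge tree, which this sketch makes no attempt to model.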

RealSR-NCNN

RealSR-NCNN is an NCNN neural network implementation of the RealSR project, accelerated using the Vulkan API. RealSR is Real-World Super Resolution via Kernel Estimation and Noise Injection. NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. This test profile times how long it takes to increase the resolution of a sample image by a scale of 4x with Vulkan. Learn more via the OpenBenchmarking.org test page.

RealSR-NCNN 20200818 - Scale: 4x - TAA: No (Seconds, fewer is better)
r1: 14.73 (SE +/- 0.01, N = 3; Min 14.72 / Avg 14.73 / Max 14.75)
r2: 14.66 (SE +/- 0.09, N = 3; Min 14.49 / Avg 14.66 / Max 14.77)
r3: 14.69 (SE +/- 0.11, N = 3; Min 14.48 / Avg 14.69 / Max 14.81)

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 Alpha - Speed: 10 (Frames Per Second, more is better)
r1: 3.422 (SE +/- 0.044, N = 3; Min 3.38 / Avg 3.42 / Max 3.51)
r2: 3.404 (SE +/- 0.035, N = 3; Min 3.37 / Avg 3.4 / Max 3.47)
r3: 3.420 (SE +/- 0.027, N = 3; Min 3.37 / Avg 3.42 / Max 3.47)

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.90 - Blend File: Classroom - Compute: NVIDIA OptiX (Seconds, fewer is better)
r1: 116.76 (SE +/- 0.13, N = 3; Min 116.56 / Avg 116.76 / Max 117.01)
r2: 116.15 (SE +/- 0.23, N = 3; Min 115.74 / Avg 116.15 / Max 116.53)
r3: 116.26 (SE +/- 0.13, N = 3; Min 116.01 / Avg 116.26 / Max 116.39)

LevelDB

LevelDB is a key-value storage library developed by Google that supports Snappy data compression, among other modern features. Learn more via the OpenBenchmarking.org test page.
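
The two LevelDB figures in these results, microseconds per operation and MB/s, are two views of the same run. A minimal sketch of how a sequential-fill benchmark derives both, using a plain Python dict as a stand-in store (assumption: the key format and value size are illustrative, not db_bench's defaults):

```python
import time

def sequential_fill(n_ops: int, value_size: int):
    # In-memory dict as a stand-in store: this sketches how the metrics
    # are derived, it is NOT the LevelDB engine itself.
    store = {}
    payload = b"x" * value_size
    start = time.perf_counter()
    for i in range(n_ops):
        store["%016d" % i] = payload  # sequential keys, fixed-size values
    elapsed = time.perf_counter() - start
    us_per_op = elapsed * 1e6 / n_ops               # "Microseconds Per Op"
    mb_per_s = n_ops * value_size / elapsed / 1e6   # "MB/s"
    return us_per_op, mb_per_s

us_per_op, mb_per_s = sequential_fill(100_000, 100)
```

Because both numbers come from the same elapsed time, one can always be recovered from the other given the value size, which is why the result page reports the same benchmark in both units.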

LevelDB 1.22, Benchmark: Seek Random (Microseconds Per Op, fewer is better): r1 12.69 (SE ±0.11, N = 15; 11.34–13.06), r2 12.63 (SE ±0.10, N = 15; 11.28–12.99), r3 12.64 (SE ±0.11, N = 14; 11.47–13.09). 1. (CXX) g++ options: -O3 -lsnappy -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, fewer is better): r1 4.36381 (SE ±0.00310; 4.36–4.37), r2 4.37852 (SE ±0.00806; 4.36–4.39), r3 4.38535 (SE ±0.00559; 4.38–4.39); N = 3 per run. Per-run observed MIN: 4.23 / 4.25 / 4.25. 1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: resnet18 (ms, fewer is better): r1 18.62 (SE ±0.02; 18.58–18.66), r2 18.71 (SE ±0.04; 18.62–18.77), r3 18.66 (SE ±0.02; 18.61–18.68); N = 3 per run. Per-run observed MIN / MAX: 17.08 / 32.57, 17.06 / 33.58, 17.05 / 30.94. 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration. Learn more via the OpenBenchmarking.org test page.
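
The reported figure is simply the wall-clock time of the build, repeated and averaged. A minimal timing-harness sketch; the `make` invocation in the comment is a hypothetical illustration, and a trivial command stands in so the sketch is self-contained:

```python
import subprocess
import sys
import time

def time_command(cmd) -> float:
    """Wall-clock a command end to end, the way a 'Time To Compile'
    figure is produced. Output is discarded so terminal I/O does not
    skew the measurement."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True,
                   stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return time.perf_counter() - start

# For the kernel test this would be something like
#   time_command(["make", "-j16"])   # hypothetical invocation, run in the tree
# A trivial stand-in command keeps this sketch runnable anywhere:
elapsed = time_command([sys.executable, "-c", "pass"])
```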

Timed Linux Kernel Compilation 5.4, Time To Compile (Seconds, fewer is better): r1 151.66 (SE ±0.33; 151.32–152.31), r2 152.21 (SE ±0.24; 151.82–152.65), r3 151.48 (SE ±0.75; 150.39–152.91); N = 3 per run.

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12, Settings: UASTC Level 2 (Seconds, fewer is better): r1 55.50 (SE ±0.55; 54.4–56.11), r2 55.74 (SE ±0.41; 54.93–56.24), r3 55.77 (SE ±0.58; 54.61–56.38); N = 3 per run. 1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

LevelDB

LevelDB is a key-value storage library developed by Google that supports Snappy data compression, among other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.22, Benchmark: Overwrite (Microseconds Per Op, fewer is better): r1 40.93 (SE ±0.15; 40.64–41.14), r2 40.96 (SE ±0.08; 40.8–41.05), r3 40.76 (SE ±0.04; 40.71–40.84); N = 3 per run. 1. (CXX) g++ options: -O3 -lsnappy -lpthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: squeezenet_ssd (ms, fewer is better): r1 27.64 (SE ±0.14; 27.45–27.91), r2 27.51 (SE ±0.03; 27.45–27.54), r3 27.63 (SE ±0.05; 27.55–27.71); N = 3 per run. Per-run observed MIN / MAX: 27 / 40.14, 26.93 / 43.6, 27.02 / 46.56. 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 Alpha, Speed: 5 (Frames Per Second, more is better): r1 1.069 (SE ±0.005; 1.06–1.08), r2 1.064 (SE ±0.004; 1.06–1.07), r3 1.064 (SE ±0.003; 1.06–1.07); N = 3 per run.

LevelDB

LevelDB is a key-value storage library developed by Google that supports Snappy data compression, among other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.22, Benchmark: Overwrite (MB/s, more is better): r1 43.2 (SE ±0.15; 43–43.5), r2 43.2 (SE ±0.07; 43.1–43.3), r3 43.4 (SE ±0.03; 43.3–43.4); N = 3 per run. 1. (CXX) g++ options: -O3 -lsnappy -lpthread

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU compute via OpenCL, CUDA, or NVIDIA OptiX is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.90, Blend File: Classroom - Compute: CUDA (Seconds, fewer is better): r1 250.78 (SE ±0.03; 250.75–250.83), r2 251.90 (SE ±0.04; 251.85–251.98), r3 251.80 (SE ±0.05; 251.7–251.86); N = 3 per run.

RedShift Demo

This is a test of MAXON's RedShift demo build that currently requires NVIDIA GPU acceleration. Learn more via the OpenBenchmarking.org test page.

RedShift Demo 3.0 (Seconds, fewer is better): r1 461 (SE ±0.88; 459–462), r2 460 (SE ±0.33; 459–460), r3 459; N = 3 per run.

RawTherapee

RawTherapee is a cross-platform, open-source multi-threaded RAW image processing program. Learn more via the OpenBenchmarking.org test page.

RawTherapee, Total Benchmark Time (Seconds, fewer is better): r1 80.59 (SE ±0.53; 79.53–81.18), r2 80.93 (SE ±0.46; 80.02–81.42), r3 80.71 (SE ±0.45; 79.81–81.17); N = 3 per run. 1. RawTherapee, version 5.8, command line.

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU compute via OpenCL, CUDA, or NVIDIA OptiX is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.90, Blend File: Barbershop - Compute: CUDA (Seconds, fewer is better): r1 734.81 (SE ±0.24; 734.55–735.3), r2 731.67 (SE ±0.26; 731.15–732.01), r3 733.02 (SE ±0.41; 732.29–733.72); N = 3 per run.

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4, Acceleration: CPU - Scene: Bedroom (M samples/s, more is better): r1 0.939 (SE ±0.002; 0.94–0.94), r2 0.938 (SE ±0.000; 0.94–0.94), r3 0.935 (SE ±0.001; 0.93–0.94); N = 3 per run.

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0, Preset: Exhaustive (Seconds, fewer is better): r1 447.99 (SE ±0.52; 446.98–448.68), r2 449.37 (SE ±0.81; 447.77–450.39), r3 449.90 (SE ±0.54; 448.84–450.55); N = 3 per run. 1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code with Cargo-like features. Learn more via the OpenBenchmarking.org test page.

Build2 0.13, Time To Compile (Seconds, fewer is better): r1 210.05 (SE ±0.40; 209.3–210.64), r2 210.71 (SE ±0.49; 210.02–211.65), r3 210.95 (SE ±0.85; 209.69–212.55); N = 3 per run.

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4, Acceleration: CPU - Scene: Supercar (M samples/s, more is better): r1 2.147 (SE ±0.002; 2.14–2.15), r2 2.150 (SE ±0.001; 2.15–2.15), r3 2.156 (SE ±0.002; 2.15–2.16); N = 3 per run.

cl-mem

A basic OpenCL memory benchmark. Learn more via the OpenBenchmarking.org test page.

cl-mem 2017-01-13, Benchmark: Write (GB/s, more is better): r1 215.7 (SE ±0.47; 215–216.6), r2 215.6 (SE ±0.26; 215.2–216.1), r3 214.8 (SE ±0.50; 214–215.7); N = 3 per run. 1. (CC) gcc options: -O2 -flto -lOpenCL

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL, Operation: Rotate 90 Degrees (Seconds, fewer is better): r1 37.70 (SE ±0.31; 37.09–38.06), r2 37.54 (SE ±0.36; 36.82–37.96), r3 37.69 (SE ±0.43; 36.84–38.21); N = 3 per run.

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12, Settings: ETC1S (Seconds, fewer is better): r1 57.82 (SE ±0.38; 57.17–58.5), r2 58.06 (SE ±0.15; 57.76–58.24), r3 58.06 (SE ±0.56; 56.96–58.8); N = 3 per run. 1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, fewer is better): r1 9.77594 (SE ±0.04555; 9.69–9.84), r2 9.76468 (SE ±0.03928; 9.69–9.82), r3 9.73732 (SE ±0.03582; 9.68–9.8); N = 3 per run. Per-run observed MIN: 8.77 / 8.72 / 8.75. 1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Unigine Superposition

This test calculates the average frame-rate within the Superposition demo for the Unigine engine, released in 2017. This engine is extremely demanding on the system's graphics card. Learn more via the OpenBenchmarking.org test page.

Unigine Superposition 1.0, Resolution: 1920 x 1080 - Mode: Fullscreen - Quality: Low - Renderer: OpenGL (Frames Per Second, more is better): r1 177.7 (SE ±0.23; 177.3–178.1), r2 178.1 (SE ±0.71; 176.8–179.2), r3 177.4 (SE ±0.52; 176.4–178.1); N = 3 per run. Per-run observed MAX: 260.1 / 259.4 / 263.9.

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL, Operation: Color Enhance (Seconds, fewer is better): r1 54.11 (SE ±0.22; 53.69–54.44), r2 54.31 (SE ±0.04; 54.24–54.35), r3 54.10 (SE ±0.28; 53.55–54.43); N = 3 per run.

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on the CPU. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.
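
The "SE +/-, N =" annotations attached to each result come from repeating the measurement N times and reporting the mean with its standard error. A sketch of how an average inference time and its standard error could be computed; the callable here is a stand-in workload, not an actual TensorFlow Lite model:

```python
import math
import statistics
import time

def average_inference_time_us(run, n=3):
    """Time 'run' n times and return (avg_us, standard_error_us),
    matching the 'SE +/- x, N = n' style of these results.
    'run' is any callable standing in for one model inference."""
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        run()
        samples.append((time.perf_counter() - start) * 1e6)
    avg = statistics.mean(samples)
    # standard error = sample standard deviation / sqrt(n)
    se = statistics.stdev(samples) / math.sqrt(n) if n > 1 else 0.0
    return avg, se

avg_us, se_us = average_inference_time_us(lambda: sum(range(50_000)))
```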

TensorFlow Lite 2020-08-23, Model: SqueezeNet (Microseconds, fewer is better): r1 354892 (SE ±2566.21; 349760–357484), r2 356034 (SE ±2576.61; 350881–358651), r3 356258 (SE ±2539.06; 351180–358855); N = 3 per run.

LevelDB

LevelDB is a key-value storage library developed by Google that supports Snappy data compression, among other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.22, Benchmark: Sequential Fill (Microseconds Per Op, fewer is better): r1 47.24 (SE ±0.54, N = 4; 45.62–47.87), r2 47.29 (SE ±0.58, N = 4; 45.58–48.13), r3 47.42 (SE ±0.48, N = 5; 45.53–48.11). 1. (CXX) g++ options: -O3 -lsnappy -lpthread

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program, or on Windows it will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.0.1, Test: Masskrug - Acceleration: CPU-only (Seconds, fewer is better): r1 7.128 (SE ±0.097; 6.06–7.24), r2 7.150 (SE ±0.096; 6.1–7.28), r3 7.155 (SE ±0.099; 6.06–7.28); N = 12 per run.

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: mobilenet (ms, fewer is better): r1 26.62 (SE ±0.17; 26.44–26.95), r2 26.63 (SE ±0.01; 26.62–26.64), r3 26.53 (SE ±0.02; 26.5–26.56); N = 3 per run. Per-run observed MIN / MAX: 25.69 / 38.05, 25.7 / 41.21, 25.78 / 41.25. 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on the CPU. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23, Model: Inception ResNet V2 (Microseconds, fewer is better): r1 4660197 (SE ±8775.31; 4642680–4669900), r2 4670567 (SE ±8796.49; 4653140–4681370), r3 4677473 (SE ±8398.83; 4660680–4686200); N = 3 per run.

DDraceNetwork

This is a test of DDraceNetwork, an open-source cooperative platformer. OpenGL 3.3 is used for rendering, with fallbacks for older OpenGL versions. Learn more via the OpenBenchmarking.org test page.

DDraceNetwork 15.2.3, Resolution: 1920 x 1080 - Mode: Fullscreen - Renderer: OpenGL 3.0 - Zoom: Default - Demo: Multeasymap (Frames Per Second, more is better): r1 413.88 (SE ±0.79; 412.8–415.41), r2 412.43 (SE ±2.87; 409.09–418.15), r3 412.38 (SE ±4.35; 405.03–420.08); N = 3 per run. Per-run observed MIN / MAX: 119.86 / 499.75, 103.17 / 499.75, 127.91 / 499.75. 1. (CXX) g++ options: -O3 -rdynamic -lcrypto -lz -lrt -lpthread -lcurl -lfreetype -lSDL2 -lwavpack -lopusfile -lopus -logg -lGL -lX11 -lnotify -lgdk_pixbuf-2.0 -lgio-2.0 -lgobject-2.0 -lglib-2.0

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better): r1 21.69 (SE ±0.06; 21.58–21.76), r2 21.70 (SE ±0.06; 21.59–21.8), r3 21.62 (SE ±0.02; 21.6–21.66); N = 3 per run. Per-run observed MIN: 21.47 / 21.48 / 21.51. 1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.
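
The compression-level knob trades encode time for ratio. The Python standard library has no Zstd binding, so this sketch uses zlib as a stand-in codec to show the same level tradeoff (zlib's levels 1 vs 9 here, not Zstd's level 19):

```python
import zlib

# Compressible stand-in input; a real run would stream an Ubuntu ISO.
data = b"the quick brown fox jumps over the lazy dog " * 3000

fast = zlib.compress(data, 1)   # cheap level: faster, larger output
best = zlib.compress(data, 9)   # expensive level: slower, smaller output

assert zlib.decompress(best) == data  # lossless round trip
ratio_fast = len(data) / len(fast)
ratio_best = len(data) / len(best)
```

The MB/s figures in these results divide the input size by the wall-clock time of the compress call, so a higher level generally lowers MB/s while raising the ratio.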

Zstd Compression 1.4.5, Compression Level: 19 (MB/s, more is better): r1 28.8 (SE ±0.07; 28.7–28.9), r2 28.7 (SE ±0.06; 28.6–28.8), r3 28.8 (SE ±0.06; 28.7–28.9); N = 3 per run. 1. (CC) gcc options: -O3 -pthread -lz -llzma

Inkscape

Inkscape is an open-source vector graphics editor. This test profile times how long it takes to complete various operations by Inkscape. Learn more via the OpenBenchmarking.org test page.

Inkscape, Operation: SVG Files To PNG (Seconds, fewer is better): r1 21.00 (SE ±0.04; 20.94–21.06), r2 21.05 (SE ±0.02; 21.01–21.09), r3 21.07 (SE ±0.03; 21.02–21.11); N = 3 per run. 1. Inkscape 0.92.5 (2060ec1f9f, 2020-04-08)

LevelDB

LevelDB is a key-value storage library developed by Google that supports Snappy data compression, among other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.22, Benchmark: Random Delete (Microseconds Per Op, fewer is better): r1 47.23 (SE ±0.49, N = 5; 45.3–47.82), r2 47.30 (SE ±0.57, N = 4; 45.6–48), r3 47.39 (SE ±0.56, N = 4; 45.71–48.09). 1. (CXX) g++ options: -O3 -lsnappy -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0, Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): r1 4.45564 (SE ±0.00967; 4.44–4.47), r2 4.47062 (SE ±0.01661; 4.45–4.5), r3 4.46656 (SE ±0.00726; 4.45–4.48); N = 3 per run. Per-run observed MIN: 4.02 / 4.02 / 4.01. 1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

MandelGPU

MandelGPU is an OpenCL benchmark; this test runs its float4 OpenCL rendering kernel with a maximum of 4096 iterations. Learn more via the OpenBenchmarking.org test page.

MandelGPU 1.3pts1, OpenCL Device: GPU (Samples/sec, more is better): r1 251986408.7 (SE ±1032565.22; 250154807.4–253728345.1), r2 252826584.8 (SE ±157365.45; 252555057.1–253100175.3), r3 252822614.4 (SE ±1449538.54; 249927719.3–254404867.6); N = 3 per run. 1. (CC) gcc options: -O3 -lm -ftree-vectorize -funroll-loops -lglut -lOpenCL -lGL

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with a built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.30.8, VGR Performance Metric (more is better): r1 63909, r2 63822, r3 64033. 1. (CXX) g++ options: -std=c++11 -pipe -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -rdynamic -lSM -lICE -lXi -lGLU -lGL -lGLdispatch -lX11 -lXext -lXrender -lpthread -ldl -luuid -lm

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3, Compression Level: 3 - Decompression Speed (MB/s, more is better): r1 9676.3 (SE ±1.84, N = 5; 9671–9682.3), r2 9653.7 (SE ±16.28, N = 3; 9637.2–9686.3), r3 9685.2 (SE ±0.67, N = 3; 9684.2–9686.5). 1. (CC) gcc options: -O3

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program, or on Windows it will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.0.1, Test: Boat - Acceleration: CPU-only (Seconds, fewer is better): r1 15.91 (SE ±0.02; 15.89–15.94), r2 15.87 (SE ±0.04; 15.79–15.91), r3 15.86 (SE ±0.03; 15.81–15.91); N = 3 per run.

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: vgg16 (ms, fewer is better): r1 72.09 (SE ±0.20; 71.87–72.5), r2 71.91 (SE ±0.03; 71.85–71.94), r3 71.86 (SE ±0.12; 71.64–72.06); N = 3 per run. Per-run observed MIN / MAX: 70.5 / 88.28, 70.43 / 92.47, 70.48 / 88. 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

DeepSpeech

Mozilla DeepSpeech is a speech-to-text engine powered by TensorFlow for machine learning and derived from Baidu's Deep Speech research paper. This test profile times the speech-to-text process for a roughly three-minute audio recording. Learn more via the OpenBenchmarking.org test page.

DeepSpeech 0.6, Acceleration: CPU (Seconds, fewer is better): r1 81.30 (SE ±0.21; 81.05–81.72), r2 81.07 (SE ±0.04; 81–81.12), r3 81.04 (SE ±0.04; 80.96–81.08); N = 3 per run.

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3, Compression Level: 9 - Decompression Speed (MB/s, more is better): r1 9679.8 (SE ±1.80, N = 5; 9673.3–9684.2), r2 9664.8 (SE ±15.38, N = 3; 9649.2–9695.6), r3 9695.2 (SE ±0.78, N = 3; 9693.9–9696.6). 1. (CC) gcc options: -O3

LZ4 Compression 1.9.3, Compression Level: 1 - Decompression Speed (MB/s, more is better): r1 9823.2 (SE ±3.96; 9815.4–9828.2), r2 9839.9 (SE ±2.38; 9835.1–9842.4), r3 9810.0 (SE ±10.11; 9789.8–9821.2); N = 3 per run. 1. (CC) gcc options: -O3

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1, Model: Face Detection 0106 FP32 - Device: CPU (ms, fewer is better): r1 3202.53 (SE ±2.58, N = 3; 3197.84–3206.73), r2 3207.35 (SE ±1.22, N = 4; 3204.86–3210.72), r3 3212.10 (SE ±2.51, N = 3; 3208.09–3216.72). 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on the CPU. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23, Model: Inception V4 (Microseconds, fewer is better): r1 5163190 (SE ±5618.75; 5152020–5169840), r2 5168183 (SE ±7685.69; 5153100–5178290), r3 5178263 (SE ±8609.77; 5163010–5192810); N = 3 per run.

TensorFlow Lite 2020-08-23, Model: Mobilenet Quant (Microseconds, fewer is better): r1 236716 (SE ±1686.36; 233345–238503), r2 237129 (SE ±1668.46; 233796–238930), r3 237406 (SE ±1810.35; 233791–239394); N = 3 per run.

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.1 - Video Input: Chimera 1080p 10-bit (FPS; more is better)
r1: 86.08 (SE +/- 0.99, N = 4, min 85.06 / max 89.03; MIN 54.34 / MAX 256.39)
r2: 85.83 (SE +/- 1.05, N = 4, min 84.56 / max 88.98; MIN 54.27 / MAX 257.58)
r3: 85.95 (SE +/- 1.03, N = 4, min 84.71 / max 89.02; MIN 54.21 / MAX 255.72)
1. (CC) gcc options: -pthread
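The MIN/MAX values on the dav1d graphs are the instantaneous frame-rate extremes observed during decode, while the headline number is the overall average. A sketch with hypothetical frame times showing how the three relate:

```python
def fps_stats(frame_times_s):
    """Overall average FPS plus the instantaneous min/max frame rates."""
    avg_fps = len(frame_times_s) / sum(frame_times_s)
    inst = [1.0 / t for t in frame_times_s]  # per-frame frame rate
    return min(inst), avg_fps, max(inst)

# Hypothetical per-frame decode times in seconds.
lo, avg, hi = fps_stats([0.004, 0.010, 0.018])
print(f"MIN: {lo:.2f} / AVG: {avg:.2f} / MAX: {hi:.2f}")
# → MIN: 55.56 / AVG: 93.75 / MAX: 250.00
```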

VkResample

VkResample is a Vulkan-based image upscaling library based on VkFFT. The sample input file is upscaling a 4K image to 8K using Vulkan-based GPU acceleration. Learn more via the OpenBenchmarking.org test page.

VkResample 1.0 - Upscale: 2x - Precision: Double (ms; fewer is better)
r1: 256.87 (SE +/- 0.20, N = 3, min 256.48 / max 257.17)
r2: 257.06 (SE +/- 0.11, N = 3, min 256.92 / max 257.28)
r3: 257.62 (SE +/- 0.20, N = 3, min 257.34 / max 258.01)
1. (CXX) g++ options: -O3 -pthread

PlaidML

This test profile uses the PlaidML deep learning framework, developed by Intel, to run various benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML - FP16: Yes - Mode: Inference - Network: Mobilenet - Device: OpenCL (FPS; more is better)
r1: 1819.24 (SE +/- 7.57, N = 3, min 1808.31 / max 1833.79)
r2: 1823.06 (SE +/- 3.54, N = 3, min 1818.66 / max 1830.06)
r3: 1817.78 (SE +/- 8.53, N = 3, min 1808.15 / max 1834.79)

Opus Codec Encoding

Opus is an open audio codec. Opus is a lossy audio compression format designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.

Opus Codec Encoding 1.3.1 - WAV To Opus Encode (Seconds; fewer is better)
r1: 7.624 (SE +/- 0.009, N = 5, min 7.61 / max 7.66)
r2: 7.602 (SE +/- 0.004, N = 5, min 7.60 / max 7.62)
r3: 7.616 (SE +/- 0.008, N = 5, min 7.60 / max 7.65)
1. (CXX) g++ options: -fvisibility=hidden -logg -lm

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 Alpha - Speed: 1 (Frames Per Second; more is better)
r1: 0.347 (SE +/- 0.002, N = 3, min 0.35 / max 0.35)
r2: 0.346 (SE +/- 0.002, N = 3, min 0.34 / max 0.35)
r3: 0.347 (SE +/- 0.003, N = 3, min 0.34 / max 0.35)

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.90 - Blend File: Fishy Cat - Compute: NVIDIA OptiX (Seconds; fewer is better)
r1: 60.35 (SE +/- 0.03, N = 3, min 60.30 / max 60.41)
r2: 60.18 (SE +/- 0.04, N = 3, min 60.11 / max 60.23)
r3: 60.25 (SE +/- 0.12, N = 3, min 60.03 / max 60.45)

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 Alpha - Speed: 6 (Frames Per Second; more is better)
r1: 1.444 (SE +/- 0.010, N = 3, min 1.43 / max 1.46)
r2: 1.440 (SE +/- 0.006, N = 3, min 1.43 / max 1.45)
r3: 1.443 (SE +/- 0.012, N = 3, min 1.43 / max 1.47)

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient and lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: mobilenet-v1-1.0 (ms; fewer is better)
r1: 10.65 (SE +/- 0.01, N = 10, min 10.61 / max 10.71; MIN 10.33 / MAX 34.53)
r2: 10.68 (SE +/- 0.01, N = 11, min 10.60 / max 10.74; MIN 10.35 / MAX 33.35)
r3: 10.66 (SE +/- 0.01, N = 10, min 10.61 / max 10.71; MIN 10.33 / MAX 32.25)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better)
r1: 7140.50 (SE +/- 2.95, N = 3, min 7137.36 / max 7146.40; MIN 7021.68)
r2: 7159.42 (SE +/- 4.70, N = 3, min 7150.93 / max 7167.17; MIN 7041.40)
r3: 7151.58 (SE +/- 6.73, N = 3, min 7138.44 / max 7160.66; MIN 7027.20)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

Coremark 1.0 - CoreMark Size 666 - Iterations Per Second (Iterations/Sec; more is better)
r1: 223414.81 (SE +/- 2532.07, N = 3, min 220163.29 / max 228402.85)
r2: 223304.98 (SE +/- 1894.03, N = 3, min 221279.73 / max 227089.94)
r3: 223892.44 (SE +/- 2209.16, N = 3, min 221642.46 / max 228310.50)
1. (CC) gcc options: -O2 -lrt
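CoreMark's score is simply iterations completed divided by wall-clock time. The real kernel is C (list processing, matrix math, state machines, CRC); this sketch only mirrors the rate computation with a stand-in workload:

```python
import time

def iterations_per_sec(workload, iterations=100_000):
    """Run a fixed number of iterations and report the completion rate."""
    start = time.perf_counter()
    for _ in range(iterations):
        workload()
    elapsed = time.perf_counter() - start
    return iterations / elapsed

# Stand-in workload; CoreMark's actual kernel is compiled C code.
print(f"{iterations_per_sec(lambda: None, iterations=10_000):.0f} iterations/sec")
```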

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Wavelet Blur (Seconds; fewer is better)
r1: 57.99 (SE +/- 0.25, N = 3, min 57.60 / max 58.47)
r2: 57.95 (SE +/- 0.39, N = 3, min 57.19 / max 58.44)
r3: 57.84 (SE +/- 0.25, N = 3, min 57.34 / max 58.14)

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: alexnet (ms; fewer is better)
r1: 15.50 (SE +/- 0.08, N = 3, min 15.40 / max 15.65; MIN 14.41 / MAX 55.15)
r2: 15.46 (SE +/- 0.04, N = 3, min 15.42 / max 15.54; MIN 14.35 / MAX 27.24)
r3: 15.49 (SE +/- 0.03, N = 3, min 15.44 / max 15.52; MIN 14.41 / MAX 24.83)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: HWB Color Space (Iterations Per Minute; more is better)
r1: 775 (SE +/- 5.03, N = 3, min 769 / max 785)
r2: 774 (SE +/- 5.70, N = 3, min 767 / max 785)
r3: 776 (SE +/- 4.51, N = 3, min 771 / max 785)
1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL - Operation: Antialias (Seconds; fewer is better)
r1: 36.56 (SE +/- 0.45, N = 3, min 35.66 / max 37.10)
r2: 36.56 (SE +/- 0.35, N = 3, min 35.87 / max 36.92)
r3: 36.65 (SE +/- 0.38, N = 3, min 35.90 / max 37.12)

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2 - Device Training Score (Score; more is better)
r1: 816, r2: 814, r3: 814

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.90 - Blend File: Barbershop - Compute: NVIDIA OptiX (Seconds; fewer is better)
r1: 1192.96 (SE +/- 0.44, N = 3, min 1192.40 / max 1193.83)
r2: 1190.05 (SE +/- 0.85, N = 3, min 1188.59 / max 1191.52)
r3: 1192.80 (SE +/- 2.01, N = 3, min 1190.73 / max 1196.81)

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: Vulkan GPU - Model: resnet50 (ms; fewer is better)
r1: 37.25 (SE +/- 0.03, N = 3, min 37.21 / max 37.30; MIN 34.07 / MAX 48.19)
r2: 37.34 (SE +/- 0.06, N = 3, min 37.26 / max 37.45; MIN 33.97 / MAX 56.32)
r3: 37.26 (SE +/- 0.01, N = 3, min 37.25 / max 37.27; MIN 33.79 / MAX 52.48)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

PlaidML

This test profile uses the PlaidML deep learning framework, developed by Intel, to run various benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML - FP16: No - Mode: Inference - Network: Mobilenet - Device: OpenCL (FPS; more is better)
r1: 1246.78 (SE +/- 3.10, N = 3, min 1240.62 / max 1250.47)
r2: 1244.95 (SE +/- 2.03, N = 3, min 1240.98 / max 1247.67)
r3: 1247.93 (SE +/- 4.92, N = 3, min 1243.00 / max 1257.76)

LevelDB

LevelDB is a key-value storage library developed by Google that supports making use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.22 - Benchmark: Random Fill (MB/s; more is better)
r1: 43.1 (SE +/- 0.21, N = 3, min 42.8 / max 43.5)
r2: 43.1 (SE +/- 0.19, N = 3, min 42.9 / max 43.5)
r3: 43.2 (SE +/- 0.07, N = 3, min 43.1 / max 43.3)
1. (CXX) g++ options: -O3 -lsnappy -lpthread
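LevelDB's Random Fill appears twice in this file, once as MB/s and once as microseconds per operation; the two are linked through the bytes written per operation. A conversion sketch — the 116-byte record (a 16-byte key plus 100-byte value) is a hypothetical size for illustration, not necessarily what this test profile configures:

```python
def mb_per_sec(us_per_operation, bytes_per_op):
    """Convert per-op latency to raw throughput; 1 MB/s == 1 byte/us."""
    return bytes_per_op / us_per_operation

def us_per_op(mb_per_s, bytes_per_op):
    """Inverse conversion: throughput back to per-op latency."""
    return bytes_per_op / mb_per_s

# Hypothetical 116-byte records at 41.04 us/op:
print(f"{mb_per_sec(41.04, 116):.2f} MB/s")
# → 2.83 MB/s
```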

Unigine Superposition

This test calculates the average frame-rate within the Superposition demo for the Unigine engine, released in 2017. This engine is extremely demanding on the system's graphics card. Learn more via the OpenBenchmarking.org test page.

Unigine Superposition 1.0 - Resolution: 1920 x 1080 - Mode: Fullscreen - Quality: Medium - Renderer: OpenGL (Frames Per Second; more is better)
r1: 90.4 (SE +/- 0.15, N = 3, min 90.2 / max 90.7; MAX 114.5)
r2: 90.6 (SE +/- 0.15, N = 3, min 90.4 / max 90.9; MAX 114.4)
r3: 90.5 (SE +/- 0.15, N = 3, min 90.3 / max 90.8; MAX 113.0)

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: Vulkan GPU - Model: squeezenet_ssd (ms; fewer is better)
r1: 27.58 (SE +/- 0.02, N = 3, min 27.56 / max 27.61; MIN 26.94 / MAX 43.23)
r2: 27.52 (SE +/- 0.02, N = 3, min 27.49 / max 27.56; MIN 26.95 / MAX 42.60)
r3: 27.55 (SE +/- 0.02, N = 3, min 27.51 / max 27.59; MIN 26.92 / MAX 41.99)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer - Model: Asian Dragon (Frames Per Second; more is better)
r1: 7.5555 (SE +/- 0.0643, N = 3, min 7.43 / max 7.65; MIN 7.18 / MAX 12.55)
r2: 7.5656 (SE +/- 0.0719, N = 3, min 7.43 / max 7.68; MIN 7.18 / MAX 12.51)
r3: 7.5496 (SE +/- 0.0754, N = 3, min 7.41 / max 7.67; MIN 7.19 / MAX 12.66)

OpenVINO

This is a test of Intel OpenVINO, a toolkit built around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1 - Model: Person Detection 0106 FP32 - Device: CPU (ms; fewer is better)
r1: 5069.44 (SE +/- 15.43, N = 3, min 5050.47 / max 5100.00)
r2: 5079.89 (SE +/- 9.68, N = 9, min 5023.61 / max 5111.12)
r3: 5073.09 (SE +/- 14.45, N = 5, min 5034.75 / max 5109.15)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.90 - Blend File: BMW27 - Compute: CUDA (Seconds; fewer is better)
r1: 91.00 (SE +/- 0.14, N = 3, min 90.74 / max 91.20)
r2: 90.82 (SE +/- 0.16, N = 3, min 90.49 / max 91.01)
r3: 90.93 (SE +/- 0.10, N = 3, min 90.78 / max 91.12)

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: Vulkan GPU - Model: vgg16 (ms; fewer is better)
r1: 71.96 (SE +/- 0.05, N = 3, min 71.90 / max 72.06; MIN 70.52 / MAX 88.30)
r2: 71.82 (SE +/- 0.04, N = 3, min 71.75 / max 71.88; MIN 70.37 / MAX 86.67)
r3: 71.86 (SE +/- 0.02, N = 3, min 71.82 / max 71.88; MIN 70.40 / MAX 88.50)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

Timed FFmpeg Compilation 4.2.2 - Time To Compile (Seconds; fewer is better)
r1: 100.26 (SE +/- 0.78, N = 3, min 98.99 / max 101.69)
r2: 100.40 (SE +/- 0.39, N = 3, min 99.62 / max 100.81)
r3: 100.20 (SE +/- 0.30, N = 3, min 99.61 / max 100.57)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms; fewer is better)
r1: 7155.41 (SE +/- 12.55, N = 3, min 7141.09 / max 7180.43; MIN 7025.22)
r2: 7159.48 (SE +/- 1.75, N = 3, min 7156.72 / max 7162.72; MIN 7040.61)
r3: 7169.03 (SE +/- 6.55, N = 3, min 7160.71 / max 7181.96; MIN 7046.49)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12 - Settings: UASTC Level 3 (Seconds; fewer is better)
r1: 110.84 (SE +/- 0.55, N = 3, min 109.74 / max 111.45)
r2: 110.93 (SE +/- 0.55, N = 3, min 109.83 / max 111.58)
r3: 111.04 (SE +/- 0.53, N = 3, min 109.97 / max 111.60)
1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient benchmark, a newer scientific benchmark from Sandia National Labs focused on supercomputer testing with modern, real-world workloads compared to HPCC. Learn more via the OpenBenchmarking.org test page.

High Performance Conjugate Gradient 3.1 (GFLOP/s; more is better)
r1: 3.96177 (SE +/- 0.00082, N = 3, min 3.96 / max 3.96)
r2: 3.96068 (SE +/- 0.00692, N = 3, min 3.95 / max 3.97)
r3: 3.95457 (SE +/- 0.01196, N = 3, min 3.94 / max 3.98)
1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -pthread -lmpi_cxx -lmpi
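GFLOP/s is floating-point operations divided by elapsed seconds, scaled to billions; HPCG counts the operations of its sparse solve internally. A sketch of the arithmetic, with hypothetical numbers chosen to land in the same ballpark as the results above:

```python
def gflops(flop_count, seconds):
    """Billions of floating-point operations per second."""
    return flop_count / seconds / 1e9

# Hypothetical: 1.2e11 FLOPs completed in 30.3 s of solver time.
print(f"{gflops(1.2e11, 30.3):.2f} GFLOP/s")
# → 3.96 GFLOP/s
```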

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Resizing (Iterations Per Minute; more is better)
r1: 552 (SE +/- 2.73, N = 3, min 548 / max 557)
r2: 551 (SE +/- 5.00, N = 3, min 546 / max 561)
r3: 551 (SE +/- 5.36, N = 3, min 545 / max 562)
1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: Mobilenet Float (Microseconds; fewer is better)
r1: 239119.33 (SE +/- 1996.41, N = 3, min 235208 / max 241770)
r2: 239224.00 (SE +/- 1820.00, N = 3, min 235584 / max 241045)
r3: 239536.67 (SE +/- 1638.46, N = 3, min 236260 / max 241210)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms; fewer is better)
r1: 3797.32 (SE +/- 1.61, N = 3, min 3794.41 / max 3799.97; MIN 3686.53)
r2: 3799.45 (SE +/- 1.20, N = 3, min 3797.39 / max 3801.56; MIN 3692.97)
r3: 3792.87 (SE +/- 1.33, N = 3, min 3790.24 / max 3794.53; MIN 3672.83)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program or, on Windows, will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.0.1 - Test: Server Room - Acceleration: CPU-only (Seconds; fewer is better)
r1: 4.181 (SE +/- 0.010, N = 3, min 4.16 / max 4.20)
r2: 4.174 (SE +/- 0.004, N = 3, min 4.17 / max 4.18)
r3: 4.178 (SE +/- 0.006, N = 3, min 4.17 / max 4.19)

yquake2

This is a test of Yamagi Quake II. Yamagi Quake II is an enhanced client for id Software's Quake II with focus on offline and coop gameplay. Learn more via the OpenBenchmarking.org test page.

yquake2 7.45 - Renderer: Software CPU - Resolution: 1920 x 1080 (Frames Per Second; more is better)
r1: 60.7 (SE +/- 0.07, N = 3, min 60.6 / max 60.8)
r2: 60.7 (SE +/- 0.07, N = 3, min 60.6 / max 60.8)
r3: 60.6 (SE +/- 0.09, N = 3, min 60.5 / max 60.8)
1. (CC) gcc options: -lm -ldl -rdynamic -shared -lSDL2 -O2 -pipe -fomit-frame-pointer -std=gnu99 -fno-strict-aliasing -fwrapv -fvisibility=hidden -MMD -mfpmath=sse -fPIC

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.90 - Blend File: Pabellon Barcelona - Compute: CUDA (Seconds; fewer is better)
r1: 608.80 (SE +/- 0.04, N = 3, min 608.75 / max 608.87)
r2: 609.56 (SE +/- 0.02, N = 3, min 609.53 / max 609.61)
r3: 608.62 (SE +/- 0.06, N = 3, min 608.50 / max 608.69)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms; fewer is better)
r1: 7144.23 (SE +/- 3.89, N = 3, min 7137.39 / max 7150.85; MIN 7028.46)
r2: 7154.66 (SE +/- 0.92, N = 3, min 7152.86 / max 7155.92; MIN 7035.88)
r3: 7147.09 (SE +/- 2.23, N = 3, min 7142.78 / max 7150.25; MIN 7033.98)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.5 - Compression Level: 3 (MB/s; more is better)
r1: 2833.6 (SE +/- 7.25, N = 3, min 2821.8 / max 2846.8)
r2: 2831.0 (SE +/- 8.65, N = 3, min 2814.4 / max 2843.5)
r3: 2835.1 (SE +/- 4.18, N = 3, min 2827.5 / max 2841.9)
1. (CC) gcc options: -O3 -pthread -lz -llzma
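The Zstd figure is input bytes processed per second of compression time. Zstd itself is not in the Python standard library, so zlib stands in here as an illustrative substitute; the throughput arithmetic is the same:

```python
import time
import zlib

def compress_mb_per_sec(data, level=3):
    """Time one compression pass; report input throughput in MB/s."""
    start = time.perf_counter()
    zlib.compress(data, level)   # stand-in for Zstd level 3
    elapsed = time.perf_counter() - start
    return len(data) / elapsed / 1e6

payload = bytes(range(256)) * 4096  # ~1 MB of repetitive sample data
print(f"{compress_mb_per_sec(payload):.1f} MB/s")
```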

LevelDB

LevelDB is a key-value storage library developed by Google that supports making use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.22 - Benchmark: Random Fill (Microseconds Per Op; fewer is better)
r1: 41.04 (SE +/- 0.19, N = 3, min 40.68 / max 41.33)
r2: 41.03 (SE +/- 0.20, N = 3, min 40.63 / max 41.29)
r3: 40.98 (SE +/- 0.07, N = 3, min 40.84 / max 41.08)
1. (CXX) g++ options: -O3 -lsnappy -lpthread

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2 - Device AI Score (Score; more is better)
r1: 1546, r2: 1544, r3: 1544

OctaneBench

OctaneBench is a test of OctaneRender on the GPU and requires the use of NVIDIA CUDA. Learn more via the OpenBenchmarking.org test page.

OctaneBench 2020.1 - Total Score (Score; more is better)
r1: 189.09, r2: 189.10, r3: 189.32

cl-mem

A basic OpenCL memory benchmark. Learn more via the OpenBenchmarking.org test page.

cl-mem 2017-01-13 - Benchmark: Read (GB/s; more is better)
r1: 330.3 (SE +/- 0.18, N = 3, min 330.0 / max 330.6)
r2: 329.9 (SE +/- 0.09, N = 3, min 329.7 / max 330.0)
r3: 329.9 (SE +/- 0.03, N = 3, min 329.9 / max 330.0)
1. (CC) gcc options: -O2 -flto -lOpenCL

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better)
r1: 3795.81 (SE +/- 6.76, N = 3, min 3784.22 / max 3807.62; MIN 3687.23)
r2: 3800.41 (SE +/- 4.34, N = 3, min 3792.58 / max 3807.57; MIN 3681.23)
r3: 3798.12 (SE +/- 3.22, N = 3, min 3791.70 / max 3801.78; MIN 3685.27)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12 - Settings: UASTC Level 2 + RDO Post-Processing (Seconds; fewer is better)
r1: 840.35 (SE +/- 0.74, N = 3, min 838.87 / max 841.16)
r2: 840.32 (SE +/- 0.35, N = 3, min 839.66 / max 840.82)
r3: 841.23 (SE +/- 0.62, N = 3, min 840.38 / max 842.43)
1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.90 - Blend File: Pabellon Barcelona - Compute: NVIDIA OptiX (Seconds; fewer is better)
r1: 196.21 (SE +/- 0.02, N = 3, min 196.18 / max 196.24)
r2: 196.28 (SE +/- 0.03, N = 3, min 196.25 / max 196.34)
r3: 196.41 (SE +/- 0.08, N = 3, min 196.26 / max 196.51)

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.1 - Video Input: Summer Nature 1080p (FPS, more is better)
r1: 460.02 (SE +/- 3.60, N = 14; Min: 454.53 / Max: 506.62; MIN: 375.05 / MAX: 590.01)
r2: 459.61 (SE +/- 3.46, N = 13; Min: 453.95 / Max: 501.01; MIN: 374.03 / MAX: 582.97)
r3: 459.71 (SE +/- 3.80, N = 13; Min: 453.42 / Max: 504.98; MIN: 374.63 / MAX: 587.93)
1. (CC) gcc options: -pthread

FAHBench

FAHBench is a Folding@Home benchmark on the GPU. Learn more via the OpenBenchmarking.org test page.

FAHBench 2.3.2 (Ns Per Day, more is better)
r1: 186.46 (SE +/- 0.23, N = 3; Min: 186.10 / Max: 186.88)
r2: 186.48 (SE +/- 0.14, N = 3; Min: 186.26 / Max: 186.75)
r3: 186.62 (SE +/- 0.11, N = 3; Min: 186.40 / Max: 186.79)

PlaidML

This test profile uses the PlaidML deep learning framework, developed by Intel, for offering up various benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML - FP16: No - Mode: Inference - Network: DenseNet 201 - Device: OpenCL (FPS, more is better)
r1: 110.07 (SE +/- 0.19, N = 3; Min: 109.73 / Max: 110.38)
r2: 109.98 (SE +/- 0.42, N = 3; Min: 109.32 / Max: 110.77)
r3: 109.99 (SE +/- 0.40, N = 3; Min: 109.39 / Max: 110.74)

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: Vulkan GPU - Model: mobilenet (ms, fewer is better)
r1: 26.52 (SE +/- 0.02, N = 3; Min: 26.48 / Max: 26.56; MIN: 25.69 / MAX: 43.81)
r2: 26.53 (SE +/- 0.02, N = 3; Min: 26.50 / Max: 26.56; MIN: 25.76 / MAX: 43.91)
r3: 26.51 (SE +/- 0.07, N = 3; Min: 26.43 / Max: 26.65; MIN: 25.69 / MAX: 45.35)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better)
r1: 3795.02 (SE +/- 2.45, N = 3; Min: 3790.14 / Max: 3797.77; MIN: 3682.24)
r2: 3797.05 (SE +/- 2.65, N = 3; Min: 3792.15 / Max: 3801.24; MIN: 3673.18)
r3: 3797.72 (SE +/- 3.77, N = 3; Min: 3791.00 / Max: 3804.04; MIN: 3684.19)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1 - Model: Face Detection 0106 FP16 - Device: CPU (ms, fewer is better)
r1: 3165.24 (SE +/- 4.35, N = 3; Min: 3156.88 / Max: 3171.48)
r2: 3166.57 (SE +/- 3.88, N = 3; Min: 3159.64 / Max: 3173.05)
r3: 3164.51 (SE +/- 7.78, N = 3; Min: 3148.95 / Max: 3172.29)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

Timed HMMer Search

This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of the Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.

Timed HMMer Search 3.3.1 - Pfam Database Search (Seconds, fewer is better)
r1: 105.53 (SE +/- 0.06, N = 3; Min: 105.45 / Max: 105.65)
r2: 105.57 (SE +/- 0.04, N = 3; Min: 105.52 / Max: 105.65)
r3: 105.51 (SE +/- 0.02, N = 3; Min: 105.47 / Max: 105.54)
1. (CC) gcc options: -O3 -pthread -lhmmer -leasel -lm

clpeak

Clpeak is designed to test the peak capabilities of OpenCL devices. Learn more via the OpenBenchmarking.org test page.

clpeak - OpenCL Test: Global Memory Bandwidth (GBPS, more is better)
r1: 324.63 (SE +/- 0.32, N = 3; Min: 323.99 / Max: 325.00)
r2: 324.58 (SE +/- 0.28, N = 3; Min: 324.01 / Max: 324.89)
r3: 324.78 (SE +/- 0.28, N = 3; Min: 324.22 / Max: 325.08)
1. (CXX) g++ options: -O3 -rdynamic -lOpenCL

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: Vulkan GPU - Model: yolov4-tiny (ms, fewer is better)
r1: 35.52 (SE +/- 0.03, N = 3; Min: 35.46 / Max: 35.56; MIN: 34.38 / MAX: 51.44)
r2: 35.51 (SE +/- 0.01, N = 3; Min: 35.49 / Max: 35.53; MIN: 33.05 / MAX: 50.05)
r3: 35.53 (SE +/- 0.05, N = 3; Min: 35.47 / Max: 35.62; MIN: 32.99 / MAX: 52.01)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

clpeak

Clpeak is designed to test the peak capabilities of OpenCL devices. Learn more via the OpenBenchmarking.org test page.

clpeak - OpenCL Test: Double-Precision Double (GFLOPS, more is better)
r1: 340.42 (SE +/- 3.78, N = 3; Min: 334.70 / Max: 347.57)
r2: 340.46 (SE +/- 3.68, N = 3; Min: 335.20 / Max: 347.55)
r3: 340.59 (SE +/- 3.74, N = 3; Min: 335.15 / Max: 347.75)
1. (CXX) g++ options: -O3 -rdynamic -lOpenCL

FinanceBench

FinanceBench is a collection of financial program benchmarks with support for benchmarking on the GPU. Learn more via the OpenBenchmarking.org test page.
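The Black-Scholes kernel benchmarked below prices European options in bulk on the GPU. As a plain-CPU reference for the closed-form formula each work-item evaluates (a sketch for orientation, not FinanceBench's actual code), the call price is:

```python
import math

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes_call(s, k, r, sigma, t):
    """Closed-form Black-Scholes price of a European call option."""
    d1 = (math.log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return s * norm_cdf(d1) - k * math.exp(-r * t) * norm_cdf(d2)

# Textbook check: S=100, K=100, r=5%, sigma=20%, T=1 year -> ~10.45
print(round(black_scholes_call(100, 100, 0.05, 0.2, 1.0), 2))
```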

FinanceBench 2016-06-06 - Benchmark: Black-Scholes OpenCL (ms, fewer is better)
r1: 17.48 (SE +/- 0.00, N = 3; Min: 17.47 / Max: 17.48)
r2: 17.48 (SE +/- 0.00, N = 3; Min: 17.48 / Max: 17.48)
r3: 17.48 (SE +/- 0.00, N = 3; Min: 17.48 / Max: 17.48)
1. (CXX) g++ options: -O3 -lOpenCL

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2 - Device Inference Score (Score, more is better)
r1: 730; r2: 730; r3: 730

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1 - Model: Person Detection 0106 FP32 - Device: CPU (FPS, more is better)
r1: 0.79 (SE +/- 0.01, N = 3; Min: 0.78 / Max: 0.80)
r2: 0.79 (SE +/- 0.01, N = 9; Min: 0.78 / Max: 0.84)
r3: 0.79 (SE +/- 0.01, N = 5; Min: 0.78 / Max: 0.82)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

OpenVINO 2021.1 - Model: Person Detection 0106 FP16 - Device: CPU (FPS, more is better)
r1: 0.80 (SE +/- 0.00, N = 3; Min: 0.80 / Max: 0.81)
r2: 0.80 (SE +/- 0.01, N = 3; Min: 0.79 / Max: 0.82)
r3: 0.80 (SE +/- 0.01, N = 3; Min: 0.79 / Max: 0.82)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

OpenVINO 2021.1 - Model: Face Detection 0106 FP16 - Device: CPU (FPS, more is better)
r1: 1.28 (SE +/- 0.01, N = 3; Min: 1.27 / Max: 1.31)
r2: 1.28 (SE +/- 0.01, N = 3; Min: 1.27 / Max: 1.30)
r3: 1.28 (SE +/- 0.01, N = 3; Min: 1.26 / Max: 1.30)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.0.1 - Test: Server Rack - Acceleration: CPU-only (Seconds, fewer is better)
r1: 0.181 (SE +/- 0.000, N = 3; Min: 0.18 / Max: 0.18)
r2: 0.181 (SE +/- 0.000, N = 3; Min: 0.18 / Max: 0.18)
r3: 0.181 (SE +/- 0.000, N = 3; Min: 0.18 / Max: 0.18)

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Enhanced (Iterations Per Minute, more is better)
r1: 115 (SE +/- 0.67, N = 3; Min: 114 / Max: 116)
r2: 115 (SE +/- 0.67, N = 3; Min: 114 / Max: 116)
r3: 115 (SE +/- 0.67, N = 3; Min: 114 / Max: 116)
1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

GraphicsMagick 1.3.33 - Operation: Swirl (Iterations Per Minute, more is better)
r1: 207 (SE +/- 1.72, N = 8; Min: 205 / Max: 219)
r2: 207 (SE +/- 1.60, N = 10; Min: 204 / Max: 221)
r3: 207 (SE +/- 1.72, N = 8; Min: 205 / Max: 219)
1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
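The GB/s figure below is simply bytes of JSON parsed per second. A rough sketch of how such a throughput number is derived, using the Python stdlib json module rather than simdjson itself, with a made-up payload standing in for the LargeRandom input:

```python
import json
import time

# Hypothetical payload; the LargeRandom test uses a large randomly
# generated document, this small list is just a stand-in.
doc = json.dumps([{"id": i, "value": i * 0.5} for i in range(10_000)])
payload = doc.encode("utf-8")

start = time.perf_counter()
parsed = json.loads(payload)
elapsed = time.perf_counter() - start

# Throughput = input size / parse time
gb_per_s = len(payload) / elapsed / 1e9
print(f"parsed {len(payload)} bytes at {gb_per_s:.3f} GB/s")
```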

simdjson 0.7.1 - Throughput Test: LargeRandom (GB/s, more is better)
r1: 0.5 (SE +/- 0.00, N = 3; Min: 0.5 / Max: 0.5)
r2: 0.5 (SE +/- 0.00, N = 3; Min: 0.5 / Max: 0.5)
r3: 0.5 (SE +/- 0.00, N = 3; Min: 0.5 / Max: 0.5)
1. (CXX) g++ options: -O3 -pthread

yquake2

This is a test of Yamagi Quake II. Yamagi Quake II is an enhanced client for id Software's Quake II with focus on offline and coop gameplay. Learn more via the OpenBenchmarking.org test page.

yquake2 7.45 - Renderer: OpenGL 3.x - Resolution: 1920 x 1080 (Frames Per Second, more is better)
r1: 60; r2: 60; r3: 60
1. (CC) gcc options: -lm -ldl -rdynamic -shared -lSDL2 -O2 -pipe -fomit-frame-pointer -std=gnu99 -fno-strict-aliasing -fwrapv -fvisibility=hidden -MMD -mfpmath=sse -fPIC

yquake2 7.45 - Renderer: OpenGL 1.x - Resolution: 1920 x 1080 (Frames Per Second, more is better)
r1: 59.9 (SE +/- 0.03, N = 3; Min: 59.9 / Max: 60.0)
r2: 59.9 (SE +/- 0.07, N = 3; Min: 59.8 / Max: 60.0)
r3: 59.9 (SE +/- 0.03, N = 3; Min: 59.9 / Max: 60.0)
1. (CC) gcc options: -lm -ldl -rdynamic -shared -lSDL2 -O2 -pipe -fomit-frame-pointer -std=gnu99 -fno-strict-aliasing -fwrapv -fvisibility=hidden -MMD -mfpmath=sse -fPIC

LevelDB

LevelDB is a key-value storage library developed by Google that supports making use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.22 - Benchmark: Fill Sync (MB/s, more is better)
r1: 0.5 (SE +/- 0.00, N = 3; Min: 0.5 / Max: 0.5)
r2: 0.5 (SE +/- 0.00, N = 3; Min: 0.5 / Max: 0.5)
r3: 0.5 (SE +/- 0.00, N = 3; Min: 0.5 / Max: 0.5)
1. (CXX) g++ options: -O3 -lsnappy -lpthread

NeatBench

NeatBench is a benchmark of the cross-platform Neat Video software on the CPU and optional GPU (OpenCL / CUDA) support. Learn more via the OpenBenchmarking.org test page.

NeatBench 5 - Acceleration: GPU (FPS, more is better)
r1: 27.5 (SE +/- 0.57, N = 15; Min: 25.3 / Max: 31.1)
r2: 27.1 (SE +/- 0.47, N = 15; Min: 25.3 / Max: 31.0)
r3: 27.6 (SE +/- 0.60, N = 15; Min: 25.3 / Max: 30.9)

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.90 - Blend File: BMW27 - Compute: NVIDIA OptiX (Seconds, fewer is better)
r1: 41.47 (SE +/- 3.33, N = 15; Min: 38.09 / Max: 88.04)
r2: 38.07 (SE +/- 0.02, N = 3; Min: 38.04 / Max: 38.11)
r3: 38.07 (SE +/- 0.05, N = 3; Min: 38.01 / Max: 38.17)

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: Vulkan GPU - Model: regnety_400m (ms, fewer is better)
r1: 19.16 (SE +/- 0.09, N = 3; Min: 19.05 / Max: 19.33; MIN: 17.94 / MAX: 21.24)
r2: 17.15 (SE +/- 1.83, N = 3; Min: 13.49 / Max: 19.03; MIN: 13.30 / MAX: 38.12)
r3: 17.60 (SE +/- 1.77, N = 3; Min: 14.07 / Max: 19.42; MIN: 13.79 / MAX: 32.97)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU - Model: googlenet (ms, fewer is better)
r1: 20.05 (SE +/- 0.06, N = 3; Min: 19.94 / Max: 20.13; MIN: 18.94 / MAX: 32.96)
r2: 18.20 (SE +/- 1.77, N = 3; Min: 14.67 / Max: 20.01; MIN: 14.26 / MAX: 31.74)
r3: 18.26 (SE +/- 1.84, N = 3; Min: 14.59 / Max: 20.19; MIN: 14.28 / MAX: 36.09)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU - Model: blazeface (ms, fewer is better)
r1: 2.55 (SE +/- 0.02, N = 3; Min: 2.53 / Max: 2.58; MIN: 2.43 / MAX: 2.76)
r2: 2.29 (SE +/- 0.26, N = 3; Min: 1.76 / Max: 2.56; MIN: 1.68 / MAX: 8.91)
r3: 2.29 (SE +/- 0.25, N = 3; Min: 1.79 / Max: 2.54; MIN: 1.69 / MAX: 12.73)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU - Model: efficientnet-b0 (ms, fewer is better)
r1: 10.01 (SE +/- 0.10, N = 3; Min: 9.90 / Max: 10.21; MIN: 9.44 / MAX: 29.57)
r2: 9.02 (SE +/- 0.95, N = 3; Min: 7.11 / Max: 10.01; MIN: 7.00 / MAX: 19.29)
r3: 8.99 (SE +/- 0.94, N = 3; Min: 7.11 / Max: 9.93; MIN: 6.99 / MAX: 13.79)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU - Model: mnasnet (ms, fewer is better)
r1: 6.63 (SE +/- 0.00, N = 3; Min: 6.63 / Max: 6.64; MIN: 6.21 / MAX: 8.85)
r2: 5.86 (SE +/- 0.71, N = 3; Min: 4.44 / Max: 6.59; MIN: 4.30 / MAX: 15.47)
r3: 5.91 (SE +/- 0.76, N = 3; Min: 4.39 / Max: 6.69; MIN: 4.32 / MAX: 7.94)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU - Model: shufflenet-v2 (ms, fewer is better)
r1: 7.92 (SE +/- 0.07, N = 3; Min: 7.84 / Max: 8.07; MIN: 7.27 / MAX: 20.30)
r2: 6.98 (SE +/- 0.96, N = 3; Min: 5.06 / Max: 8.02; MIN: 4.98 / MAX: 27.09)
r3: 7.05 (SE +/- 0.93, N = 3; Min: 5.19 / Max: 8.04; MIN: 5.04 / MAX: 20.37)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better)
r1: 5.74 (SE +/- 0.62, N = 3; Min: 4.51 / Max: 6.38; MIN: 4.43 / MAX: 9.64)
r2: 5.73 (SE +/- 0.65, N = 3; Min: 4.43 / Max: 6.42; MIN: 4.33 / MAX: 10.47)
r3: 5.81 (SE +/- 0.64, N = 3; Min: 4.53 / Max: 6.47; MIN: 4.41 / MAX: 25.12)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better)
r1: 7.23 (SE +/- 0.74, N = 3; Min: 5.76 / Max: 7.97; MIN: 5.54 / MAX: 9.59)
r2: 7.22 (SE +/- 0.79, N = 3; Min: 5.65 / Max: 8.04; MIN: 5.41 / MAX: 20.72)
r3: 7.19 (SE +/- 0.73, N = 3; Min: 5.73 / Max: 7.93; MIN: 5.52 / MAX: 9.67)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: efficientnet-b0 (ms, fewer is better)
r1: 10.00 (SE +/- 0.05, N = 3; Min: 9.92 / Max: 10.10; MIN: 9.46 / MAX: 24.32)
r2: 9.05 (SE +/- 0.96, N = 3; Min: 7.14 / Max: 10.07; MIN: 6.99 / MAX: 21.76)
r3: 9.06 (SE +/- 0.96, N = 3; Min: 7.14 / Max: 10.05; MIN: 7.04 / MAX: 12.38)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: mnasnet (ms, fewer is better)
r1: 6.67 (SE +/- 0.02, N = 3; Min: 6.65 / Max: 6.70; MIN: 5.99 / MAX: 21.18)
r2: 5.96 (SE +/- 0.75, N = 3; Min: 4.45 / Max: 6.79; MIN: 4.32 / MAX: 14.32)
r3: 5.96 (SE +/- 0.74, N = 3; Min: 4.48 / Max: 6.73; MIN: 4.33 / MAX: 28.21)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: shufflenet-v2 (ms, fewer is better)
r1: 7.93 (SE +/- 0.03, N = 3; Min: 7.87 / Max: 7.97; MIN: 7.52 / MAX: 16.61)
r2: 6.95 (SE +/- 0.94, N = 3; Min: 5.08 / Max: 7.91; MIN: 5.01 / MAX: 9.68)
r3: 7.03 (SE +/- 0.95, N = 3; Min: 5.13 / Max: 8.05; MIN: 5.04 / MAX: 20.64)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better)
r1: 5.74 (SE +/- 0.65, N = 3; Min: 4.43 / Max: 6.40; MIN: 4.30 / MAX: 7.75)
r2: 5.81 (SE +/- 0.65, N = 3; Min: 4.52 / Max: 6.52; MIN: 4.43 / MAX: 17.76)
r3: 5.81 (SE +/- 0.62, N = 3; Min: 4.56 / Max: 6.45; MIN: 4.48 / MAX: 10.59)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better)
r1: 7.31 (SE +/- 0.67, N = 3; Min: 5.97 / Max: 8.02; MIN: 5.51 / MAX: 16.43)
r2: 7.22 (SE +/- 0.73, N = 3; Min: 5.76 / Max: 7.99; MIN: 5.54 / MAX: 12.03)
r3: 7.23 (SE +/- 0.73, N = 3; Min: 5.77 / Max: 7.97; MIN: 5.55 / MAX: 12.30)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: MobileNetV2_224 (ms, fewer is better)
r1: 5.239 (SE +/- 0.210, N = 10; Min: 3.35 / Max: 5.51; MIN: 3.19 / MAX: 26.27)
r2: 5.291 (SE +/- 0.185, N = 11; Min: 3.44 / Max: 5.52; MIN: 3.30 / MAX: 27.38)
r3: 5.285 (SE +/- 0.209, N = 10; Min: 3.41 / Max: 5.55; MIN: 3.27 / MAX: 26.82)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2020-09-17 - Model: SqueezeNetV1.0 (ms, fewer is better)
r1: 8.899 (SE +/- 0.373, N = 10; Min: 5.55 / Max: 9.33; MIN: 4.96 / MAX: 31.21)
r2: 8.982 (SE +/- 0.316, N = 11; Min: 5.82 / Max: 9.37; MIN: 5.05 / MAX: 31.35)
r3: 8.944 (SE +/- 0.373, N = 10; Min: 5.60 / Max: 9.41; MIN: 5.01 / MAX: 31.89)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.
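The LPOP test below hammers the server with left-pops from a Redis list. Semantically, each LPOP removes and returns the head element, which a stdlib deque mirrors; this is a sketch of the operation's semantics only, not of Redis's implementation or its network path, so the in-process rate it prints is not comparable to the server numbers below:

```python
from collections import deque
import time

# Stand-in for a Redis list: LPUSH grows the head, LPOP removes it.
backlog = deque(range(1_000_000))

start = time.perf_counter()
ops = 0
while backlog:
    backlog.popleft()  # the in-process analogue of LPOP
    ops += 1
elapsed = time.perf_counter() - start

print(f"{ops / elapsed:,.0f} pops/sec (no network or protocol overhead)")
```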

Redis 6.0.9 - Test: LPOP (Requests Per Second, more is better)
r1: 3394660.20 (SE +/- 36042.05, N = 3; Min: 3322897 / Max: 3436426)
r2: 2104092.33 (SE +/- 3702.86, N = 3; Min: 2097174 / Max: 2109839.75)
r3: 2809233.48 (SE +/- 181152.66, N = 12; Min: 1678604 / Max: 3334293.25)
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0 - Preset: Medium (Seconds, fewer is better)
r1: 7.68 (SE +/- 0.14, N = 15; Min: 7.37 / Max: 9.59)
r2: 7.61 (SE +/- 0.11, N = 15; Min: 7.29 / Max: 9.00)
r3: 7.58 (SE +/- 0.16, N = 15; Min: 7.27 / Max: 9.69)
1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
r1: 4.74772 (SE +/- 0.10403, N = 12; Min: 3.61 / Max: 4.97; MIN: 3.29)
r2: 4.71457 (SE +/- 0.06823, N = 15; Min: 3.83 / Max: 4.82; MIN: 3.29)
r3: 4.73728 (SE +/- 0.07477, N = 15; Min: 3.94 / Max: 4.88; MIN: 3.29)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better)
r1: 9.87893 (SE +/- 0.23621, N = 12; Min: 7.28 / Max: 10.18; MIN: 6.66)
r2: 9.77701 (SE +/- 0.15643, N = 15; Min: 8.08 / Max: 10.08; MIN: 6.67)
r3: 9.81238 (SE +/- 0.22537, N = 12; Min: 7.34 / Max: 10.08; MIN: 6.65)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
r1: 3.17762 (SE +/- 0.01732, N = 3; Min: 3.14 / Max: 3.20; MIN: 2.58)
r2: 3.16769 (SE +/- 0.02081, N = 3; Min: 3.13 / Max: 3.19; MIN: 2.39)
r3: 3.11291 (SE +/- 0.06527, N = 12; Min: 2.40 / Max: 3.20; MIN: 1.86)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 29Oct2020 - Model: Rhodopsin Protein (ns/day, more is better)
r1: 5.198 (SE +/- 0.111, N = 15; Min: 4.90 / Max: 6.46)
r2: 5.169 (SE +/- 0.109, N = 15; Min: 4.93 / Max: 6.44)
r3: 5.179 (SE +/- 0.110, N = 15; Min: 4.95 / Max: 6.47)
1. (CXX) g++ options: -O3 -pthread -lm

LuxCoreRender OpenCL

LuxCoreRender is an open-source physically based renderer. This test profile is focused on running LuxCoreRender on OpenCL accelerators/GPUs. The alternative luxcorerender test profile is for CPU execution due to a difference in tests, etc. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender OpenCL 2.3 - Scene: Rainbow Colors and Prism (M samples/sec, more is better)
r1: 5.30 (SE +/- 0.12, N = 12; Min: 3.99 / Max: 5.44; MIN: 1.66 / MAX: 5.70)
r2: 5.39 (SE +/- 0.02, N = 3; Min: 5.37 / Max: 5.42; MIN: 4.60 / MAX: 5.67)
r3: 5.41 (SE +/- 0.02, N = 3; Min: 5.38 / Max: 5.44; MIN: 4.58 / MAX: 5.70)

LuxCoreRender OpenCL 2.3 - Scene: LuxCore Benchmark (M samples/sec, more is better)
r1: 2.26 (SE +/- 0.04, N = 12; Min: 1.77 / Max: 2.35; MIN: 0.14 / MAX: 2.63)
r2: 2.31 (SE +/- 0.01, N = 3; Min: 2.30 / Max: 2.32; MIN: 0.27 / MAX: 2.63)
r3: 2.29 (SE +/- 0.01, N = 3; Min: 2.27 / Max: 2.31; MIN: 0.27 / MAX: 2.64)

LuxCoreRender OpenCL 2.3 - Scene: Food (M samples/sec, more is better)
r1: 1.27 (SE +/- 0.04, N = 12; Min: 0.85 / Max: 1.32; MIN: 0.13 / MAX: 1.57)
r2: 1.32 (SE +/- 0.01, N = 3; Min: 1.31 / Max: 1.33; MIN: 0.29 / MAX: 1.57)
r3: 1.30 (SE +/- 0.02, N = 3; Min: 1.27 / Max: 1.32; MIN: 0.26 / MAX: 1.57)

LuxCoreRender OpenCL 2.3 - Scene: DLSC (M samples/sec, more is better)
r1: 2.70 (SE +/- 0.06, N = 12; Min: 2.08 / Max: 2.76; MIN: 0.69 / MAX: 2.81)
r2: 2.77 (SE +/- 0.00, N = 3; Min: 2.77 / Max: 2.77; MIN: 2.57 / MAX: 2.84)
r3: 2.76 (SE +/- 0.00, N = 3; Min: 2.75 / Max: 2.76; MIN: 2.56 / MAX: 2.84)

DDraceNetwork

This is a test of DDraceNetwork, an open-source cooperative 2D platformer. Rendering uses OpenGL 3.3, with fallbacks for older OpenGL versions. Learn more via the OpenBenchmarking.org test page.

DDraceNetwork 15.2.3 - Resolution: 1920 x 1080 - Mode: Fullscreen - Renderer: OpenGL 3.3 - Zoom: Default - Demo: RaiNyMore2 (Frames Per Second, More Is Better):
  r1: 158.21  MIN: 7.02 / MAX: 449.03
  r2: 100.58  (SE +/- 13.14, N = 12)  Run Min/Avg/Max: 33.80 / 100.58 / 173.28  MIN: 6.72 / MAX: 493.34
  r3: 130.66  (SE +/- 9.86, N = 15)   Run Min/Avg/Max: 35.77 / 130.66 / 175.13  MIN: 6.67 / MAX: 498.75
  1. (CXX) g++ options: -O3 -rdynamic -lcrypto -lz -lrt -lpthread -lcurl -lfreetype -lSDL2 -lwavpack -lopusfile -lopus -logg -lGL -lX11 -lnotify -lgdk_pixbuf-2.0 -lgio-2.0 -lgobject-2.0 -lglib-2.0

DDraceNetwork 15.2.3 - Resolution: 1920 x 1080 - Mode: Fullscreen - Renderer: OpenGL 3.0 - Zoom: Default - Demo: RaiNyMore2 (Frames Per Second, More Is Better):
  r1: 170.36  (SE +/- 9.09, N = 15)   Run Min/Avg/Max: 100.65 / 170.36 / 236.21  MIN: 2.43 / MAX: 499.50
  r2: 169.30  (SE +/- 9.59, N = 15)   Run Min/Avg/Max: 58.09 / 169.30 / 213.50   MIN: 2.38 / MAX: 499.50
  r3: 151.49  (SE +/- 11.09, N = 15)  Run Min/Avg/Max: 49.88 / 151.49 / 245.38   MIN: 2.37 / MAX: 499.75
  1. (CXX) g++ options: -O3 -rdynamic -lcrypto -lz -lrt -lpthread -lcurl -lfreetype -lSDL2 -lwavpack -lopusfile -lopus -logg -lGL -lX11 -lnotify -lgdk_pixbuf-2.0 -lgio-2.0 -lgobject-2.0 -lglib-2.0
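The index below also lists DDraceNetwork results reported as "Total Frame Time" rather than FPS; the two are reciprocals, so either can be derived from the other. A minimal sketch of the conversion:

```python
def fps_to_frame_time_ms(fps):
    """Average frame time in milliseconds for a given frames-per-second rate."""
    return 1000.0 / fps

# e.g. the r1 OpenGL 3.0 average of 170.36 FPS corresponds to about 5.87 ms per frame
print(round(fps_to_frame_time_ms(170.36), 2))
```

Note that this only relates the averages; the wide MIN/MAX spreads in the tables above mean individual frame times vary far more than the mean suggests.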

LevelDB

LevelDB is a key-value storage library developed by Google that supports Snappy data compression, among other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.22 - Benchmark: Random Read (Microseconds Per Op, Fewer Is Better):
  r1: 9.620  (SE +/- 0.250, N = 12)  Run Min/Avg/Max: 6.92 / 9.62 / 10.18
  r2: 9.692  (SE +/- 0.206, N = 15)  Run Min/Avg/Max: 7.19 / 9.69 / 10.25
  r3: 9.573  (SE +/- 0.214, N = 15)  Run Min/Avg/Max: 7.01 / 9.57 / 10.17
  1. (CXX) g++ options: -O3 -lsnappy -lpthread
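The "microseconds per op" metric above is total elapsed time divided by the number of random-read operations performed. A minimal sketch of how such a figure is measured, using an in-memory Python dict as a stand-in key-value store (not LevelDB itself, so the absolute numbers are not comparable):

```python
import random
import time

# Stand-in key-value store populated with 100k entries
store = {f"key{i:06d}": f"value{i}" for i in range(100_000)}
keys = list(store)

ops = 50_000
start = time.perf_counter()
for _ in range(ops):
    _ = store[random.choice(keys)]  # one random read per iteration
elapsed = time.perf_counter() - start

# Total wall time / operation count, scaled to microseconds
print(f"{elapsed / ops * 1e6:.3f} microseconds per random read")
```

The actual LevelDB db_bench tool measures the same ratio over on-disk reads, which is why its per-op times are dominated by block cache and storage behavior rather than hash lookups.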

253 Results Shown

DDraceNetwork:
  1920 x 1080 - Fullscreen - OpenGL 3.3 - Default - Multeasymap - Total Frame Time
  1920 x 1080 - Fullscreen - OpenGL 3.0 - Default - Multeasymap - Total Frame Time
CLOMP
TNN
Redis
ViennaCL
eSpeak-NG Speech Engine
RNNoise
ASTC Encoder
PlaidML
Monkey Audio Encoding
GraphicsMagick
Cryptsetup
TNN
LZ4 Compression
LevelDB
Redis
NCNN
Redis
Cryptsetup
NCNN
LZ4 Compression
Stockfish
Hashcat
LevelDB
oneDNN
Cryptsetup
LeelaChessZero
Timed Eigen Compilation
oneDNN
OpenVINO:
  Age Gender Recognition Retail 0013 FP16 - CPU
  Age Gender Recognition Retail 0013 FP32 - CPU
  Age Gender Recognition Retail 0013 FP32 - CPU
Mobile Neural Network
NCNN:
  CPU - resnet50
  Vulkan GPU - resnet18
SQLite Speedtest
Betsy GPU Compressor
clpeak
GraphicsMagick
Cryptsetup
Embree
Waifu2x-NCNN Vulkan
Warsow
DDraceNetwork
Embree
simdjson
oneDNN
Cryptsetup
asmFish
Rodinia
Redis
Cryptsetup
Unigine Superposition
simdjson
OpenVINO
NCNN
GROMACS
simdjson
Betsy GPU Compressor
Cryptsetup
GEGL
Hashcat
oneDNN
Mobile Neural Network
Timed MAFFT Alignment
NCNN
PHPBench
RealSR-NCNN
VkResample
Node.js V8 Web Tooling Benchmark
Crafty
Unigine Superposition
OpenVINO
Basis Universal
Cryptsetup
GEGL
Cryptsetup
OpenVINO
Cryptsetup:
  Twofish-XTS 256b Decryption
  Serpent-XTS 512b Decryption
oneDNN
TensorFlow Lite
ArrayFire
Cryptsetup
dav1d
GraphicsMagick
Unpacking Firefox
VkFFT
GEGL
ASTC Encoder
GEGL
clpeak
Cryptsetup
dav1d
cl-mem
Cryptsetup
Hashcat
Numpy Benchmark
NAMD CUDA
GEGL
LZ4 Compression
Hashcat
NCNN
Embree
Hashcat
Unigine Heaven
Blender
LevelDB
RealSR-NCNN
rav1e
Blender
LevelDB
oneDNN
NCNN
Timed Linux Kernel Compilation
Basis Universal
LevelDB
NCNN
rav1e
LevelDB
Blender
RedShift Demo
RawTherapee
Blender
IndigoBench
ASTC Encoder
Build2
IndigoBench
cl-mem
GEGL
Basis Universal
oneDNN
Unigine Superposition
GEGL
TensorFlow Lite
LevelDB
Darktable
NCNN
TensorFlow Lite
DDraceNetwork
oneDNN
Zstd Compression
Inkscape
LevelDB
oneDNN
MandelGPU
BRL-CAD
LZ4 Compression
Darktable
NCNN
DeepSpeech
LZ4 Compression:
  9 - Decompression Speed
  1 - Decompression Speed
OpenVINO
TensorFlow Lite:
  Inception V4
  Mobilenet Quant
dav1d
VkResample
PlaidML
Opus Codec Encoding
rav1e
Blender
rav1e
Mobile Neural Network
oneDNN
Coremark
GEGL
NCNN
GraphicsMagick
GEGL
AI Benchmark Alpha
Blender
NCNN
PlaidML
LevelDB
Unigine Superposition
NCNN
Embree
OpenVINO
Blender
NCNN
Timed FFmpeg Compilation
oneDNN
Basis Universal
High Performance Conjugate Gradient
GraphicsMagick
TensorFlow Lite
oneDNN
Darktable
yquake2
Blender
oneDNN
Zstd Compression
LevelDB
AI Benchmark Alpha
OctaneBench
cl-mem
oneDNN
Basis Universal
Blender
dav1d
FAHBench
PlaidML
NCNN
oneDNN
OpenVINO
Timed HMMer Search
clpeak
NCNN
clpeak
FinanceBench
AI Benchmark Alpha
OpenVINO:
  Person Detection 0106 FP32 - CPU
  Person Detection 0106 FP16 - CPU
  Face Detection 0106 FP16 - CPU
Darktable
GraphicsMagick:
  Enhanced
  Swirl
simdjson
yquake2:
  OpenGL 3.x - 1920 x 1080
  OpenGL 1.x - 1920 x 1080
LevelDB
NeatBench
Blender
NCNN:
  Vulkan GPU - regnety_400m
  Vulkan GPU - googlenet
  Vulkan GPU - blazeface
  Vulkan GPU - efficientnet-b0
  Vulkan GPU - mnasnet
  Vulkan GPU - shufflenet-v2
  Vulkan GPU-v3-v3 - mobilenet-v3
  Vulkan GPU-v2-v2 - mobilenet-v2
  CPU - efficientnet-b0
  CPU - mnasnet
  CPU - shufflenet-v2
  CPU-v3-v3 - mobilenet-v3
  CPU-v2-v2 - mobilenet-v2
Mobile Neural Network:
  MobileNetV2_224
  SqueezeNetV1.0
Redis
ASTC Encoder
oneDNN:
  Deconvolution Batch shapes_3d - u8s8f32 - CPU
  Deconvolution Batch shapes_3d - f32 - CPU
  IP Shapes 1D - u8s8f32 - CPU
LAMMPS Molecular Dynamics Simulator
LuxCoreRender OpenCL:
  Rainbow Colors and Prism
  LuxCore Benchmark
  Food
  DLSC
DDraceNetwork:
  1920 x 1080 - Fullscreen - OpenGL 3.3 - Default - RaiNyMore2
  1920 x 1080 - Fullscreen - OpenGL 3.0 - Default - RaiNyMore2
LevelDB