core-i7-10700t-pts-102-test-run-onlogic

Intel Core i7-10700T testing with a Logic Supply RXM-181 (Z01-0002A026 BIOS) and Intel UHD 630 3GB on Ubuntu 20.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2101103-HA-COREI710751
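A minimal terminal sketch of that comparison workflow, assuming the Phoronix Test Suite is installed and on the PATH (the result-file ID is the one published above; `system-info` is a standard PTS subcommand):

```shell
# Print local hardware/software details, useful before comparing:
phoronix-test-suite system-info

# Fetch this public result file and run the same tests locally,
# appending your system's numbers for side-by-side comparison:
phoronix-test-suite benchmark 2101103-HA-COREI710751
```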
This result file includes tests in the following categories:

Audio Encoding 3 Tests
AV1 3 Tests
Bioinformatics 2 Tests
Chess Test Suite 3 Tests
Timed Code Compilation 3 Tests
C/C++ Compiler Tests 13 Tests
Compression Tests 2 Tests
CPU Massive 21 Tests
Creator Workloads 23 Tests
Database Test Suite 3 Tests
Encoding 6 Tests
Fortran Tests 3 Tests
Game Development 4 Tests
HPC - High Performance Computing 16 Tests
Imaging 6 Tests
Common Kernel Benchmarks 2 Tests
Machine Learning 9 Tests
Molecular Dynamics 3 Tests
MPI Benchmarks 4 Tests
Multi-Core 20 Tests
NVIDIA GPU Compute 8 Tests
Intel oneAPI 2 Tests
OpenMPI Tests 4 Tests
Programmer / Developer System Benchmarks 10 Tests
Python 2 Tests
Renderers 3 Tests
Scientific Computing 6 Tests
Server 6 Tests
Server CPU Tests 12 Tests
Single-Threaded 6 Tests
Speech 3 Tests
Telephony 3 Tests
Texture Compression 3 Tests
Video Encoding 3 Tests
Vulkan Compute 5 Tests
Common Workstation Benchmarks 2 Tests


Test Runs

Run 1: January 04 2021 (test duration: 2 hours, 17 minutes)
Run 1a: January 05 2021 (test duration: 11 hours, 3 minutes)
Run 1b: January 05 2021 (test duration: 22 hours, 28 minutes)
Run 2: January 07 2021 (test duration: 22 hours, 53 minutes)
Run 3: January 08 2021 (test duration: 1 day, 10 hours, 16 minutes)


core-i7-10700t-pts-102-test-run-onlogic Benchmarks - System Details

Processor: Intel Core i7-10700T @ 4.50GHz (8 Cores / 16 Threads)
Motherboard: Logic Supply RXM-181 (Z01-0002A026 BIOS)
Chipset: Intel Comet Lake PCH
Memory: 32GB
Disk: 256GB TS256GMTS800
Graphics: Intel UHD 630 3GB (1200MHz) / i915drmfb (1200MHz)
Audio: Realtek ALC233
Monitor: DELL P2415Q
Network: Intel I219-LM + Intel I210
OS: Ubuntu 20.10
Kernel: 5.8.0-34-generic (x86_64)
Desktop: GNOME Shell 3.38.1
Display Server: X Server 1.20.9
Display Driver: modesetting 1.20.9
Vulkan: 1.2.145
Compiler: GCC 10.2.0
File-System: ext4
Screen Resolution: 1920x1080

System Logs:
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Disk: MQ-DEADLINE scheduler / mount options: errors=remount-ro,relatime,rw / Block Size: 4096
- Run 1, 1a, 1b: Scaling Governor: intel_pstate powersave - CPU Microcode: 0xc8 - Thermald 2.3
- Run 2, 3: Scaling Governor: intel_pstate powersave - CPU Microcode: 0xe0 - Thermald 2.3
- Python 3.8.6
- Security: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Result overview (relative performance across Run 1 / 1a / 1b / 2 / 3, spanning roughly 100% to 118%): VKMark, DDraceNetwork, yquake2, Warsow, High Performance Conjugate Gradient, LevelDB, VkFFT, Unpacking The Linux Kernel.


HPC Challenge

HPC Challenge (HPCC) is a cluster-focused benchmark consisting of the HPL Linpack TPP benchmark, DGEMM, STREAM, PTRANS, RandomAccess, FFT, and communication bandwidth and latency tests. This HPC Challenge test profile ships with standard yet versatile configuration/input files, though they can be modified. Learn more via the OpenBenchmarking.org test page.
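As a rough sketch (assuming the Phoronix Test Suite is installed), the HPCC test profile can also be run on its own rather than as part of this full result file; the profile prompts for test/class options such as G-HPL at run time:

```shell
# Install dependencies, build, and run just the HPC Challenge profile.
phoronix-test-suite benchmark hpcc
```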

HPC Challenge 1.5.0 - Test / Class: G-HPL (GFLOPS, more is better)

Run 1a: 40.54 (SE +/- 0.01, N = 3; min 40.52 / max 40.55)
Run 1b: 40.62 (SE +/- 0.09, N = 3; min 40.44 / max 40.74)
Run 2: 40.72 (SE +/- 0.11, N = 3; min 40.6 / max 40.95)
Run 3: 40.69 (SE +/- 0.04, N = 3; min 40.64 / max 40.77)

1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
2. ATLAS + Open MPI 4.0.3
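The SE figures reported alongside each run are standard errors of the mean, i.e. the sample standard deviation divided by sqrt(N). A quick check with awk, using three hypothetical sample values chosen to sit inside Run 1a's reported min/max range (these are not the actual recorded samples):

```shell
# Standard error of the mean: SE = stdev(samples) / sqrt(N).
# Sample values are hypothetical, picked inside the 40.52-40.55 range.
printf '40.52\n40.54\n40.55\n' | awk '
  { x[NR] = $1; sum += $1 }
  END {
    n = NR; mean = sum / n
    for (i = 1; i <= n; i++) ss += (x[i] - mean) ^ 2
    se = sqrt(ss / (n - 1)) / sqrt(n)   # sample stdev over sqrt(N)
    printf "mean=%.2f SE=%.3f\n", mean, se
  }'
# prints: mean=40.54 SE=0.009
```

An SE of roughly 0.01 on a mean near 40.5 is why these run-to-run differences are treated as noise in the overview.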

Warsow

This is a benchmark of Warsow, a popular open-source first-person shooter. This game uses the QFusion engine. Learn more via the OpenBenchmarking.org test page.

Warsow 2.5 Beta - Resolution: 1920 x 1080 (Frames Per Second, more is better)

Run 1: 85.4 (SE +/- 1.05, N = 3; min 83.3 / max 86.5)
Run 1a: 86.1 (SE +/- 0.82, N = 15; min 75.7 / max 87.5)
Run 1b: 84.5 (SE +/- 0.66, N = 15; min 75.8 / max 86)
Run 2: 87.3 (SE +/- 0.09, N = 3; min 87.1 / max 87.4)
Run 3: 83.4 (SE +/- 0.12, N = 3; min 83.2 / max 83.6)

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.
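A comparable headless Cycles render can be sketched with Blender's own command line; the .blend filename below is a placeholder, not one of the benchmark's sample files:

```shell
# Render frame 1 of a scene in background mode with the Cycles engine;
# timing the command approximates what this benchmark measures.
time blender -b scene.blend -E CYCLES -f 1
```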

Blender 2.90 - Blend File: Barbershop - Compute: CPU-Only (Seconds, fewer is better)

Run 1b: 1213.45 (SE +/- 1.86, N = 3; min 1209.75 / max 1215.67)
Run 2: 1214.95 (SE +/- 2.40, N = 3; min 1210.38 / max 1218.5)
Run 3: 1222.04 (SE +/- 1.30, N = 3; min 1220.19 / max 1224.54)

Blender 2.90 - Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds, fewer is better)

Run 1b: 1040.85 (SE +/- 2.02, N = 3; min 1036.83 / max 1043.13)
Run 2: 1034.66 (SE +/- 2.08, N = 3; min 1032.26 / max 1038.81)
Run 3: 1040.36 (SE +/- 0.72, N = 3; min 1039.64 / max 1041.79)

Blender 2.90 - Blend File: Classroom - Compute: CPU-Only (Seconds, fewer is better)

Run 1b: 943.69 (SE +/- 1.04, N = 3; min 941.61 / max 944.9)
Run 2: 939.24 (SE +/- 3.17, N = 3; min 934.66 / max 945.34)
Run 3: 950.06 (SE +/- 1.04, N = 3; min 948.58 / max 952.06)

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.
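The conversion being timed can be sketched with the basisu command-line tool; the input filename is a placeholder, and the exact RDO post-processing flags vary between Basis Universal releases, so only the core UASTC options are shown:

```shell
# Encode a PNG to a .basis asset in UASTC mode at quality level 2.
# RDO post-processing is enabled via additional -uastc_rdo_* flags,
# whose names differ across basisu versions.
basisu -uastc -uastc_level 2 input.png
```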

Basis Universal 1.12 - Settings: UASTC Level 2 + RDO Post-Processing (Seconds, fewer is better)

Run 1b: 916.62 (SE +/- 0.74, N = 3; min 915.53 / max 918.03)
Run 2: 918.05 (SE +/- 1.37, N = 3; min 915.64 / max 920.39)
Run 3: 919.83 (SE +/- 1.48, N = 3; min 917.34 / max 922.45)

1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

VkFFT

VkFFT is a Fast Fourier Transform (FFT) library that is GPU-accelerated by means of the Vulkan API. The VkFFT benchmark measures FFT performance across many different sizes before returning an overall benchmark score. Learn more via the OpenBenchmarking.org test page.

VkFFT 1.1.1 (Benchmark Score, more is better)

Run 1: 1537 (SE +/- 1.73, N = 3; min 1534 / max 1540)
Run 1a: 1548 (SE +/- 1.33, N = 3; min 1547 / max 1551)
Run 1b: 1549 (SE +/- 0.58, N = 3; min 1548 / max 1550)
Run 2: 1551 (SE +/- 3.18, N = 3; min 1545 / max 1556)
Run 3: 1549 (SE +/- 3.33, N = 3; min 1542 / max 1552)

1. (CXX) g++ options: -O3 -pthread

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient and lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: inception-v3 (ms, fewer is better)

Run 1b: 60.25 (SE +/- 0.16, N = 11; min 59.24 / max 61.17; MIN: 57.66 / MAX: 110.77)
Run 2: 59.84 (SE +/- 0.16, N = 11; min 59.26 / max 61.11; MIN: 57.66 / MAX: 67.14)
Run 3: 60.45 (SE +/- 0.18, N = 11; min 59.77 / max 61.48; MIN: 57.91 / MAX: 112.87)

1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2020-09-17 - Model: mobilenet-v1-1.0 (ms, fewer is better)

Run 1b: 10.25 (SE +/- 0.01, N = 11; min 10.19 / max 10.31; MIN: 9.88 / MAX: 11.56)
Run 2: 10.23 (SE +/- 0.01, N = 11; min 10.16 / max 10.27; MIN: 9.89 / MAX: 14.56)
Run 3: 10.28 (SE +/- 0.01, N = 11; min 10.19 / max 10.34; MIN: 9.91 / MAX: 25)

Mobile Neural Network 2020-09-17 - Model: MobileNetV2_224 (ms, fewer is better)

Run 1b: 4.336 (SE +/- 0.094, N = 11; min 3.41 / max 4.49; MIN: 3.32 / MAX: 6.06)
Run 2: 4.278 (SE +/- 0.088, N = 11; min 3.4 / max 4.4; MIN: 3.29 / MAX: 6.03)
Run 3: 4.358 (SE +/- 0.095, N = 11; min 3.41 / max 4.54; MIN: 3.29 / MAX: 7.64)

Mobile Neural Network 2020-09-17 - Model: resnet-v2-50 (ms, fewer is better)

Run 1b: 56.18 (SE +/- 0.14, N = 11; min 55.16 / max 56.82; MIN: 41.51 / MAX: 130.73)
Run 2: 56.07 (SE +/- 0.10, N = 11; min 55.62 / max 56.88; MIN: 54.43 / MAX: 71.23)
Run 3: 56.43 (SE +/- 0.10, N = 11; min 56.13 / max 57.24; MIN: 42.04 / MAX: 70.73)

Mobile Neural Network 2020-09-17 - Model: SqueezeNetV1.0 (ms, fewer is better)

Run 1b: 9.110 (SE +/- 0.240, N = 11; min 6.71 / max 9.44; MIN: 5.88 / MAX: 25.72)
Run 2: 9.012 (SE +/- 0.297, N = 11; min 6.05 / max 9.45; MIN: 6.02 / MAX: 44.6)
Run 3: 9.159 (SE +/- 0.243, N = 11; min 6.74 / max 9.5; MIN: 5.95 / MAX: 26.63)

Appleseed

Appleseed is an open-source, physically-based global illumination rendering engine primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Emily (Seconds, fewer is better)

Run 1b: 753.47
Run 2: 749.30
Run 3: 754.58

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2 - Device AI Score (Score, more is better)

Run 1b: 1592
Run 2: 1606
Run 3: 1598

AI Benchmark Alpha 0.1.2 - Device Training Score (Score, more is better)

Run 1b: 851
Run 2: 856
Run 3: 852

AI Benchmark Alpha 0.1.2 - Device Inference Score (Score, more is better)

Run 1b: 741
Run 2: 750
Run 3: 746

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.
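An exhaustive-preset compression like the one timed here can be sketched with the astcenc CLI (filenames are placeholders):

```shell
# Compress an LDR PNG to ASTC with 6x6 blocks using the slowest,
# highest-quality search preset; -exhaustive is what this test exercises.
astcenc -cl input.png output.astc 6x6 -exhaustive
```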

ASTC Encoder 2.0 - Preset: Exhaustive (Seconds, fewer is better)

Run 1a: 433.56 (SE +/- 1.14, N = 3; min 431.3 / max 434.93)
Run 1b: 434.61 (SE +/- 0.62, N = 3; min 433.39 / max 435.45)
Run 2: 431.10 (SE +/- 0.66, N = 3; min 429.94 / max 432.24)
Run 3: 436.00 (SE +/- 0.61, N = 3; min 434.86 / max 436.93)

1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.90 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, fewer is better)

Run 1b: 426.25 (SE +/- 0.19, N = 3; min 425.91 / max 426.55)
Run 2: 426.65 (SE +/- 1.17, N = 3; min 424.43 / max 428.37)
Run 3: 427.15 (SE +/- 1.15, N = 3; min 424.89 / max 428.59)

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with a built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.30.8 - VGR Performance Metric (more is better)

Run 1b: 66088
Run 2: 66500
Run 3: 66208

1. (CXX) g++ options: -std=c++11 -pipe -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -rdynamic -lSM -lICE -lXi -lGLU -lGL -lGLdispatch -lX11 -lXext -lXrender -lpthread -ldl -luuid -lm

Appleseed

Appleseed is an open-source, physically-based global illumination rendering engine primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Disney Material (Seconds, fewer is better)

Run 1b: 472.09
Run 2: 471.07
Run 3: 472.28

Stockfish

This is a test of Stockfish, an advanced open-source chess engine written in C++11 that can scale up to 128 CPU cores. Learn more via the OpenBenchmarking.org test page.
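Outside the test harness, Stockfish's built-in bench command reports a comparable nodes-per-second figure; a hedged sketch, assuming a stockfish binary is installed:

```shell
# Run Stockfish's built-in benchmark with default settings; the results
# summary (node count and nodes/second) is printed on stderr.
stockfish bench 2>&1 | tail -n 3
```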

Stockfish 12 - Total Time (Nodes Per Second, more is better)

Run 1a: 10137537 (SE +/- 104694.60, N = 3; min 9946731 / max 10307624)
Run 1b: 10100945 (SE +/- 117560.64, N = 3; min 9887620 / max 10293228)
Run 2: 10178167 (SE +/- 70000.38, N = 15; min 9701672 / max 10658729)
Run 3: 10150102 (SE +/- 88362.24, N = 15; min 9720488 / max 11041699)

1. (CXX) g++ options: -m64 -lpthread -fno-exceptions -std=c++17 -pedantic -O3 -msse -msse3 -mpopcnt -msse4.1 -mssse3 -msse2 -flto -flto=jobserver

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.90 - Blend File: BMW27 - Compute: CPU-Only (Seconds, fewer is better)

Run 1b: 302.71 (SE +/- 1.18, N = 3; min 300.9 / max 304.94)
Run 2: 303.08 (SE +/- 0.43, N = 3; min 302.3 / max 303.78)
Run 3: 303.94 (SE +/- 1.00, N = 3; min 302.25 / max 305.72)

Appleseed

Appleseed is an open-source, physically-based global illumination rendering engine primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Material Tester (Seconds, fewer is better)

Run 1b: 424.26
Run 2: 426.12
Run 3: 423.98

GROMACS

This is a test of the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package on the CPU with the water_GMX50 data. Learn more via the OpenBenchmarking.org test page.
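The underlying run can be sketched with GROMACS' own mdrun; the .tpr input name below is a placeholder standing in for the prepared water_GMX50 system:

```shell
# Run a short MD simulation on the CPU with 16 threads; mdrun prints
# the achieved ns/day figure in its performance summary.
gmx mdrun -s water.tpr -nt 16 -nsteps 1000
```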

GROMACS 2020.3 - Water Benchmark (Ns Per Day, more is better)

Run 1a: 0.650 (SE +/- 0.003, N = 3; min 0.65 / max 0.66)
Run 1b: 0.645 (SE +/- 0.004, N = 3; min 0.64 / max 0.65)
Run 2: 0.646 (SE +/- 0.006, N = 3; min 0.64 / max 0.66)
Run 3: 0.644 (SE +/- 0.006, N = 3; min 0.63 / max 0.65)

1. (CXX) g++ options: -O3 -pthread -lrt -lpthread -lm

asmFish

This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.

asmFish 2018-07-23 - 1024 Hash Memory, 26 Depth (Nodes/second, More Is Better)
Run 1a: 16569655 (SE +/- 155097.17, N = 3; Min 16265588 / Max 16774818)
Run 1b: 16349428 (SE +/- 160397.25, N = 3; Min 16135941 / Max 16663534)
Run 2: 16310305 (SE +/- 76415.87, N = 3; Min 16228177 / Max 16462991)
Run 3: 16562018 (SE +/- 48911.26, N = 3; Min 16501771 / Max 16658885)

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code and offers Cargo-like functionality. Learn more via the OpenBenchmarking.org test page.

Build2 0.13 - Time To Compile (Seconds, Fewer Is Better)
Run 1a: 251.47 (SE +/- 0.75, N = 3; Min 250.08 / Max 252.66)
Run 1b: 253.21 (SE +/- 1.48, N = 3; Min 250.62 / Max 255.76)
Run 2: 252.13 (SE +/- 0.94, N = 3; Min 250.5 / Max 253.75)
Run 3: 253.19 (SE +/- 0.31, N = 3; Min 252.58 / Max 253.59)

Numpy Benchmark

This is a test to obtain the general Numpy performance. Learn more via the OpenBenchmarking.org test page.
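The idea behind this kind of score is timing common NumPy operations repeatedly and keeping the best run. A minimal sketch of that pattern (this is not the actual Numpy Benchmark harness, and it assumes NumPy is installed):

```python
import time
import numpy as np

def best_time_ms(op, repeats=5):
    """Time a NumPy operation several times; report the best run in ms."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        op()
        best = min(best, time.perf_counter() - t0)
    return best * 1e3

rng = np.random.default_rng(0)
a = rng.standard_normal((500, 500))

dot_ms = best_time_ms(lambda: a @ a)                               # BLAS-backed matmul
svd_ms = best_time_ms(lambda: np.linalg.svd(a, compute_uv=False))  # singular values only
```

Taking the best of several repeats reduces the influence of one-off scheduler noise, which matters on a system like this where background load can vary between runs.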

Numpy Benchmark (Score, More Is Better)
Run 1a: 322.78 (SE +/- 0.74, N = 3; Min 321.29 / Max 323.6)
Run 1b: 332.74 (SE +/- 0.43, N = 3; Min 332.02 / Max 333.5)
Run 2: 334.78 (SE +/- 0.29, N = 3; Min 334.24 / Max 335.22)
Run 3: 326.32 (SE +/- 0.19, N = 3; Min 326.02 / Max 326.66)

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient benchmark, a scientific benchmark from Sandia National Laboratories focused on supercomputer testing with more modern, real-world workloads than HPCC. Learn more via the OpenBenchmarking.org test page.

High Performance Conjugate Gradient 3.1 (GFLOP/s, More Is Better)
Run 1: 3.90470 (SE +/- 0.00314, N = 3; Min 3.9 / Max 3.91)
Run 1a: 3.90675 (SE +/- 0.00494, N = 3; Min 3.9 / Max 3.92)
Run 1b: 3.90730 (SE +/- 0.00554, N = 3; Min 3.9 / Max 3.92)
Run 2: 3.92001 (SE +/- 0.00656, N = 3; Min 3.91 / Max 3.93)
Run 3: 3.99501 (SE +/- 0.04777, N = 4; Min 3.93 / Max 4.13)
1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -pthread -lmpi_cxx -lmpi

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.
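Throughput at a given compression level boils down to bytes in divided by wall time. As a sketch of that measurement, using Python's stdlib zlib as a stand-in for the zstd CLI (the payload and levels here are illustrative, not the Ubuntu ISO or zstd level 19):

```python
import time
import zlib

def compress_speed(data, level):
    """Return (throughput in MB/s, compressed size) at the given zlib level."""
    t0 = time.perf_counter()
    out = zlib.compress(data, level)
    dt = time.perf_counter() - t0
    return len(data) / dt / 1e6, len(out)

# ~10 MB of highly regular, compressible data
payload = bytes(range(256)) * 40_000

fast_mbps, fast_size = compress_speed(payload, 1)  # fast level
slow_mbps, slow_size = compress_speed(payload, 9)  # thorough level
```

The trade-off the Level 19 result above captures is the same: higher levels spend far more CPU time per input byte in exchange for a smaller output.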

Zstd Compression 1.4.5 - Compression Level: 19 (MB/s, More Is Better)
Run 1a: 34.2 (SE +/- 0.33, N = 6; Min 33.6 / Max 35.7)
Run 1b: 34.1 (SE +/- 0.32, N = 6; Min 33.6 / Max 35.7)
Run 2: 34.1 (SE +/- 0.29, N = 8; Min 33.2 / Max 35.9)
Run 3: 34.0 (SE +/- 0.31, N = 7; Min 33.4 / Max 35.7)
1. (CC) gcc options: -O3 -pthread -lz -llzma

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer, Model: Crown (Frames Per Second, More Is Better)
Run 1a: 6.3297 (SE +/- 0.0695, N = 5; Min 6.21 / Max 6.6; frame MIN 6.04 / MAX 9.65)
Run 1b: 6.3174 (SE +/- 0.0656, N = 5; Min 6.22 / Max 6.58; frame MIN 6.05 / MAX 9.68)
Run 2: 6.2889 (SE +/- 0.0588, N = 6; Min 6.18 / Max 6.57; frame MIN 6.01 / MAX 9.69)
Run 3: 6.3243 (SE +/- 0.0681, N = 5; Min 6.21 / Max 6.58; frame MIN 6.04 / MAX 9.74)

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.1 - Video Input: Chimera 1080p 10-bit (FPS, More Is Better)
Run 1a: 87.72 (SE +/- 0.35, N = 3; Min 87.27 / Max 88.41; frame MIN 52.85 / MAX 231.56)
Run 1b: 87.51 (SE +/- 0.46, N = 3; Min 86.99 / Max 88.43; frame MIN 52.76 / MAX 232.01)
Run 2: 87.69 (SE +/- 0.42, N = 3; Min 87.12 / Max 88.51; frame MIN 52.93 / MAX 230.34)
Run 3: 87.52 (SE +/- 0.31, N = 3; Min 87.08 / Max 88.12; frame MIN 52.63 / MAX 234.52)
1. (CC) gcc options: -pthread

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: Inception ResNet V2 (Microseconds, Fewer Is Better)
Run 1a: 4681217 (SE +/- 9712.72, N = 3; Min 4662300 / Max 4694500)
Run 1b: 4695160 (SE +/- 11604.74, N = 3; Min 4672060 / Max 4708660)
Run 2: 4672843 (SE +/- 11385.13, N = 3; Min 4650090 / Max 4684980)
Run 3: 4688170 (SE +/- 16050.25, N = 3; Min 4657530 / Max 4711780)

TensorFlow Lite 2020-08-23 - Model: Inception V4 (Microseconds, Fewer Is Better)
Run 1a: 5207003 (SE +/- 9895.14, N = 3; Min 5189480 / Max 5223730)
Run 1b: 5203603 (SE +/- 12940.29, N = 3; Min 5177730 / Max 5217070)
Run 2: 5178367 (SE +/- 11121.29, N = 3; Min 5156330 / Max 5192000)
Run 3: 5198767 (SE +/- 10680.29, N = 3; Min 5178080 / Max 5213720)

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14 - ATPase Simulation, 327,506 Atoms (days/ns, Fewer Is Better)
Run 1a: 3.21755 (SE +/- 0.00480, N = 3; Min 3.21 / Max 3.23)
Run 1b: 3.22224 (SE +/- 0.00521, N = 3; Min 3.22 / Max 3.23)
Run 2: 3.20420 (SE +/- 0.00784, N = 3; Min 3.19 / Max 3.22)
Run 3: 3.21383 (SE +/- 0.00382, N = 3; Min 3.21 / Max 3.22)

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.7.3 - Encoder Speed: 0 (Seconds, Fewer Is Better)
Run 1a: 150.33 (SE +/- 0.14, N = 3; Min 150.19 / Max 150.61)
Run 1b: 149.99 (SE +/- 0.07, N = 3; Min 149.85 / Max 150.09)
Run 2: 149.82 (SE +/- 0.64, N = 3; Min 148.67 / Max 150.89)
Run 3: 150.65 (SE +/- 0.42, N = 3; Min 149.86 / Max 151.3)
1. (CXX) g++ options: -O3 -fPIC

VKMark

VKMark is a collection of Vulkan tests/benchmarks. Learn more via the OpenBenchmarking.org test page.

VKMark 2020-05-21 - Resolution: 1920 x 1080 (VKMark Score, More Is Better)
Run 1: 1049 (SE +/- 3.61, N = 3; Min 1044 / Max 1056)
Run 1a: 1057 (SE +/- 4.16, N = 3; Min 1051 / Max 1065)
Run 1b: 1054 (SE +/- 3.61, N = 3; Min 1049 / Max 1061)
Run 2: 1060 (SE +/- 1.76, N = 3; Min 1057 / Max 1063)
Run 3: 858 (SE +/- 1.76, N = 3; Min 855 / Max 861)
1. (CXX) g++ options: -pthread -ldl -pipe -std=c++14 -MD -MQ -MF

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer ISPC, Model: Asian Dragon Obj (Frames Per Second, More Is Better)
Run 1a: 8.4302 (SE +/- 0.0799, N = 3; Min 8.3 / Max 8.58; frame MIN 8.03 / MAX 11.8)
Run 1b: 8.2867 (SE +/- 0.0700, N = 8; Min 7.98 / Max 8.51; frame MIN 7.7 / MAX 11.85)
Run 2: 8.4226 (SE +/- 0.0325, N = 3; Min 8.37 / Max 8.48; frame MIN 8.09 / MAX 11.74)
Run 3: 8.4439 (SE +/- 0.0571, N = 3; Min 8.37 / Max 8.56; frame MIN 8.11 / MAX 11.74)

Timed HMMer Search

This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of the Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.

Timed HMMer Search 3.3.1 - Pfam Database Search (Seconds, Fewer Is Better)
Run 1a: 123.56 (SE +/- 0.03, N = 3; Min 123.52 / Max 123.62)
Run 1b: 123.57 (SE +/- 0.04, N = 3; Min 123.5 / Max 123.63)
Run 2: 123.57 (SE +/- 0.06, N = 3; Min 123.5 / Max 123.69)
Run 3: 123.60 (SE +/- 0.07, N = 3; Min 123.52 / Max 123.75)
1. (CC) gcc options: -O3 -pthread -lhmmer -leasel -lm

DDraceNetwork

DDraceNetwork 15.2.3 - 1920 x 1080, Fullscreen, OpenGL 3.3, Zoom: Default, Demo: Multeasymap - Total Frame Time (Milliseconds, Fewer Is Better)
Run 1: Min 3.25 / Avg 4.06 / Max 12.43
Run 1a: Min 3.42 / Avg 4.43 / Max 16.5
Run 1b: Min 3.45 / Avg 4.48 / Max 18.22
Run 2: Min 2.7 / Avg 4.03 / Max 18.08
Run 3: Min 3.55 / Avg 4.43 / Max 15.22
1. (CXX) g++ options: -O3 -rdynamic -lcrypto -lz -lrt -lpthread -lcurl -lfreetype -lSDL2 -lwavpack -lopusfile -lopus -logg -lGL -lX11 -lnotify -lgdk_pixbuf-2.0 -lgio-2.0 -lgobject-2.0 -lglib-2.0

DDraceNetwork 15.2.3 - 1920 x 1080, Fullscreen, OpenGL 3.3, Zoom: Default, Demo: Multeasymap (Frames Per Second, More Is Better)
Run 1: 244.18 (SE +/- 1.81, N = 15; Min 225.26 / Max 248.21; frame MIN 18.54 / MAX 406.67)
Run 1a: 226.76 (SE +/- 0.91, N = 3; Min 225.44 / Max 228.49; frame MIN 55.97 / MAX 331.56)
Run 1b: 227.64 (SE +/- 1.85, N = 3; Min 224.08 / Max 230.28; frame MIN 54.87 / MAX 364.83)
Run 2: 250.59 (SE +/- 1.18, N = 3; Min 248.92 / Max 252.88; frame MIN 55.32 / MAX 399.2)
Run 3: 228.29 (SE +/- 0.99, N = 3; Min 226.64 / Max 230.07; frame MIN 58.21 / MAX 356.63)
1. (CXX) g++ options: -O3 -rdynamic -lcrypto -lz -lrt -lpthread -lcurl -lfreetype -lSDL2 -lwavpack -lopusfile -lopus -logg -lGL -lX11 -lnotify -lgdk_pixbuf-2.0 -lgio-2.0 -lgobject-2.0 -lglib-2.0

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12 - Settings: UASTC Level 3 (Seconds, Fewer Is Better)
Run 1a: 108.38 (SE +/- 0.38, N = 3; Min 107.63 / Max 108.89)
Run 1b: 108.34 (SE +/- 0.38, N = 3; Min 107.58 / Max 108.73)
Run 2: 108.08 (SE +/- 0.37, N = 3; Min 107.35 / Max 108.57)
Run 3: 108.81 (SE +/- 0.25, N = 3; Min 108.31 / Max 109.14)
1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

DDraceNetwork

DDraceNetwork 15.2.3 - 1920 x 1080, Fullscreen, OpenGL 3.3, Zoom: Default, Demo: RaiNyMore2 - Total Frame Time (Milliseconds, Fewer Is Better)
Run 1: Min 7.47 / Avg 8.59 / Max 29.55
Run 1a: Min 7.42 / Avg 8.52 / Max 15.06
Run 1b: Min 7.24 / Avg 8.55 / Max 14.37
Run 2: Min 7.53 / Avg 8.04 / Max 17.18
Run 3: Min 7.63 / Avg 8.57 / Max 15.16
1. (CXX) g++ options: -O3 -rdynamic -lcrypto -lz -lrt -lpthread -lcurl -lfreetype -lSDL2 -lwavpack -lopusfile -lopus -logg -lGL -lX11 -lnotify -lgdk_pixbuf-2.0 -lgio-2.0 -lgobject-2.0 -lglib-2.0

DDraceNetwork 15.2.3 - 1920 x 1080, Fullscreen, OpenGL 3.3, Zoom: Default, Demo: RaiNyMore2 (Frames Per Second, More Is Better)
Run 1: 116.62 (SE +/- 0.07, N = 3; Min 116.53 / Max 116.76; frame MIN 33.84 / MAX 133.83)
Run 1a: 117.08 (SE +/- 0.43, N = 3; Min 116.22 / Max 117.55; frame MIN 66.16 / MAX 134.84)
Run 1b: 118.10 (SE +/- 1.20, N = 5; Min 114.96 / Max 120.99; frame MIN 68.79 / MAX 149.7)
Run 2: 124.88 (SE +/- 0.96, N = 12; Min 117.01 / Max 132.46; frame MIN 17.42 / MAX 499.5)
Run 3: 116.70 (SE +/- 0.54, N = 3; Min 115.73 / Max 117.58; frame MIN 65.97 / MAX 143.66)
1. (CXX) g++ options: -O3 -rdynamic -lcrypto -lz -lrt -lpthread -lcurl -lfreetype -lSDL2 -lwavpack -lopusfile -lopus -logg -lGL -lX11 -lnotify -lgdk_pixbuf-2.0 -lgio-2.0 -lgobject-2.0 -lglib-2.0

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer, Model: Asian Dragon Obj (Frames Per Second, More Is Better)
Run 1a: 7.3894 (SE +/- 0.0245, N = 3; Min 7.36 / Max 7.44; frame MIN 7.06 / MAX 10.52)
Run 1b: 7.3239 (SE +/- 0.0597, N = 3; Min 7.25 / Max 7.44; frame MIN 6.98 / MAX 10.47)
Run 2: 7.3454 (SE +/- 0.0610, N = 3; Min 7.26 / Max 7.46; frame MIN 6.99 / MAX 10.56)
Run 3: 7.3004 (SE +/- 0.0506, N = 3; Min 7.24 / Max 7.4; frame MIN 6.97 / MAX 10.49)

VkResample

VkResample is a Vulkan-based image upscaling library based on VkFFT. The sample input file is upscaling a 4K image to 8K using Vulkan-based GPU acceleration. Learn more via the OpenBenchmarking.org test page.

VkResample 1.0 - Upscale: 2x, Precision: Double (ms, Fewer Is Better)
Run 1: 903.09 (SE +/- 2.75, N = 3; Min 899.83 / Max 908.56)
Run 2: 901.97 (SE +/- 0.30, N = 3; Min 901.45 / Max 902.5)
Run 3: 901.52 (SE +/- 0.30, N = 3; Min 901.13 / Max 902.11)
1. (CXX) g++ options: -O3 -pthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Recurrent Neural Network Training, Data Type: bf16bf16bf16, Engine: CPU (ms, Fewer Is Better)
Run 1a: 6727.43 (SE +/- 11.87, N = 3; Min 6706.45 / Max 6747.53; MIN 6640.48)
Run 1b: 6704.95 (SE +/- 5.44, N = 3; Min 6698.52 / Max 6715.77; MIN 6624.09)
Run 2: 6723.58 (SE +/- 18.76, N = 3; Min 6694.2 / Max 6758.48; MIN 6619.72)
Run 3: 6734.19 (SE +/- 4.99, N = 3; Min 6728.67 / Max 6744.16; MIN 6656.17)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Recurrent Neural Network Training, Data Type: u8s8f32, Engine: CPU (ms, Fewer Is Better)
Run 1a: 6723.11 (SE +/- 1.23, N = 3; Min 6721.51 / Max 6725.53; MIN 6653.37)
Run 1b: 6708.17 (SE +/- 8.88, N = 3; Min 6690.68 / Max 6719.55; MIN 6629.22)
Run 2: 6710.46 (SE +/- 12.12, N = 3; Min 6687.3 / Max 6728.25; MIN 6628.03)
Run 3: 6747.01 (SE +/- 5.23, N = 3; Min 6737.13 / Max 6754.93; MIN 6672.19)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Recurrent Neural Network Training, Data Type: f32, Engine: CPU (ms, Fewer Is Better)
Run 1a: 6687.00 (SE +/- 5.28, N = 3; Min 6676.83 / Max 6694.55; MIN 6607.87)
Run 1b: 6677.78 (SE +/- 15.15, N = 3; Min 6653.99 / Max 6705.92; MIN 6591.21)
Run 2: 6711.80 (SE +/- 8.18, N = 3; Min 6703.43 / Max 6728.15; MIN 6631.94)
Run 3: 6732.12 (SE +/- 6.15, N = 3; Min 6725 / Max 6744.36; MIN 6659.13)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

Timed FFmpeg Compilation 4.2.2 - Time To Compile (Seconds, Fewer Is Better)
Run 1a: 94.84 (SE +/- 0.64, N = 3; Min 93.56 / Max 95.52)
Run 1b: 95.00 (SE +/- 0.83, N = 3; Min 93.36 / Max 95.94)
Run 2: 94.49 (SE +/- 0.74, N = 3; Min 93.01 / Max 95.24)
Run 3: 94.66 (SE +/- 0.81, N = 3; Min 93.05 / Max 95.62)

Timed Eigen Compilation

This test times how long it takes to build all Eigen examples. The Eigen examples are compiled serially. Eigen is a C++ template library for linear algebra. Learn more via the OpenBenchmarking.org test page.

Timed Eigen Compilation 3.3.9 - Time To Compile (Seconds, Fewer Is Better)
Run 1a: 92.92 (SE +/- 0.03, N = 3; Min 92.87 / Max 92.95)
Run 1b: 91.20 (SE +/- 0.01, N = 3; Min 91.18 / Max 91.23)
Run 2: 90.88 (SE +/- 0.01, N = 3; Min 90.86 / Max 90.9)
Run 3: 92.96 (SE +/- 0.03, N = 3; Min 92.93 / Max 93.02)

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer ISPC, Model: Crown (Frames Per Second, More Is Better)
Run 1a: 7.3215 (SE +/- 0.0990, N = 3; Min 7.19 / Max 7.51; frame MIN 6.97 / MAX 11.24)
Run 1b: 7.3397 (SE +/- 0.0276, N = 3; Min 7.3 / Max 7.39; frame MIN 6.98 / MAX 11.33)
Run 2: 7.3367 (SE +/- 0.0612, N = 3; Min 7.26 / Max 7.46; frame MIN 7.03 / MAX 11.3)
Run 3: 7.2718 (SE +/- 0.0763, N = 4; Min 7.18 / Max 7.5; frame MIN 6.97 / MAX 11.24)

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.7.3 - Encoder Speed: 2 (Seconds, Fewer Is Better)
Run 1a: 88.61 (SE +/- 0.50, N = 3; Min 87.66 / Max 89.35)
Run 1b: 88.15 (SE +/- 0.40, N = 3; Min 87.36 / Max 88.58)
Run 2: 88.03 (SE +/- 0.35, N = 3; Min 87.35 / Max 88.53)
Run 3: 88.35 (SE +/- 0.63, N = 3; Min 87.2 / Max 89.38)
1. (CXX) g++ options: -O3 -fPIC

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer ISPC, Model: Asian Dragon (Frames Per Second, More Is Better)
Run 1a: 9.4099 (SE +/- 0.1039, N = 5; Min 9.25 / Max 9.81; frame MIN 9.01 / MAX 13.72)
Run 1b: 9.4586 (SE +/- 0.0348, N = 3; Min 9.39 / Max 9.51; frame MIN 8.99 / MAX 13.27)
Run 2: 9.3651 (SE +/- 0.0457, N = 3; Min 9.3 / Max 9.45; frame MIN 8.96 / MAX 13.15)
Run 3: 9.3850 (SE +/- 0.1033, N = 5; Min 9.21 / Max 9.77; frame MIN 8.99 / MAX 13.8)

CLOMP

CLOMP is the C version of the Livermore OpenMP benchmark developed to measure OpenMP overheads and other performance impacts due to threading in order to influence future system designs. This particular test profile configuration is currently set to look at the OpenMP static schedule speed-up across all available CPU cores using the recommended test configuration. Learn more via the OpenBenchmarking.org test page.
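A speedup figure of this kind is simply serial wall time divided by parallel wall time for the same total work. As a rough sketch of the measurement (not the CLOMP benchmark itself; it uses Python processes rather than OpenMP threads, and the kernel and chunk counts are made up):

```python
import time
from concurrent.futures import ProcessPoolExecutor

def burn(n):
    """CPU-bound kernel standing in for one loop chunk."""
    s = 0.0
    for i in range(n):
        s += (i % 7) * 0.5
    return s

def measure_speedup(chunks=8, work=200_000):
    # Serial baseline: run every chunk on one core
    t0 = time.perf_counter()
    for _ in range(chunks):
        burn(work)
    serial = time.perf_counter() - t0
    # Static-style split: hand each worker a fixed share of the chunks
    t0 = time.perf_counter()
    with ProcessPoolExecutor() as pool:
        list(pool.map(burn, [work] * chunks))
    parallel = time.perf_counter() - t0
    return serial / parallel

if __name__ == "__main__":
    print(f"speedup: {measure_speedup():.2f}x")
```

As with CLOMP, the measured speedup stays below the core count because work distribution and synchronization carry overhead of their own.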

CLOMP 1.2 - Static OMP Speedup (Speedup, More Is Better)
Run 1a: 5.8 (SE +/- 0.03, N = 3; Min 5.7 / Max 5.8)
Run 1b: 5.9 (SE +/- 0.04, N = 15; Min 5.6 / Max 6.1)
Run 2: 5.8 (SE +/- 0.06, N = 3; Min 5.7 / Max 5.9)
Run 3: 6.1 (SE +/- 0.07, N = 3; Min 6 / Max 6.2)
1. (CC) gcc options: -fopenmp -O3 -lm

Node.js V8 Web Tooling Benchmark

Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, such as Babel, TypeScript, and Babylon. This test profile can test the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.

Node.js V8 Web Tooling Benchmark (runs/s, More Is Better)
Run 1a: 10.29 (SE +/- 0.07, N = 3; Min 10.18 / Max 10.43)
Run 1b: 10.28 (SE +/- 0.07, N = 3; Min 10.2 / Max 10.42)
Run 2: 10.35 (SE +/- 0.04, N = 3; Min 10.29 / Max 10.43)
Run 3: 10.10 (SE +/- 0.08, N = 3; Min 9.98 / Max 10.24)
1. Nodejs v12.18.2

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Recurrent Neural Network Inference, Data Type: bf16bf16bf16, Engine: CPU (ms, Fewer Is Better)
Run 1a: 3540.21 (SE +/- 4.40, N = 3; Min 3533.08 / Max 3548.24; MIN 3482.35)
Run 1b: 3526.25 (SE +/- 5.93, N = 3; Min 3519.18 / Max 3538.03; MIN 3462.24)
Run 2: 3538.74 (SE +/- 4.85, N = 3; Min 3533.18 / Max 3548.41; MIN 3476.2)
Run 3: 3567.13 (SE +/- 3.18, N = 3; Min 3561.65 / Max 3572.66; MIN 3506.88)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Recurrent Neural Network Inference, Data Type: u8s8f32, Engine: CPU (ms, Fewer Is Better)
Run 1a: 3520.71 (SE +/- 5.45, N = 3; Min 3511.29 / Max 3530.17; MIN 3458.62)
Run 1b: 3530.44 (SE +/- 8.21, N = 3; Min 3516.45 / Max 3544.87; MIN 3460.28)
Run 2: 3535.41 (SE +/- 4.38, N = 3; Min 3526.91 / Max 3541.49; MIN 3473.28)
Run 3: 3568.59 (SE +/- 9.81, N = 3; Min 3556.7 / Max 3588.05; MIN 3506.37)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Recurrent Neural Network Inference, Data Type: f32, Engine: CPU (ms, Fewer Is Better)
Run 1a: 3539.00 (SE +/- 4.33, N = 3; Min 3532.41 / Max 3547.15; MIN 3479.17)
Run 1b: 3515.93 (SE +/- 1.76, N = 3; Min 3512.88 / Max 3518.99; MIN 3459.28)
Run 2: 3540.26 (SE +/- 8.77, N = 3; Min 3528.76 / Max 3557.47; MIN 3481.53)
Run 3: 3564.37 (SE +/- 8.39, N = 3; Min 3551.32 / Max 3580.04; MIN 3502.09)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

RawTherapee

RawTherapee is a cross-platform, open-source multi-threaded RAW image processing program. Learn more via the OpenBenchmarking.org test page.

RawTherapee - Total Benchmark Time (Seconds, Fewer Is Better)
Run 1b: 79.72 (SE +/- 0.45, N = 3; Min 78.83 / Max 80.17)
Run 2: 79.88 (SE +/- 0.40, N = 3; Min 79.09 / Max 80.29)
Run 3: 79.84 (SE +/- 0.39, N = 3; Min 79.08 / Max 80.33)
1. RawTherapee, version 5.8, command line.

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Sharpen (Iterations Per Minute, More Is Better)
Run 1a: 75 (SE +/- 1.00, N = 3; Min 74 / Max 77)
Run 1b: 74 (SE +/- 0.77, N = 5; Min 73 / Max 77)
Run 2: 75 (SE +/- 1.00, N = 3; Min 74 / Max 77)
Run 3: 74 (SE +/- 0.77, N = 5; Min 73 / Max 77)
1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lz -lm -lpthread

GraphicsMagick 1.3.33 - Operation: Swirl (Iterations Per Minute, More Is Better)
Run 1a: 220 (SE +/- 2.22, N = 5; Min 217 / Max 229)
Run 1b: 221 (SE +/- 3.06, N = 3; Min 217 / Max 227)
Run 2: 221 (SE +/- 2.75, N = 4; Min 218 / Max 229)
Run 3: 220 (SE +/- 2.68, N = 4; Min 217 / Max 228)
1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lz -lm -lpthread

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU, Model: regnety_400m (ms, Fewer Is Better)
Run 1b: 18.78 (SE +/- 0.04, N = 3; Min 18.71 / Max 18.85; MIN 18.39 / MAX 19.77)
Run 2: 17.86 (SE +/- 0.98, N = 3; Min 15.9 / Max 18.98; MIN 15.83 / MAX 19.69)
Run 3: 17.80 (SE +/- 1.08, N = 3; Min 15.64 / Max 19; MIN 15.57 / MAX 19.65)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU, Model: squeezenet_ssd (ms, Fewer Is Better)
Run 1b: 26.76 (SE +/- 0.03, N = 3; Min 26.71 / Max 26.81; MIN 26.11 / MAX 27.68)
Run 2: 26.91 (SE +/- 0.17, N = 3; Min 26.73 / Max 27.24; MIN 25.89 / MAX 103.12)
Run 3: 26.82 (SE +/- 0.01, N = 3; Min 26.81 / Max 26.84; MIN 26.16 / MAX 28.23)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU, Model: yolov4-tiny (ms, Fewer Is Better)
Run 1b: 35.17 (SE +/- 0.61, N = 3; Min 34.53 / Max 36.39; MIN 34.25 / MAX 130.18)
Run 2: 35.07 (SE +/- 0.40, N = 3; Min 34.66 / Max 35.86; MIN 33.9 / MAX 36.64)
Run 3: 34.59 (SE +/- 0.05, N = 3; Min 34.54 / Max 34.68; MIN 34.28 / MAX 36.29)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU, Model: resnet50 (ms, Fewer Is Better)
Run 1b: 36.15 (SE +/- 0.02, N = 3; Min 36.12 / Max 36.17; MIN 35.36 / MAX 41.73)
Run 2: 36.08 (SE +/- 0.02, N = 3; Min 36.04 / Max 36.12; MIN 35.28 / MAX 37.3)
Run 3: 36.20 (SE +/- 0.05, N = 3; Min 36.13 / Max 36.29; MIN 35.4 / MAX 39.73)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: alexnetRun 1bRun 2Run 348121620SE +/- 0.01, N = 3SE +/- 0.03, N = 3SE +/- 0.02, N = 315.2015.1515.18MIN: 14.42 / MAX: 15.97MIN: 14.4 / MAX: 16.05MIN: 14.4 / MAX: 15.831. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: alexnetRun 1bRun 2Run 348121620Min: 15.19 / Avg: 15.2 / Max: 15.21Min: 15.1 / Avg: 15.15 / Max: 15.19Min: 15.16 / Avg: 15.18 / Max: 15.211. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: resnet18Run 1bRun 2Run 348121620SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 318.1218.1518.18MIN: 17.2 / MAX: 19.04MIN: 17.19 / MAX: 22.75MIN: 17.21 / MAX: 30.311. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: resnet18Run 1bRun 2Run 3510152025Min: 18.1 / Avg: 18.12 / Max: 18.15Min: 18.14 / Avg: 18.15 / Max: 18.17Min: 18.13 / Avg: 18.18 / Max: 18.211. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: vgg16Run 1bRun 2Run 31530456075SE +/- 0.03, N = 3SE +/- 0.02, N = 3SE +/- 0.06, N = 366.8566.7266.90MIN: 66.58 / MAX: 67.81MIN: 66.48 / MAX: 69.36MIN: 66.51 / MAX: 78.11. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: vgg16Run 1bRun 2Run 31326395265Min: 66.8 / Avg: 66.85 / Max: 66.91Min: 66.7 / Avg: 66.72 / Max: 66.75Min: 66.8 / Avg: 66.9 / Max: 67.011. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: googlenetRun 1bRun 2Run 3510152025SE +/- 0.02, N = 3SE +/- 0.00, N = 3SE +/- 0.03, N = 319.1619.1719.20MIN: 18.69 / MAX: 20.08MIN: 17.19 / MAX: 20.28MIN: 18.77 / MAX: 20.571. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: googlenetRun 1bRun 2Run 3510152025Min: 19.13 / Avg: 19.16 / Max: 19.19Min: 19.16 / Avg: 19.17 / Max: 19.17Min: 19.17 / Avg: 19.2 / Max: 19.261. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: blazefaceRun 1bRun 2Run 30.59851.1971.79552.3942.9925SE +/- 0.01, N = 3SE +/- 0.18, N = 3SE +/- 0.17, N = 32.662.462.48MIN: 2.55 / MAX: 3MIN: 2.08 / MAX: 3.31MIN: 2.06 / MAX: 3.181. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: blazefaceRun 1bRun 2Run 3246810Min: 2.65 / Avg: 2.66 / Max: 2.67Min: 2.1 / Avg: 2.46 / Max: 2.64Min: 2.15 / Avg: 2.48 / Max: 2.661. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: efficientnet-b0Run 1bRun 2Run 33691215SE +/- 0.01, N = 3SE +/- 0.59, N = 3SE +/- 0.61, N = 39.618.968.97MIN: 9.24 / MAX: 10.38MIN: 7.75 / MAX: 9.96MIN: 7.7 / MAX: 10.221. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: efficientnet-b0Run 1bRun 2Run 33691215Min: 9.59 / Avg: 9.61 / Max: 9.63Min: 7.78 / Avg: 8.96 / Max: 9.56Min: 7.74 / Avg: 8.97 / Max: 9.611. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: mnasnetRun 1bRun 2Run 3246810SE +/- 0.04, N = 3SE +/- 0.45, N = 3SE +/- 0.47, N = 36.335.865.91MIN: 4.74 / MAX: 6.96MIN: 4.93 / MAX: 7.02MIN: 4.93 / MAX: 8.041. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: mnasnetRun 1bRun 2Run 33691215Min: 6.25 / Avg: 6.33 / Max: 6.4Min: 4.95 / Avg: 5.86 / Max: 6.32Min: 4.96 / Avg: 5.91 / Max: 6.411. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: shufflenet-v2Run 1bRun 2Run 3246810SE +/- 0.66, N = 3SE +/- 0.67, N = 3SE +/- 0.69, N = 37.427.377.43MIN: 6.08 / MAX: 8.59MIN: 6.01 / MAX: 8.77MIN: 6.02 / MAX: 9.61. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: shufflenet-v2Run 1bRun 2Run 33691215Min: 6.11 / Avg: 7.42 / Max: 8.11Min: 6.04 / Avg: 7.37 / Max: 8.06Min: 6.05 / Avg: 7.43 / Max: 8.121. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU-v3-v3 - Model: mobilenet-v3Run 1bRun 2Run 31.31632.63263.94895.26526.5815SE +/- 0.41, N = 3SE +/- 0.40, N = 3SE +/- 0.40, N = 35.815.815.85MIN: 4.96 / MAX: 6.93MIN: 4.99 / MAX: 6.66MIN: 5.01 / MAX: 7.831. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU-v3-v3 - Model: mobilenet-v3Run 1bRun 2Run 3246810Min: 4.99 / Avg: 5.81 / Max: 6.23Min: 5.02 / Avg: 5.81 / Max: 6.22Min: 5.05 / Avg: 5.85 / Max: 6.271. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU-v2-v2 - Model: mobilenet-v2Run 1bRun 2Run 3246810SE +/- 0.11, N = 3SE +/- 0.10, N = 3SE +/- 0.10, N = 36.406.426.46MIN: 5.94 / MAX: 8.06MIN: 5.9 / MAX: 8.12MIN: 5.93 / MAX: 7.81. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU-v2-v2 - Model: mobilenet-v2Run 1bRun 2Run 33691215Min: 6.21 / Avg: 6.4 / Max: 6.59Min: 6.22 / Avg: 6.42 / Max: 6.54Min: 6.27 / Avg: 6.46 / Max: 6.591. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: mobilenetRun 1bRun 2Run 3612182430SE +/- 0.02, N = 3SE +/- 0.14, N = 3SE +/- 0.04, N = 325.8025.8925.88MIN: 25.2 / MAX: 27.12MIN: 25.17 / MAX: 26.88MIN: 25.26 / MAX: 27.031. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: mobilenetRun 1bRun 2Run 3612182430Min: 25.78 / Avg: 25.8 / Max: 25.84Min: 25.74 / Avg: 25.89 / Max: 26.17Min: 25.8 / Avg: 25.88 / Max: 25.941. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
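Each result above reports an average over N samples together with a standard error (SE). As a minimal stdlib-only sketch of how those SE ± figures are derived: with N = 3 and the reported min/avg/max for regnety_400m Run 1b (18.71 / 18.78 / 18.85), the middle sample is implied, and SE is the sample standard deviation divided by the square root of N.

```python
import math
import statistics

# Three samples consistent with the regnety_400m Run 1b figures above
# (min 18.71, avg 18.78, max 18.85; the middle sample is implied when N = 3).
samples = [18.71, 18.78, 18.85]

avg = statistics.mean(samples)                             # the reported average
se = statistics.stdev(samples) / math.sqrt(len(samples))   # the "SE +/-" figure

print(f"Avg: {avg:.2f}  SE +/- {se:.2f}  N = {len(samples)}")
# -> Avg: 18.78  SE +/- 0.04  N = 3, matching the reported values
```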

NCNN 20201218, Vulkan GPU targets (ms, fewer is better; N = 3 samples per run unless noted). Built with: (CXX) g++ -O3 -rdynamic -lgomp -lpthread. Each line gives the run average ± standard error (per-sample min–max; observed min/max during the run).

Target: Vulkan GPU - Model: regnety_400m
  Run 1b: 18.76 ±0.06 (18.63–18.84; 18.25/19.9)
  Run 2: 17.98 ±0.92 (16.15–18.95; 16.07/20.44)
  Run 3: 18.00 ±1.11 (15.78–19.21; 15.71/30.02)

Target: Vulkan GPU - Model: squeezenet_ssd
  Run 1b: 26.80 ±0.03 (26.76–26.86; 26.14/27.77)
  Run 2: 26.74 ±0.02 (26.72–26.77; 26.13/27.68)
  Run 3: 26.85 ±0.03 (26.8–26.91; 26.17/27.49)

Target: Vulkan GPU - Model: yolov4-tiny
  Run 1b: 34.60 ±0.04 (34.54–34.69; 34.3/35.54)
  Run 2: 34.60 ±0.05 (34.55–34.69; 33.85/35.45)
  Run 3: 35.10 ±0.23 (34.77–35.54; 33.93/47.35)

Target: Vulkan GPU - Model: resnet50
  Run 1b: 36.08 ±0.01 (36.06–36.09; 35.37/37.49)
  Run 2: 36.27 ±0.10 (36.08–36.39; 34.84/38.98)
  Run 3: 36.49 ±0.38 (36.07–37.24; 35.41/49.38)

Target: Vulkan GPU - Model: alexnet
  Run 1b: 15.16 ±0.02 (15.13–15.19; 14.41/15.93)
  Run 2: 15.17 ±0.02 (15.14–15.2; 14.39/15.71)
  Run 3: 15.22 ±0.02 (15.19–15.25; 14.41/16.02)

Target: Vulkan GPU - Model: resnet18
  Run 1b: 18.24 ±0.08 (18.15–18.4; 17.23/21.78)
  Run 2: 18.18 ±0.05 (18.13–18.27; 17.21/19)
  Run 3: 18.20 ±0.05 (18.11–18.25; 17.17/19.15)

Target: Vulkan GPU - Model: vgg16
  Run 1b: 66.80 ±0.07 (66.67–66.9; 66.46/69.89)
  Run 2: 66.75 ±0.02 (66.7–66.78; 66.49/67.53)
  Run 3: 67.14 ±0.25 (66.83–67.63; 66.58/160.7)

Target: Vulkan GPU - Model: googlenet
  Run 1b: 19.12 ±0.02 (19.09–19.17; 18.71/20)
  Run 2: 18.07 ±1.10 (15.87–19.2; 15.3/20.13)
  Run 3: 18.07 ±1.19 (15.69–19.27; 15.43/20.22)

Target: Vulkan GPU - Model: blazeface
  Run 1b: 2.66 ±0.01 (2.65–2.67; 2.54/2.78)
  Run 2: 2.46 ±0.19 (2.09–2.66; 2.07/2.82)
  Run 3: 2.47 ±0.18 (2.1–2.65; 2.08/2.74)

Target: Vulkan GPU - Model: efficientnet-b0
  Run 1b: 9.61 ±0.03 (9.57–9.67; 9.23/10.34)
  Run 2: 8.95 ±0.61 (7.73–9.61; 7.68/11.14)
  Run 3: 8.99 ±0.61 (7.77–9.6; 7.73/11.32)

Target: Vulkan GPU - Model: mnasnet
  Run 1b: 5.95 ±0.39 (5.17–6.35; 4.8/7.04)
  Run 2: 5.62 ±0.69, N = 2 (4.93–6.3; 4.9/6.46)
  Run 3: 5.92 ±0.44 (5.04–6.38; 5.01/8.23)

Target: Vulkan GPU - Model: shufflenet-v2
  Run 1b: 7.40 ±0.65 (6.11–8.06; 6.09/8.92)
  Run 2: 7.40 ±0.67 (6.06–8.09; 6.03/8.79)
  Run 3: 8.09 ±0.01, N = 2 (8.08–8.1; 7.66/10)

Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3
  Run 1b: 5.85 ±0.41 (5.02–6.28; 4.98/6.81)
  Run 2: 5.86 ±0.42 (5.03–6.32; 5/7)
  Run 3: 5.95 ±0.46 (5.03–6.5; 4.99/9.49)

Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2
  Run 1b: 6.39 ±0.10 (6.22–6.57; 5.92/7.91)
  Run 2: 6.36 ±0.08 (6.2–6.45; 5.89/7.7)
  Run 3: 6.39 ±0.13 (6.14–6.55; 5.95/8.32)

Target: Vulkan GPU - Model: mobilenet
  Run 1b: 25.76 ±0.03 (25.72–25.81; 25.23/26.89)
  Run 2: 25.71 ±0.02 (25.67–25.75; 25.14/26.83)
  Run 3: 25.87 ±0.03 (25.81–25.93; 25.31/27.07)

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer - Model: Asian Dragon (Frames Per Second, more is better; N = 3 per run). Each line gives the run average ± standard error (per-sample min–max; observed min/max during the run).
  Run 1a: 7.8910 ±0.0666 (7.79–8.02; 7.55/11.24)
  Run 1b: 7.7932 ±0.1119 (7.64–8.01; 7.43/11.19)
  Run 2: 7.7536 ±0.0832 (7.6–7.89; 7.38/11.32)
  Run 3: 7.8463 ±0.1027 (7.67–8.03; 7.45/11.33)

simdjson

This is a benchmark of simdjson, a high-performance JSON parser. simdjson aims to be the fastest JSON parser available and is used by projects such as Microsoft FishStore, Yandex ClickHouse, and Shopify. Learn more via the OpenBenchmarking.org test page.
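simdjson itself is a C++ library, so as a hedged, stdlib-only illustration of what a GB/s parser-throughput figure means, here is a Python sketch that times the standard-library json parser on a synthetic document (not the Kostya test file used below, and far slower than simdjson):

```python
import json
import time

# Build a synthetic JSON document; the structure is illustrative only.
doc = json.dumps([{"id": i, "name": f"item-{i}", "vals": [i, i + 1]}
                  for i in range(10_000)])
payload = doc.encode("utf-8")

# Time a single parse and express it as bytes parsed per second.
start = time.perf_counter()
parsed = json.loads(payload)
elapsed = time.perf_counter() - start

gb_per_s = len(payload) / elapsed / 1e9
print(f"Parsed {len(payload)} bytes in {elapsed:.4f}s -> {gb_per_s:.3f} GB/s")
```

A real throughput test would repeat the parse many times and average, exactly as the SE/N figures in this file reflect.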

simdjson 0.7.1 - Throughput Test: Kostya (GB/s, more is better; N = 3 per run). Built with: (CXX) g++ -O3 -pthread.
  Run 1a, Run 1b, Run 2, Run 3: 0.59 in every case (SE ±0.00; all samples 0.59)

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.
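LZ4 bindings are not part of the Python standard library, so as a hedged analog, the sketch below measures compression and decompression throughput in MB/s with stdlib zlib on synthetic data; the level argument loosely mirrors the "Compression Level 3/9" settings reported below, and the absolute numbers are not comparable to LZ4's.

```python
import time
import zlib

# Synthetic compressible input (the real test uses an Ubuntu ISO).
data = b"phoronix-test-suite " * 50_000  # ~1 MB

start = time.perf_counter()
compressed = zlib.compress(data, level=3)
c_elapsed = time.perf_counter() - start

start = time.perf_counter()
restored = zlib.decompress(compressed)
d_elapsed = time.perf_counter() - start

assert restored == data  # round-trip sanity check
print(f"compress:   {len(data) / c_elapsed / 1e6:.1f} MB/s")
print(f"decompress: {len(data) / d_elapsed / 1e6:.1f} MB/s")
```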

LZ4 Compression 1.9.3 (MB/s, more is better; N = 3 per run). Built with: (CC) gcc -O3. Each line gives the run average ± standard error (per-sample min–max).

Compression Level: 9 - Decompression Speed
  Run 1a: 9279.2 ±1.55 (9276.2–9281.3)
  Run 1b: 9347.5 ±4.84 (9341.5–9357.1)
  Run 2: 9245.8 ±1.17 (9243.5–9247.4)
  Run 3: 9312.8 ±2.96 (9309.5–9318.7)

Compression Level: 9 - Compression Speed
  Run 1a: 42.51 ±0.10 (42.4–42.71)
  Run 1b: 43.50 ±0.01 (43.49–43.52)
  Run 2: 43.02 ±0.01 (43.01–43.03)
  Run 3: 43.48 ±0.01 (43.47–43.49)

Compression Level: 3 - Decompression Speed
  Run 1a: 9266.6 ±1.35 (9264–9268.5)
  Run 1b: 9332.1 ±6.92 (9325–9345.9)
  Run 2: 9238.9 ±5.39 (9228.1–9244.5)
  Run 3: 9309.2 ±3.49 (9302.8–9314.8)

Compression Level: 3 - Compression Speed
  Run 1a: 43.65 ±0.01 (43.63–43.67)
  Run 1b: 44.35 ±0.15 (44.05–44.5)
  Run 2: 43.99 ±0.00 (43.99–43.99)
  Run 3: 44.47 ±0.01 (44.46–44.48)

DeepSpeech

Mozilla DeepSpeech is a speech-to-text engine powered by TensorFlow for machine learning and derived from Baidu's Deep Speech research paper. This test profile times the speech-to-text process for a roughly three-minute audio recording. Learn more via the OpenBenchmarking.org test page.

DeepSpeech 0.6 - Acceleration: CPU (Seconds, fewer is better; N = 3 per run). Each line gives the run average ± standard error (per-sample min–max).
  Run 1a: 79.90 ±0.33 (79.52–80.55)
  Run 1b: 80.07 ±0.37 (79.67–80.8)
  Run 2: 79.93 ±0.34 (79.58–80.62)
  Run 3: 80.38 ±0.37 (79.97–81.13)

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12 - Settings: ETC1S (Seconds, fewer is better; N = 3 per run). Built with: (CXX) g++ -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread.
  Run 1a: 63.93 ±0.38 (63.18–64.37)
  Run 1b: 63.99 ±0.19 (63.62–64.18)
  Run 2: 63.78 ±0.37 (63.2–64.47)
  Run 3: 63.97 ±0.20 (63.57–64.21)

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.
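speedtest1 is a C program bundled with SQLite; as a hedged, much smaller analog of what such a timed database workload looks like, here is a stdlib-only Python sketch using sqlite3 (the table shape, row count, and single-transaction batching are illustrative assumptions, not speedtest1's actual workload):

```python
import sqlite3
import time

# In-memory database keeps the sketch self-contained; speedtest1 itself
# runs many distinct workloads against an on-disk database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (a INTEGER PRIMARY KEY, b INTEGER, c TEXT)")

start = time.perf_counter()
with conn:  # one transaction, since batching dominates insert throughput
    conn.executemany(
        "INSERT INTO t1 (b, c) VALUES (?, ?)",
        ((i % 1000, f"row-{i}") for i in range(100_000)),
    )
elapsed = time.perf_counter() - start

(count,) = conn.execute("SELECT COUNT(*) FROM t1").fetchone()
print(f"Inserted {count} rows in {elapsed:.2f}s")
```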

SQLite Speedtest 3.30 - Timed Time - Size 1,000 (Seconds, fewer is better; N = 3 per run). Built with: (CC) gcc -O2 -ldl -lz -lpthread.
  Run 1b: 63.37 ±0.10 (63.25–63.57)
  Run 2: 62.75 ±0.09 (62.58–62.86)
  Run 3: 63.23 ±0.08 (63.12–63.38)

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.1 - Video Input: Summer Nature 4K (FPS, more is better). Built with: (CC) gcc -pthread. Each line gives the run average ± standard error (per-sample min–max; observed min/max during the run).
  Run 1a: 131.02 ±1.42, N = 5 (129.31–136.71; 109.35/171.58)
  Run 1b: 130.63 ±1.23, N = 6 (129.2–136.79; 109.06/171.73)
  Run 2: 130.46 ±1.24, N = 6 (128.83–136.61; 108.76/171.84)
  Run 3: 128.92 ±1.20, N = 6 (126.53–134.77; 108.02/171.01)

eSpeak-NG Speech Engine

This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.

eSpeak-NG Speech Engine 20200907 - Text-To-Speech Synthesis (Seconds, fewer is better). Built with: (CC) gcc -O2 -std=c99.
  Run 1a: 35.85 ±0.29, N = 9 (34.34–37.55)
  Run 1b: 35.97 ±0.33, N = 4 (35.58–36.95)
  Run 2: 36.14 ±0.29, N = 4 (35.68–36.98)
  Run 3: 36.12 ±0.28, N = 4 (35.7–36.95)

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 Alpha (Frames Per Second, more is better; N = 3 per run).

Speed: 1
  Run 1a: 0.318 ±0.002; Run 1b: 0.317 ±0.001; Run 2: 0.318 ±0.001; Run 3: 0.319 ±0.002 (all samples round to 0.32)

Speed: 5
  Run 1a: 0.971 ±0.003; Run 1b: 0.970 ±0.004; Run 2: 0.973 ±0.004; Run 3: 0.969 ±0.003 (all samples 0.97–0.98)

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 - Acceleration: CPU (M samples/s, more is better; N = 3 per run).

Scene: Bedroom
  Run 1b: 0.966 ±0.001; Run 2: 0.962 ±0.002; Run 3: 0.967 ±0.002

Scene: Supercar
  Run 1b: 2.219 ±0.005; Run 2: 2.216 ±0.005; Run 3: 2.221 ±0.005

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. Its current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.
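"Average inference time" means the total wall time divided by the number of inference calls. A minimal sketch of that measurement, with a stand-in workload in place of the actual TFLite interpreter call (which is not available here):

```python
import time

def fake_inference():
    # Stand-in workload; a real run would invoke the TFLite interpreter here.
    return sum(i * i for i in range(1000))

N = 50
start = time.perf_counter()
for _ in range(N):
    fake_inference()
avg_us = (time.perf_counter() - start) / N * 1e6  # microseconds, as reported below
print(f"Average inference time: {avg_us:.0f} microseconds over {N} runs")
```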

TensorFlow Lite 2020-08-23 (Microseconds, fewer is better; N = 3 per run). Each line gives the run average ± standard error (per-sample min–max).

Model: NASNet Mobile
  Run 1a: 297566 ±1898.56 (293802–299883)
  Run 1b: 298690 ±1944.34 (294837–301070)
  Run 2: 298821 ±3107.84 (293091–303772)
  Run 3: 298804 ±2240.19 (294330–301248)

Model: SqueezeNet
  Run 1a: 355273 ±2610.64 (350056–358071)
  Run 1b: 354452 ±2370.82 (349712–356919)
  Run 2: 354514 ±2559.08 (349396–357134)
  Run 3: 355708 ±2669.71 (350433–359063)

Model: Mobilenet Float
  Run 1a: 237589 ±1741.84 (234130–239677)
  Run 1b: 237681 ±1676.80 (234340–239603)
  Run 2: 238039 ±1579.94 (234882–239728)
  Run 3: 238421 ±1733.97 (234968–240426)

Model: Mobilenet Quant
  Run 1a: 236677 ±1606.44 (233465–238348)
  Run 1b: 237181 ±1568.74 (234057–238992)
  Run 2: 236632 ±1485.00 (233662–238123)
  Run 3: 236806 ±1502.01 (233813–238528)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 (ms, fewer is better; N = 12 per run). Built with: (CXX) g++ -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread. Each line gives the run average ± standard error (per-sample min–max; observed minimum during the run).

Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU
  Run 1a: 6.57310 ±0.08450 (5.65–6.71; min 4.26)
  Run 1b: 6.65251 ±0.11167 (5.43–6.81; min 4.45)
  Run 2: 6.75171 ±0.08550 (5.82–6.93; min 4.43)
  Run 3: 6.55662 ±0.09257 (5.54–6.68; min 4.15)

Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU
  Run 1a: 2.96456 ±0.03555 (2.59–3.03; min 1.96)
  Run 1b: 2.94644 ±0.03721 (2.55–3.01; min 1.95)
  Run 2: 2.99227 ±0.03629 (2.6–3.05; min 1.99)
  Run 3: 2.98555 ±0.03782 (2.57–3.03; min 1.94)

LevelDB

LevelDB is a key-value storage library developed by Google that can use Snappy for data compression and offers other modern features. Learn more via the OpenBenchmarking.org test page.
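The "Seek Random" result below reports microseconds per random-key lookup. As a hedged, stdlib-only analog of that measurement shape (LevelDB itself is an on-disk C++ library, so an in-memory dict stands in and the absolute numbers are not comparable):

```python
import random
import time

# Populate a key-value store; key format and sizes are illustrative only.
store = {f"key-{i:08d}".encode(): f"value-{i}".encode() for i in range(100_000)}
keys = list(store)
random.seed(42)
lookups = random.choices(keys, k=50_000)  # random seeks, with repetition

start = time.perf_counter()
for k in lookups:
    _ = store[k]
us_per_op = (time.perf_counter() - start) / len(lookups) * 1e6
print(f"{us_per_op:.3f} microseconds per op")
```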

LevelDB 1.22 - Benchmark: Seek Random (Microseconds Per Op, fewer is better). Built with: (CXX) g++ -O3 -lsnappy -lpthread. Each line gives the run average ± standard error (per-sample min–max).
  Run 1: 12.53 ±0.10, N = 12 (11.6–12.83)
  Run 1a: 12.68 ±0.12, N = 15 (11.35–13.35)
  Run 1b: 12.67 ±0.10, N = 15 (11.57–13.22)
  Run 2: 12.40 ±0.09, N = 15 (11.35–12.79)
  Run 3: 12.27 ±0.13, N = 15 (10.7–12.84)

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 (Iterations Per Minute, more is better; N = 3 per run). Built with: (CC) gcc -fopenmp -O2 -pthread -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lz -lm -lpthread. Each line gives the run average ± standard error (per-sample min–max).

Operation: Enhanced
  Run 1a: 119 ±1.00 (118–121)
  Run 1b: 119 ±1.00 (118–121)
  Run 2: 119 ±1.00 (118–121)
  Run 3: 119 ±0.67 (118–120)

Operation: Noise-Gaussian
  Run 1a: 155 ±1.20 (153–157)
  Run 1b: 156 ±1.20 (154–158)
  Run 2: 156 ±1.20 (154–158)
  Run 3: 154 ±1.76 (151–157)

Operation: Resizing
  Run 1a: 573 ±3.51 (569–580)
  Run 1b: 570 ±4.84 (565–580)
  Run 2: 571 ±4.91 (565–581)
  Run 3: 570 ±3.84 (566–578)

Operation: HWB Color Space
  Run 1a: 775 ±5.70 (768–786)
  Run 1b: 776 ±5.24 (769–786)
  Run 2: 769 ±6.17 (762–781)
  Run 3: 775 ±3.84 (771–783)

Operation: Rotate
  Run 1a: 686; Run 1b: 772; Run 2: 688; Run 3: 761
  (Per-sample detail was reported for only one run: ±0.88, N = 3, samples 770–773, avg 771.67, which matches the Run 1b result.)

simdjson

This is a benchmark of simdjson, a high-performance JSON parser. simdjson aims to be the fastest JSON parser and is used by projects such as Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
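simdjson itself is a C++ library; as a rough illustration of how a parser-throughput figure like the GB/s results below is derived, here is a hedged Python sketch using the standard json module as a stand-in codec. The corpus contents and sizes are illustrative, not the actual test harness inputs:

```python
import json
import time

# Build a small corpus of JSON documents (stand-in for the test's input files).
docs = [json.dumps({"id": i, "name": f"user{i}", "tags": list(range(10))})
        for i in range(10_000)]
payload_bytes = sum(len(d.encode("utf-8")) for d in docs)

start = time.perf_counter()
parsed = [json.loads(d) for d in docs]
elapsed = time.perf_counter() - start

# Throughput in GB/s, as the benchmark reports it (more is better).
gbps = payload_bytes / elapsed / 1e9
print(f"Parsed {payload_bytes} bytes in {elapsed:.4f}s -> {gbps:.3f} GB/s")
```

A real simdjson run parses multi-megabyte files (tweets, random records) rather than many tiny documents, but the metric is computed the same way: bytes parsed divided by wall-clock time.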

OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.7.1Throughput Test: LargeRandomRun 1aRun 1bRun 2Run 30.08550.1710.25650.3420.4275SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 30.380.380.380.381. (CXX) g++ options: -O3 -pthread
OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.7.1Throughput Test: LargeRandomRun 1aRun 1bRun 2Run 312345Min: 0.38 / Avg: 0.38 / Max: 0.38Min: 0.38 / Avg: 0.38 / Max: 0.38Min: 0.38 / Avg: 0.38 / Max: 0.38Min: 0.38 / Avg: 0.38 / Max: 0.381. (CXX) g++ options: -O3 -pthread

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile times both compression and decompression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterASTC Encoder 2.0Preset: MediumRun 1aRun 1bRun 2Run 3246810SE +/- 0.08, N = 15SE +/- 0.08, N = 15SE +/- 0.07, N = 15SE +/- 0.08, N = 156.826.876.786.881. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterASTC Encoder 2.0Preset: MediumRun 1aRun 1bRun 2Run 33691215Min: 6.08 / Avg: 6.82 / Max: 6.96Min: 6.04 / Avg: 6.87 / Max: 7.02Min: 6.01 / Avg: 6.78 / Max: 6.94Min: 6.14 / Avg: 6.88 / Max: 7.041. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

OpenBenchmarking.orgSeconds, Fewer Is BetterASTC Encoder 2.0Preset: ThoroughRun 1aRun 1bRun 2Run 31224364860SE +/- 0.48, N = 3SE +/- 0.39, N = 3SE +/- 0.51, N = 3SE +/- 0.52, N = 351.8652.3251.9352.361. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterASTC Encoder 2.0Preset: ThoroughRun 1aRun 1bRun 2Run 31020304050Min: 50.92 / Avg: 51.86 / Max: 52.47Min: 51.55 / Avg: 52.32 / Max: 52.76Min: 50.91 / Avg: 51.93 / Max: 52.45Min: 51.31 / Avg: 52.36 / Max: 52.951. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBasis Universal 1.12Settings: UASTC Level 2Run 1aRun 1bRun 2Run 31224364860SE +/- 0.65, N = 3SE +/- 0.67, N = 3SE +/- 0.72, N = 3SE +/- 0.67, N = 354.8854.8454.7255.201. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterBasis Universal 1.12Settings: UASTC Level 2Run 1aRun 1bRun 2Run 31122334455Min: 53.59 / Avg: 54.88 / Max: 55.54Min: 53.51 / Avg: 54.84 / Max: 55.53Min: 53.29 / Avg: 54.72 / Max: 55.51Min: 53.86 / Avg: 55.2 / Max: 55.91. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

simdjson

This is a benchmark of simdjson, a high-performance JSON parser. simdjson aims to be the fastest JSON parser and is used by projects such as Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.7.1Throughput Test: PartialTweetsRun 1aRun 1bRun 2Run 30.14630.29260.43890.58520.7315SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 30.650.650.650.651. (CXX) g++ options: -O3 -pthread
OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.7.1Throughput Test: PartialTweetsRun 1aRun 1bRun 2Run 3246810Min: 0.65 / Avg: 0.65 / Max: 0.65Min: 0.65 / Avg: 0.65 / Max: 0.66Min: 0.65 / Avg: 0.65 / Max: 0.65Min: 0.65 / Avg: 0.65 / Max: 0.661. (CXX) g++ options: -O3 -pthread

OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.7.1Throughput Test: DistinctUserIDRun 1aRun 1bRun 2Run 30.15080.30160.45240.60320.754SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 30.670.670.670.671. (CXX) g++ options: -O3 -pthread
OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.7.1Throughput Test: DistinctUserIDRun 1aRun 1bRun 2Run 3246810Min: 0.67 / Avg: 0.67 / Max: 0.67Min: 0.67 / Avg: 0.67 / Max: 0.67Min: 0.67 / Avg: 0.67 / Max: 0.67Min: 0.67 / Avg: 0.67 / Max: 0.671. (CXX) g++ options: -O3 -pthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPURun 1aRun 1bRun 2Run 33691215SE +/- 0.12250, N = 3SE +/- 0.08292, N = 7SE +/- 0.06857, N = 10SE +/- 0.07293, N = 98.918668.965029.033439.02351MIN: 7.69MIN: 7.54MIN: 7.54MIN: 7.471. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPURun 1aRun 1bRun 2Run 33691215Min: 8.68 / Avg: 8.92 / Max: 9.09Min: 8.53 / Avg: 8.97 / Max: 9.16Min: 8.49 / Avg: 9.03 / Max: 9.31Min: 8.49 / Avg: 9.02 / Max: 9.241. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPURun 1aRun 1bRun 2Run 33691215SE +/- 0.07341, N = 15SE +/- 0.09473, N = 3SE +/- 0.06627, N = 3SE +/- 0.07590, N = 89.143009.154069.063789.02724MIN: 6.13MIN: 7.86MIN: 8.04MIN: 7.661. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPURun 1aRun 1bRun 2Run 33691215Min: 8.37 / Avg: 9.14 / Max: 9.55Min: 8.98 / Avg: 9.15 / Max: 9.3Min: 8.93 / Avg: 9.06 / Max: 9.14Min: 8.51 / Avg: 9.03 / Max: 9.211. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Libplacebo

Libplacebo is a multimedia rendering library based on the core rendering code of the MPV player. The libplacebo benchmark relies on the Vulkan API and tests various primitives. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterLibplacebo 2.72.2Test: av1_grain_lapRun 1Run 2Run 3140280420560700SE +/- 13.30, N = 3SE +/- 13.66, N = 3SE +/- 11.90, N = 3668.73668.59670.211. (CXX) g++ options: -lm -lglslang -lHLSL -lOGLCompiler -lOSDependent -lSPIRV -lSPVRemapper -lSPIRV-Tools -lSPIRV-Tools-opt -lpthread -pthread -pipe -std=c++11 -fvisibility=hidden -fPIC -MD -MQ -MF
OpenBenchmarking.orgFPS, More Is BetterLibplacebo 2.72.2Test: av1_grain_lapRun 1Run 2Run 3120240360480600Min: 642.14 / Avg: 668.73 / Max: 682.12Min: 641.27 / Avg: 668.59 / Max: 682.26Min: 646.41 / Avg: 670.21 / Max: 682.421. (CXX) g++ options: -lm -lglslang -lHLSL -lOGLCompiler -lOSDependent -lSPIRV -lSPVRemapper -lSPIRV-Tools -lSPIRV-Tools-opt -lpthread -pthread -pipe -std=c++11 -fvisibility=hidden -fPIC -MD -MQ -MF

OpenBenchmarking.orgFPS, More Is BetterLibplacebo 2.72.2Test: hdr_peakdetectRun 1Run 2Run 39K18K27K36K45KSE +/- 178.07, N = 3SE +/- 110.46, N = 3SE +/- 73.19, N = 342527.0742389.9842544.801. (CXX) g++ options: -lm -lglslang -lHLSL -lOGLCompiler -lOSDependent -lSPIRV -lSPVRemapper -lSPIRV-Tools -lSPIRV-Tools-opt -lpthread -pthread -pipe -std=c++11 -fvisibility=hidden -fPIC -MD -MQ -MF
OpenBenchmarking.orgFPS, More Is BetterLibplacebo 2.72.2Test: hdr_peakdetectRun 1Run 2Run 37K14K21K28K35KMin: 42218.45 / Avg: 42527.07 / Max: 42835.29Min: 42170.77 / Avg: 42389.98 / Max: 42523.3Min: 42399.8 / Avg: 42544.8 / Max: 42634.731. (CXX) g++ options: -lm -lglslang -lHLSL -lOGLCompiler -lOSDependent -lSPIRV -lSPVRemapper -lSPIRV-Tools -lSPIRV-Tools-opt -lpthread -pthread -pipe -std=c++11 -fvisibility=hidden -fPIC -MD -MQ -MF

OpenBenchmarking.orgFPS, More Is BetterLibplacebo 2.72.2Test: polar_nocomputeRun 1Run 2Run 3714212835SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.01, N = 329.8529.8629.891. (CXX) g++ options: -lm -lglslang -lHLSL -lOGLCompiler -lOSDependent -lSPIRV -lSPVRemapper -lSPIRV-Tools -lSPIRV-Tools-opt -lpthread -pthread -pipe -std=c++11 -fvisibility=hidden -fPIC -MD -MQ -MF
OpenBenchmarking.orgFPS, More Is BetterLibplacebo 2.72.2Test: polar_nocomputeRun 1Run 2Run 3714212835Min: 29.85 / Avg: 29.85 / Max: 29.85Min: 29.86 / Avg: 29.86 / Max: 29.87Min: 29.87 / Avg: 29.89 / Max: 29.91. (CXX) g++ options: -lm -lglslang -lHLSL -lOGLCompiler -lOSDependent -lSPIRV -lSPVRemapper -lSPIRV-Tools -lSPIRV-Tools-opt -lpthread -pthread -pipe -std=c++11 -fvisibility=hidden -fPIC -MD -MQ -MF

OpenBenchmarking.orgFPS, More Is BetterLibplacebo 2.72.2Test: deband_heavyRun 1Run 2Run 31020304050SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.00, N = 343.7143.8743.921. (CXX) g++ options: -lm -lglslang -lHLSL -lOGLCompiler -lOSDependent -lSPIRV -lSPVRemapper -lSPIRV-Tools -lSPIRV-Tools-opt -lpthread -pthread -pipe -std=c++11 -fvisibility=hidden -fPIC -MD -MQ -MF
OpenBenchmarking.orgFPS, More Is BetterLibplacebo 2.72.2Test: deband_heavyRun 1Run 2Run 3918273645Min: 43.71 / Avg: 43.71 / Max: 43.72Min: 43.86 / Avg: 43.87 / Max: 43.88Min: 43.92 / Avg: 43.92 / Max: 43.921. (CXX) g++ options: -lm -lglslang -lHLSL -lOGLCompiler -lOSDependent -lSPIRV -lSPVRemapper -lSPIRV-Tools -lSPIRV-Tools-opt -lpthread -pthread -pipe -std=c++11 -fvisibility=hidden -fPIC -MD -MQ -MF

VkResample

VkResample is a Vulkan-based image upscaling library based on VkFFT. The sample input file is upscaling a 4K image to 8K using Vulkan-based GPU acceleration. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterVkResample 1.0Upscale: 2x - Precision: SingleRun 1Run 1aRun 2Run 380160240320400SE +/- 0.06, N = 3SE +/- 0.38, N = 3SE +/- 0.35, N = 3SE +/- 0.90, N = 3389.26388.40386.87389.581. (CXX) g++ options: -O3 -pthread
OpenBenchmarking.orgms, Fewer Is BetterVkResample 1.0Upscale: 2x - Precision: SingleRun 1Run 1aRun 2Run 370140210280350Min: 389.18 / Avg: 389.25 / Max: 389.38Min: 387.94 / Avg: 388.4 / Max: 389.16Min: 386.49 / Avg: 386.87 / Max: 387.58Min: 388.29 / Avg: 389.58 / Max: 391.311. (CXX) g++ options: -O3 -pthread

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is Betterrav1e 0.4 AlphaSpeed: 6Run 1aRun 1bRun 2Run 30.29570.59140.88711.18281.4785SE +/- 0.008, N = 3SE +/- 0.006, N = 3SE +/- 0.006, N = 3SE +/- 0.007, N = 31.3091.3081.3141.310
OpenBenchmarking.orgFrames Per Second, More Is Betterrav1e 0.4 AlphaSpeed: 6Run 1aRun 1bRun 2Run 3246810Min: 1.3 / Avg: 1.31 / Max: 1.33Min: 1.3 / Avg: 1.31 / Max: 1.32Min: 1.31 / Avg: 1.31 / Max: 1.33Min: 1.3 / Avg: 1.31 / Max: 1.32

LevelDB

LevelDB is a key-value storage library developed by Google that can use Snappy for data compression, among other modern features. Learn more via the OpenBenchmarking.org test page.
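The "Microseconds Per Op" metric in the LevelDB results below is simply total wall-clock time for a pass divided by the number of operations. A hedged sketch of that methodology, using an in-memory dict as a stand-in store (the actual test drives LevelDB's db_bench against an on-disk database; key count and value size here are arbitrary):

```python
import random
import time

store = {}
keys = [f"key{i:08d}" for i in range(50_000)]

# Sequential fill: insert every key in order, timing the whole pass.
start = time.perf_counter()
for k in keys:
    store[k] = b"x" * 100  # 100-byte values, an arbitrary stand-in size
fill_us_per_op = (time.perf_counter() - start) / len(keys) * 1e6

# Random read: look every key up in shuffled order.
order = keys[:]
random.shuffle(order)
start = time.perf_counter()
for k in order:
    _ = store[k]
read_us_per_op = (time.perf_counter() - start) / len(keys) * 1e6

print(f"sequential fill: {fill_us_per_op:.3f} us/op")
print(f"random read:     {read_us_per_op:.3f} us/op")
```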

OpenBenchmarking.orgMicroseconds Per Op, Fewer Is BetterLevelDB 1.22Benchmark: Random ReadRun 1Run 1aRun 1bRun 2Run 33691215SE +/- 0.144, N = 15SE +/- 0.131, N = 15SE +/- 0.127, N = 15SE +/- 0.142, N = 15SE +/- 0.160, N = 129.4199.4919.4359.2369.1651. (CXX) g++ options: -O3 -lsnappy -lpthread
OpenBenchmarking.orgMicroseconds Per Op, Fewer Is BetterLevelDB 1.22Benchmark: Random ReadRun 1Run 1aRun 1bRun 2Run 33691215Min: 8.1 / Avg: 9.42 / Max: 10.24Min: 8.35 / Avg: 9.49 / Max: 10.05Min: 8.28 / Avg: 9.43 / Max: 9.89Min: 7.95 / Avg: 9.24 / Max: 10Min: 8.02 / Avg: 9.16 / Max: 9.871. (CXX) g++ options: -O3 -lsnappy -lpthread

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgEncode Time - Seconds, Fewer Is BetterWebP Image Encode 1.1Encode Settings: Quality 100, Lossless, Highest CompressionRun 1aRun 1bRun 2Run 31020304050SE +/- 0.00, N = 3SE +/- 0.03, N = 3SE +/- 0.02, N = 3SE +/- 0.04, N = 341.8541.0541.4341.131. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff
OpenBenchmarking.orgEncode Time - Seconds, Fewer Is BetterWebP Image Encode 1.1Encode Settings: Quality 100, Lossless, Highest CompressionRun 1aRun 1bRun 2Run 3918273645Min: 41.84 / Avg: 41.85 / Max: 41.86Min: 40.99 / Avg: 41.05 / Max: 41.09Min: 41.4 / Avg: 41.43 / Max: 41.47Min: 41.06 / Avg: 41.13 / Max: 41.181. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

Betsy GPU Compressor

Betsy is an open-source GPU texture compressor supporting various compression techniques. Betsy is written in GLSL, using Vulkan/OpenGL compute shaders for GPU-based texture compression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBetsy GPU Compressor 1.1 BetaCodec: ETC1 - Quality: HighestRun 1Run 2Run 33691215SE +/- 0.34, N = 12SE +/- 0.50, N = 1510.9610.0211.091. (CXX) g++ options: -O3 -O2 -lpthread -ldl
OpenBenchmarking.orgSeconds, Fewer Is BetterBetsy GPU Compressor 1.1 BetaCodec: ETC1 - Quality: HighestRun 1Run 2Run 33691215Min: 8.23 / Avg: 10.96 / Max: 13.72Min: 4.93 / Avg: 10.02 / Max: 11.131. (CXX) g++ options: -O3 -O2 -lpthread -ldl

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.
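The PBKDF2 figures further down report how many key-derivation iterations per second the CPU sustains. That measurement can be sketched with Python's hashlib.pbkdf2_hmac; the password, salt, and iteration count here are illustrative (cryptsetup's own benchmark tunes the iteration count against a fixed wall-clock budget rather than timing a fixed count):

```python
import hashlib
import time

password = b"benchmark-passphrase"   # illustrative inputs, not cryptsetup's
salt = b"0123456789abcdef"
iterations = 200_000

start = time.perf_counter()
key = hashlib.pbkdf2_hmac("sha512", password, salt, iterations, dklen=64)
elapsed = time.perf_counter() - start

# Iterations per second, as in the PBKDF2-sha512 result (more is better).
iters_per_sec = iterations / elapsed
print(f"PBKDF2-sha512: {iters_per_sec:,.0f} iterations/sec")
```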

OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupTwofish-XTS 512b DecryptionRun 1aRun 1bRun 2Run 380160240320400SE +/- 0.09, N = 3SE +/- 0.30, N = 2SE +/- 0.30, N = 2SE +/- 0.23, N = 3383.5383.2383.4383.4
OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupTwofish-XTS 512b DecryptionRun 1aRun 1bRun 2Run 370140210280350Min: 383.3 / Avg: 383.47 / Max: 383.6Min: 382.9 / Avg: 383.2 / Max: 383.5Min: 383.1 / Avg: 383.4 / Max: 383.7Min: 383 / Avg: 383.37 / Max: 383.8

OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupTwofish-XTS 512b EncryptionRun 1aRun 1bRun 2Run 380160240320400SE +/- 0.19, N = 3SE +/- 0.17, N = 3SE +/- 0.19, N = 3SE +/- 0.18, N = 3382.7382.4382.4382.8
OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupTwofish-XTS 512b EncryptionRun 1aRun 1bRun 2Run 370140210280350Min: 382.3 / Avg: 382.67 / Max: 382.9Min: 382.2 / Avg: 382.37 / Max: 382.7Min: 382.2 / Avg: 382.43 / Max: 382.8Min: 382.5 / Avg: 382.77 / Max: 383.1

OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupSerpent-XTS 512b DecryptionRun 1aRun 1bRun 2Run 3150300450600750SE +/- 0.92, N = 3SE +/- 0.42, N = 3SE +/- 0.37, N = 3SE +/- 0.15, N = 2681.8681.0682.4680.0
OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupSerpent-XTS 512b DecryptionRun 1aRun 1bRun 2Run 3120240360480600Min: 680.6 / Avg: 681.8 / Max: 683.6Min: 680.2 / Avg: 681 / Max: 681.6Min: 682 / Avg: 682.37 / Max: 683.1Min: 679.8 / Avg: 679.95 / Max: 680.1

OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupSerpent-XTS 512b EncryptionRun 1aRun 1bRun 2Run 3150300450600750SE +/- 0.38, N = 3SE +/- 0.45, N = 3SE +/- 0.40, N = 3SE +/- 0.49, N = 3697.1696.2696.5695.8
OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupSerpent-XTS 512b EncryptionRun 1aRun 1bRun 2Run 3120240360480600Min: 696.5 / Avg: 697.07 / Max: 697.8Min: 695.3 / Avg: 696.2 / Max: 696.7Min: 695.8 / Avg: 696.5 / Max: 697.2Min: 695.2 / Avg: 695.83 / Max: 696.8

OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupAES-XTS 512b DecryptionRun 1aRun 1bRun 2Run 36001200180024003000SE +/- 4.08, N = 3SE +/- 5.81, N = 3SE +/- 3.37, N = 3SE +/- 8.31, N = 32652.02646.22649.52650.8
OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupAES-XTS 512b DecryptionRun 1aRun 1bRun 2Run 35001000150020002500Min: 2643.9 / Avg: 2652 / Max: 2656.9Min: 2636.5 / Avg: 2646.2 / Max: 2656.6Min: 2643 / Avg: 2649.53 / Max: 2654.2Min: 2639.9 / Avg: 2650.77 / Max: 2667.1

OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupAES-XTS 512b EncryptionRun 1aRun 1bRun 2Run 36001200180024003000SE +/- 4.36, N = 3SE +/- 5.46, N = 3SE +/- 2.20, N = 3SE +/- 6.97, N = 32646.22645.72648.42645.0
OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupAES-XTS 512b EncryptionRun 1aRun 1bRun 2Run 35001000150020002500Min: 2640.2 / Avg: 2646.23 / Max: 2654.7Min: 2636.4 / Avg: 2645.7 / Max: 2655.3Min: 2644.2 / Avg: 2648.37 / Max: 2651.7Min: 2636.4 / Avg: 2645 / Max: 2658.8

OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupTwofish-XTS 256b DecryptionRun 1aRun 1bRun 2Run 380160240320400SE +/- 0.09, N = 3SE +/- 0.17, N = 3SE +/- 0.23, N = 3SE +/- 0.32, N = 3383.6383.4383.4383.5
OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupTwofish-XTS 256b DecryptionRun 1aRun 1bRun 2Run 370140210280350Min: 383.4 / Avg: 383.57 / Max: 383.7Min: 383.1 / Avg: 383.4 / Max: 383.7Min: 383 / Avg: 383.37 / Max: 383.8Min: 382.9 / Avg: 383.5 / Max: 384

OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupTwofish-XTS 256b EncryptionRun 1aRun 1bRun 2Run 380160240320400SE +/- 0.20, N = 3SE +/- 0.20, N = 3SE +/- 0.03, N = 3SE +/- 0.33, N = 3382.8382.4382.5382.7
OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupTwofish-XTS 256b EncryptionRun 1aRun 1bRun 2Run 370140210280350Min: 382.4 / Avg: 382.77 / Max: 383.1Min: 382.1 / Avg: 382.43 / Max: 382.8Min: 382.5 / Avg: 382.53 / Max: 382.6Min: 382.1 / Avg: 382.73 / Max: 383.2

OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupSerpent-XTS 256b DecryptionRun 1aRun 1bRun 2Run 3150300450600750SE +/- 0.50, N = 3SE +/- 0.43, N = 3SE +/- 0.50, N = 3SE +/- 0.50, N = 3682.1682.1682.6680.6
OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupSerpent-XTS 256b DecryptionRun 1aRun 1bRun 2Run 3120240360480600Min: 681.6 / Avg: 682.1 / Max: 683.1Min: 681.5 / Avg: 682.07 / Max: 682.9Min: 681.6 / Avg: 682.6 / Max: 683.2Min: 680 / Avg: 680.6 / Max: 681.6

OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupSerpent-XTS 256b EncryptionRun 1aRun 1bRun 2Run 3150300450600750SE +/- 0.34, N = 3SE +/- 0.37, N = 3SE +/- 0.20, N = 3SE +/- 0.64, N = 3696.2696.3696.3696.9
OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupSerpent-XTS 256b EncryptionRun 1aRun 1bRun 2Run 3120240360480600Min: 695.5 / Avg: 696.17 / Max: 696.6Min: 695.8 / Avg: 696.27 / Max: 697Min: 696.1 / Avg: 696.3 / Max: 696.7Min: 695.8 / Avg: 696.93 / Max: 698

OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupAES-XTS 256b DecryptionRun 1aRun 1bRun 2Run 37001400210028003500SE +/- 6.34, N = 3SE +/- 8.41, N = 3SE +/- 2.23, N = 3SE +/- 11.92, N = 33199.43190.63197.83193.6
OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupAES-XTS 256b DecryptionRun 1aRun 1bRun 2Run 36001200180024003000Min: 3186.9 / Avg: 3199.37 / Max: 3207.6Min: 3175.6 / Avg: 3190.57 / Max: 3204.7Min: 3195.2 / Avg: 3197.77 / Max: 3202.2Min: 3177.9 / Avg: 3193.63 / Max: 3217

OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupAES-XTS 256b EncryptionRun 1aRun 1bRun 2Run 37001400210028003500SE +/- 5.12, N = 3SE +/- 5.08, N = 3SE +/- 7.29, N = 3SE +/- 13.83, N = 33187.53188.33188.13172.0
OpenBenchmarking.orgMiB/s, More Is BetterCryptsetupAES-XTS 256b EncryptionRun 1aRun 1bRun 2Run 36001200180024003000Min: 3179.6 / Avg: 3187.5 / Max: 3197.1Min: 3178.6 / Avg: 3188.27 / Max: 3195.8Min: 3180.3 / Avg: 3188.13 / Max: 3202.7Min: 3155.7 / Avg: 3172 / Max: 3199.5

OpenBenchmarking.orgIterations Per Second, More Is BetterCryptsetupPBKDF2-whirlpoolRun 1aRun 1bRun 2Run 3140K280K420K560K700KSE +/- 511.00, N = 3SE +/- 511.00, N = 3633963633198633709633709
OpenBenchmarking.orgIterations Per Second, More Is BetterCryptsetupPBKDF2-whirlpoolRun 1aRun 1bRun 2Run 3110K220K330K440K550KMin: 633198 / Avg: 633709 / Max: 634731Min: 633198 / Avg: 633709 / Max: 634731

OpenBenchmarking.orgIterations Per Second, More Is BetterCryptsetupPBKDF2-sha512Run 1aRun 1bRun 2Run 3300K600K900K1200K1500KSE +/- 718.67, N = 3SE +/- 718.67, N = 3SE +/- 718.67, N = 31502976150297615036941504413
OpenBenchmarking.orgIterations Per Second, More Is BetterCryptsetupPBKDF2-sha512Run 1aRun 1bRun 2Run 3300K600K900K1200K1500KMin: 1502257 / Avg: 1502975.67 / Max: 1504413Min: 1502257 / Avg: 1502975.67 / Max: 1504413Min: 1502257 / Avg: 1503694.33 / Max: 1504413

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgIterations/Sec, More Is BetterCoremark 1.0CoreMark Size 666 - Iterations Per SecondRun 1aRun 1bRun 2Run 350K100K150K200K250KSE +/- 2901.13, N = 3SE +/- 3067.94, N = 3SE +/- 3116.61, N = 3SE +/- 2898.24, N = 3217568.62217895.75216626.52218109.251. (CC) gcc options: -O2 -lrt
OpenBenchmarking.orgIterations/Sec, More Is BetterCoremark 1.0CoreMark Size 666 - Iterations Per SecondRun 1aRun 1bRun 2Run 340K80K120K160K200KMin: 214585.08 / Avg: 217568.62 / Max: 223370.1Min: 214621.06 / Avg: 217895.75 / Max: 224026.88Min: 213198.31 / Avg: 216626.52 / Max: 222848.98Min: 214570.69 / Avg: 218109.25 / Max: 223854.491. (CC) gcc options: -O2 -lrt

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterDarktable 3.2.1Test: Masskrug - Acceleration: CPU-onlyRun 1bRun 2Run 31.32052.6413.96155.2826.6025SE +/- 0.038, N = 15SE +/- 0.040, N = 13SE +/- 0.038, N = 155.8255.8095.869
OpenBenchmarking.orgSeconds, Fewer Is BetterDarktable 3.2.1Test: Masskrug - Acceleration: CPU-onlyRun 1bRun 2Run 3246810Min: 5.46 / Avg: 5.82 / Max: 5.91Min: 5.47 / Avg: 5.81 / Max: 5.89Min: 5.46 / Avg: 5.87 / Max: 5.95

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.
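The MB/s figures below come from timing a full compress pass and a full decompress pass over the sample file. A hedged sketch of that methodology, using Python's stdlib zlib as a stand-in codec (LZ4 itself is not in the standard library, and the synthetic input here stands in for the Ubuntu ISO):

```python
import time
import zlib

# Synthetic, highly compressible input as a stand-in for the sample file (~5 MB).
data = (b"phoronix-test-suite " * 4096) * 64

start = time.perf_counter()
compressed = zlib.compress(data, level=1)   # level 1 as a rough fast-preset analog
compress_mbps = len(data) / (time.perf_counter() - start) / 1e6

start = time.perf_counter()
restored = zlib.decompress(compressed)
decompress_mbps = len(data) / (time.perf_counter() - start) / 1e6

print(f"compression:   {compress_mbps:.1f} MB/s")
print(f"decompression: {decompress_mbps:.1f} MB/s")
```

The same bytes-over-seconds computation underlies the Zstd results later in this file; only the codec differs.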

OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 1 - Decompression SpeedRun 1aRun 1bRun 2Run 32K4K6K8K10KSE +/- 0.47, N = 3SE +/- 4.50, N = 3SE +/- 40.85, N = 3SE +/- 4.52, N = 39725.09788.69639.79684.01. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 1 - Decompression SpeedRun 1aRun 1bRun 2Run 32K4K6K8K10KMin: 9724.3 / Avg: 9725 / Max: 9725.9Min: 9781.8 / Avg: 9788.6 / Max: 9797.1Min: 9558 / Avg: 9639.67 / Max: 9682.2Min: 9676.9 / Avg: 9684 / Max: 9692.41. (CC) gcc options: -O3

OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 1 - Compression SpeedRun 1aRun 1bRun 2Run 317003400510068008500SE +/- 9.83, N = 3SE +/- 17.24, N = 3SE +/- 10.38, N = 3SE +/- 4.04, N = 37736.937772.677643.087706.801. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 1 - Compression SpeedRun 1aRun 1bRun 2Run 313002600390052006500Min: 7717.68 / Avg: 7736.93 / Max: 7750Min: 7738.25 / Avg: 7772.67 / Max: 7791.45Min: 7622.54 / Avg: 7643.08 / Max: 7656.02Min: 7699.3 / Avg: 7706.8 / Max: 7713.141. (CC) gcc options: -O3

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgScore, More Is BetterPHPBench 0.8.1PHP Benchmark SuiteRun 1bRun 2Run 3140K280K420K560K700KSE +/- 579.44, N = 3SE +/- 1157.47, N = 3SE +/- 494.91, N = 3655005651255654738
OpenBenchmarking.orgScore, More Is BetterPHPBench 0.8.1PHP Benchmark SuiteRun 1bRun 2Run 3110K220K330K440K550KMin: 653992 / Avg: 655004.67 / Max: 655999Min: 649381 / Avg: 651255 / Max: 653369Min: 653753 / Avg: 654738 / Max: 655315

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.4.5Compression Level: 3Run 1aRun 1bRun 2Run 37001400210028003500SE +/- 10.35, N = 3SE +/- 10.95, N = 3SE +/- 12.68, N = 3SE +/- 12.30, N = 33302.93304.53300.53294.01. (CC) gcc options: -O3 -pthread -lz -llzma
OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.4.5Compression Level: 3Run 1aRun 1bRun 2Run 36001200180024003000Min: 3282.2 / Avg: 3302.9 / Max: 3313.6Min: 3282.9 / Avg: 3304.5 / Max: 3318.4Min: 3275.6 / Avg: 3300.53 / Max: 3317Min: 3279.9 / Avg: 3294 / Max: 3318.51. (CC) gcc options: -O3 -pthread -lz -llzma

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is Betterrav1e 0.4 AlphaSpeed: 10Run 1aRun 1bRun 2Run 30.68851.3772.06552.7543.4425SE +/- 0.005, N = 3SE +/- 0.007, N = 3SE +/- 0.004, N = 3SE +/- 0.003, N = 33.0583.0573.0573.060
OpenBenchmarking.orgFrames Per Second, More Is Betterrav1e 0.4 AlphaSpeed: 10Run 1aRun 1bRun 2Run 3246810Min: 3.05 / Avg: 3.06 / Max: 3.07Min: 3.05 / Avg: 3.06 / Max: 3.07Min: 3.05 / Avg: 3.06 / Max: 3.06Min: 3.06 / Avg: 3.06 / Max: 3.07

WavPack Audio Encoding

This test times how long it takes to encode a sample WAV file to WavPack format with very high quality settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterWavPack Audio Encoding 5.3WAV To WavPackRun 1bRun 2Run 348121620SE +/- 0.03, N = 5SE +/- 0.00, N = 5SE +/- 0.00, N = 517.5817.5417.561. (CXX) g++ options: -rdynamic
OpenBenchmarking.orgSeconds, Fewer Is BetterWavPack Audio Encoding 5.3WAV To WavPackRun 1bRun 2Run 348121620Min: 17.55 / Avg: 17.58 / Max: 17.71Min: 17.53 / Avg: 17.54 / Max: 17.56Min: 17.55 / Avg: 17.56 / Max: 17.571. (CXX) g++ options: -rdynamic

RNNoise

RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26-minute 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterRNNoise 2020-06-28Run 1aRun 1bRun 2Run 3714212835SE +/- 0.01, N = 3SE +/- 0.03, N = 3SE +/- 0.05, N = 3SE +/- 0.01, N = 328.8628.9228.8628.891. (CC) gcc options: -O2 -pedantic -fvisibility=hidden
OpenBenchmarking.orgSeconds, Fewer Is BetterRNNoise 2020-06-28Run 1aRun 1bRun 2Run 3612182430Min: 28.83 / Avg: 28.86 / Max: 28.88Min: 28.87 / Avg: 28.92 / Max: 28.97Min: 28.8 / Avg: 28.86 / Max: 28.97Min: 28.88 / Avg: 28.89 / Max: 28.911. (CC) gcc options: -O2 -pedantic -fvisibility=hidden

LevelDB

LevelDB is a key-value storage library developed by Google that can use Snappy for data compression, among other modern features. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMicroseconds Per Op, Fewer Is BetterLevelDB 1.22Benchmark: Random DeleteRun 1Run 1aRun 1bRun 2Run 31224364860SE +/- 0.73, N = 3SE +/- 0.32, N = 3SE +/- 0.51, N = 3SE +/- 0.62, N = 4SE +/- 0.67, N = 353.8353.7352.6553.5552.781. (CXX) g++ options: -O3 -lsnappy -lpthread
OpenBenchmarking.orgMicroseconds Per Op, Fewer Is BetterLevelDB 1.22Benchmark: Random DeleteRun 1Run 1aRun 1bRun 2Run 31122334455Min: 52.95 / Avg: 53.83 / Max: 55.27Min: 53.12 / Avg: 53.73 / Max: 54.2Min: 51.87 / Avg: 52.65 / Max: 53.61Min: 52.65 / Avg: 53.55 / Max: 55.38Min: 51.44 / Avg: 52.78 / Max: 53.491. (CXX) g++ options: -O3 -lsnappy -lpthread

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is Betterdav1d 0.8.1Video Input: Chimera 1080pRun 1aRun 1bRun 2Run 3120240360480600SE +/- 6.72, N = 4SE +/- 7.27, N = 3SE +/- 3.01, N = 3SE +/- 0.21, N = 3554.36555.38542.62544.85MIN: 346.4 / MAX: 874.76MIN: 346.53 / MAX: 857.77MIN: 345.88 / MAX: 854.1MIN: 343.78 / MAX: 874.51. (CC) gcc options: -pthread
OpenBenchmarking.orgFPS, More Is Betterdav1d 0.8.1Video Input: Chimera 1080pRun 1aRun 1bRun 2Run 3100200300400500Min: 544.19 / Avg: 554.36 / Max: 574.1Min: 546.5 / Avg: 555.38 / Max: 569.79Min: 536.61 / Avg: 542.62 / Max: 545.69Min: 544.42 / Avg: 544.85 / Max: 545.091. (CC) gcc options: -pthread

Unpacking Firefox

This simple test profile measures how long it takes to extract the .tar.xz source package of the Mozilla Firefox Web Browser. Learn more via the OpenBenchmarking.org test page.
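The extraction test is conceptually just a timed tar -xJf. A minimal hedged sketch with Python's tarfile, building a small .tar.xz archive in memory as a stand-in for the real Firefox source tarball (member names and counts are illustrative):

```python
import io
import tarfile
import time

# Build a small .tar.xz archive in memory (stand-in for the Firefox source archive).
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:xz") as tar:
    for i in range(50):
        payload = (f"file {i}\n" * 100).encode()
        info = tarfile.TarInfo(name=f"src/file{i}.txt")
        info.size = len(payload)
        tar.addfile(info, io.BytesIO(payload))

# Time the extraction pass: read every member back out of the archive.
buf.seek(0)
start = time.perf_counter()
names = []
with tarfile.open(fileobj=buf, mode="r:xz") as tar:
    for member in tar.getmembers():
        names.append(member.name)
        tar.extractfile(member).read()
elapsed = time.perf_counter() - start

print(f"extracted {len(names)} members in {elapsed:.4f}s")
```

The real test extracts to disk, so its ~20.7 s result reflects xz decompression plus filesystem write throughput, not just decode speed.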

OpenBenchmarking.orgSeconds, Fewer Is BetterUnpacking Firefox 84.0Extracting: firefox-84.0.source.tar.xzRun 1bRun 2Run 3510152025SE +/- 0.04, N = 4SE +/- 0.03, N = 4SE +/- 0.06, N = 420.7020.7320.73
OpenBenchmarking.orgSeconds, Fewer Is BetterUnpacking Firefox 84.0Extracting: firefox-84.0.source.tar.xzRun 1bRun 2Run 3510152025Min: 20.61 / Avg: 20.7 / Max: 20.8Min: 20.68 / Avg: 20.73 / Max: 20.82Min: 20.63 / Avg: 20.73 / Max: 20.88

Crafty

This is a performance test of Crafty, an advanced open-source chess engine. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNodes Per Second, More Is BetterCrafty 25.2Elapsed TimeRun 1aRun 1bRun 2Run 31.5M3M4.5M6M7.5MSE +/- 10232.00, N = 3SE +/- 12801.08, N = 3SE +/- 11588.40, N = 3SE +/- 10087.47, N = 371830047013746693557971941871. (CC) gcc options: -pthread -lstdc++ -fprofile-use -lm
OpenBenchmarking.orgNodes Per Second, More Is BetterCrafty 25.2Elapsed TimeRun 1aRun 1bRun 2Run 31.2M2.4M3.6M4.8M6MMin: 7165752 / Avg: 7183004.33 / Max: 7201162Min: 6991730 / Avg: 7013745.67 / Max: 7036071Min: 6913319 / Avg: 6935579.33 / Max: 6952298Min: 7181851 / Avg: 7194186.67 / Max: 72141801. (CC) gcc options: -pthread -lstdc++ -fprofile-use -lm

LevelDB

LevelDB is a key-value storage library developed by Google that can use Snappy for data compression, among other modern features. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMicroseconds Per Op, Fewer Is BetterLevelDB 1.22Benchmark: Sequential FillRun 1Run 1aRun 1bRun 2Run 31224364860SE +/- 0.41, N = 3SE +/- 0.16, N = 3SE +/- 0.42, N = 3SE +/- 0.46, N = 3SE +/- 0.30, N = 353.2552.9953.1953.5052.821. (CXX) g++ options: -O3 -lsnappy -lpthread
OpenBenchmarking.orgMicroseconds Per Op, Fewer Is BetterLevelDB 1.22Benchmark: Sequential FillRun 1Run 1aRun 1bRun 2Run 31122334455Min: 52.76 / Avg: 53.25 / Max: 54.06Min: 52.83 / Avg: 52.99 / Max: 53.31Min: 52.38 / Avg: 53.19 / Max: 53.77Min: 52.59 / Avg: 53.5 / Max: 53.97Min: 52.35 / Avg: 52.82 / Max: 53.371. (CXX) g++ options: -O3 -lsnappy -lpthread

LevelDB 1.22 - Benchmark: Sequential Fill (OpenBenchmarking.org)
MB/s, More Is Better
Run 1:  33.2 | SE +/- 0.25, N = 3 | Min: 32.7 / Avg: 33.2 / Max: 33.5
Run 1a: 33.4 | SE +/- 0.10, N = 3 | Min: 33.2 / Avg: 33.4 / Max: 33.5
Run 1b: 33.3 | SE +/- 0.27, N = 3 | Min: 32.9 / Avg: 33.27 / Max: 33.8
Run 2:  33.1 | SE +/- 0.30, N = 3 | Min: 32.8 / Avg: 33.1 / Max: 33.7
Run 3:  33.5 | SE +/- 0.17, N = 3 | Min: 33.2 / Avg: 33.5 / Max: 33.8
1. (CXX) g++ options: -O3 -lsnappy -lpthread
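LevelDB reports the same fill workload both as microseconds per operation and as MB/s; the two are tied together by the bytes written per operation. A minimal sketch of the conversion — the ~1768-byte entry size here is an assumption back-derived from the figures above, not a documented db_bench setting:

```python
def throughput_mb_s(us_per_op, bytes_per_op):
    """Convert microseconds/op into MB/s for a given entry size."""
    ops_per_sec = 1_000_000 / us_per_op
    return ops_per_sec * bytes_per_op / 1_000_000  # bytes/s -> MB/s

# With a hypothetical ~1768-byte entry, 53.25 us/op lands near the
# ~33 MB/s reported above for the same runs.
print(round(throughput_mb_s(53.25, 1768), 1))  # → 33.2
```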

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3 - Target: CPU - Model: MobileNet v2 (OpenBenchmarking.org)
ms, Fewer Is Better
Run 1b: 368.68 | SE +/- 1.54, N = 3 | Min: 367.02 / Avg: 368.68 / Max: 371.77 | MIN: 366.7 / MAX: 377.28
Run 2:  367.42 | SE +/- 0.46, N = 3 | Min: 366.5 / Avg: 367.42 / Max: 367.92 | MIN: 366.04 / MAX: 404.26
Run 3:  368.31 | SE +/- 0.90, N = 3 | Min: 367.31 / Avg: 368.31 / Max: 370.12 | MIN: 366.66 / MAX: 370.96
1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

TNN 0.2.3 - Target: CPU - Model: SqueezeNet v1.1 (OpenBenchmarking.org)
ms, Fewer Is Better
Run 1b: 359.68 | SE +/- 0.08, N = 3 | Min: 359.6 / Avg: 359.68 / Max: 359.84 | MIN: 359.47 / MAX: 360.35
Run 2:  359.57 | SE +/- 0.16, N = 3 | Min: 359.37 / Avg: 359.57 / Max: 359.88 | MIN: 359.2 / MAX: 360.48
Run 3:  359.69 | SE +/- 0.12, N = 3 | Min: 359.44 / Avg: 359.69 / Max: 359.84 | MIN: 359.32 / MAX: 360.38
1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

PyBench

This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate as to Python's average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.

PyBench 2018-02-16 - Total For Average Test Times (OpenBenchmarking.org)
Milliseconds, Fewer Is Better
Run 1b: 1076 | SE +/- 2.40, N = 3 | Min: 1073 / Avg: 1076.33 / Max: 1081
Run 2:  1075 | SE +/- 1.20, N = 3 | Min: 1073 / Avg: 1074.67 / Max: 1077
Run 3:  1074 | SE +/- 1.00, N = 3 | Min: 1072 / Avg: 1074 / Max: 1075
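PyBench's total is a sum of per-test average times. A minimal sketch of that aggregation, with two hypothetical micro-tests standing in for PyBench's actual suite:

```python
import time

def average_time(fn, rounds=5):
    """Average wall-clock time of fn over several rounds, like PyBench's per-test averages."""
    total = 0.0
    for _ in range(rounds):
        start = time.perf_counter()
        fn()
        total += time.perf_counter() - start
    return total / rounds

# Hypothetical micro-tests in the spirit of BuiltinFunctionCalls and NestedForLoops.
def builtin_calls():
    for _ in range(1000):
        len("pybench")

def nested_loops():
    for i in range(100):
        for j in range(100):
            pass

total_ms = sum(average_time(t) for t in (builtin_calls, nested_loops)) * 1000
print(f"total of average test times: {total_ms:.2f} ms")
```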

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9 - Test: LPUSH (OpenBenchmarking.org)
Requests Per Second, More Is Better
Run 1b: 1628075.08 | SE +/- 7390.99, N = 3 | Min: 1613367.75 / Avg: 1628075.08 / Max: 1636713.62
Run 2:  1631778.39 | SE +/- 13375.10, N = 14 | Min: 1468663.75 / Avg: 1631778.39 / Max: 1680833.5
Run 3:  1622768.13 | SE +/- 10277.46, N = 3 | Min: 1602717.88 / Avg: 1622768.13 / Max: 1636713.62
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Monkey Audio Encoding

This test times how long it takes to encode a sample WAV file to Monkey's Audio APE format. Learn more via the OpenBenchmarking.org test page.

Monkey Audio Encoding 3.99.6 - WAV To APE (OpenBenchmarking.org)
Seconds, Fewer Is Better
Run 1a: 13.78 | SE +/- 0.02, N = 5 | Min: 13.73 / Avg: 13.78 / Max: 13.86
Run 1b: 13.75 | SE +/- 0.03, N = 5 | Min: 13.67 / Avg: 13.75 / Max: 13.85
Run 2:  13.77 | SE +/- 0.02, N = 5 | Min: 13.71 / Avg: 13.77 / Max: 13.82
Run 3:  13.81 | SE +/- 0.03, N = 5 | Min: 13.71 / Avg: 13.81 / Max: 13.89
1. (CXX) g++ options: -O3 -pedantic -rdynamic -lrt

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (OpenBenchmarking.org)
ms, Fewer Is Better
Run 1a: 2.53895 | SE +/- 0.01996, N = 9 | Min: 2.41 / Avg: 2.54 / Max: 2.62 | MIN: 2.25
Run 1b: 2.44305 | SE +/- 0.03195, N = 3 | Min: 2.38 / Avg: 2.44 / Max: 2.48 | MIN: 2.2
Run 2:  2.39870 | SE +/- 0.01703, N = 12 | Min: 2.22 / Avg: 2.4 / Max: 2.44 | MIN: 2.12
Run 3:  2.53159 | SE +/- 0.02750, N = 5 | Min: 2.42 / Avg: 2.53 / Max: 2.57 | MIN: 2.27
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

librsvg

RSVG/librsvg is an SVG vector graphics library. This test profile times how long it takes to complete various operations by rsvg-convert. Learn more via the OpenBenchmarking.org test page.

librsvg - Operation: SVG Files To PNG (OpenBenchmarking.org)
Seconds, Fewer Is Better
Run 1b: 21.23 | SE +/- 0.08, N = 3 | Min: 21.15 / Avg: 21.23 / Max: 21.39
Run 2:  21.18 | SE +/- 0.06, N = 3 | Min: 21.09 / Avg: 21.18 / Max: 21.28
Run 3:  21.31 | SE +/- 0.03, N = 3 | Min: 21.26 / Avg: 21.31 / Max: 21.35
1. rsvg-convert version 2.50.1

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless (OpenBenchmarking.org)
Encode Time - Seconds, Fewer Is Better
Run 1a: 20.02 | SE +/- 0.01, N = 3 | Min: 20.01 / Avg: 20.02 / Max: 20.04
Run 1b: 20.07 | SE +/- 0.02, N = 3 | Min: 20.04 / Avg: 20.07 / Max: 20.1
Run 2:  19.88 | SE +/- 0.00, N = 3 | Min: 19.87 / Avg: 19.88 / Max: 19.89
Run 3:  19.66 | SE +/- 0.06, N = 3 | Min: 19.59 / Avg: 19.66 / Max: 19.79
1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

Opus Codec Encoding

Opus is an open audio codec. Opus is a lossy audio compression format designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.

Opus Codec Encoding 1.3.1 - WAV To Opus Encode (OpenBenchmarking.org)
Seconds, Fewer Is Better
Run 1a: 10.23 | SE +/- 0.01, N = 5 | Min: 10.21 / Avg: 10.23 / Max: 10.29
Run 1b: 10.21 | SE +/- 0.02, N = 5 | Min: 10.19 / Avg: 10.21 / Max: 10.28
Run 2:  10.21 | SE +/- 0.01, N = 5 | Min: 10.19 / Avg: 10.21 / Max: 10.26
Run 3:  10.24 | SE +/- 0.02, N = 5 | Min: 10.2 / Avg: 10.23 / Max: 10.3
1. (CXX) g++ options: -fvisibility=hidden -logg -lm

LevelDB

LevelDB is a key-value storage library developed by Google that supports Snappy data compression, among other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.22 - Benchmark: Hot Read (OpenBenchmarking.org)
Microseconds Per Op, Fewer Is Better
Run 1:  9.436 | SE +/- 0.130, N = 15 | Min: 8.24 / Avg: 9.44 / Max: 9.94
Run 1a: 8.109 | SE +/- 0.096, N = 3 | Min: 7.97 / Avg: 8.11 / Max: 8.29
Run 1b: 8.405 | SE +/- 0.101, N = 3 | Min: 8.29 / Avg: 8.4 / Max: 8.61
Run 2:  8.172 | SE +/- 0.041, N = 3 | Min: 8.13 / Avg: 8.17 / Max: 8.25
Run 3:  8.319 | SE +/- 0.072, N = 3 | Min: 8.18 / Avg: 8.32 / Max: 8.42
1. (CXX) g++ options: -O3 -lsnappy -lpthread

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.2.1 - Test: Boat - Acceleration: CPU-only (OpenBenchmarking.org)
Seconds, Fewer Is Better
Run 1b: 14.66 | SE +/- 0.15, N = 3 | Min: 14.36 / Avg: 14.66 / Max: 14.82
Run 2:  14.68 | SE +/- 0.14, N = 3 | Min: 14.41 / Avg: 14.68 / Max: 14.83
Run 3:  14.73 | SE +/- 0.15, N = 3 | Min: 14.42 / Avg: 14.73 / Max: 14.91

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (OpenBenchmarking.org)
ms, Fewer Is Better
Run 1a: 4.25717 | SE +/- 0.03344, N = 3 | Min: 4.19 / Avg: 4.26 / Max: 4.3 | MIN: 3.34
Run 1b: 4.28975 | SE +/- 0.00211, N = 3 | Min: 4.29 / Avg: 4.29 / Max: 4.29 | MIN: 3.69
Run 2:  4.27444 | SE +/- 0.04125, N = 6 | Min: 4.07 / Avg: 4.27 / Max: 4.33 | MIN: 3.35
Run 3:  4.30114 | SE +/- 0.00705, N = 3 | Min: 4.29 / Avg: 4.3 / Max: 4.31 | MIN: 3.34
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (OpenBenchmarking.org)
ms, Fewer Is Better
Run 1a: 8.80723 | SE +/- 0.07125, N = 3 | Min: 8.67 / Avg: 8.81 / Max: 8.9 | MIN: 7.81
Run 1b: 9.05963 | SE +/- 0.10818, N = 15 | Min: 8.01 / Avg: 9.06 / Max: 9.26 | MIN: 7.75
Run 2:  9.15911 | SE +/- 0.06657, N = 15 | Min: 8.4 / Avg: 9.16 / Max: 9.37 | MIN: 7.84
Run 3:  9.10087 | SE +/- 0.09564, N = 15 | Min: 8.03 / Avg: 9.1 / Max: 9.3 | MIN: 7.76
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (OpenBenchmarking.org)
ms, Fewer Is Better
Run 1a: 3.96535 | SE +/- 0.03753, N = 3 | Min: 3.9 / Avg: 3.97 / Max: 4.03 | MIN: 3.78
Run 1b: 3.87785 | SE +/- 0.00904, N = 3 | Min: 3.86 / Avg: 3.88 / Max: 3.89 | MIN: 3.76
Run 2:  3.88208 | SE +/- 0.01159, N = 3 | Min: 3.87 / Avg: 3.88 / Max: 3.9 | MIN: 3.76
Run 3:  3.86362 | SE +/- 0.01449, N = 3 | Min: 3.84 / Avg: 3.86 / Max: 3.89 | MIN: 3.72
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9 - Test: GET (OpenBenchmarking.org)
Requests Per Second, More Is Better
Run 1b: 2508965.10 | SE +/- 27508.87, N = 4 | Min: 2439102.5 / Avg: 2508965.13 / Max: 2571105.5
Run 2:  2423777.92 | SE +/- 13502.97, N = 3 | Min: 2398695.5 / Avg: 2423777.92 / Max: 2444987.75
Run 3:  2400719.00 | SE +/- 26577.29, N = 3 | Min: 2353167 / Avg: 2400718.67 / Max: 2445066
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Redis 6.0.9 - Test: SET (OpenBenchmarking.org)
Requests Per Second, More Is Better
Run 1b: 1877700.88 | SE +/- 13443.79, N = 3 | Min: 1852088.88 / Avg: 1877700.88 / Max: 1897593.88
Run 2:  1909663.46 | SE +/- 3257.59, N = 3 | Min: 1904762 / Avg: 1909663.46 / Max: 1915831.38
Run 3:  1896603.71 | SE +/- 6740.96, N = 3 | Min: 1883239.12 / Avg: 1896603.71 / Max: 1904823
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Redis 6.0.9 - Test: LPOP (OpenBenchmarking.org)
Requests Per Second, More Is Better
Run 1b: 2708105.25 | SE +/- 2271.08, N = 3 | Min: 2703567.5 / Avg: 2708105.25 / Max: 2710547.5
Run 2:  1651323.17 | SE +/- 9493.87, N = 3 | Min: 1634039.25 / Avg: 1651323.17 / Max: 1666773.25
Run 3:  1631529.42 | SE +/- 10611.88, N = 3 | Min: 1610306 / Avg: 1631529.42 / Max: 1642246.25
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Redis 6.0.9 - Test: SADD (OpenBenchmarking.org)
Requests Per Second, More Is Better
Run 1b: 2122214.00 | SE +/- 16479.98, N = 3 | Min: 2105465.25 / Avg: 2122214 / Max: 2155172.5
Run 2:  2163184.92 | SE +/- 6186.25, N = 3 | Min: 2150813 / Avg: 2163184.92 / Max: 2169475
Run 3:  2120355.17 | SE +/- 13415.80, N = 3 | Min: 2100840.25 / Avg: 2120355.17 / Max: 2146060.25
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
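The requests-per-second metric is completed requests divided by elapsed wall time. A sketch of that arithmetic, using a plain Python dict as a stand-in for the Redis server (the real test drives a running redis-server over the network):

```python
import time

def measure_rps(op, n):
    """Issue n operations and return requests per second."""
    start = time.perf_counter()
    for i in range(n):
        op(i)
    elapsed = time.perf_counter() - start
    return n / elapsed

# A dict SET stand-in; real numbers depend on the server and client pipeline depth.
store = {}
rps = measure_rps(lambda i: store.__setitem__(f"key:{i}", "value"), 100_000)
print(f"{rps:,.0f} requests/sec")
```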

Timed MAFFT Alignment

This test performs an alignment of 100 pyruvate decarboxylase sequences. Learn more via the OpenBenchmarking.org test page.

Timed MAFFT Alignment 7.471 - Multiple Sequence Alignment - LSU RNA (OpenBenchmarking.org)
Seconds, Fewer Is Better
Run 1a: 10.26 | SE +/- 0.06, N = 3 | Min: 10.17 / Avg: 10.26 / Max: 10.38
Run 1b: 10.17 | SE +/- 0.08, N = 3 | Min: 10.03 / Avg: 10.17 / Max: 10.29
Run 2:  10.21 | SE +/- 0.05, N = 3 | Min: 10.13 / Avg: 10.21 / Max: 10.3
Run 3:  10.12 | SE +/- 0.05, N = 3 | Min: 10.02 / Avg: 10.12 / Max: 10.2
1. (CC) gcc options: -std=c99 -O3 -lm -lpthread

LevelDB

LevelDB is a key-value storage library developed by Google that supports Snappy data compression, among other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.22 - Benchmark: Overwrite (OpenBenchmarking.org)
Microseconds Per Op, Fewer Is Better
Run 1:  53.06 | SE +/- 0.56, N = 3 | Min: 51.96 / Avg: 53.06 / Max: 53.77
Run 1a: 52.39 | SE +/- 0.34, N = 3 | Min: 51.99 / Avg: 52.39 / Max: 53.07
Run 1b: 52.36 | SE +/- 0.26, N = 3 | Min: 51.99 / Avg: 52.36 / Max: 52.86
Run 2:  54.65 | SE +/- 0.54, N = 3 | Min: 53.58 / Avg: 54.65 / Max: 55.22
Run 3:  53.81 | SE +/- 0.47, N = 15 | Min: 51.48 / Avg: 53.81 / Max: 56.16
1. (CXX) g++ options: -O3 -lsnappy -lpthread

LevelDB 1.22 - Benchmark: Overwrite (OpenBenchmarking.org)
MB/s, More Is Better
Run 1:  33.3 | SE +/- 0.34, N = 3 | Min: 32.9 / Avg: 33.33 / Max: 34
Run 1a: 33.8 | SE +/- 0.23, N = 3 | Min: 33.3 / Avg: 33.77 / Max: 34
Run 1b: 33.8 | SE +/- 0.15, N = 3 | Min: 33.5 / Avg: 33.8 / Max: 34
Run 2:  32.4 | SE +/- 0.32, N = 3 | Min: 32 / Avg: 32.37 / Max: 33
Run 3:  32.9 | SE +/- 0.29, N = 15 | Min: 31.5 / Avg: 32.91 / Max: 34.4
1. (CXX) g++ options: -O3 -lsnappy -lpthread

LevelDB 1.22 - Benchmark: Random Fill (OpenBenchmarking.org)
Microseconds Per Op, Fewer Is Better
Run 1:  52.75 | SE +/- 0.45, N = 15 | Min: 51.43 / Avg: 52.75 / Max: 56.08
Run 1a: 52.05 | SE +/- 0.17, N = 3 | Min: 51.73 / Avg: 52.05 / Max: 52.31
Run 1b: 52.56 | SE +/- 0.21, N = 3 | Min: 52.26 / Avg: 52.56 / Max: 52.97
Run 2:  52.28 | SE +/- 0.19, N = 3 | Min: 51.91 / Avg: 52.28 / Max: 52.51
Run 3:  52.01 | SE +/- 0.06, N = 3 | Min: 51.94 / Avg: 52.01 / Max: 52.12
1. (CXX) g++ options: -O3 -lsnappy -lpthread

LevelDB 1.22 - Benchmark: Random Fill (OpenBenchmarking.org)
MB/s, More Is Better
Run 1:  33.6 | SE +/- 0.28, N = 15 | Min: 31.5 / Avg: 33.57 / Max: 34.4
Run 1a: 34.0 | SE +/- 0.12, N = 3 | Min: 33.8 / Avg: 33.97 / Max: 34.2
Run 1b: 33.6 | SE +/- 0.12, N = 3 | Min: 33.4 / Avg: 33.63 / Max: 33.8
Run 2:  33.8 | SE +/- 0.13, N = 3 | Min: 33.7 / Avg: 33.83 / Max: 34.1
Run 3:  34.0 | SE +/- 0.06, N = 3 | Min: 33.9 / Avg: 34 / Max: 34.1
1. (CXX) g++ options: -O3 -lsnappy -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (OpenBenchmarking.org)
ms, Fewer Is Better
Run 1a: 4.20501 | SE +/- 0.04009, N = 3 | Min: 4.13 / Avg: 4.21 / Max: 4.26 | MIN: 3.68
Run 1b: 4.41264 | SE +/- 0.04178, N = 15 | Min: 3.92 / Avg: 4.41 / Max: 4.62 | MIN: 3.75
Run 2:  4.26255 | SE +/- 0.03991, N = 3 | Min: 4.18 / Avg: 4.26 / Max: 4.31 | MIN: 3.76
Run 3:  4.35330 | SE +/- 0.04826, N = 15 | Min: 3.88 / Avg: 4.35 / Max: 4.46 | MIN: 3.64
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12 - Settings: UASTC Level 0 (OpenBenchmarking.org)
Seconds, Fewer Is Better
Run 1a: 9.252 | SE +/- 0.015, N = 3 | Min: 9.24 / Avg: 9.25 / Max: 9.28
Run 1b: 9.260 | SE +/- 0.010, N = 3 | Min: 9.25 / Avg: 9.26 / Max: 9.28
Run 2:  9.262 | SE +/- 0.016, N = 3 | Min: 9.24 / Avg: 9.26 / Max: 9.29
Run 3:  9.258 | SE +/- 0.012, N = 3 | Min: 9.24 / Avg: 9.26 / Max: 9.28
1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (OpenBenchmarking.org)
ms, Fewer Is Better
Run 1a: 8.61801 | SE +/- 0.09086, N = 3 | Min: 8.44 / Avg: 8.62 / Max: 8.71 | MIN: 8.34
Run 1b: 8.65721 | SE +/- 0.08146, N = 3 | Min: 8.49 / Avg: 8.66 / Max: 8.74 | MIN: 8.36
Run 2:  8.66155 | SE +/- 0.08943, N = 3 | Min: 8.48 / Avg: 8.66 / Max: 8.76 | MIN: 8.37
Run 3:  8.61730 | SE +/- 0.10734, N = 3 | Min: 8.4 / Avg: 8.62 / Max: 8.73 | MIN: 8.3
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Highest Compression (OpenBenchmarking.org)
Encode Time - Seconds, Fewer Is Better
Run 1a: 8.213 | SE +/- 0.011, N = 3 | Min: 8.2 / Avg: 8.21 / Max: 8.23
Run 1b: 8.209 | SE +/- 0.005, N = 3 | Min: 8.2 / Avg: 8.21 / Max: 8.21
Run 2:  8.221 | SE +/- 0.011, N = 3 | Min: 8.2 / Avg: 8.22 / Max: 8.24
Run 3:  8.211 | SE +/- 0.001, N = 3 | Min: 8.21 / Avg: 8.21 / Max: 8.21
1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

Unpacking The Linux Kernel

This test measures how long it takes to extract the .tar.xz Linux kernel package. Learn more via the OpenBenchmarking.org test page.

Unpacking The Linux Kernel - linux-4.15.tar.xz (OpenBenchmarking.org)
Seconds, Fewer Is Better
Run 1:  6.151 | SE +/- 0.003, N = 4 | Min: 6.14 / Avg: 6.15 / Max: 6.16
Run 1a: 6.176 | SE +/- 0.003, N = 4 | Min: 6.17 / Avg: 6.18 / Max: 6.18
Run 1b: 6.167 | SE +/- 0.003, N = 4 | Min: 6.16 / Avg: 6.17 / Max: 6.17
Run 2:  6.162 | SE +/- 0.012, N = 4 | Min: 6.14 / Avg: 6.16 / Max: 6.19
Run 3:  6.165 | SE +/- 0.009, N = 4 | Min: 6.15 / Avg: 6.17 / Max: 6.18
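The same kind of extraction can be timed by hand. A minimal sketch that builds a small .tar.xz and times its extraction with Python's tarfile module (the real test extracts the much larger linux-4.15.tar.xz):

```python
import os
import tarfile
import tempfile
import time

# Build a small .tar.xz, then time its extraction.
with tempfile.TemporaryDirectory() as tmp:
    payload = os.path.join(tmp, "payload.txt")
    with open(payload, "wb") as f:
        f.write(b"x" * 1_000_000)
    archive = os.path.join(tmp, "sample.tar.xz")
    with tarfile.open(archive, "w:xz") as tar:
        tar.add(payload, arcname="payload.txt")

    out = os.path.join(tmp, "out")
    start = time.perf_counter()
    with tarfile.open(archive, "r:xz") as tar:
        tar.extractall(out)
    elapsed = time.perf_counter() - start
    extracted_ok = os.path.exists(os.path.join(out, "payload.txt"))
    print(f"extracted in {elapsed:.3f}s")
```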

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.1 - Video Input: Summer Nature 1080p (OpenBenchmarking.org)
FPS, More Is Better
Run 1a: 509.91 | SE +/- 2.59, N = 3 | Min: 506.13 / Avg: 509.91 / Max: 514.87 | MIN: 450.46 / MAX: 567.47
Run 1b: 507.90 | SE +/- 3.48, N = 3 | Min: 503.67 / Avg: 507.9 / Max: 514.79 | MIN: 445.89 / MAX: 566.4
Run 2:  509.38 | SE +/- 4.86, N = 3 | Min: 502.73 / Avg: 509.38 / Max: 518.85 | MIN: 445.02 / MAX: 568.54
Run 3:  506.87 | SE +/- 4.30, N = 3 | Min: 501.82 / Avg: 506.87 / Max: 515.42 | MIN: 441.71 / MAX: 566.54
1. (CC) gcc options: -pthread

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0 - Preset: Fast (OpenBenchmarking.org)
Seconds, Fewer Is Better
Run 1a: 6.75 | SE +/- 0.00, N = 3 | Min: 6.75 / Avg: 6.75 / Max: 6.76
Run 1b: 6.78 | SE +/- 0.02, N = 3 | Min: 6.76 / Avg: 6.78 / Max: 6.81
Run 2:  6.78 | SE +/- 0.02, N = 3 | Min: 6.74 / Avg: 6.78 / Max: 6.81
Run 3:  6.77 | SE +/- 0.01, N = 3 | Min: 6.75 / Avg: 6.77 / Max: 6.8
1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (OpenBenchmarking.org)
ms, Fewer Is Better
Run 1a: 14.97 | SE +/- 0.02, N = 3 | Min: 14.93 / Avg: 14.97 / Max: 15 | MIN: 14.84
Run 1b: 14.97 | SE +/- 0.01, N = 3 | Min: 14.96 / Avg: 14.97 / Max: 14.99 | MIN: 14.85
Run 2:  15.25 | SE +/- 0.04, N = 3 | Min: 15.2 / Avg: 15.25 / Max: 15.34 | MIN: 14.92
Run 3:  15.08 | SE +/- 0.01, N = 3 | Min: 15.06 / Avg: 15.08 / Max: 15.1 | MIN: 14.85
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

yquake2

This is a test of Yamagi Quake II, an enhanced client for id Software's Quake II with a focus on offline and co-op gameplay. Learn more via the OpenBenchmarking.org test page.

yquake2 7.45 - Renderer: Software CPU - Resolution: 1920 x 1080 (OpenBenchmarking.org)
Frames Per Second, More Is Better
Run 1:  113.4 | SE +/- 0.12, N = 3 | Min: 113.2 / Avg: 113.37 / Max: 113.6
Run 1a: 113.6 | SE +/- 0.07, N = 3 | Min: 113.5 / Avg: 113.63 / Max: 113.7
Run 1b: 113.3 | SE +/- 0.25, N = 3 | Min: 112.8 / Avg: 113.3 / Max: 113.6
Run 2:  113.4 | SE +/- 0.15, N = 3 | Min: 113.2 / Avg: 113.4 / Max: 113.7
Run 3:  111.8 | SE +/- 0.17, N = 3 | Min: 111.5 / Avg: 111.8 / Max: 112.1
1. (CC) gcc options: -lm -ldl -rdynamic -shared -lSDL2 -O2 -pipe -fomit-frame-pointer -std=gnu99 -fno-strict-aliasing -fwrapv -fvisibility=hidden -MMD -mfpmath=sse -fPIC

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.7.3 - Encoder Speed: 8 (OpenBenchmarking.org)
Seconds, Fewer Is Better
Run 1a: 6.276 | SE +/- 0.001, N = 3 | Min: 6.28 / Avg: 6.28 / Max: 6.28
Run 1b: 6.359 | SE +/- 0.057, N = 3 | Min: 6.27 / Avg: 6.36 / Max: 6.46
Run 2:  6.280 | SE +/- 0.006, N = 3 | Min: 6.27 / Avg: 6.28 / Max: 6.29
Run 3:  6.307 | SE +/- 0.002, N = 3 | Min: 6.31 / Avg: 6.31 / Max: 6.31
1. (CXX) g++ options: -O3 -fPIC

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (OpenBenchmarking.org)
ms, Fewer Is Better
Run 1a: 16.82 | SE +/- 0.01, N = 3 | Min: 16.81 / Avg: 16.82 / Max: 16.83 | MIN: 16.75
Run 1b: 16.81 | SE +/- 0.00, N = 3 | Min: 16.8 / Avg: 16.81 / Max: 16.82 | MIN: 16.75
Run 2:  16.88 | SE +/- 0.01, N = 3 | Min: 16.86 / Avg: 16.88 / Max: 16.9 | MIN: 16.77
Run 3:  16.88 | SE +/- 0.04, N = 3 | Min: 16.82 / Avg: 16.88 / Max: 16.95 | MIN: 16.76
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

LevelDB

LevelDB is a key-value storage library developed by Google that supports Snappy data compression, among other modern features. Learn more via the OpenBenchmarking.org test page.

LevelDB 1.22 - Benchmark: Fill Sync (OpenBenchmarking.org)
Microseconds Per Op, Fewer Is Better
Run 1:  5987.16 | SE +/- 1.77, N = 3 | Min: 5983.65 / Avg: 5987.16 / Max: 5989.34
Run 1a: 5949.58 | SE +/- 3.08, N = 3 | Min: 5944.84 / Avg: 5949.58 / Max: 5955.35
Run 1b: 5919.32 | SE +/- 4.19, N = 3 | Min: 5911.12 / Avg: 5919.32 / Max: 5924.91
Run 2:  5921.31 | SE +/- 2.48, N = 3 | Min: 5916.75 / Avg: 5921.31 / Max: 5925.29
Run 3:  5946.82 | SE +/- 3.43, N = 3 | Min: 5941.39 / Avg: 5946.82 / Max: 5953.15
1. (CXX) g++ options: -O3 -lsnappy -lpthread

LevelDB 1.22 - Benchmark: Fill Sync (OpenBenchmarking.org)
MB/s, More Is Better
Runs 1, 1a, 1b, 2, 3: 0.3 each | SE +/- 0.00, N = 3 | Min: 0.3 / Avg: 0.3 / Max: 0.3
1. (CXX) g++ options: -O3 -lsnappy -lpthread

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.7.3 - Encoder Speed: 10 (OpenBenchmarking.org)
Seconds, Fewer Is Better
Run 1a: 5.950 | SE +/- 0.010, N = 3 | Min: 5.93 / Avg: 5.95 / Max: 5.97
Run 1b: 5.958 | SE +/- 0.012, N = 3 | Min: 5.94 / Avg: 5.96 / Max: 5.98
Run 2:  6.032 | SE +/- 0.018, N = 3 | Min: 6 / Avg: 6.03 / Max: 6.06
Run 3:  6.020 | SE +/- 0.018, N = 3 | Min: 5.99 / Avg: 6.02 / Max: 6.05
1. (CXX) g++ options: -O3 -fPIC

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.2.1 - Test: Server Room - Acceleration: CPU-only (OpenBenchmarking.org)
Seconds, Fewer Is Better
Run 1b: 4.393 | SE +/- 0.005, N = 3 | Min: 4.39 / Avg: 4.39 / Max: 4.4
Run 2:  4.387 | SE +/- 0.002, N = 3 | Min: 4.38 / Avg: 4.39 / Max: 4.39
Run 3:  4.398 | SE +/- 0.003, N = 3 | Min: 4.39 / Avg: 4.4 / Max: 4.4

Betsy GPU Compressor

Betsy is an open-source GPU texture compressor implementing various compression techniques. Betsy is written in GLSL for Vulkan/OpenGL (compute shader) support for GPU-based texture compression. Learn more via the OpenBenchmarking.org test page.

Betsy GPU Compressor 1.1 Beta - Codec: ETC2 RGB - Quality: Highest (OpenBenchmarking.org)
Seconds, Fewer Is Better
Run 1: 5.079
1. (CXX) g++ options: -O3 -O2 -lpthread -ldl

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 29Oct2020 - Model: Rhodopsin Protein (OpenBenchmarking.org)
ns/day, More Is Better
Run 1a: 5.689 | SE +/- 0.008, N = 3 | Min: 5.68 / Avg: 5.69 / Max: 5.7
Run 1b: 5.693 | SE +/- 0.008, N = 3 | Min: 5.68 / Avg: 5.69 / Max: 5.71
Run 2:  5.676 | SE +/- 0.013, N = 3 | Min: 5.65 / Avg: 5.68 / Max: 5.7
Run 3:  5.701 | SE +/- 0.019, N = 3 | Min: 5.68 / Avg: 5.7 / Max: 5.74
1. (CXX) g++ options: -O3 -pthread -lm

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
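The profile times the cwebp command-line encoder at different quality settings. As a sketch of the kind of invocation being timed (file names here are illustrative placeholders; the real harness supplies its own sample image):

```python
# Build the cwebp command line this profile effectively benchmarks.
# "sample.jpg" and "out.webp" are placeholders, not the harness's actual paths.
def cwebp_cmd(quality, infile="sample.jpg", outfile="out.webp"):
    return ["cwebp", "-q", str(quality), infile, "-o", outfile]

# The "Quality 100" result corresponds to an invocation along these lines:
print(" ".join(cwebp_cmd(100)))
```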

WebP Image Encode 1.1 - Encode Settings: Quality 100 (Encode Time - Seconds, Fewer Is Better)
  Run 1a: 2.705  SE +/- 0.001, N = 3  (Min: 2.70 / Avg: 2.71 / Max: 2.71)
  Run 1b: 2.714  SE +/- 0.005, N = 3  (Min: 2.71 / Avg: 2.71 / Max: 2.72)
  Run 2:  2.704  SE +/- 0.001, N = 3  (Min: 2.70 / Avg: 2.70 / Max: 2.71)
  Run 3:  2.705  SE +/- 0.001, N = 3  (Min: 2.70 / Avg: 2.71 / Max: 2.71)
  1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

yquake2

This is a test of Yamagi Quake II, an enhanced client for id Software's Quake II with a focus on offline and co-op gameplay. Learn more via the OpenBenchmarking.org test page.

yquake2 7.45 - Renderer: OpenGL 3.x - Resolution: 1920 x 1080 (Frames Per Second, More Is Better)
  Run 1:  546.4  SE +/- 3.13, N = 3  (Min: 540.2 / Avg: 546.43 / Max: 550.0)
  Run 1a: 551.0  SE +/- 0.50, N = 3  (Min: 550.5 / Avg: 551.00 / Max: 552.0)
  Run 1b: 552.8  SE +/- 1.33, N = 3  (Min: 551.0 / Avg: 552.80 / Max: 555.4)
  Run 2:  551.9  SE +/- 4.02, N = 3  (Min: 543.9 / Avg: 551.87 / Max: 556.8)
  Run 3:  486.2  SE +/- 0.20, N = 3  (Min: 485.8 / Avg: 486.17 / Max: 486.5)
  1. (CC) gcc options: -lm -ldl -rdynamic -shared -lSDL2 -O2 -pipe -fomit-frame-pointer -std=gnu99 -fno-strict-aliasing -fwrapv -fvisibility=hidden -MMD -mfpmath=sse -fPIC

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Default (Encode Time - Seconds, Fewer Is Better)
  Run 1a: 1.738  SE +/- 0.001, N = 3  (Min: 1.74 / Avg: 1.74 / Max: 1.74)
  Run 1b: 1.744  SE +/- 0.003, N = 3  (Min: 1.74 / Avg: 1.74 / Max: 1.75)
  Run 2:  1.740  SE +/- 0.000, N = 3  (Min: 1.74 / Avg: 1.74 / Max: 1.74)
  Run 3:  1.747  SE +/- 0.006, N = 3  (Min: 1.74 / Avg: 1.75 / Max: 1.76)
  1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

yquake2

This is a test of Yamagi Quake II, an enhanced client for id Software's Quake II with a focus on offline and co-op gameplay. Learn more via the OpenBenchmarking.org test page.

yquake2 7.45 - Renderer: OpenGL 1.x - Resolution: 1920 x 1080 (Frames Per Second, More Is Better)
  Run 1:  558.3  SE +/- 1.53, N = 3  (Min: 556.3 / Avg: 558.30 / Max: 561.3)
  Run 1a: 558.7  SE +/- 4.14, N = 3  (Min: 550.5 / Avg: 558.70 / Max: 563.8)
  Run 1b: 552.8  SE +/- 3.75, N = 3  (Min: 545.3 / Avg: 552.80 / Max: 556.8)
  Run 2:  555.2  SE +/- 2.15, N = 3  (Min: 552.0 / Avg: 555.23 / Max: 559.3)
  Run 3:  504.3  SE +/- 2.08, N = 3  (Min: 500.8 / Avg: 504.27 / Max: 508.0)
  1. (CC) gcc options: -lm -ldl -rdynamic -shared -lSDL2 -O2 -pipe -fomit-frame-pointer -std=gnu99 -fno-strict-aliasing -fwrapv -fvisibility=hidden -MMD -mfpmath=sse -fPIC

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program, or on Windows it will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.2.1 - Test: Server Rack - Acceleration: CPU-only (Seconds, Fewer Is Better)
  Run 1b: 0.188  SE +/- 0.000, N = 3  (Min: 0.19 / Avg: 0.19 / Max: 0.19)
  Run 2:  0.188  SE +/- 0.000, N = 3  (Min: 0.19 / Avg: 0.19 / Max: 0.19)
  Run 3:  0.188  SE +/- 0.000, N = 3  (Min: 0.19 / Avg: 0.19 / Max: 0.19)

HPC Challenge

HPC Challenge (HPCC) is a cluster-focused benchmark consisting of the HPL Linpack TPP benchmark, DGEMM, STREAM, PTRANS, RandomAccess, FFT, and communication bandwidth and latency tests. This HPC Challenge test profile attempts to ship with standard yet versatile configuration/input files, though they can be modified. Learn more via the OpenBenchmarking.org test page.
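Among the components listed above, the EP-STREAM Triad result below measures sustained memory bandwidth per process using the classic STREAM triad kernel, a(i) = b(i) + scalar * c(i). A minimal Python sketch of that kernel (array contents and the scalar are illustrative; HPCC runs a tuned C implementation per MPI rank):

```python
# STREAM Triad kernel: a[i] = b[i] + scalar * c[i]
# Pure-Python illustration only; HPCC times a tuned C loop over large arrays.
def stream_triad(b, c, scalar):
    return [bi + scalar * ci for bi, ci in zip(b, c)]

b = [1.0, 2.0, 3.0]
c = [10.0, 20.0, 30.0]
a = stream_triad(b, c, 2.0)
```

The benchmark reports the bytes moved by this loop (three arrays touched per element) divided by elapsed time, giving the GB/s figure shown in the EP-STREAM Triad table.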

HPC Challenge 1.5.0 - Test / Class: Max Ping Pong Bandwidth (MB/s, More Is Better)
  Run 1a: 12714.96  SE +/- 57.13, N = 3   (Min: 12632.56 / Avg: 12714.96 / Max: 12824.71)
  Run 1b: 13311.56  SE +/- 72.76, N = 3   (Min: 13194.14 / Avg: 13311.56 / Max: 13444.70)
  Run 2:  12960.45  SE +/- 27.24, N = 3   (Min: 12926.37 / Avg: 12960.45 / Max: 13014.31)
  Run 3:  12614.72  SE +/- 318.38, N = 3  (Min: 12170.74 / Avg: 12614.72 / Max: 13232.02)
  1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
  2. ATLAS + Open MPI 4.0.3

HPC Challenge 1.5.0 - Test / Class: Random Ring Bandwidth (GB/s, More Is Better)
  Run 1a: 1.71010  SE +/- 0.00997, N = 3  (Min: 1.70 / Avg: 1.71 / Max: 1.73)
  Run 1b: 1.74130  SE +/- 0.00495, N = 3  (Min: 1.73 / Avg: 1.74 / Max: 1.75)
  Run 2:  1.72080  SE +/- 0.01199, N = 3  (Min: 1.70 / Avg: 1.72 / Max: 1.74)
  Run 3:  1.74534  SE +/- 0.02304, N = 3  (Min: 1.72 / Avg: 1.75 / Max: 1.79)
  1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
  2. ATLAS + Open MPI 4.0.3

HPC Challenge 1.5.0 - Test / Class: Random Ring Latency (usecs, Fewer Is Better)
  Run 1a: 0.43492  SE +/- 0.02798, N = 3  (Min: 0.40 / Avg: 0.43 / Max: 0.49)
  Run 1b: 0.40581  SE +/- 0.00184, N = 3  (Min: 0.40 / Avg: 0.41 / Max: 0.41)
  Run 2:  0.40689  SE +/- 0.00097, N = 3  (Min: 0.41 / Avg: 0.41 / Max: 0.41)
  Run 3:  0.40064  SE +/- 0.00659, N = 3  (Min: 0.39 / Avg: 0.40 / Max: 0.41)
  1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
  2. ATLAS + Open MPI 4.0.3

HPC Challenge 1.5.0 - Test / Class: G-Random Access (GUP/s, More Is Better)
  Run 1a: 0.02880  SE +/- 0.00038, N = 3  (Min: 0.03 / Avg: 0.03 / Max: 0.03)
  Run 1b: 0.02847  SE +/- 0.00024, N = 3  (Min: 0.03 / Avg: 0.03 / Max: 0.03)
  Run 2:  0.02911  SE +/- 0.00029, N = 3  (Min: 0.03 / Avg: 0.03 / Max: 0.03)
  Run 3:  0.02908  SE +/- 0.00031, N = 3  (Min: 0.03 / Avg: 0.03 / Max: 0.03)
  1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
  2. ATLAS + Open MPI 4.0.3
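The G-Random Access figure is measured in GUP/s (giga-updates per second): the rate at which XOR updates at pseudo-random locations can be applied to a large in-memory table, stressing random-access memory latency rather than bandwidth. A toy sketch of the update loop (table size, update count, and the use of Python's RNG are illustrative; HPCC uses its own pseudo-random stream and a table sized to a large fraction of memory):

```python
import random

def random_access_updates(table_bits, n_updates, seed=1):
    """Toy version of the HPCC RandomAccess update loop:
    table[ran mod size] ^= ran, with size a power of two."""
    size = 1 << table_bits
    table = list(range(size))
    rng = random.Random(seed)  # stand-in for HPCC's own random stream
    for _ in range(n_updates):
        ran = rng.getrandbits(64)
        table[ran & (size - 1)] ^= ran  # one counted "update"
    return table

t = random_access_updates(10, 1000)
```

GUP/s is then simply n_updates divided by elapsed seconds, divided by 10^9.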

HPC Challenge 1.5.0 - Test / Class: EP-STREAM Triad (GB/s, More Is Better)
  Run 1a: 2.96243  SE +/- 0.00131, N = 3  (Min: 2.96 / Avg: 2.96 / Max: 2.96)
  Run 1b: 2.95957  SE +/- 0.00438, N = 3  (Min: 2.95 / Avg: 2.96 / Max: 2.96)
  Run 2:  2.96778  SE +/- 0.00203, N = 3  (Min: 2.96 / Avg: 2.97 / Max: 2.97)
  Run 3:  3.01252  SE +/- 0.05433, N = 3  (Min: 2.96 / Avg: 3.01 / Max: 3.12)
  1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
  2. ATLAS + Open MPI 4.0.3

HPC Challenge 1.5.0 - Test / Class: G-Ptrans (GB/s, More Is Better)
  Run 1a: 2.30234  SE +/- 0.00717, N = 3  (Min: 2.29 / Avg: 2.30 / Max: 2.31)
  Run 1b: 2.30452  SE +/- 0.00475, N = 3  (Min: 2.30 / Avg: 2.30 / Max: 2.31)
  Run 2:  2.31308  SE +/- 0.00653, N = 3  (Min: 2.31 / Avg: 2.31 / Max: 2.33)
  Run 3:  2.31049  SE +/- 0.00335, N = 3  (Min: 2.30 / Avg: 2.31 / Max: 2.32)
  1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
  2. ATLAS + Open MPI 4.0.3

HPC Challenge 1.5.0 - Test / Class: EP-DGEMM (GFLOPS, More Is Better)
  Run 1a: 6.63473  SE +/- 0.03565, N = 3  (Min: 6.57 / Avg: 6.63 / Max: 6.69)
  Run 1b: 6.75763  SE +/- 0.09178, N = 3  (Min: 6.60 / Avg: 6.76 / Max: 6.92)
  Run 2:  6.69593  SE +/- 0.03515, N = 3  (Min: 6.63 / Avg: 6.70 / Max: 6.74)
  Run 3:  6.62256  SE +/- 0.05657, N = 3  (Min: 6.56 / Avg: 6.62 / Max: 6.74)
  1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
  2. ATLAS + Open MPI 4.0.3

HPC Challenge 1.5.0 - Test / Class: G-Ffte (GFLOPS, More Is Better)
  Run 1a: 4.02728  SE +/- 0.00442, N = 3  (Min: 4.02 / Avg: 4.03 / Max: 4.03)
  Run 1b: 4.03853  SE +/- 0.00518, N = 3  (Min: 4.03 / Avg: 4.04 / Max: 4.05)
  Run 2:  4.05229  SE +/- 0.00713, N = 3  (Min: 4.04 / Avg: 4.05 / Max: 4.07)
  Run 3:  4.06682  SE +/- 0.02854, N = 3  (Min: 4.03 / Avg: 4.07 / Max: 4.12)
  1. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops
  2. ATLAS + Open MPI 4.0.3

217 Results Shown

HPC Challenge
Warsow
Blender:
  Barbershop - CPU-Only
  Pabellon Barcelona - CPU-Only
  Classroom - CPU-Only
Basis Universal
VkFFT
Mobile Neural Network:
  inception-v3
  mobilenet-v1-1.0
  MobileNetV2_224
  resnet-v2-50
  SqueezeNetV1.0
Appleseed
AI Benchmark Alpha:
  Device AI Score
  Device Training Score
  Device Inference Score
ASTC Encoder
Blender
BRL-CAD
Appleseed
Stockfish
Blender
Appleseed
GROMACS
asmFish
Build2
Numpy Benchmark
High Performance Conjugate Gradient
Zstd Compression
Embree
dav1d
TensorFlow Lite:
  Inception ResNet V2
  Inception V4
NAMD
libavif avifenc
VKMark
Embree
Timed HMMer Search
DDraceNetwork
DDraceNetwork
Basis Universal
DDraceNetwork
DDraceNetwork
Embree
VkResample
oneDNN:
  Recurrent Neural Network Training - bf16bf16bf16 - CPU
  Recurrent Neural Network Training - u8s8f32 - CPU
  Recurrent Neural Network Training - f32 - CPU
Timed FFmpeg Compilation
Timed Eigen Compilation
Embree
libavif avifenc
Embree
CLOMP
Node.js V8 Web Tooling Benchmark
oneDNN:
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU
  Recurrent Neural Network Inference - u8s8f32 - CPU
  Recurrent Neural Network Inference - f32 - CPU
RawTherapee
GraphicsMagick:
  Sharpen
  Swirl
NCNN:
  CPU - regnety_400m
  CPU - squeezenet_ssd
  CPU - yolov4-tiny
  CPU - resnet50
  CPU - alexnet
  CPU - resnet18
  CPU - vgg16
  CPU - googlenet
  CPU - blazeface
  CPU - efficientnet-b0
  CPU - mnasnet
  CPU - shufflenet-v2
  CPU-v3-v3 - mobilenet-v3
  CPU-v2-v2 - mobilenet-v2
  CPU - mobilenet
  Vulkan GPU - regnety_400m
  Vulkan GPU - squeezenet_ssd
  Vulkan GPU - yolov4-tiny
  Vulkan GPU - resnet50
  Vulkan GPU - alexnet
  Vulkan GPU - resnet18
  Vulkan GPU - vgg16
  Vulkan GPU - googlenet
  Vulkan GPU - blazeface
  Vulkan GPU - efficientnet-b0
  Vulkan GPU - mnasnet
  Vulkan GPU - shufflenet-v2
  Vulkan GPU-v3-v3 - mobilenet-v3
  Vulkan GPU-v2-v2 - mobilenet-v2
  Vulkan GPU - mobilenet
Embree
simdjson
LZ4 Compression:
  9 - Decompression Speed
  9 - Compression Speed
  3 - Decompression Speed
  3 - Compression Speed
DeepSpeech
Basis Universal
SQLite Speedtest
dav1d
eSpeak-NG Speech Engine
rav1e:
  1
  5
IndigoBench:
  CPU - Bedroom
  CPU - Supercar
TensorFlow Lite:
  NASNet Mobile
  SqueezeNet
  Mobilenet Float
  Mobilenet Quant
oneDNN:
  IP Shapes 1D - f32 - CPU
  IP Shapes 1D - u8s8f32 - CPU
LevelDB
GraphicsMagick:
  Enhanced
  Noise-Gaussian
  Resizing
  HWB Color Space
  Rotate
simdjson
ASTC Encoder:
  Medium
  Thorough
Basis Universal
simdjson:
  PartialTweets
  DistinctUserID
oneDNN:
  Deconvolution Batch shapes_1d - u8s8f32 - CPU
  Deconvolution Batch shapes_1d - f32 - CPU
Libplacebo:
  av1_grain_lap
  hdr_peakdetect
  polar_nocompute
  deband_heavy
VkResample
rav1e
LevelDB
WebP Image Encode
Betsy GPU Compressor
Cryptsetup:
  Twofish-XTS 512b Decryption
  Twofish-XTS 512b Encryption
  Serpent-XTS 512b Decryption
  Serpent-XTS 512b Encryption
  AES-XTS 512b Decryption
  AES-XTS 512b Encryption
  Twofish-XTS 256b Decryption
  Twofish-XTS 256b Encryption
  Serpent-XTS 256b Decryption
  Serpent-XTS 256b Encryption
  AES-XTS 256b Decryption
  AES-XTS 256b Encryption
  PBKDF2-whirlpool
  PBKDF2-sha512
Coremark
Darktable
LZ4 Compression:
  1 - Decompression Speed
  1 - Compression Speed
PHPBench
Zstd Compression
rav1e
WavPack Audio Encoding
RNNoise
LevelDB
dav1d
Unpacking Firefox
Crafty
LevelDB:
  Seq Fill:
    Microseconds Per Op
    MB/s
TNN:
  CPU - MobileNet v2
  CPU - SqueezeNet v1.1
PyBench
Redis
Monkey Audio Encoding
oneDNN
librsvg
WebP Image Encode
Opus Codec Encoding
LevelDB
Darktable
oneDNN:
  Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU
  Deconvolution Batch shapes_3d - f32 - CPU
  Matrix Multiply Batch Shapes Transformer - f32 - CPU
Redis:
  GET
  SET
  LPOP
  SADD
Timed MAFFT Alignment
LevelDB:
  Overwrite:
    Microseconds Per Op
    MB/s
  Rand Fill:
    Microseconds Per Op
    MB/s
oneDNN
Basis Universal
oneDNN
WebP Image Encode
Unpacking The Linux Kernel
dav1d
ASTC Encoder
oneDNN
yquake2
libavif avifenc
oneDNN
LevelDB:
  Fill Sync:
    Microseconds Per Op
    MB/s
libavif avifenc
Darktable
Betsy GPU Compressor
LAMMPS Molecular Dynamics Simulator
WebP Image Encode
yquake2
WebP Image Encode
yquake2
Darktable
HPC Challenge:
  Max Ping Pong Bandwidth
  Rand Ring Bandwidth
  Rand Ring Latency
  G-Rand Access
  EP-STREAM Triad
  G-Ptrans
  EP-DGEMM
  G-Ffte