7900X New

Intel Core i7-7900X testing with an ASRock X299 Extreme4 (P1.50 BIOS) and a Zotac NVIDIA GeForce GT 610 1GB on Ubuntu 19.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2010024-FI-7900XNEW720&rdt&grs.
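These runs can typically be reproduced or compared against locally with the Phoronix Test Suite client by referencing the public result ID above; a minimal sketch, assuming phoronix-test-suite is installed and the result is still available on OpenBenchmarking.org:

    phoronix-test-suite benchmark 2010024-FI-7900XNEW720

This fetches the result file and offers to run the same test selection on the local machine so the new numbers appear side by side with the three runs below.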

Configuration shared by runs 1, 2 and 3:

Processor: Intel Core i7-7900X @ 4.50GHz (10 Cores / 20 Threads)
Motherboard: ASRock X299 Extreme4 (P1.50 BIOS)
Chipset: Intel Sky Lake-E DMI3 Registers
Memory: 16GB
Disk: 120GB Corsair Force MP500
Graphics: Zotac NVIDIA GeForce GT 610 1GB
Audio: Realtek ALC1220
Monitor: LG Ultra HD
Network: Intel I219-V
OS: Ubuntu 19.04
Kernel: 5.0.0-38-generic (x86_64)
Desktop: GNOME Shell 3.32.1
Display Server: X Server 1.20.4
Display Driver: modesetting 1.20.4
Compiler: GCC 11.0.0 20200929
File-System: ext4
Screen Resolution: 1920x1080

Compiler Details: --disable-multilib --enable-checking=release --enable-languages=c,c++,fortran
Processor Details: Scaling Governor: intel_pstate powersave - CPU Microcode: 0x2000064
Java Details: OpenJDK Runtime Environment (build 11.0.5+10-post-Ubuntu-0ubuntu1.119.04)
Python Details: Python 2.7.16 + Python 3.7.3
Security Details: itlb_multihit: KVM: Mitigation of Split huge pages + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes SMT vulnerable + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full generic retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling + tsx_async_abort: Mitigation of Clear buffers; SMT vulnerable
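The Security Details string above mirrors the Linux kernel's per-CPU vulnerability reporting; on a comparable install the same mitigation states can be read directly from sysfs, for example:

    grep . /sys/devices/system/cpu/vulnerabilities/*

Each file in that directory (meltdown, spectre_v1, spectre_v2, mds, and so on) contains the mitigation status line that the Phoronix Test Suite captures for the system table.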

7900X New - results overview for runs 1, 2 and 3 across all tests; the individual per-test results are detailed below.

Algebraic Multi-Grid Benchmark

OpenBenchmarking.orgFigure Of Merit, More Is BetterAlgebraic Multi-Grid Benchmark1235K10K15K20K25KSE +/- 181.95, N = 3SE +/- 292.58, N = 4SE +/- 145.70, N = 321811.2922928.8623819.171. (CC) gcc options: -lparcsr_ls -lparcsr_mv -lseq_mv -lIJ_mv -lkrylov -lHYPRE_utilities -lm -fopenmp -pthread -lmpi
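Each graph lists the mean of N samples per run together with its standard error (SE). Assuming SE here is the standard error of the mean, run 1's SE of 181.95 over N = 3 samples corresponds to a sample standard deviation of about 181.95 × √3 ≈ 315, i.e. roughly 1.4% of the 21811.29 figure of merit, so the roughly 2000-point gap between runs 1 and 3 sits well outside the reported per-run noise.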

eSpeak-NG Speech Engine

Text-To-Speech Synthesis

OpenBenchmarking.orgSeconds, Fewer Is BettereSpeak-NG Speech Engine 20200907Text-To-Speech Synthesis123714212835SE +/- 0.33, N = 16SE +/- 0.24, N = 4SE +/- 0.14, N = 430.3930.4628.251. (CC) gcc options: -O2 -std=c99

Java Gradle Build

Gradle Build: Reactor

OpenBenchmarking.orgSeconds, Fewer Is BetterJava Gradle BuildGradle Build: Reactor12370140210280350SE +/- 4.07, N = 3SE +/- 4.60, N = 9SE +/- 4.29, N = 3285.60289.46301.17

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 1.5Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU12320406080100SE +/- 0.23, N = 3SE +/- 0.13, N = 3SE +/- 0.16, N = 385.7689.6786.17MIN: 84.62MIN: 88.46MIN: 84.951. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

NCNN

Target: CPU - Model: googlenet

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: CPU - Model: googlenet12348121620SE +/- 0.25, N = 3SE +/- 0.01, N = 3SE +/- 0.40, N = 314.1013.6314.20MIN: 13.53 / MAX: 56.4MIN: 13.5 / MAX: 14.99MIN: 13.48 / MAX: 63.481. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Rodinia

Test: OpenMP HotSpot3D

OpenBenchmarking.orgSeconds, Fewer Is BetterRodinia 3.1Test: OpenMP HotSpot3D12320406080100SE +/- 1.37, N = 4SE +/- 0.01, N = 3SE +/- 1.43, N = 498.0596.21100.051. (CXX) g++ options: -O2 -lOpenCL

oneDNN

Harness: Deconvolution Batch deconv_3d - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 1.5Harness: Deconvolution Batch deconv_3d - Data Type: u8s8f32 - Engine: CPU1230.50761.01521.52282.03042.538SE +/- 0.02337, N = 3SE +/- 0.02099, N = 3SE +/- 0.01102, N = 32.203772.255852.18216MIN: 2.16MIN: 2.21MIN: 2.151. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

NeatBench

Acceleration: CPU

OpenBenchmarking.orgFPS, More Is BetterNeatBench 5Acceleration: CPU123510152025SE +/- 0.17, N = 10SE +/- 0.28, N = 3SE +/- 0.25, N = 318.417.818.3

NCNN

Target: CPU - Model: blazeface

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: CPU - Model: blazeface1230.43650.8731.30951.7462.1825SE +/- 0.05, N = 3SE +/- 0.00, N = 3SE +/- 0.05, N = 31.941.881.93MIN: 1.85 / MAX: 2.12MIN: 1.85 / MAX: 1.96MIN: 1.84 / MAX: 2.071. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Renaissance

Test: Savina Reactors.IO

OpenBenchmarking.orgms, Fewer Is BetterRenaissance 0.10.0Test: Savina Reactors.IO1235K10K15K20K25KSE +/- 256.36, N = 5SE +/- 217.82, N = 5SE +/- 179.08, N = 2023109.9423820.7823326.53

Rodinia

Test: OpenMP Streamcluster

OpenBenchmarking.orgSeconds, Fewer Is BetterRodinia 3.1Test: OpenMP Streamcluster12348121620SE +/- 0.15, N = 8SE +/- 0.17, N = 15SE +/- 0.16, N = 1514.0813.6613.791. (CXX) g++ options: -O2 -lOpenCL

oneDNN

Harness: Deconvolution Batch deconv_1d - Data Type: f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 1.5Harness: Deconvolution Batch deconv_1d - Data Type: f32 - Engine: CPU1230.58131.16261.74392.32522.9065SE +/- 0.00797, N = 3SE +/- 0.00820, N = 3SE +/- 0.00718, N = 32.513322.583572.50831MIN: 2.45MIN: 2.51MIN: 2.451. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Renaissance

Test: Genetic Algorithm Using Jenetics + Futures

OpenBenchmarking.orgms, Fewer Is BetterRenaissance 0.10.0Test: Genetic Algorithm Using Jenetics + Futures12310002000300040005000SE +/- 51.97, N = 15SE +/- 44.75, N = 20SE +/- 55.22, N = 204831.354702.304691.56

Renaissance

Test: Random Forest

OpenBenchmarking.orgms, Fewer Is BetterRenaissance 0.10.0Test: Random Forest1235001000150020002500SE +/- 22.85, N = 5SE +/- 15.11, N = 17SE +/- 25.98, N = 52184.982121.962172.27

oneDNN

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 1.5Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU1230.19580.39160.58740.78320.979SE +/- 0.002444, N = 3SE +/- 0.001415, N = 3SE +/- 0.000700, N = 30.8701650.8451010.859670MIN: 0.84MIN: 0.82MIN: 0.831. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

DaCapo Benchmark

Java Test: Tradesoap

OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 9.12-MR1Java Test: Tradesoap1239001800270036004500SE +/- 27.44, N = 4SE +/- 37.49, N = 4SE +/- 37.21, N = 3407941104198

LULESH

OpenBenchmarking.orgz/s, More Is BetterLULESH 2.0.31233691215SE +/- 0.16, N = 3SE +/- 0.06, N = 3SE +/- 0.17, N = 411.4811.8111.711. (CXX) g++ options: -O3 -fopenmp -lm -pthread -lmpi_cxx -lmpi

Renaissance

Test: In-Memory Database Shootout

OpenBenchmarking.orgms, Fewer Is BetterRenaissance 0.10.0Test: In-Memory Database Shootout12311002200330044005500SE +/- 60.85, N = 5SE +/- 49.84, N = 25SE +/- 36.91, N = 54791.114861.354926.07

oneDNN

Harness: IP Batch All - Data Type: f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 1.5Harness: IP Batch All - Data Type: f32 - Engine: CPU1231020304050SE +/- 0.04, N = 3SE +/- 0.56, N = 3SE +/- 0.07, N = 343.4044.6143.50MIN: 41.11MIN: 41.49MIN: 41.121. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

DaCapo Benchmark

Java Test: Tradebeans

OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 9.12-MR1Java Test: Tradebeans12314002800420056007000SE +/- 63.74, N = 4SE +/- 62.97, N = 9SE +/- 37.45, N = 4650764216348

NCNN

Target: CPU - Model: mobilenet

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: CPU - Model: mobilenet12348121620SE +/- 0.02, N = 3SE +/- 0.32, N = 3SE +/- 0.22, N = 316.8417.2617.04MIN: 16.72 / MAX: 17.69MIN: 16.74 / MAX: 63.4MIN: 16.72 / MAX: 17.561. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Renaissance

Test: Apache Spark PageRank

OpenBenchmarking.orgms, Fewer Is BetterRenaissance 0.10.0Test: Apache Spark PageRank1239001800270036004500SE +/- 43.45, N = 25SE +/- 42.51, N = 25SE +/- 42.16, N = 253941.904038.934028.32

OCRMyPDF

Processing 60 Page PDF Document

OpenBenchmarking.orgSeconds, Fewer Is BetterOCRMyPDF 8.0.1+dfsgProcessing 60 Page PDF Document123714212835SE +/- 0.33, N = 3SE +/- 0.11, N = 3SE +/- 0.08, N = 330.9830.6131.36

NCNN

Target: CPU-v2-v2 - Model: mobilenet-v2

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: CPU-v2-v2 - Model: mobilenet-v21231.23752.4753.71254.956.1875SE +/- 0.05, N = 3SE +/- 0.03, N = 3SE +/- 0.09, N = 35.505.375.44MIN: 5.24 / MAX: 5.69MIN: 5.17 / MAX: 6.06MIN: 5.24 / MAX: 5.71. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

YafaRay

Total Time For Sample Scene

OpenBenchmarking.orgSeconds, Fewer Is BetterYafaRay 3.4.1Total Time For Sample Scene1234080120160200SE +/- 2.59, N = 3SE +/- 2.58, N = 12SE +/- 2.55, N = 3162.83166.54164.851. (CXX) g++ options: -std=c++11 -O3 -ffast-math -rdynamic -ldl -lImath -lIlmImf -lIex -lHalf -lz -lIlmThread -lxml2 -lfreetype -lboost_system -lboost_filesystem -lboost_locale

NCNN

Target: CPU - Model: mnasnet

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: CPU - Model: mnasnet1231.11832.23663.35494.47325.5915SE +/- 0.05, N = 3SE +/- 0.03, N = 3SE +/- 0.10, N = 34.974.864.94MIN: 4.71 / MAX: 6.29MIN: 4.69 / MAX: 5MIN: 4.73 / MAX: 5.51. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Timed MAFFT Alignment

Multiple Sequence Alignment - LSU RNA

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed MAFFT Alignment 7.471Multiple Sequence Alignment - LSU RNA1233691215SE +/- 0.06, N = 3SE +/- 0.13, N = 3SE +/- 0.06, N = 310.5210.2910.501. (CC) gcc options: -std=c99 -O3 -lm -lpthread

NCNN

Target: CPU-v3-v3 - Model: mobilenet-v3

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: CPU-v3-v3 - Model: mobilenet-v31231.04632.09263.13894.18525.2315SE +/- 0.06, N = 3SE +/- 0.04, N = 3SE +/- 0.04, N = 34.644.554.65MIN: 4.47 / MAX: 4.79MIN: 4.45 / MAX: 5.99MIN: 4.44 / MAX: 5.591. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Renaissance

Test: Scala Dotty

OpenBenchmarking.orgms, Fewer Is BetterRenaissance 0.10.0Test: Scala Dotty123400800120016002000SE +/- 16.25, N = 5SE +/- 12.30, N = 5SE +/- 10.01, N = 51800.121768.781762.77

NCNN

Target: CPU - Model: efficientnet-b0

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: CPU - Model: efficientnet-b0123246810SE +/- 0.12, N = 3SE +/- 0.02, N = 3SE +/- 0.08, N = 36.826.686.70MIN: 6.51 / MAX: 7.14MIN: 6.54 / MAX: 6.83MIN: 6.49 / MAX: 6.941. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: yolov4-tiny

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: CPU - Model: yolov4-tiny123612182430SE +/- 0.31, N = 3SE +/- 0.14, N = 3SE +/- 0.40, N = 325.0724.5824.91MIN: 24.64 / MAX: 25.79MIN: 24.22 / MAX: 26.04MIN: 24.24 / MAX: 26.591. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

oneDNN

Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 1.5Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU12350100150200250SE +/- 0.24, N = 3SE +/- 0.48, N = 3SE +/- 0.29, N = 3202.21205.07201.09MIN: 200.84MIN: 203.25MIN: 199.441. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

PyPerformance

Benchmark: nbody

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: nbody12320406080100SE +/- 0.88, N = 3SE +/- 1.00, N = 3SE +/- 1.20, N = 3102102104

DaCapo Benchmark

Java Test: Jython

OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 9.12-MR1Java Test: Jython1238001600240032004000SE +/- 7.72, N = 4SE +/- 25.83, N = 4SE +/- 30.00, N = 4380637333765

oneDNN

Harness: Deconvolution Batch deconv_3d - Data Type: bf16bf16bf16 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 1.5Harness: Deconvolution Batch deconv_3d - Data Type: bf16bf16bf16 - Engine: CPU12348121620SE +/- 0.03, N = 3SE +/- 0.03, N = 3SE +/- 0.01, N = 316.2516.5316.22MIN: 16.17MIN: 16.44MIN: 16.141. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

LuxCoreRender

Scene: DLSC

OpenBenchmarking.orgM samples/sec, More Is BetterLuxCoreRender 2.3Scene: DLSC1230.38250.7651.14751.531.9125SE +/- 0.00, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 31.681.701.67MIN: 1.61 / MAX: 1.72MIN: 1.63 / MAX: 1.78MIN: 1.6 / MAX: 1.72

RNNoise

OpenBenchmarking.orgSeconds, Fewer Is BetterRNNoise 2020-06-28123612182430SE +/- 0.11, N = 3SE +/- 0.33, N = 13SE +/- 0.01, N = 326.7827.0826.611. (CC) gcc options: -O2 -pedantic -fvisibility=hidden

NCNN

Target: CPU - Model: squeezenet

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: CPU - Model: squeezenet12348121620SE +/- 0.15, N = 3SE +/- 0.18, N = 3SE +/- 0.18, N = 315.1115.2114.95MIN: 14.85 / MAX: 61.59MIN: 14.81 / MAX: 15.89MIN: 14.57 / MAX: 19.071. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

oneDNN

Harness: IP Batch 1D - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 1.5Harness: IP Batch 1D - Data Type: u8s8f32 - Engine: CPU1230.27220.54440.81661.08881.361SE +/- 0.00077, N = 3SE +/- 0.00275, N = 3SE +/- 0.00566, N = 31.189441.209931.19245MIN: 1.16MIN: 1.18MIN: 1.161. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Mlpack Benchmark

Benchmark: scikit_linearridgeregression

OpenBenchmarking.orgSeconds, Fewer Is BetterMlpack BenchmarkBenchmark: scikit_linearridgeregression1230.541.081.622.162.7SE +/- 0.00, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 32.372.362.40

NCNN

Target: CPU - Model: resnet18

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: CPU - Model: resnet181233691215SE +/- 0.06, N = 3SE +/- 0.05, N = 3SE +/- 0.22, N = 312.1212.1211.92MIN: 11.98 / MAX: 13.14MIN: 11.99 / MAX: 12.25MIN: 11.41 / MAX: 12.561. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Renaissance

Test: Twitter HTTP Requests

OpenBenchmarking.orgms, Fewer Is BetterRenaissance 0.10.0Test: Twitter HTTP Requests1235001000150020002500SE +/- 7.20, N = 5SE +/- 12.66, N = 5SE +/- 10.38, N = 52290.012328.082311.68

Embree

Binary: Pathtracer - Model: Crown

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 3.9.0Binary: Pathtracer - Model: Crown1233691215SE +/- 0.14, N = 3SE +/- 0.03, N = 3SE +/- 0.02, N = 311.6111.7911.71MIN: 11.24 / MAX: 12MIN: 11.66 / MAX: 11.96MIN: 11.61 / MAX: 11.87

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 1.5Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU1233691215SE +/- 0.06923, N = 3SE +/- 0.01762, N = 3SE +/- 0.03654, N = 310.060689.974129.91643MIN: 9.92MIN: 9.89MIN: 9.821. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

G'MIC

Test: 2D Function Plotting, 1000 Times

OpenBenchmarking.orgSeconds, Fewer Is BetterG'MICTest: 2D Function Plotting, 1000 Times123306090120150SE +/- 1.20, N = 8SE +/- 1.22, N = 3SE +/- 1.02, N = 15114.16115.82114.531. Version 2.4.5, Copyright (c) 2008-2019, David Tschumperle.

Mobile Neural Network

Model: MobileNetV2_224

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2020-09-17Model: MobileNetV2_2241231.05032.10063.15094.20125.2515SE +/- 0.005, N = 3SE +/- 0.058, N = 3SE +/- 0.020, N = 34.6054.6684.620MIN: 4.46 / MAX: 5.98MIN: 4.47 / MAX: 67.24MIN: 4.44 / MAX: 6.021. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Timed Linux Kernel Compilation

Time To Compile

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Linux Kernel Compilation 5.4Time To Compile12320406080100SE +/- 0.67, N = 3SE +/- 0.75, N = 3SE +/- 0.86, N = 382.5881.4782.11

PyPerformance

Benchmark: 2to3

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: 2to312370140210280350SE +/- 0.67, N = 3SE +/- 1.33, N = 3302300304

DaCapo Benchmark

Java Test: H2

OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 9.12-MR1Java Test: H212310002000300040005000SE +/- 31.29, N = 18SE +/- 41.85, N = 20SE +/- 41.45, N = 4446244814520

PyPerformance

Benchmark: json_loads

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: json_loads123612182430SE +/- 0.09, N = 3SE +/- 0.07, N = 3SE +/- 0.06, N = 323.823.623.9

NCNN

Target: CPU - Model: resnet50

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: CPU - Model: resnet50123612182430SE +/- 0.04, N = 3SE +/- 0.07, N = 3SE +/- 0.24, N = 323.0923.2122.92MIN: 22.94 / MAX: 23.66MIN: 22.97 / MAX: 24.73MIN: 22.34 / MAX: 24.561. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

PyPerformance

Benchmark: chaos

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: chaos12320406080100SE +/- 0.12, N = 3SE +/- 1.12, N = 3SE +/- 0.09, N = 395.997.195.9

Embree

Binary: Pathtracer - Model: Asian Dragon

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 3.9.0Binary: Pathtracer - Model: Asian Dragon12348121620SE +/- 0.12, N = 3SE +/- 0.05, N = 3SE +/- 0.11, N = 313.9213.9614.09MIN: 13.7 / MAX: 14.23MIN: 13.81 / MAX: 14.14MIN: 13.84 / MAX: 14.36

LAMMPS Molecular Dynamics Simulator

Model: 20k Atoms

OpenBenchmarking.orgns/day, More Is BetterLAMMPS Molecular Dynamics Simulator 24Aug2020Model: 20k Atoms123246810SE +/- 0.015, N = 3SE +/- 0.015, N = 3SE +/- 0.011, N = 37.4657.5567.4971. (CXX) g++ options: -O3 -pthread -lm

oneDNN

Harness: IP Batch All - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 1.5Harness: IP Batch All - Data Type: u8s8f32 - Engine: CPU12348121620SE +/- 0.04, N = 3SE +/- 0.01, N = 3SE +/- 0.04, N = 316.8617.0616.94MIN: 16.37MIN: 16.72MIN: 16.431. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

OpenVKL

Benchmark: vklBenchmarkStructuredVolume

OpenBenchmarking.orgItems / Sec, More Is BetterOpenVKL 0.9Benchmark: vklBenchmarkStructuredVolume12312M24M36M48M60MSE +/- 640455.21, N = 3SE +/- 102024.09, N = 3SE +/- 254970.05, N = 356744630.2356075820.3656461883.82MIN: 1223481 / MAX: 389287872MIN: 1231969 / MAX: 373823712MIN: 1241223 / MAX: 371099664

NCNN

Target: CPU - Model: vgg16

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: CPU - Model: vgg161231020304050SE +/- 0.20, N = 3SE +/- 0.28, N = 3SE +/- 0.47, N = 345.4145.7745.25MIN: 45.02 / MAX: 46.7MIN: 45.15 / MAX: 93MIN: 44.04 / MAX: 85.021. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

libavif avifenc

Encoder Speed: 10

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.7.3Encoder Speed: 101231.22542.45083.67624.90166.127SE +/- 0.011, N = 3SE +/- 0.011, N = 3SE +/- 0.001, N = 35.3855.4435.4461. (CXX) g++ options: -O3 -fPIC

WebP Image Encode

Encode Settings: Default

OpenBenchmarking.orgEncode Time - Seconds, Fewer Is BetterWebP Image Encode 1.1Encode Settings: Default1230.32990.65980.98971.31961.6495SE +/- 0.009, N = 3SE +/- 0.009, N = 3SE +/- 0.013, N = 31.4661.4501.4551. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -lpng16 -ljpeg

SVT-VP9

Tuning: VMAF Optimized - Input: Bosphorus 1080p

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.1Tuning: VMAF Optimized - Input: Bosphorus 1080p1234080120160200SE +/- 3.14, N = 3SE +/- 2.67, N = 4SE +/- 3.06, N = 3187.94189.66187.641. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

Renaissance

Test: Apache Spark ALS

OpenBenchmarking.orgms, Fewer Is BetterRenaissance 0.10.0Test: Apache Spark ALS1235001000150020002500SE +/- 19.20, N = 5SE +/- 13.21, N = 5SE +/- 17.15, N = 172367.482370.432390.52

Embree

Binary: Pathtracer ISPC - Model: Crown

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 3.9.0Binary: Pathtracer ISPC - Model: Crown1233691215SE +/- 0.03, N = 3SE +/- 0.03, N = 3SE +/- 0.03, N = 313.5313.5513.42MIN: 13.4 / MAX: 13.76MIN: 13.41 / MAX: 13.78MIN: 13.28 / MAX: 13.65

NCNN

Target: CPU - Model: shufflenet-v2

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: CPU - Model: shufflenet-v21230.98331.96662.94993.93324.9165SE +/- 0.03, N = 3SE +/- 0.01, N = 3SE +/- 0.03, N = 34.344.334.37MIN: 4.24 / MAX: 4.94MIN: 4.27 / MAX: 4.4MIN: 4.25 / MAX: 5.791. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

oneDNN

Harness: Deconvolution Batch deconv_3d - Data Type: f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 1.5Harness: Deconvolution Batch deconv_3d - Data Type: f32 - Engine: CPU1230.78291.56582.34873.13163.9145SE +/- 0.00242, N = 3SE +/- 0.01044, N = 3SE +/- 0.00663, N = 33.448903.479773.45026MIN: 3.4MIN: 3.43MIN: 3.41. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

KeyDB

OpenBenchmarking.orgOps/sec, More Is BetterKeyDB 6.0.16123140K280K420K560K700KSE +/- 2274.07, N = 3SE +/- 837.37, N = 3SE +/- 422.72, N = 3653144.31650558.98647412.541. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

WebP Image Encode

Encode Settings: Quality 100

OpenBenchmarking.orgEncode Time - Seconds, Fewer Is BetterWebP Image Encode 1.1Encode Settings: Quality 1001230.51411.02821.54232.05642.5705SE +/- 0.009, N = 3SE +/- 0.010, N = 3SE +/- 0.015, N = 32.2652.2672.2851. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -lpng16 -ljpeg

Intel Open Image Denoise

Scene: Memorial

OpenBenchmarking.orgImages / Sec, More Is BetterIntel Open Image Denoise 1.2.0Scene: Memorial123510152025SE +/- 0.02, N = 3SE +/- 0.02, N = 3SE +/- 0.05, N = 319.7119.6619.54

BRL-CAD

VGR Performance Metric

OpenBenchmarking.orgVGR Performance Metric, More Is BetterBRL-CAD 7.30.8VGR Performance Metric12330K60K90K120K150K1238551230191227971. (CXX) g++ options: -std=c++11 -pipe -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -rdynamic -lSM -lICE -lXi -lGLU -lGL -lGLdispatch -lX11 -lXext -lXrender -lpthread -ldl -luuid -lm

Hugin

Panorama Photo Assistant + Stitching Time

OpenBenchmarking.orgSeconds, Fewer Is BetterHuginPanorama Photo Assistant + Stitching Time1231224364860SE +/- 0.35, N = 3SE +/- 0.78, N = 4SE +/- 0.42, N = 355.0855.5455.22

NCNN

Target: CPU - Model: alexnet

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: CPU - Model: alexnet1233691215SE +/- 0.07, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 310.9910.9210.90MIN: 10.85 / MAX: 37.5MIN: 10.86 / MAX: 11.62MIN: 10.83 / MAX: 111. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Rodinia

Test: OpenMP LavaMD

OpenBenchmarking.orgSeconds, Fewer Is BetterRodinia 3.1Test: OpenMP LavaMD12350100150200250SE +/- 0.96, N = 3SE +/- 0.55, N = 3SE +/- 1.60, N = 3225.53223.69224.731. (CXX) g++ options: -O2 -lOpenCL

C-Blosc

Compressor: blosclz

OpenBenchmarking.orgMB/s, More Is BetterC-Blosc 2.0 Beta 5Compressor: blosclz1232K4K6K8K10KSE +/- 3.65, N = 3SE +/- 9.60, N = 3SE +/- 22.72, N = 39271.29346.19318.61. (CXX) g++ options: -rdynamic

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 3.9.0Binary: Pathtracer ISPC - Model: Asian Dragon12348121620SE +/- 0.06, N = 3SE +/- 0.14, N = 3SE +/- 0.05, N = 317.6717.8117.72MIN: 17.49 / MAX: 17.87MIN: 17.44 / MAX: 18.21MIN: 17.53 / MAX: 17.97

libavif avifenc

Encoder Speed: 2

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.7.3Encoder Speed: 21231224364860SE +/- 0.31, N = 3SE +/- 0.24, N = 3SE +/- 0.11, N = 354.4654.0454.181. (CXX) g++ options: -O3 -fPIC

WebP Image Encode

Encode Settings: Quality 100, Lossless, Highest Compression

OpenBenchmarking.orgEncode Time - Seconds, Fewer Is BetterWebP Image Encode 1.1Encode Settings: Quality 100, Lossless, Highest Compression123918273645SE +/- 0.02, N = 3SE +/- 0.04, N = 3SE +/- 0.04, N = 341.2240.9041.041. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -lpng16 -ljpeg

libavif avifenc

Encoder Speed: 8

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.7.3Encoder Speed: 81231.28522.57043.85565.14086.426SE +/- 0.007, N = 3SE +/- 0.027, N = 3SE +/- 0.019, N = 35.6695.6865.7121. (CXX) g++ options: -O3 -fPIC

PyPerformance

Benchmark: pickle_pure_python

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: pickle_pure_python12390180270360450SE +/- 2.40, N = 3SE +/- 0.67, N = 3403400403

Zstd Compression

Compression Level: 19

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.4.5Compression Level: 191231224364860SE +/- 0.00, N = 3SE +/- 0.32, N = 3SE +/- 0.38, N = 354.354.454.01. (CC) gcc options: -O3 -pthread -lz -llzma

SVT-VP9

Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.1Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p1234080120160200SE +/- 0.14, N = 3SE +/- 0.33, N = 3SE +/- 0.81, N = 3200.60201.18199.741. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

Mobile Neural Network

Model: SqueezeNetV1.0

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2020-09-17Model: SqueezeNetV1.0123246810SE +/- 0.036, N = 3SE +/- 0.021, N = 3SE +/- 0.018, N = 37.5577.5247.503MIN: 7.35 / MAX: 28.35MIN: 7.37 / MAX: 8.84MIN: 7.35 / MAX: 9.051. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

G'MIC

Test: Plotting Isosurface Of A 3D Volume, 1000 Times

OpenBenchmarking.orgSeconds, Fewer Is BetterG'MICTest: Plotting Isosurface Of A 3D Volume, 1000 Times123510152025SE +/- 0.17, N = 3SE +/- 0.08, N = 3SE +/- 0.03, N = 319.2819.3219.181. Version 2.4.5, Copyright (c) 2008-2019, David Tschumperle.

Timed LLVM Compilation

Time To Compile

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed LLVM Compilation 10.0Time To Compile123130260390520650SE +/- 2.27, N = 3SE +/- 4.02, N = 3SE +/- 3.87, N = 3605.56606.32602.00

libavif avifenc

Encoder Speed: 0

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.7.3Encoder Speed: 012320406080100SE +/- 0.59, N = 3SE +/- 0.07, N = 3SE +/- 0.30, N = 390.0190.1889.541. (CXX) g++ options: -O3 -fPIC

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 1.5Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU1233691215SE +/- 0.00, N = 3SE +/- 0.07, N = 3SE +/- 0.01, N = 310.6110.6510.57MIN: 10.55MIN: 10.52MIN: 10.51. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Basis Universal

Settings: ETC1S

OpenBenchmarking.orgSeconds, Fewer Is BetterBasis Universal 1.12Settings: ETC1S1231224364860SE +/- 0.24, N = 3SE +/- 0.32, N = 3SE +/- 0.34, N = 350.7150.9351.051. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

Zstd Compression

Compression Level: 3

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.4.5Compression Level: 31239001800270036004500SE +/- 5.94, N = 3SE +/- 2.76, N = 3SE +/- 4.38, N = 34366.54338.34345.21. (CC) gcc options: -O3 -pthread -lz -llzma

Rodinia

Test: OpenMP CFD Solver

OpenBenchmarking.orgSeconds, Fewer Is BetterRodinia 3.1Test: OpenMP CFD Solver123510152025SE +/- 0.04, N = 3SE +/- 0.04, N = 3SE +/- 0.05, N = 320.1420.1220.011. (CXX) g++ options: -O2 -lOpenCL

InfluxDB

Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000

OpenBenchmarking.orgval/sec, More Is BetterInfluxDB 1.8.2Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000123300K600K900K1200K1500KSE +/- 1855.52, N = 3SE +/- 5496.75, N = 3SE +/- 2799.34, N = 31201483.21194436.41201966.8

Mlpack Benchmark

Benchmark: scikit_ica

OpenBenchmarking.orgSeconds, Fewer Is BetterMlpack BenchmarkBenchmark: scikit_ica1231224364860SE +/- 0.13, N = 3SE +/- 0.16, N = 3SE +/- 0.08, N = 351.4951.8151.68

Mobile Neural Network

Model: inception-v3

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2020-09-17Model: inception-v3123918273645SE +/- 0.32, N = 3SE +/- 0.15, N = 3SE +/- 0.17, N = 340.9241.1741.11MIN: 40.1 / MAX: 98.32MIN: 40.61 / MAX: 100.93MIN: 40.44 / MAX: 104.581. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

WebP Image Encode

Encode Settings: Quality 100, Highest Compression

OpenBenchmarking.orgEncode Time - Seconds, Fewer Is BetterWebP Image Encode 1.1Encode Settings: Quality 100, Highest Compression123246810SE +/- 0.019, N = 3SE +/- 0.017, N = 3SE +/- 0.016, N = 37.0517.0117.0191. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -lpng16 -ljpeg

LuxCoreRender

Scene: Rainbow Colors and Prism

OpenBenchmarking.orgM samples/sec, More Is BetterLuxCoreRender 2.3Scene: Rainbow Colors and Prism1230.40280.80561.20841.61122.014SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 31.791.781.79MIN: 1.78 / MAX: 1.82MIN: 1.76 / MAX: 1.82MIN: 1.77 / MAX: 1.85

oneDNN

Harness: IP Batch 1D - Data Type: bf16bf16bf16 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 1.5Harness: IP Batch 1D - Data Type: bf16bf16bf16 - Engine: CPU123246810SE +/- 0.00100, N = 3SE +/- 0.03696, N = 3SE +/- 0.02158, N = 38.065878.110448.08919MIN: 8MIN: 8MIN: 81. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

dav1d

Video Input: Summer Nature 4K

OpenBenchmarking.orgFPS, More Is Betterdav1d 0.7.0Video Input: Summer Nature 4K1234080120160200SE +/- 0.11, N = 3SE +/- 0.15, N = 3SE +/- 0.31, N = 3178.79177.82178.64MIN: 156.78 / MAX: 198.38MIN: 155.22 / MAX: 197.63MIN: 154.31 / MAX: 198.671. (CC) gcc options: -pthread

PyPerformance

Benchmark: float

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: float12320406080100SE +/- 0.30, N = 3SE +/- 0.07, N = 3SE +/- 0.41, N = 394.695.194.7

AOM AV1

Encoder Mode: Speed 8 Realtime

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 2.0Encoder Mode: Speed 8 Realtime123816243240SE +/- 0.04, N = 3SE +/- 0.05, N = 3SE +/- 0.11, N = 332.7832.7632.611. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Hierarchical INTegration

Test: FLOAT

OpenBenchmarking.orgQUIPs, More Is BetterHierarchical INTegration 1.0Test: FLOAT12390M180M270M360M450MSE +/- 393118.30, N = 3SE +/- 325962.67, N = 3SE +/- 645266.62, N = 3430439779.89429449066.36428315485.281. (CC) gcc options: -O3 -march=native -lm

Mlpack Benchmark

Benchmark: scikit_qda

OpenBenchmarking.orgSeconds, Fewer Is BetterMlpack BenchmarkBenchmark: scikit_qda1231020304050SE +/- 0.11, N = 3SE +/- 0.32, N = 14SE +/- 0.30, N = 343.0243.2243.12

SVT-VP9

Tuning: Visual Quality Optimized - Input: Bosphorus 1080p

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.1Tuning: Visual Quality Optimized - Input: Bosphorus 1080p123306090120150SE +/- 0.25, N = 3SE +/- 0.36, N = 3SE +/- 0.51, N = 3156.08155.67155.361. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

oneDNN

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 1.5Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU1230.47170.94341.41511.88682.3585SE +/- 0.01389, N = 3SE +/- 0.00344, N = 3SE +/- 0.00361, N = 32.095072.096432.08705MIN: 2.04MIN: 2.06MIN: 2.061. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

AOM AV1

Encoder Mode: Speed 4 Two-Pass

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 2.0Encoder Mode: Speed 4 Two-Pass1230.5041.0081.5122.0162.52SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.00, N = 32.242.232.231. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

TensorFlow Lite

Model: Mobilenet Float

OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: Mobilenet Float12330K60K90K120K150KSE +/- 99.22, N = 3SE +/- 36.88, N = 3SE +/- 74.51, N = 3133967133478133377

LibRaw

Post-Processing Benchmark

OpenBenchmarking.orgMpix/sec, More Is BetterLibRaw 0.20Post-Processing Benchmark123918273645SE +/- 0.07, N = 3SE +/- 0.08, N = 3SE +/- 0.01, N = 338.6638.8238.831. (CXX) g++ options: -O2 -fopenmp -ljpeg -lz -lm

TensorFlow Lite

Model: Inception ResNet V2

OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: Inception ResNet V2123500K1000K1500K2000K2500KSE +/- 1919.48, N = 3SE +/- 1370.60, N = 3SE +/- 2061.83, N = 3256399325541772553237

Timed Apache Compilation

Time To Compile

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Apache Compilation 2.4.41Time To Compile123612182430SE +/- 0.02, N = 3SE +/- 0.07, N = 3SE +/- 0.00, N = 325.5125.5025.61

SVT-AV1

Encoder Mode: Enc Mode 4 - Input: 1080p

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 0.8Encoder Mode: Enc Mode 4 - Input: 1080p1230.89531.79062.68593.58124.4765SE +/- 0.011, N = 3SE +/- 0.010, N = 3SE +/- 0.012, N = 33.9633.9673.9791. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

NAMD

ATPase Simulation - 327,506 Atoms

OpenBenchmarking.orgdays/ns, Fewer Is BetterNAMD 2.14ATPase Simulation - 327,506 Atoms1230.35970.71941.07911.43881.7985SE +/- 0.00295, N = 3SE +/- 0.00324, N = 3SE +/- 0.00436, N = 31.598551.592261.59601
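NAMD reports wall-clock days required per nanosecond of simulated time, so lower is better; for example, 1.59855 days/ns is equivalent to roughly 1 / 1.59855 ≈ 0.626 ns of simulation per day of runtime.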

Caffe

Model: GoogleNet - Acceleration: CPU - Iterations: 100

OpenBenchmarking.orgMilli-Seconds, Fewer Is BetterCaffe 2020-02-13Model: GoogleNet - Acceleration: CPU - Iterations: 10012330K60K90K120K150KSE +/- 72.07, N = 3SE +/- 196.94, N = 3SE +/- 207.96, N = 31221101222991218541. (CXX) g++ options: -fPIC -O3 -rdynamic -lboost_system -lboost_thread -lboost_filesystem -lboost_chrono -lboost_date_time -lboost_atomic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

dav1d

Video Input: Summer Nature 1080p

OpenBenchmarking.orgFPS, More Is Betterdav1d 0.7.0Video Input: Summer Nature 1080p123110220330440550SE +/- 0.66, N = 3SE +/- 1.82, N = 3SE +/- 1.11, N = 3496.04495.75494.28MIN: 384.41 / MAX: 542.46MIN: 384.94 / MAX: 544.39MIN: 380.44 / MAX: 541.51. (CC) gcc options: -pthread

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 1.5Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU1233691215SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 312.5812.5912.63MIN: 12.36MIN: 12.37MIN: 12.431. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

OpenVKL

Benchmark: vklBenchmark

OpenBenchmarking.orgItems / Sec, More Is BetterOpenVKL 0.9Benchmark: vklBenchmark1234080120160200SE +/- 0.76, N = 3SE +/- 1.33, N = 3SE +/- 0.45, N = 3175.67176.28176.14MIN: 1 / MAX: 755MIN: 1 / MAX: 749MIN: 1 / MAX: 759

Basis Universal

Settings: UASTC Level 0

OpenBenchmarking.orgSeconds, Fewer Is BetterBasis Universal 1.12Settings: UASTC Level 0123246810SE +/- 0.014, N = 3SE +/- 0.010, N = 3SE +/- 0.008, N = 37.9567.9827.9721. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon Obj

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 3.9.0Binary: Pathtracer ISPC - Model: Asian Dragon Obj12348121620SE +/- 0.03, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 315.7015.6615.65MIN: 15.6 / MAX: 15.88MIN: 15.58 / MAX: 15.85MIN: 15.57 / MAX: 15.82

PyPerformance

Benchmark: crypto_pyaes

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: crypto_pyaes12320406080100SE +/- 0.13, N = 3SE +/- 0.12, N = 3SE +/- 0.10, N = 397.797.697.9

oneDNN

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 1.5Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU1230.63821.27641.91462.55283.191SE +/- 0.00103, N = 3SE +/- 0.00212, N = 3SE +/- 0.02581, N = 32.828632.836612.82811MIN: 2.68MIN: 2.68MIN: 2.651. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

AOM AV1

Encoder Mode: Speed 6 Two-Pass

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 2.0Encoder Mode: Speed 6 Two-Pass1230.7921.5842.3763.1683.96SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.01, N = 33.523.523.511. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Rodinia

Test: OpenMP Leukocyte

OpenBenchmarking.orgSeconds, Fewer Is BetterRodinia 3.1Test: OpenMP Leukocyte12320406080100SE +/- 0.16, N = 3SE +/- 0.17, N = 3SE +/- 0.15, N = 3109.68109.64109.371. (CXX) g++ options: -O2 -lOpenCL

Caffe

Model: AlexNet - Acceleration: CPU - Iterations: 100

OpenBenchmarking.orgMilli-Seconds, Fewer Is BetterCaffe 2020-02-13Model: AlexNet - Acceleration: CPU - Iterations: 10012310K20K30K40K50KSE +/- 101.10, N = 3SE +/- 130.16, N = 3SE +/- 74.48, N = 34797148100480191. (CXX) g++ options: -fPIC -O3 -rdynamic -lboost_system -lboost_thread -lboost_filesystem -lboost_chrono -lboost_date_time -lboost_atomic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

LAMMPS Molecular Dynamics Simulator

Model: Rhodopsin Protein

OpenBenchmarking.orgns/day, More Is BetterLAMMPS Molecular Dynamics Simulator 24Aug2020Model: Rhodopsin Protein123246810SE +/- 0.021, N = 3SE +/- 0.008, N = 3SE +/- 0.012, N = 37.5107.5147.4941. (CXX) g++ options: -O3 -pthread -lm

Timed HMMer Search

Pfam Database Search

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed HMMer Search 3.3.1Pfam Database Search123306090120150SE +/- 0.11, N = 3SE +/- 0.28, N = 3SE +/- 0.27, N = 3133.45133.26133.101. (CC) gcc options: -O3 -pthread -lhmmer -leasel -lm

Mobile Neural Network

Model: resnet-v2-50

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2020-09-17Model: resnet-v2-50123816243240SE +/- 0.09, N = 3SE +/- 0.14, N = 3SE +/- 0.07, N = 335.6335.6035.54MIN: 35.04 / MAX: 97.88MIN: 35.19 / MAX: 80.6MIN: 35.2 / MAX: 71.131. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

RawTherapee

Total Benchmark Time

OpenBenchmarking.orgSeconds, Fewer Is BetterRawTherapeeTotal Benchmark Time1231224364860SE +/- 0.09, N = 3SE +/- 0.04, N = 3SE +/- 0.02, N = 353.9553.8653.811. RawTherapee, version 5.5, command line.

Caffe

Model: AlexNet - Acceleration: CPU - Iterations: 200

OpenBenchmarking.orgMilli-Seconds, Fewer Is BetterCaffe 2020-02-13Model: AlexNet - Acceleration: CPU - Iterations: 20012320K40K60K80K100KSE +/- 100.00, N = 3SE +/- 34.60, N = 3SE +/- 123.16, N = 39623296039959911. (CXX) g++ options: -fPIC -O3 -rdynamic -lboost_system -lboost_thread -lboost_filesystem -lboost_chrono -lboost_date_time -lboost_atomic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Blender

Blend File: BMW27 - Compute: CPU-Only

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 2.90Blend File: BMW27 - Compute: CPU-Only123306090120150SE +/- 0.15, N = 3SE +/- 0.28, N = 3SE +/- 0.04, N = 3151.14151.51151.24

oneDNN

Harness: Deconvolution Batch deconv_1d - Data Type: bf16bf16bf16 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 1.5Harness: Deconvolution Batch deconv_1d - Data Type: bf16bf16bf16 - Engine: CPU12348121620SE +/- 0.02, N = 3SE +/- 0.05, N = 3SE +/- 0.01, N = 314.5214.5514.51MIN: 14.31MIN: 14.29MIN: 14.311. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Tesseract OCR

Time To OCR 7 Images

OpenBenchmarking.orgSeconds, Fewer Is BetterTesseract OCR 4.0.0Time To OCR 7 Images123612182430SE +/- 0.04, N = 3SE +/- 0.14, N = 3SE +/- 0.02, N = 326.6426.7026.63

AOM AV1

Encoder Mode: Speed 6 Realtime

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 2.0Encoder Mode: Speed 6 Realtime12348121620SE +/- 0.04, N = 3SE +/- 0.02, N = 3SE +/- 0.03, N = 317.8917.9217.931. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

dav1d

Video Input: Chimera 1080p

OpenBenchmarking.orgFPS, More Is Betterdav1d 0.7.0Video Input: Chimera 1080p123130260390520650SE +/- 1.75, N = 3SE +/- 1.88, N = 3SE +/- 3.14, N = 3610.06610.40609.04MIN: 465.77 / MAX: 779.74MIN: 465.82 / MAX: 795.4MIN: 463.93 / MAX: 783.911. (CC) gcc options: -pthread

Blender

Blend File: Fishy Cat - Compute: CPU-Only

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 2.90Blend File: Fishy Cat - Compute: CPU-Only12350100150200250SE +/- 0.17, N = 3SE +/- 0.27, N = 3SE +/- 0.06, N = 3208.93209.39209.24

Blender

Blend File: Classroom - Compute: CPU-Only

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 2.90Blend File: Classroom - Compute: CPU-Only123100200300400500SE +/- 0.93, N = 3SE +/- 0.31, N = 3SE +/- 0.76, N = 3451.27452.25451.81

GPAW

Input: Carbon Nanotube

OpenBenchmarking.orgSeconds, Fewer Is BetterGPAW 20.1Input: Carbon Nanotube12350100150200250SE +/- 0.71, N = 3SE +/- 0.35, N = 3SE +/- 0.28, N = 3249.72249.96250.261. (CC) gcc options: -pthread -shared -lxc -lblas -lmpi

PyPerformance

Benchmark: django_template

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: django_template1231122334455SE +/- 0.09, N = 3SE +/- 0.06, N = 3SE +/- 0.03, N = 346.947.047.0

oneDNN

Harness: Deconvolution Batch deconv_1d - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 1.5Harness: Deconvolution Batch deconv_1d - Data Type: u8s8f32 - Engine: CPU1230.33090.66180.99271.32361.6545SE +/- 0.00183, N = 3SE +/- 0.00089, N = 3SE +/- 0.00104, N = 31.468601.470761.46784MIN: 1.45MIN: 1.46MIN: 1.451. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

TensorFlow Lite

Model: Inception V4

OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: Inception V4123600K1200K1800K2400K3000KSE +/- 4029.11, N = 3SE +/- 2935.10, N = 3SE +/- 2268.55, N = 3283685028326072831243

WebP Image Encode

Encode Settings: Quality 100, Lossless

OpenBenchmarking.orgEncode Time - Seconds, Fewer Is BetterWebP Image Encode 1.1Encode Settings: Quality 100, Lossless12348121620SE +/- 0.03, N = 3SE +/- 0.00, N = 3SE +/- 0.03, N = 317.9117.8817.921. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -lpng16 -ljpeg

TNN

Target: CPU - Model: MobileNet v2

OpenBenchmarking.orgms, Fewer Is BetterTNN 0.2.3Target: CPU - Model: MobileNet v212370140210280350SE +/- 0.40, N = 3SE +/- 0.24, N = 3SE +/- 0.04, N = 3315.68315.40315.08MIN: 314.48 / MAX: 332.45MIN: 314.51 / MAX: 326.8MIN: 314.62 / MAX: 316.041. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

Mobile Neural Network

Model: mobilenet-v1-1.0

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2020-09-17Model: mobilenet-v1-1.01230.94971.89942.84913.79884.7485SE +/- 0.008, N = 3SE +/- 0.014, N = 3SE +/- 0.017, N = 34.2134.2214.215MIN: 4.13 / MAX: 5.67MIN: 4.14 / MAX: 5.06MIN: 4.12 / MAX: 8.761. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Caffe

Model: AlexNet - Acceleration: CPU - Iterations: 1000

OpenBenchmarking.orgMilli-Seconds, Fewer Is BetterCaffe 2020-02-13Model: AlexNet - Acceleration: CPU - Iterations: 1000123100K200K300K400K500KSE +/- 159.61, N = 3SE +/- 301.22, N = 3SE +/- 305.87, N = 34786044783534791691. (CXX) g++ options: -fPIC -O3 -rdynamic -lboost_system -lboost_thread -lboost_filesystem -lboost_chrono -lboost_date_time -lboost_atomic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

G'MIC

Test: 3D Elevated Function In Random Colors, 100 Times

OpenBenchmarking.orgSeconds, Fewer Is BetterG'MICTest: 3D Elevated Function In Random Colors, 100 Times1231428425670SE +/- 0.03, N = 3SE +/- 0.01, N = 3SE +/- 0.07, N = 363.1863.1963.291. Version 2.4.5, Copyright (c) 2008-2019, David Tschumperle.

TensorFlow Lite

Model: NASNet Mobile

OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: NASNet Mobile12340K80K120K160K200KSE +/- 113.93, N = 3SE +/- 369.09, N = 3SE +/- 248.04, N = 3181809181669181512

Blender

Blend File: Pabellon Barcelona - Compute: CPU-Only

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 2.90Blend File: Pabellon Barcelona - Compute: CPU-Only123110220330440550SE +/- 0.34, N = 3SE +/- 0.39, N = 3SE +/- 0.19, N = 3509.61508.87508.79

Embree

Binary: Pathtracer - Model: Asian Dragon Obj

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 3.9.0Binary: Pathtracer - Model: Asian Dragon Obj1233691215SE +/- 0.01, N = 3SE +/- 0.00, N = 3SE +/- 0.02, N = 312.8912.9112.90MIN: 12.85 / MAX: 13MIN: 12.87 / MAX: 13.02MIN: 12.82 / MAX: 13.01

SVT-AV1

Encoder Mode: Enc Mode 8 - Input: 1080p

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 0.8Encoder Mode: Enc Mode 8 - Input: 1080p123816243240SE +/- 0.11, N = 3SE +/- 0.14, N = 3SE +/- 0.04, N = 333.3633.3433.311. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

Caffe

Model: GoogleNet - Acceleration: CPU - Iterations: 200

OpenBenchmarking.org - Milli-Seconds, Fewer Is Better - Caffe 2020-02-13 - Model: GoogleNet - Acceleration: CPU - Iterations: 200
1: 243796 (SE +/- 275.20, N = 3)
2: 244154 (SE +/- 136.15, N = 3)
3: 243957 (SE +/- 490.17, N = 3)
1. (CXX) g++ options: -fPIC -O3 -rdynamic -lboost_system -lboost_thread -lboost_filesystem -lboost_chrono -lboost_date_time -lboost_atomic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Mlpack Benchmark

Benchmark: scikit_svm

OpenBenchmarking.org - Seconds, Fewer Is Better - Mlpack Benchmark - Benchmark: scikit_svm
1: 15.16 (SE +/- 0.01, N = 3)
2: 15.14 (SE +/- 0.00, N = 3)
3: 15.14 (SE +/- 0.01, N = 3)

GNU Octave Benchmark

OpenBenchmarking.org - Seconds, Fewer Is Better - GNU Octave Benchmark 4.4.1
1: 8.116 (SE +/- 0.023, N = 5)
2: 8.126 (SE +/- 0.020, N = 5)
3: 8.118 (SE +/- 0.031, N = 5)

Basis Universal

Settings: UASTC Level 2

OpenBenchmarking.org - Seconds, Fewer Is Better - Basis Universal 1.12 - Settings: UASTC Level 2
1: 30.99 (SE +/- 0.02, N = 3)
2: 30.96 (SE +/- 0.01, N = 3)
3: 30.99 (SE +/- 0.00, N = 3)
1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

Git

Time To Complete Common Git Commands

OpenBenchmarking.org - Seconds, Fewer Is Better - Git - Time To Complete Common Git Commands
1: 50.94 (SE +/- 0.03, N = 3)
2: 50.97 (SE +/- 0.08, N = 3)
3: 50.91 (SE +/- 0.06, N = 3)
1. git version 2.20.1

BYTE Unix Benchmark

Computational Test: Dhrystone 2

OpenBenchmarking.org - LPS, More Is Better - BYTE Unix Benchmark 3.6 - Computational Test: Dhrystone 2
1: 41673100.1 (SE +/- 41963.68, N = 3)
2: 41712858.1 (SE +/- 123890.70, N = 3)
3: 41700019.8 (SE +/- 18764.26, N = 3)

Basis Universal

Settings: UASTC Level 3

OpenBenchmarking.org - Seconds, Fewer Is Better - Basis Universal 1.12 - Settings: UASTC Level 3
1: 58.72 (SE +/- 0.02, N = 3)
2: 58.76 (SE +/- 0.01, N = 3)
3: 58.78 (SE +/- 0.00, N = 3)
1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

TensorFlow Lite

Model: Mobilenet Quant

OpenBenchmarking.org - Microseconds, Fewer Is Better - TensorFlow Lite 2020-08-23 - Model: Mobilenet Quant
1: 136609 (SE +/- 53.26, N = 3)
2: 136479 (SE +/- 46.16, N = 3)
3: 136482 (SE +/- 32.25, N = 3)

dav1d

Video Input: Chimera 1080p 10-bit

OpenBenchmarking.org - FPS, More Is Better - dav1d 0.7.0 - Video Input: Chimera 1080p 10-bit
1: 86.74 (SE +/- 0.10, N = 3, MIN: 57.94 / MAX: 211.3)
2: 86.73 (SE +/- 0.08, N = 3, MIN: 57.89 / MAX: 206.91)
3: 86.67 (SE +/- 0.05, N = 3, MIN: 57.82 / MAX: 213.79)
1. (CC) gcc options: -pthread

oneDNN

Harness: IP Batch All - Data Type: bf16bf16bf16 - Engine: CPU

OpenBenchmarking.org - ms, Fewer Is Better - oneDNN 1.5 - Harness: IP Batch All - Data Type: bf16bf16bf16 - Engine: CPU
1: 95.17 (SE +/- 0.03, N = 3, MIN: 94.26)
2: 95.14 (SE +/- 0.03, N = 3, MIN: 94.32)
3: 95.10 (SE +/- 0.03, N = 3, MIN: 94.27)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Blender

Blend File: Barbershop - Compute: CPU-Only

OpenBenchmarking.org - Seconds, Fewer Is Better - Blender 2.90 - Blend File: Barbershop - Compute: CPU-Only
1: 605.44 (SE +/- 0.50, N = 3)
2: 605.36 (SE +/- 0.34, N = 3)
3: 605.01 (SE +/- 0.33, N = 3)

TensorFlow Lite

Model: SqueezeNet

OpenBenchmarking.org - Microseconds, Fewer Is Better - TensorFlow Lite 2020-08-23 - Model: SqueezeNet
1: 191171 (SE +/- 377.66, N = 3)
2: 191148 (SE +/- 337.48, N = 3)
3: 191041 (SE +/- 359.21, N = 3)

TNN

Target: CPU - Model: SqueezeNet v1.1

OpenBenchmarking.org - ms, Fewer Is Better - TNN 0.2.3 - Target: CPU - Model: SqueezeNet v1.1
1: 296.37 (SE +/- 0.15, N = 3, MIN: 295.7 / MAX: 311.12)
2: 296.42 (SE +/- 0.22, N = 3, MIN: 295.78 / MAX: 308.68)
3: 296.31 (SE +/- 0.09, N = 3, MIN: 295.84 / MAX: 301.86)
1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

Caffe

Model: GoogleNet - Acceleration: CPU - Iterations: 1000

OpenBenchmarking.org - Milli-Seconds, Fewer Is Better - Caffe 2020-02-13 - Model: GoogleNet - Acceleration: CPU - Iterations: 1000
1: 1220000 (SE +/- 1694.74, N = 3)
2: 1220130 (SE +/- 576.40, N = 3)
3: 1220333 (SE +/- 638.13, N = 3)
1. (CXX) g++ options: -fPIC -O3 -rdynamic -lboost_system -lboost_thread -lboost_filesystem -lboost_chrono -lboost_date_time -lboost_atomic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Basis Universal

Settings: UASTC Level 2 + RDO Post-Processing

OpenBenchmarking.org - Seconds, Fewer Is Better - Basis Universal 1.12 - Settings: UASTC Level 2 + RDO Post-Processing
1: 721.82 (SE +/- 0.02, N = 3)
2: 721.87 (SE +/- 0.05, N = 3)
3: 721.68 (SE +/- 0.12, N = 3)
1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

Kripke

OpenBenchmarking.org - Throughput FoM, More Is Better - Kripke 1.2.4
1: 2914052 (SE +/- 49600.17, N = 9)
1. (CXX) g++ options: -O3 -fopenmp

PyPerformance

Benchmark: python_startup

OpenBenchmarking.org - Milliseconds, Fewer Is Better - PyPerformance 1.0.0 - Benchmark: python_startup
1: 10.5 (SE +/- 0.03, N = 3)
2: 10.5 (SE +/- 0.03, N = 3)
3: 10.5 (SE +/- 0.03, N = 3)

PyPerformance

Benchmark: regex_compile

OpenBenchmarking.org - Milliseconds, Fewer Is Better - PyPerformance 1.0.0 - Benchmark: regex_compile
1: 154 (SE +/- 0.58, N = 3)
2: 154 (SE +/- 0.33, N = 3)
3: 154

PyPerformance

Benchmark: raytrace

OpenBenchmarking.org - Milliseconds, Fewer Is Better - PyPerformance 1.0.0 - Benchmark: raytrace
1: 408 (SE +/- 0.67, N = 3)
2: 408
3: 408

PyPerformance

Benchmark: pathlib

OpenBenchmarking.org - Milliseconds, Fewer Is Better - PyPerformance 1.0.0 - Benchmark: pathlib
1: 17.8 (SE +/- 0.00, N = 3)
2: 17.8 (SE +/- 0.09, N = 3)
3: 17.8 (SE +/- 0.03, N = 3)

PyPerformance

Benchmark: go

OpenBenchmarking.org - Milliseconds, Fewer Is Better - PyPerformance 1.0.0 - Benchmark: go
1: 213 (SE +/- 0.67, N = 3)
2: 213
3: 213

SVT-AV1

Encoder Mode: Enc Mode 0 - Input: 1080p

OpenBenchmarking.org - Frames Per Second, More Is Better - SVT-AV1 0.8 - Encoder Mode: Enc Mode 0 - Input: 1080p
1: 0.133 (SE +/- 0.000, N = 3)
2: 0.133 (SE +/- 0.000, N = 3)
3: 0.133 (SE +/- 0.000, N = 3)
1. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

AOM AV1

Encoder Mode: Speed 0 Two-Pass

OpenBenchmarking.org - Frames Per Second, More Is Better - AOM AV1 2.0 - Encoder Mode: Speed 0 Two-Pass
1: 0.31 (SE +/- 0.00, N = 3)
2: 0.31 (SE +/- 0.00, N = 3)
3: 0.31 (SE +/- 0.00, N = 3)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

OpenCV

Test: DNN - Deep Neural Network

OpenBenchmarking.org - ms, Fewer Is Better - OpenCV 4.4 - Test: DNN - Deep Neural Network
1: 4393 (SE +/- 129.03, N = 15)
2: 4267 (SE +/- 128.33, N = 12)
3: 4301 (SE +/- 67.44, N = 12)
1. (CXX) g++ options: -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -ldl -lm -lpthread -lrt

oneDNN

Harness: IP Batch 1D - Data Type: f32 - Engine: CPU

OpenBenchmarking.org - ms, Fewer Is Better - oneDNN 1.5 - Harness: IP Batch 1D - Data Type: f32 - Engine: CPU
1: 3.17519 (SE +/- 0.00204, N = 3, MIN: 3.07)
2: 3.31121 (SE +/- 0.07364, N = 14, MIN: 3.1)
3: 3.17711 (SE +/- 0.00502, N = 3, MIN: 3.08)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

Renaissance

Test: Akka Unbalanced Cobwebbed Tree

OpenBenchmarking.org - ms, Fewer Is Better - Renaissance 0.10.0 - Test: Akka Unbalanced Cobwebbed Tree
1: 10146.38 (SE +/- 169.48, N = 15)
2: 9866.13 (SE +/- 115.84, N = 5)
3: 10202.48 (SE +/- 97.43, N = 5)

Renaissance

Test: Apache Spark Bayes

OpenBenchmarking.org - ms, Fewer Is Better - Renaissance 0.10.0 - Test: Apache Spark Bayes
1: 1531.73 (SE +/- 18.43, N = 25)
2: 1534.53 (SE +/- 18.56, N = 25)
3: 1526.99 (SE +/- 13.83, N = 5)
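
With the three configurations landing within a fraction of a percent of one another on most tests, any overall ranking comes down to aggregating many per-test ratios. Below is a minimal Python sketch of the geometric-mean aggregation commonly used for such cross-test summaries; the ratio values are hypothetical and the snippet is not part of the Phoronix Test Suite tooling.

    import math

    # Hypothetical per-test ratios of one configuration versus a baseline,
    # oriented so that larger is better (lower-is-better results inverted first).
    ratios = [1.003, 0.998, 1.012, 0.991]

    # Geometric mean = exp(mean(log(x))); it combines ratios from unrelated
    # tests without letting any single test dominate the summary.
    geomean = math.exp(sum(math.log(r) for r in ratios) / len(ratios))

    print(f"geometric mean ratio = {geomean:.4f}")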


Phoronix Test Suite v10.8.4