TR 3990X Okt

AMD Ryzen Threadripper 3990X 64-Core testing with a System76 Thelio Major (F4c Z5 BIOS) and AMD Radeon RX 5600 OEM/5600 XT / 5700/5700 8GB on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2010117-FI-TR3990XOK30
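For readers who want to reproduce the comparison locally, a minimal shell sketch follows; the result identifier comes from this page, while the package-manager line assumes an Ubuntu-style system:

  # Install the Phoronix Test Suite (package name assumed for Debian/Ubuntu)
  sudo apt-get install phoronix-test-suite
  # Run the same test selection and merge your numbers into this result file
  phoronix-test-suite benchmark 2010117-FI-TR3990XOK30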
The tests in this result file fall within the following categories:

Bioinformatics 2 Tests
BLAS (Basic Linear Algebra Sub-Routine) Tests 3 Tests
C/C++ Compiler Tests 7 Tests
CPU Massive 9 Tests
Creator Workloads 6 Tests
Database Test Suite 3 Tests
Fortran Tests 5 Tests
HPC - High Performance Computing 19 Tests
Imaging 2 Tests
Machine Learning 8 Tests
Molecular Dynamics 5 Tests
MPI Benchmarks 5 Tests
Multi-Core 6 Tests
NVIDIA GPU Compute 6 Tests
OpenMPI Tests 5 Tests
Python Tests 6 Tests
Scientific Computing 11 Tests
Server 3 Tests
Server CPU Tests 2 Tests
Single-Threaded 3 Tests
Speech 2 Tests
Telephony 2 Tests
Vulkan Compute 3 Tests



Run Management

Result Identifier    Date               Test Duration
Linux 5.4            October 10 2020    8 Hours, 11 Minutes
2                    October 10 2020    8 Hours, 24 Minutes
3                    October 10 2020    8 Hours, 29 Minutes



TR 3990X Okt - System Details (identical across the Linux 5.4, 2, and 3 runs)

Processor: AMD Ryzen Threadripper 3990X 64-Core @ 2.90GHz (64 Cores / 128 Threads)
Motherboard: System76 Thelio Major (F4c Z5 BIOS)
Chipset: AMD Starship/Matisse
Memory: 126GB
Disk: Samsung SSD 970 EVO Plus 500GB
Graphics: AMD Radeon RX 5600 OEM/5600 XT / 5700/5700 8GB (1750/875MHz)
Audio: AMD Navi 10 HDMI Audio
Monitor: G237HL
Network: Intel I211 + Intel Wi-Fi 6 AX200
OS: Ubuntu 20.04
Kernel: 5.4.0-47-generic (x86_64)
Desktop: GNOME Shell 3.36.4
Display Server: X Server 1.20.8
Display Driver: amdgpu 19.1.0
OpenGL: 4.6 Mesa 20.0.8 (LLVM 10.0.0)
Compiler: GCC 9.3.0
File-System: ext4
Screen Resolution: 1920x1080

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq ondemand - CPU Microcode: 0x8301025
Graphics Details: GLAMOR
Python Details: Python 2.7.18rc1 + Python 3.8.2
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional STIBP: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite): across the three runs, per-test relative performance spans roughly 100% to 104% for BYTE Unix Benchmark, KeyDB, RealSR-NCNN, NCNN, FFTE, Mlpack Benchmark, Timed MAFFT Alignment, NAMD, LAMMPS Molecular Dynamics Simulator, LibRaw, Kripke, LeelaChessZero, eSpeak-NG Speech Engine, Dolfyn, Monte Carlo Simulations of Ionised Nebulae, MPV, RNNoise, Hierarchical INTegration, Timed HMMer Search, WebP Image Encode, Apache CouchDB, Incompact3D, System GZIP Decompression, Sockperf, TNN, InfluxDB, AOM AV1, VkFFT, OpenVINO, GROMACS, Mobile Neural Network, Timed LLVM Compilation, GPAW, GLmark2, and Caffe.

The condensed side-by-side results table for Linux 5.4, 2, and 3 (covering every test and configuration in this file) is expanded into the individual per-test results that follow.

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.
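As a rough sketch of how these latencies can be reproduced outside the test suite, ncnn ships a benchmark utility in its source tree; the path and arguments below are assumptions about a typical build (argument order: loop count, thread count, powersave mode, GPU device, where -1 selects the CPU backend):

  # 8 timing loops on 128 threads, powersave off, -1 = CPU backend (no Vulkan)
  ./benchmark/benchncnn 8 128 0 -1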

NCNN 20200916 - Target: CPU - Model: blazeface (ms, fewer is better): Linux 5.4: 5.51 (SE +/- 0.04), 2: 6.10 (SE +/- 0.08), 3: 5.88 (SE +/- 0.03); N = 3 each. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better): Linux 5.4: 12.96 (SE +/- 0.04), 2: 14.34 (SE +/- 0.23), 3: 13.85 (SE +/- 0.10); N = 3 each. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: efficientnet-b0 (ms, fewer is better): Linux 5.4: 16.49 (SE +/- 0.02), 2: 18.21 (SE +/- 0.25), 3: 17.31 (SE +/- 0.08); N = 3 each. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: shufflenet-v2 (ms, fewer is better): Linux 5.4: 13.11 (SE +/- 0.08), 2: 14.34 (SE +/- 0.09), 3: 14.12 (SE +/- 0.09); N = 3 each. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: googlenet (ms, fewer is better): Linux 5.4: 25.59 (SE +/- 0.27), 2: 26.94 (SE +/- 0.42), 3: 26.31 (SE +/- 0.20); N = 3 each. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: yolov4-tiny (ms, fewer is better): Linux 5.4: 36.40 (SE +/- 0.03), 2: 35.05 (SE +/- 0.37), 3: 35.66 (SE +/- 0.35); N = 3 each. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

BYTE Unix Benchmark

This is a test of the BYTE Unix Benchmark. Learn more via the OpenBenchmarking.org test page.

BYTE Unix Benchmark 3.6 - Computational Test: Dhrystone 2 (LPS, more is better): Linux 5.4: 42607098.5 (SE +/- 97131.78), 2: 43503603.1 (SE +/- 631231.56), 3: 44247620.1 (SE +/- 365672.33); N = 3 each.

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: alexnet (ms, fewer is better): Linux 5.4: 12.16 (SE +/- 0.27), 2: 11.75 (SE +/- 0.09), 3: 12.17 (SE +/- 0.19); N = 3 each. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: Vulkan GPU - Model: resnet50 (ms, fewer is better): Linux 5.4: 9.92 (SE +/- 0.34), 2: 9.82 (SE +/- 0.12), 3: 9.58 (SE +/- 0.09); N = 3 each. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: Vulkan GPU - Model: shufflenet-v2 (ms, fewer is better): Linux 5.4: 3.43 (SE +/- 0.00), 2: 3.55 (SE +/- 0.11), 3: 3.43 (SE +/- 0.00); N = 3 each. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: mobilenet (ms, fewer is better): Linux 5.4: 27.40 (SE +/- 0.43), 2: 27.48 (SE +/- 0.37), 3: 28.21 (SE +/- 0.19); N = 3 each. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.
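A hedged sketch of how per-backend numbers like those below are typically generated with lc0's built-in benchmark mode; the binary path is an assumption, and the backend names mirror the BLAS, Eigen, and Random graphs in this file:

  ./lc0 benchmark --backend=blas
  ./lc0 benchmark --backend=eigen
  ./lc0 benchmark --backend=random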

LeelaChessZero 0.26 - Backend: BLAS (Nodes Per Second, more is better): Linux 5.4: 1516 (SE +/- 12.81), 2: 1505 (SE +/- 12.57), 3: 1474 (SE +/- 14.47); N = 3 each. (CXX) g++ options: -flto -pthread

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: Vulkan GPU - Model: blazeface (ms, fewer is better): Linux 5.4: 1.09 (SE +/- 0.01), 2: 1.06 (SE +/- 0.01), 3: 1.08 (SE +/- 0.01); N = 3 each. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Caffe

This is a benchmark of the Caffe deep learning framework; it currently supports the AlexNet and GoogleNet models and execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.
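A minimal sketch of Caffe's built-in timing mode, which is the kind of per-iteration measurement reported below; the model path is a hypothetical Caffe model-zoo layout:

  # Time 100 iterations of an AlexNet model definition on the CPU
  caffe time -model models/bvlc_alexnet/deploy.prototxt -iterations 100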

Caffe 2020-02-13 - Model: AlexNet - Acceleration: CPU - Iterations: 100 (Milli-Seconds, fewer is better): Linux 5.4: 54629 (SE +/- 541.26), 2: 56118 (SE +/- 135.75), 3: 56032 (SE +/- 239.93); N = 3 each. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better): Linux 5.4: 13.12 (SE +/- 0.31), 2: 13.47 (SE +/- 0.23), 3: 13.39 (SE +/- 0.20); N = 3 each. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: resnet50 (ms, fewer is better): Linux 5.4: 36.78 (SE +/- 0.49), 2: 36.39 (SE +/- 0.52), 3: 37.34 (SE +/- 0.64); N = 3 each. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.26 - Backend: Eigen (Nodes Per Second, more is better): Linux 5.4: 1472 (SE +/- 17.75, N = 3), 2: 1446 (SE +/- 20.76, N = 3), 3: 1483 (SE +/- 15.07, N = 8). (CXX) g++ options: -flto -pthread

Mlpack Benchmark

Mlpack benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

Mlpack Benchmark - Benchmark: scikit_linearridgeregression (Seconds, fewer is better): Linux 5.4: 1.61 (SE +/- 0.02), 2: 1.62 (SE +/- 0.01), 3: 1.65 (SE +/- 0.02); N = 3 each.

KeyDB

A benchmark of KeyDB as a multi-threaded fork of the Redis server. The KeyDB benchmark is conducted using memtier-benchmark. Learn more via the OpenBenchmarking.org test page.
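A rough sketch of driving a local KeyDB instance with memtier_benchmark, assuming the default Redis-compatible port; the thread, client, and duration values are illustrative only:

  # Start KeyDB with multiple server threads, then load it with memtier_benchmark
  keydb-server --server-threads 4 &
  memtier_benchmark -s 127.0.0.1 -p 6379 --protocol=redis --threads=8 --clients=50 --ratio=1:1 --test-time=60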

KeyDB 6.0.16 (Ops/sec, more is better): Linux 5.4: 450800.42 (SE +/- 7622.83), 2: 440299.41 (SE +/- 3068.12), 3: 447704.19 (SE +/- 5879.33); N = 3 each. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.
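A hedged sketch of running the classic Rhodopsin input from the LAMMPS bench/ directory under MPI; the binary name varies by build (lmp, lmp_mpi, ...), so treat this as illustrative:

  # One MPI rank per physical core of the 3990X
  mpirun -np 64 lmp -in bench/in.rhodo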

LAMMPS Molecular Dynamics Simulator 24Aug2020 - Model: Rhodopsin Protein (ns/day, more is better): Linux 5.4: 23.68 (SE +/- 0.40, N = 3), 2: 23.34 (SE +/- 0.35, N = 3), 3: 23.89 (SE +/- 0.34, N = 15). (CXX) g++ options: -O3 -pthread -lm

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: squeezenet (ms, fewer is better): Linux 5.4: 25.07 (SE +/- 0.10), 2: 25.35 (SE +/- 0.24), 3: 25.63 (SE +/- 0.08); N = 3 each. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: CPU - Model: resnet18 (ms, fewer is better): Linux 5.4: 18.63 (SE +/- 0.04), 2: 18.23 (SE +/- 0.34), 3: 18.47 (SE +/- 0.05); N = 3 each. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

RealSR-NCNN

RealSR-NCNN is an NCNN neural network implementation of the RealSR project and accelerated using the Vulkan API. RealSR is the Real-World Super Resolution via Kernel Estimation and Noise Injection. NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. This test profile times how long it takes to increase the resolution of a sample image by a scale of 4x with Vulkan. Learn more via the OpenBenchmarking.org test page.
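A minimal sketch of the standalone realsr-ncnn-vulkan tool performing the same 4x upscale; the file names are placeholders:

  ./realsr-ncnn-vulkan -i input.jpg -o output.png -s 4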

RealSR-NCNN 20200818 - Scale: 4x - TAA: No (Seconds, fewer is better): Linux 5.4: 16.88 (SE +/- 0.22), 2: 16.54 (SE +/- 0.03), 3: 16.58 (SE +/- 0.00); N = 3 each.

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: Vulkan GPU - Model: mobilenet (ms, fewer is better): Linux 5.4: 9.68 (SE +/- 0.09), 2: 9.53 (SE +/- 0.04), 3: 9.71 (SE +/- 0.13); N = 3 each. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
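As a sketch, the encode settings in the WebP graphs map roughly onto cwebp flags like those below; the exact flag combinations used by the test profile are an assumption, and the input file is a placeholder:

  # Default settings
  cwebp sample.jpg -o sample.webp
  # Quality 100, lossless, slowest/highest compression method
  cwebp -q 100 -lossless -m 6 sample.jpg -o sample_lossless.webp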

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless, Highest Compression (Encode Time - Seconds, fewer is better): Linux 5.4: 33.53 (SE +/- 0.01), 2: 33.77 (SE +/- 0.02), 3: 34.13 (SE +/- 0.06); N = 3 each. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

Mlpack Benchmark

Mlpack benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

Mlpack Benchmark - Benchmark: scikit_ica (Seconds, fewer is better): Linux 5.4: 51.50 (SE +/- 0.17), 2: 51.92 (SE +/- 0.45), 3: 52.39 (SE +/- 0.21); N = 3 each.

Mlpack Benchmark - Benchmark: scikit_qda (Seconds, fewer is better): Linux 5.4: 42.09 (SE +/- 0.29), 2: 42.43 (SE +/- 0.36), 3: 42.79 (SE +/- 0.14); N = 3 each.

Sockperf

This is a network socket API performance benchmark. Learn more via the OpenBenchmarking.org test page.
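A rough sketch of the client/server pairing behind the sockperf throughput and latency graphs; the IP, port, and durations are placeholders:

  # Server side
  sockperf server -i 127.0.0.1 -p 11111
  # Client side: throughput, ping-pong latency, and latency under load
  sockperf throughput -i 127.0.0.1 -p 11111 -t 10
  sockperf ping-pong  -i 127.0.0.1 -p 11111 -t 10
  sockperf under-load -i 127.0.0.1 -p 11111 -t 10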

Sockperf 3.4 - Test: Throughput (Messages Per Second, more is better): Linux 5.4: 563933 (SE +/- 6120.70), 2: 563495 (SE +/- 7244.39), 3: 572421 (SE +/- 5468.48); N = 5 each. (CXX) g++ options: --param -O3 -rdynamic -ldl -lpthread

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: resnet-v2-50 (ms, fewer is better): Linux 5.4: 32.80 (SE +/- 0.21, N = 13), 2: 32.63 (SE +/- 0.17, N = 15), 3: 32.29 (SE +/- 0.17, N = 12). (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: Vulkan GPU - Model: alexnet (ms, fewer is better): Linux 5.4: 30.49 (SE +/- 0.57), 2: 30.27 (SE +/- 0.72), 3: 30.02 (SE +/- 0.32); N = 3 each. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

FFTE

FFTE is a package by Daisuke Takahashi to compute Discrete Fourier Transforms of 1-, 2- and 3- dimensional sequences of length (2^p)*(3^q)*(5^r). Learn more via the OpenBenchmarking.org test page.

FFTE 7.0 - N=256, 3D Complex FFT Routine (MFLOPS, more is better): Linux 5.4: 129902.32 (SE +/- 375.92), 2: 127987.82 (SE +/- 813.33), 3: 128693.18 (SE +/- 201.66); N = 3 each. (F9X) gfortran options: -O3 -fomit-frame-pointer -fopenmp

Timed MAFFT Alignment

This test performs an alignment of 100 pyruvate decarboxylase sequences. Learn more via the OpenBenchmarking.org test page.
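A minimal sketch of the underlying mafft invocation, with a hypothetical input file standing in for the bundled LSU rRNA sequence set:

  # Align the sequences using automatic strategy selection
  mafft --auto lsu_sequences.fasta > aligned.fasta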

Timed MAFFT Alignment 7.471 - Multiple Sequence Alignment - LSU RNA (Seconds, fewer is better): Linux 5.4: 9.002 (SE +/- 0.056), 2: 9.056 (SE +/- 0.042), 3: 9.135 (SE +/- 0.030); N = 3 each. (CC) gcc options: -std=c99 -O3 -lm -lpthread

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.
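A heavily hedged sketch of aomenc invocations that approximate the encoder modes graphed below; flag spellings differ between libaom releases, so these are illustrative assumptions rather than the test profile's exact arguments:

  # Realtime encode at speed level 6
  aomenc --rt --cpu-used=6 --threads=64 -o out_rt.webm input.y4m
  # Two-pass encode at speed level 4
  aomenc --passes=2 --cpu-used=4 -o out_2pass.webm input.y4m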

AOM AV1 2.0 - Encoder Mode: Speed 6 Realtime (Frames Per Second, more is better): Linux 5.4: 18.09 (SE +/- 0.06), 2: 18.12 (SE +/- 0.03), 3: 17.86 (SE +/- 0.08); N = 3 each. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 24Aug2020 - Model: 20k Atoms (ns/day, more is better): Linux 5.4: 26.56 (SE +/- 0.06), 2: 26.29 (SE +/- 0.11), 3: 26.18 (SE +/- 0.09); N = 3 each. (CXX) g++ options: -O3 -pthread -lm

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.
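A hedged sketch of running the 327,506-atom ATPase system, which the NAMD project distributes as the f1atpase benchmark input; the file layout and thread count are assumptions:

  # Run on all 128 hardware threads
  namd2 +p128 f1atpase/f1atpase.namd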

NAMD 2.14 - ATPase Simulation - 327,506 Atoms (days/ns, fewer is better): Linux 5.4: 0.43908 (SE +/- 0.00089), 2: 0.43837 (SE +/- 0.00080), 3: 0.44425 (SE +/- 0.00402); N = 3 each.

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.
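A minimal sketch of OpenVINO's bundled benchmark_app run against one of the models named in these graphs; the model path assumes an Open Model Zoo style download and is only illustrative:

  # Measure throughput/latency of person-detection-0106 on the CPU plugin
  benchmark_app -m intel/person-detection-0106/FP16/person-detection-0106.xml -d CPU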

OpenVINO 2021.1 - Model: Person Detection 0106 FP16 - Device: CPU (ms, fewer is better): Linux 5.4: 4662.58 (SE +/- 34.97), 2: 4683.32 (SE +/- 16.66), 3: 4721.76 (SE +/- 29.15); N = 3 each. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

Sockperf

This is a network socket API performance benchmark. Learn more via the OpenBenchmarking.org test page.

Sockperf 3.4 - Test: Latency Ping Pong (usec, fewer is better): Linux 5.4: 3.755 (SE +/- 0.027), 2: 3.798 (SE +/- 0.025), 3: 3.802 (SE +/- 0.018); N = 5 each. (CXX) g++ options: --param -O3 -rdynamic -ldl -lpthread

MPV

MPV is an open-source, cross-platform media player. This test profile measures the frame rate that can be achieved when decoding is left unsynchronized (desynchronized mode). Learn more via the OpenBenchmarking.org test page.
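A rough sketch of an unsynchronized decode run with mpv; the file name is the usual Big Buck Bunny 4K sample and is an assumption here:

  # Decode as fast as possible, ignoring audio and display timing
  mpv --no-config --no-audio --untimed bbb_sunflower_2160p_60fps_normal.mp4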

MPV - Video Input: Big Buck Bunny Sunflower 4K - Decode: Software Only (FPS, more is better): Linux 5.4: 1046.24 (SE +/- 5.25), 2: 1037.84 (SE +/- 1.81), 3: 1050.73 (SE +/- 1.72); N = 3 each. mpv 0.32.0

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1 - Model: Person Detection 0106 FP16 - Device: CPU (FPS, more is better): Linux 5.4: 6.68 (SE +/- 0.04), 2: 6.64 (SE +/- 0.04), 3: 6.60 (SE +/- 0.04); N = 3 each. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.

LibRaw 0.20 - Post-Processing Benchmark (Mpix/sec, more is better): Linux 5.4: 41.87 (SE +/- 0.26), 2: 41.55 (SE +/- 0.28), 3: 41.40 (SE +/- 0.08); N = 3 each. (CXX) g++ options: -O2 -fopenmp -ljpeg -lz -lm

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: vgg16 (ms, fewer is better): Linux 5.4: 52.69 (SE +/- 0.56), 2: 52.14 (SE +/- 1.04), 3: 52.12 (SE +/- 0.64); N = 3 each. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

TNN

TNN is an open-source deep learning reasoning framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3 - Target: CPU - Model: SqueezeNet v1.1 (ms, fewer is better): Linux 5.4: 245.84 (SE +/- 0.60), 2: 244.17 (SE +/- 0.37), 3: 246.78 (SE +/- 0.33); N = 3 each. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

Kripke

Kripke is a simple, scalable, 3D Sn deterministic particle transport code. Its primary purpose is to research how data layout, programming paradigms and architectures affect the implementation and performance of Sn transport. Kripke is developed by LLNL. Learn more via the OpenBenchmarking.org test page.

Kripke 1.2.4 (Throughput FoM, more is better): Linux 5.4: 44930063 (SE +/- 151580.08), 2: 44873723 (SE +/- 81815.46), 3: 44476553 (SE +/- 132345.53); N = 3 each. (CXX) g++ options: -O3 -fopenmp

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1 - Model: Person Detection 0106 FP32 - Device: CPU (ms, fewer is better): Linux 5.4: 4799.77 (SE +/- 11.05), 2: 4752.26 (SE +/- 26.72), 3: 4782.63 (SE +/- 16.09); N = 3 each. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

eSpeak-NG Speech Engine

This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.
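A minimal sketch of the equivalent espeak-ng invocation, with a placeholder text file standing in for the Project Gutenberg source:

  # Synthesize a text file to a WAV file instead of playing it back
  espeak-ng -f outline_of_science.txt -w speech.wav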

eSpeak-NG Speech Engine 20200907 - Text-To-Speech Synthesis (Seconds, fewer is better): Linux 5.4: 28.58 (SE +/- 0.17), 2: 28.46 (SE +/- 0.05), 3: 28.75 (SE +/- 0.07); N = 4 each. (CC) gcc options: -O2 -std=c99

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0 - Encoder Mode: Speed 8 Realtime (Frames Per Second, more is better): Linux 5.4: 34.83 (SE +/- 0.04), 2: 34.49 (SE +/- 0.06), 3: 34.69 (SE +/- 0.04); N = 3 each. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless (Encode Time - Seconds, fewer is better): Linux 5.4: 16.22 (SE +/- 0.03), 2: 16.31 (SE +/- 0.03), 3: 16.38 (SE +/- 0.03); N = 3 each. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.26 - Backend: Random (Nodes Per Second, more is better): Linux 5.4: 194138 (SE +/- 434.65), 2: 193664 (SE +/- 236.07), 3: 192306 (SE +/- 147.36); N = 3 each. (CXX) g++ options: -flto -pthread

Dolfyn

Dolfyn is a Computational Fluid Dynamics (CFD) code based on modern numerical simulation techniques. The Dolfyn test profile measures the execution time of the computational fluid dynamics demos that are bundled with Dolfyn. Learn more via the OpenBenchmarking.org test page.

Dolfyn 0.527 - Computational Fluid Dynamics (Seconds, fewer is better): Linux 5.4: 16.97 (SE +/- 0.23), 2: 17.04 (SE +/- 0.29), 3: 16.88 (SE +/- 0.19); N = 3 each.

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: Vulkan GPU - Model: squeezenet (ms, fewer is better): Linux 5.4: 5.69 (SE +/- 0.01), 2: 5.70 (SE +/- 0.02), 3: 5.74 (SE +/- 0.01); N = 3 each. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Monte Carlo Simulations of Ionised Nebulae

MOCASSIN (Monte Carlo Simulations of Ionised Nebulae) is a fully 3D or 2D photoionisation and dust radiative transfer code which employs a Monte Carlo approach to the transfer of radiation through media of arbitrary geometry and density distribution. Learn more via the OpenBenchmarking.org test page.

Monte Carlo Simulations of Ionised Nebulae 2019-03-24 - Input: Dust 2D tau100.0 (Seconds, fewer is better): Linux 5.4: 228 (SE +/- 1.00), 2: 229 (SE +/- 1.00), 3: 230 (SE +/- 1.00); N = 3 each. (F9X) gfortran options: -cpp -Jsource/ -ffree-line-length-0 -lm -std=legacy -O3 -O2 -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: inception-v3 (ms, fewer is better): Linux 5.4: 31.88 (SE +/- 0.16, N = 13), 2: 31.72 (SE +/- 0.20, N = 15), 3: 31.61 (SE +/- 0.09, N = 12). (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.
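A hedged sketch of an inch invocation matching the parameters in the first graph below; the flag names follow the inch README and may differ between versions, so treat them as assumptions:

  # 1024 concurrent writers, 10k-point batches, tag cardinality 2,5000,1, 10k points per series
  inch -v -c 1024 -b 10000 -t 2,5000,1 -p 10000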

InfluxDB 1.8.2 - Concurrent Streams: 1024 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, more is better): Linux 5.4: 1547196.9 (SE +/- 2382.01), 2: 1539068.1 (SE +/- 3242.42), 3: 1534264.5 (SE +/- 5690.48); N = 3 each.

Sockperf

This is a network socket API performance benchmark. Learn more via the OpenBenchmarking.org test page.

Sockperf 3.4 - Test: Latency Under Load (usec, fewer is better): Linux 5.4: 14.71 (SE +/- 0.06, N = 5), 2: 14.61 (SE +/- 0.10, N = 5), 3: 14.59 (SE +/- 0.17, N = 6). (CXX) g++ options: --param -O3 -rdynamic -ldl -lpthread

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0 - Encoder Mode: Speed 4 Two-Pass (Frames Per Second, more is better): Linux 5.4: 2.53 (SE +/- 0.00), 2: 2.51 (SE +/- 0.00), 3: 2.53 (SE +/- 0.00); N = 3 each. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: MobileNetV2_224 (ms, fewer is better): Linux 5.4: 5.465 (SE +/- 0.029, N = 13), 2: 5.442 (SE +/- 0.012, N = 15), 3: 5.422 (SE +/- 0.022, N = 12). (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: Vulkan GPU - Model: vgg16 (ms, fewer is better): Linux 5.4: 83.73 (SE +/- 0.45), 2: 84.12 (SE +/- 0.14), 3: 83.47 (SE +/- 0.10); N = 3 each. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

TNN

TNN is an open-source deep learning reasoning framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3 - Target: CPU - Model: MobileNet v2 (ms, fewer is better): Linux 5.4: 270.10 (SE +/- 3.67), 2: 268.81 (SE +/- 0.64), 3: 268.09 (SE +/- 2.03); N = 3 each. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

RNNoise

RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26 minute long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.
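A minimal sketch using the demo program bundled with the RNNoise sources; the input is assumed to be 48 kHz, 16-bit mono raw PCM, and the file names are placeholders:

  ./examples/rnnoise_demo noisy_speech.raw denoised_speech.raw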

RNNoise 2020-06-28 (Seconds, fewer is better): Linux 5.4: 18.64 (SE +/- 0.02), 2: 18.77 (SE +/- 0.03), 3: 18.74 (SE +/- 0.07); N = 3 each. (CC) gcc options: -O2 -pedantic -fvisibility=hidden

Incompact3D

Incompact3d is a Fortran-MPI based, finite-difference, high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.

Incompact3D 2020-09-17 - Input: Cylinder (Seconds, fewer is better): Linux 5.4: 183.00 (SE +/- 0.90), 2: 183.69 (SE +/- 1.34), 3: 184.30 (SE +/- 1.01); N = 3 each. (F9X) gfortran options: -cpp -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1 - Model: Face Detection 0106 FP32 - Device: CPU (ms, fewer is better): Linux 5.4: 3293.71 (SE +/- 15.60), 2: 3316.90 (SE +/- 23.91), 3: 3307.26 (SE +/- 7.03); N = 3 each. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

Hierarchical INTegration

This test runs the U.S. Department of Energy's Ames Laboratory Hierarchical INTegration (HINT) benchmark. Learn more via the OpenBenchmarking.org test page.

Hierarchical INTegration 1.0 - Test: FLOAT (QUIPs, more is better): Linux 5.4: 376705975.84 (SE +/- 918474.47), 2: 375641035.45 (SE +/- 1833857.81), 3: 374097932.78 (SE +/- 1447224.32); N = 3 each. (CC) gcc options: -O3 -march=native -lm

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Highest Compression (Encode Time - Seconds, fewer is better): Linux 5.4: 7.141 (SE +/- 0.015), 2: 7.123 (SE +/- 0.009), 3: 7.172 (SE +/- 0.010); N = 3 each. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

Timed HMMer Search

This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of the Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.
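A rough sketch of the corresponding hmmsearch call, with placeholder file names for the Pfam profiles and the Sevenless query:

  hmmsearch --cpu 64 Pfam-A.hmm sevenless.fasta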

Timed HMMer Search 3.3.1 - Pfam Database Search (Seconds, fewer is better): Linux 5.4: 165.00 (SE +/- 0.34), 2: 166.11 (SE +/- 0.25), 3: 165.75 (SE +/- 0.30); N = 3 each. (CC) gcc options: -O3 -pthread -lhmmer -leasel -lm

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: Vulkan GPU - Model: yolov4-tiny (ms, fewer is better): Linux 5.4: 14.46 (SE +/- 0.10), 2: 14.37 (SE +/- 0.06), 3: 14.39 (SE +/- 0.05); N = 3 each. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: Vulkan GPU - Model: efficientnet-b0 (ms, fewer is better): Linux 5.4: 11.53 (SE +/- 0.22), 2: 11.46 (SE +/- 0.03), 3: 11.50 (SE +/- 0.21); N = 3 each. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Default (Encode Time - Seconds, fewer is better)
  Linux 5.4: 1.503 (SE +/- 0.003, N = 3; Min: 1.5 / Avg: 1.5 / Max: 1.51)
  2: 1.501 (SE +/- 0.001, N = 3; Min: 1.5 / Avg: 1.5 / Max: 1.5)
  3: 1.494 (SE +/- 0.001, N = 3; Min: 1.49 / Avg: 1.49 / Max: 1.5)
  1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: Vulkan GPU - Model: googlenet (ms, fewer is better)
  Linux 5.4: 8.45 (SE +/- 0.21, N = 3; Min: 8.16 / Avg: 8.45 / Max: 8.86; MIN: 7.12 / MAX: 32.39)
  2: 8.50 (SE +/- 0.15, N = 3; Min: 8.21 / Avg: 8.5 / Max: 8.69; MIN: 7.1 / MAX: 29.7)
  3: 8.46 (SE +/- 0.05, N = 3; Min: 8.38 / Avg: 8.46 / Max: 8.55; MIN: 7.12 / MAX: 27.97)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Apache CouchDB

This is a bulk insertion benchmark of Apache CouchDB. CouchDB is a document-oriented NoSQL database implemented in Erlang. Learn more via the OpenBenchmarking.org test page.
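The bulk inserts map onto CouchDB's _bulk_docs endpoint; one batch looks roughly like this (host, database name and documents are placeholders):
  curl -X POST http://127.0.0.1:5984/benchdb/_bulk_docs -H 'Content-Type: application/json' -d '{"docs": [{"name": "doc1"}, {"name": "doc2"}]}'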

Apache CouchDB 3.1.1 - Bulk Size: 100 - Inserts: 1000 - Rounds: 24 (Seconds, fewer is better)
  Linux 5.4: 118.09 (SE +/- 1.01, N = 3; Min: 116.15 / Avg: 118.09 / Max: 119.56)
  2: 118.57 (SE +/- 0.48, N = 3; Min: 117.85 / Avg: 118.57 / Max: 119.47)
  3: 117.93 (SE +/- 1.12, N = 3; Min: 115.82 / Avg: 117.93 / Max: 119.65)
  1. (CXX) g++ options: -std=c++14 -lmozjs-68 -lm -lerl_interface -lei -fPIC -MMD

Caffe

This is a benchmark of the Caffe deep learning framework, currently covering the AlexNet and GoogleNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.
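The timing figures come from Caffe's built-in benchmarking mode, which can be reproduced roughly with the following (deploy.prototxt is a placeholder model definition; the test profile's exact arguments may differ):
  caffe time -model deploy.prototxt -iterations 200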

Caffe 2020-02-13 - Model: AlexNet - Acceleration: CPU - Iterations: 200 (Milli-Seconds, fewer is better)
  Linux 5.4: 111761 (SE +/- 186.59, N = 3; Min: 111451 / Avg: 111761.33 / Max: 112096)
  2: 112360 (SE +/- 157.09, N = 3; Min: 112047 / Avg: 112360.33 / Max: 112537)
  3: 112011 (SE +/- 122.83, N = 3; Min: 111873 / Avg: 112011 / Max: 112256)
  1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

System GZIP Decompression

This simple test measures the time to decompress a gzipped tarball (the Qt5 toolkit source package). Learn more via the OpenBenchmarking.org test page.
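Measuring this by hand amounts to timing the decompression of the tarball, e.g. (the file name is a placeholder):
  time gzip -dk qt-everywhere-src.tar.gz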

System GZIP Decompression (Seconds, fewer is better)
  Linux 5.4: 2.864 (SE +/- 0.018, N = 3; Min: 2.84 / Avg: 2.86 / Max: 2.9)
  2: 2.849 (SE +/- 0.002, N = 3; Min: 2.85 / Avg: 2.85 / Max: 2.85)
  3: 2.856 (SE +/- 0.001, N = 3; Min: 2.85 / Avg: 2.86 / Max: 2.86)

OpenVINO

This is a test of Intel's OpenVINO neural network toolkit, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.
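These figures are generated with OpenVINO's bundled benchmark_app; a comparable manual run is roughly (the model path is a placeholder):
  benchmark_app -m age-gender-recognition-retail-0013.xml -d CPU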

OpenVINO 2021.1 - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (FPS, more is better)
  Linux 5.4: 33390.81 (SE +/- 76.15, N = 3; Min: 33293.46 / Avg: 33390.81 / Max: 33540.92)
  2: 33277.84 (SE +/- 207.95, N = 3; Min: 33066.59 / Avg: 33277.84 / Max: 33693.72)
  3: 33230.50 (SE +/- 175.53, N = 3; Min: 33029.58 / Avg: 33230.5 / Max: 33580.27)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100 (Encode Time - Seconds, fewer is better)
  Linux 5.4: 2.294 (SE +/- 0.007, N = 3; Min: 2.28 / Avg: 2.29 / Max: 2.31)
  2: 2.298 (SE +/- 0.002, N = 3; Min: 2.29 / Avg: 2.3 / Max: 2.3)
  3: 2.305 (SE +/- 0.002, N = 3; Min: 2.3 / Avg: 2.31 / Max: 2.31)
  1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

MPV

MPV is an open-source, cross-platform media player. This test profile measures the frame rate that can be achieved with playback running desynchronized from the display (unthrottled decode). Learn more via the OpenBenchmarking.org test page.
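An unthrottled decode run of this kind can be approximated with mpv's untimed playback, roughly as follows (the file name is a placeholder):
  mpv --no-audio --untimed bbb_sunflower_1080p.mp4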

MPV - Video Input: Big Buck Bunny Sunflower 1080p - Decode: Software Only (FPS, more is better)
  Linux 5.4: 2574.81 (SE +/- 13.10, N = 3; Min: 2552.09 / Avg: 2574.81 / Max: 2597.48; MIN: 1200 / MAX: 6000.24)
  2: 2563.28 (SE +/- 18.75, N = 3; Min: 2538.54 / Avg: 2563.28 / Max: 2600.06; MIN: 1200.01 / MAX: 6000.24)
  3: 2568.83 (SE +/- 3.63, N = 3; Min: 2562.33 / Avg: 2568.83 / Max: 2574.88; MIN: 1333.32 / MAX: 6000.24)
  1. mpv 0.32.0

Caffe

This is a benchmark of the Caffe deep learning framework, currently covering the AlexNet and GoogleNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

Caffe 2020-02-13 - Model: GoogleNet - Acceleration: CPU - Iterations: 200 (Milli-Seconds, fewer is better)
  Linux 5.4: 301954 (SE +/- 143.11, N = 3; Min: 301676 / Avg: 301954 / Max: 302152)
  2: 301781 (SE +/- 1840.25, N = 3; Min: 298101 / Avg: 301781 / Max: 303674)
  3: 303072 (SE +/- 72.13, N = 3; Min: 302935 / Avg: 303071.67 / Max: 303180)
  1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

OpenVINO

This is a test of Intel's OpenVINO neural network toolkit, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1 - Model: Face Detection 0106 FP32 - Device: CPU (FPS, more is better)
  Linux 5.4: 9.58 (SE +/- 0.03, N = 3; Min: 9.53 / Avg: 9.58 / Max: 9.63)
  2: 9.56 (SE +/- 0.03, N = 3; Min: 9.52 / Avg: 9.56 / Max: 9.62)
  3: 9.60 (SE +/- 0.02, N = 3; Min: 9.56 / Avg: 9.6 / Max: 9.64)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.
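The listed parameters map onto inch's command-line flags; this configuration corresponds roughly to the following (assuming a local InfluxDB instance; flag spelling as used by the inch tool):
  inch -c 64 -b 10000 -t 2,5000,1 -p 10000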

InfluxDB 1.8.2 - Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, more is better)
  Linux 5.4: 1498935.7 (SE +/- 898.05, N = 3; Min: 1497923.9 / Avg: 1498935.73 / Max: 1500726.8)
  2: 1501667.1 (SE +/- 2189.38, N = 3; Min: 1497515.3 / Avg: 1501667.1 / Max: 1504948)
  3: 1495506.9 (SE +/- 2142.12, N = 3; Min: 1492755.6 / Avg: 1495506.87 / Max: 1499726.6)

Caffe

This is a benchmark of the Caffe deep learning framework, currently covering the AlexNet and GoogleNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

Caffe 2020-02-13 - Model: GoogleNet - Acceleration: CPU - Iterations: 100 (Milli-Seconds, fewer is better)
  Linux 5.4: 150216 (SE +/- 388.21, N = 3; Min: 149818 / Avg: 150215.67 / Max: 150992)
  2: 150709 (SE +/- 53.58, N = 3; Min: 150644 / Avg: 150708.67 / Max: 150815)
  3: 150522 (SE +/- 476.07, N = 3; Min: 149772 / Avg: 150522 / Max: 151405)
  1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.

InfluxDB 1.8.2 - Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, more is better)
  Linux 5.4: 1148311.0 (SE +/- 2154.85, N = 3; Min: 1144277.9 / Avg: 1148311 / Max: 1151643.1)
  2: 1146038.5 (SE +/- 2893.07, N = 3; Min: 1141697.4 / Avg: 1146038.47 / Max: 1151522)
  3: 1144571.8 (SE +/- 1149.41, N = 3; Min: 1142696.2 / Avg: 1144571.83 / Max: 1146660.7)

OpenVINO

This is a test of Intel's OpenVINO neural network toolkit, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1 - Model: Face Detection 0106 FP16 - Device: CPU (ms, fewer is better)
  Linux 5.4: 3294.43 (SE +/- 7.33, N = 3; Min: 3283.02 / Avg: 3294.43 / Max: 3308.11)
  2: 3286.54 (SE +/- 21.19, N = 3; Min: 3246.15 / Avg: 3286.54 / Max: 3317.86)
  3: 3284.11 (SE +/- 16.95, N = 3; Min: 3253.76 / Avg: 3284.11 / Max: 3312.37)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

OpenVINO 2021.1 - Model: Person Detection 0106 FP32 - Device: CPU (FPS, more is better)
  Linux 5.4: 6.50 (SE +/- 0.02, N = 3; Min: 6.45 / Avg: 6.5 / Max: 6.53)
  2: 6.52 (SE +/- 0.03, N = 3; Min: 6.48 / Avg: 6.52 / Max: 6.57)
  3: 6.51 (SE +/- 0.02, N = 3; Min: 6.47 / Avg: 6.51 / Max: 6.55)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

OpenVINO 2021.1 - Model: Age Gender Recognition Retail 0013 FP32 - Device: CPU (FPS, more is better)
  Linux 5.4: 33014.26 (SE +/- 147.89, N = 3; Min: 32841.98 / Avg: 33014.26 / Max: 33308.62)
  2: 32931.77 (SE +/- 275.04, N = 3; Min: 32583.88 / Avg: 32931.77 / Max: 33474.72)
  3: 32913.00 (SE +/- 193.36, N = 3; Min: 32670.55 / Avg: 32913 / Max: 33295.15)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: Vulkan GPU - Model: resnet18 (ms, fewer is better)
  Linux 5.4: 3.30 (SE +/- 0.01, N = 3; Min: 3.28 / Avg: 3.3 / Max: 3.31; MIN: 3.23 / MAX: 6.06)
  2: 3.31 (SE +/- 0.01, N = 3; Min: 3.29 / Avg: 3.31 / Max: 3.33; MIN: 3.22 / MAX: 5.09)
  3: 3.30 (SE +/- 0.01, N = 3; Min: 3.28 / Avg: 3.3 / Max: 3.32; MIN: 3.22 / MAX: 4.97)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: mobilenet-v1-1.0 (ms, fewer is better)
  Linux 5.4: 5.496 (SE +/- 0.019, N = 13; Min: 5.37 / Avg: 5.5 / Max: 5.65; MIN: 5.09 / MAX: 6.1)
  2: 5.480 (SE +/- 0.022, N = 14; Min: 5.32 / Avg: 5.48 / Max: 5.69; MIN: 5.07 / MAX: 6.39)
  3: 5.480 (SE +/- 0.021, N = 12; Min: 5.37 / Avg: 5.48 / Max: 5.62; MIN: 5.04 / MAX: 6.09)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.
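The encoder modes in these results correspond to aomenc's two-pass mode at different --cpu-used speed levels, roughly like the following sketch (input and output file names are placeholders):
  aomenc --passes=2 --cpu-used=6 -o output.webm input.y4m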

AOM AV1 2.0 - Encoder Mode: Speed 6 Two-Pass (Frames Per Second, more is better)
  Linux 5.4: 3.90 (SE +/- 0.00, N = 3; Min: 3.9 / Avg: 3.9 / Max: 3.91)
  2: 3.90 (SE +/- 0.00, N = 3; Min: 3.89 / Avg: 3.9 / Max: 3.9)
  3: 3.91 (SE +/- 0.00, N = 3; Min: 3.91 / Avg: 3.91 / Max: 3.92)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

VkFFT

VkFFT is a Fast Fourier Transform (FFT) library that is GPU-accelerated via the Vulkan API. The VkFFT benchmark measures FFT performance across many transform sizes before returning an overall benchmark score. Learn more via the OpenBenchmarking.org test page.

VkFFT 2020-09-29 (Benchmark Score, more is better)
  Linux 5.4: 20490 (SE +/- 39.00, N = 3; Min: 20412 / Avg: 20490 / Max: 20529)
  2: 20538 (SE +/- 4.16, N = 3; Min: 20530 / Avg: 20538 / Max: 20544)
  3: 20532 (SE +/- 2.85, N = 3; Min: 20526 / Avg: 20531.67 / Max: 20535)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better)
  Linux 5.4: 4.54 (SE +/- 0.00, N = 3; Min: 4.54 / Avg: 4.54 / Max: 4.54; MIN: 4.26 / MAX: 7.47)
  2: 4.53 (SE +/- 0.00, N = 3; Min: 4.53 / Avg: 4.53 / Max: 4.54; MIN: 4.25 / MAX: 5.75)
  3: 4.53 (SE +/- 0.00, N = 3; Min: 4.52 / Avg: 4.53 / Max: 4.53; MIN: 4.23 / MAX: 5.62)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916 - Target: Vulkan GPU - Model: mnasnet (ms, fewer is better)
  Linux 5.4: 4.61 (SE +/- 0.00, N = 3; Min: 4.61 / Avg: 4.61 / Max: 4.62; MIN: 4.36 / MAX: 4.94)
  2: 4.60 (SE +/- 0.01, N = 3; Min: 4.58 / Avg: 4.6 / Max: 4.61; MIN: 4.35 / MAX: 4.94)
  3: 4.61 (SE +/- 0.01, N = 3; Min: 4.6 / Avg: 4.61 / Max: 4.62; MIN: 4.35 / MAX: 5.07)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

GROMACS

This test runs the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package on the CPU with the water_GMX50 data set. Learn more via the OpenBenchmarking.org test page.
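A water_GMX50 run boils down to gmx mdrun on a prepared .tpr input, roughly like the following (the input file name, thread count and step count are placeholders, not the test profile's exact values):
  gmx mdrun -s water_gmx50.tpr -ntomp 64 -nsteps 1000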

GROMACS 2020.3 - Water Benchmark (Ns Per Day, more is better)
  Linux 5.4: 3.761 (SE +/- 0.005, N = 3; Min: 3.75 / Avg: 3.76 / Max: 3.77)
  2: 3.754 (SE +/- 0.005, N = 3; Min: 3.74 / Avg: 3.75 / Max: 3.76)
  3: 3.753 (SE +/- 0.006, N = 3; Min: 3.75 / Avg: 3.75 / Max: 3.77)
  1. (CXX) g++ options: -O3 -pthread -lrt -lpthread -lm

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler. Learn more via the OpenBenchmarking.org test page.
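Reproducing the build outside the test harness is a standard CMake build of the LLVM source tree, roughly:
  cmake -G Ninja -DCMAKE_BUILD_TYPE=Release ../llvm && ninja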

Timed LLVM Compilation 10.0 - Time To Compile (Seconds, fewer is better)
  Linux 5.4: 203.22 (SE +/- 1.18, N = 3; Min: 201.11 / Avg: 203.22 / Max: 205.2)
  2: 203.64 (SE +/- 0.91, N = 3; Min: 201.84 / Avg: 203.64 / Max: 204.76)
  3: 203.35 (SE +/- 0.41, N = 3; Min: 202.9 / Avg: 203.35 / Max: 204.17)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better)
  Linux 5.4: 8.72 (SE +/- 0.01, N = 3; Min: 8.7 / Avg: 8.72 / Max: 8.74; MIN: 7.38 / MAX: 31.16)
  2: 8.71 (SE +/- 0.20, N = 3; Min: 8.35 / Avg: 8.71 / Max: 9.02; MIN: 7.4 / MAX: 31.92)
  3: 8.71 (SE +/- 0.19, N = 3; Min: 8.46 / Avg: 8.71 / Max: 9.07; MIN: 7.43 / MAX: 30.58)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenVINO

This is a test of Intel's OpenVINO neural network toolkit, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1 - Model: Face Detection 0106 FP16 - Device: CPU (FPS, more is better)
  Linux 5.4: 9.65 (SE +/- 0.03, N = 3; Min: 9.61 / Avg: 9.65 / Max: 9.7)
  2: 9.64 (SE +/- 0.05, N = 3; Min: 9.56 / Avg: 9.64 / Max: 9.74)
  3: 9.65 (SE +/- 0.07, N = 3; Min: 9.52 / Avg: 9.65 / Max: 9.76)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

GPAW

GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.
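GPAW runs are ordinary Python scripts launched under MPI, roughly like the following (the script name and rank count are placeholders, and the gpaw python wrapper for the MPI-enabled interpreter is an assumption about how the calculation is launched):
  mpirun -np 64 gpaw python carbon_nanotube.py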

GPAW 20.1 - Input: Carbon Nanotube (Seconds, fewer is better)
  Linux 5.4: 110.69 (SE +/- 0.21, N = 3; Min: 110.33 / Avg: 110.69 / Max: 111.06)
  2: 110.65 (SE +/- 0.16, N = 3; Min: 110.33 / Avg: 110.65 / Max: 110.84)
  3: 110.58 (SE +/- 0.09, N = 3; Min: 110.41 / Avg: 110.58 / Max: 110.7)
  1. (CC) gcc options: -pthread -shared -fwrapv -O2 -lxc -lblas -lmpi

GLmark2

This is a test of Linaro's glmark2 port, currently using the X11 OpenGL 2.0 target. GLmark2 is a basic OpenGL benchmark. Learn more via the OpenBenchmarking.org test page.
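A comparable manual run at this resolution would be roughly:
  glmark2 -s 1920x1080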

GLmark2 2020.04 - Resolution: 1920 x 1080 (Score, more is better)
  Linux 5.4: 8656
  2: 8656
  3: 8649

Mlpack Benchmark

Mlpack benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

Mlpack Benchmark - Benchmark: scikit_svm (Seconds, fewer is better)
  Linux 5.4: 20.90 (SE +/- 0.04, N = 3; Min: 20.83 / Avg: 20.9 / Max: 20.98)
  2: 20.91 (SE +/- 0.04, N = 3; Min: 20.85 / Avg: 20.91 / Max: 20.98)
  3: 20.91 (SE +/- 0.07, N = 3; Min: 20.78 / Avg: 20.91 / Max: 21)

OpenVINO

This is a test of Intel's OpenVINO neural network toolkit, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1 - Model: Age Gender Recognition Retail 0013 FP32 - Device: CPU (ms, fewer is better)
  Linux 5.4: 0.94 (SE +/- 0.00, N = 3; Min: 0.93 / Avg: 0.94 / Max: 0.94)
  2: 0.94 (SE +/- 0.01, N = 3; Min: 0.93 / Avg: 0.94 / Max: 0.95)
  3: 0.94 (SE +/- 0.01, N = 3; Min: 0.93 / Avg: 0.94 / Max: 0.95)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

OpenVINO 2021.1 - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (ms, fewer is better)
  Linux 5.4: 0.93 (SE +/- 0.00, N = 3; Min: 0.92 / Avg: 0.93 / Max: 0.93)
  2: 0.93 (SE +/- 0.01, N = 3; Min: 0.92 / Avg: 0.93 / Max: 0.94)
  3: 0.93 (SE +/- 0.01, N = 3; Min: 0.92 / Avg: 0.93 / Max: 0.94)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0 - Encoder Mode: Speed 0 Two-Pass (Frames Per Second, more is better)
  Linux 5.4: 0.33 (SE +/- 0.00, N = 3; Min: 0.32 / Avg: 0.33 / Max: 0.33)
  2: 0.33 (SE +/- 0.00, N = 3; Min: 0.33 / Avg: 0.33 / Max: 0.33)
  3: 0.33 (SE +/- 0.00, N = 3; Min: 0.33 / Avg: 0.33 / Max: 0.33)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916 - Target: CPU - Model: mnasnet (ms, fewer is better)
  Linux 5.4: 12.37 (SE +/- 0.19, N = 3; Min: 12.18 / Avg: 12.37 / Max: 12.75; MIN: 12.06 / MAX: 14.26)
  2: 14.58 (SE +/- 1.28, N = 3; Min: 13 / Avg: 14.58 / Max: 17.11; MIN: 12.24 / MAX: 23.88)
  3: 12.93 (SE +/- 0.05, N = 3; Min: 12.84 / Avg: 12.93 / Max: 13.02; MIN: 12.34 / MAX: 19.09)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: SqueezeNetV1.0 (ms, fewer is better)
  Linux 5.4: 8.451 (SE +/- 0.112, N = 13; Min: 8 / Avg: 8.45 / Max: 9.49; MIN: 7.67 / MAX: 10.49)
  2: 8.519 (SE +/- 0.181, N = 15; Min: 7.92 / Avg: 8.52 / Max: 10.62; MIN: 7.66 / MAX: 11.88)
  3: 8.664 (SE +/- 0.250, N = 12; Min: 7.78 / Avg: 8.66 / Max: 10.76; MIN: 7.6 / MAX: 12)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

100 Results Shown

NCNN:
  CPU - blazeface
  CPU-v3-v3 - mobilenet-v3
  CPU - efficientnet-b0
  CPU - shufflenet-v2
  CPU - googlenet
  CPU - yolov4-tiny
BYTE Unix Benchmark
NCNN:
  CPU - alexnet
  Vulkan GPU - resnet50
  Vulkan GPU - shufflenet-v2
  CPU - mobilenet
LeelaChessZero
NCNN
Caffe
NCNN:
  CPU-v2-v2 - mobilenet-v2
  CPU - resnet50
LeelaChessZero
Mlpack Benchmark
KeyDB
LAMMPS Molecular Dynamics Simulator
NCNN:
  CPU - squeezenet
  CPU - resnet18
RealSR-NCNN
NCNN
WebP Image Encode
Mlpack Benchmark:
  scikit_ica
  scikit_qda
Sockperf
Mobile Neural Network
NCNN
FFTE
Timed MAFFT Alignment
AOM AV1
LAMMPS Molecular Dynamics Simulator
NAMD
OpenVINO
Sockperf
MPV
OpenVINO
LibRaw
NCNN
TNN
Kripke
OpenVINO
eSpeak-NG Speech Engine
AOM AV1
WebP Image Encode
LeelaChessZero
Dolfyn
NCNN
Monte Carlo Simulations of Ionised Nebulae
Mobile Neural Network
InfluxDB
Sockperf
AOM AV1
Mobile Neural Network
NCNN
TNN
RNNoise
Incompact3D
OpenVINO
Hierarchical INTegration
WebP Image Encode
Timed HMMer Search
NCNN:
  Vulkan GPU - yolov4-tiny
  Vulkan GPU - efficientnet-b0
WebP Image Encode
NCNN
Apache CouchDB
Caffe
System GZIP Decompression
OpenVINO
WebP Image Encode
MPV
Caffe
OpenVINO
InfluxDB
Caffe
InfluxDB
OpenVINO:
  Face Detection 0106 FP16 - CPU
  Person Detection 0106 FP32 - CPU
  Age Gender Recognition Retail 0013 FP32 - CPU
NCNN
Mobile Neural Network
AOM AV1
VkFFT
NCNN:
  Vulkan GPU-v2-v2 - mobilenet-v2
  Vulkan GPU - mnasnet
GROMACS
Timed LLVM Compilation
NCNN
OpenVINO
GPAW
GLmark2
Mlpack Benchmark
OpenVINO:
  Age Gender Recognition Retail 0013 FP32 - CPU
  Age Gender Recognition Retail 0013 FP16 - CPU
AOM AV1
NCNN
Mobile Neural Network