core-i5-4670-december

Intel Core i5-4670 testing with an MSI B85M-P33 (MS-7817) v1.0 motherboard (V4.9 BIOS) and MSI Intel HD 4600 2GB graphics on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2012193-HA-COREI546718
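
If the Phoronix Test Suite is not already installed, a minimal sketch of the full workflow (assuming an Ubuntu/Debian system where the phoronix-test-suite package is available; the result ID is the one quoted above):

    # Install the Phoronix Test Suite from the distribution repositories
    sudo apt install phoronix-test-suite

    # Fetch this result file, install the same tests, run them locally,
    # and present your numbers alongside the results below
    phoronix-test-suite benchmark 2012193-HA-COREI546718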

Tests in this result file by category:

Bioinformatics: 2 tests
Timed Code Compilation: 2 tests
C/C++ Compiler Tests: 4 tests
CPU Massive: 3 tests
HPC - High Performance Computing: 4 tests
Machine Learning: 2 tests
Multi-Core: 4 tests
Programmer / Developer System Benchmarks: 5 tests
Scientific Computing: 2 tests
Server: 3 tests

Run Management

Result Identifier   Date               Test Duration
1                   December 19 2020   12 Minutes
1a                  December 19 2020   2 Hours, 51 Minutes
2                   December 19 2020   3 Hours, 2 Minutes
3                   December 19 2020   3 Hours, 7 Minutes
4                   December 19 2020   3 Hours, 7 Minutes
Average                                2 Hours, 28 Minutes
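
The identifiers 1, 1a, 2, 3, and 4 above label the individual runs stored in this result file. Locally, saved results can be inspected with standard Phoronix Test Suite subcommands; a small sketch (the result name is illustrative - substitute one reported by the first command):

    # List result files saved under ~/.phoronix-test-suite/test-results/
    phoronix-test-suite list-saved-results

    # Dump one saved result file as plain text
    phoronix-test-suite result-file-to-text core-i5-4670-december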


core-i5-4670-december - System Details (applies to result identifiers 1, 1a, 2, 3, and 4)

Processor: Intel Core i5-4670 @ 3.80GHz (4 Cores)
Motherboard: MSI B85M-P33 (MS-7817) v1.0 (V4.9 BIOS)
Chipset: Intel 4th Gen Core DRAM
Memory: 8GB
Disk: 2000GB Samsung SSD 860
Graphics: MSI Intel HD 4600 2GB (1200MHz)
Audio: Intel Xeon E3-1200 v3/4th
Monitor: DELL S2409W
Network: Realtek RTL8111/8168/8411
OS: Ubuntu 20.04
Kernel: 5.9.0-050900rc7daily20201002-generic (x86_64) 20201001
Desktop: GNOME Shell 3.36.3
Display Server: X Server 1.20.8
Display Driver: modesetting 1.20.8
OpenGL: 4.5 Mesa 20.0.8
Compiler: GCC 9.3.0
File-System: ext4
Screen Resolution: 1920x1080

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details: Scaling Governor: intel_cpufreq ondemand - CPU Microcode: 0x28 - Thermald 1.9.1

Security Details:
itlb_multihit: KVM: Mitigation of VMX disabled
l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes, SMT disabled
mds: Mitigation of Clear buffers; SMT disabled
meltdown: Mitigation of PTI
spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp
spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization
spectre_v2: Mitigation of Full generic retpoline IBPB: conditional IBRS_FW STIBP: disabled RSB filling
srbds: Mitigation of Microcode
tsx_async_abort: Not affected
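
The Compiler, Processor, and Security details above are gathered automatically from the host. The same information can be read back directly on any Linux system; a quick sketch using the standard tools and sysfs paths:

    # GCC configure flags, as recorded under "Compiler Details"
    gcc -v

    # Kernel CPU vulnerability/mitigation status, as recorded under "Security Details"
    grep . /sys/devices/system/cpu/vulnerabilities/*

    # Active CPU frequency scaling governor, as recorded under "Processor Details"
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor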

core-i5-4670-december - Results Overview

The original overview condenses every result below into a single matrix across result identifiers 1, 1a, 2, 3, and 4; all of its values are repeated in the detailed per-test sections that follow. Test profiles covered (overview identifier, then full name): onednn (oneDNN 2.0), ncnn (NCNN 20201218), build2 (Build2 0.13), node-web-tooling (Node.js V8 Web Tooling Benchmark), mafft (Timed MAFFT Alignment 7.471), coremark (CoreMark 1.0), sqlite-speedtest (SQLite Speedtest 3.30), build-ffmpeg (Timed FFmpeg Compilation 4.2.2), hmmer (Timed HMMer Search 3.3.1), simdjson (simdjson 0.7.1).

oneDNN

oneDNN 2.0 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, fewer is better)

Run   Result   SE          N   Min / Avg / Max          MIN
1a    13.64    +/- 0.02    3   13.6 / 13.64 / 13.67     13.52
2     16.37    +/- 0.01    3   16.36 / 16.37 / 16.38    16.21
3     17.27    +/- 0.05    3   17.17 / 17.27 / 17.35    17.03
4     17.10    +/- 0.03    3   17.04 / 17.1 / 17.14     16.9

1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
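
The SE column appears to be the standard error of the mean over the N trials (sample standard deviation divided by the square root of N). As a quick sanity check against run 1a above, approximating the middle trial by the reported average:

    # SE = sample std-dev / sqrt(N); prints "SE = 0.02" for run 1a's trials
    echo "13.60 13.64 13.67" | awk '{for(i=1;i<=NF;i++){s+=$i;q+=$i*$i}; m=s/NF; printf "SE = %.2f\n", sqrt((q/NF-m*m)/(NF-1))}'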

oneDNN 2.0 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)

Run   Result    SE             N   Min / Avg / Max       MIN
1a    4.92392   +/- 0.00651    3   4.91 / 4.92 / 4.93    4.88
2     5.65917   +/- 0.00675    3   5.65 / 5.66 / 5.67    5.61
3     5.72441   +/- 0.00635    3   5.71 / 5.72 / 5.73    5.67
4     5.73439   +/- 0.00987    3   5.72 / 5.73 / 5.75    5.68

1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, fewer is better)

Run   Result   SE          N   Min / Avg / Max          MIN
1a    10.77    +/- 0.03    3   10.73 / 10.77 / 10.84    10.6
2     11.24    +/- 0.03    3   11.19 / 11.24 / 11.27    11.08
3     11.08    +/- 0.04    3   11.04 / 11.08 / 11.15    10.83
4     11.40    +/- 0.17    3   11.08 / 11.4 / 11.66     10.84

1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better)

Run   Result   SE          N   Min / Avg / Max          MIN
1a    17.55    +/- 0.04    3   17.5 / 17.55 / 17.62     17.38
2     17.69    +/- 0.03    3   17.65 / 17.69 / 17.73    17.54
3     17.73    +/- 0.03    3   17.67 / 17.73 / 17.76    17.51
4     18.30    +/- 0.15    3   18.01 / 18.3 / 18.5      17.82

1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)

Run   Result    SE           N   Min / Avg / Max                MIN
1a    5407.38   +/- 28.62    3   5378.26 / 5407.38 / 5464.61    5323.35
2     5542.58   +/- 26.83    3   5510.78 / 5542.58 / 5595.91    5471.62
3     5624.84   +/- 21.34    3   5585.81 / 5624.84 / 5659.32    5559.14
4     5562.33   +/- 42.01    3   5506.02 / 5562.33 / 5644.49    5470.24

1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better)

Run   Result   SE          N   Min / Avg / Max          MIN
1a    30.30    +/- 0.05    3   30.22 / 30.3 / 30.4      29.87
2     31.44    +/- 0.03    3   31.39 / 31.44 / 31.48    31.29
3     31.43    +/- 0.02    3   31.4 / 31.43 / 31.47     31.24
4     31.37    +/- 0.05    3   31.27 / 31.37 / 31.44    31.11

1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

NCNN

NCNN is a high-performance neural network inference framework optimized for mobile and other platforms, developed by Tencent. Learn more via the OpenBenchmarking.org test page.
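
Each suite in this file can also be rerun on its own; the per-test profile names are the overview identifiers listed earlier (onednn, ncnn, build2, node-web-tooling, mafft, coremark, sqlite-speedtest, build-ffmpeg, hmmer, simdjson). For example:

    # Run only the NCNN test profile
    phoronix-test-suite benchmark ncnn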

NCNN 20201218 - Target: CPU - Model: mnasnet (ms, fewer is better)

Run   Result   SE          N   Min / Avg / Max       MIN / MAX
1a    7.93     +/- 0.14    3   7.78 / 7.93 / 8.21    7.75 / 8.36
2     7.92     +/- 0.08    3   7.8 / 7.92 / 8.06     7.76 / 8.26
3     8.06     +/- 0.14    3   7.79 / 8.06 / 8.22    7.76 / 10
4     7.77     +/- 0.03    3   7.74 / 7.77 / 7.82    7.7 / 7.85

1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

oneDNN

oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)

Run   Result    SE           N   Min / Avg / Max                MIN
1a    5466.34   +/- 38.18    3   5390.26 / 5466.34 / 5510.11    5366.55
2     5604.52   +/- 22.30    3   5560.06 / 5604.52 / 5629.79    5533.62
3     5563.07   +/- 18.31    3   5527.04 / 5563.07 / 5586.75    5497.5
4     5652.74   +/- 16.53    3   5620.92 / 5652.74 / 5676.4     5550.05

1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

NCNN

NCNN 20201218 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better)

Run   Result   SE          N   Min / Avg / Max        MIN / MAX
1a    9.30     +/- 0.02    3   9.27 / 9.3 / 9.33      9.21 / 11.83
2     9.32     +/- 0.03    3   9.28 / 9.32 / 9.38     9.22 / 21.83
3     9.59     +/- 0.30    3   9.28 / 9.59 / 10.19    9.2 / 10.85
4     9.58     +/- 0.31    3   9.25 / 9.58 / 10.2     9.18 / 11.85

1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

oneDNN

oneDNN 2.0 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, fewer is better)

Run   Result    SE            N   Min / Avg / Max                MIN
1a    8703.90   +/- 86.69     3   8539.06 / 8703.9 / 8832.84     8416.9
2     8840.59   +/- 91.42     3   8688.33 / 8840.59 / 9004.39    8597.38
3     8933.93   +/- 86.21     3   8780.76 / 8933.93 / 9079.09    8695.66
4     8939.74   +/- 113.41    3   8712.96 / 8939.74 / 9056.26    8605.75

1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

NCNN

NCNN 20201218 - Target: CPU - Model: yolov4-tiny (ms, fewer is better)

Run   Result   SE          N   Min / Avg / Max          MIN / MAX
1a    52.31    +/- 0.18    3   51.94 / 52.31 / 52.5     51.39 / 75.07
2     52.27    +/- 0.28    3   51.98 / 52.27 / 52.83    51.34 / 63.11
3     53.56    +/- 0.67    3   52.85 / 53.56 / 54.9     51.95 / 63.32
4     52.44    +/- 0.55    3   51.84 / 52.44 / 53.55    51.3 / 54.85

1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

oneDNN

oneDNN 2.0 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)

Run   Result   SE          N   Min / Avg / Max          MIN
1a    30.54    +/- 0.02    3   30.51 / 30.54 / 30.59    30.28
2     31.27    +/- 0.04    3   31.19 / 31.27 / 31.32    30.95
3     31.28    +/- 0.03    3   31.25 / 31.28 / 31.34    31.06
4     31.21    +/- 0.02    3   31.17 / 31.21 / 31.23    30.96

1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better)

Run   Result    SE           N   Min / Avg / Max                MIN
1a    5460.15   +/- 28.56    3   5411.48 / 5460.15 / 5510.38    5386.7
2     5558.77   +/- 1.45     3   5556.45 / 5558.77 / 5561.45    5514.32
3     5557.65   +/- 36.92    3   5493.85 / 5557.65 / 5621.74    5449.82
4     5587.25   +/- 15.69    3   5560.14 / 5587.25 / 5614.49    5532.38

1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

NCNN

NCNN 20201218 - Target: CPU - Model: regnety_400m (ms, fewer is better)

Run   Result   SE          N   Min / Avg / Max          MIN / MAX
1a    16.15    +/- 0.01    3   16.12 / 16.15 / 16.17    16.08 / 17.02
2     16.20    +/- 0.05    3   16.12 / 16.2 / 16.29     16.07 / 28.3
3     16.51    +/- 0.16    3   16.18 / 16.51 / 16.67    16.14 / 19.26
4     16.17    +/- 0.01    3   16.15 / 16.17 / 16.18    16.12 / 16.95

1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

oneDNN

oneDNN 2.0 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)

Run   Result    SE           N   Min / Avg / Max                MIN
1a    9031.76   +/- 15.55    3   9013.96 / 9031.76 / 9062.74    8754.79
2     9114.82   +/- 14.11    3   9094.03 / 9114.82 / 9141.75    8823.44
3     9232.94   +/- 39.97    3   9171.05 / 9232.94 / 9307.71    8959.28
4     9209.44   +/- 29.31    3   9175.04 / 9209.44 / 9267.75    8882.88

1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

NCNN

NCNN 20201218 - Target: CPU - Model: alexnet (ms, fewer is better)

Run   Result   SE          N   Min / Avg / Max          MIN / MAX
1a    25.19    +/- 0.02    3   25.16 / 25.19 / 25.22    25.08 / 33.51
2     25.71    +/- 0.01    3   25.68 / 25.71 / 25.72    25.62 / 27.63
3     25.53    +/- 0.17    3   25.18 / 25.53 / 25.73    25.1 / 32.92
4     25.32    +/- 0.08    3   25.24 / 25.32 / 25.47    25.1 / 35.86

1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

oneDNN

oneDNN 2.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, fewer is better)

Run   Result    SE             N   Min / Avg / Max       MIN
1a    7.70013   +/- 0.00354    3   7.69 / 7.7 / 7.7      7.61
2     7.81978   +/- 0.00431    3   7.81 / 7.82 / 7.82    7.74
3     7.84505   +/- 0.01551    3   7.83 / 7.85 / 7.88    7.75
4     7.82675   +/- 0.00461    3   7.82 / 7.83 / 7.83    7.74

1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

NCNN

NCNN 20201218 - Target: CPU - Model: mobilenet (ms, fewer is better)

Run   Result   SE          N   Min / Avg / Max          MIN / MAX
1a    36.33    +/- 0.02    3   36.31 / 36.33 / 36.37    36.21 / 38.04
2     36.95    +/- 0.21    3   36.69 / 36.95 / 37.36    36.53 / 38.66
3     36.87    +/- 0.33    3   36.34 / 36.87 / 37.46    36.18 / 39.62
4     36.93    +/- 0.36    3   36.55 / 36.93 / 37.66    36.12 / 50.81

1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code with Cargo-like features. Learn more via the OpenBenchmarking.org test page.

Build2 0.13 - Time To Compile (Seconds, fewer is better)

Run   Result   SE          N   Min / Avg / Max
1a    364.77   +/- 1.06    3   363.42 / 364.77 / 366.85
2     362.00   +/- 1.03    3   360.9 / 361.99 / 364.05
3     364.16   +/- 1.02    3   362.19 / 364.16 / 365.63
4     368.11   +/- 0.42    3   367.3 / 368.11 / 368.7

NCNN

NCNN 20201218 - Target: CPU - Model: squeezenet_ssd (ms, fewer is better)

Run   Result   SE          N   Min / Avg / Max          MIN / MAX
1a    51.94    +/- 0.32    3   51.32 / 51.94 / 52.34    51.15 / 55.05
2     52.25    +/- 0.26    3   51.77 / 52.25 / 52.66    51.5 / 55.44
3     52.00    +/- 0.63    3   51.27 / 52 / 53.26       51.17 / 53.56
4     51.40    +/- 0.21    3   51.17 / 51.4 / 51.81     51.03 / 61

1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

oneDNN

oneDNN 2.0 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)

Run   Result    SE             N   Min / Avg / Max       MIN
1a    6.28097   +/- 0.01198    3   6.26 / 6.28 / 6.29    6.23
2     6.35599   +/- 0.01670    3   6.34 / 6.36 / 6.39    6.28
3     6.37852   +/- 0.03782    3   6.33 / 6.38 / 6.45    6.29
4     6.33031   +/- 0.03023    3   6.27 / 6.33 / 6.38    6.22

1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

NCNN

NCNN 20201218 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better)

Run   Result   SE          N   Min / Avg / Max       MIN / MAX
1a    8.10     +/- 0.02    3   8.06 / 8.1 / 8.14     8.01 / 9.63
2     8.21     +/- 0.13    3   8.08 / 8.21 / 8.46    8.01 / 9.57
3     8.20     +/- 0.13    3   8.07 / 8.2 / 8.47     8 / 11.87
4     8.22     +/- 0.11    3   8.07 / 8.22 / 8.44    7.99 / 10.18

1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Node.js V8 Web Tooling Benchmark

Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, such as Babel, TypeScript, and Babylon. This test profile can test the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.

Node.js V8 Web Tooling Benchmark (runs/s, more is better)

Run   Result   SE          N   Min / Avg / Max
1a    9.70     +/- 0.07    3   9.61 / 9.7 / 9.84
2     9.80     +/- 0.09    3   9.67 / 9.8 / 9.98
3     9.84     +/- 0.04    3   9.76 / 9.84 / 9.89
4     9.74     +/- 0.06    3   9.63 / 9.74 / 9.84

1. Nodejs v10.19.0

Timed MAFFT Alignment

This test performs an alignment of 100 pyruvate decarboxylase sequences. Learn more via the OpenBenchmarking.org test page.

Timed MAFFT Alignment 7.471 - Multiple Sequence Alignment - LSU RNA (Seconds, fewer is better)

Run   Result   SE          N   Min / Avg / Max
1     12.62    +/- 0.18    3   12.31 / 12.62 / 12.92
2     12.52    +/- 0.02    3   12.49 / 12.52 / 12.56
3     12.44    +/- 0.10    3   12.33 / 12.44 / 12.65
4     12.46    +/- 0.06    3   12.4 / 12.46 / 12.57

1. (CC) gcc options: -std=c99 -O3 -lm -lpthread

NCNN

NCNN 20201218 - Target: CPU - Model: resnet18 (ms, fewer is better)

Run   Result   SE          N   Min / Avg / Max          MIN / MAX
1a    31.23    +/- 0.42    3   30.8 / 31.23 / 32.06     30.71 / 32.24
2     31.34    +/- 0.12    3   31.16 / 31.34 / 31.57    31.01 / 44.79
3     30.91    +/- 0.09    3   30.81 / 30.91 / 31.09    30.66 / 31.53
4     31.09    +/- 0.37    3   30.67 / 31.09 / 31.83    30.57 / 34.74

1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: efficientnet-b0 (ms, fewer is better)

Run   Result   SE          N   Min / Avg / Max          MIN / MAX
1a    13.24    +/- 0.05    3   13.18 / 13.24 / 13.33    13.14 / 15.99
2     13.41    +/- 0.15    3   13.21 / 13.41 / 13.7     13.16 / 13.83
3     13.39    +/- 0.15    3   13.21 / 13.39 / 13.69    13.17 / 15.94
4     13.37    +/- 0.19    3   13.17 / 13.37 / 13.74    13.14 / 13.86

1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: resnet50 (ms, fewer is better)

Run   Result   SE          N   Min / Avg / Max          MIN / MAX
1a    63.23    +/- 0.71    3   61.9 / 63.23 / 64.31     61.59 / 66.79
2     63.30    +/- 0.84    3   61.91 / 63.3 / 64.82     61.67 / 77.71
3     62.56    +/- 0.74    3   61.81 / 62.56 / 64.04    61.64 / 67.01
4     62.54    +/- 0.65    3   61.89 / 62.54 / 63.84    61.66 / 65.95

1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

oneDNN

oneDNN 2.0 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)

Run   Result   SE          N   Min / Avg / Max          MIN
1a    13.89    +/- 0.04    3   13.81 / 13.89 / 13.94    13.73
2     14.03    +/- 0.00    3   14.03 / 14.03 / 14.04    13.95
3     13.92    +/- 0.07    3   13.83 / 13.92 / 14.05    13.79
4     14.06    +/- 0.01    3   14.04 / 14.06 / 14.08    13.99

1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)

Run   Result    SE           N   Min / Avg / Max                MIN
1a    9078.49   +/- 88.44    3   8953.98 / 9078.49 / 9249.56    8599.52
2     9159.95   +/- 29.34    3   9105.04 / 9159.95 / 9205.34    8936.59
3     9174.75   +/- 22.47    3   9140.01 / 9174.75 / 9216.8     8941.68
4     9171.84   +/- 25.64    3   9125.81 / 9171.84 / 9214.43    8945.74

1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

CoreMark 1.0 - CoreMark Size 666 - Iterations Per Second (Iterations/Sec, more is better)

Run   Result     SE             N    Min / Avg / Max
1a    95428.24   +/- 1175.16    3    93115.29 / 95428.24 / 96946.19
2     95542.96   +/- 272.63     3    95006.23 / 95542.96 / 95894.52
3     94952.49   +/- 910.46     15   88760.68 / 94952.49 / 102518.1
4     95883.77   +/- 1036.68    15   89315.62 / 95883.77 / 101297.88

1. (CC) gcc options: -O2 -lrt" -lrt

oneDNN

oneDNN 2.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)

Run   Result    SE             N   Min / Avg / Max       MIN
1a    7.51442   +/- 0.00608    3   7.5 / 7.51 / 7.52     7.45
2     7.51399   +/- 0.00444    3   7.51 / 7.51 / 7.52    7.46
3     7.51466   +/- 0.00223    3   7.51 / 7.51 / 7.52    7.46
4     7.57690   +/- 0.02615    3   7.53 / 7.58 / 7.61    7.46

1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, fewer is better)

Run   Result   SE          N   Min / Avg / Max          MIN
1a    12.59    +/- 0.03    3   12.55 / 12.59 / 12.63    12.45
2     12.68    +/- 0.01    3   12.66 / 12.68 / 12.71    12.57
3     12.69    +/- 0.02    3   12.65 / 12.69 / 12.72    12.56
4     12.64    +/- 0.06    3   12.58 / 12.64 / 12.75    12.5

1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 test program, run with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.

SQLite Speedtest 3.30 - Timed Time - Size 1,000 (Seconds, fewer is better)

Run   Result   SE          N   Min / Avg / Max
1a    78.85    +/- 0.16    3   78.54 / 78.85 / 79.07
2     78.65    +/- 0.16    3   78.34 / 78.65 / 78.89
3     79.19    +/- 0.12    3   79.04 / 79.19 / 79.42
4     79.11    +/- 0.17    3   78.92 / 79.11 / 79.45

1. (CC) gcc options: -O2 -ldl -lz -lpthread

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

Timed FFmpeg Compilation 4.2.2 - Time To Compile (Seconds, fewer is better)

Run   Result   SE          N   Min / Avg / Max
1a    163.12   +/- 0.30    3   162.52 / 163.12 / 163.44
2     162.23   +/- 0.29    3   161.65 / 162.23 / 162.58
3     162.08   +/- 0.08    3   161.92 / 162.08 / 162.16
4     162.66   +/- 0.09    3   162.5 / 162.66 / 162.8

NCNN

NCNN 20201218 - Target: CPU - Model: googlenet (ms, fewer is better)

Run   Result   SE          N   Min / Avg / Max          MIN / MAX
1a    28.73    +/- 0.18    3   28.55 / 28.73 / 29.09    28.43 / 30.19
2     28.59    +/- 0.01    3   28.58 / 28.59 / 28.61    28.51 / 31.16
3     28.61    +/- 0.02    3   28.59 / 28.61 / 28.64    28.51 / 30.66
4     28.60    +/- 0.03    3   28.56 / 28.6 / 28.66     28.48 / 41.57

1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: vgg16 (ms, fewer is better)

Run   Result   SE          N   Min / Avg / Max             MIN / MAX
1a    134.38   +/- 0.25    3   134.06 / 134.38 / 134.88    133.8 / 148.18
2     134.04   +/- 0.10    3   133.89 / 134.04 / 134.24    133.61 / 145.21
3     133.93   +/- 0.13    3   133.75 / 133.93 / 134.19    133.42 / 157.33
4     134.48   +/- 0.46    3   133.61 / 134.48 / 135.16    133.44 / 146.5

1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: blazeface (ms, fewer is better)

Run   Result   SE          N   Min / Avg / Max       MIN / MAX
1a    2.68     +/- 0.00    3   2.67 / 2.68 / 2.68    2.65 / 2.78
2     2.68     +/- 0.00    3   2.68 / 2.68 / 2.69    2.65 / 2.87
3     2.69     +/- 0.01    3   2.68 / 2.69 / 2.7     2.65 / 3.46
4     2.68     +/- 0.00    3   2.68 / 2.68 / 2.69    2.65 / 2.87

1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

oneDNN

oneDNN 2.0 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)

Run   Result   SE          N   Min / Avg / Max          MIN
1a    15.20    +/- 0.04    3   15.13 / 15.2 / 15.26     15.09
2     15.15    +/- 0.01    3   15.14 / 15.15 / 15.17    15.1
3     15.16    +/- 0.01    3   15.15 / 15.16 / 15.18    15.1
4     15.15    +/- 0.00    3   15.14 / 15.15 / 15.16    15.09

1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Timed HMMer Search

This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of the Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.

Timed HMMer Search 3.3.1 - Pfam Database Search (Seconds, fewer is better)

Run   Result   SE          N   Min / Avg / Max
1     140.06   +/- 0.03    3   140.01 / 140.06 / 140.12
2     140.51   +/- 0.17    3   140.18 / 140.51 / 140.74
3     140.26   +/- 0.10    3   140.1 / 140.26 / 140.43
4     140.24   +/- 0.18    3   140.02 / 140.24 / 140.6

1. (CC) gcc options: -O3 -pthread -lhmmer -leasel -lm

NCNN

NCNN 20201218 - Target: CPU - Model: shufflenet-v2 (ms, fewer is better)

Run   Result   SE          N   Min / Avg / Max          MIN / MAX
1a    10.78    +/- 0.00    3   10.78 / 10.78 / 10.79    10.74 / 13.51
2     10.78    +/- 0.01    3   10.76 / 10.78 / 10.81    10.73 / 11.61
3     10.81    +/- 0.01    3   10.8 / 10.81 / 10.83     10.77 / 11.05
4     10.79    +/- 0.01    3   10.77 / 10.79 / 10.8     10.74 / 11.01

1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

simdjson

This is a benchmark of simdjson, a high-performance JSON parser. simdjson aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 0.7.1 - Throughput Test: Kostya (GB/s, more is better)

Run   Result   SE          N   Min / Avg / Max
1     0.59     +/- 0.00    3   0.59 / 0.59 / 0.59
2     0.59     +/- 0.00    3   0.59 / 0.59 / 0.59
3     0.59     +/- 0.00    3   0.59 / 0.59 / 0.59
4     0.59     +/- 0.00    3   0.59 / 0.59 / 0.59

1. (CXX) g++ options: -O3 -pthread