Pre 10.0 Xeon

Benchmarks of 2 x Intel Xeon Gold 5220R processors on a TYAN S7106 motherboard (V2.01.B40 BIOS) with llvmpipe graphics, running Ubuntu 20.04, conducted via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2010091-FI-PRE100XEO81

Test Runs

Run   Date               Test Duration
1     October 08 2020    3 Hours, 34 Minutes
2     October 09 2020    5 Hours, 30 Minutes
3     October 09 2020    3 Hours, 41 Minutes
Average run time: 4 Hours, 15 Minutes


Pre 10.0 Xeon - System Details (Runs 1-3)

Processor: 2 x Intel Xeon Gold 5220R @ 3.90GHz (36 Cores / 72 Threads)
Motherboard: TYAN S7106 (V2.01.B40 BIOS)
Chipset: Intel Sky Lake-E DMI3 Registers
Memory: 94GB
Disk: 500GB Samsung SSD 860
Graphics: llvmpipe
Monitor: VE228
Network: 2 x Intel I210 + 2 x QLogic cLOM8214 1/10GbE (also recorded: 2 x Intel I210)
OS: Ubuntu 20.04
Kernel: 5.9.0-050900rc6-generic (x86_64) 20200920
Desktop: GNOME Shell 3.36.4
Display Server: X Server 1.20.8
Display Driver: modesetting 1.20.8
OpenGL: 3.3 Mesa 20.0.4 (LLVM 9.0.1 256 bits)
Compiler: GCC 9.3.0
File-System: ext4
Screen Resolution: 1920x1080

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: intel_pstate powersave - CPU Microcode: 0x5002f01
Python Details: Python 2.7.18rc1 + Python 3.8.2
Security Details: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Mitigation of TSX disabled

Result Overview (Phoronix Test Suite): relative performance of runs 1-3 spans roughly 100% to 103% across Apache CouchDB, Dolfyn, NCNN, KeyDB, Timed MAFFT Alignment, GROMACS, Hierarchical INTegration, RNNoise, FFTE, Timed HMMer Search, BYTE Unix Benchmark, OpenVINO, and Caffe.

Condensed result table: per-test values for runs 1, 2, and 3 across all of the benchmarks above; the individual results are reproduced in the detailed sections that follow.

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
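
As a rough illustration of how this kind of per-model CPU timing is gathered, the sketch below drives NCNN's bundled benchncnn tool from Python. The binary path, the thread count, and the argument order (loop count, threads, power-save mode, GPU device, with -1 meaning CPU only) are assumptions based on upstream NCNN's benchmark README, not details taken from this result file.

    # Minimal sketch (assumptions noted above): run NCNN's benchncnn on the CPU
    # and capture the per-model min/max/avg timing table it prints.
    import subprocess

    loops, threads, powersave, gpu_device = 8, 36, 0, -1   # -1 = CPU only (assumed)
    result = subprocess.run(
        ["./benchncnn", str(loops), str(threads), str(powersave), str(gpu_device)],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)   # one timing line per model (squeezenet, vgg16, resnet50, ...)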

NCNN 20200916, Target: CPU - Model: vgg16 (ms, fewer is better):
  Run 3: 42.27  (SE +/- 0.69, N = 3; MIN: 39.36 / MAX: 116.79)
  Run 2: 41.06  (SE +/- 0.45, N = 5; MIN: 39.85 / MAX: 73.71)
  Run 1: 40.32  (SE +/- 0.34, N = 3; MIN: 37.85 / MAX: 76.91)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Apache CouchDB

This is a bulk insertion benchmark of Apache CouchDB. CouchDB is a document-oriented NoSQL database implemented in Erlang. Learn more via the OpenBenchmarking.org test page.
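
For context, a bulk insertion against CouchDB goes through the HTTP _bulk_docs endpoint; the sketch below shows a minimal version of that using the requests library. The host, credentials, database name, and document shape are placeholder assumptions and only loosely mirror the Bulk Size: 100 parameter above.

    # Minimal sketch: create a database and insert 100 documents in one _bulk_docs call.
    # Host, credentials, and database name are placeholder assumptions.
    import requests

    base = "http://admin:password@127.0.0.1:5984"
    db = "pts_demo"
    requests.put(f"{base}/{db}")  # create the database (returns 412 if it already exists)

    docs = [{"_id": f"doc-{i}", "value": i} for i in range(100)]
    resp = requests.post(f"{base}/{db}/_bulk_docs", json={"docs": docs})
    resp.raise_for_status()
    print(len(resp.json()), "documents accepted")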

Apache CouchDB 3.1.1, Bulk Size: 100 - Inserts: 1000 - Rounds: 24 (Seconds, fewer is better):
  Run 2: 230.04  (SE +/- 2.66, N = 12)
  Run 3: 227.45  (SE +/- 2.56, N = 12)
  Run 1: 220.17  (SE +/- 2.11, N = 3)
  (CXX) g++ options: -std=c++14 -lmozjs-68 -lm -lerl_interface -lei -fPIC -MMD

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916, Target: CPU - Model: shufflenet-v2 (ms, fewer is better):
  Run 3: 9.62  (SE +/- 0.18, N = 3; MIN: 8.69 / MAX: 13.77)
  Run 2: 9.37  (SE +/- 0.18, N = 5; MIN: 8.52 / MAX: 10.73)
  Run 1: 9.22  (SE +/- 0.18, N = 3; MIN: 8.65 / MAX: 9.98)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916, Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better):
  Run 3: 10.24  (SE +/- 0.17, N = 3; MIN: 9.49 / MAX: 25.47)
  Run 1: 9.99  (SE +/- 0.15, N = 3; MIN: 9.45 / MAX: 22.3)
  Run 2: 9.95  (SE +/- 0.13, N = 5; MIN: 9.11 / MAX: 23.8)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916, Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better):
  Run 3: 9.66  (SE +/- 0.13, N = 3; MIN: 9.07 / MAX: 25.77)
  Run 1: 9.54  (SE +/- 0.16, N = 3; MIN: 8.86 / MAX: 29.64)
  Run 2: 9.44  (SE +/- 0.13, N = 5; MIN: 8.82 / MAX: 31.17)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916, Target: CPU - Model: efficientnet-b0 (ms, fewer is better):
  Run 3: 12.83  (SE +/- 0.20, N = 3; MIN: 12.01 / MAX: 16.14)
  Run 1: 12.75  (SE +/- 0.03, N = 3; MIN: 12.48 / MAX: 14.63)
  Run 2: 12.54  (SE +/- 0.14, N = 5; MIN: 11.99 / MAX: 15.19)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916, Target: CPU - Model: squeezenet (ms, fewer is better):
  Run 1: 20.50  (SE +/- 0.16, N = 3; MIN: 19.49 / MAX: 93.84)
  Run 2: 20.11  (SE +/- 0.26, N = 5; MIN: 19.25 / MAX: 84.98)
  Run 3: 20.09  (SE +/- 0.16, N = 3; MIN: 19.67 / MAX: 22.27)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Dolfyn

Dolfyn is a Computational Fluid Dynamics (CFD) code built on modern numerical simulation techniques. The Dolfyn test profile measures the execution time of the computational fluid dynamics demos bundled with Dolfyn. Learn more via the OpenBenchmarking.org test page.

Dolfyn 0.527, Computational Fluid Dynamics (Seconds, fewer is better):
  Run 2: 21.77  (SE +/- 0.29, N = 4)
  Run 1: 21.45  (SE +/- 0.06, N = 3)
  Run 3: 21.42  (SE +/- 0.01, N = 3)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916, Target: CPU - Model: blazeface (ms, fewer is better):
  Run 1: 4.53  (SE +/- 0.01, N = 3; MIN: 4.45 / MAX: 5.46)
  Run 3: 4.49  (SE +/- 0.10, N = 3; MIN: 4.25 / MAX: 5.03)
  Run 2: 4.46  (SE +/- 0.06, N = 5; MIN: 4.23 / MAX: 5.72)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916, Target: CPU - Model: mnasnet (ms, fewer is better):
  Run 3: 9.39  (SE +/- 0.08, N = 3; MIN: 8.67 / MAX: 31.01)
  Run 1: 9.28  (SE +/- 0.11, N = 3; MIN: 8.7 / MAX: 33.25)
  Run 2: 9.26  (SE +/- 0.11, N = 5; MIN: 8.61 / MAX: 30.88)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

KeyDB

This is a benchmark of KeyDB, a multi-threaded fork of the Redis server. The KeyDB benchmark is conducted using memtier-benchmark. Learn more via the OpenBenchmarking.org test page.
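
Because KeyDB speaks the Redis protocol, any Redis client can talk to it; the sketch below uses redis-py for a simple round trip, while the actual result above comes from the memtier-benchmark load generator. The host, port, and key name are assumptions (KeyDB listens on the standard Redis port by default).

    # Minimal sketch: exercise a local KeyDB instance through the Redis protocol.
    # Connection details are assumptions.
    import redis

    client = redis.Redis(host="127.0.0.1", port=6379, db=0)
    client.set("pts:demo", "hello")
    print(client.get("pts:demo"))   # b'hello'
    print(client.dbsize(), "keys in the database")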

KeyDB 6.0.16 (Ops/sec, more is better):
  Run 1: 428636.10  (SE +/- 4163.20, N = 3)
  Run 2: 431112.39  (SE +/- 2528.13, N = 3)
  Run 3: 433887.12  (SE +/- 4369.74, N = 8)
  (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916, Target: CPU - Model: mobilenet (ms, fewer is better):
  Run 3: 22.73  (SE +/- 0.26, N = 3; MIN: 22.09 / MAX: 28.49)
  Run 1: 22.71  (SE +/- 0.18, N = 3; MIN: 22.24 / MAX: 30.26)
  Run 2: 22.47  (SE +/- 0.21, N = 5; MIN: 21.44 / MAX: 42.37)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916, Target: CPU - Model: resnet18 (ms, fewer is better):
  Run 3: 15.74  (SE +/- 0.04, N = 3; MIN: 15.32 / MAX: 63.22)
  Run 1: 15.71  (SE +/- 0.10, N = 3; MIN: 15.24 / MAX: 76.19)
  Run 2: 15.57  (SE +/- 0.11, N = 5; MIN: 15.22 / MAX: 41.73)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Timed MAFFT Alignment

This test performs an alignment of 100 pyruvate decarboxylase sequences. Learn more via the OpenBenchmarking.org test page.
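
To show what the timed workload amounts to, the sketch below runs MAFFT over a FASTA file and times the alignment from Python. The --auto strategy flag and the file names are assumptions; the test profile uses its own bundled input rather than these placeholders.

    # Minimal sketch: time a MAFFT multiple sequence alignment of a FASTA input.
    # File names and the --auto flag are assumptions.
    import subprocess, time

    start = time.perf_counter()
    with open("aligned.fasta", "w") as out:
        subprocess.run(["mafft", "--auto", "sequences.fasta"], stdout=out, check=True)
    print(f"alignment took {time.perf_counter() - start:.2f} s")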

Timed MAFFT Alignment 7.471, Multiple Sequence Alignment - LSU RNA (Seconds, fewer is better):
  Run 3: 11.78  (SE +/- 0.20, N = 3)
  Run 1: 11.77  (SE +/- 0.11, N = 3)
  Run 2: 11.66  (SE +/- 0.03, N = 3)
  (CC) gcc options: -std=c99 -O3 -lm -lpthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916, Target: CPU - Model: yolov4-tiny (ms, fewer is better):
  Run 2: 32.14  (SE +/- 0.42, N = 5; MIN: 30.96 / MAX: 53.56)
  Run 1: 32.10  (SE +/- 0.22, N = 3; MIN: 31.15 / MAX: 36.29)
  Run 3: 31.81  (SE +/- 0.15, N = 3; MIN: 31.14 / MAX: 42.59)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.
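
The built-in benchmarking support mentioned above is OpenVINO's benchmark_app; the sketch below shows one plausible way to launch a CPU run from Python. The executable name, the IR model path, and the iteration count are assumptions rather than the exact invocation used by this test profile.

    # Minimal sketch: run OpenVINO's benchmark_app against an IR model on the CPU.
    # Executable name, model path, and iteration count are assumptions.
    import subprocess

    cmd = ["benchmark_app", "-m", "face-detection-0106.xml", "-d", "CPU", "-niter", "100"]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    print("\n".join(out.stdout.splitlines()[-5:]))   # throughput / latency summary lines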

OpenVINO 2021.1, Model: Face Detection 0106 FP16 - Device: CPU (FPS, more is better):
  Run 3: 6.95  (SE +/- 0.02, N = 3)
  Run 1: 7.00  (SE +/- 0.02, N = 3)
  Run 2: 7.02  (SE +/- 0.01, N = 3)
  (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916, Target: CPU - Model: googlenet (ms, fewer is better):
  Run 1: 23.03  (SE +/- 0.15, N = 3; MIN: 22.75 / MAX: 51.05)
  Run 3: 22.97  (SE +/- 0.35, N = 3; MIN: 22.25 / MAX: 28.66)
  Run 2: 22.84  (SE +/- 0.16, N = 5; MIN: 21.98 / MAX: 97.93)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20200916, Target: CPU - Model: resnet50 (ms, fewer is better):
  Run 1: 28.45  (SE +/- 0.29, N = 3; MIN: 27.64 / MAX: 56.75)
  Run 3: 28.26  (SE +/- 0.06, N = 3; MIN: 26.33 / MAX: 68.53)
  Run 2: 28.22  (SE +/- 0.31, N = 5; MIN: 27.04 / MAX: 114.01)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

GROMACS

This test runs the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package on the CPU with the water_GMX50 data. Learn more via the OpenBenchmarking.org test page.
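
For reference, a GROMACS CPU benchmark run boils down to gmx mdrun over a prepared .tpr input; the sketch below shows one plausible invocation from Python. The topol.tpr file name, the step count, and the thread count are assumptions; the test profile prepares the water_GMX50 system itself.

    # Minimal sketch: run GROMACS mdrun on a prepared .tpr input, CPU only.
    # File name, step count, and OpenMP thread count are assumptions.
    import subprocess

    subprocess.run(
        ["gmx", "mdrun", "-s", "topol.tpr", "-nsteps", "1000", "-ntomp", "36", "-nb", "cpu"],
        check=True,
    )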

GROMACS 2020.3, Water Benchmark (Ns Per Day, more is better):
  Run 1: 3.388  (SE +/- 0.007, N = 3)
  Run 3: 3.412  (SE +/- 0.008, N = 3)
  Run 2: 3.414  (SE +/- 0.004, N = 3)
  (CXX) g++ options: -O3 -pthread -lrt -lpthread -lm

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1, Model: Face Detection 0106 FP16 - Device: CPU (ms, fewer is better):
  Run 3: 2533.70  (SE +/- 8.10, N = 3)
  Run 2: 2518.58  (SE +/- 5.93, N = 3)
  Run 1: 2518.37  (SE +/- 2.42, N = 3)
  (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

Hierarchical INTegration

This test runs the U.S. Department of Energy's Ames Laboratory Hierarchical INTegration (HINT) benchmark. Learn more via the OpenBenchmarking.org test page.

Hierarchical INTegration 1.0, Test: FLOAT (QUIPs, more is better):
  Run 2: 383680919.91  (SE +/- 357779.68, N = 3)
  Run 3: 385694400.25  (SE +/- 582471.28, N = 3)
  Run 1: 385726582.81  (SE +/- 271494.10, N = 3)
  (CC) gcc options: -O3 -march=native -lm

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1, Model: Person Detection 0106 FP16 - Device: CPU (FPS, more is better):
  Run 2: 4.15  (SE +/- 0.00, N = 3)
  Run 3: 4.16  (SE +/- 0.00, N = 3)
  Run 1: 4.17  (SE +/- 0.01, N = 3)
  (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

RNNoise

RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This single-threaded test profile measures the time taken to denoise a sample 26-minute 16-bit RAW audio file with this noise suppression library. Learn more via the OpenBenchmarking.org test page.
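
As a rough picture of the workload, RNNoise ships a small demo program that reads raw 48 kHz 16-bit mono PCM and writes the denoised stream; the sketch below times one such pass from Python. The binary location and file names are assumptions, and the 26-minute sample used by the test profile is not reproduced here.

    # Minimal sketch: time a single-threaded RNNoise denoising pass over a RAW PCM file.
    # Binary path and file names are assumptions; input is 48 kHz 16-bit mono PCM.
    import subprocess, time

    start = time.perf_counter()
    subprocess.run(["./examples/rnnoise_demo", "noisy.raw", "denoised.raw"], check=True)
    print(f"denoising took {time.perf_counter() - start:.2f} s")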

RNNoise 2020-06-28 (Seconds, fewer is better):
  Run 1: 28.60  (SE +/- 0.08, N = 3)
  Run 3: 28.53  (SE +/- 0.06, N = 3)
  Run 2: 28.47  (SE +/- 0.02, N = 3)
  (CC) gcc options: -O2 -pedantic -fvisibility=hidden

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20200916, Target: CPU - Model: alexnet (ms, fewer is better):
  Run 1: 11.91  (SE +/- 0.24, N = 3; MIN: 11.58 / MAX: 13.45)
  Run 3: 11.90  (SE +/- 0.21, N = 3; MIN: 11.47 / MAX: 12.64)
  Run 2: 11.86  (SE +/- 0.17, N = 5; MIN: 11.46 / MAX: 14.96)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Timed HMMer Search

This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of the Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.
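
A comparable search can be expressed with HMMER's command-line tools: hmmscan compares a protein sequence against a pressed Pfam profile database. The sketch below is one plausible equivalent of the workload described above, not necessarily the exact command the test profile runs; the file names and the hmmpress indexing step are assumptions.

    # Minimal sketch: scan a protein sequence against a pressed Pfam HMM database.
    # File names are assumptions; Pfam-A.hmm must be indexed with hmmpress first.
    import subprocess

    subprocess.run(["hmmpress", "-f", "Pfam-A.hmm"], check=True)   # build binary index files
    subprocess.run(
        ["hmmscan", "--domtblout", "sevenless.domtbl", "Pfam-A.hmm", "sevenless.fasta"],
        check=True,
    )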

Timed HMMer Search 3.3.1, Pfam Database Search (Seconds, fewer is better):
  Run 3: 224.41  (SE +/- 0.41, N = 3)
  Run 1: 224.19  (SE +/- 0.61, N = 3)
  Run 2: 223.49  (SE +/- 0.39, N = 3)
  (CC) gcc options: -O3 -pthread -lhmmer -leasel -lm

FFTE

FFTE is a package by Daisuke Takahashi to compute Discrete Fourier Transforms of 1-, 2- and 3-dimensional sequences of length (2^p)*(3^q)*(5^r). Learn more via the OpenBenchmarking.org test page.
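
FFTE itself is a Fortran library, but the shape of this benchmark, a 256^3 (N=256, 3D) complex transform whose length factors as (2^p)*(3^q)*(5^r), can be illustrated with NumPy. The 5*N*log2(N) operation count used below is the conventional complex-FFT estimate and an assumption on my part, not FFTE's own accounting.

    # Minimal sketch: time a 256^3 3-D complex FFT and derive a rough MFLOPS figure.
    # The 5*N*log2(N) flop count is a conventional estimate, not FFTE's formula.
    import time
    import numpy as np

    n = 256                                   # 256 = 2^8, i.e. (2^p)*(3^q)*(5^r) with p = 8
    data = (np.random.rand(n, n, n) + 1j * np.random.rand(n, n, n)).astype(np.complex128)

    start = time.perf_counter()
    np.fft.fftn(data)
    elapsed = time.perf_counter() - start

    total = n ** 3
    flops = 5 * total * np.log2(total)        # conventional complex-FFT operation estimate
    print(f"{flops / elapsed / 1e6:.1f} MFLOPS (NumPy, single pass)")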

FFTE 7.0, N=256, 3D Complex FFT Routine (MFLOPS, more is better):
  Run 2: 108071.86  (SE +/- 1010.36, N = 3)
  Run 1: 108341.69  (SE +/- 781.22, N = 3)
  Run 3: 108490.04  (SE +/- 243.36, N = 3)
  (F9X) gfortran options: -O3 -fomit-frame-pointer -fopenmp

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1, Model: Person Detection 0106 FP32 - Device: CPU (ms, fewer is better):
  Run 3: 4183.35  (SE +/- 3.58, N = 3)
  Run 1: 4182.32  (SE +/- 3.70, N = 3)
  Run 2: 4168.04  (SE +/- 4.93, N = 3)
  (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

OpenVINO 2021.1, Model: Person Detection 0106 FP16 - Device: CPU (ms, fewer is better):
  Run 2: 4182.73  (SE +/- 2.33, N = 3)
  Run 1: 4179.40  (SE +/- 3.40, N = 3)
  Run 3: 4167.66  (SE +/- 6.74, N = 3)
  (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

OpenVINO 2021.1, Model: Face Detection 0106 FP32 - Device: CPU (FPS, more is better):
  Run 1: 6.97  (SE +/- 0.02, N = 3)
  Run 2: 6.98  (SE +/- 0.03, N = 3)
  Run 3: 6.99  (SE +/- 0.02, N = 3)
  (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

OpenVINO 2021.1, Model: Face Detection 0106 FP32 - Device: CPU (ms, fewer is better):
  Run 1: 2526.49  (SE +/- 4.16, N = 3)
  Run 3: 2519.93  (SE +/- 5.25, N = 3)
  Run 2: 2519.70  (SE +/- 3.99, N = 3)
  (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

OpenVINO 2021.1, Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (FPS, more is better):
  Run 3: 21598.60  (SE +/- 7.40, N = 3)
  Run 1: 21610.13  (SE +/- 27.65, N = 3)
  Run 2: 21650.99  (SE +/- 11.13, N = 3)
  (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

OpenVINO 2021.1, Model: Person Detection 0106 FP32 - Device: CPU (FPS, more is better):
  Run 3: 4.14  (SE +/- 0.02, N = 3)
  Run 1: 4.15  (SE +/- 0.01, N = 3)
  Run 2: 4.15  (SE +/- 0.01, N = 3)
  (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

BYTE Unix Benchmark

This is a test of the BYTE Unix Benchmark; the Dhrystone 2 result reports integer performance in loops per second (LPS). Learn more via the OpenBenchmarking.org test page.

BYTE Unix Benchmark 3.6, Computational Test: Dhrystone 2 (LPS, more is better):
  Run 3: 38354114.7  (SE +/- 30065.94, N = 3)
  Run 2: 38381617.8  (SE +/- 57895.53, N = 3)
  Run 1: 38439689.9  (SE +/- 11954.47, N = 3)

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1, Model: Age Gender Recognition Retail 0013 FP32 - Device: CPU (FPS, more is better):
  Run 2: 21613.90  (SE +/- 16.90, N = 3)
  Run 1: 21620.67  (SE +/- 12.85, N = 3)
  Run 3: 21634.75  (SE +/- 32.77, N = 3)
  (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

OpenVINO 2021.1, Model: Age Gender Recognition Retail 0013 FP32 - Device: CPU (ms, fewer is better):
  Run 3: 0.8  (SE +/- 0.00, N = 3)
  Run 2: 0.8  (SE +/- 0.00, N = 3)
  Run 1: 0.8  (SE +/- 0.00, N = 3)
  (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

OpenVINO 2021.1, Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (ms, fewer is better):
  Run 3: 0.8  (SE +/- 0.00, N = 3)
  Run 2: 0.8  (SE +/- 0.00, N = 3)
  Run 1: 0.8  (SE +/- 0.00, N = 3)
  (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

Caffe

This is a benchmark of the Caffe deep learning framework; it currently supports the AlexNet and GoogleNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.
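
The model / acceleration / iteration parameters above map naturally onto Caffe's built-in "caffe time" benchmarking mode; the sketch below shows one plausible CPU invocation from Python. The deploy prototxt path is an assumption, and a GPU run would add a -gpu argument instead.

    # Minimal sketch: benchmark forward/backward passes of a Caffe model on the CPU.
    # The prototxt path is an assumption; omitting -gpu keeps execution on the CPU.
    import subprocess

    subprocess.run(
        ["caffe", "time", "-model", "models/bvlc_googlenet/deploy.prototxt",
         "-iterations", "200"],
        check=True,
    )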

Caffe 2020-02-13, Model: GoogleNet - Acceleration: CPU - Iterations: 200 (Milli-Seconds, fewer is better):
  Run 3: 374772  (SE +/- 2056.32, N = 3)
  Run 1: 369192  (SE +/- 4358.06, N = 3)
  Run 2: 340934  (SE +/- 17972.71, N = 9)
  (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Caffe 2020-02-13, Model: GoogleNet - Acceleration: CPU - Iterations: 100 (Milli-Seconds, fewer is better):
  Run 2: 188383  (SE +/- 1658.31, N = 12)
  Run 3: 187722  (SE +/- 502.77, N = 3)
  Run 1: 175016  (SE +/- 8266.82, N = 9)
  (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Caffe 2020-02-13, Model: AlexNet - Acceleration: CPU - Iterations: 200 (Milli-Seconds, fewer is better):
  Run 2: 131732  (SE +/- 472.02, N = 3)
  Run 3: 129589  (SE +/- 242.40, N = 3)
  Run 1: 127100  (SE +/- 3833.15, N = 11)
  (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Caffe 2020-02-13, Model: AlexNet - Acceleration: CPU - Iterations: 100 (Milli-Seconds, fewer is better):
  Run 1: 66224  (SE +/- 124.86, N = 3)
  Run 3: 65576  (SE +/- 92.99, N = 3)
  Run 2: 64586  (SE +/- 1797.00, N = 12)
  (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas