Xeon Bro Xmas

Intel Xeon E5-2609 v4 testing with a MSI X99A RAIDER (MS-7885) v5.0 (P.50 BIOS) and llvmpipe on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2012226-HA-XEONBROXM06
Tests in this result file fall within the following suites/categories:

- Audio Encoding (3 tests)
- Timed Code Compilation (2 tests)
- Creator Workloads (3 tests)
- Encoding (3 tests)
- Multi-Core (2 tests)
- Programmer / Developer System Benchmarks (2 tests)


Run Management

Run   Date               Test Duration
1     December 22 2020   2 Hours, 2 Minutes
2     December 22 2020   2 Hours, 16 Minutes
3     December 22 2020   2 Hours, 2 Minutes
Average                  2 Hours, 7 Minutes


Xeon Bro Xmas: System Configuration (identical for runs 1, 2, and 3)

Processor: Intel Xeon E5-2609 v4 @ 1.70GHz (8 Cores)
Motherboard: MSI X99A RAIDER (MS-7885) v5.0 (P.50 BIOS)
Chipset: Intel Xeon E7 v4/Xeon
Memory: 16GB
Disk: 256GB CORSAIR FORCE LX
Graphics: llvmpipe
Audio: Realtek ALC892
Network: Intel I218-V
OS: Ubuntu 20.04
Kernel: 5.9.0-050900rc6daily20200926-generic (x86_64) 20200925
Desktop: GNOME Shell 3.36.2
Display Server: X Server 1.20.8
Display Driver: modesetting 1.20.8
OpenGL: 3.3 Mesa 20.0.4 (LLVM 9.0.1 256 bits)
Compiler: GCC 9.3.0
File-System: ext4
Screen Resolution: 1024x768

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details: Scaling Governor: intel_cpufreq ondemand - CPU Microcode: 0xb000038

Security Details: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes SMT disabled + mds: Mitigation of Clear buffers; SMT disabled + meltdown: Mitigation of PTI + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full generic retpoline IBPB: conditional IBRS_FW STIBP: disabled RSB filling + srbds: Not affected + tsx_async_abort: Mitigation of Clear buffers; SMT disabled

[Result Overview chart: relative performance of runs 1-3 across CLOMP, Build2, Opus Codec Encoding, Monkey Audio Encoding, Timed Eigen Compilation, WavPack Audio Encoding, and NCNN; chart scale 100% to 113%.]

Xeon Bro Xmas: Results Summary (runs 1 / 2 / 3)

Test                                      Run 1     Run 2     Run 3
build2: Time To Compile (sec)             345.005   350.576   344.082
ncnn: CPU - resnet50 (ms)                 47.17     46.95     46.97
ncnn: CPU - regnety_400m (ms)             28.85     28.76     28.74
encode-opus: WAV To Opus Encode (sec)     20.528    20.451    20.478
ncnn: CPU - resnet18 (ms)                 21.45     21.51     21.46
ncnn: CPU - yolov4-tiny (ms)              47.91     47.79     47.87
ncnn: CPU-v3-v3 - mobilenet-v3 (ms)       8.31      8.29      8.31
ncnn: CPU - mobilenet (ms)                34.64     34.70     34.62
encode-ape: WAV To APE (sec)              31.193    31.222    31.261
ncnn: CPU - vgg16 (ms)                    69.85     69.98     69.95
ncnn: CPU - shufflenet-v2 (ms)            12.99     12.97     12.99
ncnn: CPU - alexnet (ms)                  20.05     20.07     20.04
build-eigen: Time To Compile (sec)        188.006   188.031   187.790
ncnn: CPU - mnasnet (ms)                  8.87      8.86      8.87
encode-wavpack: WAV To WavPack (sec)      35.486    35.524    35.492
ncnn: CPU-v2-v2 - mobilenet-v2 (ms)       9.39      9.38      9.38
ncnn: CPU - googlenet (ms)                24.73     24.74     24.72
ncnn: CPU - efficientnet-b0 (ms)          13.51     13.52     13.52
ncnn: CPU - squeezenet_ssd (ms)           51.97     51.98     51.97
ncnn: CPU - blazeface (ms)                4.09      4.09      4.09
clomp: Static OMP Speedup (speedup)       8.0       7.1       8.0
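The three runs agree closely on most tests; only CLOMP shows a double-digit swing. A short Python sketch, using values copied from a few rows of the summary above, quantifies the relative run-to-run spread:

```python
# Run-to-run spread for a few tests from the summary table above.
# Values are copied directly from this result file (runs 1, 2, 3).
results = {
    "build2: Time To Compile (sec)":        [345.005, 350.576, 344.082],
    "encode-wavpack: WAV To WavPack (sec)": [35.486, 35.524, 35.492],
    "clomp: Static OMP Speedup":            [8.0, 7.1, 8.0],
}

def spread_pct(runs):
    """Relative spread of a test across runs, in percent."""
    return (max(runs) - min(runs)) / min(runs) * 100

for name, runs in results.items():
    print(f"{name}: {spread_pct(runs):.2f}% spread")
```

This gives roughly 1.9% for Build2 and 0.1% for WavPack, against about 12.7% for CLOMP, which matches the per-test detail further down: CLOMP's run 2 is the one noisy result in the file.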

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ with Cargo-like package management features. Learn more via the OpenBenchmarking.org test page.

Build2 0.13, Time To Compile (Seconds, Fewer Is Better)
  Run 1: 345.01  (SE +/- 0.83, N = 3; Min 343.58 / Max 346.46)
  Run 2: 350.58  (SE +/- 2.27, N = 3; Min 346.10 / Max 353.49)
  Run 3: 344.08  (SE +/- 0.29, N = 3; Min 343.79 / Max 344.67)
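For N = 3 the reported Min/Avg/Max values fully determine the standard error: the middle sample is 3·Avg − Min − Max, and SE is the sample standard deviation divided by √N. A Python sketch cross-checking Build2 run 2 (Avg 350.576 from the summary table, Min 346.10, Max 353.49, reported SE +/- 2.27):

```python
import math

# Cross-check the reported standard error for Build2, run 2.
# From this result file: Min 346.10 / Avg 350.576 / Max 353.49, SE +/- 2.27, N = 3.
n = 3
avg, lo, hi = 350.576, 346.10, 353.49

mid = n * avg - lo - hi                  # recover the unreported middle sample
samples = [lo, mid, hi]
mean = sum(samples) / n
var = sum((x - mean) ** 2 for x in samples) / (n - 1)   # sample variance
se = math.sqrt(var) / math.sqrt(n)                      # standard error of the mean

print(f"middle sample: {mid:.2f}")       # 352.14
print(f"standard error: {se:.2f}")       # 2.27, matching the reported SE
```

The recovered SE of 2.27 matches the figure above, confirming that the SE values in this file are standard errors of the mean over the per-run samples.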

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: resnet50 (ms, Fewer Is Better)
  Run 1: 47.17  (SE +/- 0.24, N = 3; Min 46.88 / Max 47.64; MIN 46.72 / MAX 170.29)
  Run 2: 46.95  (SE +/- 0.04, N = 3; Min 46.90 / Max 47.02; MIN 46.76 / MAX 52.36)
  Run 3: 46.97  (SE +/- 0.02, N = 3; Min 46.94 / Max 47.01; MIN 46.77 / MAX 52.88)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: regnety_400m (ms, Fewer Is Better)
  Run 1: 28.85  (SE +/- 0.04, N = 3; Min 28.80 / Max 28.94; MIN 28.60 / MAX 49.96)
  Run 2: 28.76  (SE +/- 0.03, N = 3; Min 28.71 / Max 28.80; MIN 28.62 / MAX 30.54)
  Run 3: 28.74  (SE +/- 0.02, N = 3; Min 28.72 / Max 28.77; MIN 28.59 / MAX 30.07)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Opus Codec Encoding

Opus is an open, lossy audio compression format designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.

Opus Codec Encoding 1.3.1, WAV To Opus Encode (Seconds, Fewer Is Better)
  Run 1: 20.53  (SE +/- 0.04, N = 5; Min 20.44 / Max 20.66)
  Run 2: 20.45  (SE +/- 0.02, N = 5; Min 20.42 / Max 20.54)
  Run 3: 20.48  (SE +/- 0.05, N = 5; Min 20.42 / Max 20.67)
  1. (CXX) g++ options: -fvisibility=hidden -logg -lm

NCNN


NCNN 20201218, Target: CPU - Model: resnet18 (ms, Fewer Is Better)
  Run 1: 21.45  (SE +/- 0.04, N = 3; Min 21.40 / Max 21.52; MIN 21.36 / MAX 26.99)
  Run 2: 21.51  (SE +/- 0.05, N = 3; Min 21.45 / Max 21.61; MIN 21.40 / MAX 23.19)
  Run 3: 21.46  (SE +/- 0.01, N = 3; Min 21.43 / Max 21.48; MIN 21.38 / MAX 22.73)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: yolov4-tiny (ms, Fewer Is Better)
  Run 1: 47.91  (SE +/- 0.04, N = 3; Min 47.84 / Max 47.97; MIN 46.88 / MAX 51.88)
  Run 2: 47.79  (SE +/- 0.14, N = 3; Min 47.52 / Max 47.99; MIN 46.84 / MAX 56.20)
  Run 3: 47.87  (SE +/- 0.17, N = 3; Min 47.53 / Max 48.10; MIN 46.93 / MAX 68.10)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better)
  Run 1: 8.31  (SE +/- 0.03, N = 3; Min 8.27 / Max 8.36; MIN 8.22 / MAX 13.69)
  Run 2: 8.29  (SE +/- 0.02, N = 3; Min 8.25 / Max 8.32; MIN 8.21 / MAX 13.81)
  Run 3: 8.31  (SE +/- 0.04, N = 3; Min 8.24 / Max 8.37; MIN 8.21 / MAX 13.71)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: mobilenet (ms, Fewer Is Better)
  Run 1: 34.64  (SE +/- 0.06, N = 3; Min 34.53 / Max 34.71; MIN 34.09 / MAX 39.55)
  Run 2: 34.70  (SE +/- 0.05, N = 3; Min 34.63 / Max 34.80; MIN 34.14 / MAX 54.36)
  Run 3: 34.62  (SE +/- 0.04, N = 3; Min 34.54 / Max 34.68; MIN 34.10 / MAX 36.07)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Monkey Audio Encoding

This test times how long it takes to encode a sample WAV file to Monkey's Audio APE format. Learn more via the OpenBenchmarking.org test page.

Monkey Audio Encoding 3.99.6, WAV To APE (Seconds, Fewer Is Better)
  Run 1: 31.19  (SE +/- 0.03, N = 5; Min 31.11 / Max 31.28)
  Run 2: 31.22  (SE +/- 0.07, N = 5; Min 31.10 / Max 31.48)
  Run 3: 31.26  (SE +/- 0.06, N = 5; Min 31.14 / Max 31.48)
  1. (CXX) g++ options: -O3 -pedantic -rdynamic -lrt

NCNN


NCNN 20201218, Target: CPU - Model: vgg16 (ms, Fewer Is Better)
  Run 1: 69.85  (SE +/- 0.07, N = 3; Min 69.73 / Max 69.97; MIN 69.60 / MAX 72.98)
  Run 2: 69.98  (SE +/- 0.06, N = 3; Min 69.86 / Max 70.06; MIN 69.73 / MAX 75.87)
  Run 3: 69.95  (SE +/- 0.05, N = 3; Min 69.87 / Max 70.04; MIN 69.67 / MAX 89.20)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: shufflenet-v2 (ms, Fewer Is Better)
  Run 1: 12.99  (SE +/- 0.01, N = 3; Min 12.97 / Max 13.01; MIN 12.94 / MAX 13.42)
  Run 2: 12.97  (SE +/- 0.00, N = 3; Min 12.97 / Max 12.98; MIN 12.93 / MAX 14.27)
  Run 3: 12.99  (SE +/- 0.01, N = 3; Min 12.98 / Max 13.00; MIN 12.93 / MAX 18.42)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: alexnet (ms, Fewer Is Better)
  Run 1: 20.05  (SE +/- 0.01, N = 3; Min 20.03 / Max 20.06; MIN 19.99 / MAX 25.67)
  Run 2: 20.07  (SE +/- 0.04, N = 3; Min 20.03 / Max 20.14; MIN 19.98 / MAX 40.76)
  Run 3: 20.04  (SE +/- 0.01, N = 3; Min 20.02 / Max 20.06; MIN 19.99 / MAX 23.71)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Timed Eigen Compilation

This test times how long it takes to build all Eigen examples. The Eigen examples are compiled serially. Eigen is a C++ template library for linear algebra. Learn more via the OpenBenchmarking.org test page.

Timed Eigen Compilation 3.3.9, Time To Compile (Seconds, Fewer Is Better)
  Run 1: 188.01  (SE +/- 0.04, N = 3; Min 187.94 / Max 188.07)
  Run 2: 188.03  (SE +/- 0.04, N = 3; Min 187.96 / Max 188.10)
  Run 3: 187.79  (SE +/- 0.09, N = 3; Min 187.63 / Max 187.95)

NCNN


NCNN 20201218, Target: CPU - Model: mnasnet (ms, Fewer Is Better)
  Run 1: 8.87  (SE +/- 0.01, N = 3; Min 8.86 / Max 8.88; MIN 8.82 / MAX 10.06)
  Run 2: 8.86  (SE +/- 0.01, N = 3; Min 8.85 / Max 8.87; MIN 8.82 / MAX 10.03)
  Run 3: 8.87  (SE +/- 0.01, N = 3; Min 8.86 / Max 8.88; MIN 8.81 / MAX 9.25)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

WavPack Audio Encoding

This test times how long it takes to encode a sample WAV file to WavPack format with very high quality settings. Learn more via the OpenBenchmarking.org test page.

WavPack Audio Encoding 5.3, WAV To WavPack (Seconds, Fewer Is Better)
  Run 1: 35.49  (SE +/- 0.00, N = 5; Min 35.48 / Max 35.49)
  Run 2: 35.52  (SE +/- 0.04, N = 5; Min 35.48 / Max 35.68)
  Run 3: 35.49  (SE +/- 0.00, N = 5; Min 35.48 / Max 35.51)
  1. (CXX) g++ options: -rdynamic

NCNN


NCNN 20201218, Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better)
  Run 1: 9.39  (SE +/- 0.02, N = 3; Min 9.35 / Max 9.41; MIN 9.26 / MAX 14.78)
  Run 2: 9.38  (SE +/- 0.02, N = 3; Min 9.35 / Max 9.41; MIN 9.24 / MAX 15.21)
  Run 3: 9.38  (SE +/- 0.03, N = 3; Min 9.33 / Max 9.43; MIN 9.24 / MAX 17.04)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: googlenet (ms, Fewer Is Better)
  Run 1: 24.73  (SE +/- 0.05, N = 3; Min 24.68 / Max 24.83; MIN 24.62 / MAX 45.71)
  Run 2: 24.74  (SE +/- 0.04, N = 3; Min 24.67 / Max 24.79; MIN 24.62 / MAX 25.81)
  Run 3: 24.72  (SE +/- 0.01, N = 3; Min 24.69 / Max 24.74; MIN 24.64 / MAX 25.82)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: efficientnet-b0 (ms, Fewer Is Better)
  Run 1: 13.51  (SE +/- 0.01, N = 3; Min 13.49 / Max 13.53; MIN 13.46 / MAX 15.54)
  Run 2: 13.52  (SE +/- 0.01, N = 3; Min 13.51 / Max 13.53; MIN 13.47 / MAX 14.70)
  Run 3: 13.52  (SE +/- 0.01, N = 3; Min 13.50 / Max 13.53; MIN 13.46 / MAX 14.91)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: squeezenet_ssd (ms, Fewer Is Better)
  Run 1: 51.97  (SE +/- 0.02, N = 3; Min 51.94 / Max 52.00; MIN 50.85 / MAX 55.78)
  Run 2: 51.98  (SE +/- 0.01, N = 3; Min 51.96 / Max 51.99; MIN 50.86 / MAX 53.88)
  Run 3: 51.97  (SE +/- 0.02, N = 3; Min 51.94 / Max 52.01; MIN 50.82 / MAX 54.52)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: blazeface (ms, Fewer Is Better)
  Run 1: 4.09  (SE +/- 0.00, N = 3; Min 4.09 / Max 4.10; MIN 4.05 / MAX 4.12)
  Run 2: 4.09  (SE +/- 0.00, N = 3; Min 4.09 / Max 4.10; MIN 4.05 / MAX 4.13)
  Run 3: 4.09  (SE +/- 0.00, N = 3; Min 4.08 / Max 4.09; MIN 4.05 / MAX 4.12)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

CLOMP

CLOMP is the C version of the Livermore OpenMP benchmark developed to measure OpenMP overheads and other performance impacts due to threading in order to influence future system designs. This particular test profile configuration is currently set to look at the OpenMP static schedule speed-up across all available CPU cores using the recommended test configuration. Learn more via the OpenBenchmarking.org test page.
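The speedup metric itself is just a ratio of serial to threaded wall time. A minimal Python sketch of that arithmetic, using hypothetical timings (not CLOMP's actual measurements) for this 8-core Xeon:

```python
# Hypothetical loop timings in seconds; illustrative only, not CLOMP output.
serial_time = 4.00      # statically scheduled loop on one thread
threaded_time = 0.56    # same work spread across 8 threads

# Static OMP speedup: ratio of serial to threaded time. Ideal scaling
# on an 8-core CPU would give 8.0; these made-up numbers land near the
# 7.1 reported in run 2 below.
speedup = serial_time / threaded_time
efficiency = speedup / 8    # fraction of ideal 8-way scaling

print(f"speedup: {speedup:.1f}x")
print(f"parallel efficiency: {efficiency:.0%}")
```

A speedup near the core count therefore indicates low OpenMP overhead for the static schedule, which is what runs 1 and 3 show at 8.0.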

CLOMP 1.2, Static OMP Speedup (Speedup, More Is Better)
  Run 1: 8.0
  Run 2: 7.1  (SE +/- 0.63, N = 12; Min 1.3 / Avg 7.07 / Max 8.0)
  Run 3: 8.0  (SE +/- 0.03, N = 3; Min 7.9 / Avg 7.97 / Max 8.0)
  1. (CC) gcc options: -fopenmp -O3 -lm