3700X More march

AMD Ryzen 7 3700X 8-Core testing with a Gigabyte A320M-S2H-CF (F52a BIOS) and HIS AMD Radeon HD 7750/8740 / R7 250E 1GB on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2103170-IB-3700XMORE69
Test suite categories represented: CPU Massive (4 tests), Creator Workloads (3 tests), Encoding (2 tests), HPC - High Performance Computing (2 tests), Multi-Core (5 tests), Server CPU Tests (4 tests), Video Encoding (2 tests).


Run Management

Result Identifier    Date Run    Test Duration
1                    March 17    1 Hour, 3 Minutes
2                    March 17    1 Hour, 8 Minutes
3                    March 17    43 Minutes
Average                          58 Minutes



3700X More march - System Details (identical across result identifiers 1, 2, and 3)

Processor: AMD Ryzen 7 3700X 8-Core @ 3.60GHz (8 Cores / 16 Threads)
Motherboard: Gigabyte A320M-S2H-CF (F52a BIOS)
Chipset: AMD Starship/Matisse
Memory: 8GB
Disk: 240GB TOSHIBA RC100
Graphics: HIS AMD Radeon HD 7750/8740 / R7 250E 1GB
Audio: AMD Oland/Hainan/Cape
Monitor: DELL S2409W
Network: Realtek RTL8111/8168/8411
OS: Ubuntu 20.04
Kernel: 5.8.1-050801-generic (x86_64)
Desktop: GNOME Shell 3.36.4
Display Server: X Server 1.20.9
OpenGL: 4.5 Mesa 20.0.8 (LLVM 10.0.0)
Compiler: GCC 9.3.0
File-System: ext4
Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq ondemand (Boost: Enabled) - CPU Microcode: 0x8701021
Python Details: Python 3.8.5
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional STIBP: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite 10.6.1): relative performance of runs 1-3, normalized so the slowest run of each test is 100% (overall spread roughly 100% to 111%), across oneDNN, Timed Mesa Compilation, SVT-VP9, Xcompact3d Incompact3d, and SVT-HEVC.
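Overview charts like this one normalize each test so runs can be compared on one percentage scale, and per-run summaries of such ratio data are conventionally aggregated with a geometric mean, which is insensitive to each test's unit and scale. A minimal sketch of that aggregation, using two result lines that appear later in this file (the `normalize` and `geomean` helpers are illustrative, not Phoronix Test Suite code):

```python
import math

def normalize(values, higher_is_better=True):
    """Scale each run's result so the worst run scores 100%."""
    worst = min(values) if higher_is_better else max(values)
    if higher_is_better:
        return [100.0 * v / worst for v in values]
    # For "fewer is better" metrics, invert so bigger is still better.
    return [100.0 * worst / v for v in values]

def geomean(values):
    """Geometric mean, the usual aggregate for ratio-scaled benchmark data."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Run 1 / Run 2 / Run 3 results from this file:
svt_hevc_t10 = normalize([209.25, 209.21, 209.45])                        # FPS, more is better
mesa_compile = normalize([55.88, 55.56, 55.76], higher_is_better=False)   # seconds, fewer is better

per_run = [geomean(pair) for pair in zip(svt_hevc_t10, mesa_compile)]
print([round(p, 2) for p in per_run])
```

With only two tests the spread is tiny, but the same machinery produces the 100-111% range shown above once all tests are included.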

3700X More march - detailed results (run 1 / run 2 / run 3; "-" = not run)

onednn: IP Shapes 1D - f32 - CPU (ms): 5.56715 / 5.53204 / 5.53438
onednn: IP Shapes 3D - f32 - CPU (ms): 10.2333 / 10.7400 / 10.9520
onednn: IP Shapes 1D - u8s8f32 - CPU (ms): 2.63018 / 2.62524 / 11.59989
onednn: IP Shapes 3D - u8s8f32 - CPU (ms): 2.38623 / 2.68827 / 2.46096
onednn: Convolution Batch Shapes Auto - f32 - CPU (ms): 23.0380 / 23.0943 / 22.9538
onednn: Deconvolution Batch shapes_1d - f32 - CPU (ms): 8.99245 / 8.70508 / 8.71411
onednn: Deconvolution Batch shapes_3d - f32 - CPU (ms): 6.69724 / 6.74862 / 6.73558
onednn: Convolution Batch Shapes Auto - u8s8f32 - CPU (ms): 21.0584 / 21.522 / 21.2447
onednn: Deconvolution Batch shapes_1d - u8s8f32 - CPU (ms): 3.63976 / 3.64478 / 3.64264
onednn: Deconvolution Batch shapes_3d - u8s8f32 - CPU (ms): 4.62287 / 4.64339 / 4.63664
onednn: Recurrent Neural Network Training - f32 - CPU (ms): 3899.22 / 3936.82 / 3811.13
onednn: Recurrent Neural Network Inference - f32 - CPU (ms): 2758.75 / 2830.69 / -
onednn: Recurrent Neural Network Training - u8s8f32 - CPU (ms): 3873.42 / 3938.80 / -
onednn: Recurrent Neural Network Inference - u8s8f32 - CPU (ms): 2746.85 / 2819.51 / -
onednn: Matrix Multiply Batch Shapes Transformer - f32 - CPU (ms): 4.82779 / 4.94477 / -
onednn: Recurrent Neural Network Training - bf16bf16bf16 - CPU (ms): 3858.11 / 3919.20 / -
onednn: Recurrent Neural Network Inference - bf16bf16bf16 - CPU (ms): 2743.14 / 2801.03 / -
onednn: Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU (ms): 3.05660 / 3.06767 / -
svt-hevc: 1 - Bosphorus 1080p (FPS): 7.63 / 7.59 / 7.61
svt-hevc: 7 - Bosphorus 1080p (FPS): 106.19 / 105.83 / 105.92
svt-hevc: 10 - Bosphorus 1080p (FPS): 209.25 / 209.21 / 209.45
svt-vp9: VMAF Optimized - Bosphorus 1080p (FPS): 136.60 / 134.18 / 135.52
svt-vp9: PSNR/SSIM Optimized - Bosphorus 1080p (FPS): 143.24 / 143.51 / 143.34
svt-vp9: Visual Quality Optimized - Bosphorus 1080p (FPS): 114.39 / 114.45 / 114.17
sysbench: RAM / Memory (MiB/sec): 10289.34 / 10276.90 / -
sysbench: CPU (events/sec): 17383.33 / 17341.94 / -
build-mesa: Time To Compile (seconds): 55.883 / 55.559 / 55.758
incompact3d: input.i3d 129 Cells Per Direction (seconds): 40.3466771 / 40.4793879 / 40.7185669
incompact3d: input.i3d 192 Cells Per Direction (seconds): 318.211354 / 316.528941 / 317.006622

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn benchmarking functionality. The result is the total perf time reported by benchdnn. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library), and before that MKL-DNN, prior to being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
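Each result below is the average of N repeated runs, and the "SE +/-" figure is the standard error of that mean: the sample standard deviation divided by the square root of N. A minimal sketch of that computation (generic sample timings shaped like one oneDNN result line; this is not the Phoronix Test Suite's own implementation):

```python
import math

def mean_and_se(samples):
    """Return (mean, standard error of the mean) for a list of timings."""
    n = len(samples)
    avg = sum(samples) / n
    # Sample standard deviation uses Bessel's correction (n - 1 denominator).
    variance = sum((x - avg) ** 2 for x in samples) / (n - 1)
    return avg, math.sqrt(variance) / math.sqrt(n)

# Three timings in ms, as with an N = 3 result:
avg, se = mean_and_se([5.54, 5.57, 5.61])
print(f"Avg: {avg:.2f} ms, SE +/- {se:.3f}")
```

A small SE relative to the mean (as in most lines below) means the repeated runs agreed closely; the outlier-heavy lines with N = 12 or N = 15 were automatically re-run because they did not.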

oneDNN 2.1.2 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, fewer is better)
  1: 5.56715  (SE +/- 0.02182, N = 3; Min 5.54 / Avg 5.57 / Max 5.61; MIN: 5.4)
  2: 5.53204  (SE +/- 0.01225, N = 3; Min 5.51 / Avg 5.53 / Max 5.55; MIN: 5.37)
  3: 5.53438  (SE +/- 0.00906, N = 3; Min 5.52 / Avg 5.53 / Max 5.55; MIN: 5.4)
  Compiler flags: (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, fewer is better)
  1: 10.23  (SE +/- 0.03, N = 3; Min 10.19 / Avg 10.23 / Max 10.29; MIN: 9.78)
  2: 10.74  (SE +/- 0.02, N = 3; Min 10.71 / Avg 10.74 / Max 10.79; MIN: 10.43)
  3: 10.95  (SE +/- 0.01, N = 3; Min 10.94 / Avg 10.95 / Max 10.97; MIN: 10.66)
  Compiler flags: (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  1: 2.63018   (SE +/- 0.00406, N = 3; Min 2.62 / Avg 2.63 / Max 2.63; MIN: 2.58)
  2: 2.62524   (SE +/- 0.00360, N = 3; Min 2.62 / Avg 2.63 / Max 2.63; MIN: 2.58)
  3: 11.59989  (SE +/- 5.08322, N = 12; Min 2.62 / Avg 11.6 / Max 52.28; MIN: 2.57)
  Compiler flags: (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  1: 2.38623  (SE +/- 0.01088, N = 3; Min 2.37 / Avg 2.39 / Max 2.4; MIN: 2.3)
  2: 2.68827  (SE +/- 0.00780, N = 3; Min 2.67 / Avg 2.69 / Max 2.7; MIN: 2.61)
  3: 2.46096  (SE +/- 0.00446, N = 3; Min 2.45 / Avg 2.46 / Max 2.47; MIN: 2.4)
  Compiler flags: (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better)
  1: 23.04  (SE +/- 0.00, N = 3; Min 23.04 / Avg 23.04 / Max 23.04; MIN: 22.72)
  2: 23.09  (SE +/- 0.03, N = 3; Min 23.06 / Avg 23.09 / Max 23.16; MIN: 22.81)
  3: 22.95  (SE +/- 0.01, N = 3; Min 22.94 / Avg 22.95 / Max 22.97; MIN: 22.74)
  Compiler flags: (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, fewer is better)
  1: 8.99245  (SE +/- 0.01522, N = 3; Min 8.97 / Avg 8.99 / Max 9.02; MIN: 5.17)
  2: 8.70508  (SE +/- 0.16196, N = 15; Min 6.89 / Avg 8.71 / Max 9.04; MIN: 5.13)
  3: 8.71411  (SE +/- 0.21403, N = 15; Min 6.34 / Avg 8.71 / Max 9.27; MIN: 5.16)
  Compiler flags: (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better)
  1: 6.69724  (SE +/- 0.00606, N = 3; Min 6.69 / Avg 6.7 / Max 6.71; MIN: 6.62)
  2: 6.74862  (SE +/- 0.01474, N = 3; Min 6.72 / Avg 6.75 / Max 6.77; MIN: 6.64)
  3: 6.73558  (SE +/- 0.00795, N = 3; Min 6.72 / Avg 6.74 / Max 6.75; MIN: 6.65)
  Compiler flags: (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  1: 21.06  (SE +/- 0.04, N = 3; Min 20.98 / Avg 21.06 / Max 21.11; MIN: 20.68)
  2: 21.52  (SE +/- 0.02, N = 3; Min 21.5 / Avg 21.52 / Max 21.55; MIN: 21.31)
  3: 21.24  (SE +/- 0.03, N = 3; Min 21.18 / Avg 21.24 / Max 21.28; MIN: 20.94)
  Compiler flags: (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  1: 3.63976  (SE +/- 0.00803, N = 3; Min 3.62 / Avg 3.64 / Max 3.65; MIN: 3.47)
  2: 3.64478  (SE +/- 0.00448, N = 3; Min 3.64 / Avg 3.64 / Max 3.65; MIN: 3.47)
  3: 3.64264  (SE +/- 0.01166, N = 3; Min 3.62 / Avg 3.64 / Max 3.66; MIN: 3.47)
  Compiler flags: (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  1: 4.62287  (SE +/- 0.00500, N = 3; Min 4.62 / Avg 4.62 / Max 4.63; MIN: 4.43)
  2: 4.64339  (SE +/- 0.00974, N = 3; Min 4.62 / Avg 4.64 / Max 4.66; MIN: 4.47)
  3: 4.63664  (SE +/- 0.00935, N = 3; Min 4.62 / Avg 4.64 / Max 4.65; MIN: 4.45)
  Compiler flags: (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, fewer is better)
  1: 3899.22  (SE +/- 2.05, N = 3; Min 3896.05 / Avg 3899.22 / Max 3903.06; MIN: 3886.77)
  2: 3936.82  (SE +/- 19.48, N = 3; Min 3911.7 / Avg 3936.82 / Max 3975.16; MIN: 3902.02)
  3: 3811.13  (SE +/- 4.52, N = 3; Min 3804.27 / Avg 3811.13 / Max 3819.66; MIN: 3793.22)
  Compiler flags: (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better; runs 1-2 only)
  1: 2758.75  (SE +/- 12.62, N = 3; Min 2733.65 / Avg 2758.75 / Max 2773.56; MIN: 2706.38)
  2: 2830.69  (SE +/- 7.42, N = 3; Min 2818.82 / Avg 2830.69 / Max 2844.33; MIN: 2793.58)
  Compiler flags: (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better; runs 1-2 only)
  1: 3873.42  (SE +/- 16.07, N = 3; Min 3854.64 / Avg 3873.42 / Max 3905.39; MIN: 3842.1)
  2: 3938.80  (SE +/- 6.78, N = 3; Min 3927.59 / Avg 3938.8 / Max 3951.01; MIN: 3920.91)
  Compiler flags: (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better; runs 1-2 only)
  1: 2746.85  (SE +/- 11.92, N = 3; Min 2723.26 / Avg 2746.85 / Max 2761.64; MIN: 2702.15)
  2: 2819.51  (SE +/- 7.65, N = 3; Min 2809.55 / Avg 2819.51 / Max 2834.56; MIN: 2784.96)
  Compiler flags: (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, fewer is better; runs 1-2 only)
  1: 4.82779  (SE +/- 0.01300, N = 3; Min 4.81 / Avg 4.83 / Max 4.85; MIN: 4.66)
  2: 4.94477  (SE +/- 0.01312, N = 3; Min 4.92 / Avg 4.94 / Max 4.96; MIN: 4.77)
  Compiler flags: (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better; runs 1-2 only)
  1: 3858.11  (SE +/- 13.76, N = 3; Min 3840.85 / Avg 3858.11 / Max 3885.31; MIN: 3834.74)
  2: 3919.20  (SE +/- 2.50, N = 3; Min 3914.43 / Avg 3919.2 / Max 3922.86; MIN: 3903.26)
  Compiler flags: (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better; runs 1-2 only)
  1: 2743.14  (SE +/- 16.15, N = 3; Min 2716.19 / Avg 2743.14 / Max 2772.05; MIN: 2706.62)
  2: 2801.03  (SE +/- 8.10, N = 3; Min 2791.82 / Avg 2801.03 / Max 2817.18; MIN: 2771.6)
  Compiler flags: (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better; runs 1-2 only)
  1: 3.05660  (SE +/- 0.00195, N = 3; Min 3.05 / Avg 3.06 / Max 3.06; MIN: 2.98)
  2: 3.06767  (SE +/- 0.00076, N = 3; Min 3.07 / Avg 3.07 / Max 3.07; MIN: 3)
  Compiler flags: (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.
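The tuning preset trades encode quality for speed; in these results, moving from tuning 1 to tuning 10 takes the encoder from roughly 7.6 to 209 frames per second. A quick bit of arithmetic on what that means per frame (run 1 values from the results below; plain arithmetic, not encoder code):

```python
# Run 1 throughput (frames per second) at three SVT-HEVC tuning presets:
fps = {1: 7.63, 7: 106.19, 10: 209.25}

for preset, rate in fps.items():
    ms_per_frame = 1000.0 / rate          # wall time spent on each frame
    speedup = rate / fps[1]               # relative to the slowest preset
    print(f"Tuning {preset}: {ms_per_frame:6.1f} ms/frame, {speedup:4.1f}x vs tuning 1")
```

So tuning 1 spends about 131 ms per frame against under 5 ms at tuning 10, a roughly 27x spread on this CPU.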

SVT-HEVC 1.5.0 - Tuning: 1 - Input: Bosphorus 1080p (Frames Per Second, more is better)
  1: 7.63  (SE +/- 0.02, N = 3; Min 7.59 / Avg 7.63 / Max 7.67)
  2: 7.59  (SE +/- 0.01, N = 3; Min 7.56 / Avg 7.59 / Max 7.61)
  3: 7.61  (SE +/- 0.03, N = 3; Min 7.57 / Avg 7.61 / Max 7.66)
  Compiler flags: (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

SVT-HEVC 1.5.0 - Tuning: 7 - Input: Bosphorus 1080p (Frames Per Second, more is better)
  1: 106.19  (SE +/- 0.04, N = 3; Min 106.1 / Avg 106.19 / Max 106.25)
  2: 105.83  (SE +/- 0.29, N = 3; Min 105.26 / Avg 105.83 / Max 106.19)
  3: 105.92  (SE +/- 0.22, N = 3; Min 105.49 / Avg 105.92 / Max 106.14)
  Compiler flags: (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

SVT-HEVC 1.5.0 - Tuning: 10 - Input: Bosphorus 1080p (Frames Per Second, more is better)
  1: 209.25  (SE +/- 0.31, N = 3; Min 208.84 / Avg 209.25 / Max 209.86)
  2: 209.21  (SE +/- 0.26, N = 3; Min 208.91 / Avg 209.21 / Max 209.72)
  3: 209.45  (SE +/- 0.49, N = 3; Min 208.55 / Avg 209.45 / Max 210.23)
  Compiler flags: (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: VMAF Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better)
  1: 136.60  (SE +/- 1.75, N = 3; Min 133.11 / Avg 136.6 / Max 138.56)
  2: 134.18  (SE +/- 4.27, N = 12; Min 87.25 / Avg 134.18 / Max 139.27)
  3: 135.52  (SE +/- 3.07, N = 12; Min 101.76 / Avg 135.52 / Max 139.45)
  Compiler flags: (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

SVT-VP9 0.3 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better)
  1: 143.24  (SE +/- 0.20, N = 3; Min 142.84 / Avg 143.24 / Max 143.46)
  2: 143.51  (SE +/- 0.39, N = 3; Min 143.01 / Avg 143.51 / Max 144.28)
  3: 143.34  (SE +/- 0.23, N = 3; Min 142.99 / Avg 143.34 / Max 143.78)
  Compiler flags: (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

SVT-VP9 0.3 - Tuning: Visual Quality Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better)
  1: 114.39  (SE +/- 0.20, N = 3; Min 114.1 / Avg 114.39 / Max 114.78)
  2: 114.45  (SE +/- 0.10, N = 3; Min 114.28 / Avg 114.45 / Max 114.62)
  3: 114.17  (SE +/- 0.46, N = 3; Min 113.43 / Avg 114.17 / Max 115)
  Compiler flags: (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

Sysbench

This is a benchmark of Sysbench with its built-in CPU and memory sub-tests. Sysbench is a scriptable, multi-threaded benchmark tool based on LuaJIT. Learn more via the OpenBenchmarking.org test page.

Sysbench 1.0.20 - Test: RAM / Memory (MiB/sec, more is better; runs 1-2 only)
  1: 10289.34  (SE +/- 4.79, N = 3; Min 10280.24 / Avg 10289.34 / Max 10296.46)
  2: 10276.90  (SE +/- 23.22, N = 3; Min 10231.46 / Avg 10276.9 / Max 10307.93)
  Compiler flags: (CC) gcc options: -pthread -O2 -funroll-loops -rdynamic -ldl -laio -lm

Sysbench 1.0.20 - Test: CPU (Events Per Second, more is better; runs 1-2 only)
  1: 17383.33  (SE +/- 5.91, N = 3; Min 17371.8 / Avg 17383.33 / Max 17391.33)
  2: 17341.94  (SE +/- 33.37, N = 3; Min 17276.49 / Avg 17341.94 / Max 17386)
  Compiler flags: (CC) gcc options: -pthread -O2 -funroll-loops -rdynamic -ldl -laio -lm

Timed Mesa Compilation

This test profile times how long it takes to compile Mesa with Meson/Ninja. To minimize build dependencies and avoid versioning conflicts, this test covers just the core Mesa build, without LLVM or the extra Gallium3D/Mesa drivers enabled. Learn more via the OpenBenchmarking.org test page.

Timed Mesa Compilation 21.0 - Time To Compile (Seconds, fewer is better)
  1: 55.88  (SE +/- 0.01, N = 3; Min 55.87 / Avg 55.88 / Max 55.9)
  2: 55.56  (SE +/- 0.10, N = 3; Min 55.38 / Avg 55.56 / Max 55.71)
  3: 55.76  (SE +/- 0.06, N = 3; Min 55.67 / Avg 55.76 / Max 55.88)

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.
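For reference, the system Incompact3d solves is the incompressible Navier-Stokes equations, optionally coupled to scalar transport equations (standard textbook form, not taken from the Xcompact3d sources; u is velocity, p pressure, ν kinematic viscosity, and φ a transported scalar with diffusivity κ):

```latex
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u}
  = -\frac{1}{\rho}\nabla p + \nu \nabla^2 \mathbf{u},
\qquad
\nabla \cdot \mathbf{u} = 0,
\qquad
\frac{\partial \phi}{\partial t} + \mathbf{u} \cdot \nabla \phi = \kappa \nabla^2 \phi
```

The "Cells Per Direction" inputs below set the resolution of the finite-difference grid, which is why the 192-cell case takes roughly eight times as long as the 129-cell case.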

Xcompact3d Incompact3d 2021-03-11 - Input: input.i3d 129 Cells Per Direction (Seconds, fewer is better)
  1: 40.35  (SE +/- 0.04, N = 3; Min 40.26 / Avg 40.35 / Max 40.39)
  2: 40.48  (SE +/- 0.05, N = 3; Min 40.38 / Avg 40.48 / Max 40.55)
  3: 40.72  (SE +/- 0.22, N = 3; Min 40.45 / Avg 40.72 / Max 41.16)
  Compiler flags: (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

Xcompact3d Incompact3d 2021-03-11 - Input: input.i3d 192 Cells Per Direction (Seconds, fewer is better)
  1: 318.21  (SE +/- 1.26, N = 3; Min 316.75 / Avg 318.21 / Max 320.71)
  2: 316.53  (SE +/- 0.22, N = 3; Min 316.11 / Avg 316.53 / Max 316.85)
  3: 317.01  (SE +/- 0.35, N = 3; Min 316.32 / Avg 317.01 / Max 317.47)
  Compiler flags: (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi