Core i9 10980XE vs. Ryzen Threadripper 3990X - Pop!_OS Skylake Optimization Benchmarks

Benchmarks by Michael Larabel.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2007127-NE-2007118NE94
Test categories represented in this comparison:

AV1: 3 tests
Bioinformatics: 2 tests
C++ Boost Tests: 2 tests
C/C++ Compiler Tests: 11 tests
CPU Massive: 21 tests
Creator Workloads: 9 tests
Database Test Suite: 2 tests
Encoding: 3 tests
Fortran Tests: 4 tests
HPC - High Performance Computing: 16 tests
Imaging: 2 tests
Machine Learning: 6 tests
Molecular Dynamics: 3 tests
MPI Benchmarks: 5 tests
Multi-Core: 15 tests
NVIDIA GPU Compute: 3 tests
OpenMPI Tests: 7 tests
Programmer / Developer System Benchmarks: 5 tests
Python: 7 tests
Raytracing: 2 tests
Renderers: 2 tests
Scientific Computing: 6 tests
Server: 2 tests
Server CPU Tests: 16 tests
Single-Threaded: 4 tests
Video Encoding: 3 tests
Common Workstation Benchmarks: 3 tests


Run Management

Highlight
Result
Hide
Result
Result
Identifier
View Logs
Performance Per
Dollar
Date
Run
  Test
  Duration
i9 10980XE: Default
July 10 2020
  6 Hours, 44 Minutes
i9 10980XE: Optimized
July 10 2020
  6 Hours, 52 Minutes
TR 3990X: Default
July 11 2020
  5 Hours, 46 Minutes
TR 3990X: Optimized
July 11 2020
  5 Hours, 21 Minutes
Invert Hiding All Results Option
  6 Hours, 11 Minutes



System Details

i9 10980XE (Default and Optimized runs):
  Processor: Intel Core i9-10980XE @ 4.80GHz (18 Cores / 36 Threads)
  Motherboard: ASRock X299 Steel Legend (P1.30 BIOS)
  Chipset: Intel Sky Lake-E DMI3 Registers
  Memory: 32GB
  Disk: Samsung SSD 970 PRO 512GB + 32GB Flash Disk
  Graphics: NVIDIA GeForce GTX 1080 Ti 11GB (NVIDIA NV132)
  Audio: Realtek ALC1220
  Monitor: ASUS MG28U
  Network: Intel I219-V + Intel I211
  Display Driver: modesetting 1.20.8
  OpenGL: 4.3 Mesa 20.0.8

TR 3990X (Default and Optimized runs):
  Processor: AMD Ryzen Threadripper 3990X 64-Core @ 2.90GHz (64 Cores / 128 Threads)
  Motherboard: System76 Thelio Major (F4c Z5 BIOS)
  Chipset: AMD Starship/Matisse
  Memory: 126GB
  Disk: Samsung SSD 970 EVO Plus 500GB
  Graphics: AMD Radeon RX 5600 OEM/5600 XT / 5700/5700 8GB (1750/875MHz)
  Audio: AMD Navi 10 HDMI Audio
  Monitor: LG Ultra HD
  Network: Intel I211 + Intel Wi-Fi 6 AX200
  Display Driver: amdgpu 19.1.0
  OpenGL: 4.6 Mesa 20.0.8 (LLVM 10.0.0)

Common to all runs:
  OS: Pop 20.04
  Kernel: 5.4.0-7634-generic (x86_64)
  Desktop: GNOME Shell 3.36.3
  Display Server: X Server 1.20.8
  Compiler: GCC 9.3.0
  File-System: ext4
  Screen Resolution: 3840x2160

Compiler Details

All four runs used GCC configured with: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

The Optimized compilers on both CPUs additionally set --with-arch=skylake; this flag is the only difference between the Default and Optimized compiler configurations.

Processor Details

  i9 10980XE (both runs): Scaling Governor: intel_pstate powersave - CPU Microcode: 0x5002f01
  TR 3990X (both runs): Scaling Governor: acpi-cpufreq ondemand - CPU Microcode: 0x8301025

Python Details

  Python 2.7.18rc1 + Python 3.8.2

Security Details

  i9 10980XE (both runs): itlb_multihit: KVM: Mitigation of Split huge pages + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + tsx_async_abort: Mitigation of TSX disabled
  TR 3990X (both runs): itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional STIBP: conditional RSB filling + tsx_async_abort: Not affected

Kernel Details

  TR 3990X (Default and Optimized): snd_usb_audio.ignore_ctl_error=1

[Results overview: a condensed table listing every per-test result for the four runs appeared here; the individual results are broken out per test below.]

PlaidML

This test profile uses the PlaidML deep learning framework developed by Intel to offer up various benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML - FP16: No - Mode: Inference - Network: DenseNet 201 - Device: CPU
FPS, more is better (OpenBenchmarking.org)

  i9 10980XE, Default:    2.62  (SE +/- 0.01, N = 3; min 2.60 / max 2.63)
  i9 10980XE, Optimized:  2.60  (SE +/- 0.00, N = 3; min 2.60 / max 2.60)
  TR 3990X, Default:      3.44  (SE +/- 0.00, N = 3; min 3.44 / max 3.45)
  TR 3990X, Optimized:    3.44  (SE +/- 0.00, N = 3; min 3.44 / max 3.45)
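Each result is the mean of N runs, and the SE figures are standard errors of the mean. As a sketch, the i9 10980XE Default figure can be reproduced from per-run samples; the three samples below are hypothetical, chosen only to be consistent with the reported min 2.60 / avg 2.62 / max 2.63:

```python
import statistics

# Hypothetical per-run samples, consistent with the reported
# min 2.60 / avg 2.62 / max 2.63 over N = 3 runs.
samples = [2.60, 2.63, 2.63]

n = len(samples)
mean = statistics.mean(samples)
# Standard error of the mean: sample standard deviation / sqrt(N).
se = statistics.stdev(samples) / n ** 0.5

print(round(mean, 2), round(se, 2))  # matches the reported 2.62 and SE +/- 0.01
```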

YafaRay

YafaRay is an open-source physically based montecarlo ray-tracing engine. Learn more via the OpenBenchmarking.org test page.

YafaRay 3.4.1 - Total Time For Sample Scene
Seconds, fewer is better (OpenBenchmarking.org)

  i9 10980XE, Default:    165.03  (SE +/- 12.88, N = 9; min 115.17 / max 235.40)
  i9 10980XE, Optimized:  169.77  (SE +/- 8.23, N = 12; min 132.28 / max 230.95)
  TR 3990X, Default:      54.04   (SE +/- 0.61, N = 15; min 51.13 / max 57.04)
  TR 3990X, Optimized:    54.99   (SE +/- 0.56, N = 3; min 53.87 / max 55.55)

  1. (CXX) g++ options: -std=c++11 -O3 -ffast-math -rdynamic -ldl -lImath -lIlmImf -lIex -lHalf -lz -lIlmThread -lxml2 -lfreetype -lpthread

PlaidML

This test profile uses the PlaidML deep learning framework developed by Intel to offer up various benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML - FP16: No - Mode: Inference - Network: ResNet 50 - Device: CPU
FPS, more is better (OpenBenchmarking.org)

  i9 10980XE, Default:    5.51  (SE +/- 0.01, N = 3; min 5.49 / max 5.52)
  i9 10980XE, Optimized:  5.50  (SE +/- 0.02, N = 3; min 5.48 / max 5.53)
  TR 3990X, Default:      6.28  (SE +/- 0.01, N = 3; min 6.27 / max 6.29)
  TR 3990X, Optimized:    6.31  (SE +/- 0.02, N = 3; min 6.28 / max 6.34)

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient benchmark, a newer scientific benchmark from Sandia National Laboratories aimed at supercomputer testing with modern, real-world workloads compared to HPCC. Learn more via the OpenBenchmarking.org test page.

High Performance Conjugate Gradient 3.1
GFLOP/s, more is better (OpenBenchmarking.org)

  i9 10980XE, Default:    7.88915  (SE +/- 0.01121, N = 3; min 7.87 / max 7.90)
  i9 10980XE, Optimized:  7.88120  (SE +/- 0.00474, N = 3; min 7.87 / max 7.89)
  TR 3990X, Default:      9.04979  (SE +/- 0.00135, N = 3; min 9.05 / max 9.05)
  TR 3990X, Optimized:    9.05253  (SE +/- 0.00205, N = 3; min 9.05 / max 9.06)

  1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -pthread -lmpi_cxx -lmpi
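HPCG's core operation is the conjugate gradient iteration. A minimal, dependency-free sketch of unpreconditioned CG on a small symmetric positive-definite system (HPCG's actual implementation uses a multigrid preconditioner over a sparse 3D stencil, so this is purely illustrative):

```python
def matvec(A, x):
    # Dense matrix-vector product for a matrix stored as nested lists.
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def conjugate_gradient(A, b, tol=1e-12, max_iter=100):
    """Solve A x = b for symmetric positive-definite A via CG."""
    x = [0.0] * len(b)
    r = list(b)          # residual b - A x, with x = 0
    p = list(r)          # initial search direction
    rs_old = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rs_old / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs_old) * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x

x = conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
print(x)  # approximately [1/11, 7/11]
```

For a 2x2 system CG converges in at most two iterations in exact arithmetic.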

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2 - Device AI Score
Score, more is better (OpenBenchmarking.org)

  i9 10980XE, Default:    3513
  i9 10980XE, Optimized:  3507
  TR 3990X, Default:      3129
  TR 3990X, Optimized:    3120

AI Benchmark Alpha 0.1.2 - Device Training Score
Score, more is better (OpenBenchmarking.org)

  i9 10980XE, Default:    1565
  i9 10980XE, Optimized:  1563
  TR 3990X, Default:      1197
  TR 3990X, Optimized:    1197

AI Benchmark Alpha 0.1.2 - Device Inference Score
Score, more is better (OpenBenchmarking.org)

  i9 10980XE, Default:    1948
  i9 10980XE, Optimized:  1944
  TR 3990X, Default:      1932
  TR 3990X, Optimized:    1923

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with a built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.30.8 - VGR Performance Metric
VGR Performance Metric, more is better (OpenBenchmarking.org)

  i9 10980XE, Default:    209472
  i9 10980XE, Optimized:  208935
  TR 3990X, Default:      816934
  TR 3990X, Optimized:    817529

  1. (CXX) g++ options: -std=c++11 -pipe -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -rdynamic -lGL -lGLdispatch -lX11 -lpthread -ldl -luuid -lm

PlaidML

This test profile uses the PlaidML deep learning framework developed by Intel to offer up various benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML - FP16: No - Mode: Inference - Network: Inception V3 - Device: CPU
FPS, more is better (OpenBenchmarking.org)

  i9 10980XE, Default:    6.51   (SE +/- 0.01, N = 3; min 6.50 / max 6.52)
  i9 10980XE, Optimized:  6.53   (SE +/- 0.01, N = 3; min 6.51 / max 6.54)
  TR 3990X, Default:      11.15  (SE +/- 0.03, N = 3; min 11.10 / max 11.21)
  TR 3990X, Optimized:    11.06  (SE +/- 0.03, N = 3; min 11.03 / max 11.11)

Darmstadt Automotive Parallel Heterogeneous Suite

DAPHNE is the Darmstadt Automotive Parallel HeterogeNEous Benchmark Suite, with OpenCL / CUDA / OpenMP test cases for evaluating programming models in the context of autonomous driving. Learn more via the OpenBenchmarking.org test page.

Darmstadt Automotive Parallel Heterogeneous Suite - Backend: OpenMP - Kernel: Points2Image
Test Cases Per Minute, more is better (OpenBenchmarking.org)

  i9 10980XE, Default:    21226.24  (SE +/- 181.23, N = 15; min 19015.50 / max 21656.19)
  i9 10980XE, Optimized:  21137.64  (SE +/- 165.68, N = 15; min 19379.66 / max 21672.31)
  TR 3990X, Default:      21228.85  (SE +/- 222.73, N = 3; min 20791.25 / max 21519.80)
  TR 3990X, Optimized:    21080.52  (SE +/- 230.98, N = 3; min 20619.12 / max 21330.83)

  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp

Numpy Benchmark

This is a test of general NumPy performance. Learn more via the OpenBenchmarking.org test page.

Numpy Benchmark
Score, more is better (OpenBenchmarking.org)

  i9 10980XE, Default:    374.56  (SE +/- 0.30, N = 3; min 374.11 / max 375.12)
  i9 10980XE, Optimized:  379.91  (SE +/- 1.22, N = 3; min 377.95 / max 382.14)
  TR 3990X, Default:      373.17  (SE +/- 0.91, N = 3; min 372.18 / max 374.99)
  TR 3990X, Optimized:    375.83  (SE +/- 0.40, N = 3; min 375.06 / max 376.36)

Himeno Benchmark

The Himeno benchmark is a linear solver for the pressure Poisson equation using a point-Jacobi method. Learn more via the OpenBenchmarking.org test page.

Himeno Benchmark 3.0 - Poisson Pressure Solver
MFLOPS, more is better (OpenBenchmarking.org)

  i9 10980XE, Default:    4130.55  (SE +/- 54.21, N = 15; min 3666.54 / max 4244.13)
  i9 10980XE, Optimized:  4757.81  (SE +/- 9.55, N = 3; min 4740.31 / max 4773.21)
  TR 3990X, Default:      4076.63  (SE +/- 34.51, N = 15; min 3811.66 / max 4245.90)
  TR 3990X, Optimized:    3508.10  (SE +/- 47.31, N = 3; min 3447.63 / max 3601.37)

  1. (CC) gcc options: -O3 -mavx2
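Himeno measures a point-Jacobi solve of the pressure Poisson equation. As an illustrative sketch of the Jacobi update rule (a 1D Laplace toy problem, not Himeno's 19-point 3D stencil), each sweep replaces every interior point with the average of its neighbors from the previous sweep:

```python
# Solve the 1D Laplace equation u'' = 0 on a 5-point grid with fixed
# boundary values u[0] = 0 and u[-1] = 1 via point-Jacobi sweeps.
n = 5
u = [0.0] * n
u[-1] = 1.0

for _ in range(200):
    # Plain Jacobi: every update reads only the previous sweep's values.
    u = [u[0]] + [0.5 * (u[i - 1] + u[i + 1]) for i in range(1, n - 1)] + [u[-1]]

print([round(v, 4) for v in u])  # converges to the linear ramp [0.0, 0.25, 0.5, 0.75, 1.0]
```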

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8 - Encoder Mode: Enc Mode 0 - Input: 1080p
Frames Per Second, more is better (OpenBenchmarking.org)

  i9 10980XE, Default:    0.129  (SE +/- 0.000, N = 3)
  i9 10980XE, Optimized:  0.130  (SE +/- 0.000, N = 3)
  TR 3990X, Default:      0.128  (SE +/- 0.000, N = 3)
  TR 3990X, Optimized:    0.130  (SE +/- 0.000, N = 3)

  1. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

PlaidML

This test profile uses the PlaidML deep learning framework developed by Intel to offer up various benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML - FP16: No - Mode: Inference - Network: Mobilenet - Device: CPU
FPS, more is better (OpenBenchmarking.org)

  i9 10980XE, Default:    12.90  (SE +/- 0.04, N = 3; min 12.82 / max 12.97)
  i9 10980XE, Optimized:  12.94  (SE +/- 0.07, N = 3; min 12.81 / max 13.05)
  TR 3990X, Default:      14.70  (SE +/- 0.04, N = 3; min 14.63 / max 14.76)
  TR 3990X, Optimized:    14.98  (SE +/- 0.08, N = 3; min 14.85 / max 15.11)

Scikit-Learn

Scikit-learn is a Python module for machine learning. Learn more via the OpenBenchmarking.org test page.

Scikit-Learn 0.22.1
Seconds, fewer is better (OpenBenchmarking.org)

  i9 10980XE, Default:    153.23  (SE +/- 0.02, N = 3; min 153.18 / max 153.25)
  i9 10980XE, Optimized:  153.32  (SE +/- 0.03, N = 3; min 153.26 / max 153.36)
  TR 3990X, Default:      107.56  (SE +/- 0.69, N = 3; min 106.18 / max 108.38)
  TR 3990X, Optimized:    106.74  (SE +/- 0.29, N = 3; min 106.25 / max 107.27)

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP HotSpot3D
Seconds, fewer is better (OpenBenchmarking.org)

  i9 10980XE, Default:    96.03  (SE +/- 1.23, N = 4; min 92.41 / max 97.75)
  i9 10980XE, Optimized:  93.92  (SE +/- 1.41, N = 3; min 92.44 / max 96.75)
  TR 3990X, Default:      86.86  (SE +/- 1.02, N = 6; min 84.68 / max 91.58)
  TR 3990X, Optimized:    86.14  (SE +/- 0.50, N = 3; min 85.15 / max 86.78)

  1. (CXX) g++ options: -O2 -lOpenCL

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: python_startup
Milliseconds, fewer is better (OpenBenchmarking.org)

  i9 10980XE, Default:    14.6  (SE +/- 0.09, N = 3; min 14.4 / max 14.7)
  i9 10980XE, Optimized:  14.6  (SE +/- 0.06, N = 3; min 14.5 / max 14.7)
  TR 3990X, Default:      12.2  (SE +/- 0.03, N = 3; min 12.1 / max 12.2)
  TR 3990X, Optimized:    12.0

Timed MrBayes Analysis

This test performs a Bayesian analysis of a set of primate genome sequences in order to estimate their phylogeny. Learn more via the OpenBenchmarking.org test page.

Timed MrBayes Analysis 3.2.7 - Primate Phylogeny Analysis
Seconds, fewer is better (OpenBenchmarking.org)

  i9 10980XE, Default:    111.07  (SE +/- 0.56, N = 3; min 110.45 / max 112.19)
  i9 10980XE, Optimized:  110.80  (SE +/- 0.15, N = 3; min 110.53 / max 111.04)
  TR 3990X, Default:      93.73   (SE +/- 0.32, N = 3; min 93.10 / max 94.16)
  TR 3990X, Optimized:    96.31   (SE +/- 0.77, N = 3; min 94.93 / max 97.58)

  1. (CC) gcc options: -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msse4a -msha -maes -mavx -mfma -mavx2 -mrdrnd -mbmi -mbmi2 -madx -mabm -O3 -std=c99 -pedantic -lm

POV-Ray

This is a test of POV-Ray, the Persistence of Vision Raytracer. POV-Ray is used to create 3D graphics using ray-tracing. Learn more via the OpenBenchmarking.org test page.

POV-Ray 3.7.0.7 - Trace Time
Seconds, fewer is better (OpenBenchmarking.org)

  i9 10980XE, Default:    36.454  (SE +/- 1.992, N = 12; min 31.04 / max 56.54)
  i9 10980XE, Optimized:  53.105  (SE +/- 10.039, N = 12; min 30.97 / max 136.75)
  TR 3990X, Default:      9.050   (SE +/- 0.065, N = 3; min 8.98 / max 9.18)
  TR 3990X, Optimized:    9.051   (SE +/- 0.043, N = 3; min 9.00 / max 9.14)

  1. (CXX) g++ options: -pipe -O3 -ffast-math -march=native -pthread -lX11 -lIlmImf -lImath -lHalf -lIex -lIexMath -lIlmThread -lpthread -ltiff -ljpeg -lpng -lz -lrt -lm -lboost_thread -lboost_system
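The i9 runs here are strikingly noisy (the Optimized run reports an SE of 10.039 s against a 53.105 s mean), so the i9 POV-Ray comparison should be read with caution. One quick way to flag such results is the relative standard error; a sketch using the (mean, SE) pairs quoted above:

```python
# (mean seconds, standard error) pairs as reported above.
runs = {
    "i9 10980XE Default":   (36.454, 1.992),
    "i9 10980XE Optimized": (53.105, 10.039),
    "TR 3990X Default":     (9.050, 0.065),
    "TR 3990X Optimized":   (9.051, 0.043),
}

for name, (mean, se) in runs.items():
    rse = 100.0 * se / mean  # relative standard error, in percent
    flag = "noisy" if rse > 5.0 else "stable"
    print(f"{name}: RSE {rse:.1f}% ({flag})")
```

Both i9 runs exceed a 5% RSE threshold while both Threadripper runs sit well under 1%.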

PlaidML

This test profile uses the PlaidML deep learning framework developed by Intel to offer up various benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML - FP16: No - Mode: Inference - Network: VGG19 - Device: CPU
FPS, more is better (OpenBenchmarking.org)

  i9 10980XE, Default:    22.29  (SE +/- 0.01, N = 3; min 22.27 / max 22.31)
  i9 10980XE, Optimized:  22.55  (SE +/- 0.11, N = 3; min 22.40 / max 22.76)
  TR 3990X, Default:      31.47  (SE +/- 0.29, N = 3; min 31.14 / max 32.05)
  TR 3990X, Optimized:    31.96  (SE +/- 0.29, N = 3; min 31.53 / max 32.51)

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.7.0 - Video Input: Chimera 1080p 10-bit
FPS, more is better (OpenBenchmarking.org)

  i9 10980XE, Default:    96.92   (SE +/- 0.03, N = 3; min 96.88 / max 96.99)
  i9 10980XE, Optimized:  121.94  (SE +/- 0.12, N = 3; min 121.72 / max 122.13)
  TR 3990X, Default:      169.37  (SE +/- 0.19, N = 3; min 169.01 / max 169.68)
  TR 3990X, Optimized:    206.68  (SE +/- 0.53, N = 3; min 205.62 / max 207.23)

  1. (CC) gcc options: -pthread
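This is one of the clearest wins for the Skylake-tuned compiler on both CPUs. The benefit can be quantified as a percent throughput gain; a sketch using the dav1d FPS values quoted above:

```python
def pct_speedup(default_fps, optimized_fps):
    """Percent throughput gain of the optimized build over the default build."""
    return 100.0 * (optimized_fps / default_fps - 1.0)

# FPS values as reported above (higher is better).
print(f"i9 10980XE: +{pct_speedup(96.92, 121.94):.1f}%")
print(f"TR 3990X:  +{pct_speedup(169.37, 206.68):.1f}%")
```

This works out to roughly a quarter faster decode on the i9 and over a fifth faster on the Threadripper.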

GROMACS

This is a test of the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package on the CPU with the water_GMX50 data. Learn more via the OpenBenchmarking.org test page.

GROMACS 2020.1 - Water Benchmark
Ns Per Day, more is better (OpenBenchmarking.org)

  i9 10980XE, Default:    1.586  (SE +/- 0.002, N = 3; min 1.58 / max 1.59)
  i9 10980XE, Optimized:  1.586  (SE +/- 0.004, N = 3; min 1.58 / max 1.59)
  TR 3990X, Default:      3.890  (SE +/- 0.066, N = 3; min 3.82 / max 4.02)
  TR 3990X, Optimized:    3.853  (SE +/- 0.054, N = 4; min 3.79 / max 4.01)

  1. (CXX) g++ options: -O3 -pthread -lrt -lpthread -lm

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial timeseries data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

Numenta Anomaly Benchmark 1.1 - Detector: Earthgecko Skyline
Seconds, fewer is better (OpenBenchmarking.org)

  i9 10980XE, Default:    91.37  (SE +/- 0.17, N = 3; min 91.08 / max 91.67)
  i9 10980XE, Optimized:  92.44  (SE +/- 0.13, N = 3; min 92.18 / max 92.59)
  TR 3990X, Default:      70.96  (SE +/- 0.25, N = 3; min 70.66 / max 71.45)
  TR 3990X, Optimized:    71.62  (SE +/- 0.34, N = 3; min 70.94 / max 72.05)

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP LavaMD
Seconds, fewer is better (OpenBenchmarking.org)

  i9 10980XE, Default:    123.87  (SE +/- 0.38, N = 3; min 123.30 / max 124.60)
  i9 10980XE, Optimized:  114.20  (SE +/- 0.43, N = 3; min 113.41 / max 114.91)
  TR 3990X, Default:      43.74   (SE +/- 0.20, N = 3; min 43.34 / max 43.98)
  TR 3990X, Optimized:    41.05   (SE +/- 0.12, N = 3; min 40.82 / max 41.23)

  1. (CXX) g++ options: -O2 -lOpenCL

PlaidML

This test profile uses the PlaidML deep learning framework developed by Intel to offer up various benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML - FP16: No - Mode: Inference - Network: VGG16 - Device: CPU
FPS, more is better (OpenBenchmarking.org)

  i9 10980XE, Default:    26.85  (SE +/- 0.33, N = 3; min 26.22 / max 27.34)
  i9 10980XE, Optimized:  26.87  (SE +/- 0.11, N = 3; min 26.76 / max 27.09)
  TR 3990X, Default:      37.56  (SE +/- 0.08, N = 3; min 37.44 / max 37.70)
  TR 3990X, Optimized:    37.21  (SE +/- 0.10, N = 3; min 37.07 / max 37.39)

Montage Astronomical Image Mosaic Engine

Montage is an open-source astronomical image mosaic engine. This BSD-licensed astronomy software is developed by the California Institute of Technology, Pasadena. Learn more via the OpenBenchmarking.org test page.

Montage Astronomical Image Mosaic Engine 6.0 - Mosaic of M17, K band, 1.5 deg x 1.5 deg
Seconds, fewer is better (OpenBenchmarking.org)

  i9 10980XE, Default:    73.17  (SE +/- 0.05, N = 3; min 73.09 / max 73.25)
  i9 10980XE, Optimized:  71.30  (SE +/- 0.02, N = 3; min 71.25 / max 71.33)
  TR 3990X, Default:      74.20  (SE +/- 0.13, N = 3; min 74.03 / max 74.44)
  TR 3990X, Optimized:    74.17  (SE +/- 0.19, N = 3; min 73.81 / max 74.47)

  1. (CC) gcc options: -std=gnu99 -lcfitsio -lm -O2

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: raytrace (Milliseconds, Fewer Is Better)
  i9 10980XE / Default:   384
  i9 10980XE / Optimized: 386 (SE +/- 0.58, N = 3; min 385 / max 387)
  TR 3990X / Default:     445 (SE +/- 0.88, N = 3; min 443 / max 446)
  TR 3990X / Optimized:   444 (SE +/- 0.33, N = 3; min 444 / max 445)

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and offers a selection of the different NPB tests/problems at varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: EP.D (Total Mop/s, More Is Better)
  i9 10980XE / Default:   2458.19 (SE +/- 14.22, N = 3; min 2432.02 / max 2480.93)
  i9 10980XE / Optimized: 2446.34 (SE +/- 25.58, N = 8; min 2357.02 / max 2507.11)
  TR 3990X / Default:     4999.43 (SE +/- 7.97, N = 3; min 4987.57 / max 5014.59)
  TR 3990X / Optimized:   5027.83 (SE +/- 9.51, N = 3; min 5010.97 / max 5043.89)
  1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
  2. Open MPI 4.0.3

John The Ripper

This is a benchmark of John The Ripper, which is a password cracker. Learn more via the OpenBenchmarking.org test page.

John The Ripper 1.9.0-jumbo-1 - Test: MD5 (Real C/S, More Is Better)
  i9 10980XE / Default:   3087000 (SE +/- 13453.62, N = 3; min 3061000 / max 3106000)
  i9 10980XE / Optimized: 3086333 (SE +/- 5666.67, N = 3; min 3075000 / max 3092000)
  TR 3990X / Default:     5125333 (SE +/- 28386.23, N = 3; min 5094000 / max 5182000)
  TR 3990X / Optimized:   5113000 (SE +/- 28827.07, N = 3; min 5077000 / max 5170000)
  1. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -pthread -lm -lz -ldl -lcrypt
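For a sense of scale, a plain single-threaded hashing loop shows how a candidates-per-second figure of this kind is derived. This sketch uses Python's hashlib rather than John The Ripper's optimized OpenMP kernels, and the function name and loop count are illustrative, so the rate is not comparable to the table above:

```python
import hashlib
import time

def md5_rate(n=100_000):
    """Estimate raw MD5 digests per second for short candidate strings."""
    start = time.perf_counter()
    for i in range(n):
        hashlib.md5(b"candidate-%d" % i).hexdigest()
    elapsed = time.perf_counter() - start
    return n / elapsed  # hashes per second

rate = md5_rate()
```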

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.

SQLite Speedtest 3.30 - Timed Time - Size 1,000 (Seconds, Fewer Is Better)
  i9 10980XE / Default:   55.53 (SE +/- 0.24, N = 3; min 55.13 / max 55.95)
  i9 10980XE / Optimized: 56.26 (SE +/- 0.04, N = 3; min 56.2 / max 56.35)
  TR 3990X / Default:     63.87 (SE +/- 0.22, N = 3; min 63.62 / max 64.31)
  TR 3990X / Optimized:   65.08 (SE +/- 0.25, N = 3; min 64.6 / max 65.4)
  1. (CC) gcc options: -O2 -ldl -lz -lpthread
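speedtest1 itself is a C program, but the shape of such a database benchmark - build a schema, then time a batch of operations - can be sketched with Python's built-in sqlite3 module. The table name and row count below are arbitrary, not taken from speedtest1:

```python
import sqlite3
import time

def timed_inserts(n=10_000):
    """Time inserting n rows into an in-memory SQLite database."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
    start = time.perf_counter()
    with con:  # wrap the batch in a single transaction
        con.executemany("INSERT INTO t (val) VALUES (?)",
                        (("row-%d" % i,) for i in range(n)))
    elapsed = time.perf_counter() - start
    count = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
    con.close()
    return count, elapsed

count, elapsed = timed_inserts()
```

Batching the inserts into one transaction matters: committing per row would be dominated by journal overhead rather than the work being measured.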

Stockfish

This is a test of Stockfish, an advanced open-source C++11 chess engine that can scale up to 128 CPU cores. Learn more via the OpenBenchmarking.org test page.

Stockfish 9 - Total Time (Nodes Per Second, More Is Better)
  i9 10980XE / Default:   49226207 (SE +/- 124471.83, N = 3; min 49025567 / max 49454149)
  i9 10980XE / Optimized: 48873299 (SE +/- 116928.17, N = 3; min 48650740 / max 49046765)
  TR 3990X / Default:     154146652 (SE +/- 860075.69, N = 3; min 152589676 / max 155558433)
  TR 3990X / Optimized:   153428568 (SE +/- 793489.01, N = 3; min 151843910 / max 154295180)
  1. (CXX) g++ options: -m64 -lpthread -fno-exceptions -std=c++11 -pedantic -O3 -msse -msse3 -mpopcnt -flto

PyPerformance

PyPerformance 1.0.0 - Benchmark: 2to3 (Milliseconds, Fewer Is Better)
  i9 10980XE / Default:   285 (SE +/- 0.33, N = 3; min 284 / max 285)
  i9 10980XE / Optimized: 285 (SE +/- 0.58, N = 3; min 284 / max 286)
  TR 3990X / Default:     299
  TR 3990X / Optimized:   299

NAS Parallel Benchmarks

NAS Parallel Benchmarks 3.4 - Test / Class: BT.C (Total Mop/s, More Is Better)
  i9 10980XE / Default:   43782.84 (SE +/- 5.06, N = 3; min 43772.78 / max 43788.74)
  i9 10980XE / Optimized: 43185.05 (SE +/- 2.04, N = 3; min 43181.07 / max 43187.81)
  TR 3990X / Default:     67739.78 (SE +/- 25.73, N = 3; min 67688.33 / max 67766.4)
  TR 3990X / Optimized:   67707.34 (SE +/- 21.90, N = 3; min 67667.33 / max 67742.8)
  1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
  2. Open MPI 4.0.3

ACES DGEMM

This is a multi-threaded DGEMM benchmark. Learn more via the OpenBenchmarking.org test page.

ACES DGEMM 1.0 - Sustained Floating-Point Rate (GFLOP/s, More Is Better)
  i9 10980XE / Default:   7.086807 (SE +/- 0.095264, N = 4; min 6.82 / max 7.25)
  i9 10980XE / Optimized: 7.123850 (SE +/- 0.087891, N = 3; min 6.96 / max 7.26)
  TR 3990X / Default:     17.173516 (SE +/- 0.235301, N = 15; min 15.76 / max 18.38)
  TR 3990X / Optimized:   17.425823 (SE +/- 0.146005, N = 15; min 16.15 / max 18.25)
  1. (CC) gcc options: -O3 -march=native -fopenmp
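DGEMM throughput is reported as GFLOP/s, counting 2*n^3 floating-point operations for an n x n matrix multiply. A minimal pure-Python sketch of that accounting follows; it is orders of magnitude slower than the tuned C kernel, and the function name and matrix size are hypothetical:

```python
import random
import time

def matmul_gflops(n=80):
    """Naive n x n matrix multiply; returns achieved GFLOP/s (2*n**3 flops)."""
    a = [[random.random() for _ in range(n)] for _ in range(n)]
    b = [[random.random() for _ in range(n)] for _ in range(n)]
    start = time.perf_counter()
    c = [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
         for i in range(n)]
    elapsed = time.perf_counter() - start
    assert len(c) == n  # keep the result live
    return 2 * n ** 3 / elapsed / 1e9

gflops = matmul_gflops()
```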

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP Leukocyte (Seconds, Fewer Is Better)
  i9 10980XE / Default:   59.25 (SE +/- 0.57, N = 3; min 58.43 / max 60.34)
  i9 10980XE / Optimized: 64.42 (SE +/- 0.56, N = 3; min 63.85 / max 65.53)
  TR 3990X / Default:     39.86 (SE +/- 0.23, N = 3; min 39.45 / max 40.23)
  TR 3990X / Optimized:   39.75 (SE +/- 0.51, N = 3; min 38.73 / max 40.32)
  1. (CXX) g++ options: -O2 -lOpenCL

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.5 - Compression Level: 19 (MB/s, More Is Better)
  i9 10980XE / Default:   60.2 (SE +/- 0.07, N = 3; min 60.1 / max 60.3)
  i9 10980XE / Optimized: 60.1 (SE +/- 0.07, N = 3; min 60 / max 60.2)
  TR 3990X / Default:     82.2 (SE +/- 0.03, N = 3; min 82.1 / max 82.2)
  TR 3990X / Optimized:   82.4 (SE +/- 0.03, N = 3; min 82.4 / max 82.5)
  1. (CC) gcc options: -O3 -pthread -lz -llzma
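Compression results here are throughput (input bytes per second), not ratio. The measurement can be sketched from Python with the standard library's zlib as a stand-in, since zstd itself is not in the standard library; the payload and function name below are illustrative:

```python
import time
import zlib

def compress_mbs(data, level=9):
    """Return (MB/s achieved, compressed size) for zlib at the given level."""
    start = time.perf_counter()
    out = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    return len(data) / elapsed / 1e6, len(out)

payload = b"phoronix-test-suite sample data " * 300_000  # ~9.6 MB, compressible
rate_mbs, compressed_size = compress_mbs(payload)
```

As with zstd's levels, a higher zlib level trades throughput for a smaller output, which is why level 3 and level 19 are benchmarked separately above and below.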

Hugin

Hugin is an open-source, cross-platform panorama photo stitcher software package. This test profile times how long it takes to run the assistant and panorama photo stitching on a set of images. Learn more via the OpenBenchmarking.org test page.

Hugin - Panorama Photo Assistant + Stitching Time (Seconds, Fewer Is Better)
  i9 10980XE / Default:   45.82 (SE +/- 0.46, N = 3; min 44.96 / max 46.55)
  i9 10980XE / Optimized: 45.92 (SE +/- 0.06, N = 3; min 45.85 / max 46.03)
  TR 3990X / Default:     45.49 (SE +/- 0.34, N = 3; min 45.01 / max 46.16)
  TR 3990X / Optimized:   46.02 (SE +/- 0.48, N = 3; min 45.09 / max 46.71)

PyPerformance

PyPerformance 1.0.0 - Benchmark: go (Milliseconds, Fewer Is Better)
  i9 10980XE / Default:   199 (SE +/- 0.33, N = 3; min 198 / max 199)
  i9 10980XE / Optimized: 201
  TR 3990X / Default:     235 (SE +/- 0.33, N = 3; min 234 / max 235)
  TR 3990X / Optimized:   235

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 5.0.5 - Test: GET (Requests Per Second, More Is Better)
  i9 10980XE / Default:   2869432.50 (SE +/- 44573.75, N = 3; min 2824858.75 / max 2958580)
  i9 10980XE / Optimized: 2868487.13 (SE +/- 31285.26, N = 15; min 2652519.75 / max 3105590)
  TR 3990X / Default:     2704398.12 (SE +/- 54993.70, N = 15; min 2237136.5 / max 2932551.5)
  TR 3990X / Optimized:   2774720.92 (SE +/- 50011.51, N = 15; min 2222222.25 / max 2932551.5)
  1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Cython benchmark

A stress benchmark measuring the time consumed by Cython-compiled code. Learn more via the OpenBenchmarking.org test page.

Cython benchmark 0.27 (Seconds, Fewer Is Better)
  i9 10980XE / Default:   38.26 (SE +/- 0.07, N = 3; min 38.11 / max 38.34)
  i9 10980XE / Optimized: 39.71 (SE +/- 0.10, N = 3; min 39.59 / max 39.91)
  TR 3990X / Default:     40.69 (SE +/- 0.07, N = 3; min 40.57 / max 40.81)
  TR 3990X / Optimized:   40.76 (SE +/- 0.31, N = 3; min 40.42 / max 41.38)

NAS Parallel Benchmarks

NAS Parallel Benchmarks 3.4 - Test / Class: LU.C (Total Mop/s, More Is Better)
  i9 10980XE / Default:   44944.57 (SE +/- 19.61, N = 3; min 44918.21 / max 44982.91)
  i9 10980XE / Optimized: 44872.57 (SE +/- 33.86, N = 3; min 44808.16 / max 44922.88)
  TR 3990X / Default:     65289.01 (SE +/- 52.68, N = 3; min 65191.21 / max 65371.85)
  TR 3990X / Optimized:   65560.84 (SE +/- 7.16, N = 3; min 65549.24 / max 65573.91)
  1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
  2. Open MPI 4.0.3

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial timeseries data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

Numenta Anomaly Benchmark 1.1 - Detector: EXPoSE (Seconds, Fewer Is Better)
  i9 10980XE / Default:   45.52 (SE +/- 0.15, N = 3; min 45.22 / max 45.67)
  i9 10980XE / Optimized: 45.61 (SE +/- 0.25, N = 3; min 45.28 / max 46.09)
  TR 3990X / Default:     32.27 (SE +/- 0.06, N = 3; min 32.15 / max 32.33)
  TR 3990X / Optimized:   32.49 (SE +/- 0.09, N = 3; min 32.37 / max 32.66)

PyPerformance

PyPerformance 1.0.0 - Benchmark: django_template (Milliseconds, Fewer Is Better)
  i9 10980XE / Default:   38.3 (SE +/- 0.03, N = 3; min 38.2 / max 38.3)
  i9 10980XE / Optimized: 38.4 (SE +/- 0.03, N = 3; min 38.3 / max 38.4)
  TR 3990X / Default:     46.4 (SE +/- 0.09, N = 3; min 46.3 / max 46.6)
  TR 3990X / Optimized:   44.5 (SE +/- 0.00, N = 3; min 44.5 / max 44.5)

Redis

Redis 5.0.5 - Test: SET (Requests Per Second, More Is Better)
  i9 10980XE / Default:   2183993.27 (SE +/- 24384.60, N = 6; min 2087682.62 / max 2242152.5)
  i9 10980XE / Optimized: 2120745.13 (SE +/- 25394.12, N = 3; min 2083333.38 / max 2169197.5)
  TR 3990X / Default:     1926576.28 (SE +/- 26705.87, N = 15; min 1703577.5 / max 2044989.75)
  TR 3990X / Optimized:   1879274.37 (SE +/- 19531.99, N = 15; min 1751313.5 / max 1980198)
  1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0 - Encoder Mode: Speed 6 Realtime (Frames Per Second, More Is Better)
  i9 10980XE / Default:   18.35 (SE +/- 0.02, N = 3; min 18.32 / max 18.38)
  i9 10980XE / Optimized: 18.37 (SE +/- 0.09, N = 3; min 18.22 / max 18.52)
  TR 3990X / Default:     18.05 (SE +/- 0.10, N = 3; min 17.86 / max 18.16)
  TR 3990X / Optimized:   18.23 (SE +/- 0.05, N = 3; min 18.14 / max 18.32)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

PyPerformance

PyPerformance 1.0.0 - Benchmark: crypto_pyaes (Milliseconds, Fewer Is Better)
  i9 10980XE / Default:   86.6 (SE +/- 0.06, N = 3; min 86.5 / max 86.7)
  i9 10980XE / Optimized: 87.3 (SE +/- 0.06, N = 3; min 87.2 / max 87.4)
  TR 3990X / Default:     100.0 (SE +/- 0.03, N = 3; min 99.9 / max 100)
  TR 3990X / Optimized:   101.0

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

Coremark 1.0 - CoreMark Size 666 - Iterations Per Second (Iterations/Sec, More Is Better)
  i9 10980XE / Default:   634750.48 (SE +/- 3025.86, N = 3; min 629480.68 / max 639962.08)
  i9 10980XE / Optimized: 655919.15 (SE +/- 2257.25, N = 3; min 652331.48 / max 660086.18)
  TR 3990X / Default:     2332502.50 (SE +/- 16958.53, N = 3; min 2301330.46 / max 2359664.49)
  TR 3990X / Optimized:   2352886.81 (SE +/- 4570.17, N = 3; min 2347331.74 / max 2361950.45)
  1. (CC) gcc options: -O2 -lrt" -lrt

glibc bench

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. Learn more via the OpenBenchmarking.org test page.

glibc bench 1.0 - Benchmark: exp (nanoseconds, Fewer Is Better)
  i9 10980XE / Default:   4.70673 (SE +/- 0.00286, N = 3; min 4.7 / max 4.71)
  i9 10980XE / Optimized: 5.24720 (SE +/- 0.01567, N = 3; min 5.22 / max 5.27)
  TR 3990X / Default:     5.18962 (SE +/- 0.00032, N = 3; min 5.19 / max 5.19)
  TR 3990X / Optimized:   5.18552 (SE +/- 0.02078, N = 3; min 5.16 / max 5.23)

PyPerformance

PyPerformance 1.0.0 - Benchmark: regex_compile (Milliseconds, Fewer Is Better)
  i9 10980XE / Default:   136
  i9 10980XE / Optimized: 136
  TR 3990X / Default:     159
  TR 3990X / Optimized:   158

John The Ripper

John The Ripper 1.9.0-jumbo-1 - Test: Blowfish (Real C/S, More Is Better)
  i9 10980XE / Default:   31978 (SE +/- 39.01, N = 3; min 31924 / max 32054)
  i9 10980XE / Optimized: 31982 (SE +/- 36.56, N = 3; min 31935 / max 32054)
  TR 3990X / Default:     88783 (SE +/- 387.67, N = 3; min 88290 / max 89548)
  TR 3990X / Optimized:   88894 (SE +/- 396.97, N = 3; min 88367 / max 89672)
  1. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -pthread -lm -lz -ldl -lcrypt

PyPerformance

PyPerformance 1.0.0 - Benchmark: float (Milliseconds, Fewer Is Better)
  i9 10980XE / Default:   90.7 (SE +/- 0.07, N = 3; min 90.6 / max 90.8)
  i9 10980XE / Optimized: 91.1 (SE +/- 0.07, N = 3; min 91 / max 91.2)
  TR 3990X / Default:     109.0 (SE +/- 0.33, N = 3; min 108 / max 109)
  TR 3990X / Optimized:   101.0

PlaidML

PlaidML - FP16: No - Mode: Inference - Network: IMDB LSTM - Device: CPU (FPS, More Is Better)
  i9 10980XE / Default:   882.83 (SE +/- 2.23, N = 3; min 879.97 / max 887.22)
  i9 10980XE / Optimized: 890.29 (SE +/- 1.51, N = 3; min 888.02 / max 893.14)
  TR 3990X / Default:     821.02 (SE +/- 1.55, N = 3; min 817.95 / max 822.93)
  TR 3990X / Optimized:   819.42 (SE +/- 1.34, N = 3; min 817.71 / max 822.06)

AOM AV1

AOM AV1 2.0 - Encoder Mode: Speed 6 Two-Pass (Frames Per Second, More Is Better)
  i9 10980XE / Default:   3.65 (SE +/- 0.01, N = 3; min 3.64 / max 3.66)
  i9 10980XE / Optimized: 3.61 (SE +/- 0.01, N = 3; min 3.59 / max 3.63)
  TR 3990X / Default:     3.90 (SE +/- 0.00, N = 3; min 3.9 / max 3.9)
  TR 3990X / Optimized:   3.89 (SE +/- 0.00, N = 3; min 3.89 / max 3.89)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Zstd Compression

Zstd Compression 1.4.5 - Compression Level: 3 (MB/s, More Is Better)
  i9 10980XE / Default:   4751.1 (SE +/- 13.23, N = 3; min 4729.6 / max 4775.2)
  i9 10980XE / Optimized: 4814.1 (SE +/- 15.97, N = 3; min 4795.6 / max 4845.9)
  TR 3990X / Default:     7205.8 (SE +/- 44.91, N = 3; min 7150.8 / max 7294.8)
  TR 3990X / Optimized:   7163.3 (SE +/- 4.71, N = 3; min 7157.8 / max 7172.7)
  1. (CC) gcc options: -O3 -pthread -lz -llzma

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark 1.1 - Detector: Bayesian Changepoint (Seconds, Fewer Is Better)
  i9 10980XE / Default:   30.22 (SE +/- 0.09, N = 3; min 30.09 / max 30.39)
  i9 10980XE / Optimized: 30.10 (SE +/- 0.16, N = 3; min 29.91 / max 30.41)
  TR 3990X / Default:     25.77 (SE +/- 0.26, N = 3; min 25.33 / max 26.24)
  TR 3990X / Optimized:   25.86 (SE +/- 0.21, N = 3; min 25.62 / max 26.28)

PyPerformance

PyPerformance 1.0.0 - Benchmark: chaos (Milliseconds, Fewer Is Better)
  i9 10980XE / Default:   87.5 (SE +/- 0.03, N = 3; min 87.4 / max 87.5)
  i9 10980XE / Optimized: 87.7 (SE +/- 0.03, N = 3; min 87.7 / max 87.8)
  TR 3990X / Default:     104.0
  TR 3990X / Optimized:   105.0

PyPerformance 1.0.0 - Benchmark: pathlib (Milliseconds, Fewer Is Better)
  i9 10980XE / Default:   15.3 (SE +/- 0.00, N = 3; min 15.3 / max 15.3)
  i9 10980XE / Optimized: 15.1 (SE +/- 0.00, N = 3; min 15.1 / max 15.1)
  TR 3990X / Default:     16.4 (SE +/- 0.03, N = 3; min 16.4 / max 16.5)
  TR 3990X / Optimized:   16.7 (SE +/- 0.03, N = 3; min 16.6 / max 16.7)

Mlpack Benchmark

Mlpack provides benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

Mlpack Benchmark - Benchmark: scikit_svm (Seconds, Fewer Is Better)
  i9 10980XE / Default:   25.15 (SE +/- 0.02, N = 3; min 25.1 / max 25.19)
  i9 10980XE / Optimized: 25.17 (SE +/- 0.04, N = 3; min 25.13 / max 25.24)
  TR 3990X / Default:     20.68 (SE +/- 0.05, N = 3; min 20.61 / max 20.78)
  TR 3990X / Optimized:   20.63 (SE +/- 0.01, N = 3; min 20.62 / max 20.64)

PyPerformance

PyPerformance 1.0.0 - Benchmark: pickle_pure_python (Milliseconds, Fewer Is Better)
  i9 10980XE / Default:   353 (SE +/- 1.20, N = 3; min 351 / max 355)
  i9 10980XE / Optimized: 349
  TR 3990X / Default:     434 (SE +/- 1.45, N = 3; min 432 / max 437)
  TR 3990X / Optimized:   414

Darmstadt Automotive Parallel Heterogeneous Suite

DAPHNE is the Darmstadt Automotive Parallel HeterogeNEous benchmark suite, with OpenCL / CUDA / OpenMP test cases for evaluating programming models in the context of vehicle autonomous driving capabilities. Learn more via the OpenBenchmarking.org test page.

Darmstadt Automotive Parallel Heterogeneous Suite - Backend: OpenMP - Kernel: NDT Mapping (Test Cases Per Minute, More Is Better)
  i9 10980XE / Default:   893.87 (SE +/- 2.36, N = 3; min 889.22 / max 896.88)
  i9 10980XE / Optimized: 888.06 (SE +/- 3.65, N = 3; min 880.87 / max 892.78)
  TR 3990X / Default:     968.12 (SE +/- 2.68, N = 3; min 965.01 / max 973.45)
  TR 3990X / Optimized:   915.04 (SE +/- 4.60, N = 3; min 909.19 / max 924.12)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp

PyPerformance

PyPerformance 1.0.0 - Benchmark: json_loads (Milliseconds, Fewer Is Better)
  i9 10980XE / Default:   20.2 (SE +/- 0.03, N = 3; min 20.1 / max 20.2)
  i9 10980XE / Optimized: 20.5 (SE +/- 0.03, N = 3; min 20.4 / max 20.5)
  TR 3990X / Default:     22.8 (SE +/- 0.00, N = 3; min 22.8 / max 22.8)
  TR 3990X / Optimized:   22.6 (SE +/- 0.00, N = 3; min 22.6 / max 22.6)

PyBench

This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate as to Python's average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.

PyBench 2018-02-16 - Total For Average Test Times (Milliseconds, Fewer Is Better)
  i9 10980XE / Default:   887 (SE +/- 1.00, N = 3; min 885 / max 888)
  i9 10980XE / Optimized: 914 (SE +/- 0.67, N = 3; min 913 / max 915)
  TR 3990X / Default:     931 (SE +/- 2.65, N = 3; min 926 / max 935)
  TR 3990X / Optimized:   930 (SE +/- 2.19, N = 3; min 927 / max 934)

AOM AV1

AOM AV1 2.0 - Encoder Mode: Speed 4 Two-Pass (Frames Per Second, More Is Better)
  i9 10980XE / Default:   2.30 (SE +/- 0.00, N = 3; min 2.29 / max 2.3)
  i9 10980XE / Optimized: 2.28 (SE +/- 0.00, N = 3; min 2.28 / max 2.29)
  TR 3990X / Default:     2.53 (SE +/- 0.00, N = 3; min 2.53 / max 2.53)
  TR 3990X / Optimized:   2.51 (SE +/- 0.00, N = 3; min 2.51 / max 2.51)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

glibc bench

glibc bench 1.0 - Benchmark: ffs (nanoseconds, Fewer Is Better)
  i9 10980XE / Default:   1.66137 (SE +/- 0.02519, N = 15; min 1.6 / max 1.88)
  i9 10980XE / Optimized: 1.39875 (SE +/- 0.00107, N = 3; min 1.4 / max 1.4)
  TR 3990X / Default:     2.02383 (SE +/- 0.00032, N = 3; min 2.02 / max 2.02)
  TR 3990X / Optimized:   1.79262 (SE +/- 0.00136, N = 3; min 1.79 / max 1.79)

PyPerformance

PyPerformance 1.0.0 - Benchmark: nbody (Milliseconds, Fewer Is Better)
  i9 10980XE / Default:   103
  i9 10980XE / Optimized: 105 (SE +/- 0.33, N = 3; min 104 / max 105)
  TR 3990X / Default:     104
  TR 3990X / Optimized:   105

glibc bench

glibc bench 1.0 - Benchmark: sincos (nanoseconds, Fewer Is Better)
  i9 10980XE / Default:   12.85 (SE +/- 0.00, N = 3; min 12.84 / max 12.85)
  i9 10980XE / Optimized: 10.17 (SE +/- 0.00, N = 3; min 10.17 / max 10.18)
  TR 3990X / Default:     12.37 (SE +/- 0.01, N = 3; min 12.36 / max 12.37)
  TR 3990X / Optimized:   11.65 (SE +/- 0.17, N = 3; min 11.35 / max 11.96)

glibc bench 1.0 - Benchmark: cos (nanoseconds, Fewer Is Better)
  i9 10980XE / Default:   40.13 (SE +/- 0.00, N = 3; min 40.13 / max 40.13)
  i9 10980XE / Optimized: 39.68 (SE +/- 0.01, N = 3; min 39.67 / max 39.69)
  TR 3990X / Default:     42.92 (SE +/- 0.01, N = 3; min 42.9 / max 42.94)
  TR 3990X / Optimized:   42.95 (SE +/- 0.28, N = 3; min 42.39 / max 43.26)

glibc bench 1.0 - Benchmark: sin (nanoseconds, Fewer Is Better)
  i9 10980XE / Default:   39.73 (SE +/- 0.00, N = 3; min 39.73 / max 39.74)
  i9 10980XE / Optimized: 39.26 (SE +/- 0.00, N = 3; min 39.25 / max 39.27)
  TR 3990X / Default:     42.56 (SE +/- 0.01, N = 3; min 42.54 / max 42.57)
  TR 3990X / Optimized:   42.46 (SE +/- 0.33, N = 3; min 41.95 / max 43.07)
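glibc bench reports nanoseconds per libm call. The same per-call accounting can be sketched from Python with timeit; the numbers include interpreter overhead, so they are not comparable to the glibc figures above, and the function name below is illustrative:

```python
import timeit

def ns_per_call(stmt, setup="import math", number=200_000):
    """Rough nanoseconds per evaluation of stmt, averaged over `number` calls."""
    total = timeit.timeit(stmt, setup=setup, number=number)
    return total / number * 1e9

sin_ns = ns_per_call("math.sin(0.5)")
cos_ns = ns_per_call("math.cos(0.5)")
```

Averaging over many calls is the same trick the glibc microbenchmark uses: a single call is far below timer resolution, so only a long loop gives a stable per-call figure.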

NAS Parallel Benchmarks

NAS Parallel Benchmarks 3.4 - Test / Class: SP.B (Total Mop/s, More Is Better)
  i9 10980XE / Default:   11958.61 (SE +/- 14.36, N = 3; min 11933.42 / max 11983.14)
  i9 10980XE / Optimized: 11842.90 (SE +/- 14.66, N = 3; min 11819.92 / max 11870.17)
  TR 3990X / Default:     47235.72 (SE +/- 25.58, N = 3; min 47193.75 / max 47282.04)
  TR 3990X / Optimized:   47137.99 (SE +/- 317.18, N = 3; min 46535.24 / max 47610.65)
  1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
  2. Open MPI 4.0.3

glibc bench

The GNU C Library project provides the core libraries for the GNU system and GNU/Linux systems, as well as many other systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. Learn more via the OpenBenchmarking.org test page.
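The glibc bench results in this file are reported as nanoseconds per call to a single libc routine. The real harness is C code in the glibc tree; a minimal Python analogue of the per-call timing idea (math.cos and the iteration count here are illustrative assumptions, not the harness's actual parameters):

```python
import math
import time

def ns_per_call(fn, arg, iterations=1_000_000):
    """Call fn(arg) in a tight loop and return the mean nanoseconds per call."""
    start = time.perf_counter_ns()
    for _ in range(iterations):
        fn(arg)
    return (time.perf_counter_ns() - start) / iterations

print(f"cos: {ns_per_call(math.cos, 0.5):.2f} ns per call")
```

The absolute numbers from this sketch include interpreter and loop overhead, so only the structure, not the values, mirrors the C harness.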

glibc bench 1.0 - Benchmark: sqrt (nanoseconds, fewer is better)

  i9 10980XE / Default     1.61097   SE +/- 0.00215, N = 3    (min 1.61, max 1.61)
  i9 10980XE / Optimized   1.81555   SE +/- 0.00097, N = 3    (min 1.81, max 1.82)
  TR 3990X / Default       2.25125   SE +/- 0.00090, N = 3    (min 2.25, max 2.25)
  TR 3990X / Optimized     2.27632   SE +/- 0.01708, N = 13   (min 2.25, max 2.48)

NAS Parallel Benchmarks


NAS Parallel Benchmarks 3.4 - Test / Class: FT.C (Total Mop/s, more is better)

  i9 10980XE / Default     19696.82   SE +/- 34.58, N = 3   (min 19627.76, max 19734.52)
  i9 10980XE / Optimized   19590.93   SE +/- 17.57, N = 3   (min 19555.9, max 19610.82)
  TR 3990X / Default       28563.02   SE +/- 7.08, N = 3    (min 28549.11, max 28572.23)
  TR 3990X / Optimized     28564.59   SE +/- 13.18, N = 3   (min 28538.33, max 28579.63)

  1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi  2. Open MPI 4.0.3

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0 - Encoder Mode: Speed 8 Realtime (Frames Per Second, more is better)

  i9 10980XE / Default     32.97   SE +/- 0.10, N = 3   (min 32.79, max 33.15)
  i9 10980XE / Optimized   33.52   SE +/- 0.11, N = 3   (min 33.34, max 33.71)
  TR 3990X / Default       34.50   SE +/- 0.05, N = 3   (min 34.4, max 34.55)
  TR 3990X / Optimized     35.62   SE +/- 0.07, N = 3   (min 35.51, max 35.74)

  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Darmstadt Automotive Parallel Heterogeneous Suite

DAPHNE is the Darmstadt Automotive Parallel HeterogeNEous Benchmark Suite, providing OpenCL / CUDA / OpenMP test cases for automotive workloads and for evaluating programming models in the context of autonomous driving capabilities. Learn more via the OpenBenchmarking.org test page.

Darmstadt Automotive Parallel Heterogeneous Suite - Backend: OpenMP - Kernel: Euclidean Cluster (Test Cases Per Minute, more is better)

  i9 10980XE / Default     1336.47   SE +/- 1.88, N = 3   (min 1334.2, max 1340.2)
  i9 10980XE / Optimized   1345.70   SE +/- 0.58, N = 3   (min 1344.69, max 1346.7)
  TR 3990X / Default       1183.68   SE +/- 3.08, N = 3   (min 1177.69, max 1187.92)
  TR 3990X / Optimized     1168.86   SE +/- 3.50, N = 3   (min 1164.7, max 1175.81)

  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp

OCRMyPDF

OCRMyPDF adds an optical character recognition (OCR) text layer to scanned PDF files, producing new PDFs with text that is selectable/searchable/copy-paste capable. OCRMyPDF leverages the Tesseract OCR engine and is written in Python. Learn more via the OpenBenchmarking.org test page.

OCRMyPDF 9.6.0+dfsg - Processing 60 Page PDF Document (Seconds, fewer is better)

  i9 10980XE / Default     19.27   SE +/- 0.14, N = 3   (min 19.12, max 19.56)
  i9 10980XE / Optimized   19.22   SE +/- 0.12, N = 3   (min 19.02, max 19.44)
  TR 3990X / Default       14.82   SE +/- 0.02, N = 3   (min 14.79, max 14.85)
  TR 3990X / Optimized     14.87   SE +/- 0.03, N = 3   (min 14.81, max 14.92)

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.7.0 - Video Input: Chimera 1080p (FPS, more is better)

  i9 10980XE / Default     618.44   SE +/- 0.68, N = 3   (min 617.1, max 619.31)
  i9 10980XE / Optimized   617.33   SE +/- 0.40, N = 3   (min 616.74, max 618.09)
  TR 3990X / Default       882.23   SE +/- 2.94, N = 3   (min 876.9, max 887.04)
  TR 3990X / Optimized     857.09   SE +/- 2.15, N = 3   (min 854.34, max 861.34)

  1. (CC) gcc options: -pthread

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8 - Encoder Mode: Enc Mode 4 - Input: 1080p (Frames Per Second, more is better)

  i9 10980XE / Default     5.550   SE +/- 0.023, N = 3   (min 5.51, max 5.59)
  i9 10980XE / Optimized   5.567   SE +/- 0.034, N = 3   (min 5.5, max 5.61)
  TR 3990X / Default       9.598   SE +/- 0.062, N = 3   (min 9.51, max 9.72)
  TR 3990X / Optimized     9.876   SE +/- 0.028, N = 3   (min 9.83, max 9.93)

  1. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial timeseries data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.
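One of the detectors timed in this file is NAB's "Windowed Gaussian". As a rough sketch of that family of detector (not NAB's actual implementation; the window size and scoring here are illustrative), each new value can be scored by its z-score against a sliding window of recent values:

```python
import statistics
from collections import deque

def anomaly_scores(stream, window=5):
    """Score each value by its z-score against the preceding window of values."""
    history = deque(maxlen=window)
    scores = []
    for value in stream:
        if len(history) >= 2:
            mean = statistics.mean(history)
            stdev = statistics.stdev(history) or 1e-9  # guard against zero spread
            scores.append(abs(value - mean) / stdev)
        else:
            scores.append(0.0)  # not enough history to score yet
        history.append(value)
    return scores

data = [10, 11, 10, 11, 10, 50, 11, 10]
scores = anomaly_scores(data)
print(scores.index(max(scores)))  # -> 5, the injected spike
```

A value far outside its recent window gets a large score, which a threshold then turns into an anomaly flag; NAB's scoring additionally rewards early detection within labeled anomaly windows.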

Numenta Anomaly Benchmark 1.1 - Detector: Relative Entropy (Seconds, fewer is better)

  i9 10980XE / Default     14.62   SE +/- 0.16, N = 3   (min 14.33, max 14.9)
  i9 10980XE / Optimized   14.63   SE +/- 0.17, N = 3   (min 14.35, max 14.93)
  TR 3990X / Default       12.96   SE +/- 0.07, N = 3   (min 12.84, max 13.07)
  TR 3990X / Optimized     13.17   SE +/- 0.15, N = 3   (min 12.91, max 13.41)

Rodinia

Rodinia is a benchmark suite focused on accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile currently utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP Streamcluster (Seconds, fewer is better)

  i9 10980XE / Default     14.537   SE +/- 0.045, N = 3   (min 14.45, max 14.58)
  i9 10980XE / Optimized   14.610   SE +/- 0.188, N = 5   (min 14.32, max 15.35)
  TR 3990X / Default       8.119    SE +/- 0.020, N = 3   (min 8.08, max 8.14)
  TR 3990X / Optimized     8.019    SE +/- 0.032, N = 3   (min 7.96, max 8.07)

  1. (CXX) g++ options: -O2 -lOpenCL

dav1d


dav1d 0.7.0 - Video Input: Summer Nature 4K (FPS, more is better)

  i9 10980XE / Default     231.13   SE +/- 0.44, N = 3   (min 230.62, max 232.01)
  i9 10980XE / Optimized   231.35   SE +/- 0.25, N = 3   (min 230.86, max 231.68)
  TR 3990X / Default       379.97   SE +/- 0.42, N = 3   (min 379.13, max 380.46)
  TR 3990X / Optimized     375.64   SE +/- 1.89, N = 3   (min 373.14, max 379.34)

  1. (CC) gcc options: -pthread

glibc bench


glibc bench 1.0 - Benchmark: asinh (nanoseconds, fewer is better)

  i9 10980XE / Default     8.78816   SE +/- 0.00370, N = 3   (min 8.78, max 8.8)
  i9 10980XE / Optimized   8.20017   SE +/- 0.00358, N = 3   (min 8.19, max 8.21)
  TR 3990X / Default       8.77616   SE +/- 0.07902, N = 3   (min 8.66, max 8.93)
  TR 3990X / Optimized     8.37798   SE +/- 0.03742, N = 3   (min 8.33, max 8.45)

glibc bench 1.0 - Benchmark: ffsll (nanoseconds, fewer is better)

  i9 10980XE / Default     1.43455   SE +/- 0.02230, N = 3   (min 1.4, max 1.47)
  i9 10980XE / Optimized   1.60466   SE +/- 0.00098, N = 3   (min 1.6, max 1.61)
  TR 3990X / Default       1.78978   SE +/- 0.00006, N = 3   (min 1.79, max 1.79)
  TR 3990X / Optimized     2.02441   SE +/- 0.00238, N = 3   (min 2.02, max 2.03)

glibc bench 1.0 - Benchmark: log2 (nanoseconds, fewer is better)

  i9 10980XE / Default     6.43599   SE +/- 0.00530, N = 3   (min 6.43, max 6.44)
  i9 10980XE / Optimized   4.62502   SE +/- 0.00035, N = 3   (min 4.62, max 4.63)
  TR 3990X / Default       5.95632   SE +/- 0.01640, N = 3   (min 5.94, max 5.99)
  TR 3990X / Optimized     4.05586   SE +/- 0.00364, N = 3   (min 4.05, max 4.06)

glibc bench 1.0 - Benchmark: pthread_once (nanoseconds, fewer is better)

  i9 10980XE / Default     1.40301   SE +/- 0.00026, N = 3   (min 1.4, max 1.4)
  i9 10980XE / Optimized   1.40141   SE +/- 0.00083, N = 3   (min 1.4, max 1.4)
  TR 3990X / Default       1.78641   SE +/- 0.00153, N = 3   (min 1.78, max 1.79)
  TR 3990X / Optimized     1.78953   SE +/- 0.00158, N = 3   (min 1.79, max 1.79)

glibc bench 1.0 - Benchmark: tanh (nanoseconds, fewer is better)

  i9 10980XE / Default     11.17810   SE +/- 0.00570, N = 3   (min 11.17, max 11.18)
  i9 10980XE / Optimized   8.77723    SE +/- 0.00427, N = 3   (min 8.77, max 8.78)
  TR 3990X / Default       10.77960   SE +/- 0.00468, N = 3   (min 10.77, max 10.78)
  TR 3990X / Optimized     10.90980   SE +/- 0.15465, N = 3   (min 10.68, max 11.2)

glibc bench 1.0 - Benchmark: modf (nanoseconds, fewer is better)

  i9 10980XE / Default     2.37070   SE +/- 0.01554, N = 3   (min 2.34, max 2.4)
  i9 10980XE / Optimized   1.62212   SE +/- 0.00113, N = 3   (min 1.62, max 1.62)
  TR 3990X / Default       2.02794   SE +/- 0.00054, N = 3   (min 2.03, max 2.03)
  TR 3990X / Optimized     2.02817   SE +/- 0.00065, N = 3   (min 2.03, max 2.03)

glibc bench 1.0 - Benchmark: sinh (nanoseconds, fewer is better)

  i9 10980XE / Default     7.75877   SE +/- 0.00603, N = 3   (min 7.75, max 7.77)
  i9 10980XE / Optimized   7.15653   SE +/- 0.06365, N = 3   (min 7.07, max 7.28)
  TR 3990X / Default       7.80609   SE +/- 0.00239, N = 3   (min 7.8, max 7.81)
  TR 3990X / Optimized     7.56152   SE +/- 0.00451, N = 3   (min 7.56, max 7.57)

glibc bench 1.0 - Benchmark: atanh (nanoseconds, fewer is better)

  i9 10980XE / Default     10.40840   SE +/- 0.00662, N = 3   (min 10.4, max 10.42)
  i9 10980XE / Optimized   9.30257    SE +/- 0.01352, N = 3   (min 9.28, max 9.32)
  TR 3990X / Default       10.27080   SE +/- 0.01297, N = 3   (min 10.25, max 10.29)
  TR 3990X / Optimized     9.92325    SE +/- 0.06812, N = 3   (min 9.79, max 10)

Rodinia


Rodinia 3.1 - Test: OpenMP CFD Solver (Seconds, fewer is better)

  i9 10980XE / Default     11.793   SE +/- 0.068, N = 3   (min 11.66, max 11.89)
  i9 10980XE / Optimized   10.964   SE +/- 0.045, N = 3   (min 10.87, max 11.01)
  TR 3990X / Default       6.705    SE +/- 0.095, N = 4   (min 6.53, max 6.95)
  TR 3990X / Optimized     6.637    SE +/- 0.060, N = 3   (min 6.52, max 6.72)

  1. (CXX) g++ options: -O2 -lOpenCL

NAS Parallel Benchmarks


NAS Parallel Benchmarks 3.4 - Test / Class: MG.C (Total Mop/s, more is better)

  i9 10980XE / Default     18004.26   SE +/- 22.45, N = 3   (min 17968.85, max 18045.88)
  i9 10980XE / Optimized   17814.09   SE +/- 11.37, N = 3   (min 17791.98, max 17829.74)
  TR 3990X / Default       26534.14   SE +/- 40.15, N = 3   (min 26469.3, max 26607.57)
  TR 3990X / Optimized     26530.56   SE +/- 10.23, N = 3   (min 26512.24, max 26547.59)

  1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi  2. Open MPI 4.0.3

SVT-AV1


SVT-AV1 0.8 - Encoder Mode: Enc Mode 8 - Input: 1080p (Frames Per Second, more is better)

  i9 10980XE / Default     44.44   SE +/- 0.19, N = 3   (min 44.08, max 44.72)
  i9 10980XE / Optimized   45.05   SE +/- 0.13, N = 3   (min 44.84, max 45.27)
  TR 3990X / Default       97.10   SE +/- 0.35, N = 3   (min 96.44, max 97.62)
  TR 3990X / Optimized     98.23   SE +/- 0.22, N = 3   (min 97.83, max 98.58)

  1. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

Numenta Anomaly Benchmark


Numenta Anomaly Benchmark 1.1 - Detector: Windowed Gaussian (Seconds, fewer is better)

  i9 10980XE / Default     7.911   SE +/- 0.057, N = 3   (min 7.81, max 8.01)
  i9 10980XE / Optimized   7.920   SE +/- 0.051, N = 3   (min 7.83, max 8.01)
  TR 3990X / Default       6.213   SE +/- 0.018, N = 3   (min 6.18, max 6.24)
  TR 3990X / Optimized     6.059   SE +/- 0.020, N = 3   (min 6.03, max 6.1)

dav1d


dav1d 0.7.0 - Video Input: Summer Nature 1080p (FPS, more is better)

  i9 10980XE / Default     552.05   SE +/- 1.53, N = 3   (min 550.18, max 555.07)
  i9 10980XE / Optimized   560.03   SE +/- 1.42, N = 3   (min 558.61, max 562.87)
  TR 3990X / Default       873.54   SE +/- 5.28, N = 3   (min 865.33, max 883.41)
  TR 3990X / Optimized     881.35   SE +/- 0.73, N = 3   (min 879.9, max 882.2)

  1. (CC) gcc options: -pthread

NAS Parallel Benchmarks


NAS Parallel Benchmarks 3.4 - Test / Class: EP.C (Total Mop/s, more is better)

  i9 10980XE / Default     2455.48   SE +/- 8.10, N = 3    (min 2444.46, max 2471.27)
  i9 10980XE / Optimized   2473.35   SE +/- 10.01, N = 3   (min 2457.36, max 2491.78)
  TR 3990X / Default       4818.15   SE +/- 21.84, N = 3   (min 4780.51, max 4856.16)
  TR 3990X / Optimized     4894.03   SE +/- 31.84, N = 3   (min 4845.67, max 4954.09)

  1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi  2. Open MPI 4.0.3

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.
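LAMMPS throughput is reported in ns/day: how many nanoseconds of simulated time the run advances per day of wall-clock time. As a sketch of that conversion with illustrative numbers (the timestep, step count, and wall time below are assumptions, not taken from this run):

```python
def ns_per_day(timestep_fs, steps, wall_seconds):
    """Simulated nanoseconds advanced per wall-clock day."""
    simulated_ns = timestep_fs * steps / 1e6   # femtoseconds -> nanoseconds
    return simulated_ns * 86400.0 / wall_seconds

# Illustrative: 2 fs timestep, 100,000 steps, finishing in 720 s of wall time
print(f"{ns_per_day(2.0, 100_000, 720.0):.2f} ns/day")  # -> 24.00 ns/day
```

Doubling the hardware throughput halves the wall time for the same trajectory, so ns/day scales directly with compute performance.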

LAMMPS Molecular Dynamics Simulator 9Jan2020 - Model: Rhodopsin Protein (ns/day, more is better)

  i9 10980XE / Default     11.66   SE +/- 0.04, N = 3   (min 11.61, max 11.74)
  i9 10980XE / Optimized   12.77   SE +/- 0.04, N = 3   (min 12.7, max 12.84)
  TR 3990X / Default       23.62   SE +/- 0.13, N = 3   (min 23.44, max 23.88)
  TR 3990X / Optimized     24.33   SE +/- 0.41, N = 3   (min 23.7, max 25.1)

  1. (CXX) g++ options: -O3 -rdynamic -ljpeg -lpng -lz -lfftw3 -lm

CloverLeaf

CloverLeaf is a Lagrangian-Eulerian hydrodynamics benchmark. This test profile currently makes use of CloverLeaf's OpenMP version and benchmarked with the clover_bm8192.in input file. Learn more via the OpenBenchmarking.org test page.

CloverLeaf - Lagrangian-Eulerian Hydrodynamics (Seconds, fewer is better)

  i9 10980XE / Default     2.57   SE +/- 0.01, N = 3   (min 2.56, max 2.58)
  i9 10980XE / Optimized   2.57   SE +/- 0.01, N = 3   (min 2.56, max 2.58)
  TR 3990X / Default       0.40   SE +/- 0.00, N = 3   (min 0.4, max 0.41)
  TR 3990X / Optimized     0.40   SE +/- 0.01, N = 3   (min 0.39, max 0.4)

  1. (F9X) gfortran options: -O3 -march=native -funroll-loops -fopenmp