AMD TR 2920X Linux

AMD Ryzen Threadripper 2920X 12-Core testing with a Gigabyte X399 AORUS Gaming 7 (F11e BIOS) and eVGA NVIDIA NV137 4GB on Ubuntu 18.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 1906246-PTS-AMDTR29220
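As a quick usage sketch (assuming the Phoronix Test Suite is already installed locally and the result ID above is still published on OpenBenchmarking.org), the comparison run is a single command; the suite will offer to install any missing test profiles before it starts benchmarking.

    # Fetch this public result file by its ID and benchmark the local system against it.
    phoronix-test-suite benchmark 1906246-PTS-AMDTR29220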
Run Management

Result Identifier: Threadripper 2920X
Date Run: June 23 2019
Test Duration: 6 Hours, 53 Minutes


AMD TR 2920X Linux - OpenBenchmarking.org - Phoronix Test Suite

Processor: AMD Ryzen Threadripper 2920X 12-Core @ 3.50GHz (12 Cores / 24 Threads)
Motherboard: Gigabyte X399 AORUS Gaming 7 (F11e BIOS)
Chipset: AMD 17h
Memory: 16384MB
Disk: 240GB Force MP510 + 120GB Force MP500
Graphics: eVGA NVIDIA NV137 4GB
Audio: Realtek ALC1220
Monitor: ASUS VP28U
Network: Qualcomm Atheros Killer E2500 + 2 x QLogic cLOM8214 1/10GbE + Intel 8265 / 8275
OS: Ubuntu 18.04
Kernel: 4.19.0-041900-generic (x86_64)
Desktop: GNOME Shell 3.28.3
Display Server: X Server 1.19.6
Display Driver: modesetting 1.19.6
OpenGL: 4.3 Mesa 18.0.5
Compiler: GCC 7.3.0
File-System: ext4
Screen Resolution: 3840x2160

AMD TR 2920X Linux Benchmarks - System Logs
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v
- Scaling Governor: acpi-cpufreq ondemand
- OpenJDK Runtime Environment (build 10.0.2+13-Ubuntu-1ubuntu0.18.04.4)
- Python 2.7.15rc1 + Python 3.6.7
- Security: l1tf: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB

AMD TR 2920X Linux - results summary table for Threadripper 2920X (all tests; the individual benchmark results are listed below).

MKL-DNN

This is a test of Intel MKL-DNN, the Intel Math Kernel Library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.
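For reference, a minimal sketch of driving one benchdnn configuration by hand, assuming an MKL-DNN build of this era with the bundled benchdnn binary and its stock input batch files (the path and flag spelling are illustrative assumptions, not the exact invocation the test profile uses):

    # Performance-mode run of the conv_all convolution batch with the u8s8u8s32 config;
    # benchdnn prints per-problem timings plus a total perf figure.
    ./benchdnn --conv --mode=P --cfg=u8s8u8s32 --batch=inputs/conv_all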

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_all - Data Type: u8s8u8s32 - ms, fewer is better
Threadripper 2920X: 39375.00 (SE +/- 171.16, N = 3; MIN: 38813.3)

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_all - Data Type: u8s8f32s32 - ms, fewer is better
Threadripper 2920X: 39053.70 (SE +/- 84.29, N = 3; MIN: 38722.6)

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_all - Data Type: f32 - ms, fewer is better
Threadripper 2920X: 3917.84 (SE +/- 10.64, N = 3; MIN: 3840.16)

MKL-DNN 2019-04-16 - Harness: Deconvolution Batch deconv_all - Data Type: u8s8u8s32 - ms, fewer is better
Threadripper 2920X: 29785.53 (SE +/- 19.95, N = 3; MIN: 29521.7)

MKL-DNN 2019-04-16 - Harness: Deconvolution Batch deconv_all - Data Type: f32 - ms, fewer is better
Threadripper 2920X: 5754.33 (SE +/- 17.67, N = 3; MIN: 5617.69)

1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

PostgreSQL pgbench

This is a simple benchmark of PostgreSQL using pgbench. Learn more via the OpenBenchmarking.org test page.
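A hand-rolled pgbench session along the same lines would look roughly like the following; the database name, scale factor, client and thread counts here are illustrative assumptions, not the values the test profile uses.

    createdb pgbench_scratch                      # scratch database for the benchmark
    pgbench -i -s 100 pgbench_scratch             # initialize tables at scale factor 100
    pgbench -c 24 -j 12 -T 60 pgbench_scratch     # 24 clients, 12 worker threads, 60-second read/write run
    pgbench -c 24 -j 12 -T 60 -S pgbench_scratch  # -S switches to the read-only (SELECT-only) workload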

PostgreSQL pgbench 10.3 - Scaling: Buffer Test - Test: Normal Load - Mode: Read Write - TPS, more is better
Threadripper 2920X: 21146.40 (SE +/- 532.34, N = 15)

PostgreSQL pgbench 10.3 - Scaling: Buffer Test - Test: Heavy Contention - Mode: Read Write - TPS, more is better
Threadripper 2920X: 20648.68 (SE +/- 515.85, N = 15)

1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -lcrypt -ldl -lm

Core-Latency

This is a test of core-latency, which measures the latency between all core combinations on the system processor(s). Reported is the average latency. Learn more via the OpenBenchmarking.org test page.

Core-Latency - Average Latency Between CPU Cores - ns, fewer is better
Threadripper 2920X: 357.81 (MIN: 35.88 / MAX: 555.49)
1. (CXX) g++ options: -std=c++11 -pthread -O3

PostgreSQL pgbench

This is a simple benchmark of PostgreSQL using pgbench. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 10.3 - Scaling: Buffer Test - Test: Single Thread - Mode: Read Write - TPS, more is better
Threadripper 2920X: 1136.09 (SE +/- 51.66, N = 13)
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -lcrypt -ldl -lm

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.
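Outside of PTS, Renaissance workloads are launched one at a time from the suite's JAR. A rough sketch follows; the JAR and workload names are assumptions based on the 0.9.0 release naming.

    # Run the Scala Dotty compiler workload with the suite's default repetition count.
    java -jar renaissance-gpl-0.9.0.jar dotty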

Renaissance 0.9.0 - Test: Savina Reactors.IO - ms, fewer is better
Threadripper 2920X: 22180.48 (SE +/- 313.15, N = 40)

CP2K Molecular Dynamics

CP2K is an open-source molecular dynamics software package focused on quantum chemistry and solid-state physics. This test profile currently makes use of the OpenMP implementation, runs the Fayalite-FIST molecular dynamics workload, and measures the total time to complete. Learn more via the OpenBenchmarking.org test page.
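A rough idea of an equivalent standalone OpenMP run (the binary name, input/output file names, and thread count are assumptions; PTS builds its own CP2K and supplies the Fayalite-FIST input deck):

    export OMP_NUM_THREADS=24              # use all logical cores with the OpenMP (ssmp) build
    cp2k.ssmp -i fayalite.inp -o fayalite.out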

CP2K Molecular Dynamics 6.1 - Fayalite-FIST Data - Seconds, fewer is better
Threadripper 2920X: 712.14

NGINX Benchmark

This is a test of ab, which is the Apache Benchmark program running against nginx. This test profile measures how many requests per second a given system can sustain when carrying out 2,000,000 requests with 500 requests being carried out concurrently. Learn more via the OpenBenchmarking.org test page.
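In plain terms, the workload is an ab run against a locally served static page, roughly like the following; the URL and port are placeholders, as the profile configures its own nginx instance.

    # 2,000,000 total requests, 500 in flight at a time, against the nginx test page.
    ab -n 2000000 -c 500 http://localhost:8089/test.html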

NGINX Benchmark 1.9.9 - Static Web Page Serving - Requests Per Second, more is better
Threadripper 2920X: 30180.32 (SE +/- 333.41, N = 10)
1. (CC) gcc options: -lpthread -lcrypt -lcrypto -lz -O3 -march=native

PostgreSQL pgbench

This is a simple benchmark of PostgreSQL using pgbench. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 10.3 - Scaling: Buffer Test - Test: Single Thread - Mode: Read Only - TPS, more is better
Threadripper 2920X: 29528.05 (SE +/- 337.86, N = 9)
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -lcrypt -ldl -lm

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.9.0 - Test: Akka Unbalanced Cobwebbed Tree - ms, fewer is better
Threadripper 2920X: 14843.76 (SE +/- 166.13, N = 40)

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: Jetstream - Browser: Firefox - Score, more is better
Threadripper 2920X: 185.38 (SE +/- 0.67, N = 3)
1. firefox 67.0.3

Apache Siege

This is a test of Apache web server performance as facilitated by the Siege web server benchmark program. Learn more via the OpenBenchmarking.org test page.
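A comparable standalone siege invocation for the 250-user case would look something like this; the URL and run length are placeholders, as the test profile supplies its own target and duration.

    # 250 concurrent simulated users in benchmark mode (no think-time delay) for 30 seconds.
    siege -b -c 250 -t 30S http://localhost/test.html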

Apache Siege 2.4.29 - Concurrent Users: 250 - Transactions Per Second, more is better
Threadripper 2920X: 35673.52
1. (CC) gcc options: -O2 -lpthread -ldl -lssl -lcrypto

MKL-DNN

This is a test of Intel MKL-DNN, the Intel Math Kernel Library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_googlenet_v3 - Data Type: u8s8u8s32 - ms, fewer is better
Threadripper 2920X: 1847.85 (SE +/- 2.79, N = 3; MIN: 1826.23)
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

Apache Siege

This is a test of Apache web server performance as facilitated by the Siege web server benchmark program. Learn more via the OpenBenchmarking.org test page.

Apache Siege 2.4.29 - Concurrent Users: 200 - Transactions Per Second, more is better
Threadripper 2920X: 22611.65
1. (CC) gcc options: -O2 -lpthread -ldl -lssl -lcrypto

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.9.0 - Test: Apache Spark ALS - ms, fewer is better
Threadripper 2920X: 6454.55 (SE +/- 170.82, N = 40)

MKL-DNN

This is a test of Intel MKL-DNN, the Intel Math Kernel Library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_googlenet_v3 - Data Type: u8s8f32s32 - ms, fewer is better
Threadripper 2920X: 1814.92 (SE +/- 1.20, N = 3; MIN: 1794.85)
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9/WebM format using a sample 1080p video. Learn more via the OpenBenchmarking.org test page.
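The underlying tool is the vpxenc command-line encoder; a representative 1080p VP9 encode looks roughly like the following, where the input file, target bitrate, and thread count are illustrative assumptions rather than the profile's exact settings.

    # Encode a 1080p Y4M clip to VP9/WebM using 24 threads at roughly 3 Mbps.
    vpxenc --codec=vp9 --threads=24 --target-bitrate=3000 -o output.webm input_1080p.y4m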

VP9 libvpx Encoding 1.8.0 - vpxenc VP9 1080p Video Encode - Frames Per Second, more is better
Threadripper 2920X: 138.59 (SE +/- 0.28, N = 3)
1. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=c++11

MKL-DNN

This is a test of Intel MKL-DNN, the Intel Math Kernel Library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_googlenet_v3 - Data Type: f32 - ms, fewer is better
Threadripper 2920X: 215.90 (SE +/- 1.74, N = 3; MIN: 209.78)
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.9.0 - Test: Apache Spark Bayes - ms, fewer is better
Threadripper 2920X: 4308.11 (SE +/- 27.05, N = 40)

PostgreSQL pgbench

This is a simple benchmark of PostgreSQL using pgbench. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 10.3 - Scaling: Buffer Test - Test: Normal Load - Mode: Read Only - TPS, more is better
Threadripper 2920X: 432356.73 (SE +/- 2476.73, N = 3)

PostgreSQL pgbench 10.3 - Scaling: Buffer Test - Test: Heavy Contention - Mode: Read Only - TPS, more is better
Threadripper 2920X: 436923.83 (SE +/- 3007.54, N = 3)

1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -lcrypt -ldl -lm

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.9.0 - Test: Apache Spark PageRank - ms, fewer is better
Threadripper 2920X: 20976.97 (SE +/- 134.54, N = 8)

MKL-DNN

This is a test of Intel MKL-DNN, the Intel Math Kernel Library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_3d - Data Type: u8s8u8s32 - ms, fewer is better
Threadripper 2920X: 7644.85 (SE +/- 16.61, N = 3; MIN: 7602.18)

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_3d - Data Type: u8s8f32s32 - ms, fewer is better
Threadripper 2920X: 7638.65 (SE +/- 4.05, N = 3; MIN: 7622.75)

1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.0.64 - Scene: Bedroom - M samples/s, more is better
Threadripper 2920X: 1.70 (SE +/- 0.00, N = 3)

IndigoBench 4.0.64 - Scene: Supercar - M samples/s, more is better
Threadripper 2920X: 3.63 (SE +/- 0.01, N = 3)

John The Ripper

This is a benchmark of John The Ripper, which is a password cracker. Learn more via the OpenBenchmarking.org test page.
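The figures below come from John's built-in self-benchmark mode; run by hand it looks roughly like the following, where the format names follow the jumbo build and are assumptions for the two tests used here.

    john --test --format=md5crypt   # MD5-based crypt(3) hashing
    john --test --format=bcrypt     # Blowfish-based bcrypt hashing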

John The Ripper 1.9.0-jumbo-1 - Test: MD5 - Real C/S, more is better
Threadripper 2920X: 855833 (SE +/- 3242.06, N = 3)
1. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -pthread -lm -lz -ldl -lcrypt -lbz2

Apache Siege

This is a test of Apache web server performance as facilitated by the Siege web server benchmark program. Learn more via the OpenBenchmarking.org test page.

Apache Siege 2.4.29 - Concurrent Users: 100 - Transactions Per Second, more is better
Threadripper 2920X: 22670.83 (SE +/- 71.95, N = 2)
1. (CC) gcc options: -O2 -lpthread -ldl -lssl -lcrypto

MKL-DNN

This is a test of Intel MKL-DNN, the Intel Math Kernel Library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

MKL-DNN 2019-04-16 - Harness: IP Batch All - Data Type: u8s8u8s32 - ms, fewer is better
Threadripper 2920X: 608.89 (SE +/- 1.25, N = 3; MIN: 594.16)

MKL-DNN 2019-04-16 - Harness: IP Batch All - Data Type: u8s8f32s32 - ms, fewer is better
Threadripper 2920X: 614.60 (SE +/- 6.56, N = 3; MIN: 598.1)

MKL-DNN 2019-04-16 - Harness: IP Batch All - Data Type: f32 - ms, fewer is better
Threadripper 2920X: 174.55 (SE +/- 0.72, N = 3; MIN: 171.36)

1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

Apache Benchmark

This is a test of ab, which is the Apache benchmark program. This test profile measures how many requests per second a given system can sustain when carrying out 1,000,000 requests with 100 requests being carried out concurrently. Learn more via the OpenBenchmarking.org test page.
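Equivalently, by hand, the run matches the profile's parameters; the URL and port are placeholders for the locally served static page.

    # 1,000,000 total requests with 100 concurrent connections.
    ab -n 1000000 -c 100 http://localhost:8088/test.html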

Apache Benchmark 2.4.29 - Static Web Page Serving - Requests Per Second, more is better
Threadripper 2920X: 25252.94 (SE +/- 14.99, N = 3)
1. (CC) gcc options: -shared -fPIC -O2 -pthread

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.9.0 - Test: In-Memory Database Shootout - ms, fewer is better
Threadripper 2920X: 7366.48 (SE +/- 59.43, N = 8)

MKL-DNN

This is a test of Intel MKL-DNN, the Intel Math Kernel Library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_3d - Data Type: f32 - ms, fewer is better
Threadripper 2920X: 23.33 (SE +/- 0.06, N = 3; MIN: 22.6)

MKL-DNN 2019-04-16 - Harness: Deconvolution Batch deconv_3d - Data Type: u8s8u8s32 - ms, fewer is better
Threadripper 2920X: 6117.86 (SE +/- 9.73, N = 3; MIN: 6083.12)

MKL-DNN 2019-04-16 - Harness: Deconvolution Batch deconv_3d - Data Type: u8s8f32s32 - ms, fewer is better
Threadripper 2920X: 6095.41 (SE +/- 8.40, N = 3; MIN: 6069.02)

1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program; otherwise, on Windows, it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.8.22 - Test: unsharp-mask - Seconds, fewer is better
Threadripper 2920X: 34.75 (SE +/- 0.10, N = 3)

MKL-DNN

This is a test of Intel MKL-DNN, the Intel Math Kernel Library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

MKL-DNN 2019-04-16 - Harness: Deconvolution Batch deconv_1d - Data Type: u8s8f32s32 - ms, fewer is better
Threadripper 2920X: 3423.59 (SE +/- 3.55, N = 3; MIN: 3411.48)

MKL-DNN 2019-04-16 - Harness: Deconvolution Batch deconv_1d - Data Type: u8s8u8s32 - ms, fewer is better
Threadripper 2920X: 3436.87 (SE +/- 6.94, N = 3; MIN: 3419.21)

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_alexnet - Data Type: u8s8u8s32 - ms, fewer is better
Threadripper 2920X: 4225.45 (SE +/- 41.59, N = 3; MIN: 4172.42)

1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

John The Ripper

This is a benchmark of John The Ripper, which is a password cracker. Learn more via the OpenBenchmarking.org test page.

John The Ripper 1.9.0-jumbo-1 - Test: Blowfish - Real C/S, more is better
Threadripper 2920X: 27037 (SE +/- 24.69, N = 3)
1. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -pthread -lm -lz -ldl -lcrypt -lbz2

MKL-DNN

This is a test of Intel MKL-DNN, the Intel Math Kernel Library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_alexnet - Data Type: u8s8f32s32 - ms, fewer is better
Threadripper 2920X: 4217.91 (SE +/- 19.37, N = 3; MIN: 4172.95)
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

t-test1

This is a test of t-test1 for basic memory allocator benchmarks. Note this test profile is currently very basic and the overall time does include the warmup time of the custom t-test1 compilation. Improvements welcome. Learn more via the OpenBenchmarking.org test page.

t-test1 2017-01-13 - Threads: 1 - Seconds, fewer is better
Threadripper 2920X: 28.17 (SE +/- 0.28, N = 3)
1. (CC) gcc options: -pthread

XZ Compression

This test measures the time needed to compress a sample file (an Ubuntu file-system image) using XZ compression. Learn more via the OpenBenchmarking.org test page.
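The equivalent standalone command is roughly the following; whether the test profile enables multi-threading is not stated here, so -T0 is an assumption.

    # Level-9 compression, keep the original file, use all available threads.
    xz -9 -k -T0 ubuntu-16.04.3-server-i386.img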

XZ Compression 5.2.4 - Compressing ubuntu-16.04.3-server-i386.img, Compression Level 9 - Seconds, fewer is better
Threadripper 2920X: 26.88 (SE +/- 0.09, N = 3)
1. (CC) gcc options: -pthread -fvisibility=hidden -O2

MKL-DNN

This is a test of Intel MKL-DNN, the Intel Math Kernel Library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

MKL-DNN 2019-04-16 - Harness: Deconvolution Batch deconv_1d - Data Type: f32 - ms, fewer is better
Threadripper 2920X: 22.95 (SE +/- 0.03, N = 3; MIN: 22.56)
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

Apache Siege

This is a test of Apache web server performance as facilitated by the Siege web server benchmark program. Learn more via the OpenBenchmarking.org test page.

Apache Siege 2.4.29 - Concurrent Users: 50 - Transactions Per Second, more is better
Threadripper 2920X: 24888.75 (SE +/- 95.97, N = 3)
1. (CC) gcc options: -O2 -lpthread -ldl -lssl -lcrypto

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.9.0 - Test: Scala Dotty - ms, fewer is better
Threadripper 2920X: 6522.49 (SE +/- 49.97, N = 8)

Primesieve

Primesieve generates prime numbers using a highly optimized sieve of Eratosthenes implementation. Primesieve benchmarks the CPU's L1/L2 cache performance. Learn more via the OpenBenchmarking.org test page.
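Run directly, the same workload is a single primesieve invocation; the explicit thread count is optional, since primesieve defaults to using all cores.

    # Sieve and count all primes below 1e12 using 24 threads.
    primesieve 1e12 --threads=24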

Primesieve 7.4 - 1e12 Prime Number Generation - Seconds, fewer is better
Threadripper 2920X: 17.17 (SE +/- 0.03, N = 3)
1. (CXX) g++ options: -O3 -lpthread

MKL-DNN

This is a test of Intel MKL-DNN, the Intel Math Kernel Library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

MKL-DNN 2019-04-16 - Harness: Convolution Batch conv_alexnet - Data Type: f32 - ms, fewer is better
Threadripper 2920X: 484.67 (SE +/- 1.21, N = 3; MIN: 480.61)
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program; otherwise, on Windows, it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.8.22 - Test: rotate - Seconds, fewer is better
Threadripper 2920X: 15.89 (SE +/- 0.03, N = 3)

GIMP 2.8.22 - Test: auto-levels - Seconds, fewer is better
Threadripper 2920X: 15.42 (SE +/- 0.09, N = 3)

MKL-DNN

This is a test of Intel MKL-DNN, the Intel Math Kernel Library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

MKL-DNN 2019-04-16 - Harness: IP Batch 1D - Data Type: u8s8f32s32 - ms, fewer is better
Threadripper 2920X: 54.74 (SE +/- 0.60, N = 3; MIN: 52.78)

MKL-DNN 2019-04-16 - Harness: IP Batch 1D - Data Type: u8s8u8s32 - ms, fewer is better
Threadripper 2920X: 54.74 (SE +/- 0.76, N = 3; MIN: 52.82)

MKL-DNN 2019-04-16 - Harness: IP Batch 1D - Data Type: f32 - ms, fewer is better
Threadripper 2920X: 14.36 (SE +/- 0.02, N = 3; MIN: 14.1)

1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

t-test1

This is a test of t-test1 for basic memory allocator benchmarks. Note this test profile is currently very basic and the overall time does include the warmup time of the custom t-test1 compilation. Improvements welcome. Learn more via the OpenBenchmarking.org test page.

t-test1 2017-01-13 - Threads: 2 - Seconds, fewer is better
Threadripper 2920X: 9.54 (SE +/- 0.05, N = 3)
1. (CC) gcc options: -pthread

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program; otherwise, on Windows, it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.8.22 - Test: resize - Seconds, fewer is better
Threadripper 2920X: 7.34 (SE +/- 0.07, N = 3)

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.5 - 1080p 8-bit YUV To AV1 Video Encode - Frames Per Second, more is better
Threadripper 2920X: 33.80 (SE +/- 0.14, N = 3)
1. (CXX) g++ options: -O3 -pie -lpthread -lm

Apache Siege

This is a test of Apache web server performance as facilitated by the Siege web server benchmark program. Learn more via the OpenBenchmarking.org test page.

Apache Siege 2.4.29 - Concurrent Users: 10 - Transactions Per Second, more is better
Threadripper 2920X: 25041.98 (SE +/- 55.24, N = 3)
1. (CC) gcc options: -O2 -lpthread -ldl -lssl -lcrypto

Tachyon

This is a test of the threaded Tachyon, a parallel ray-tracing system. Learn more via the OpenBenchmarking.org test page.

Tachyon 0.98.9 - Total Time - Seconds, fewer is better
Threadripper 2920X: 3.78 (SE +/- 0.01, N = 3)
1. (CC) gcc options: -m32 -O3 -fomit-frame-pointer -ffast-math -ltachyon -lm -lpthread

MKL-DNN

This is a test of Intel MKL-DNN, the Intel Math Kernel Library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Learn more via the OpenBenchmarking.org test page.

MKL-DNN 2019-04-16 - Harness: Deconvolution Batch deconv_3d - Data Type: f32 - ms, fewer is better
Threadripper 2920X: 9.50 (SE +/- 0.16, N = 3; MIN: 9.05)
1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

ctx_clock

Ctx_clock is a simple test program to measure the context switch time in clock cycles. Learn more via the OpenBenchmarking.org test page.

ctx_clock - Context Switch Time - Clocks, fewer is better
Threadripper 2920X: 175