Threadripper 2950X

AMD Ryzen Threadripper 2950X 16-Core testing with a MSI MEG X399 CREATION (MS-7B92) v1.0 (1.10 BIOS) and llvmpipe 31GB on Ubuntu 18.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 1905136-PTS-THREADRI30

Run Management

Result Identifier: TR 2950X
Test Date: May 13 2019
Test Run Duration: 4 Hours, 5 Minutes


Threadripper 2950X - OpenBenchmarking.org - Phoronix Test Suite

Processor: AMD Ryzen Threadripper 2950X 16-Core @ 3.50GHz (16 Cores / 32 Threads)
Motherboard: MSI MEG X399 CREATION (MS-7B92) v1.0 (1.10 BIOS)
Chipset: AMD 17h
Memory: 32768MB
Disk: Samsung SSD 970 EVO 250GB
Graphics: llvmpipe 31GB
Audio: Realtek ALC1220
Monitor: ASUS PB278
Network: 2 x Intel I211 + Intel-AC 9260
OS: Ubuntu 18.10
Kernel: 5.0.0-rc6-phx (x86_64) 20190224
Desktop: GNOME Shell 3.30.1
Display Server: X Server 1.20.1
Display Driver: modesetting 1.20.1
OpenGL: 3.3 Mesa 18.2.2 (LLVM 7.0 128 bits)
Compiler: GCC 8.3.0
File-System: ext4
Screen Resolution: 2560x1440

System Logs:
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v
- Scaling Governor: acpi-cpufreq ondemand
- Security: __user pointer sanitization + Full AMD retpoline IBPB: conditional STIBP: disabled RSB filling + SSB disabled via prctl and seccomp

Threadripper 2950X Benchmarks - summary results table (58 results; each result is shown individually below)

GeeXLab

GeeXLab is a cross-platform tool for 3D programming and demo creation. Learn more via the OpenBenchmarking.org test page.
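Each result below is reported as a mean over N runs together with its standard error (SE +/-). As a minimal sketch of how such a figure is derived (not the Phoronix Test Suite's actual implementation; the sample values are hypothetical), the standard error of the mean is the sample standard deviation divided by the square root of N:

```python
import statistics

def mean_and_se(samples):
    """Return (mean, standard error of the mean) for a list of run results."""
    m = statistics.mean(samples)
    # SE of the mean = sample standard deviation / sqrt(N)
    se = statistics.stdev(samples) / len(samples) ** 0.5
    return m, se

# e.g. three hypothetical FPS runs
m, se = mean_and_se([97.2, 97.5, 97.7])
```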

GeeXLab 0.28.0 - Resolution: 1920 x 1080 - Test: GL2 AntTweakBar (FPS, More Is Better)
TR 2950X: 97.47 (SE +/- 0.26, N = 3)

GeeXLab 0.28.0 - Resolution: 1920 x 1080 - Test: GL3 Vertex Pool (FPS, More Is Better)
TR 2950X: 109.53 (SE +/- 0.97, N = 3)

GeeXLab 0.28.0 - Resolution: 2560 x 1440 - Test: GL2 AntTweakBar (FPS, More Is Better)
TR 2950X: 95.90 (SE +/- 0.45, N = 3)

GeeXLab 0.28.0 - Resolution: 2560 x 1440 - Test: GL3 Vertex Pool (FPS, More Is Better)
TR 2950X: 93.93 (SE +/- 0.44, N = 3)

GeeXLab 0.28.0 - Resolution: 1920 x 1080 - Test: GL2 Cell Shading (FPS, More Is Better)
TR 2950X: 49.03 (SE +/- 0.60, N = 3)

GeeXLab 0.28.0 - Resolution: 2560 x 1440 - Test: GL2 Cell Shading (FPS, More Is Better)
TR 2950X: 42.67 (SE +/- 0.07, N = 3)

GeeXLab 0.28.0 - Resolution: 1920 x 1080 - Test: GL2 Tunnel Beauty (FPS, More Is Better)
TR 2950X: 4.50 (SE +/- 0.00, N = 3)

GeeXLab 0.28.0 - Resolution: 2560 x 1440 - Test: GL2 Tunnel Beauty (FPS, More Is Better)
TR 2950X: 2.60 (SE +/- 0.00, N = 3)

GeeXLab 0.28.0 - Resolution: 1920 x 1080 - Test: GL2 Hot Tunnel DNA (FPS, More Is Better)
TR 2950X: 5.07 (SE +/- 0.03, N = 3)

GeeXLab 0.28.0 - Resolution: 2560 x 1440 - Test: GL2 Hot Tunnel DNA (FPS, More Is Better)
TR 2950X: 2.97 (SE +/- 0.03, N = 3)

GeeXLab 0.28.0 - Resolution: 1920 x 1080 - Test: GL2 Noise Animation Electric (FPS, More Is Better)
TR 2950X: 34.20 (SE +/- 0.06, N = 3)

GeeXLab 0.28.0 - Resolution: 2560 x 1440 - Test: GL2 Noise Animation Electric (FPS, More Is Better)
TR 2950X: 22.73 (SE +/- 0.03, N = 3)

t-test1

This is a test of t-test1 for basic memory allocator benchmarking. Note that this test profile is currently very basic and the overall time includes the warmup time of the custom t-test1 compilation. Improvements welcome. Learn more via the OpenBenchmarking.org test page.
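t-test1 itself is a C program that hammers malloc/free from one or more threads. As a loose illustration only (this is not t-test1's code, and Python allocations go through Python's own allocator rather than hitting the C allocator directly), an allocator stress loop looks like:

```python
import random
import time

def alloc_stress(iterations=100_000, max_size=4096, live=256, seed=0):
    """Repeatedly allocate variable-sized blocks, keeping a bounded pool of
    live allocations (overwriting a slot frees the old block), and return
    the elapsed seconds."""
    rng = random.Random(seed)
    pool = [None] * live
    start = time.perf_counter()
    for _ in range(iterations):
        slot = rng.randrange(live)
        pool[slot] = bytearray(rng.randint(1, max_size))
    return time.perf_counter() - start
```

Real allocator benchmarks such as t-test1 run loops like this concurrently across threads to expose lock contention in the allocator.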

t-test1 2017-01-13 - Threads: 1 (Seconds, Fewer Is Better)
TR 2950X: 28.43 (SE +/- 0.16, N = 3)
1. (CC) gcc options: -pthread

t-test1 2017-01-13 - Threads: 2 (Seconds, Fewer Is Better)
TR 2950X: 10.52 (SE +/- 0.08, N = 3)
1. (CC) gcc options: -pthread

MKL-DNN

This is a test of Intel MKL-DNN, the Intel Math Kernel Library for Deep Neural Networks, making use of its built-in benchdnn benchmarking functionality. The result is the total perf time reported by benchdnn. Learn more via the OpenBenchmarking.org test page.
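The data-type labels below follow benchdnn's convention of listing the source, weights, destination, and accumulator types in order; by that reading, u8s8u8s32 means uint8 activations, int8 weights, uint8 output, with 32-bit integer accumulation (check the benchdnn documentation for the authoritative definition). A pure-Python sketch of a dot product under that scheme:

```python
def quantized_dot(src_u8, wei_s8):
    """Dot product of uint8 activations and int8 weights with an int32
    accumulator, saturated back to uint8 - the u8s8u8s32 pattern."""
    assert all(0 <= v <= 255 for v in src_u8)
    assert all(-128 <= v <= 127 for v in wei_s8)
    acc = 0  # s32 accumulator (Python ints never overflow; real kernels use int32)
    for s, w in zip(src_u8, wei_s8):
        acc += s * w
    # requantize/saturate the s32 accumulator into the u8 destination range
    return max(0, min(255, acc))

quantized_dot([100, 200], [1, 1])  # accumulates to 300, saturates to 255
```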

MKL-DNN 2019-04-16 (ms, Fewer Is Better); TR 2950X results, as "Harness - Data Type: result (SE, N, MIN)".
All MKL-DNN entries: 1. (CXX) g++ options: -std=c++11 -march=native -mtune=native -fPIC -fopenmp -O3 -pie -lmklml_intel -ldl

IP Batch 1D - f32: 11.51 (SE +/- 0.03, N = 3, MIN: 11.2)
IP Batch All - f32: 143.58 (SE +/- 0.50, N = 3, MIN: 140.47)
IP Batch 1D - u8s8u8s32: 44.75 (SE +/- 0.48, N = 3, MIN: 43.48)
IP Batch 1D - u8s8f32s32: 45.03 (SE +/- 0.61, N = 3, MIN: 43.09)
IP Batch All - u8s8u8s32: 441.09 (SE +/- 6.27, N = 5, MIN: 415.82)
IP Batch All - u8s8f32s32: 445.05 (SE +/- 5.39, N = 8, MIN: 419.59)
Convolution Batch conv_3d - f32: 20.30 (SE +/- 0.05, N = 3, MIN: 19.58)
Convolution Batch conv_all - f32: 3238.76 (SE +/- 9.51, N = 3, MIN: 3188.29)
Deconvolution Batch deconv_1d - f32: 18.35 (SE +/- 0.02, N = 3, MIN: 17.99)
Deconvolution Batch deconv_3d - f32: 7.91 (SE +/- 0.13, N = 3, MIN: 7.51)
Convolution Batch conv_alexnet - f32: 400.74 (SE +/- 2.54, N = 3, MIN: 392.9)
Deconvolution Batch deconv_all - f32: 3721.14 (SE +/- 4.58, N = 3, MIN: 3659.83)
Convolution Batch conv_3d - u8s8u8s32: 7714.97 (SE +/- 123.16, N = 4, MIN: 7570.22)
Convolution Batch conv_3d - u8s8f32s32: 8086.53 (SE +/- 5.30, N = 3, MIN: 8065.85)
Convolution Batch conv_all - u8s8u8s32: 32558.13 (SE +/- 8.37, N = 3, MIN: 32217.5)
Convolution Batch conv_all - u8s8f32s32: 32777.37 (SE +/- 48.72, N = 3, MIN: 32421.1)
Convolution Batch conv_googlenet_v3 - f32: 179.56 (SE +/- 1.33, N = 3, MIN: 173.95)
Deconvolution Batch deconv_1d - u8s8u8s32: 2861.73 (SE +/- 22.71, N = 3, MIN: 2827.42)
Deconvolution Batch deconv_3d - u8s8u8s32: 4913.82 (SE +/- 12.03, N = 3, MIN: 4845.38)
Convolution Batch conv_alexnet - u8s8u8s32: 3167.64 (SE +/- 7.24, N = 3, MIN: 3124.03)
Deconvolution Batch deconv_1d - u8s8f32s32: 2524.15 (SE +/- 15.67, N = 3, MIN: 2494.98)
Deconvolution Batch deconv_3d - u8s8f32s32: 4577.35 (SE +/- 17.03, N = 3, MIN: 4504.44)
Deconvolution Batch deconv_all - u8s8u8s32: 24241.50 (SE +/- 58.64, N = 3, MIN: 23843.4)
Convolution Batch conv_alexnet - u8s8f32s32: 3113.27 (SE +/- 4.17, N = 3, MIN: 3079.91)
Convolution Batch conv_googlenet_v3 - u8s8u8s32: 1885.41 (SE +/- 13.46, N = 3, MIN: 1838.28)
Convolution Batch conv_googlenet_v3 - u8s8f32s32: 1864.33 (SE +/- 15.60, N = 3, MIN: 1820.86)

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. This test profile fork builds the encoder from Git source rather than a snapshot. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 Git - 1080p 8-bit YUV To AV1 Video Encode (Frames Per Second, More Is Better)
TR 2950X: 37.08 (SE +/- 0.21, N = 3)
1. (CXX) g++ options: -O3 -pie -lpthread -lm

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample 1080p YUV video file. This test uses SVT-HEVC from Git master. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC Git - 1080p 8-bit YUV To HEVC Video Encode (Frames Per Second, More Is Better)
TR 2950X: 228.72 (SE +/- 0.88, N = 3)
1. (CC) gcc options: -fPIE -fPIC -O2 -flto -fvisibility=hidden -march=native -pie -rdynamic -lpthread -lrt

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample 1080p YUV video file. This test profile uses the Git snapshot of SVT-VP9. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 Git - 1080p 8-bit YUV To VP9 Video Encode (Frames Per Second, More Is Better)
TR 2950X: 92.32 (SE +/- 0.21, N = 3)
1. (CC) gcc options: -fPIE -fPIC -O2 -flto -fvisibility=hidden -mavx -pie -rdynamic -lpthread -lrt -lm

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9/WebM format using a sample 1080p video. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding Git - vpxenc VP9 1080p Video Encode (Frames Per Second, More Is Better)
TR 2950X: 24.04 (SE +/- 0.31, N = 3)
1. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=c++11

x265

This is a simple test of the x265 encoder run on the CPU with a sample 1080p video file. Learn more via the OpenBenchmarking.org test page.

x265 Git - H.265 1080p Video Encoding (Frames Per Second, More Is Better)
TR 2950X: 38.69 (SE +/- 0.03, N = 3)
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 2019-03-07 - 1080p 8-bit YUV To AV1 Video Encode (Frames Per Second, More Is Better)
TR 2950X: 19.35 (SE +/- 0.05, N = 3)
1. (CXX) g++ options: -O3 -pie -lpthread -lm

Y-Cruncher

Y-Cruncher is a multi-threaded Pi benchmark. Learn more via the OpenBenchmarking.org test page.
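y-cruncher's own algorithms are far more sophisticated (and heavily multi-threaded), but as a toy illustration of what "calculating pi digits" involves, Machin's formula evaluated with big-integer fixed-point arithmetic:

```python
def arctan_inv(x, one):
    """arctan(1/x) scaled by `one`, via the Gregory series with integer math."""
    total = term = one // x
    x2 = x * x
    n, sign = 3, -1
    while term:
        term //= x2
        total += sign * (term // n)
        n += 2
        sign = -sign
    return total

def pi_digits(digits):
    """First `digits` decimal digits of pi via Machin's formula:
    pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    one = 10 ** (digits + 10)  # 10 guard digits against truncation error
    pi = 16 * arctan_inv(5, one) - 4 * arctan_inv(239, one)
    return str(pi)[:digits]

pi_digits(15)  # '314159265358979'
```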

Y-Cruncher 0.7.7 - Calculating 500M Pi Digits (Seconds, Fewer Is Better)
TR 2950X: 21.39 (SE +/- 0.03, N = 3)

XZ Compression

This test measures the time needed to compress a sample file (an Ubuntu file-system image) using XZ compression. Learn more via the OpenBenchmarking.org test page.
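The test times the xz command-line tool; the same LZMA2 compression at preset 9 is also exposed by Python's standard-library lzma module, which makes for a quick way to reproduce the workload in miniature (the payload below is hypothetical, not the Ubuntu image):

```python
import lzma

def xz_compress(data: bytes, preset: int = 9) -> bytes:
    """Compress data in the .xz container format at the given preset
    (preset 9 corresponds to this test's 'Compression Level 9')."""
    return lzma.compress(data, format=lzma.FORMAT_XZ, preset=preset)

blob = b"an easily compressible payload " * 1000
packed = xz_compress(blob)
```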

XZ Compression 5.2.4 - Compressing ubuntu-16.04.3-server-i386.img, Compression Level 9 (Seconds, Fewer Is Better)
TR 2950X: 25.17 (SE +/- 0.31, N = 3)
1. (CC) gcc options: -pthread -fvisibility=hidden -O2

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode some sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.3 - Video Input: Summer Nature 4K (Seconds, Fewer Is Better)
TR 2950X: 24.25 (SE +/- 0.04, N = 3)
1. (CC) gcc options: -lm -pthread

dav1d 0.3 - Video Input: Summer Nature 1080p (Seconds, Fewer Is Better)
TR 2950X: 9.21 (SE +/- 0.01, N = 3)
1. (CC) gcc options: -lm -pthread

Tachyon

This is a test of Tachyon, a multi-threaded parallel ray-tracing system. Learn more via the OpenBenchmarking.org test page.

Tachyon 0.98.9 - Total Time (Seconds, Fewer Is Better)
TR 2950X: 3.26 (SE +/- 0.00, N = 3)
1. (CC) gcc options: -m32 -O3 -fomit-frame-pointer -ffast-math -ltachyon -lm -lpthread

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program where available; otherwise, on Windows, it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.
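Of the four GIMP operations timed below, unsharp-mask is the least self-explanatory: it sharpens by adding back a scaled difference between the image and a blurred copy (sharpened = original + amount * (original - blurred)). A 1-D pure-Python sketch of that idea (illustrative only, not GIMP's implementation):

```python
def box_blur(signal, radius=1):
    """Simple moving-average blur."""
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - radius):i + radius + 1]
        out.append(sum(window) / len(window))
    return out

def unsharp_mask(signal, amount=1.0, radius=1):
    """sharpened = original + amount * (original - blurred)"""
    blurred = box_blur(signal, radius)
    return [s + amount * (s - b) for s, b in zip(signal, blurred)]

unsharp_mask([0, 0, 0, 1, 1, 1])  # the edge gains overshoot on both sides
```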

GIMP 2.10.6 - Test: resize (Seconds, Fewer Is Better)
TR 2950X: 8.26 (SE +/- 0.05, N = 3)

GIMP 2.10.6 - Test: rotate (Seconds, Fewer Is Better)
TR 2950X: 13.74 (SE +/- 0.01, N = 3)

GIMP 2.10.6 - Test: auto-levels (Seconds, Fewer Is Better)
TR 2950X: 15.27 (SE +/- 0.04, N = 3)

GIMP 2.10.6 - Test: unsharp-mask (Seconds, Fewer Is Better)
TR 2950X: 19.66 (SE +/- 0.06, N = 3)

Chaos Group V-RAY

This is a test of Chaos Group's V-RAY benchmark. Learn more via the OpenBenchmarking.org test page.

Chaos Group V-RAY 4.10.03 - Mode: CPU (Ksamples, More Is Better)
TR 2950X: 20393 (SE +/- 33.67, N = 3)

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.0.64 - Scene: Bedroom (M samples/s, More Is Better)
TR 2950X: 2.12 (SE +/- 0.00, N = 3)

IndigoBench 4.0.64 - Scene: Supercar (M samples/s, More Is Better)
TR 2950X: 4.53 (SE +/- 0.00, N = 3)

58 Results Shown

GeeXLab:
  1920 x 1080 - GL2 AntTweakBar
  1920 x 1080 - GL3 Vertex Pool
  2560 x 1440 - GL2 AntTweakBar
  2560 x 1440 - GL3 Vertex Pool
  1920 x 1080 - GL2 Cell Shading
  2560 x 1440 - GL2 Cell Shading
  1920 x 1080 - GL2 Tunnel Beauty
  2560 x 1440 - GL2 Tunnel Beauty
  1920 x 1080 - GL2 Hot Tunnel DNA
  2560 x 1440 - GL2 Hot Tunnel DNA
  1920 x 1080 - GL2 Noise Animation Electric
  2560 x 1440 - GL2 Noise Animation Electric
t-test1:
  1
  2
MKL-DNN:
  IP Batch 1D - f32
  IP Batch All - f32
  IP Batch 1D - u8s8u8s32
  IP Batch 1D - u8s8f32s32
  IP Batch All - u8s8u8s32
  IP Batch All - u8s8f32s32
  Convolution Batch conv_3d - f32
  Convolution Batch conv_all - f32
  Deconvolution Batch deconv_1d - f32
  Deconvolution Batch deconv_3d - f32
  Convolution Batch conv_alexnet - f32
  Deconvolution Batch deconv_all - f32
  Convolution Batch conv_3d - u8s8u8s32
  Convolution Batch conv_3d - u8s8f32s32
  Convolution Batch conv_all - u8s8u8s32
  Convolution Batch conv_all - u8s8f32s32
  Convolution Batch conv_googlenet_v3 - f32
  Deconvolution Batch deconv_1d - u8s8u8s32
  Deconvolution Batch deconv_3d - u8s8u8s32
  Convolution Batch conv_alexnet - u8s8u8s32
  Deconvolution Batch deconv_1d - u8s8f32s32
  Deconvolution Batch deconv_3d - u8s8f32s32
  Deconvolution Batch deconv_all - u8s8u8s32
  Convolution Batch conv_alexnet - u8s8f32s32
  Convolution Batch conv_googlenet_v3 - u8s8u8s32
  Convolution Batch conv_googlenet_v3 - u8s8f32s32
SVT-AV1
SVT-HEVC
SVT-VP9
VP9 libvpx Encoding
x265
SVT-AV1
Y-Cruncher
XZ Compression
dav1d:
  Summer Nature 4K
  Summer Nature 1080p
Tachyon
GIMP:
  resize
  rotate
  auto-levels
  unsharp-mask
Chaos Group V-RAY
IndigoBench:
  Bedroom
  Supercar