Intel Core i9-9900K testing with an ASUS PRIME Z390-A (0802 BIOS) and an NVIDIA GeForce GTX 1060 6GB on Ubuntu 19.04 via the Phoronix Test Suite.
Processor: Intel Core i9-9900K @ 5.00GHz (8 Cores / 16 Threads), Motherboard: ASUS PRIME Z390-A (0802 BIOS), Chipset: Intel Cannon Lake PCH, Memory: 16384MB, Disk: Samsung SSD 970 EVO 250GB + 64GB Flash Drive, Graphics: NVIDIA GeForce GTX 1060 6GB (1506/4006MHz), Audio: Realtek ALC1220, Monitor: Acer B286HK, Network: Intel I219-V
OS: Ubuntu 19.04, Kernel: 5.0.9-050009-generic (x86_64), Desktop: GNOME Shell 3.32.0, Display Server: X Server 1.20.4, Display Driver: NVIDIA 430.09, OpenGL: 4.6.0, Vulkan: 1.1.99, Compiler: GCC 8.3.0, File-System: ext4, Screen Resolution: 3840x2160
Processor Notes: Scaling Governor: intel_pstate powersave
Python Notes: Python 2.7.16 + Python 3.7.3
Security Notes: __user pointer sanitization + Full generic retpoline, IBPB: conditional, IBRS_FW, STIBP: conditional, RSB filling + SSB disabled via prctl and seccomp
Changed Disk to Samsung SSD 970 EVO 250GB.
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v
This test runs benchmarks of ParaView, an open-source data analytics and visualization application. Learn more via the OpenBenchmarking.org test page.
Tesseract is a fork of Cube 2 Sauerbraten with numerous graphics and gameplay improvements. Tesseract has been in development since 2012, with its first release in May 2014. Learn more via the OpenBenchmarking.org test page.
This test calculates the average frame-rate within the Heaven demo for the Unigine engine. This engine is extremely demanding on the system's graphics card. Learn more via the OpenBenchmarking.org test page.
This test calculates the average frame-rate within the Superposition demo for the Unigine engine, released in 2017. This engine is extremely demanding on the system's graphics card. Learn more via the OpenBenchmarking.org test page.
This test calculates the average frame-rate within the Valley demo for the Unigine engine, released in February 2013. This engine is extremely demanding on the system's graphics card. Unigine Valley relies upon an OpenGL 3 core profile context. Learn more via the OpenBenchmarking.org test page.
This is a benchmark of Xonotic, a fork of the DarkPlaces-based Nexuiz game. Development on Xonotic began in March 2010. Learn more via the OpenBenchmarking.org test page.
This is the CUDA and OpenCL version of Vetter's Scalable HeterOgeneous Computing (SHOC) benchmark suite. Learn more via the OpenBenchmarking.org test page.
A basic OpenCL memory benchmark. Learn more via the OpenBenchmarking.org test page.
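To illustrate what a benchmark of this kind measures, the following is a minimal host-to-device bandwidth sketch written in C against the OpenCL API. It is not the source of the test being run here; the buffer size, iteration count, and wall-clock timing approach are assumptions chosen for illustration, and error checking is omitted for brevity.

/* Illustrative OpenCL host-to-device bandwidth sketch (not the actual
 * benchmark's code). Build with: gcc bw.c -o bw -lOpenCL */
#include <CL/cl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    const size_t bytes = 256 << 20;   /* 256 MiB transfer size (assumed) */
    const int iterations = 10;        /* repeat count (assumed) */

    cl_platform_id platform;
    cl_device_id device;
    cl_int err;

    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, &err);

    void *host = malloc(bytes);
    cl_mem dev = clCreateBuffer(ctx, CL_MEM_READ_WRITE, bytes, NULL, &err);

    /* Warm-up transfer so lazy allocation does not skew the timing. */
    clEnqueueWriteBuffer(queue, dev, CL_TRUE, 0, bytes, host, 0, NULL, NULL);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < iterations; i++)
        clEnqueueWriteBuffer(queue, dev, CL_TRUE, 0, bytes, host, 0, NULL, NULL);
    clFinish(queue);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("Host-to-device: %.2f GB/s\n",
           (double)bytes * iterations / secs / 1e9);

    clReleaseMemObject(dev);
    free(host);
    clReleaseCommandQueue(queue);
    clReleaseContext(ctx);
    return 0;
}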
Clpeak is designed to test the peak capabilities of OpenCL devices. Learn more via the OpenBenchmarking.org test page.
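Peak-capability tests of this kind commonly saturate the device's arithmetic units with long chains of dependent multiply-adds, so the measured rate approaches the theoretical GFLOPS. The kernel below is a hypothetical sketch of that pattern, not clpeak's actual source; the unroll depth of 1024 is an arbitrary assumption.

/* Hypothetical clpeak-style peak-compute kernel (OpenCL C).
 * Each mad() counts as two floating-point operations, so peak GFLOPS
 * is estimated as work-items * 1024 iterations * 2 mads * 2 FLOPs
 * divided by the elapsed kernel time. */
__kernel void peak_flops(__global float *out, float seed)
{
    float x = seed;
    float y = (float)get_global_id(0);
    for (int i = 0; i < 1024; i++) {
        x = mad(y, x, y);
        y = mad(x, y, x);
    }
    /* Write the result so the compiler cannot discard the work. */
    out[get_global_id(0)] = x + y;
}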
This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.
NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. This version of the NAMD test profile uses CUDA GPU acceleration. Learn more via the OpenBenchmarking.org test page.
Darktable is an open-source photography / workflow application. This test will use any system-installed Darktable program or, on Windows, will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.
Processor: Intel Core i9-9900K @ 5.00GHz (8 Cores / 16 Threads), Motherboard: ASUS PRIME Z390-A (0802 BIOS), Chipset: Intel Cannon Lake PCH, Memory: 16384MB, Disk: Samsung SSD 970 EVO 250GB + 64GB Flash Drive, Graphics: NVIDIA GeForce GTX 1060 6GB (1506/4006MHz), Audio: Realtek ALC1220, Monitor: Acer B286HK, Network: Intel I219-V
OS: Ubuntu 19.04, Kernel: 5.0.9-050009-generic (x86_64), Desktop: GNOME Shell 3.32.0, Display Server: X Server 1.20.4, Display Driver: NVIDIA 430.09, OpenGL: 4.6.0, Vulkan: 1.1.99, Compiler: GCC 8.3.0, File-System: ext4, Screen Resolution: 3840x2160
Processor Notes: Scaling Governor: intel_pstate powersave
Python Notes: Python 2.7.16 + Python 3.7.3
Security Notes: __user pointer sanitization + Full generic retpoline, IBPB: conditional, IBRS_FW, STIBP: conditional, RSB filling + SSB disabled via prctl and seccomp
Testing initiated at 7 May 2019 08:56 by user phoronix.
Processor: Intel Core i9-9900K @ 5.00GHz (8 Cores / 16 Threads), Motherboard: ASUS PRIME Z390-A (0802 BIOS), Chipset: Intel Cannon Lake PCH, Memory: 16384MB, Disk: Samsung SSD 970 EVO 250GB, Graphics: NVIDIA GeForce GTX 1060 6GB (1506/4006MHz), Audio: Realtek ALC1220, Monitor: Acer B286HK, Network: Intel I219-V
OS: Ubuntu 19.04, Kernel: 5.0.9-050009-generic (x86_64), Desktop: GNOME Shell 3.32.0, Display Server: X Server 1.20.4, Display Driver: NVIDIA 430.09, OpenGL: 4.6.0, Vulkan: 1.1.99, Compiler: GCC 8.3.0, File-System: ext4, Screen Resolution: 3840x2160
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate powersave
Python Notes: Python 2.7.16 + Python 3.7.3
Security Notes: __user pointer sanitization + Full generic retpoline, IBPB: conditional, IBRS_FW, STIBP: conditional, RSB filling + SSB disabled via prctl and seccomp
Testing initiated at 7 May 2019 09:57 by user phoronix.