NVIDIA GeForce GTX 680 To RTX 2080 Ti
Tests by Michael Larabel for a future article on Phoronix.
GeForce GTX 680
Processor: Intel Core i7-8086K @ 5.00GHz (6 Cores / 12 Threads), Motherboard: ASUS PRIME Z370-A (1002 BIOS), Chipset: Intel 8th Gen Core, Memory: 16384MB, Disk: 525GB SABRENT + 118GB INTEL SSDPEK1W120GA, Graphics: NVIDIA GeForce GTX 680 2048MB (1006/3004MHz), Audio: Realtek ALC1220, Monitor: Acer B286HK, Network: Intel Connection
OS: Ubuntu 18.04, Kernel: 4.18.0-041800-generic (x86_64), Desktop: GNOME Shell 3.28.3, Display Server: X Server 1.19.6, Display Driver: NVIDIA 410.57, OpenGL: 4.6.0, Compiler: GCC 7.3.0, File-System: ext4, Screen Resolution: 3840x2160
Processor Notes: Scaling Governor: intel_pstate performance
Security Notes: KPTI + __user pointer sanitization + Full generic retpoline IBPB IBRS_FW + SSB disabled via prctl and seccomp
GeForce GTX 780 Ti
Changed Graphics to NVIDIA GeForce GTX 780 Ti 3072MB (875/3500MHz).
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-as=/usr/bin/x86_64-linux-gnu-as --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-ld=/usr/bin/x86_64-linux-gnu-ld --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v
OpenCL Notes: GPU Compute Cores: 2880
GeForce GTX 980 Ti
Changed Graphics to NVIDIA GeForce GTX 980 Ti 6144MB (999/3505MHz).
OpenCL Change: GPU Compute Cores: 2816
GeForce GTX 1080 Ti
Changed Graphics to NVIDIA GeForce GTX 1080 Ti 11264MB (1480/5508MHz).
OpenCL Change: GPU Compute Cores: 3584
GeForce RTX 2080 Ti
Changed Graphics to NVIDIA GeForce RTX 2080 Ti 11264MB (1350/7000MHz).
OpenCL Change: GPU Compute Cores: 4352
Zotac GeForce RTX 2080 AMP
Processor: Intel Core i7-8086K @ 5.10GHz (6 Cores / 12 Threads), Motherboard: ASRock Z370 Extreme4 (P3.10 BIOS), Chipset: Intel 8th Gen Core, Memory: 16384MB, Disk: 4001GB Western Digital WD40EMRX-82U + 8002GB Backup+ Hub BK + 4001GB Backup+ Desk + 240GB Force MP300 + 1000GB Samsung SSD 970 EVO 1TB, Graphics: Zotac NVIDIA GeForce RTX 2080 8192MB (1515/8004MHz), Audio: Realtek ALC1220, Monitor: VX2439wm, Network: Intel Connection
OS: Linux Mint 19, Kernel: 4.19.0-999-lowlatency (x86_64), Desktop: Cinnamon 3.8.9, Display Server: X Server 1.19.6, Display Driver: NVIDIA 410.73, OpenGL: 4.6.0, Compiler: GCC 8.2.0, File-System: ext4, Screen Resolution: 1920x1080
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate powersave
OpenCL Notes: GPU Compute Cores: 2944
Security Notes: KPTI + __user pointer sanitization + Full generic retpoline IBPB IBRS_FW + SSB disabled via prctl and seccomp + PTE Inversion; VMX: conditional cache flushes SMT vulnerable
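Note that the Zotac system ran the powersave scaling governor while the other systems ran performance, a difference worth normalizing when comparing results. A minimal sketch of the standard sysfs method for intel_pstate (the path is standard on Linux; requires root):

    # Pin all cores to the performance governor before benchmarking.
    echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor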
Tesseract
Tesseract is a fork of Cube 2: Sauerbraten with numerous graphics and gameplay improvements. Tesseract has been in development since 2012, with its first release in May 2014. Learn more via the OpenBenchmarking.org test page.
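Each of the tests below is an OpenBenchmarking.org test profile that the Phoronix Test Suite fetches, installs, and runs automatically. A minimal sketch of a typical invocation (the profile identifier pts/tesseract is an assumption; confirm the exact name on the OpenBenchmarking.org test page):

    # Fetch the test profile, resolve its dependencies, and run it.
    phoronix-test-suite benchmark pts/tesseract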
GpuTest
GpuTest is a cross-platform OpenGL benchmark developed at Geeks3D.com that offers tech demos such as FurMark and TessMark, along with other workloads to stress various areas of GPUs and drivers. Learn more via the OpenBenchmarking.org test page.
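GpuTest can also be launched directly from a terminal. A sketch based on its slash-style command-line options (the exact flag names are assumptions and may differ between GpuTest releases):

    # Run the FurMark demo in benchmark mode for 60 seconds at 1080p.
    ./GpuTest /test=fur /width=1920 /height=1080 /benchmark /benchmark_duration_ms=60000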
Unigine Heaven
This test calculates the average frame-rate within the Heaven demo for the Unigine engine. This engine is extremely demanding on the system's graphics card. Learn more via the OpenBenchmarking.org test page.
Unigine Valley
This test calculates the average frame-rate within the Valley demo for the Unigine engine, released in February 2013. This engine is extremely demanding on the system's graphics card. Unigine Valley relies upon an OpenGL 3 core profile context. Learn more via the OpenBenchmarking.org test page.
Unigine Superposition
This test calculates the average frame-rate within the Superposition demo for the Unigine engine, released in 2017. This engine is extremely demanding on the system's graphics card. Learn more via the OpenBenchmarking.org test page.
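All three Unigine demos follow the same pattern and can be queued in a single Phoronix Test Suite run. A sketch, assuming these profile identifiers (the exact names on OpenBenchmarking.org may differ):

    # Queue the Heaven, Valley, and Superposition demos back to back.
    phoronix-test-suite benchmark pts/unigine-heaven pts/unigine-valley pts/unigine-super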
Deus Ex: Mankind Divided
This is a benchmark of Deus Ex: Mankind Divided on Steam. The test profile assumes you have a Steam account, have Steam installed on the system, and that you own a copy of this game. This automates the process of executing the game and using its built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.
Rise of the Tomb Raider
This is a benchmark of Rise of the Tomb Raider on Steam. The test profile assumes you have a Steam account, have Steam installed on the system, and that you own a copy of this game. This automates the process of executing the game and using its built-in benchmark mode, and it backs up old preferences (in ~/.local/share/feral-interactive/) for the run. Notes for cross-platform comparisons: due to the extreme demands of "Very High" 4K Texture Detail on all platforms, which can need ~6GB+ of VRAM, the "Very High" graphics preset on Linux only sets Texture Detail to "High", and the NVIDIA ambient occlusion modes (HBAO+ and VXAO) are not featured in the Linux version. See the notes in install.sh for a few small tweaks, including disabling the CPU governor check when testing for its effects. Learn more via the OpenBenchmarking.org test page.
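The preference backup the profile performs can also be done by hand before experimenting with settings. A minimal sketch (the .bak destination name is an illustration, not part of the test profile):

    # Manually back up the Feral preferences directory before a run.
    cp -a ~/.local/share/feral-interactive ~/.local/share/feral-interactive.bak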
Serious Sam 3: BFE
This is a benchmark of Serious Sam 3: BFE with the Fusion 2017 update. The test profile assumes you have a Steam account, have Steam installed on the system, and that you own the game. This automates the process of executing the game and using a standardized test. Learn more via the OpenBenchmarking.org test page.
LuxMark
LuxMark is a multi-platform OpenCL benchmark using LuxRender. LuxMark supports targeting different OpenCL devices and has multiple scenes available for rendering. LuxMark is a fully open-source OpenCL program with real-world rendering examples. Learn more via the OpenBenchmarking.org test page.
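LuxMark's device and scene selection can also be driven from a terminal. A hedged sketch based on LuxMark v3-style options (the option and scene names are assumptions and may differ by release):

    # Render the LuxBall HDR scene on the GPU OpenCL device in one scripted pass.
    ./luxmark --mode=BENCHMARK_OCL_GPU --scene=LUXBALL_HDR --single-run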
SHOC Scalable HeterOgeneous Computing
This is the CUDA and OpenCL version of Vetter's Scalable HeterOgeneous Computing (SHOC) benchmark suite. Learn more via the OpenBenchmarking.org test page.
cl-mem
This is a basic OpenCL memory benchmark. Learn more via the OpenBenchmarking.org test page.
IndigoBench
This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.
GPU Temperature Monitor
System Power Consumption Monitor
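These two sensors come from the Phoronix Test Suite's system monitoring module, enabled through the MONITOR environment variable at benchmark time. A sketch, assuming the gpu.temp and sys.power sensor names (available sensors vary by system, and pts/unigine-heaven is an illustrative profile name):

    # Record GPU temperature and system power draw alongside the results.
    MONITOR=gpu.temp,sys.power phoronix-test-suite benchmark pts/unigine-heaven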
GeForce GTX 680
Processor: Intel Core i7-8086K @ 5.00GHz (6 Cores / 12 Threads), Motherboard: ASUS PRIME Z370-A (1002 BIOS), Chipset: Intel 8th Gen Core, Memory: 16384MB, Disk: 525GB SABRENT + 118GB INTEL SSDPEK1W120GA, Graphics: NVIDIA GeForce GTX 680 2048MB (1006/3004MHz), Audio: Realtek ALC1220, Monitor: Acer B286HK, Network: Intel Connection
OS: Ubuntu 18.04, Kernel: 4.18.0-041800-generic (x86_64), Desktop: GNOME Shell 3.28.3, Display Server: X Server 1.19.6, Display Driver: NVIDIA 410.57, OpenGL: 4.6.0, Compiler: GCC 7.3.0, File-System: ext4, Screen Resolution: 3840x2160
Processor Notes: Scaling Governor: intel_pstate performance
Security Notes: KPTI + __user pointer sanitization + Full generic retpoline IBPB IBRS_FW + SSB disabled via prctl and seccomp
Testing initiated at 19 September 2018 21:12 by user phoronix.
GeForce GTX 780 Ti
Processor: Intel Core i7-8086K @ 5.00GHz (6 Cores / 12 Threads), Motherboard: ASUS PRIME Z370-A (1002 BIOS), Chipset: Intel 8th Gen Core, Memory: 16384MB, Disk: 525GB SABRENT + 118GB INTEL SSDPEK1W120GA, Graphics: NVIDIA GeForce GTX 780 Ti 3072MB (875/3500MHz), Audio: Realtek ALC1220, Monitor: Acer B286HK, Network: Intel Connection
OS: Ubuntu 18.04, Kernel: 4.18.0-041800-generic (x86_64), Desktop: GNOME Shell 3.28.3, Display Server: X Server 1.19.6, Display Driver: NVIDIA 410.57, OpenGL: 4.6.0, Compiler: GCC 7.3.0, File-System: ext4, Screen Resolution: 3840x2160
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-as=/usr/bin/x86_64-linux-gnu-as --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-ld=/usr/bin/x86_64-linux-gnu-ld --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate performance
OpenCL Notes: GPU Compute Cores: 2880
Security Notes: KPTI + __user pointer sanitization + Full generic retpoline IBPB IBRS_FW + SSB disabled via prctl and seccomp
Testing initiated at 20 September 2018 11:57 by user phoronix.
GeForce GTX 980 Ti
Processor: Intel Core i7-8086K @ 5.00GHz (6 Cores / 12 Threads), Motherboard: ASUS PRIME Z370-A (1002 BIOS), Chipset: Intel 8th Gen Core, Memory: 16384MB, Disk: 525GB SABRENT + 118GB INTEL SSDPEK1W120GA, Graphics: NVIDIA GeForce GTX 980 Ti 6144MB (999/3505MHz), Audio: Realtek ALC1220, Monitor: Acer B286HK, Network: Intel Connection
OS: Ubuntu 18.04, Kernel: 4.18.0-041800-generic (x86_64), Desktop: GNOME Shell 3.28.3, Display Server: X Server 1.19.6, Display Driver: NVIDIA 410.57, OpenGL: 4.6.0, Compiler: GCC 7.3.0, File-System: ext4, Screen Resolution: 3840x2160
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-as=/usr/bin/x86_64-linux-gnu-as --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-ld=/usr/bin/x86_64-linux-gnu-ld --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate performance
OpenCL Notes: GPU Compute Cores: 2816
Security Notes: KPTI + __user pointer sanitization + Full generic retpoline IBPB IBRS_FW + SSB disabled via prctl and seccomp
Testing initiated at 20 September 2018 10:07 by user phoronix.
GeForce GTX 1080 Ti
Processor: Intel Core i7-8086K @ 5.00GHz (6 Cores / 12 Threads), Motherboard: ASUS PRIME Z370-A (1002 BIOS), Chipset: Intel 8th Gen Core, Memory: 16384MB, Disk: 525GB SABRENT + 118GB INTEL SSDPEK1W120GA, Graphics: NVIDIA GeForce GTX 1080 Ti 11264MB (1480/5508MHz), Audio: Realtek ALC1220, Monitor: Acer B286HK, Network: Intel Connection
OS: Ubuntu 18.04, Kernel: 4.18.0-041800-generic (x86_64), Desktop: GNOME Shell 3.28.3, Display Server: X Server 1.19.6, Display Driver: NVIDIA 410.57, OpenGL: 4.6.0, Compiler: GCC 7.3.0, File-System: ext4, Screen Resolution: 3840x2160
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-as=/usr/bin/x86_64-linux-gnu-as --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-ld=/usr/bin/x86_64-linux-gnu-ld --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate performance
OpenCL Notes: GPU Compute Cores: 3584
Security Notes: KPTI + __user pointer sanitization + Full generic retpoline IBPB IBRS_FW + SSB disabled via prctl and seccomp
Testing initiated at 20 September 2018 05:09 by user phoronix.
GeForce RTX 2080 Ti
Processor: Intel Core i7-8086K @ 5.00GHz (6 Cores / 12 Threads), Motherboard: ASUS PRIME Z370-A (1002 BIOS), Chipset: Intel 8th Gen Core, Memory: 16384MB, Disk: 525GB SABRENT + 118GB INTEL SSDPEK1W120GA, Graphics: NVIDIA GeForce RTX 2080 Ti 11264MB (1350/7000MHz), Audio: Realtek ALC1220, Monitor: Acer B286HK, Network: Intel Connection
OS: Ubuntu 18.04, Kernel: 4.18.0-041800-generic (x86_64), Desktop: GNOME Shell 3.28.3, Display Server: X Server 1.19.6, Display Driver: NVIDIA 410.57, OpenGL: 4.6.0, Compiler: GCC 7.3.0, File-System: ext4, Screen Resolution: 3840x2160
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-as=/usr/bin/x86_64-linux-gnu-as --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-ld=/usr/bin/x86_64-linux-gnu-ld --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate performance
OpenCL Notes: GPU Compute Cores: 4352
Security Notes: KPTI + __user pointer sanitization + Full generic retpoline IBPB IBRS_FW + SSB disabled via prctl and seccomp
Testing initiated at 20 September 2018 07:37 by user phoronix.
Zotac GeForce RTX 2080 AMP
Processor: Intel Core i7-8086K @ 5.10GHz (6 Cores / 12 Threads), Motherboard: ASRock Z370 Extreme4 (P3.10 BIOS), Chipset: Intel 8th Gen Core, Memory: 16384MB, Disk: 4001GB Western Digital WD40EMRX-82U + 8002GB Backup+ Hub BK + 4001GB Backup+ Desk + 240GB Force MP300 + 1000GB Samsung SSD 970 EVO 1TB, Graphics: Zotac NVIDIA GeForce RTX 2080 8192MB (1515/8004MHz), Audio: Realtek ALC1220, Monitor: VX2439wm, Network: Intel Connection
OS: Linux Mint 19, Kernel: 4.19.0-999-lowlatency (x86_64), Desktop: Cinnamon 3.8.9, Display Server: X Server 1.19.6, Display Driver: NVIDIA 410.73, OpenGL: 4.6.0, Compiler: GCC 8.2.0, File-System: ext4, Screen Resolution: 1920x1080
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate powersave
OpenCL Notes: GPU Compute Cores: 2944
Security Notes: KPTI + __user pointer sanitization + Full generic retpoline IBPB IBRS_FW + SSB disabled via prctl and seccomp + PTE Inversion; VMX: conditional cache flushes SMT vulnerable
Testing initiated at 27 October 2018 15:20 by user skeetre.