Linux GPU Compute
Intel Core i9-12900K testing with an ASUS ROG STRIX Z690-E GAMING WIFI (1601 BIOS) and NVIDIA GeForce RTX 30 series graphics cards (RTX 3060 through RTX 3090) on Ubuntu 22.04 via the Phoronix Test Suite.
RTX 3060
Processor: Intel Core i9-12900K @ 5.20GHz (16 Cores / 24 Threads), Motherboard: ASUS ROG STRIX Z690-E GAMING WIFI (1601 BIOS), Chipset: Intel Alder Lake-S PCH, Memory: 32GB, Disk: 1000GB Western Digital WDS100T1X0E-00AFY0, Graphics: eVGA NVIDIA GeForce RTX 3060 12GB, Audio: Intel Alder Lake-S HD Audio, Monitor: ASUS VP28U, Network: Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
OS: Ubuntu 22.04, Kernel: 5.18.0-051800-generic (x86_64), Desktop: GNOME Shell 42.2, Display Server: X Server 1.21.1.3, Display Driver: NVIDIA 515.49.06, OpenGL: 4.6.0, OpenCL: OpenCL 3.0 CUDA 11.7.99, Vulkan: 1.3.217, Compiler: GCC 11.2.0, File-System: ext4, Screen Resolution: 3840x2160
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0x1f - Thermald 2.4.9
Graphics Notes: BAR1 / Visible vRAM Size: 16384 MiB - vBIOS Version: 94.06.14.40.46
OpenCL Notes: GPU Compute Cores: 3584
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
RTX 3060 Ti
Changed Disk to 1000GB Western Digital WDS100T1X0E-00AFY0 + 2000GB.
Changed Graphics to NVIDIA GeForce RTX 3060 Ti 8GB.
Graphics Change: BAR1 / Visible vRAM Size: 8192 MiB - vBIOS Version: 94.04.25.00.2c
OpenCL Change: GPU Compute Cores: 4864
RTX 3070
Changed Graphics to NVIDIA GeForce RTX 3070 8GB.
Graphics Change: BAR1 / Visible vRAM Size: 8192 MiB - vBIOS Version: 94.04.25.00.2b
OpenCL Change: GPU Compute Cores: 5888
RTX 3070 Ti
Changed Graphics to NVIDIA GeForce RTX 3070 Ti 8GB.
Graphics Change: BAR1 / Visible vRAM Size: 8192 MiB - vBIOS Version: 94.04.5b.00.02
OpenCL Change: GPU Compute Cores: 6144
RTX 3080
Changed Graphics to NVIDIA GeForce RTX 3080 10GB.
Graphics Change: BAR1 / Visible vRAM Size: 16384 MiB - vBIOS Version: 94.02.20.00.07
OpenCL Change: GPU Compute Cores: 8704
RTX 3080 Ti
Changed Graphics to NVIDIA GeForce RTX 3080 Ti 12GB.
Graphics Change: BAR1 / Visible vRAM Size: 16384 MiB - vBIOS Version: 94.02.71.00.01
OpenCL Change: GPU Compute Cores: 10240
RTX 3090
Changed Chipset to Intel Device 7aa7.
Changed Graphics to NVIDIA GeForce RTX 3090 24GB.
Graphics Change: BAR1 / Visible vRAM Size: 32768 MiB - vBIOS Version: 94.02.27.00.02
OpenCL Change: GPU Compute Cores: 10496
Python Notes: Python 3.10.4
vkpeak
Vkpeak is a Vulkan compute benchmark inspired by OpenCL's clpeak. Vkpeak provides Vulkan compute performance measurements for FP16 / FP32 / FP64 / INT16 / INT32 scalar and vec4 operations. Learn more via the OpenBenchmarking.org test page.
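As a rough sanity check on peak-throughput numbers like vkpeak's, theoretical FP32 peak can be estimated as cores × 2 (one fused multiply-add, i.e. two floating-point operations, per core per cycle) × clock. A minimal sketch using the compute core counts from the OpenCL notes in this article; the boost clocks are approximate reference values assumed for illustration, not measured here:

```python
# Rough theoretical FP32 peak: cores * 2 FLOPs per FMA * boost clock in GHz
# yields GFLOPS. Core counts match the OpenCL notes above; the boost-clock
# figures are approximate reference values (an assumption for illustration).
def peak_fp32_gflops(cuda_cores: int, boost_ghz: float) -> float:
    return cuda_cores * 2 * boost_ghz

gpus = {
    "RTX 3060": (3584, 1.78),
    "RTX 3070": (5888, 1.73),
    "RTX 3090": (10496, 1.70),
}
for name, (cores, clock) in gpus.items():
    print(f"{name}: ~{peak_fp32_gflops(cores, clock) / 1000:.1f} TFLOPS FP32")
```

Measured vkpeak results typically land somewhat below such theoretical figures due to clock, power, and memory limits.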
RealSR-NCNN
RealSR-NCNN is an NCNN neural network implementation of the RealSR project accelerated using the Vulkan API. RealSR is Real-World Super-Resolution via Kernel Estimation and Noise Injection. NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. This test profile times how long it takes to increase the resolution of a sample image by a scale of 4x with Vulkan. Learn more via the OpenBenchmarking.org test page.
Waifu2x-NCNN Vulkan
Waifu2x-NCNN is an NCNN neural network implementation of the Waifu2x converter project accelerated using the Vulkan API. NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. This test profile times how long it takes to increase the resolution of a sample image with Vulkan. Learn more via the OpenBenchmarking.org test page.
Hashcat
Hashcat is an open-source, advanced password recovery tool supporting GPU acceleration with OpenCL, NVIDIA CUDA, and Radeon ROCm. Learn more via the OpenBenchmarking.org test page.
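The core idea behind password recovery tools of this kind, hashing candidate passwords until one matches a target digest, can be sketched in a few lines. This is an illustrative CPU-side brute force using Python's hashlib, not how Hashcat itself is implemented; Hashcat runs massively parallel kernels on the GPU:

```python
import hashlib
import string
from itertools import product

def crack_md5(target_hex, max_len=4):
    """Exhaustively hash lowercase candidates until one matches the target digest."""
    for length in range(1, max_len + 1):
        for combo in product(string.ascii_lowercase, repeat=length):
            candidate = "".join(combo)
            if hashlib.md5(candidate.encode()).hexdigest() == target_hex:
                return candidate
    return None  # not found within max_len

target = hashlib.md5(b"abc").hexdigest()
print(crack_md5(target))  # -> abc
```

The GPU advantage comes from evaluating millions of such candidate hashes in parallel, which is exactly what this benchmark stresses.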
SHOC Scalable HeterOgeneous Computing
This is the CUDA and OpenCL version of Vetter's Scalable HeterOgeneous Computing (SHOC) benchmark suite. SHOC provides a number of different benchmark programs for evaluating the performance and stability of compute devices. Learn more via the OpenBenchmarking.org test page.
cl-mem
cl-mem is a basic OpenCL memory bandwidth benchmark. Learn more via the OpenBenchmarking.org test page.
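The principle of a memory benchmark, timing bulk transfers over a large buffer and dividing bytes moved by elapsed time, can be illustrated on the host side. A hedged sketch: cl-mem itself measures GPU memory through OpenCL kernels, while this analogous example times a host buffer copy with the standard library:

```python
import time

def copy_bandwidth_gbs(size_mb=256, repeats=5):
    """Estimate host memory copy bandwidth by timing large buffer copies."""
    src = bytearray(size_mb * 1024 * 1024)
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        _ = bytes(src)  # one full read plus one full write pass over the buffer
        best = min(best, time.perf_counter() - start)
    # A copy reads size_mb and writes size_mb, so total traffic is 2x the size.
    return (2 * size_mb / 1024) / best

print(f"~{copy_bandwidth_gbs():.1f} GB/s host copy bandwidth")
```

GPU VRAM bandwidth measured by cl-mem is typically an order of magnitude higher than host memory copy rates.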
VkResample
VkResample is a Vulkan-based image upscaling library based on VkFFT. The sample workload upscales a 4K image to 8K using Vulkan-based GPU acceleration. Learn more via the OpenBenchmarking.org test page.
OctaneBench
OctaneBench is a benchmark of OctaneRender on the GPU and requires NVIDIA CUDA. Learn more via the OpenBenchmarking.org test page.
FAHBench
FAHBench is the official Folding@Home GPU benchmark. Learn more via the OpenBenchmarking.org test page.
LeelaChessZero
LeelaChessZero (lc0 / lczero) is a chess engine powered by neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.
Rodinia
Rodinia is a benchmark suite focused on accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile currently utilizes select OpenCL, NVIDIA CUDA, and OpenMP test binaries. Learn more via the OpenBenchmarking.org test page.
LuxCoreRender
LuxCoreRender is an open-source 3D physically based renderer formerly known as LuxRender. LuxCoreRender supports CPU-based rendering as well as GPU acceleration via OpenCL, NVIDIA CUDA, and NVIDIA OptiX interfaces. Learn more via the OpenBenchmarking.org test page.
FinanceBench
FinanceBench is a collection of financial program benchmarks with support for benchmarking on the GPU via OpenCL and on the CPU with OpenMP. The FinanceBench test cases cover the Black-Scholes-Merton process with an analytic European option engine, the QMC (Sobol) Monte Carlo method (equity option example), a fixed-rate bond with a flat forward curve, and a repo securities repurchase agreement. FinanceBench was originally written by the Cavazos Lab at the University of Delaware. Learn more via the OpenBenchmarking.org test page.
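The analytic Black-Scholes-Merton pricing at the heart of the first FinanceBench test case can be written compactly. A minimal sketch of the closed-form European call price; the example parameters are illustrative, not taken from the benchmark's inputs:

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    """Standard normal cumulative distribution function via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Analytic Black-Scholes-Merton price of a European call option.

    S: spot price, K: strike, T: years to expiry,
    r: risk-free rate, sigma: volatility.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# At-the-money call, 1 year to expiry, 5% rate, 20% volatility
print(f"{black_scholes_call(100, 100, 1.0, 0.05, 0.20):.2f}")  # -> 10.45
```

The benchmark evaluates this formula over large batches of options, which maps naturally onto GPU parallelism.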
ViennaCL
ViennaCL is an open-source linear algebra library written in C++ and with support for OpenCL and OpenMP. This test profile makes use of ViennaCL's built-in benchmarks. Learn more via the OpenBenchmarking.org test page.
Darktable
Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.
IndigoBench
This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.
JuliaGPU
JuliaGPU is an OpenCL benchmark; this version contains various PTS-specific enhancements. Learn more via the OpenBenchmarking.org test page.
MandelGPU
MandelGPU is an OpenCL benchmark; this test runs the OpenCL float4 rendering kernel with a maximum of 4096 iterations. Learn more via the OpenBenchmarking.org test page.
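The per-pixel work in a Mandelbrot kernel is the classic escape-time iteration, capped here, as in this test, at 4096 iterations. A scalar sketch of what the float4 kernel computes four pixels at a time:

```python
def mandel_iterations(c, max_iter=4096):
    """Iterate z = z*z + c until escape (|z| > 2) or the iteration cap."""
    z = 0j
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return i  # point escapes: outside the Mandelbrot set
    return max_iter   # cap reached: point is treated as inside the set

print(mandel_iterations(1 + 1j))  # escapes almost immediately
print(mandel_iterations(0j))      # in the set: hits the 4096 cap
```

The float4 variant evaluates four such points per work-item with vector arithmetic, which is why it is used as the throughput-oriented kernel.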
SmallPT GPU
SmallPT GPU is an OpenCL path-tracing benchmark run with various PTS-specific changes compared to upstream; multiple rendering scenes are available. Learn more via the OpenBenchmarking.org test page.
clpeak
Clpeak is designed to test the peak capabilities of OpenCL devices. Learn more via the OpenBenchmarking.org test page.
Chaos Group V-RAY
This is a test of Chaos Group's V-RAY benchmark. V-RAY is a commercial renderer that can integrate with various creator software products like SketchUp and 3ds Max. The V-RAY benchmark is standalone and supports CPU and NVIDIA CUDA/RTX based rendering. Learn more via the OpenBenchmarking.org test page.
Meta: Performance Per Watt
GPU Power Consumption Monitor
GPU Temperature Monitor
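The performance-per-Watt metrics are derived by combining a test's score with the GPU power draw sampled by the power consumption monitor during the run. A minimal sketch of that calculation; the score and sample values below are hypothetical:

```python
def performance_per_watt(score, power_samples_w):
    """Derive performance per Watt from a benchmark score and the
    GPU power draw (in Watts) sampled during the run."""
    avg_power = sum(power_samples_w) / len(power_samples_w)
    return score / avg_power

# Hypothetical example: a score of 1200 points at ~200 W average draw
samples = [195.0, 201.5, 203.0, 200.5]
print(f"{performance_per_watt(1200, samples):.2f} points per Watt")  # -> 6.00
```

For tests where lower is better (e.g. seconds to render), the ratio is inverted accordingly so that higher performance-per-Watt always means more efficient.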
RTX 3060
Testing initiated at 10 July 2022 20:12 by user pts.
RTX 3060 Ti
Processor: Intel Core i9-12900K @ 5.20GHz (16 Cores / 24 Threads), Motherboard: ASUS ROG STRIX Z690-E GAMING WIFI (1601 BIOS), Chipset: Intel Alder Lake-S PCH, Memory: 32GB, Disk: 1000GB Western Digital WDS100T1X0E-00AFY0 + 2000GB, Graphics: NVIDIA GeForce RTX 3060 Ti 8GB, Audio: Intel Alder Lake-S HD Audio, Monitor: ASUS VP28U, Network: Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
Testing initiated at 12 July 2022 15:16 by user pts.
RTX 3070
Processor: Intel Core i9-12900K @ 5.20GHz (16 Cores / 24 Threads), Motherboard: ASUS ROG STRIX Z690-E GAMING WIFI (1601 BIOS), Chipset: Intel Alder Lake-S PCH, Memory: 32GB, Disk: 1000GB Western Digital WDS100T1X0E-00AFY0 + 2000GB, Graphics: NVIDIA GeForce RTX 3070 8GB, Audio: Intel Alder Lake-S HD Audio, Monitor: ASUS VP28U, Network: Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
Testing initiated at 12 July 2022 06:25 by user pts.
RTX 3070 Ti
Processor: Intel Core i9-12900K @ 5.20GHz (16 Cores / 24 Threads), Motherboard: ASUS ROG STRIX Z690-E GAMING WIFI (1601 BIOS), Chipset: Intel Alder Lake-S PCH, Memory: 32GB, Disk: 1000GB Western Digital WDS100T1X0E-00AFY0 + 2000GB, Graphics: NVIDIA GeForce RTX 3070 Ti 8GB, Audio: Intel Alder Lake-S HD Audio, Monitor: ASUS VP28U, Network: Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
Testing initiated at 12 July 2022 18:06 by user pts.
RTX 3080
Processor: Intel Core i9-12900K @ 5.20GHz (16 Cores / 24 Threads), Motherboard: ASUS ROG STRIX Z690-E GAMING WIFI (1601 BIOS), Chipset: Intel Alder Lake-S PCH, Memory: 32GB, Disk: 1000GB Western Digital WDS100T1X0E-00AFY0 + 2000GB, Graphics: NVIDIA GeForce RTX 3080 10GB, Audio: Intel Alder Lake-S HD Audio, Monitor: ASUS VP28U, Network: Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
Testing initiated at 12 July 2022 08:57 by user pts.
RTX 3080 Ti
Processor: Intel Core i9-12900K @ 5.20GHz (16 Cores / 24 Threads), Motherboard: ASUS ROG STRIX Z690-E GAMING WIFI (1601 BIOS), Chipset: Intel Alder Lake-S PCH, Memory: 32GB, Disk: 1000GB Western Digital WDS100T1X0E-00AFY0 + 2000GB, Graphics: NVIDIA GeForce RTX 3080 Ti 12GB, Audio: Intel Alder Lake-S HD Audio, Monitor: ASUS VP28U, Network: Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
Testing initiated at 12 July 2022 12:04 by user pts.
RTX 3090
Processor: Intel Core i9-12900K @ 5.20GHz (16 Cores / 24 Threads), Motherboard: ASUS ROG STRIX Z690-E GAMING WIFI (1601 BIOS), Chipset: Intel Device 7aa7, Memory: 32GB, Disk: 1000GB Western Digital WDS100T1X0E-00AFY0 + 2000GB, Graphics: NVIDIA GeForce RTX 3090 24GB, Audio: Intel Device 7ad0, Monitor: ASUS VP28U, Network: Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
Testing initiated at 8 July 2022 20:30 by user pts.