NVIDIA GeForce RTX 4090/4080 GPU Linux Compute
GPU check

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command:

    phoronix-test-suite benchmark 2303308-NE-2302139PT36

HTML result view exported from: https://openbenchmarking.org/result/2303308-NE-2302139PT36&gru&rdt
Test Systems

Common configuration for the RTX 2080 Ti, RTX 2080 SUPER, TITAN RTX, RTX 4090, RTX 3070, RTX 4080, RTX 3070 Ti, RTX 3080, RTX 3080 Ti, and RTX 3090 runs:
  Processor: AMD Ryzen 9 7950X 16-Core @ 4.50GHz (16 Cores / 32 Threads)
  Motherboard: ASUS ROG CROSSHAIR X670E HERO (0805 BIOS)
  Chipset: AMD Device 14d8
  Memory: 32GB
  Disk: Western Digital WD_BLACK SN850X 1000GB + 2000GB
  Monitor: ASUS MG28U
  Network: Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
  OS: Ubuntu 22.10
  Kernel: 6.2.0-060200rc7daily20230206-generic (x86_64)
  Desktop: GNOME Shell 43.1
  Display Server: X Server 1.21.1.4
  Display Driver: NVIDIA 525.89.02
  OpenGL: 4.6.0
  OpenCL: OpenCL 3.0 CUDA 12.0.147
  Vulkan: 1.3.224
  Compiler: GCC 12.2.0 + Clang 15.0.6
  File-System: ext4
  Screen Resolution: 3840x2160

Graphics / Audio per run:
  RTX 2080 Ti: NVIDIA GeForce RTX 2080 Ti 11GB, NVIDIA TU102 HD Audio
  RTX 2080 SUPER: NVIDIA GeForce RTX 2080 SUPER 8GB, NVIDIA TU104 HD Audio
  TITAN RTX: NVIDIA TITAN RTX 24GB, NVIDIA TU102 HD Audio
  RTX 4090: NVIDIA GeForce RTX 4090 24GB, NVIDIA Device 22ba
  RTX 3070: NVIDIA GeForce RTX 3070 8GB, NVIDIA GA104 HD Audio
  RTX 4080: NVIDIA GeForce RTX 4080 16GB, NVIDIA Device 22bb
  RTX 3070 Ti: NVIDIA GeForce RTX 3070 Ti 8GB, NVIDIA GA104 HD Audio
  RTX 3080: NVIDIA GeForce RTX 3080 10GB, NVIDIA GA102 HD Audio
  RTX 3080 Ti: NVIDIA GeForce RTX 3080 Ti 12GB
  RTX 3090: NVIDIA GeForce RTX 3090 24GB

"rtx3090" system (separate host):
  Processor: AMD Ryzen Threadripper 3990X 64-Core @ 2.90GHz (64 Cores / 128 Threads)
  Motherboard: Gigabyte TRX40 DESIGNARE (F5 BIOS)
  Chipset: AMD Starship/Matisse
  Memory: 256GB
  Disk: 2048GB ADATA SX8200PNP + 3 x 2048GB SPCC M.2 PCIe SSD + 5 x 14001GB Western Digital WUH721414AL
  Graphics: NVIDIA GeForce RTX 3090 24GB
  Audio: NVIDIA Device 1aef
  Network: 2 x Intel I210 + Intel Wi-Fi 6 AX200
  OS: Ubuntu 20.04
  Kernel: 5.15.0-67-generic (x86_64)
  Display Server: X Server 1.20.11
  Display Driver: NVIDIA
  OpenCL: OpenCL 3.0 CUDA 11.6.134
  Vulkan: 1.3.194
  Compiler: GCC 9.4.0 + CUDA 11.6
  File-System: btrfs
  Screen Resolution: 1024x768

OpenBenchmarking.org

Kernel Details - Transparent Huge Pages: madvise

Compiler Details - All GCC 12.2.0 runs (RTX 2080 Ti through RTX 3090) report identical configure flags: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

rtx3090 (GCC 9.4.0): --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-Av3uEd/gcc-9-9.4.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details - All Ryzen 9 7950X runs: Scaling Governor: acpi-cpufreq performance (Boost: Enabled), CPU Microcode: 0xa601203. rtx3090: Scaling Governor: acpi-cpufreq performance (Boost: Enabled), CPU Microcode: 0x8301039.

Graphics Details (BAR1 / Visible vRAM Size, vBIOS Version):
  RTX 2080 Ti:    256 MiB,   90.02.0b.00.0e
  RTX 2080 SUPER: 256 MiB,   90.04.79.00.01
  TITAN RTX:      256 MiB,   90.02.23.00.01
  RTX 4090:       32768 MiB, 95.02.20.00.01
  RTX 3070:       8192 MiB,  94.04.25.00.2b
  RTX 4080:       16384 MiB, 95.03.0e.00.04
  RTX 3070 Ti:    8192 MiB,  94.04.5b.00.02
  RTX 3080:       16384 MiB, 94.02.20.00.07
  RTX 3080 Ti:    16384 MiB, 94.02.71.00.01
  RTX 3090:       32768 MiB, 94.02.27.00.02
  rtx3090:        256 MiB

OpenCL Details (GPU Compute Cores):
  RTX 2080 Ti: 4352
  RTX 2080 SUPER: 3072
  TITAN RTX: 4608
  RTX 4090: 16384
  RTX 3070: 5888
  RTX 4080: 9728
  RTX 3070 Ti: 6144
  RTX 3080: 8704
  RTX 3080 Ti: 10240
  RTX 3090: 10496

Security Details - All Ryzen 9 7950X runs report identical mitigations: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected. The rtx3090 system differs only in: retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected.
Results Overview

Columns, in order: RTX 2080 Ti | RTX 2080 SUPER | TITAN RTX | RTX 4090 | RTX 3070 | RTX 4080 | RTX 3070 Ti | RTX 3080 | RTX 3080 Ti | RTX 3090

shoc: OpenCL - Texture Read Bandwidth (GB/s): 1174.94 | 1175.55 | 1158.90 | 3020.31 | 2137.73 | 2972.54 | 2068.22 | 2201.46 | 2158.34 | 2228.16
shoc: OpenCL - Reduction (GB/s): 380.027 | 342.279 | 370.277 | 1012.09 | 329.097 | 612.647 | 383.875 | 393.895 | 390.102 | 396.215
shoc: OpenCL - S3D (GFLOPS): 271.616 | 212.552 | 288.950 | 648.558 | 219.699 | 426.939 | 286.352 | 340.549 | 418.480 | 429.204
clpeak: Single-Precision Float (GFLOPS): 13451.55 | 10385.51 | 14568.40 | 81720.00 | 20086.22 | 48185.06 | 21506.02 | 29451.31 | 33745.99 | 35152.14
clpeak: Double-Precision Double (GFLOPS): 518.57 | 375.45 | 544.61 | 1417.46 | 360.78 | 849.65 | 369.87 | 543.71 | 620.87 | 656.03
shoc: OpenCL - FFT SP (GFLOPS): 1474.31 | 1196.14 | 1560.81 | 2794.91 | 1139.30 | 1816.02 | 1292.32 | 1763.04 | 2016.91 | 2103.35
shoc: OpenCL - GEMM SGEMM_N (GFLOPS): 4802.22 | 3700.72 | 5050.99 | 28304.9 | 3648.34 | 16157.5 | 4717.35 | 6454.05 | 8184.24 | 8248.12
shoc: OpenCL - MD5 Hash (GHash/s): 34.7379 | 26.3972 | 36.5192 | 93.1202 | 25.5159 | 59.3038 | 26.0996 | 37.4672 | 41.8086 | 44.1183
clpeak: Integer Compute INT (GIOPS): 12958.38 | 10370.08 | 13843.96 | 41543.66 | 10240.62 | 24329.78 | 10912.89 | 15456.53 | 17294.44 | 18108.44
hashcat: MD5 (H/s): 55854416667 | 43272383333 | 58502883333 | 156128571429 | 41525483333 | 93723200000 | 42837716667 | 60271216667 | 67778783333 | 71362683333
hashcat: SHA1 (H/s): 17634283333 | 13571266667 | 18463666667 | 49971842857 | 13031483333 | 29871466667 | 13261466667 | 19002350000 | 21411783333 | 22566533333
hashcat: SHA-512 (H/s): 2215683333 | 1719683333 | 2317200000 | 6355000000 | 1641566667 | 3804083333 | 1676050000 | 2386133333 | 2684816667 | 2824266667
hashcat: 7-Zip (H/s): 890525 | 716656 | 940213 | 2747311 | 689100 | 1693263 | 729067 | 1002178 | 1115444 | 1160144
hashcat: TrueCrypt RIPEMD160 + XTS (H/s): 635433 | 503488 | 627347 | 1879978 | 490700 | 1115175 | 503813 | 701625 | 796125 | 820300
indigobench: OpenCL GPU - Bedroom: 11.585 | 8.407 | 12.303 | 33.395 | 11.414 | 24.026 | 12.196 | 15.682 | 18.093 | 18.723
indigobench: OpenCL GPU - Supercar: 33.837 | 26.172 | 35.741 | 76.758 | 34.364 | 62.775 | 35.836 | 43.178 | 47.784 | 49.443
luxcorerender: DLSC - GPU: 5.60 | 4.33 | 5.78 | 21.54 | 6.67 | 14.94 | 6.99 | 9.49 | 10.94 | 11.41
luxcorerender: Rainbow Colors and Prism - GPU: 15.10 | 11.55 | 16.56 | 40.51 | 20.98 | 33.42 | 21.13 | 26.53 | 31.67 | 31.44
luxcorerender: LuxCore Benchmark - GPU: 5.97 | 4.46 | 6.26 | 19.13 | 6.63 | 14.55 | 6.93 | 9.58 | 11.07 | 11.47
luxcorerender: Orange Juice - GPU: 5.95 | 4.90 | 6.21 | 16.94 | 7.01 | 12.60 | 7.30 | 9.23 | 10.31 | 10.66
luxcorerender: Danish Mood - GPU: 4.80 | 3.76 | 5.03 | 18.22 | 5.52 | 12.57 | 5.77 | 7.80 | 9.06 | 9.43
fluidx3d: FP32-FP32: 3111 | 2519 | 3394 | 5767 | 2619 | 3855 | 3532 | 4362 | 5240 | 5386
fluidx3d: FP32-FP16S: 6621 | 5386 | 7095 | 11240 | 5198 | 7876 | 6803 | 8507 | 10371 | 10678
fluidx3d: FP32-FP16C: 6578 | 5213 | 6937 | 11601 | 5159 | 7930 | 5977 | 8005 | 9352 | 9811
lczero: OpenCL: 35017 | 25293 | 37600 | 79274 | 36089 | 76214 | 42436 | 53248 | 58392 | 61044
fahbench: 307.6220 | 266.6661 | 308.7785 | 444.1148 | 257.8758 | 418.0831 | 258.3843 | 310.0926 | 319.1114 | 331.4735
octanebench: Total Score: 358.03091 | 273.762077 | 379.938611 | 1335.761909 | 408.686649 | 961.612012 | 449.586225 | 559.22431 | 659.898178 | 674.086517
v-ray: NVIDIA CUDA GPU: 982 | 824 | 987 | 4330 | 1411 | 3065 | 1493 | 1769 | 2037 | 2103
v-ray: NVIDIA RTX GPU: 1370 | 1000 | 1427 | 5793 | 1820 | 4204 | 2009 | 2426 | 2894 | 2988
namd-cuda: ATPase Simulation - 327,506 Atoms (less is better): 0.09403 | 0.11231 | 0.09147 | 0.06811 | 0.09655 | 0.07079 | 0.09540 | 0.07909 | 0.07847 | 0.07851
blender: BMW27 - NVIDIA CUDA (Seconds, less is better): 14.75 | 19.32 | 14.07 | 5.39 | 16.75 | 7.42 | 15.72 | 11.82 | 10.35 | 9.98
blender: BMW27 - NVIDIA OptiX (Seconds, less is better): 8.54 | 10.86 | 8.08 | 3.35 | 8.86 | 4.16 | 8.12 | 6.62 | 5.78 | 5.66
blender: Classroom - NVIDIA CUDA (Seconds, less is better): 32.24 | 42.82 | 30.42 | 9.71 | 34.65 | 13.82 | 32.43 | 23.99 | 20.75 | 19.93
blender: Classroom - NVIDIA OptiX (Seconds, less is better): 22.30 | 29.10 | 21.08 | 7.04 | 22.99 | 9.13 | 21.12 | 16.32 | 14.22 | 13.77
blender: Fishy Cat - NVIDIA CUDA (Seconds, less is better): 30.83 | 41.00 | 29.39 | 11.68 | 35.70 | 16.46 | 34.27 | 25.25 | 22.29 | 21.37
blender: Fishy Cat - NVIDIA OptiX (Seconds, less is better): 15.90 | 20.75 | 14.99 | 5.24 | 16.68 | 7.33 | 15.77 | 11.84 | 10.38 | 9.98
blender: Pabellon Barcelona - NVIDIA CUDA (Seconds, less is better): 79.77 | 112.12 | 74.82 | 20.51 | 87.16 | 31.55 | 81.99 | 59.35 | 50.72 | 48.61
blender: Pabellon Barcelona - NVIDIA OptiX (Seconds, less is better): 26.34 | 35.32 | 24.49 | 7.78 | 24.87 | 9.82 | 22.88 | 18.25 | 15.62 | 15.21
blender: Barbershop - NVIDIA CUDA (Seconds, less is better): 139.50 | 186.19 | 130.62 | 44.81 | 146.10 | 61.80 | 136.94 | 101.34 | 87.99 | 84.83
blender: Barbershop - NVIDIA OptiX (Seconds, less is better): 89.90 | 116.06 | 84.78 | 29.52 | 82.53 | 37.59 | 77.27 | 59.95 | 52.56 | 50.74
clpeak: Kernel Latency (less is better): 4.47 | 4.19 | 4.43 | 3.73 | 4.02 | 3.68 | 3.97 | 4.02 | 4.08 | 4.13

The rtx3090 run covered a subset of the tests: clpeak Single-Precision Float 35317.79, Double-Precision Double 646.88, Integer Compute INT 18081.93; hashcat MD5 99587506250, SHA1 42112533333, SHA-512 6067866667, 7-Zip 2145533, TrueCrypt 1580467; fluidx3d 5405 / 10543 / 9289; lczero 19898; fahbench 332.7129; clpeak Kernel Latency 6.12. Five further values in the export (27.93, 57.43, 21.68, 22.94, 19.44) fall in the IndigoBench / LuxCoreRender group but cannot be unambiguously assigned to individual tests.
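The hashcat results above are raw hash rates, which from their magnitude are plain H/s (e.g. the RTX 4090's MD5 figure is roughly 156 GH/s). A small helper, not part of the Phoronix Test Suite, makes such values easier to read:

```python
def human_hashrate(hs: float) -> str:
    """Format a raw hash rate in H/s with an SI suffix (factor 1000)."""
    for unit in ("H/s", "kH/s", "MH/s", "GH/s", "TH/s"):
        if hs < 1000:
            return f"{hs:.2f} {unit}"
        hs /= 1000
    return f"{hs:.2f} PH/s"

# RTX 4090 MD5 result from the table above
print(human_hashrate(156128571429))  # 156.13 GH/s
```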
GPU Temperature Monitor - Phoronix Test Suite System Monitoring (Celsius)
  RTX 2080 Ti:    Min: 34 / Avg: 67.84 / Max: 84
  RTX 2080 SUPER: Min: 29 / Avg: 60.65 / Max: 77
  TITAN RTX:      Min: 37 / Avg: 68.17 / Max: 83
  RTX 4090:       Min: 31 / Avg: 47.84 / Max: 66
  RTX 3070:       Min: 25 / Avg: 59.42 / Max: 75
  RTX 4080:       Min: 23 / Avg: 45.77 / Max: 61
  RTX 3070 Ti:    Min: 29 / Avg: 66.31 / Max: 80
  RTX 3080:       Min: 25 / Avg: 66.91 / Max: 80
  RTX 3080 Ti:    Min: 28 / Avg: 67.74 / Max: 80
  RTX 3090:       Min: 24 / Avg: 61.23 / Max: 71
GPU Power Consumption Monitor - Phoronix Test Suite System Monitoring (Watts)
  RTX 2080 Ti:    Min: 6.2 / Avg: 163.81 / Max: 270.77
  RTX 2080 SUPER: Min: 9.54 / Avg: 128.61 / Max: 255.9
  TITAN RTX:      Min: 8.25 / Avg: 179.55 / Max: 358.29
  RTX 4090:       Min: 8.53 / Avg: 153.59 / Max: 447.97
  RTX 3070:       Min: 9.55 / Avg: 124.92 / Max: 219.88
  RTX 4080:       Min: 8.71 / Avg: 129.34 / Max: 305.29
  RTX 3070 Ti:    Min: 13.25 / Avg: 169.39 / Max: 290.91
  RTX 3080:       Min: 11.12 / Avg: 209.84 / Max: 321.68
  RTX 3080 Ti:    Min: 19.24 / Avg: 235.21 / Max: 353.12
  RTX 3090:       Min: 11.8 / Avg: 236.01 / Max: 350.22
SHOC Scalable HeterOgeneous Computing 2020-04-17 - Target: OpenCL - Benchmark: Texture Read Bandwidth (GB/s, more is better)
  RTX 2080 Ti:    1174.94 (SE +/- 3.30, N = 3)
  RTX 2080 SUPER: 1175.55 (SE +/- 3.21, N = 3)
  TITAN RTX:      1158.90 (SE +/- 1.25, N = 3)
  RTX 4090:       3020.31 (SE +/- 2.83, N = 6)
  RTX 3070:       2137.73 (SE +/- 5.99, N = 4)
  RTX 4080:       2972.54 (SE +/- 1.64, N = 6)
  RTX 3070 Ti:    2068.22 (SE +/- 3.72, N = 4)
  RTX 3080:       2201.46 (SE +/- 5.24, N = 3)
  RTX 3080 Ti:    2158.34 (SE +/- 6.11, N = 3)
  RTX 3090:       2228.16 (SE +/- 1.47, N = 3)
  1. (CXX) g++ options: -O2 -lSHOCCommonMPI -lSHOCCommonOpenCL -lSHOCCommon -lOpenCL -lrt -lmpi_cxx -lmpi
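Each bar in these charts carries the standard error of the mean over N runs: SE = s / sqrt(N), where s is the sample standard deviation across runs. A minimal sketch of that calculation (the sample values are hypothetical, chosen to sit near the RTX 2080 Ti result above):

```python
import statistics as st
from math import sqrt

def standard_error(samples: list[float]) -> float:
    """Standard error of the mean: sample stdev divided by sqrt(N)."""
    return st.stdev(samples) / sqrt(len(samples))

# three hypothetical runs near the RTX 2080 Ti texture-read result
runs = [1171.6, 1174.9, 1178.3]
print(f"mean {st.mean(runs):.2f}, SE +/- {standard_error(runs):.2f}")
```

A small SE relative to the mean (here a fraction of a percent) is why three runs suffice for most of these tests, while noisier tests were re-run up to 15 times.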
SHOC Scalable HeterOgeneous Computing 2020-04-17 - Target: OpenCL - Benchmark: Reduction (GB/s, more is better)
  RTX 2080 Ti:    380.03 (SE +/- 0.08, N = 13)
  RTX 2080 SUPER: 342.28 (SE +/- 0.09, N = 12)
  TITAN RTX:      370.28 (SE +/- 0.15, N = 13)
  RTX 4090:       1012.09 (SE +/- 3.20, N = 15)
  RTX 3070:       329.10 (SE +/- 0.28, N = 12)
  RTX 4080:       612.65 (SE +/- 0.19, N = 13)
  RTX 3070 Ti:    383.88 (SE +/- 0.11, N = 13)
  RTX 3080:       393.90 (SE +/- 0.07, N = 13)
  RTX 3080 Ti:    390.10 (SE +/- 0.04, N = 13)
  RTX 3090:       396.22 (SE +/- 0.23, N = 13)
  1. (CXX) g++ options: -O2 -lSHOCCommonMPI -lSHOCCommonOpenCL -lSHOCCommon -lOpenCL -lrt -lmpi_cxx -lmpi
SHOC Scalable HeterOgeneous Computing 2020-04-17 - Target: OpenCL - Benchmark: Texture Read Bandwidth (GB/s Per Watt, more is better)
  RTX 2080 Ti:    7.972
  RTX 2080 SUPER: 11.359
  TITAN RTX:      7.592
  RTX 4090:       25.992
  RTX 3070:       19.770
  RTX 4080:       33.637
  RTX 3070 Ti:    14.223
  RTX 3080:       11.574
  RTX 3080 Ti:    9.523
  RTX 3090:       9.712
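These per-Watt figures divide each throughput result by the GPU power drawn while that test ran. A rough sketch of the idea using the overall average power from the GPU Power Consumption Monitor above (an assumption for illustration: the chart itself uses power sampled during the specific test, so the exact numbers differ):

```python
# Throughput (GB/s) from the Texture Read Bandwidth results and
# overall average power (W) from the power-consumption monitor.
texture_gbps = {"RTX 4090": 3020.31, "RTX 4080": 2972.54, "RTX 2080 Ti": 1174.94}
avg_watts = {"RTX 4090": 153.59, "RTX 4080": 129.34, "RTX 2080 Ti": 163.81}

for gpu, gbps in texture_gbps.items():
    # Approximate efficiency; not identical to the per-test chart values.
    print(f"{gpu}: {gbps / avg_watts[gpu]:.2f} GB/s per Watt (approx.)")
```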
SHOC Scalable HeterOgeneous Computing 2020-04-17 - Target: OpenCL - Benchmark: Reduction (GB/s Per Watt, more is better)
  RTX 2080 Ti:    5.571
  RTX 2080 SUPER: 6.362
  TITAN RTX:      5.105
  RTX 4090:       24.883
  RTX 3070:       6.438
  RTX 4080:       13.613
  RTX 3070 Ti:    4.657
  RTX 3080:       4.189
  RTX 3080 Ti:    3.382
  RTX 3090:       3.623
SHOC Scalable HeterOgeneous Computing 2020-04-17 - Target: OpenCL - Benchmark: S3D (GFLOPS, more is better)
  RTX 2080 Ti:    271.62 (SE +/- 0.13, N = 13)
  RTX 2080 SUPER: 212.55 (SE +/- 0.12, N = 13)
  TITAN RTX:      288.95 (SE +/- 0.08, N = 13)
  RTX 4090:       648.56 (SE +/- 0.14, N = 13)
  RTX 3070:       219.70 (SE +/- 0.09, N = 13)
  RTX 4080:       426.94 (SE +/- 0.23, N = 14)
  RTX 3070 Ti:    286.35 (SE +/- 0.06, N = 13)
  RTX 3080:       340.55 (SE +/- 0.05, N = 14)
  RTX 3080 Ti:    418.48 (SE +/- 0.11, N = 14)
  RTX 3090:       429.20 (SE +/- 0.36, N = 14)
  1. (CXX) g++ options: -O2 -lSHOCCommonMPI -lSHOCCommonOpenCL -lSHOCCommon -lOpenCL -lrt -lmpi_cxx -lmpi
clpeak - OpenCL Test: Single-Precision Float (GFLOPS Per Watt, more is better)
  RTX 2080 Ti:    162.97
  RTX 2080 SUPER: 190.64
  TITAN RTX:      165.44
  RTX 4090:       1192.48
  RTX 3070:       434.77
  RTX 4080:       973.07
  RTX 3070 Ti:    302.61
  RTX 3080:       325.73
  RTX 3080 Ti:    327.81
  RTX 3090:       358.41
clpeak - OpenCL Test: Single-Precision Float (GFLOPS, more is better)
  RTX 2080 Ti:    13451.55 (SE +/- 183.59, N = 15)
  RTX 2080 SUPER: 10385.51 (SE +/- 70.04, N = 15)
  TITAN RTX:      14568.40 (SE +/- 168.44, N = 15)
  RTX 4090:       81720.00 (SE +/- 6.84, N = 15)
  RTX 3070:       20086.22 (SE +/- 3.16, N = 14)
  RTX 4080:       48185.06 (SE +/- 23.64, N = 15)
  RTX 3070 Ti:    21506.02 (SE +/- 10.70, N = 15)
  RTX 3080:       29451.31 (SE +/- 8.73, N = 13)
  RTX 3080 Ti:    33745.99 (SE +/- 24.64, N = 15)
  RTX 3090:       35152.14 (SE +/- 29.26, N = 14)
  rtx3090:        35317.79 (SE +/- 90.58, N = 3)
  1. (CXX) g++ options: -O3 -rdynamic -lOpenCL
SHOC Scalable HeterOgeneous Computing 2020-04-17 - Target: OpenCL - Benchmark: FFT SP (GFLOPS Per Watt, more is better)
  RTX 2080 Ti:    27.12
  RTX 2080 SUPER: 27.34
  TITAN RTX:      27.30
  RTX 4090:       65.23
  RTX 3070:       29.49
  RTX 4080:       49.65
  RTX 3070 Ti:    19.34
  RTX 3080:       23.17
  RTX 3080 Ti:    22.08
  RTX 3090:       24.52
clpeak - OpenCL Test: Double-Precision Double (GFLOPS, more is better)
  RTX 2080 Ti:    518.57 (SE +/- 1.27, N = 3)
  RTX 2080 SUPER: 375.45 (SE +/- 0.98, N = 3)
  TITAN RTX:      544.61 (SE +/- 1.34, N = 3)
  RTX 4090:       1417.46 (SE +/- 0.06, N = 3)
  RTX 3070:       360.78 (SE +/- 0.01, N = 3)
  RTX 4080:       849.65 (SE +/- 0.16, N = 3)
  RTX 3070 Ti:    369.87 (SE +/- 0.70, N = 3)
  RTX 3080:       543.71 (SE +/- 0.87, N = 3)
  RTX 3080 Ti:    620.87 (SE +/- 1.58, N = 3)
  RTX 3090:       656.03 (SE +/- 1.63, N = 3)
  rtx3090:        646.88 (SE +/- 1.75, N = 3)
  1. (CXX) g++ options: -O3 -rdynamic -lOpenCL
clpeak - OpenCL Test: Double-Precision Double (GFLOPS Per Watt, more is better)
  RTX 2080 Ti:    4.105
  RTX 2080 SUPER: 4.228
  TITAN RTX:      3.937
  RTX 4090:       12.939
  RTX 3070:       4.830
  RTX 4080:       10.877
  RTX 3070 Ti:    3.361
  RTX 3080:       3.651
  RTX 3080 Ti:    3.627
  RTX 3090:       3.882
SHOC Scalable HeterOgeneous Computing 2020-04-17 - Target: OpenCL - Benchmark: FFT SP (GFLOPS, more is better)
  RTX 2080 Ti:    1474.31 (SE +/- 0.74, N = 13)
  RTX 2080 SUPER: 1196.14 (SE +/- 0.66, N = 13)
  TITAN RTX:      1560.81 (SE +/- 1.10, N = 13)
  RTX 4090:       2794.91 (SE +/- 1.30, N = 13)
  RTX 3070:       1139.30 (SE +/- 0.25, N = 13)
  RTX 4080:       1816.02 (SE +/- 1.29, N = 13)
  RTX 3070 Ti:    1292.32 (SE +/- 0.04, N = 13)
  RTX 3080:       1763.04 (SE +/- 0.13, N = 13)
  RTX 3080 Ti:    2016.91 (SE +/- 0.15, N = 13)
  RTX 3090:       2103.35 (SE +/- 0.18, N = 13)
  1. (CXX) g++ options: -O2 -lSHOCCommonMPI -lSHOCCommonOpenCL -lSHOCCommon -lOpenCL -lrt -lmpi_cxx -lmpi
SHOC Scalable HeterOgeneous Computing 2020-04-17 - Target: OpenCL - Benchmark: GEMM SGEMM_N (GFLOPS, more is better)
  RTX 2080 Ti:    4802.22 (SE +/- 4.89, N = 10)
  RTX 2080 SUPER: 3700.72 (SE +/- 1.15, N = 10)
  TITAN RTX:      5050.99 (SE +/- 26.94, N = 15)
  RTX 4090:       28304.90 (SE +/- 123.24, N = 15)
  RTX 3070:       3648.34 (SE +/- 12.30, N = 10)
  RTX 4080:       16157.50 (SE +/- 171.04, N = 15)
  RTX 3070 Ti:    4717.35 (SE +/- 4.57, N = 11)
  RTX 3080:       6454.05 (SE +/- 20.26, N = 11)
  RTX 3080 Ti:    8184.24 (SE +/- 23.31, N = 12)
  RTX 3090:       8248.12 (SE +/- 9.21, N = 12)
  1. (CXX) g++ options: -O2 -lSHOCCommonMPI -lSHOCCommonOpenCL -lSHOCCommon -lOpenCL -lrt -lmpi_cxx -lmpi
SHOC Scalable HeterOgeneous Computing Target: OpenCL - Benchmark: GEMM SGEMM_N OpenBenchmarking.org GFLOPS Per Watt, More Is Better SHOC Scalable HeterOgeneous Computing 2020-04-17 Target: OpenCL - Benchmark: GEMM SGEMM_N RTX 2080 Ti RTX 2080 SUPER TITAN RTX RTX 4090 RTX 3070 RTX 4080 RTX 3070 Ti RTX 3080 RTX 3080 Ti RTX 3090 110 220 330 440 550 59.65 47.81 54.49 521.33 53.02 307.32 47.45 59.56 63.08 66.20
SHOC Scalable HeterOgeneous Computing 2020-04-17 — Target: OpenCL, Benchmark: S3D (GFLOPS per Watt, more is better)

  RTX 2080 Ti       6.497
  RTX 2080 SUPER    5.715
  TITAN RTX         6.563
  RTX 4090         17.953
  RTX 3070          6.664
  RTX 4080         14.239
  RTX 3070 Ti       5.161
  RTX 3080          5.658
  RTX 3080 Ti       5.540
  RTX 3090          6.238
SHOC Scalable HeterOgeneous Computing 2020-04-17 — Target: OpenCL, Benchmark: MD5 Hash (GHash/s; GHash/s per Watt, more is better)

  GPU               GHash/s   SE                   GHash/s per Watt
  RTX 2080 Ti       34.74     +/- 0.03 (N = 14)    0.575
  RTX 2080 SUPER    26.40     +/- 0.02 (N = 13)    0.456
  TITAN RTX         36.52     +/- 0.04 (N = 14)    0.517
  RTX 4090          93.12     +/- 0.94 (N = 15)    1.993
  RTX 3070          25.52     +/- 0.01 (N = 13)    0.477
  RTX 4080          59.30     +/- 0.01 (N = 15)    1.311
  RTX 3070 Ti       26.10     +/- 0.02 (N = 14)    0.326
  RTX 3080          37.47     +/- 0.02 (N = 15)    0.444
  RTX 3080 Ti       41.81     +/- 0.04 (N = 14)    0.425
  RTX 3090          44.12     +/- 0.00 (N = 14)    0.476

  1. (CXX) g++ options: -O2 -lSHOCCommonMPI -lSHOCCommonOpenCL -lSHOCCommon -lOpenCL -lrt -lmpi_cxx -lmpi
clpeak — OpenCL Test: Integer Compute INT (GIOPS; GIOPS per Watt, more is better)

  GPU               GIOPS      SE                     GIOPS/Watt
  RTX 2080 Ti      12958.38    +/- 114.69 (N = 15)    124.95
  RTX 2080 SUPER   10370.08    +/- 81.62 (N = 15)     151.25
  TITAN RTX        13843.96    +/- 147.60 (N = 15)    134.23
  RTX 4090         41543.66    +/- 173.91 (N = 13)    391.86
  RTX 3070         10240.62    +/- 57.96 (N = 13)     163.64
  RTX 4080         24329.78    +/- 60.93 (N = 13)     348.46
  RTX 3070 Ti      10912.89    +/- 37.91 (N = 13)     123.46
  RTX 3080         15456.53    +/- 103.33 (N = 15)    124.21
  RTX 3080 Ti      17294.44    +/- 110.25 (N = 12)    127.79
  RTX 3090         18108.44    +/- 123.02 (N = 12)    137.31
  rtx3090          18081.93    +/- 192.00 (N = 3)     n/a

  1. (CXX) g++ options: -O3 -rdynamic -lOpenCL
Hashcat 6.2.4 — Benchmark: MD5 (H/s; H/s per Watt, more is better)

  GPU               H/s             SE                            H/s per Watt
  RTX 2080 Ti        55854416667    +/- 87506009.00 (N = 6)       415863841.73
  RTX 2080 SUPER     43272383333    +/- 57372010.69 (N = 6)       334054816.50
  TITAN RTX          58502883333    +/- 90205171.38 (N = 6)       394400915.26
  RTX 4090          156128571429    +/- 174184201.51 (N = 7)      747547216.75
  RTX 3070           41525483333    +/- 34079293.97 (N = 6)       370299835.98
  RTX 4080           93723200000    +/- 22189456.96 (N = 6)       673743982.45
  RTX 3070 Ti        42837716667    +/- 65387911.30 (N = 6)       279437160.25
  RTX 3080           60271216667    +/- 127337788.10 (N = 6)      340726478.35
  RTX 3080 Ti        67778783333    +/- 76439755.87 (N = 6)       337806234.21
  RTX 3090           71362683333    +/- 71473922.59 (N = 6)       369720943.61
  rtx3090            99587506250    +/- 8485707257.57 (N = 16)    n/a

Hashcat 6.2.4 — Benchmark: SHA1 (H/s; H/s per Watt, more is better)

  GPU               H/s             SE                            H/s per Watt
  RTX 2080 Ti        17634283333    +/- 28508810.53 (N = 6)       118687287.84
  RTX 2080 SUPER     13571266667    +/- 2530173.47 (N = 6)        99607831.72
  TITAN RTX          18463666667    +/- 34235952.51 (N = 6)       115944921.97
  RTX 4090           49971842857    +/- 5632425.56 (N = 7)        245337711.38
  RTX 3070           13031483333    +/- 16362322.91 (N = 6)       107020025.94
  RTX 4080           29871466667    +/- 11540238.78 (N = 6)       206856415.57
  RTX 3070 Ti        13261466667    +/- 7505316.63 (N = 6)        86176328.27
  RTX 3080           19002350000    +/- 26246622.00 (N = 6)       100855357.44
  RTX 3080 Ti        21411783333    +/- 27127150.20 (N = 6)       101549687.45
  RTX 3090           22566533333    +/- 20798440.11 (N = 6)       110439111.31
  rtx3090            42112533333    +/- 113872228.59 (N = 3)      n/a

Hashcat 6.2.4 — Benchmark: SHA-512 (H/s; H/s per Watt, more is better)

  GPU               H/s             SE                            H/s per Watt
  RTX 2080 Ti         2215683333    +/- 4024377.11 (N = 6)        14671067.73
  RTX 2080 SUPER      1719683333    +/- 969335.40 (N = 6)         12343167.00
  TITAN RTX           2317200000    +/- 3557339.83 (N = 6)        14077100.52
  RTX 4090            6355000000    +/- 5095619.03 (N = 6)        28057376.09
  RTX 3070            1641566667    +/- 1615480.66 (N = 6)        13359379.85
  RTX 4080            3804083333    +/- 353474.81 (N = 6)         25021065.75
  RTX 3070 Ti         1676050000    +/- 2275192.30 (N = 6)        10392933.83
  RTX 3080            2386133333    +/- 825294.56 (N = 6)         12567360.57
  RTX 3080 Ti         2684816667    +/- 3226702.82 (N = 6)        12594221.05
  RTX 3090            2824266667    +/- 2160812.60 (N = 6)        13486146.22
  rtx3090             6067866667    +/- 15974806.55 (N = 3)       n/a

Hashcat 6.2.4 — Benchmark: 7-Zip (H/s; H/s per Watt, more is better)

  GPU               H/s        SE                      H/s per Watt
  RTX 2080 Ti        890525    +/- 618.97 (N = 8)       7881.58
  RTX 2080 SUPER     716656    +/- 625.41 (N = 9)       8182.41
  TITAN RTX          940213    +/- 721.96 (N = 8)       8147.15
  RTX 4090          2747311    +/- 1573.52 (N = 9)     18641.37
  RTX 3070           689100    +/- 814.96 (N = 9)       9076.68
  RTX 4080          1693263    +/- 798.87 (N = 8)      16550.40
  RTX 3070 Ti        729067    +/- 623.61 (N = 9)       6230.53
  RTX 3080          1002178    +/- 691.17 (N = 9)       7556.32
  RTX 3080 Ti       1115444    +/- 593.28 (N = 9)       7116.77
  RTX 3090          1160144    +/- 1016.27 (N = 9)      7540.10
  rtx3090           2145533    +/- 3493.01 (N = 3)      n/a

Hashcat 6.2.4 — Benchmark: TrueCrypt RIPEMD160 + XTS (H/s; H/s per Watt, more is better)

  GPU               H/s        SE                      H/s per Watt
  RTX 2080 Ti        635433    +/- 3242.17 (N = 9)      7348.49
  RTX 2080 SUPER     503488    +/- 1162.88 (N = 8)      5493.78
  TITAN RTX          627347    +/- 5598.71 (N = 15)     6199.78
  RTX 4090          1879978    +/- 4362.24 (N = 9)     12907.79
  RTX 3070           490700    +/- 1619.41 (N = 8)      5647.78
  RTX 4080          1115175    +/- 9554.03 (N = 8)     12259.68
  RTX 3070 Ti        503813    +/- 1425.34 (N = 8)      4101.43
  RTX 3080           701625    +/- 3279.36 (N = 8)      5002.27
  RTX 3080 Ti        796125    +/- 1888.10 (N = 8)      4816.78
  RTX 3090           820300    +/- 5517.68 (N = 7)      5094.18
  rtx3090           1580467    +/- 3556.37 (N = 3)      n/a
IndigoBench 4.4 — Acceleration: OpenCL GPU, Scene: Bedroom (M samples/s; M samples/s per Watt, more is better)

  GPU               M samples/s   SE (N = 3)    Per Watt
  RTX 2080 Ti       11.585        +/- 0.002     0.056
  RTX 2080 SUPER     8.407        +/- 0.001     0.057
  TITAN RTX         12.303        +/- 0.003     0.053
  RTX 4090          33.395        +/- 0.038     0.151
  RTX 3070          11.414        +/- 0.011     0.075
  RTX 4080          24.026        +/- 0.012     0.141
  RTX 3070 Ti       12.196        +/- 0.001     0.062
  RTX 3080          15.682        +/- 0.001     0.060
  RTX 3080 Ti       18.093        +/- 0.002     0.060
  RTX 3090          18.723        +/- 0.008     0.061

IndigoBench 4.4 — Acceleration: OpenCL GPU, Scene: Supercar (M samples/s; M samples/s per Watt, more is better)

  GPU               M samples/s   SE (N = 3)    Per Watt
  RTX 2080 Ti       33.84         +/- 0.02      0.171
  RTX 2080 SUPER    26.17         +/- 0.04      0.182
  TITAN RTX         35.74         +/- 0.03      0.160
  RTX 4090          76.76         +/- 0.04      0.421
  RTX 3070          34.36         +/- 0.04      0.239
  RTX 4080          62.78         +/- 0.01      0.412
  RTX 3070 Ti       35.84         +/- 0.04      0.192
  RTX 3080          43.18         +/- 0.03      0.177
  RTX 3080 Ti       47.78         +/- 0.05      0.170
  RTX 3090          49.44         +/- 0.03      0.177
LuxCoreRender 2.6 — Scene: DLSC, Acceleration: GPU (M samples/sec; M samples/sec per Watt, more is better)

  GPU               Result   SE (N = 3)   Min     Max     Per Watt
  RTX 2080 Ti        5.60    +/- 0.02      5.47    5.77   0.036
  RTX 2080 SUPER     4.33    +/- 0.01      4.09    4.5    0.029
  TITAN RTX          5.78    +/- 0.00      5.29    6      0.025
  RTX 4090          21.54    +/- 0.02     19.77   21.76   0.091
  RTX 3070           6.67    +/- 0.01      6.52    6.85   0.059
  RTX 4080          14.94    +/- 0.01     14.15   15.07   0.084
  RTX 3070 Ti        6.99    +/- 0.00      6.45    7.23   0.036
  RTX 3080           9.49    +/- 0.01      9.3     9.77   0.035
  RTX 3080 Ti       10.94    +/- 0.00     10.51   11.13   0.035
  RTX 3090          11.41    +/- 0.01     10.53   11.84   0.036
  rtx3090           27.93    +/- 0.19     25.54   28.52   n/a

LuxCoreRender 2.6 — Scene: Rainbow Colors and Prism, Acceleration: GPU (M samples/sec; M samples/sec per Watt, more is better)

  GPU               Result   SE                   Min     Max     Per Watt
  RTX 2080 Ti       15.10    +/- 0.11 (N = 4)    14.31   16.35   0.096
  RTX 2080 SUPER    11.55    +/- 0.02 (N = 4)     9.51   12.4    0.101
  TITAN RTX         16.56    +/- 0.11 (N = 5)    14.55   17.77   0.103
  RTX 4090          40.51    +/- 0.35 (N = 7)    36.85   43.48   0.336
  RTX 3070          20.98    +/- 0.06 (N = 5)    19.19   22.07   0.177
  RTX 4080          33.42    +/- 0.09 (N = 6)    30.02   34.92   0.303
  RTX 3070 Ti       21.13    +/- 0.09 (N = 5)    19.76   22.63   0.139
  RTX 3080          26.53    +/- 0.05 (N = 6)    23.68   28.44   0.141
  RTX 3080 Ti       31.67    +/- 0.23 (N = 6)    28.49   34.26   0.139
  RTX 3090          31.44    +/- 0.24 (N = 15)   28.69   34.95   0.138
  rtx3090           57.43    +/- 0.37 (N = 3)    46.94   66.85   n/a

LuxCoreRender 2.6 — Scene: Orange Juice, Acceleration: GPU (M samples/sec; M samples/sec per Watt, more is better)

  GPU               Result   SE (N = 3)   Min     Max     Per Watt
  RTX 2080 Ti        5.95    +/- 0.01      4.73    7.46   0.030
  RTX 2080 SUPER     4.90    +/- 0.01      4.06    5.43   0.035
  TITAN RTX          6.21    +/- 0.05      4.77    8.28   0.029
  RTX 4090          16.94    +/- 0.06     14.84   23.97   0.077
  RTX 3070           7.01    +/- 0.00      5.62    8.48   0.045
  RTX 4080          12.60    +/- 0.01     10.63   16.95   0.075
  RTX 3070 Ti        7.30    +/- 0.01      5.84    8.87   0.038
  RTX 3080           9.23    +/- 0.01      7.29   11.91   0.036
  RTX 3080 Ti       10.31    +/- 0.06      8.24   13.67   0.034
  RTX 3090          10.66    +/- 0.04      8.55   14.16   0.035
  rtx3090           22.94    +/- 0.27     19.59   31.58   n/a

LuxCoreRender 2.6 — Scene: Danish Mood, Acceleration: GPU (M samples/sec; M samples/sec per Watt, more is better)

  GPU               Result   SE (N = 3)   Min     Max     Per Watt
  RTX 2080 Ti        4.80    +/- 0.02      1.59    5.52   0.026
  RTX 2080 SUPER     3.76    +/- 0.03      1.31    4.31   0.029
  TITAN RTX          5.03    +/- 0.02      1.81    5.76   0.025
  RTX 4090          18.22    +/- 0.18      6.75   21.09   0.092
  RTX 3070           5.52    +/- 0.04      2.12    6.36   0.041
  RTX 4080          12.57    +/- 0.06      4.72   14.53   0.085
  RTX 3070 Ti        5.77    +/- 0.02      2.14    6.67   0.034
  RTX 3080           7.80    +/- 0.01      2.99    9.01   0.034
  RTX 3080 Ti        9.06    +/- 0.05      3.42   10.44   0.034
  RTX 3090           9.43    +/- 0.09      3.4    10.86   0.035
  rtx3090           19.44    +/- 0.02      7.75   22.58   n/a

LuxCoreRender 2.6 — Scene: LuxCore Benchmark, Acceleration: GPU (M samples/sec; M samples/sec per Watt, more is better)

  GPU               Result   SE (N = 3)   Min     Max     Per Watt
  RTX 2080 Ti        5.97    +/- 0.01      1.91    6.87   0.043
  RTX 2080 SUPER     4.46    +/- 0.01      1.95    5.11   0.033
  TITAN RTX          6.26    +/- 0.01      2.78    7.17   0.031
  RTX 4090          19.13    +/- 0.01      6.67   23.01   0.162
  RTX 3070           6.63    +/- 0.02      2.81    7.55   0.065
  RTX 4080          14.55    +/- 0.01      4.97   16.87   0.095
  RTX 3070 Ti        6.93    +/- 0.00      3.08    7.9    0.039
  RTX 3080           9.58    +/- 0.01      3.17   10.93   0.040
  RTX 3080 Ti       11.07    +/- 0.02      4.03   12.64   0.039
  RTX 3090          11.47    +/- 0.01      4.04   13.09   0.040
  rtx3090           21.68    +/- 0.08      9.19   27.36   n/a
FluidX3D 2.3 — Test: FP32-FP32 (MLUPs/s; MLUPs/s per Watt, more is better)

  GPU               MLUPs/s   SE (N = 3)    Per Watt
  RTX 2080 Ti       3111      +/- 2.85      17.07
  RTX 2080 SUPER    2519      +/- 0.58      20.09
  TITAN RTX         3394      +/- 0.33      18.24
  RTX 4090          5767      +/- 1.33      30.70
  RTX 3070          2619      +/- 0.33      20.30
  RTX 4080          3855      +/- 11.50     27.39
  RTX 3070 Ti       3532      +/- 0.00      18.05
  RTX 3080          4362      +/- 0.58      18.10
  RTX 3080 Ti       5240      +/- 0.67      18.54
  RTX 3090          5386      +/- 1.20      18.74
  rtx3090           5405      n/a           n/a

FluidX3D 2.3 — Test: FP32-FP16S (MLUPs/s; MLUPs/s per Watt, more is better)

  GPU               MLUPs/s   SE                   Per Watt
  RTX 2080 Ti        6621     +/- 1.67 (N = 3)     31.93
  RTX 2080 SUPER     5386     +/- 0.88 (N = 3)     36.44
  TITAN RTX          7095     +/- 1.53 (N = 3)     33.30
  RTX 4090          11240     +/- 1.89 (N = 4)     57.92
  RTX 3070           5198     +/- 1.00 (N = 3)     36.18
  RTX 4080           7876     +/- 6.89 (N = 3)     50.83
  RTX 3070 Ti        6803     +/- 1.33 (N = 3)     32.06
  RTX 3080           8507     +/- 1.00 (N = 3)     31.89
  RTX 3080 Ti       10371     +/- 0.67 (N = 3)     38.43
  RTX 3090          10678     +/- 0.85 (N = 4)     37.57
  rtx3090           10543     +/- 0.88 (N = 3)     n/a

FluidX3D 2.3 — Test: FP32-FP16C (MLUPs/s; MLUPs/s per Watt, more is better)

  GPU               MLUPs/s   SE                   Per Watt
  RTX 2080 Ti        6578     +/- 1.53 (N = 3)     29.61
  RTX 2080 SUPER     5213     +/- 0.88 (N = 3)     26.13
  TITAN RTX          6937     +/- 1.67 (N = 3)     28.70
  RTX 4090          11601     +/- 0.63 (N = 4)     48.07
  RTX 3070           5159     +/- 3.06 (N = 3)     26.49
  RTX 4080           7930     +/- 0.67 (N = 3)     41.39
  RTX 3070 Ti        5977     +/- 6.36 (N = 3)     23.36
  RTX 3080           8005     +/- 8.67 (N = 3)     29.46
  RTX 3080 Ti        9352     +/- 10.97 (N = 3)    31.71
  RTX 3090           9811     +/- 9.21 (N = 3)     34.29
  rtx3090            9289     +/- 17.48 (N = 3)    n/a
LeelaChessZero 0.28 — Backend: OpenCL (Nodes Per Second; Nodes Per Second per Watt, more is better)

  GPU               Nodes/s   SE (N = 3)     Per Watt
  RTX 2080 Ti       35017     +/- 245.88     145.71
  RTX 2080 SUPER    25293     +/- 74.35      122.58
  TITAN RTX         37600     +/- 155.70     139.74
  RTX 4090          79274     +/- 369.50     402.91
  RTX 3070          36089     +/- 89.51      177.52
  RTX 4080          76214     +/- 508.16     400.38
  RTX 3070 Ti       42436     +/- 136.52     161.31
  RTX 3080          53248     +/- 353.32     176.35
  RTX 3080 Ti       58392     +/- 365.35     177.19
  RTX 3090          61044     +/- 236.14     186.58
  rtx3090           19898     +/- 209.52     n/a

  1. (CXX) g++ options: -flto -pthread
FAHBench 2.3.2 (Ns Per Day; Ns Per Day per Watt, more is better)

  GPU               Ns/Day    SE (N = 3)    Per Watt
  RTX 2080 Ti       307.62    +/- 1.15      1.848
  RTX 2080 SUPER    266.67    +/- 0.46      1.987
  TITAN RTX         308.78    +/- 0.77      1.801
  RTX 4090          444.11    +/- 0.36      3.860
  RTX 3070          257.88    +/- 0.22      2.160
  RTX 4080          418.08    +/- 0.20      4.151
  RTX 3070 Ti       258.38    +/- 0.11      1.652
  RTX 3080          310.09    +/- 0.82      1.614
  RTX 3080 Ti       319.11    +/- 0.14      1.486
  RTX 3090          331.47    +/- 0.67      1.563
  rtx3090           332.71    +/- 0.58      n/a
OctaneBench 2020.1 — Total Score (Score; Score per Watt, more is better)

  GPU               Score      Score/Watt
  RTX 2080 Ti        358.03    1.534
  RTX 2080 SUPER     273.76    1.734
  TITAN RTX          379.94    1.544
  RTX 4090          1335.76    4.712
  RTX 3070           408.69    2.321
  RTX 4080           961.61    4.483
  RTX 3070 Ti        449.59    1.939
  RTX 3080           559.22    1.857
  RTX 3080 Ti        659.90    1.988
  RTX 3090           674.09    1.985
Chaos Group V-RAY 5.02 — Mode: NVIDIA CUDA GPU (vpaths; vpaths per Watt, more is better)

  GPU               vpaths   SE (N = 3)   Per Watt
  RTX 2080 Ti        982     +/- 1.00      5.532
  RTX 2080 SUPER     824     +/- 0.33      6.328
  TITAN RTX          987     +/- 0.58      5.280
  RTX 4090          4330     +/- 2.52     18.877
  RTX 3070          1411     +/- 1.15      9.538
  RTX 4080          3065     +/- 0.88     18.197
  RTX 3070 Ti       1493     +/- 1.73      7.521
  RTX 3080          1769     +/- 1.20      7.009
  RTX 3080 Ti       2037     +/- 1.76      7.156
  RTX 3090          2103     +/- 4.04      7.494

Chaos Group V-RAY 5.02 — Mode: NVIDIA RTX GPU (vrays; vrays per Watt, more is better)

  GPU               vrays    SE (N = 3)   Per Watt
  RTX 2080 Ti       1370     +/- 1.53      8.468
  RTX 2080 SUPER    1000     +/- 1.00      8.961
  TITAN RTX         1427     +/- 2.00      7.818
  RTX 4090          5793     +/- 9.17     27.045
  RTX 3070          1820     +/- 0.33     14.163
  RTX 4080          4204     +/- 15.62    28.734
  RTX 3070 Ti       2009     +/- 1.45     10.632
  RTX 3080          2426     +/- 1.45     10.770
  RTX 3080 Ti       2894     +/- 4.67     10.544
  RTX 3090          2988     +/- 2.08     10.415
Blender 3.4 GPU Temperature Monitor (Celsius, fewer is better). Ten successive monitor charts from the export; each row lists min/avg/max per GPU, in order: RTX 2080 Ti, RTX 2080 SUPER, TITAN RTX, RTX 4090, RTX 3070, RTX 4080, RTX 3070 Ti, RTX 3080, RTX 3080 Ti, RTX 3090.

  Run  1: 34/49.93/60  29/42.76/50  37/51.43/61  36/42.8/50   25/39.85/49  23/34.04/43  29/47.81/59  25/38.42/49  28/43.4/55   24/37.12/47
  Run  2: 48/60.42/68  41/49.07/54  49/57.4/63   38/41.35/46  37/50.68/59  29/35.32/41  44/56.66/64  39/53.23/63  45/53.9/60   35/44.23/51
  Run  3: 55/70.07/76  45/58.32/64  51/66.94/73  37/46.98/54  47/61.72/67  31/43.04/49  50/66.49/72  54/65.72/71  51/64.39/71  41/54.19/61
  Run  4: 60/70.39/74  53/61.12/64  59/68.53/73  40/46.81/52  52/62.36/66  34/42.81/49  55/67.4/72   58/67.05/72  58/67.47/73  49/59.63/65
  Run  5: 59/70.95/76  54/62.55/66  58/69.44/74  40/46.53/53  52/62.69/67  35/43.35/49  55/67.52/72  59/67.19/72  60/67.86/73  53/61.3/66
  Run  6: 60/70.04/75  56/62.43/66  60/69.01/74  39/45.38/54  52/61.83/66  35/43.62/52  55/66.65/71  59/66.78/71  60/67.47/72  54/61.2/65
  Run  7: 59/76.76/81  55/67.15/70  59/75.97/80  39/51.15/57  52/66.93/70  35/48.96/54  54/71.63/74  59/72.84/77  60/74.02/78  54/66.18/69
  Run  8: 63/72.97/76  58/64.42/66  64/72.52/75  41/48/53     53/64.23/67  37/45.58/50  56/69.22/73  62/69.83/73  63/71.12/75  56/63.88/67
  Run  9: 60/77.9/82   55/66.87/69  61/76.36/81  40/51.06/56  52/66.41/69  37/48.3/53   55/71.3/74   60/72.49/76  61/74.17/78  55/66.45/70
  Run 10: 63/76.54/81  57/66.08/68  64/75.5/78   42/50.86/55  53/66.23/69  38/48.05/52  56/71.31/75  61/72.59/76  62/73.99/78  56/66.44/70
IndigoBench 4.4 GPU Temperature Monitor (Celsius, fewer is better). Two monitor charts from the export; each row lists min/avg/max per GPU, in order: RTX 2080 Ti, RTX 2080 SUPER, TITAN RTX, RTX 4090, RTX 3070, RTX 4080, RTX 3070 Ti, RTX 3080, RTX 3080 Ti, RTX 3090.

  Run 1: 62/73.62/78  57/65.02/67  64/74.88/78  43/50.6/53   54/64.67/69  38/47.55/51  57/70.19/73  62/71.39/74  63/73.48/77  57/65.95/69
  Run 2: 62/74.62/80  57/65.25/68  63/75.48/79  43/53.77/57  54/65.56/69  38/49.56/53  57/70.93/74  61/72.75/76  63/74.92/78  57/67.43/70
OctaneBench 2020.1 GPU Temperature Monitor (Celsius, fewer is better; min/avg/max per GPU)

  RTX 2080 Ti     63 / 79.29 / 83
  RTX 2080 SUPER  57 / 67.29 / 71
  TITAN RTX       63 / 78.01 / 82
  RTX 4090        44 / 60.12 / 64
  RTX 3070        54 / 69.11 / 72
  RTX 4080        39 / 54.71 / 58
  RTX 3070 Ti     56 / 74.62 / 77
  RTX 3080        61 / 76.76 / 80
  RTX 3080 Ti     63 / 77.53 / 80
  RTX 3090        57 / 66.68 / 70
Chaos Group V-RAY 5.02 GPU Temperature Monitor (Celsius, fewer is better). Two monitor charts from the export; each row lists min/avg/max per GPU, in order: RTX 2080 Ti, RTX 2080 SUPER, TITAN RTX, RTX 4090, RTX 3070, RTX 4080, RTX 3070 Ti, RTX 3080, RTX 3080 Ti, RTX 3090.

  Run 1: 61/71.17/77  55/62.82/66  61/71.08/75  46/56.2/60   52/64.66/70  41/50.68/55  56/70.8/76   60/72/77     61/73.08/78  51/64.62/70
  Run 2: 56/68.4/74   51/58.85/62  59/69.83/74  45/53.83/57  49/62.15/68  38/46.62/51  56/69.61/74  57/69.25/75  61/72.17/77  55/65.69/70
clpeak GPU Temperature Monitor (Celsius, fewer is better). Four monitor charts from the export; each row lists min/avg/max per GPU, in order: RTX 2080 Ti, RTX 2080 SUPER, TITAN RTX, RTX 4090, RTX 3070, RTX 4080, RTX 3070 Ti, RTX 3080, RTX 3080 Ti, RTX 3090.

  Run 1: 53/55.82/61  45/47.63/52  52/55.23/61  38/40.79/53  44/46.82/54  34/36.74/50  50/53.46/62  53/55.38/59  53/55.25/58  50/52.43/55
  Run 2: 50/59.46/63  43/49.25/51  50/59.79/64  37/42.01/43  42/48.53/51  33/38.05/39  48/57.62/61  52/59.67/63  51/59.39/63  49/55.91/59
  Run 3: 54/57.11/63  45/48.06/55  54/57.76/63  37/42.39/59  44/46.79/55  33/37.31/56  52/56.3/66   57/59.35/64  56/58.72/63  52/54.7/58
  Run 4: 44/46.21/49  39/40.4/42   44/45.98/48  32/32.61/34  37/38.15/40  33/33.98/35  44/46.64/50  47/49.66/53  47/49.94/52  44/46.55/49
Hashcat 6.2.4 GPU Temperature Monitor (Celsius, Fewer Is Better)
Min/Avg/Max across the five monitored Hashcat runs:
GPU            | Run 1       | Run 2       | Run 3       | Run 4       | Run 5
RTX 2080 Ti    | 43/51.78/61 | 49/57.8/65  | 53/60.63/68 | 54/57.85/63 | 51/54.54/61
RTX 2080 SUPER | 38/47.61/57 | 45/53/61    | 48/55.78/63 | 50/52.73/59 | 47/51.18/58
TITAN RTX      | 43/52.49/62 | 50/58.33/66 | 53/61.15/67 | 55/58.16/64 | 52/55.58/61
RTX 4090       | 31/47.34/63 | 37/50.26/63 | 39/53.29/66 | 40/47.71/64 | 39/46.63/63
RTX 3070       | 36/46.82/58 | 43/53.21/62 | 47/56.22/65 | 48/51.69/59 | 45/50.69/60
RTX 4080       | 34/46.3/61  | 35/47.16/59 | 35/48.46/60 | 36/42.57/60 | 35/40.47/59
RTX 3070 Ti    | 42/56.36/68 | 50/60.97/70 | 52/62.74/71 | 52/59.05/69 | 51/59/69
RTX 3080       | 46/54.85/63 | 53/60.68/67 | 56/62.23/68 | 56/58.94/64 | 54/58/64
RTX 3080 Ti    | 46/55.18/63 | 53/61/67    | 56/62.88/68 | 57/59.86/64 | 54/58.71/64
RTX 3090       | 43/51.31/58 | 48/55.63/61 | 52/58.23/63 | 53/56.42/61 | 52/56/61
FAHBench 2.3.2 GPU Temperature Monitor (Celsius, Fewer Is Better)
GPU            | Min | Avg   | Max
RTX 2080 Ti    | 49  | 64.17 | 74
RTX 2080 SUPER | 46  | 57.92 | 66
TITAN RTX      | 50  | 63.78 | 72
RTX 4090       | 38  | 42.66 | 46
RTX 3070       | 45  | 57.63 | 66
RTX 4080       | 34  | 40.69 | 45
RTX 3070 Ti    | 51  | 64.32 | 71
RTX 3080       | 54  | 64.22 | 70
RTX 3080 Ti    | 54  | 64.69 | 71
RTX 3090       | 51  | 60.67 | 66
LeelaChessZero 0.28 GPU Temperature Monitor (Celsius, Fewer Is Better)
GPU            | Min | Avg   | Max
RTX 2080 Ti    | 56  | 81.15 | 84
RTX 2080 SUPER | 55  | 74.12 | 77
TITAN RTX      | 59  | 81.19 | 83
RTX 4090       | 38  | 53.53 | 58
RTX 3070       | 49  | 71.88 | 75
RTX 4080       | 35  | 53.88 | 59
RTX 3070 Ti    | 56  | 77.73 | 80
RTX 3080       | 59  | 77.34 | 80
RTX 3080 Ti    | 60  | 78.18 | 80
RTX 3090       | 56  | 68.69 | 70
LuxCoreRender 2.6 GPU Temperature Monitor (Celsius, Fewer Is Better)
Min/Avg/Max across the five monitored LuxCoreRender runs:
GPU            | Run 1       | Run 2       | Run 3       | Run 4       | Run 5
RTX 2080 Ti    | 51/68.02/78 | 62/69.02/72 | 48/63.26/75 | 60/73.57/77 | 61/72.03/75
RTX 2080 SUPER | 61/66.91/68 | 57/61.25/63 | 54/62.14/65 | 55/63.68/67 | 56/62.22/64
TITAN RTX      | 66/77.64/80 | 63/69.08/72 | 59/71.72/76 | 61/73.32/77 | 62/71.98/75
RTX 4090       | 43/55.89/59 | 43/47.5/52  | 35/42.79/51 | 41/52.89/56 | 44/52.06/55
RTX 3070       | 44/59.24/69 | 54/60.36/64 | 41/55.6/67  | 52/65.56/69 | 53/63.18/66
RTX 4080       | 41/52.23/54 | 40/44.58/48 | 38/47.35/50 | 38/49.69/52 | 39/47.63/50
RTX 3070 Ti    | 57/71.84/74 | 57/65.84/70 | 55/68.64/72 | 56/70.44/73 | 56/68.12/71
RTX 3080       | 62/74.74/77 | 62/66.55/70 | 58/70.35/75 | 61/72.72/76 | 61/70.14/73
RTX 3080 Ti    | 64/76.59/79 | 64/68.29/72 | 60/72.69/77 | 62/74.35/78 | 62/72.28/76
RTX 3090       | 55/68.17/71 | 57/62.18/66 | 55/66.04/69 | 56/67.41/70 | 56/65.44/68
NAMD CUDA 2.14 GPU Temperature Monitor (Celsius, Fewer Is Better)
GPU            | Min | Avg   | Max
RTX 2080 Ti    | 52  | 57.39 | 63
RTX 2080 SUPER | 54  | 58.73 | 63
TITAN RTX      | 60  | 63.81 | 69
RTX 4090       | 35  | 38.79 | 44
RTX 3070       | 41  | 46.93 | 56
RTX 4080       | 36  | 40.15 | 50
RTX 3070 Ti    | 56  | 62.32 | 72
RTX 3080       | 59  | 62.21 | 68
RTX 3080 Ti    | 60  | 63.42 | 68
RTX 3090       | 54  | 57.91 | 63
SHOC Scalable HeterOgeneous Computing 2020-04-17 GPU Temperature Monitor (Celsius, Fewer Is Better)
Min/Avg/Max across the six monitored SHOC runs:
GPU            | Run 1       | Run 2       | Run 3       | Run 4       | Run 5       | Run 6
RTX 2080 Ti    | 46/57.29/65 | 50/52.98/56 | 48/51.43/55 | 47/49.73/56 | 46/48.06/51 | 42/42.71/43
RTX 2080 SUPER | 42/50.19/53 | 43/45.98/49 | 42/45.94/51 | 43/44.78/52 | 41/43.32/46 | 38/38.73/39
TITAN RTX      | 47/57.57/63 | 50/53.05/58 | 48/52.3/57  | 48/49.97/56 | 46/48.39/52 | 41/42.62/43
RTX 4090       | 36/42.2/47  | 35/36.47/39 | 33/35.18/41 | 33/35.05/48 | 35/37.02/39 | 38/38.95/40
RTX 3070       | 38/48.6/55  | 42/44.24/48 | 40/45.03/52 | 41/43.72/52 | 40/42.25/46 | 37/37.56/39
RTX 4080       | 34/40.27/46 | 38/38.9/41  | 38/41.25/50 | 39/42.11/61 | 40/41.53/45 | 40/40.42/41
RTX 3070 Ti    | 46/59.22/65 | 50/54.13/57 | 48/55.02/63 | 49/53.45/63 | 48/52.19/57 | 43/45.22/47
RTX 3080       | 48/58.4/64  | 53/55.55/58 | 51/54.14/59 | 51/52.57/58 | 49/51.25/54 | 44/45.27/46
RTX 3080 Ti    | 49/59.42/65 | 53/55.85/59 | 51/54.49/58 | 51/52.67/57 | 49/51.73/55 | 45/45.92/47
RTX 3090       | 46/55.61/60 | 50/52.15/55 | 48/51.29/55 | 47/49.28/55 | 46/48.48/52 | 42/43.19/44
FluidX3D 2.3 GPU Temperature Monitor (Celsius, Fewer Is Better)
Min/Avg/Max across the three monitored FluidX3D runs:
GPU            | Run 1       | Run 2       | Run 3
RTX 2080 Ti    | 41/61.07/71 | 58/70.14/75 | 60/73.09/78
RTX 2080 SUPER | 37/52.03/58 | 50/59.24/63 | 53/66.84/73
TITAN RTX      | 40/60.21/69 | 57/68.9/74  | 59/72.28/77
RTX 4090       | 39/47.19/49 | 41/48.48/52 | 41/53.01/58
RTX 3070       | 36/55.93/64 | 52/63.38/67 | 53/68.65/73
RTX 4080       | 40/46.49/48 | 38/46.43/49 | 38/51.05/55
RTX 3070 Ti    | 41/64.83/72 | 56/70.31/75 | 57/74.7/79
RTX 3080       | 43/61.69/71 | 59/69.43/74 | 60/70.57/75
RTX 3080 Ti    | 43/62.59/72 | 60/67.9/72  | 61/69.88/75
RTX 3090       | 41/57.22/65 | 54/63.5/67  | 55/63.67/67
NAMD CUDA 2.14, ATPase Simulation, 327,506 Atoms (days/ns, Fewer Is Better)
GPU            | days/ns | SE +/-  | N
RTX 2080 Ti    | 0.09403 | 0.00125 | 3
RTX 2080 SUPER | 0.11231 | 0.00028 | 6
TITAN RTX      | 0.09147 | 0.00061 | 13
RTX 4090       | 0.06811 | 0.00028 | 3
RTX 3070       | 0.09655 | 0.00014 | 3
RTX 4080       | 0.07079 | 0.00068 | 15
RTX 3070 Ti    | 0.09540 | 0.00067 | 6
RTX 3080       | 0.07909 | 0.00077 | 6
RTX 3080 Ti    | 0.07847 | 0.00031 | 6
RTX 3090       | 0.07851 | 0.00065 | 9
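NAMD reports days/ns, where lower is better; the reciprocal gives the more familiar ns/day figure (higher is better). A minimal sketch using two values from the NAMD results above; the conversion is standard, and the GPU selection is purely illustrative:

```python
# Convert NAMD's days/ns metric (lower is better) into ns/day (higher is
# better) by taking the reciprocal; values copied from the NAMD results.
days_per_ns = {"RTX 4090": 0.06811, "RTX 2080 SUPER": 0.11231}

for gpu, d in days_per_ns.items():
    ns_per_day = 1.0 / d
    print(f"{gpu}: {ns_per_day:.2f} ns/day")
```

So the RTX 4090's 0.06811 days/ns corresponds to roughly 14.7 ns of simulated time per day of wall-clock compute.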
Blender 3.4, Compute: NVIDIA CUDA (Seconds, Fewer Is Better; mean +/- SE, sample count N in parentheses)
GPU            | BMW27          | Classroom      | Fishy Cat      | Pabellon Barcelona | Barbershop
RTX 2080 Ti    | 14.75+/-0.04 (4)  | 32.24+/-0.04 (3) | 30.83+/-0.03 (3) | 79.77+/-0.08 (3)  | 139.50+/-0.09 (3)
RTX 2080 SUPER | 19.32+/-0.02 (3)  | 42.82+/-0.03 (3) | 41.00+/-0.07 (3) | 112.12+/-0.09 (3) | 186.19+/-0.05 (3)
TITAN RTX      | 14.07+/-0.02 (4)  | 30.42+/-0.05 (3) | 29.39+/-0.06 (3) | 74.82+/-0.05 (3)  | 130.62+/-0.10 (3)
RTX 4090       | 5.39+/-0.02 (7)   | 9.71+/-0.01 (5)  | 11.68+/-0.01 (4) | 20.51+/-0.01 (3)  | 44.81+/-0.03 (3)
RTX 3070       | 16.75+/-0.01 (3)  | 34.65+/-0.03 (3) | 35.70+/-0.03 (3) | 87.16+/-0.07 (3)  | 146.10+/-0.13 (3)
RTX 4080       | 7.42+/-0.01 (6)   | 13.82+/-0.01 (4) | 16.46+/-0.03 (3) | 31.55+/-0.03 (3)  | 61.80+/-0.04 (3)
RTX 3070 Ti    | 15.72+/-0.01 (4)  | 32.43+/-0.03 (3) | 34.27+/-0.02 (3) | 81.99+/-0.03 (3)  | 136.94+/-0.12 (3)
RTX 3080       | 11.82+/-0.02 (4)  | 23.99+/-0.03 (3) | 25.25+/-0.04 (3) | 59.35+/-0.07 (3)  | 101.34+/-0.06 (3)
RTX 3080 Ti    | 10.35+/-0.01 (5)  | 20.75+/-0.03 (3) | 22.29+/-0.03 (3) | 50.72+/-0.09 (3)  | 87.99+/-0.08 (3)
RTX 3090       | 9.98+/-0.02 (5)   | 19.93+/-0.03 (3) | 21.37+/-0.01 (3) | 48.61+/-0.02 (3)  | 84.83+/-0.09 (3)

Blender 3.4, Compute: NVIDIA OptiX (Seconds, Fewer Is Better; mean +/- SE, sample count N in parentheses)
GPU            | BMW27          | Classroom      | Fishy Cat      | Pabellon Barcelona | Barbershop
RTX 2080 Ti    | 8.54+/-0.05 (15)  | 22.30+/-0.07 (3) | 15.90+/-0.17 (4)  | 26.34+/-0.02 (3) | 89.90+/-0.04 (3)
RTX 2080 SUPER | 10.86+/-0.10 (7)  | 29.10+/-0.02 (3) | 20.75+/-0.24 (3)  | 35.32+/-0.03 (3) | 116.06+/-0.19 (3)
TITAN RTX      | 8.08+/-0.01 (6)   | 21.08+/-0.07 (3) | 14.99+/-0.01 (4)  | 24.49+/-0.01 (3) | 84.78+/-0.04 (3)
RTX 4090       | 3.35+/-0.00 (9)   | 7.04+/-0.01 (6)  | 5.24+/-0.01 (7)   | 7.78+/-0.00 (6)  | 29.52+/-0.06 (3)
RTX 3070       | 8.86+/-0.05 (15)  | 22.99+/-0.08 (3) | 16.68+/-0.18 (4)  | 24.87+/-0.01 (3) | 82.53+/-0.07 (3)
RTX 4080       | 4.16+/-0.05 (15)  | 9.13+/-0.02 (5)  | 7.33+/-0.05 (15)  | 9.82+/-0.01 (5)  | 37.59+/-0.01 (3)
RTX 3070 Ti    | 8.12+/-0.01 (6)   | 21.12+/-0.02 (3) | 15.77+/-0.01 (4)  | 22.88+/-0.01 (3) | 77.27+/-0.03 (3)
RTX 3080       | 6.62+/-0.05 (15)  | 16.32+/-0.04 (4) | 11.84+/-0.11 (6)  | 18.25+/-0.01 (3) | 59.95+/-0.06 (3)
RTX 3080 Ti    | 5.78+/-0.01 (7)   | 14.22+/-0.01 (4) | 10.38+/-0.01 (5)  | 15.62+/-0.01 (4) | 52.56+/-0.07 (3)
RTX 3090       | 5.66+/-0.01 (7)   | 13.77+/-0.00 (4) | 9.98+/-0.00 (5)   | 15.21+/-0.01 (4) | 50.74+/-0.02 (3)
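Since these are render times (fewer seconds is better), relative performance between two cards is simply the ratio of their times. A small sketch computing the RTX 4090's speedup over the RTX 3090 from the Barbershop numbers above; the choice of GPUs and scene is illustrative only:

```python
# Speedup = slower_time / faster_time for a "fewer is better" metric.
# Render times in seconds, copied from the Blender 3.4 Barbershop results.
barbershop_cuda = {"RTX 4090": 44.81, "RTX 3090": 84.83}
barbershop_optix = {"RTX 4090": 29.52, "RTX 3090": 50.74}

for backend, times in [("CUDA", barbershop_cuda), ("OptiX", barbershop_optix)]:
    speedup = times["RTX 3090"] / times["RTX 4090"]
    print(f"Barbershop {backend}: RTX 4090 is {speedup:.2f}x faster than RTX 3090")
```

The same ratio can be taken for any GPU pair in the tables above.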
clpeak, OpenCL Test: Kernel Latency (us, Fewer Is Better; mean +/- SE, N in parentheses)
GPU            | Latency (us)
RTX 2080 Ti    | 4.47 +/- 0.01 (15)
RTX 2080 SUPER | 4.19 +/- 0.01 (15)
TITAN RTX      | 4.43 +/- 0.01 (15)
RTX 4090       | 3.73 +/- 0.01 (15)
RTX 3070       | 4.02 +/- 0.01 (15)
RTX 4080       | 3.68 +/- 0.00 (15)
RTX 3070 Ti    | 3.97 +/- 0.01 (15)
RTX 3080       | 4.02 +/- 0.01 (15)
RTX 3080 Ti    | 4.08 +/- 0.01 (15)
RTX 3090       | 4.13 +/- 0.01 (15)
rtx3090        | 6.12 +/- 0.00 (3)
1. (CXX) g++ options: -O3 -rdynamic -lOpenCL
Blender 3.4 GPU Power Consumption Monitor (Watts, Fewer Is Better)
Min/Avg/Max across the ten monitored Blender runs; Runs 1-5:
GPU            | Run 1               | Run 2               | Run 3               | Run 4               | Run 5
RTX 2080 Ti    | 6.53/177.5/242.08   | 7.21/153.74/224.65  | 8.29/215.93/258.52  | 9.3/198.06/247.6    | 9.14/198.33/255.67
RTX 2080 SUPER | 9.54/134.11/175.89  | 10.09/110.6/150.01  | 10.32/155.05/174.73 | 10.4/143.67/170.11  | 10.76/148.56/180.44
TITAN RTX      | 8.25/185.43/261.15  | 10.22/154.85/227.13 | 10.19/227.38/265.43 | 12.49/204.37/253.49 | 12.56/212.36/268.33
RTX 4090       | 11.3/136.49/264.35  | 11.66/107.6/236.24  | 11.77/189.66/279.44 | 11.8/167.56/278.88  | 11.14/147.15/254.89
RTX 3070       | 9.55/123.88/166.13  | 9.71/108.63/157.03  | 16.44/153.62/174.36 | 16.65/143.68/172.72 | 16.61/139.86/168.2
RTX 4080       | 8.71/112.28/190.74  | 13.55/92.77/181.99  | 13.45/152.28/202.31 | 14.6/140.41/211.52  | 14.86/122.55/182.31
RTX 3070 Ti    | 13.25/157.38/206.55 | 22.73/143.81/204.26 | 23.48/191.46/218.09 | 24.51/185.99/221.49 | 24.78/175.34/211.03
RTX 3080       | 11.12/191.25/272.31 | 13.33/173.17/264.5  | 26.88/250.03/300.2  | 29.4/237.3/301.21   | 25.21/226.84/289.96
RTX 3080 Ti    | 19.24/224.09/326.39 | 33.27/199.24/310.42 | 34.19/285.96/345.3  | 36.6/271.41/347.74  | 36.95/256.92/332.78
RTX 3090       | 11.85/218.46/320.81 | 14.44/190.31/306.2  | 27.43/279.4/341.77  | 29.37/267.54/345.66 | 30.08/253.37/331.9
Runs 6-10:
GPU            | Run 6               | Run 7               | Run 8               | Run 9               | Run 10
RTX 2080 Ti    | 9.3/189.05/256.64   | 9.86/231.12/261.95  | 10.42/199.6/243.21  | 9.33/227.64/260.58  | 9.29/213.85/249.28
RTX 2080 SUPER | 10.44/139.75/181.18 | 10.49/162.28/187.08 | 11.31/140.91/168.12 | 10.32/154.34/171.26 | 10.14/147.66/166.15
TITAN RTX      | 13.17/201.24/270.09 | 12.73/248.81/281.73 | 14.37/207.79/261.19 | 13.38/237.94/269.23 | 14.11/225.06/260.45
RTX 4090       | 11.78/138.99/289.96 | 12.02/222.41/291.32 | 12.99/171.59/279.21 | 9.82/203.02/267.49  | 10.11/194.26/261.53
RTX 3070       | 16.25/134.77/176.93 | 16.52/164.93/179.34 | 16.34/146.82/178.08 | 15.79/156.37/171.41 | 15.81/154.89/173.12
RTX 4080       | 13.96/121.61/206    | 14.77/169.83/203.35 | 15.06/145.31/219.38 | 14.81/159.62/195.59 | 15.4/158.65/200.48
RTX 3070 Ti    | 24.93/170.41/216.55 | 24.73/203.86/220.61 | 25.68/190.71/229.88 | 25.56/195.94/213.52 | 25.81/199.35/221.82
RTX 3080       | 29.15/217.3/302.18  | 29.48/275.03/301.54 | 30.78/244.17/302.46 | 29.82/259.27/291.4  | 29.98/261.25/299.1
RTX 3080 Ti    | 36.54/246.44/346.18 | 36.66/312.05/342.45 | 37.86/272.38/342.07 | 33.37/301.07/344.6  | 37.6/297.02/346.21
RTX 3090       | 29.74/238.96/345.41 | 30.01/309.43/341.96 | 31.77/271.18/344.96 | 24.66/298.04/341.76 | 31.64/296.6/344.69
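Average power alone can be misleading for a fixed workload; multiplying average draw by render time approximates energy consumed per render. The pairing below of a power-monitor run with a specific scene is an assumption (the export does not label which monitor run belongs to which scene), so treat the figures as illustrative only:

```python
# Approximate energy per render: average watts * seconds = joules.
# ASSUMPTION: the first Blender power-monitor run is paired here with the
# BMW27/CUDA render times; the export does not state this mapping.
render_seconds = {"RTX 4090": 5.39, "RTX 3090": 9.98}   # BMW27 / CUDA
avg_watts = {"RTX 4090": 136.49, "RTX 3090": 218.46}    # power run 1

for gpu in render_seconds:
    joules = render_seconds[gpu] * avg_watts[gpu]
    print(f"{gpu}: ~{joules:.0f} J per BMW27 render (illustrative)")
```

Under that assumption the RTX 4090 would use roughly a third of the RTX 3090's energy per render despite a similar average draw.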
IndigoBench 4.4 GPU Power Consumption Monitor (Watts, Fewer Is Better)
Min/Avg/Max across the two monitored IndigoBench runs:
GPU            | Run 1               | Run 2
RTX 2080 Ti    | 10.93/197.55/243.35 | 10.78/207.63/250.4
RTX 2080 SUPER | 11.52/143.65/172.62 | 10.36/146.66/175.66
TITAN RTX      | 14.09/223.95/262.77 | 13.97/230.29/272.32
RTX 4090       | 12.55/182.47/202.49 | 12.65/221.09/248.74
RTX 3070       | 16.67/143.56/170.96 | 16.27/151.2/175.45
RTX 4080       | 15.7/152.22/170.66  | 16.36/170.92/194.11
RTX 3070 Ti    | 26.01/187.08/207.82 | 26.07/196.31/213.14
RTX 3080       | 31.09/243.4/270.26  | 30.45/259.28/285.45
RTX 3080 Ti    | 38.46/280.76/310.42 | 38.42/302.75/332.79
RTX 3090       | 31.86/279.53/312.1  | 32.19/304.76/336.28
OctaneBench 2020.1 GPU Power Consumption Monitor (Watts, Fewer Is Better)
GPU            | Min   | Avg    | Max
RTX 2080 Ti    | 10.86 | 233.37 | 264.49
RTX 2080 SUPER | 10.84 | 157.92 | 203.6
TITAN RTX      | 14.3  | 246.11 | 286.05
RTX 4090       | 12.91 | 283.5  | 322.69
RTX 3070       | 16.47 | 176.1  | 199.09
RTX 4080       | 16.36 | 214.52 | 251.02
RTX 3070 Ti    | 26.26 | 231.88 | 245.87
RTX 3080       | 30.84 | 301.18 | 318.2
RTX 3080 Ti    | 38.87 | 331.89 | 349.2
RTX 3090       | 31.79 | 339.51 | 348.7
Chaos Group V-RAY 5.02 GPU Power Consumption Monitor (Watts, Fewer Is Better)
Min/Avg/Max across the two monitored V-RAY runs:
GPU            | Run 1               | Run 2
RTX 2080 Ti    | 9.83/177.5/237.69   | 8.24/161.78/217.11
RTX 2080 SUPER | 9.96/130.21/167.94  | 9.73/111.59/143.49
TITAN RTX      | 14.17/186.92/241.32 | 13.04/182.52/225.35
RTX 4090       | 9.79/229.38/290.21  | 9.65/214.2/268.47
RTX 3070       | 15.74/147.93/187.12 | 15.39/128.5/168.45
RTX 4080       | 16.55/168.44/207.72 | 15.55/146.31/197.27
RTX 3070 Ti    | 25.92/198.52/240.62 | 26.15/188.95/227.2
RTX 3080       | 29.88/252.4/309.06  | 28.52/225.25/292.4
RTX 3080 Ti    | 36.93/284.64/343.03 | 36.82/274.47/329.8
RTX 3090       | 30.25/280.62/345.12 | 31.47/286.89/348.36
clpeak GPU Power Consumption Monitor (Watts, Fewer Is Better)
Min/Avg/Max across the four monitored clpeak runs:
GPU            | Run 1               | Run 2               | Run 3               | Run 4
RTX 2080 Ti    | 7.22/82.54/224.07   | 7.64/126.33/138.49  | 7.92/103.71/266.79  | 6.99/32.87/69.23
RTX 2080 SUPER | 10.2/54.48/155.2    | 10.08/88.81/99.14   | 10.17/68.56/254.7   | 9.92/30.28/60.24
TITAN RTX      | 11.08/88.06/240.96  | 10.85/138.32/152.29 | 11.47/103.13/291.33 | 9.32/36.21/79.11
RTX 4090       | 11.55/68.53/179.26  | 10.81/109.55/121.87 | 12/106.02/378.4     | 12.78/34.04/63.91
RTX 3070       | 16.12/46.2/98.14    | 16.06/74.7/83.55    | 16.18/62.58/173.94  | 16.09/28.5/52.47
RTX 4080       | 15.97/49.52/104.2   | 15.81/78.12/86.57   | 14.37/69.82/221.66  | 14.81/26.27/42.77
RTX 3070 Ti    | 24.55/71.07/141.7   | 23.95/110.05/119.84 | 24.65/88.39/223.26  | 23.28/47.92/87.7
RTX 3080       | 26.36/90.42/185.82  | 27.81/148.91/163.15 | 27.34/124.44/318.88 | 25.78/57.04/109.95
RTX 3080 Ti    | 35.69/102.94/202.96 | 33.97/171.16/189.04 | 35.87/135.33/330.94 | 33.58/71.79/132.91
RTX 3090       | 29.28/98.08/176.86  | 29.35/169.01/184.88 | 31.3/131.88/336.43  | 27.95/66.96/129.24
Hashcat GPU Power Consumption Monitor OpenBenchmarking.org Watts, Fewer Is Better Hashcat 6.2.4 GPU Power Consumption Monitor RTX 2080 Ti RTX 2080 SUPER TITAN RTX RTX 4090 RTX 3070 RTX 4080 RTX 3070 Ti RTX 3080 RTX 3080 Ti RTX 3090 80 160 240 320 400 Min: 6.32 / Avg: 134.31 / Max: 262.83 Min: 9.8 / Avg: 129.54 / Max: 255.9 Min: 9.53 / Avg: 148.33 / Max: 306.42 Min: 11.03 / Avg: 208.85 / Max: 435.9 Min: 16.01 / Avg: 112.14 / Max: 219.47 Min: 14.18 / Avg: 139.11 / Max: 291.12 Min: 23.13 / Avg: 153.3 / Max: 279.48 Min: 24.03 / Avg: 176.89 / Max: 321.68 Min: 33.3 / Avg: 200.64 / Max: 349.26 Min: 27.66 / Avg: 193.02 / Max: 350.22
Hashcat GPU Power Consumption Monitor OpenBenchmarking.org Watts, Fewer Is Better Hashcat 6.2.4 GPU Power Consumption Monitor RTX 2080 Ti RTX 2080 SUPER TITAN RTX RTX 4090 RTX 3070 RTX 4080 RTX 3070 Ti RTX 3080 RTX 3080 Ti RTX 3090 70 140 210 280 350 Min: 7.4 / Avg: 148.58 / Max: 266.19 Min: 10.28 / Avg: 136.25 / Max: 248.41 Min: 10.65 / Avg: 159.25 / Max: 284.53 Min: 12.35 / Avg: 203.69 / Max: 394.44 Min: 16.53 / Avg: 121.77 / Max: 219.68 Min: 15.41 / Avg: 144.41 / Max: 264.01 Min: 24.11 / Avg: 153.89 / Max: 260.52 Min: 26.59 / Avg: 188.41 / Max: 319.33 Min: 29.36 / Avg: 210.85 / Max: 349.25 Min: 20.61 / Avg: 204.33 / Max: 346.83
Hashcat 6.2.4 GPU Power Consumption Monitor (Watts, fewer is better)
  GPU              Min      Avg      Max
  RTX 2080 Ti      7.74     151.02   268.38
  RTX 2080 SUPER   10.31    139.32   254.53
  TITAN RTX        11.24    164.61   358.29
  RTX 4090         12.59    226.5    410.78
  RTX 3070         16.86    122.88   219.77
  RTX 4080         15.25    152.04   274.74
  RTX 3070 Ti      24.69    161.27   269.19
  RTX 3080         20.82    189.87   320.58
  RTX 3080 Ti      35.74    213.18   348.93
  RTX 3090         29.66    209.42   346.22
Hashcat 6.2.4 GPU Power Consumption Monitor (Watts, fewer is better)
  GPU              Min      Avg      Max
  RTX 2080 Ti      8.02     112.99   267.64
  RTX 2080 SUPER   10.32    87.59    253.36
  TITAN RTX        11.48    115.4    299.72
  RTX 4090         11.92    147.38   447.97
  RTX 3070         16.82    75.92    219.23
  RTX 4080         15.74    102.31   305.29
  RTX 3070 Ti      24.78    117.02   290.91
  RTX 3080         28       132.63   319.78
  RTX 3080 Ti      36.26    156.73   353.12
  RTX 3090         29.95    153.86   323.97
Hashcat 6.2.4 GPU Power Consumption Monitor (Watts, fewer is better)
  GPU              Min      Avg      Max
  RTX 2080 Ti      7.68     86.47    266.05
  RTX 2080 SUPER   10.16    91.65    252.4
  TITAN RTX        11.34    101.19   284.25
  RTX 4090         11.39    145.65   427.83
  RTX 3070         16.4     86.88    219.59
  RTX 4080         15.91    90.96    287.87
  RTX 3070 Ti      24.77    122.84   275.29
  RTX 3080         26.48    140.26   319.79
  RTX 3080 Ti      35.45    165.28   350.11
  RTX 3090         29.36    161.03   345.65
FAHBench 2.3.2 GPU Power Consumption Monitor (Watts, fewer is better)
  GPU              Min      Avg      Max
  RTX 2080 Ti      7.48     166.51   227.74
  RTX 2080 SUPER   10.22    134.18   180.34
  TITAN RTX        11       171.47   229.76
  RTX 4090         11.02    115.07   149.35
  RTX 3070         16.53    119.37   160.88
  RTX 4080         15.14    100.72   133.28
  RTX 3070 Ti      24.41    156.37   199.91
  RTX 3080         24.78    192.14   248.86
  RTX 3080 Ti      35.07    214.72   274.05
  RTX 3090         29.32    212.05   273.54
LeelaChessZero 0.28 GPU Power Consumption Monitor (Watts, fewer is better)
  GPU              Min      Avg      Max
  RTX 2080 Ti      9.38     240.32   270.77
  RTX 2080 SUPER   10.75    206.34   237.63
  TITAN RTX        12.55    269.08   288.41
  RTX 4090         12.74    196.76   250.26
  RTX 3070         17.03    203.29   219.88
  RTX 4080         15.17    190.35   249.68
  RTX 3070 Ti      25.8     263.08   289.83
  RTX 3080         29.57    301.95   320.01
  RTX 3080 Ti      36.87    329.54   349.65
  RTX 3090         31.04    327.18   348.83
LuxCoreRender 2.6 GPU Power Consumption Monitor (Watts, fewer is better)
  GPU              Min      Avg      Max
  RTX 2080 Ti      7.3      154.29   233.5
  RTX 2080 SUPER   11.83    148      161.79
  TITAN RTX        14.53    232.96   254.12
  RTX 4090         12.64    237.38   258.04
  RTX 3070         15.21    113.47   170.52
  RTX 4080         16.56    178.07   192.82
  RTX 3070 Ti      26.7     196.65   210.74
  RTX 3080         30.62    269.76   290.59
  RTX 3080 Ti      38.58    314.49   339.24
  RTX 3090         30.85    318.18   343.76
LuxCoreRender 2.6 GPU Power Consumption Monitor (Watts, fewer is better)
  GPU              Min      Avg      Max
  RTX 2080 Ti      10.24    157.19   212.41
  RTX 2080 SUPER   11.01    114.66   144.35
  TITAN RTX        13.64    161.38   223.74
  RTX 4090         13.15    120.52   217.74
  RTX 3070         16.61    118.42   170.64
  RTX 4080         16.74    110.31   172.22
  RTX 3070 Ti      26.15    152.48   214.1
  RTX 3080         30.65    188.28   285.25
  RTX 3080 Ti      39.08    227.53   337.84
  RTX 3090         31.78    227.11   343.73
LuxCoreRender 2.6 GPU Power Consumption Monitor (Watts, fewer is better)
  GPU              Min      Avg      Max
  RTX 2080 Ti      6.92     137.96   222.57
  RTX 2080 SUPER   10.75    133.59   151.66
  TITAN RTX        12.78    202.14   232.65
  RTX 4090         9.17     117.79   232.78
  RTX 3070         15.2     101.27   159.81
  RTX 4080         15.81    152.68   179.26
  RTX 3070 Ti      25.72    178.02   199.08
  RTX 3080         29.11    241.75   273.44
  RTX 3080 Ti      36.22    284.31   320.58
  RTX 3090         30.92    286.39   323.61
LuxCoreRender 2.6 GPU Power Consumption Monitor (Watts, fewer is better)
  GPU              Min      Avg      Max
  RTX 2080 Ti      9.85     200.44   229.76
  RTX 2080 SUPER   10.26    141.65   159.57
  TITAN RTX        13.43    213.06   248.15
  RTX 4090         10.67    219.15   246.65
  RTX 3070         16.29    155.86   174.85
  RTX 4080         16.11    167.16   187.35
  RTX 3070 Ti      25.95    192.43   212.23
  RTX 3080         30.18    258.21   287.55
  RTX 3080 Ti      37.11    299.23   334.85
  RTX 3090         31.91    304.51   338.55
LuxCoreRender 2.6 GPU Power Consumption Monitor (Watts, fewer is better)
  GPU              Min      Avg      Max
  RTX 2080 Ti      9.86     182.9    214.28
  RTX 2080 SUPER   11       128.94   147.46
  TITAN RTX        13.65    197.66   229.95
  RTX 4090         12.02    197.56   228.45
  RTX 3070         16.65    133.79   153.36
  RTX 4080         15.32    148.6    171.79
  RTX 3070 Ti      26.41    170.22   191.61
  RTX 3080         30.39    228.18   260.87
  RTX 3080 Ti      37.59    267.85   303.05
  RTX 3090         31.67    269.34   307.45
NAMD CUDA 2.14 GPU Power Consumption Monitor (Watts, fewer is better)
  GPU              Min      Avg      Max
  RTX 2080 Ti      7.4      58.09    267.34
  RTX 2080 SUPER   10.56    116.42   251.97
  TITAN RTX        13.34    133.63   314.33
  RTX 4090         8.53     35.22    191.6
  RTX 3070         15.06    40.01    205.99
  RTX 4080         15.57    77.03    191.75
  RTX 3070 Ti      25.93    131.69   267.23
  RTX 3080         30.36    141.19   296.01
  RTX 3080 Ti      37.26    157.43   319.91
  RTX 3090         31.81    156.55   315.39
SHOC Scalable HeterOgeneous Computing 2020-04-17 GPU Power Consumption Monitor (Watts, fewer is better)
  GPU              Min      Avg      Max
  RTX 2080 Ti      6.76     147.38   253.46
  RTX 2080 SUPER   10.39    103.49   151.15
  TITAN RTX        10.41    152.64   212.74
  RTX 4090         10.32    116.2    170.79
  RTX 3070         15.89    108.13   149.74
  RTX 4080         13.74    88.37    132.45
  RTX 3070 Ti      23.77    145.41   192.2
  RTX 3080         25.34    190.21   249.48
  RTX 3080 Ti      33.51    226.65   295.7
  RTX 3090         28.64    229.42   296.97
SHOC Scalable HeterOgeneous Computing 2020-04-17 GPU Power Consumption Monitor (Watts, fewer is better)
  GPU              Min      Avg      Max
  RTX 2080 Ti      8.42     54.36    129.92
  RTX 2080 SUPER   10.37    43.75    109.64
  TITAN RTX        11.86    57.17    137.24
  RTX 4090         12.83    42.85    69.32
  RTX 3070         16.11    38.64    64.78
  RTX 4080         14.32    36.58    57.86
  RTX 3070 Ti      25.11    66.83    102.38
  RTX 3080         27.79    76.08    119.69
  RTX 3080 Ti      35.95    91.36    143.83
  RTX 3090         29.67    85.79    141.9
SHOC Scalable HeterOgeneous Computing 2020-04-17 GPU Power Consumption Monitor (Watts, fewer is better)
  GPU              Min      Avg      Max
  RTX 2080 Ti      7.88     80.51    230.03
  RTX 2080 SUPER   10.31    77.41    194.03
  TITAN RTX        10.65    92.7     236.45
  RTX 4090         11.57    54.29    126.15
  RTX 3070         16.04    68.81    153.24
  RTX 4080         15.97    52.58    128.2
  RTX 3070 Ti      24.08    99.41    215.02
  RTX 3080         25.57    108.37   229.68
  RTX 3080 Ti      33.85    129.75   254.74
  RTX 3090         29.18    124.6    250.77
SHOC Scalable HeterOgeneous Computing 2020-04-17 GPU Power Consumption Monitor (Watts, fewer is better)
  GPU              Min      Avg      Max
  RTX 2080 Ti      7.4      60.46    265.33
  RTX 2080 SUPER   10.11    57.87    251.23
  TITAN RTX        9.76     70.7     284.33
  RTX 4090         11.81    46.72    100.42
  RTX 3070         16.04    53.47    130.1
  RTX 4080         15.17    45.24    96.79
  RTX 3070 Ti      24.17    80.08    181.58
  RTX 3080         25.67    84.46    177.07
  RTX 3080 Ti      33.89    98.41    196.15
  RTX 3090         28.78    92.74    190.74
SHOC Scalable HeterOgeneous Computing 2020-04-17 GPU Power Consumption Monitor (Watts, fewer is better)
  GPU              Min      Avg      Max
  RTX 2080 Ti      7.04     68.21    167.47
  RTX 2080 SUPER   10.1     53.8     116.14
  TITAN RTX        9.84     72.53    167.15
  RTX 4090         11.24    40.67    72.68
  RTX 3070         15.87    51.11    108.84
  RTX 4080         15.42    45.01    83.41
  RTX 3070 Ti      24.06    82.43    149.28
  RTX 3080         25.09    94.04    170.23
  RTX 3080 Ti      33.29    115.33   196.47
  RTX 3090         27.77    109.36   188.59
SHOC Scalable HeterOgeneous Computing 2020-04-17 GPU Power Consumption Monitor (Watts, fewer is better)
  GPU              Min      Avg      Max
  RTX 2080 Ti      6.76     41.81    63.03
  RTX 2080 SUPER   10.32    37.19    56.11
  TITAN RTX        9.6      44.03    67.54
  RTX 4090         11.4     36.12    55.77
  RTX 3070         15.92    32.97    46.9
  RTX 4080         15.36    29.98    43.82
  RTX 3070 Ti      23.32    55.49    81.34
  RTX 3080         24.64    60.19    90.58
  RTX 3080 Ti      33.12    75.54    111.03
  RTX 3090         27.2     68.8     106.74
FluidX3D 2.3 GPU Power Consumption Monitor (Watts, fewer is better)
  GPU              Min      Avg      Max
  RTX 2080 Ti      6.48     182.27   200.61
  RTX 2080 SUPER   10.11    125.38   136.11
  TITAN RTX        9.21     186.1    210.18
  RTX 4090         11.54    187.83   213.24
  RTX 3070         16.03    129      140.37
  RTX 4080         14.93    140.73   154.04
  RTX 3070 Ti      22.76    195.68   215.34
  RTX 3080         24.64    241      275.82
  RTX 3080 Ti      32.87    282.59   317.66
  RTX 3090         27.11    287.45   329.25
FluidX3D 2.3 GPU Power Consumption Monitor (Watts, fewer is better)
  GPU              Min      Avg      Max
  RTX 2080 Ti      8.87     207.39   243.17
  RTX 2080 SUPER   10.53    147.82   165.47
  TITAN RTX        12.12    213.05   253.08
  RTX 4090         12.05    194.07   246.77
  RTX 3070         16.3     143.67   162.63
  RTX 4080         15.57    154.96   183.54
  RTX 3070 Ti      26.1     212.21   245.16
  RTX 3080         29.06    266.77   319.33
  RTX 3080 Ti      36.57    269.85   327.33
  RTX 3090         29.97    284.23   346.25
FluidX3D 2.3 GPU Power Consumption Monitor (Watts, fewer is better)
  GPU              Min      Avg      Max
  RTX 2080 Ti      9.38     222.17   265.36
  RTX 2080 SUPER   10.59    199.53   232.99
  TITAN RTX        13.28    241.71   285.07
  RTX 4090         11.59    241.32   313.16
  RTX 3070         16.44    194.79   219.83
  RTX 4080         15.32    191.62   230.29
  RTX 3070 Ti      26.33    255.9    289.84
  RTX 3080         30.46    271.72   319.82
  RTX 3080 Ti      36.88    294.92   350.05
  RTX 3090         29.07    286.12   345.02
Phoronix Test Suite v10.8.4