ai-overclock
Intel Core i7-8700K testing with an ASUS ROG STRIX Z370-E GAMING (2801 BIOS) and eVGA NVIDIA GeForce RTX 3060 12GB on Arch rolling via the Phoronix Test Suite.
eVGA NVIDIA GeForce RTX 3060
Kernel Notes: Transparent Huge Pages: always
Compiler Notes: --disable-libssp --disable-libstdcxx-pch --disable-libunwind-exceptions --disable-werror --enable-__cxa_atexit --enable-cet=auto --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-default-ssp --enable-gnu-indirect-function --enable-gnu-unique-object --enable-install-libiberty --enable-languages=c,c++,ada,fortran,go,lto,objc,obj-c++,d --enable-lto --enable-multilib --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-isl --with-linker-hash-style=gnu
Processor Notes: Scaling Governor: intel_pstate powersave - CPU Microcode: 0xea - Thermald 2.4.6
Graphics Notes: BAR1 / Visible vRAM Size: 256 MiB
Security Notes: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes SMT vulnerable + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full generic retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling + srbds: Mitigation of Microcode + tsx_async_abort: Mitigation of TSX disabled
Intel Core i7-8700K
Processor: Intel Core i7-8700K @ 4.90GHz (6 Cores / 12 Threads), Motherboard: ASUS ROG STRIX Z370-E GAMING (2801 BIOS), Chipset: Intel 8th Gen Core, Memory: 32GB, Disk: 256GB ADATA SX7000NP + 256GB XPG GAMMIX S5 + 1000GB CT1000MX500SSD1 + 2000GB Seagate ST2000NM0033-9ZM + 256GB Toshiba MKNSSDRE256GB + 2000GB Western Digital WD20EARX-00Z, Graphics: eVGA NVIDIA GeForce RTX 3060 12GB, Audio: Realtek ALC1220, Monitor: VN279 + VG270U P, Network: Intel I219-V
OS: Arch rolling, Kernel: 5.15.2-zen1-1-zen (x86_64), Desktop: KDE Plasma 5.23.3, Display Server: X Server 1.21.1.1, Display Driver: NVIDIA 495.44, OpenGL: 4.6.0, OpenCL: OpenCL 3.0 CUDA 11.5.100, Vulkan: 1.2.186, Compiler: GCC 11.1.0 + Clang 13.0.0, File-System: ext4, Screen Resolution: 2560x1440
Kernel Notes: Transparent Huge Pages: always
Compiler Notes: --disable-libssp --disable-libstdcxx-pch --disable-libunwind-exceptions --disable-werror --enable-__cxa_atexit --enable-cet=auto --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-default-ssp --enable-gnu-indirect-function --enable-gnu-unique-object --enable-install-libiberty --enable-languages=c,c++,ada,fortran,go,lto,objc,obj-c++,d --enable-lto --enable-multilib --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-isl --with-linker-hash-style=gnu
Processor Notes: Scaling Governor: intel_pstate powersave - CPU Microcode: 0xea - Thermald 2.4.6
Security Notes: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes SMT vulnerable + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full generic retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling + srbds: Mitigation of Microcode + tsx_async_abort: Mitigation of TSX disabled
Counter-Strike: Global Offensive
This is a benchmark of Valve's Counter-Strike: Global Offensive game. The test profile assumes you have a Steam account, that Steam is installed on the system, and that Counter-Strike: Global Offensive is already installed. It automates launching the game and playing back a standardized timedemo. Learn more via the OpenBenchmarking.org test page.
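Since the profile automates the game launch and timedemo playback itself, reproducing this result mostly reduces to one Phoronix Test Suite invocation. A minimal sketch, assuming Steam and CS:GO are already installed (the `pts/csgo` profile name is the upstream OpenBenchmarking.org one):

```shell
# Minimal sketch: run the CS:GO test profile through the Phoronix
# Test Suite, which launches the game and plays its standardized
# timedemo. Assumes Steam and CS:GO are already installed.
run_csgo_benchmark() {
    phoronix-test-suite benchmark pts/csgo
}
```

Calling `run_csgo_benchmark` interactively will prompt for the usual PTS run options before executing the timedemo.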
Timed Linux Kernel Compilation
This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested. Learn more via the OpenBenchmarking.org test page.
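The profile downloads and configures the kernel sources itself; as a rough illustration of what is being timed, a minimal shell sketch, assuming you are at the top of an unpacked kernel source tree:

```shell
# Sketch of the measured operation: configure the kernel with the
# architecture's defaults, then time a full parallel build.
timed_defconfig_build() {
    make defconfig              # generate the default configuration
    time make -j"$(nproc)"      # wall-clock build time is the result
}
```

Invoke `timed_defconfig_build` from the kernel tree root; the reported wall-clock time corresponds to this benchmark's result.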
FLAC Audio Encoding
This test times how long it takes to encode a sample WAV file to FLAC format ten times. Learn more via the OpenBenchmarking.org test page.
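As a rough sketch of what is measured (the actual profile supplies its own sample WAV; the `sample.wav` input name, the output path, and the `--best` compression level here are illustrative assumptions):

```shell
# Sketch: encode the same WAV file to FLAC ten times in a row and
# time the whole run, mirroring what this test profile measures.
encode_ten_times() {
    local input=$1
    for i in $(seq 1 10); do
        # -f overwrites the previous output, -o names the output file
        flac --best -f -o /tmp/out.flac "$input" 2>/dev/null
    done
}

# Time the ten encodes when a sample file is present.
if [ -f sample.wav ]; then time encode_ten_times sample.wav; fi
```

The elapsed wall-clock time of the ten encodes is the figure this test reports.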
eVGA NVIDIA GeForce RTX 3060
Testing initiated at 17 November 2021 21:59 by user sef7.
Intel Core i7-8700K
Testing initiated at 17 November 2021 22:06 by user sef7.