gpu_hotel

Intel Core i7-6800K testing with an ASUS X99-E WS/USB 3.1 (3502 BIOS) and NVIDIA TITAN X 12GB on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2012304-FI-GPUHOTEL848
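As a sketch of the full comparison workflow on a fresh Ubuntu machine (the apt package name is an assumption based on standard Ubuntu packaging; the suite is also available from phoronix-test-suite.com):

  # Install the Phoronix Test Suite (package name assumed from Ubuntu's
  # universe repository; a .deb and a source tarball are also published).
  sudo apt-get install phoronix-test-suite

  # Fetch this result file and run the same test locally for comparison;
  # the suite will prompt before downloading and installing the test profile.
  phoronix-test-suite benchmark 2012304-FI-GPUHOTEL848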


Run Management

Result Identifier: Intel Core i7-6800K - NVIDIA TITAN X 12GB - ASUS
Date Run: December 30 2020
Test Duration: 7 Minutes


System Details

Processor: Intel Core i7-6800K @ 4.00GHz (6 Cores / 12 Threads)
Motherboard: ASUS X99-E WS/USB 3.1 (3502 BIOS)
Chipset: Intel Xeon E7 v4/Xeon
Memory: 32GB
Disk: 1000GB Western Digital WD10EZEX-08W + 1500GB MARVELL Raid VD
Graphics: NVIDIA TITAN X 12GB (1417/5005MHz)
Audio: Realtek ALC1150
Monitor: C32R50x
Network: Intel I218-LM + Intel I210 + Broadcom BCM4352 802.11ac
OS: Ubuntu 20.04
Kernel: 5.4.0-58-generic (x86_64)
Desktop: GNOME Shell 3.36.4
Display Server: X Server 1.20.8
Display Driver: NVIDIA 455.38
OpenGL: 4.6.0
OpenCL: OpenCL 1.2 CUDA 11.1.110
Vulkan: 1.2.142
Compiler: GCC 9.3.0
File-System: ext4
Screen Resolution: 1920x1080

System Logs

Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate powersave - CPU Microcode: 0xb000038
Graphics Notes: GPU Compute Cores: 3584
Security Notes: itlb_multihit: KVM: Vulnerable + l1tf: Mitigation of PTE Inversion + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full generic retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling + srbds: Not affected + tsx_async_abort: Mitigation of Clear buffers; SMT vulnerable

LuxMark

LuxMark is a multi-platform OpenCL benchmark built on LuxRender. It supports targeting different OpenCL devices and offers multiple scenes for rendering. LuxMark is a fully open-source OpenCL program with real-world rendering examples. Learn more via the OpenBenchmarking.org test page.
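As a rough sketch of what the test suite invokes under the hood, LuxMark 3.1 can be run headlessly against a chosen OpenCL device and scene. The flag spellings and the mode/scene tokens below are assumptions based on LuxMark v3's command-line options, so verify them against your build's help output:

  # Render the Hotel scene on the GPU OpenCL device, perform a single
  # benchmark pass, and print the score (flags assumed from LuxMark v3).
  ./luxmark --mode=BENCHMARK_OCL_GPU --scene=HOTEL --single-run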

LuxMark 3.1
OpenCL Device: GPU - Scene: Hotel
OpenBenchmarking.org Score, More Is Better

Intel Core i7-6800K - NVIDIA TITAN X 12GB - ASUS: 5894 (SE +/- 1.86, N = 3)