nuc-ubuntu-nvidia-egpu-20.04

Intel Core i7-8809G testing with an Intel NUC8i7HVB (HNKBLi70.86A.0059.2019.1112.1124 BIOS) and an NVIDIA GeForce RTX 2070 SUPER 8GB on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2003237-VE-NUCUBUNTU78
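
To reproduce the comparison locally, a minimal sequence on Ubuntu 20.04 might look like the following sketch (assuming the phoronix-test-suite package from the Ubuntu archive; the result ID is the one quoted above):

  # Install the Phoronix Test Suite from the Ubuntu repositories
  sudo apt-get update
  sudo apt-get install -y phoronix-test-suite

  # Run the same benchmark and compare against this public result file
  phoronix-test-suite benchmark 2003237-VE-NUCUBUNTU78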
Result Identifier: Intel Core i7-8809G - NVIDIA GeForce RTX 2070 SUPER
Date Run: March 23 2020
Test Duration: 7 Minutes


System Details

Processor: Intel Core i7-8809G @ 4.20GHz (4 Cores / 8 Threads)
Motherboard: Intel NUC8i7HVB (HNKBLi70.86A.0059.2019.1112.1124 BIOS)
Chipset: Intel Xeon E3-1200 v6/7th
Memory: 64GB
Disk: 2000GB Samsung SSD 970 EVO 2TB + 1024GB Samsung SSD 970 PRO 1TB
Graphics: NVIDIA GeForce RTX 2070 SUPER 8GB (435/810MHz)
Audio: Realtek ALC700
Monitor: LG HDR 4K
Network: Intel I219-LM + Intel I210 + Intel 8265 / 8275
OS: Ubuntu 20.04
Kernel: 5.4.0-18-generic (x86_64)
Desktop: GNOME Shell 3.35.91
Display Server: X Server 1.20.7
Display Driver: NVIDIA 440.64
OpenGL: 4.6.0
OpenCL: OpenCL 1.1 Mesa 20.0.0 + OpenCL 1.2 CUDA 10.2.141
Vulkan: 1.1.119
Compiler: GCC 9.3.0
File-System: ext4
Screen Resolution: 3840x2160

System Logs
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: intel_pstate powersave - CPU Microcode: 0xca
- GPU Compute Cores: 2560
- Security: itlb_multihit: KVM: Mitigation of Split huge pages + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes SMT vulnerable + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full generic retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling + tsx_async_abort: Not affected
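
Before re-running the comparison on another machine, a few standard commands can confirm that the local GPU and driver stack matches the table above. This is a rough sketch; the clinfo and mesa-utils packages may need to be installed separately:

  uname -r                                                   # kernel release, 5.4.0-18-generic in this result
  nvidia-smi --query-gpu=name,driver_version --format=csv    # GPU model and NVIDIA driver (440.64 here)
  glxinfo -B                                                 # OpenGL renderer and version
  clinfo | grep -i 'device name'                             # OpenCL devices visible to LuxMark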

LuxMark

LuxMark is a multi-platform OpenCL benchmark based on LuxRender. LuxMark supports targeting different OpenCL devices and has multiple scenes available for rendering. LuxMark is a fully open-source OpenCL program with real-world rendering examples. Learn more via the OpenBenchmarking.org test page.

LuxMark 3.1 - OpenCL Device: GPU - Scene: Luxball HDR (Score, more is better)
Intel Core i7-8809G - NVIDIA GeForce RTX 2070 SUPER: 47169 (SE +/- 708.55, N = 3)
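
As a rough reading of the error bar (assuming the reported SE is the standard error of the mean across the N = 3 runs), the implied run-to-run standard deviation is SE * sqrt(N), about 1227 points, or roughly 2.6% of the 47169 average:

  # Back-of-the-envelope check of the run-to-run spread implied by SE = s / sqrt(N)
  awk 'BEGIN { se = 708.55; n = 3; mean = 47169; s = se * sqrt(n); printf "stddev ~ %.0f points (%.1f%% of the mean)\n", s, 100 * s / mean }'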