pts_tf_lite1
Intel Core i5-7500 testing with a Gigabyte B250M-D3H-CF (F10 BIOS) and NVIDIA GeForce RTX 2060 SUPER 8GB on Ubuntu 20.04 via the Phoronix Test Suite.
Intel Core i5-7500 - NVIDIA GeForce RTX 2060 SUPER
Processor: Intel Core i5-7500 @ 3.80GHz (4 Cores), Motherboard: Gigabyte B250M-D3H-CF (F10 BIOS), Chipset: Intel Xeon E3-1200 v6/7th + B250, Memory: 32768MB, Disk: 14GB INTEL MEMPEK1J016GAD + 250GB Samsung SSD 840 + 2000GB TOSHIBA DT01ACA2 + 250GB Samsung SSD 850 + 64GB USB 2.0 FD, Graphics: NVIDIA GeForce RTX 2060 SUPER 8GB (435/405MHz), Audio: Realtek ALC892, Network: Intel I219-V
OS: Ubuntu 20.04, Kernel: 5.4.0-45-generic (x86_64), Desktop: GNOME Shell 3.36.4, Display Server: X Server 1.20.8, Display Driver: NVIDIA 440.100, OpenGL: 4.6.0, OpenCL: OpenCL 2.1 + OpenCL 1.2 CUDA 10.2.185, Compiler: GCC 9.3.0 + CUDA 10.1, File-System: ext4, Screen Resolution: 3600x1080
Processor Notes: Scaling Governor: intel_pstate powersave - CPU Microcode: 0xd6
Security Notes: itlb_multihit: KVM: Mitigation of Split huge pages + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes SMT disabled + mds: Mitigation of Clear buffers; SMT disabled + meltdown: Mitigation of PTI + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full generic retpoline IBPB: conditional IBRS_FW STIBP: disabled RSB filling + srbds: Mitigation of Microcode + tsx_async_abort: Mitigation of Clear buffers; SMT disabled
TensorFlow Lite
This is a benchmark of the TensorFlow Lite implementation. Current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.
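For readers unfamiliar with the metric, the sketch below illustrates how an "average inference time" figure like the one this profile reports is typically computed: warm-up runs are discarded, then a fixed number of timed runs are averaged. This is a minimal stand-in, not the actual PTS harness; the real benchmark times the TensorFlow Lite interpreter's invoke step on a model, whereas here a placeholder CPU workload is timed instead.

```python
# Hedged sketch of the "average inference time" metric reported by this
# test profile. The real benchmark times TensorFlow Lite interpreter calls;
# run_inference() below is a placeholder CPU-bound workload.
import time
import statistics

def run_inference():
    # Stand-in for an interpreter invoke; any deterministic CPU work suffices.
    return sum(i * i for i in range(10_000))

def average_inference_ms(runs=20, warmup=3):
    # Warm-up iterations are discarded so cold-cache effects don't skew timing.
    for _ in range(warmup):
        run_inference()
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        run_inference()
        samples.append((time.perf_counter() - start) * 1000.0)
    return statistics.mean(samples)

if __name__ == "__main__":
    print(f"Average inference time: {average_inference_ms():.3f} ms")
```

Lower averages are better; PTS repeats the whole measurement several times and also reports run-to-run deviation.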
Testing initiated at 3 September 2020 20:17 by user dgt.