kali

VMware testing on Kali 2021.1 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2105262-IB-KALI3039702
Run Details

Result identifier: kalivmware
Test date: May 25 2021
Run duration: 1 Day, 9 Hours, 5 Minutes


System Details (kalivmware, Phoronix Test Suite 10.8.3)

Processor: Intel Xeon E-2288G (8 Cores)
Motherboard: Intel 440BX (6.00 BIOS)
Chipset: Intel 440BX/ZX/DX
Memory: 8 GB + 4 GB DRAM
Disk: 94GB VMware Virtual S
Graphics: SVGA3D; build
Audio: Ensoniq ES1371/ES1373
Network: Intel 82545EM
OS: Kali 2021.1
Kernel: 5.10.0-kali7-amd64 (x86_64)
Desktop: Xfce 4.16
Display Server: X Server 1.20.11
OpenGL: 3.3 Mesa 20.3.4
OpenCL: OpenCL 1.2 pocl 1.6 +Asserts LLVM 9.0.1 RELOC SLEEF DISTRO POCL_DEBUG
Vulkan: 1.0.2
Compiler: GCC 10.2.1 20210110 + Clang 11.0.1-2
File-System: ext4
Screen Resolution: 1280x1024
System Layer: VMware

System Notes
- Transparent Huge Pages: always
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-mutex --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-Km9U7s/gcc-10-10.2.1/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-Km9U7s/gcc-10-10.2.1/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- CPU Microcode: 0xde
- Gallium3D XA
- Security mitigations: itlb_multihit: KVM: Mitigation of VMX unsupported; l1tf: Not affected; mds: Not affected; meltdown: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling; srbds: Mitigation of TSX disabled; tsx_async_abort: Not affected

Results Summary (kalivmware)

Libplacebo (FPS, more is better):
  deband_heavy: 16.78
  polar_nocompute: 8.32
  hdr_peakdetect: 12.50
  av1_grain_lap: 17.56

NCNN, Vulkan GPU (ms, fewer is better):
  mobilenet: 17.86
  mobilenet-v2: 5.42
  mobilenet-v3: 4.24
  shufflenet-v2: 6.43
  mnasnet: 4.36
  efficientnet-b0: 7.49
  blazeface: 1.80
  googlenet: 15.31
  vgg16: 69.96
  resnet18: 15.71
  alexnet: 14.86
  resnet50: 32.12
  yolov4-tiny: 25.64
  squeezenet_ssd: 22.19
  regnety_400m: 11.60

RealSR-NCNN (Seconds, fewer is better):
  4x - TAA: No: 3506.465
  4x - TAA: Yes: 28008.137

vkpeak (more is better):
  fp32-scalar: 51.97 GFLOPS
  fp32-vec4: 183.51 GFLOPS
  fp64-scalar: 50.89 GFLOPS
  fp64-vec4: 83.21 GFLOPS
  int32-scalar: 37.64 GIOPS
  int32-vec4: 100.24 GIOPS

Waifu2x-NCNN Vulkan (Seconds, fewer is better):
  2x - Denoise: 3 - TAA: No: 159.749
  2x - Denoise: 3 - TAA: Yes: 1271.621

Libplacebo

Libplacebo is a multimedia rendering library based on the core rendering code of the MPV player. The libplacebo benchmark relies on the Vulkan API and tests various primitives. Learn more via the OpenBenchmarking.org test page.

Libplacebo 2.72.2 (FPS, more is better):

  Test: deband_heavy: 16.78 (SE +/- 0.13, N = 14)
  Test: polar_nocompute: 8.32 (SE +/- 0.03, N = 14)
  Test: hdr_peakdetect: 12.50 (SE +/- 0.05, N = 14)
  Test: av1_grain_lap: 17.56 (SE +/- 0.05, N = 14)

Compiler options (g++): -lm -lglslang -lHLSL -lOGLCompiler -lOSDependent -lSPIRV -lSPVRemapper -lGenericCodeGen -lMachineIndependent -lSPIRV-Tools -lSPIRV-Tools-opt -lpthread -pthread -pipe -std=c++11 -fvisibility=hidden -fPIC -MD -MQ -MF
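Each result in this file reports a standard error of the mean over N runs. A minimal sketch of how that statistic is computed (the per-run FPS samples below are illustrative, not taken from this result file):

```python
import math
import statistics

def standard_error(samples):
    """Standard error of the mean: sample standard deviation / sqrt(n)."""
    return statistics.stdev(samples) / math.sqrt(len(samples))

# Hypothetical per-run FPS values for a 3-run test
runs = [16.6, 16.8, 17.0]
mean = statistics.mean(runs)
se = standard_error(runs)
print(f"{mean:.2f} SE +/- {se:.2f}, N = {len(runs)}")
```

A small SE relative to the mean (as in the Libplacebo results above) indicates the run-to-run variance was low.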

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: Vulkan GPU (ms, fewer is better):

  mobilenet: 17.86 (SE +/- 0.17, N = 7; MIN: 16.73 / MAX: 27.59)
  mobilenet-v2: 5.42 (SE +/- 0.10, N = 7; MIN: 4.7 / MAX: 10.16)
  mobilenet-v3: 4.24 (SE +/- 0.07, N = 7; MIN: 3.78 / MAX: 6.44)
  shufflenet-v2: 6.43 (SE +/- 0.28, N = 7; MIN: 5 / MAX: 9.78)
  mnasnet: 4.36 (SE +/- 0.06, N = 6; MIN: 3.73 / MAX: 6.34)
  efficientnet-b0: 7.49 (SE +/- 0.11, N = 7; MIN: 6.41 / MAX: 9.87)
  blazeface: 1.80 (SE +/- 0.10, N = 7; MIN: 1.43 / MAX: 4.33)
  googlenet: 15.31 (SE +/- 0.28, N = 7; MIN: 13.23 / MAX: 23.61)
  vgg16: 69.96 (SE +/- 0.04, N = 7; MIN: 68.02 / MAX: 109.86)
  resnet18: 15.71 (SE +/- 0.24, N = 7; MIN: 14.42 / MAX: 24.43)
  alexnet: 14.86 (SE +/- 0.42, N = 7; MIN: 13.75 / MAX: 138.87)
  resnet50: 32.12 (SE +/- 0.41, N = 7; MIN: 27.57 / MAX: 48.09)
  yolov4-tiny: 25.64 (SE +/- 0.08, N = 7; MIN: 24.34 / MAX: 42.68)
  squeezenet_ssd: 22.19 (SE +/- 0.04, N = 7; MIN: 21.39 / MAX: 55.46)
  regnety_400m: 11.60 (SE +/- 0.13, N = 7; MIN: 10.22 / MAX: 18.49)

Compiler options (g++): -O3 -rdynamic -lgomp -lpthread
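Since NCNN reports per-inference latency in milliseconds, a rough single-stream throughput figure follows directly from the reciprocal. Using the mobilenet latency from this run:

```python
# Vulkan GPU mobilenet latency from this result file, in milliseconds
latency_ms = 17.86

# Single-stream throughput: inferences completed per second
throughput = 1000.0 / latency_ms
print(f"{throughput:.1f} inferences/sec")
```

This is only a single-stream estimate; batched or pipelined inference can achieve higher aggregate throughput than the reciprocal of per-call latency suggests.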

RealSR-NCNN

RealSR-NCNN is an NCNN neural network implementation of the RealSR project, accelerated using the Vulkan API. RealSR performs real-world super resolution via kernel estimation and noise injection. NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. This test profile times how long it takes to increase the resolution of a sample image by a scale of 4x with Vulkan. Learn more via the OpenBenchmarking.org test page.

RealSR-NCNN 20200818 (Seconds, fewer is better):

  Scale: 4x - TAA: No: 3506.47 (SE +/- 1.55, N = 3)
  Scale: 4x - TAA: Yes: 28008.14 (SE +/- 1.33, N = 3)
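A quick ratio of the two RealSR-NCNN results above shows how much enabling TAA cost in this particular run (values copied from the result file):

```python
# RealSR-NCNN 4x upscale times from this result file, in seconds
time_no_taa = 3506.47   # TAA: No
time_taa = 28008.14     # TAA: Yes

slowdown = time_taa / time_no_taa
print(f"TAA slowdown: {slowdown:.2f}x")
```

On this VMware guest, TAA made the 4x upscale roughly eight times slower.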

vkpeak

Vkpeak is a Vulkan compute benchmark inspired by OpenCL's clpeak. It measures Vulkan compute performance for FP16 / FP32 / FP64 / INT16 / INT32 in both scalar and vec4 form. Learn more via the OpenBenchmarking.org test page.

vkpeak 20210424 (more is better):

  fp32-scalar: 51.97 GFLOPS (SE +/- 9.43, N = 10)
  fp32-vec4: 183.51 GFLOPS (SE +/- 33.36, N = 10)
  fp64-scalar: 50.89 GFLOPS (SE +/- 9.24, N = 10)
  fp64-vec4: 83.21 GFLOPS (SE +/- 15.14, N = 10)
  int32-scalar: 37.64 GIOPS (SE +/- 6.83, N = 10)
  int32-vec4: 100.24 GIOPS (SE +/- 18.26, N = 10)
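The scalar and vec4 figures above can be compared directly to gauge how much the vectorized path gains; using the fp32 numbers from this run (note the large standard errors, so the ratio is indicative only):

```python
# fp32 throughput from this result file, in GFLOPS
fp32_scalar = 51.97
fp32_vec4 = 183.51

speedup = fp32_vec4 / fp32_scalar
print(f"fp32 vec4 vs scalar: {speedup:.2f}x")
```

A ratio near 4x would suggest the vec4 path keeps all four lanes busy; here it lands around 3.5x.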

Waifu2x-NCNN Vulkan

Waifu2x-NCNN is an NCNN neural network implementation of the Waifu2x converter project, accelerated using the Vulkan API. NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. This test profile times how long it takes to increase the resolution of a sample image with Vulkan. Learn more via the OpenBenchmarking.org test page.

Waifu2x-NCNN Vulkan 20200818 (Seconds, fewer is better):

  Scale: 2x - Denoise: 3 - TAA: No: 159.75 (SE +/- 0.32, N = 3)
  Scale: 2x - Denoise: 3 - TAA: Yes: 1271.62 (SE +/- 0.21, N = 3)