workingnow?

Intel Core i9-12900KF testing with a Gigabyte Z690 UD DDR4 (F7 BIOS) and MSI NVIDIA GeForce RTX 4090 24GB on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2305189-NE-WORKINGNO89

Run Management

Result Identifier: workingnow
Date: May 18 2023
Run Test Duration: 7 Hours, 10 Minutes


workingnow? Benchmarks - System Information

  Processor: Intel Core i9-12900KF @ 5.10GHz (16 Cores / 24 Threads)
  Motherboard: Gigabyte Z690 UD DDR4 (F7 BIOS)
  Chipset: Intel Device 7aa7
  Memory: 32GB
  Disk: 2000GB KINGSTON SNVS2000G
  Graphics: MSI NVIDIA GeForce RTX 4090 24GB
  Audio: Realtek ALC897
  Monitor: DELL P2419H
  Network: Realtek RTL8125 2.5GbE + Intel Wi-Fi 6 AX200
  OS: Ubuntu 22.04
  Kernel: 5.15.0-71-generic (x86_64)
  Desktop: LXQt 0.17.0
  Display Server: X Server 1.21.1.3
  Display Driver: NVIDIA 530.30.02
  OpenGL: 4.6.0
  OpenCL: OpenCL 3.0 CUDA 12.1.68
  Vulkan: 1.3.236
  Compiler: GCC 11.3.0 + CUDA 12.1
  File-System: ext4
  Screen Resolution: 1920x1080

System Logs:
  - Transparent Huge Pages: madvise
  - Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
  - Scaling Governor: intel_pstate powersave (EPP: balance_performance)
  - CPU Microcode: 0x2c
  - Thermald 2.4.9
  - BAR1 / Visible vRAM Size: 256 MiB
  - vBIOS Version: 95.02.18.08.01
  - GPU Compute Cores: 16384
  - Python 3.10.6
  - Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected

workingnow? Benchmarks - Results Overview: 113 results in total; each result is shown individually below and indexed under "113 Results Shown" at the end of this file.

NeatBench

NeatBench is a benchmark of the cross-platform Neat Video software, with CPU and optional GPU (OpenCL / CUDA) support. Learn more via the OpenBenchmarking.org test page.

NeatBench 5, Acceleration: GPU (FPS, more is better)
workingnow: 4090 (SE +/- 0.00, N = 3)

MandelGPU

MandelGPU is an OpenCL benchmark; this test runs the OpenCL float4 rendering kernel with a maximum of 4096 iterations. Learn more via the OpenBenchmarking.org test page.

MandelGPU 1.3pts1, OpenCL Device: GPU (Samples/sec, more is better)
workingnow: 1035158945.7 (SE +/- 5134240.32, N = 3)
Compiled with: gcc -O3 -lm -ftree-vectorize -funroll-loops -lglut -lOpenCL -lGL
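
For context, the per-sample work is the classic Mandelbrot escape-time iteration. The sketch below is a minimal Python illustration of that loop, not the benchmark's actual OpenCL float4 kernel; only the 4096-iteration cap is taken from the description above.

```python
# Minimal sketch of the escape-time iteration that MandelGPU parallelizes on
# the GPU. Illustrative Python only, not the actual OpenCL float4 kernel;
# MAX_ITER mirrors the 4096-iteration cap noted above.
MAX_ITER = 4096

def mandel_iterations(cx: float, cy: float) -> int:
    """Return how many iterations z = z^2 + c takes to escape |z| > 2."""
    zx, zy = 0.0, 0.0
    for i in range(MAX_ITER):
        if zx * zx + zy * zy > 4.0:          # escaped the radius-2 circle
            return i
        zx, zy = zx * zx - zy * zy + cx, 2.0 * zx * zy + cy
    return MAX_ITER

# One "sample" corresponds to evaluating a point like this; Samples/sec is
# how many such evaluations the device completes per second.
print(mandel_iterations(-0.7, 0.3))
```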

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4, Acceleration: OpenCL GPU (M samples/s, more is better), workingnow:
  Supercar: 80.00 (SE +/- 0.03, N = 3)
  Bedroom: 35.76 (SE +/- 0.06, N = 3)

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

Blender 3.5, Compute: NVIDIA OptiX (Seconds, fewer is better), workingnow:
  Pabellon Barcelona: 8.09 (SE +/- 0.01, N = 3)
  Barbershop: 30.06 (SE +/- 0.04, N = 3)
  Fishy Cat: 5.32 (SE +/- 0.06, N = 14)
  Classroom: 7.14 (SE +/- 0.03, N = 3)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729, Target: Vulkan GPU (ms, fewer is better), workingnow:
  resnet18: 0.93 (SE +/- 0.01, N = 3; MIN 0.89 / MAX 1.5)
  FastestDet: 2.18 (SE +/- 0.03, N = 5; MIN 1.68 / MAX 24.01)
  regnety_400m: 1.44 (SE +/- 0.02, N = 5; MIN 1.37 / MAX 20.22)
  squeezenet_ssd: 2.32 (SE +/- 0.00, N = 5; MIN 2.28 / MAX 2.71)
  resnet50: 1.54 (SE +/- 0.03, N = 5; MIN 1.49 / MAX 32.21)
  alexnet: 1.03 (SE +/- 0.01, N = 5; MIN 0.99 / MAX 5.66)
  blazeface: 0.79 (SE +/- 0.01, N = 5; MIN 0.75 / MAX 4.92)
  shufflenet-v2: 1.20 (SE +/- 0.01, N = 5; MIN 1.12 / MAX 5.99)
  mobilenet-v3 (Target: Vulkan GPU-v3-v3): 1.23 (SE +/- 0.01, N = 5; MIN 1.19 / MAX 5.32)
  mobilenet-v2 (Target: Vulkan GPU-v2-v2): 0.95 (SE +/- 0.01, N = 5; MIN 0.91 / MAX 18.22)
  mobilenet: 2.99 (SE +/- 0.03, N = 5; MIN 2.89 / MAX 14.66)
Compiled with: g++ -O3 -rdynamic -lgomp -lpthread

Caffe

This is a benchmark of the Caffe deep learning framework; it currently supports the AlexNet and GoogleNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

Caffe 2020-02-13, Acceleration: NVIDIA CUDA (Milli-Seconds, fewer is better), workingnow:
  GoogleNet, 1000 iterations: 16459.9 (SE +/- 35.06, N = 3)
  GoogleNet, 200 iterations: 3303.73 (SE +/- 3.41, N = 3)
  GoogleNet, 100 iterations: 1661.77 (SE +/- 7.59, N = 3)
  AlexNet, 1000 iterations: 4291.08 (SE +/- 2.18, N = 3)
  AlexNet, 200 iterations: 870.77 (SE +/- 2.69, N = 3)
  AlexNet, 100 iterations: 443.70 (SE +/- 2.48, N = 3)
Compiled with: g++ -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lcrypto -lcurl -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

GROMACS

This is a test of the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package with the water_GMX50 data. This test profile allows selecting between CPU- and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.

GROMACS 2023, Implementation: NVIDIA CUDA GPU - Input: water_GMX50_bare (Ns Per Day, more is better)
workingnow: 41.71 (SE +/- 0.04, N = 3)
Compiled with: g++ -O3

ViennaCL

ViennaCL is an open-source linear algebra library written in C++ with support for OpenCL and OpenMP. This test profile makes use of ViennaCL's built-in benchmarks. Learn more via the OpenBenchmarking.org test page.

ViennaCL 1.7.1, workingnow (more is better; compiled with g++ -fopenmp -O3 -rdynamic -lOpenCL):
  OpenCL BLAS (GFLOPs/s for dGEMM, GB/s otherwise):
    dGEMM-TT: 1380 (SE +/- 0.00, N = 3)
    dGEMM-TN: 1337 (SE +/- 3.33, N = 3)
    dGEMM-NT: 1320 (SE +/- 0.00, N = 3)
    dGEMM-NN: 1190 (SE +/- 0.00, N = 3)
    dGEMV-T: 450 (SE +/- 0.33, N = 3)
    dGEMV-N: 224 (SE +/- 0.33, N = 3)
    dDOT: 732 (SE +/- 0.88, N = 3)
    dAXPY: 780 (SE +/- 0.33, N = 3)
    dCOPY: 667 (SE +/- 0.33, N = 3)
    sDOT: 460 (SE +/- 0.33, N = 3)
    sAXPY: 600 (SE +/- 0.33, N = 3)
    sCOPY: 487 (SE +/- 0.67, N = 3)
  CPU BLAS (GFLOPs/s for dGEMM, GB/s otherwise):
    dGEMM-TT: 95.8 (SE +/- 0.17, N = 3)
    dGEMM-NT: 84.3 (SE +/- 2.74, N = 3)
    dGEMM-NN: 79.5 (SE +/- 1.59, N = 3)
    dGEMV-T: 39.8 (SE +/- 0.12, N = 3)
    dDOT: 33.7 (SE +/- 0.97, N = 3)
    dAXPY: 33.6 (SE +/- 0.09, N = 3)
    dCOPY: 28.0 (SE +/- 0.09, N = 3)
    sDOT: 51.1 (SE +/- 0.36, N = 3)
    sCOPY: 41.6 (SE +/- 0.03, N = 3)
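
As a reference point for reading the dGEMM figures above, GEMM throughput in GFLOPs/s is conventionally derived as roughly 2*N^3 floating-point operations divided by the kernel time. A minimal sketch follows; the matrix size and timing are illustrative assumptions, not the values ViennaCL itself uses.

```python
# Sketch: how a GEMM GFLOPs/s figure is conventionally derived.
# N and the timing below are illustrative assumptions, not values taken
# from the ViennaCL benchmark itself.
import time
import numpy as np

N = 2048                      # assumed square-matrix dimension
A = np.random.rand(N, N)
B = np.random.rand(N, N)

start = time.perf_counter()
C = A @ B                     # double-precision GEMM via NumPy/BLAS
elapsed = time.perf_counter() - start

flops = 2.0 * N**3            # ~N^3 multiplies + ~N^3 additions
print(f"dGEMM: {flops / elapsed / 1e9:.1f} GFLOPs/s over {elapsed:.3f} s")
```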

FinanceBench

FinanceBench is a collection of financial program benchmarks with support for GPU benchmarking via OpenCL and CPU benchmarking with OpenMP. The FinanceBench test cases cover the Black-Scholes-Merton process with an analytic European option engine, the QMC (Sobol) Monte-Carlo method (equity option example), a fixed-rate bond with a flat forward curve, and a repo (securities repurchase agreement). FinanceBench was originally written by the Cavazos Lab at the University of Delaware. Learn more via the OpenBenchmarking.org test page.

FinanceBench 2016-07-25, Benchmark: Black-Scholes OpenCL (ms, fewer is better)
workingnow: 2.815 (SE +/- 0.001, N = 3)
Compiled with: g++ -O3 -march=native -fopenmp
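
Since this benchmark exercises the analytic European option path, here is a minimal Python sketch of the closed-form Black-Scholes call price that such a workload evaluates in bulk; the inputs are illustrative assumptions, not FinanceBench's actual test data.

```python
# Minimal sketch of the closed-form Black-Scholes call price evaluated in
# bulk by a "Black-Scholes" GPU benchmark. All inputs are illustrative
# assumptions, not FinanceBench's data set.
from math import erf, exp, log, sqrt

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """European call price for spot S, strike K, maturity T (years),
    risk-free rate r, and volatility sigma."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

print(black_scholes_call(S=100.0, K=105.0, T=1.0, r=0.03, sigma=0.2))
```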

LuxCoreRender

LuxCoreRender is an open-source 3D physically based renderer formerly known as LuxRender. LuxCoreRender supports CPU-based rendering as well as GPU acceleration via OpenCL, NVIDIA CUDA, and NVIDIA OptiX interfaces. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.6, Acceleration: GPU (M samples/sec, more is better), workingnow:
  Rainbow Colors and Prism: 44.73 (SE +/- 0.04, N = 3; MIN 39.81 / MAX 49.16)
  LuxCore Benchmark: 19.75 (SE +/- 0.06, N = 3; MIN 7.72 / MAX 24.83)
  Orange Juice: 20.04 (SE +/- 0.02, N = 3; MIN 17.15 / MAX 27.53)
  Danish Mood: 18.38 (SE +/- 0.13, N = 3; MIN 4.67 / MAX 22.55)
  DLSC: 25.86 (SE +/- 0.02, N = 3; MIN 24.57 / MAX 26.16)

ArrayFire

ArrayFire is a GPU and CPU numeric processing library; this test uses the built-in CPU and OpenCL ArrayFire benchmarks. Learn more via the OpenBenchmarking.org test page.

ArrayFire 3.7, Test: Conjugate Gradient OpenCL (ms, fewer is better)
workingnow: 0.8472 (SE +/- 0.0012, N = 3)
Compiled with: g++ -rdynamic

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.28, Backend: OpenCL (Nodes Per Second, more is better)
workingnow: 21422 (SE +/- 222.66, N = 9)
Compiled with: g++ -flto -pthread

clpeak

Clpeak is designed to test the peak capabilities of OpenCL devices. Learn more via the OpenBenchmarking.org test page.

clpeak 1.1.2, workingnow (more is better; compiled with g++ -O3):
  Global Memory Bandwidth: 873.43 GBPS (SE +/- 0.45, N = 3)
  Double-Precision Double: 1434.39 GFLOPS (SE +/- 1.74, N = 3)
  Single-Precision Float: 80554.10 GFLOPS (SE +/- 124.24, N = 3)
  Integer Compute INT: 41343.87 GIOPS (SE +/- 1.32, N = 3)
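
The single-precision figure sits close to the card's theoretical peak, which is commonly estimated as 2 FLOPs (one fused multiply-add) per core per clock. The sketch below uses the 16384 compute cores reported in the system logs and an assumed ~2.52 GHz boost clock; the clock is an assumption, not something reported by clpeak.

```python
# Back-of-the-envelope check of the Single-Precision Float result above.
# Cores come from the system logs (16384); the boost clock is an assumed
# figure for this card, not something reported by clpeak.
cores = 16384
boost_clock_ghz = 2.52          # assumption
flops_per_core_per_clock = 2    # one FMA counts as 2 FLOPs

peak_gflops = cores * boost_clock_ghz * flops_per_core_per_clock
print(f"Theoretical FP32 peak: {peak_gflops:.0f} GFLOPS")   # ~82575 GFLOPS
measured = 80554.10
print(f"clpeak reached {measured / peak_gflops:.0%} of that estimate")
```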

FAHBench

FAHBench is a Folding@Home benchmark on the GPU. Learn more via the OpenBenchmarking.org test page.

FAHBench 2.3.2 (Ns Per Day, more is better)
workingnow: 437.38 (SE +/- 0.79, N = 3)

OctaneBench

OctaneBench is a test of OctaneRender on the GPU and requires the use of NVIDIA CUDA. Learn more via the OpenBenchmarking.org test page.

OctaneBench 2020.1, Total Score (more is better)
workingnow: 1312.60

VkResample

VkResample is a Vulkan-based image upscaling library based on VkFFT. The sample test upscales a 4K image to 8K using Vulkan-based GPU acceleration. Learn more via the OpenBenchmarking.org test page.

VkResample 1.0, Upscale: 2x (ms, fewer is better; compiled with g++ -O3), workingnow:
  Single precision: 7.747 (SE +/- 0.002, N = 3)
  Double precision: 54.18 (SE +/- 0.02, N = 3)

NAMD CUDA

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. This version of the NAMD test profile uses CUDA GPU acceleration. Learn more via the OpenBenchmarking.org test page.

NAMD CUDA 2.14, ATPase Simulation - 327,506 Atoms (days/ns, fewer is better)
workingnow: 0.13372 (SE +/- 0.00029, N = 3)
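
NAMD reports days/ns (lower is better) while GROMACS earlier in this file reports ns/day; the two metrics are reciprocals, as the small sketch below shows. The two runs simulate different systems, so the converted figure is only a unit illustration, not a cross-benchmark comparison.

```python
# Converting NAMD's days/ns metric into the ns/day form used by GROMACS.
days_per_ns = 0.13372            # NAMD CUDA result above
ns_per_day = 1.0 / days_per_ns   # reciprocal relationship
print(f"{ns_per_day:.2f} ns/day")  # ~7.48 ns/day
```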

cl-mem

A basic OpenCL memory benchmark. Learn more via the OpenBenchmarking.org test page.

cl-mem 2017-01-13 (GB/s, more is better; compiled with gcc -O2 -flto -lOpenCL), workingnow:
  Write: 806.8 (SE +/- 0.28, N = 3)
  Read: 888.9 (SE +/- 0.73, N = 3)
  Copy: 414.4 (SE +/- 0.03, N = 3)

SHOC Scalable HeterOgeneous Computing

The CUDA and OpenCL version of Vetter's Scalable HeterOgeneous Computing benchmark suite. SHOC provides a number of different benchmark programs for evaluating the performance and stability of compute devices. Learn more via the OpenBenchmarking.org test page.

SHOC Scalable HeterOgeneous Computing 2020-04-17, Target: OpenCL (more is better), workingnow:
  Texture Read Bandwidth: 3084.43 GB/s (SE +/- 4.12, N = 3)
  Bus Speed Readback: 26.35 GB/s (SE +/- 0.00, N = 3)
  Bus Speed Download: 24.74 GB/s (SE +/- 0.01, N = 3)
  Max SP Flops: 88834.0 GFLOPS (SE +/- 182.99, N = 3)
  GEMM SGEMM_N: 26966.3 GFLOPS (SE +/- 47.83, N = 3)
  Reduction: 991.18 GB/s (SE +/- 11.65, N = 15)
  MD5 Hash: 94.22 GHash/s (SE +/- 0.89, N = 15)
  FFT SP: 2789.70 GFLOPS (SE +/- 1.24, N = 3)
  Triad: 21.67 GB/s (SE +/- 0.08, N = 3)
  S3D: 647.52 GFLOPS (SE +/- 0.37, N = 3)
Compiled with: g++ -O2 -lSHOCCommonMPI -lSHOCCommonOpenCL -lSHOCCommon -lOpenCL -lrt -lmpi_cxx -lmpi

Mixbench

A benchmark suite for GPUs on mixed operational intensity kernels. Learn more via the OpenBenchmarking.org test page.
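
Mixbench sweeps kernels of varying operational intensity (FLOPs per byte of memory traffic). The hedged sketch below shows how that ratio bounds attainable throughput under a simple roofline model; the peak and bandwidth constants are rough figures taken loosely from the clpeak and cl-mem results earlier in this file, used only for illustration.

```python
# Simple roofline view of "operational intensity": attainable GFLOPS is
# capped either by compute peak or by memory bandwidth * (FLOPs per byte).
# Peak/bandwidth values are rough figures from the clpeak and cl-mem
# results in this file, used only for illustration.
PEAK_GFLOPS = 80000.0      # ~FP32 compute peak
MEM_BW_GBPS = 900.0        # ~device memory bandwidth

def attainable_gflops(flops_per_byte: float) -> float:
    return min(PEAK_GFLOPS, MEM_BW_GBPS * flops_per_byte)

for intensity in (0.25, 1, 4, 16, 64, 256):
    print(f"{intensity:>6} FLOP/byte -> {attainable_gflops(intensity):>8.0f} GFLOPS")
```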

Mixbench 2020-06-23, workingnow (more is better; compiled with g++ -lm -lstdc++ -lOpenCL -lrt -O2):
  NVIDIA CUDA - Single Precision: 75020.16 GFLOPS (SE +/- 19.41, N = 3)
  NVIDIA CUDA - Double Precision: 1098.84 GFLOPS (SE +/- 0.00, N = 3)
  NVIDIA CUDA - Half Precision: 80736.45 GFLOPS (SE +/- 58.22, N = 3)
  OpenCL - Single Precision: 77320.86 GFLOPS (SE +/- 98.70, N = 3)
  OpenCL - Double Precision: 1098.71 GFLOPS (SE +/- 0.08, N = 3)
  NVIDIA CUDA - Integer: 35349.15 GIOPS (SE +/- 21.23, N = 3)
  OpenCL - Integer: 40702.14 GIOPS (SE +/- 0.00, N = 3)

Hashcat

Hashcat is an open-source, advanced password recovery tool supporting GPU acceleration with OpenCL, NVIDIA CUDA, and Radeon ROCm. Learn more via the OpenBenchmarking.org test page.

Hashcat 6.2.4 (H/s, more is better), workingnow:
  TrueCrypt RIPEMD160 + XTS: 1906267 (SE +/- 733.33, N = 3)
  SHA-512: 7424433333 (SE +/- 7995276.38, N = 3)
  7-Zip: 2651800 (SE +/- 7559.32, N = 3)
  SHA1: 50683000000 (SE +/- 15159485.48, N = 3)
  MD5: 155800000000 (SE +/- 305505046.33, N = 3)

VkFFT

VkFFT is a Fast Fourier Transform (FFT) library that is GPU-accelerated by means of the Vulkan API. The VkFFT benchmark runs FFTs of many different sizes before returning an overall benchmark score. Learn more via the OpenBenchmarking.org test page.

VkFFT 1.1.1 (Benchmark Score, more is better)
workingnow: 133483 (SE +/- 438.58, N = 3)
Compiled with: g++ -O3

Waifu2x-NCNN Vulkan

Waifu2x-NCNN is an NCNN neural network implementation of the Waifu2x converter project, accelerated using the Vulkan API. NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. This test profile times how long it takes to increase the resolution of a sample image with Vulkan. Learn more via the OpenBenchmarking.org test page.

Waifu2x-NCNN Vulkan 20200818, Scale: 2x - Denoise: 3 - TAA: Yes (Seconds, fewer is better)
workingnow: 2.094 (SE +/- 0.008, N = 3)

RealSR-NCNN

RealSR-NCNN is an NCNN neural network implementation of the RealSR project, accelerated using the Vulkan API. RealSR is Real-World Super-Resolution via Kernel Estimation and Noise Injection. NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. This test profile times how long it takes to increase the resolution of a sample image by a scale of 4x with Vulkan. Learn more via the OpenBenchmarking.org test page.

RealSR-NCNN 20200818, Scale: 4x (Seconds, fewer is better), workingnow:
  TAA Yes: 19.79 (SE +/- 0.07, N = 3)
  TAA No: 4.685 (SE +/- 0.063, N = 3)

vkpeak

Vkpeak is a Vulkan compute benchmark inspired by OpenCL's clpeak. Vkpeak measures Vulkan compute performance for FP16 / FP32 / FP64 / INT16 / INT32 in both scalar and vec4 form. Learn more via the OpenBenchmarking.org test page.

vkpeak 20210424, workingnow (GIOPS for integer tests, GFLOPS for floating-point; more is better):
  int16-vec4: 40527.84 (SE +/- 32.63, N = 3)
  int16-scalar: 30436.13 (SE +/- 11.73, N = 3)
  int32-vec4: 45552.24 (SE +/- 0.36, N = 3)
  int32-scalar: 45787.93 (SE +/- 13.40, N = 3)
  fp64-vec4: 1446.25 (SE +/- 0.19, N = 3)
  fp64-scalar: 1443.97 (SE +/- 0.99, N = 3)
  fp16-vec4: 90594.96 (SE +/- 28.95, N = 3)
  fp16-scalar: 45678.72 (SE +/- 21.76, N = 3)
  fp32-vec4: 60521.26 (SE +/- 59.33, N = 3)
  fp32-scalar: 45893.56 (SE +/- 47.33, N = 3)

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

Blender 3.5, Blend File: BMW27 - Compute: NVIDIA OptiX (Seconds, fewer is better)
workingnow: 12.18 (SE +/- 8.77, N = 12)

PlaidML

This test profile uses the PlaidML deep learning framework developed by Intel to offer various benchmarks. Learn more via the OpenBenchmarking.org test page.

FP16: No - Mode: Inference - Network: DenseNet 201 - Device: OpenCL

workingnow: The test quit with a non-zero exit status on all three attempts. E: ImportError: cannot import name 'Iterable' from 'collections' (/usr/lib/python3.10/collections/__init__.py)

FP16: Yes - Mode: Inference - Network: Mobilenet - Device: OpenCL

workingnow: The test quit with a non-zero exit status on all three attempts. E: ImportError: cannot import name 'Iterable' from 'collections' (/usr/lib/python3.10/collections/__init__.py)

FP16: No - Mode: Inference - Network: Mobilenet - Device: OpenCL

workingnow: The test quit with a non-zero exit status on all three attempts. E: ImportError: cannot import name 'Iterable' from 'collections' (/usr/lib/python3.10/collections/__init__.py)

FP16: No - Mode: Inference - Network: IMDB LSTM - Device: OpenCL

workingnow: The test quit with a non-zero exit status on all three attempts. E: ImportError: cannot import name 'Iterable' from 'collections' (/usr/lib/python3.10/collections/__init__.py)

FP16: No - Mode: Training - Network: Mobilenet - Device: OpenCL

workingnow: The test quit with a non-zero exit status on all three attempts. E: ImportError: cannot import name 'Iterable' from 'collections' (/usr/lib/python3.10/collections/__init__.py)
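
The failures above stem from a Python 3.10 change: the abstract container classes such as Iterable were removed from the top-level collections module and now live only in collections.abc, which the PlaidML stack used by this test profile predates. A minimal sketch of the incompatibility and the usual import-level workaround, shown generically rather than as a patch to PlaidML itself:

```python
# Why the PlaidML tests fail on Python 3.10: the aliases for the abstract
# container classes were removed from the top-level collections module.
try:
    from collections import Iterable          # works on Python <= 3.9 only
except ImportError:
    from collections.abc import Iterable      # canonical location since 3.3

print(issubclass(list, Iterable))  # True once the import resolves
```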

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729, Target: Vulkan GPU (ms, fewer is better), workingnow:
  vision_transformer: 208.30 (SE +/- 6.29, N = 5; MIN 128.21 / MAX 992.02)
  yolov4-tiny: 5.83 (SE +/- 0.22, N = 5; MIN 4.9 / MAX 54.4)
  vgg16: 1.57 (SE +/- 0.06, N = 5; MIN 1.45 / MAX 26.77)
  googlenet: 1.85 (SE +/- 0.35, N = 5; MIN 1.46 / MAX 26.49)
  efficientnet-b0: 2.11 (SE +/- 0.09, N = 5; MIN 1.8 / MAX 20.24)
  mnasnet: 1.13 (SE +/- 0.19, N = 5; MIN 0.91 / MAX 35.41)
Compiled with: g++ -O3 -rdynamic -lgomp -lpthread

ViennaCL

ViennaCL is an open-source linear algebra library written in C++ with support for OpenCL and OpenMP. This test profile makes use of ViennaCL's built-in benchmarks. Learn more via the OpenBenchmarking.org test page.

ViennaCL 1.7.1, Test: CPU BLAS (more is better; compiled with g++ -fopenmp -O3 -rdynamic -lOpenCL), workingnow:
  dGEMM-TN: 92.8 GFLOPs/s (SE +/- 4.90, N = 3)
  dGEMV-N: 35.0 GB/s (SE +/- 1.74, N = 3)
  sAXPY: 46.0 GB/s (SE +/- 1.82, N = 3)

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Test: OpenCL Particle Filter

workingnow: The test run did not produce a result on any of the three attempts. E: ERROR: clEnqueueWriteBuffer seed_GPU (size:400000) => -388080597

RedShift Demo

This is a test of MAXON's RedShift demo build that currently requires NVIDIA GPU acceleration. Learn more via the OpenBenchmarking.org test page.

workingnow: The test quit with a non-zero exit status on all three attempts. E: ./redshift: 3: /usr/redshift/bin/redshiftBenchmark: not found

Betsy GPU Compressor

Betsy is an open-source GPU texture compressor implementing various GPU compression techniques. Betsy is written in GLSL for Vulkan/OpenGL (compute shader) GPU-based texture compression. Learn more via the OpenBenchmarking.org test page.

Codec: ETC2 RGB - Quality: Highest

workingnow: The test quit with a non-zero exit status on all three attempts. E: ./betsy: 3: ./betsy: not found

Codec: ETC1 - Quality: Highest

workingnow: The test quit with a non-zero exit status on all three attempts. E: ./betsy: 3: ./betsy: not found

Libplacebo

Libplacebo is a multimedia rendering library based on the core rendering code of the MPV player. The libplacebo benchmark relies on the Vulkan API and tests various primitives. Learn more via the OpenBenchmarking.org test page.

workingnow: The test quit with a non-zero exit status on all three attempts. E: fatal: Failed initializing vulkan device

Waifu2x-NCNN Vulkan

Waifu2x-NCNN is an NCNN neural network implementation of the Waifu2x converter project, accelerated using the Vulkan API. NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. This test profile times how long it takes to increase the resolution of a sample image with Vulkan. Learn more via the OpenBenchmarking.org test page.

Scale: 2x - Denoise: 3 - TAA: No

workingnow: The test run did not produce a result on any of the three attempts.

113 Results Shown

NeatBench
MandelGPU
IndigoBench:
  OpenCL GPU - Supercar
  OpenCL GPU - Bedroom
Blender:
  Pabellon Barcelona - NVIDIA OptiX
  Barbershop - NVIDIA OptiX
  Fishy Cat - NVIDIA OptiX
  Classroom - NVIDIA OptiX
NCNN:
  Vulkan GPU - resnet18
  Vulkan GPU - FastestDet
  Vulkan GPU - regnety_400m
  Vulkan GPU - squeezenet_ssd
  Vulkan GPU - resnet50
  Vulkan GPU - alexnet
  Vulkan GPU - blazeface
  Vulkan GPU - shufflenet-v2
  Vulkan GPU-v3-v3 - mobilenet-v3
  Vulkan GPU-v2-v2 - mobilenet-v2
  Vulkan GPU - mobilenet
Caffe:
  GoogleNet - NVIDIA CUDA - 1000
  GoogleNet - NVIDIA CUDA - 200
  GoogleNet - NVIDIA CUDA - 100
  AlexNet - NVIDIA CUDA - 1000
  AlexNet - NVIDIA CUDA - 200
  AlexNet - NVIDIA CUDA - 100
GROMACS
ViennaCL:
  OpenCL BLAS - dGEMM-TT
  OpenCL BLAS - dGEMM-TN
  OpenCL BLAS - dGEMM-NT
  OpenCL BLAS - dGEMM-NN
  OpenCL BLAS - dGEMV-T
  OpenCL BLAS - dGEMV-N
  OpenCL BLAS - dDOT
  OpenCL BLAS - dAXPY
  OpenCL BLAS - dCOPY
  OpenCL BLAS - sDOT
  OpenCL BLAS - sAXPY
  OpenCL BLAS - sCOPY
  CPU BLAS - dGEMM-TT
  CPU BLAS - dGEMM-NT
  CPU BLAS - dGEMM-NN
  CPU BLAS - dGEMV-T
  CPU BLAS - dDOT
  CPU BLAS - dAXPY
  CPU BLAS - dCOPY
  CPU BLAS - sDOT
  CPU BLAS - sCOPY
FinanceBench
LuxCoreRender:
  Rainbow Colors and Prism - GPU
  LuxCore Benchmark - GPU
  Orange Juice - GPU
  Danish Mood - GPU
  DLSC - GPU
ArrayFire
LeelaChessZero
clpeak:
  Global Memory Bandwidth
  Double-Precision Double
  Single-Precision Float
  Integer Compute INT
FAHBench
OctaneBench
VkResample:
  2x - Single
  2x - Double
NAMD CUDA
cl-mem:
  Write
  Read
  Copy
SHOC Scalable HeterOgeneous Computing:
  OpenCL - Texture Read Bandwidth
  OpenCL - Bus Speed Readback
  OpenCL - Bus Speed Download
  OpenCL - Max SP Flops
  OpenCL - GEMM SGEMM_N
  OpenCL - Reduction
  OpenCL - MD5 Hash
  OpenCL - FFT SP
  OpenCL - Triad
  OpenCL - S3D
Mixbench:
  NVIDIA CUDA - Single Precision
  NVIDIA CUDA - Double Precision
  NVIDIA CUDA - Half Precision
  OpenCL - Single Precision
  OpenCL - Double Precision
  NVIDIA CUDA - Integer
  OpenCL - Integer
Hashcat:
  TrueCrypt RIPEMD160 + XTS
  SHA-512
  7-Zip
  SHA1
  MD5
VkFFT
Waifu2x-NCNN Vulkan
RealSR-NCNN:
  4x - Yes
  4x - No
vkpeak:
  int16-vec4
  int16-scalar
  int32-vec4
  int32-scalar
  fp64-vec4
  fp64-scalar
  fp16-vec4
  fp16-scalar
  fp32-vec4
  fp32-scalar
Blender
NCNN:
  Vulkan GPU - vision_transformer
  Vulkan GPU - yolov4-tiny
  Vulkan GPU - vgg16
  Vulkan GPU - googlenet
  Vulkan GPU - efficientnet-b0
  Vulkan GPU - mnasnet
ViennaCL:
  CPU BLAS - dGEMM-TN
  CPU BLAS - dGEMV-N
  CPU BLAS - sAXPY