RTX 3070 Compute

AMD Ryzen 9 5900X 12-Core testing with an ASUS ROG CROSSHAIR VIII HERO (3402 BIOS) and NVIDIA GeForce RTX 3070 8GB on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2104078-IB-RTX3070CO18
Test categories represented in this result file:

  BLAS (Basic Linear Algebra Sub-Routine) Tests: 2 Tests
  CPU Massive: 4 Tests
  Creator Workloads: 4 Tests
  Game Development: 2 Tests
  HPC - High Performance Computing: 5 Tests
  Machine Learning: 3 Tests
  Multi-Core: 5 Tests
  NVIDIA GPU Compute: 26 Tests
  OpenCL: 6 Tests
  Renderers: 3 Tests
  Server CPU Tests: 2 Tests
  Vulkan Compute: 6 Tests
  Common Workstation Benchmarks: 2 Tests

Run Management

  Result 1 — Date Triggered: April 06; Test Duration: 5 Hours, 24 Minutes
  Result 2 — Date Triggered: April 06; Test Duration: 5 Hours, 24 Minutes
  Result 3 — Date Triggered: April 07; Test Duration: 4 Minutes
  Unattributed duration entry in the source: 3 Hours, 37 Minutes


RTX 3070 Compute — System Details (Results 1, 2, 3)

  Processor: AMD Ryzen 9 5900X 12-Core @ 3.70GHz (12 Cores / 24 Threads)
  Motherboard: ASUS ROG CROSSHAIR VIII HERO (3402 BIOS)
  Chipset: AMD Starship/Matisse
  Memory: 16GB
  Disk: 1000GB Sabrent Rocket 4.0 Plus + 2000GB
  Graphics: NVIDIA GeForce RTX 3070 8GB
  Audio: NVIDIA Device 228b
  Monitor: ASUS VP28U
  Network: Realtek RTL8125 2.5GbE + Intel I211
  OS: Ubuntu 20.04
  Kernel: 5.8.0-48-generic (x86_64)
  Desktop: GNOME Shell 3.36.7
  Display Server: X Server 1.20.9
  Display Driver: NVIDIA 460.67
  OpenGL: 4.6.0
  OpenCL: OpenCL 1.2 CUDA 11.2.162
  Vulkan: 1.2.155
  Compiler: GCC 9.3.0 + CUDA 11.2
  File-System: ext4
  Screen Resolution: 3840x2160

  Kernel Details: Transparent Huge Pages: madvise
  Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
  Processor Details: Scaling Governor: acpi-cpufreq performance (Boost: Enabled); CPU Microcode: 0xa201009
  OpenCL Details: GPU Compute Cores: 5888
  Python Details: Python 3.8.5
  Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected

(Results overview table omitted: it lists every test's per-run values, all of which are reproduced in full in the individual test results below.)

VkFFT

VkFFT is a Fast Fourier Transform (FFT) library that is GPU accelerated by means of the Vulkan API. The VkFFT benchmark measures FFT performance across many different transform sizes before returning an overall benchmark score. Learn more via the OpenBenchmarking.org test page.
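VkFFT itself is a Vulkan library, but the algorithm this benchmark exercises can be sketched in plain Python: a minimal radix-2 Cooley-Tukey FFT, checked against a naive O(n^2) DFT. This is an illustrative sketch of the FFT algorithm, not VkFFT code.

```python
import cmath

def dft(x):
    """Naive O(n^2) discrete Fourier transform, used as a reference."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    tw = [cmath.exp(-2j * cmath.pi * k / n) for k in range(n // 2)]
    return ([even[k] + tw[k] * odd[k] for k in range(n // 2)] +
            [even[k] - tw[k] * odd[k] for k in range(n // 2)])

signal = [complex(i % 4, 0) for i in range(16)]
assert all(abs(u - v) < 1e-9 for u, v in zip(fft(signal), dft(signal)))
```

The O(n log n) recursion is what makes sweeping many transform sizes, as VkFFT's score does, a meaningful throughput measurement.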

VkFFT 1.1.1 — Benchmark Score (more is better)
  Run 1: 32004  (SE +/- 238.49, N = 3; Min: 31572 / Avg: 32004.33 / Max: 32395)
  Run 2: 32323  (SE +/- 422.26, N = 3; Min: 31796 / Avg: 32323 / Max: 33158)
  Compiler notes: (CXX) g++ options: -O3 -pthread

ViennaCL

ViennaCL 1.7.1 — Test: CPU BLAS - dGEMV-T (GB/s, more is better)
  Run 1: 81.3  (SE +/- 1.43, N = 3; Min: 78.4 / Avg: 81.27 / Max: 82.7)
  Run 2: 81.9  (SE +/- 0.57, N = 3; Min: 80.8 / Avg: 81.93 / Max: 82.5)

ViennaCL 1.7.1 — Test: CPU BLAS - dAXPY (GB/s, more is better)
  Run 1: 34.5  (SE +/- 0.22, N = 3; Min: 34.1 / Avg: 34.53 / Max: 34.8)
  Run 2: 33.4  (SE +/- 0.65, N = 3; Min: 32.2 / Avg: 33.43 / Max: 34.4)

  Compiler notes (all ViennaCL tests): (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL

cl-mem

A basic OpenCL memory benchmark. Learn more via the OpenBenchmarking.org test page.
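As a rough illustration of what a bandwidth benchmark like cl-mem measures, here is a host-memory sketch in Python that times buffer copies and reports GB/s. It exercises system RAM through the interpreter rather than GPU memory via OpenCL, so its numbers are not comparable to the results below.

```python
import time

def copy_bandwidth_gbs(size_mb=64, repeats=5):
    """Time full copies of a buffer and report the best observed GB/s.

    A host-side analogue of cl-mem's device Read/Write/Copy tests:
    each copy reads the source and writes the destination once.
    """
    n = size_mb * 1024 * 1024
    src = bytearray(n)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        dst = bytes(src)              # one read pass + one write pass
        best = min(best, time.perf_counter() - t0)
        assert len(dst) == n
    return (2 * n) / best / 1e9       # 2*n bytes moved per copy

print(f"host copy bandwidth: {copy_bandwidth_gbs():.1f} GB/s")
```

Real GPU memory benchmarks use the same structure (repeat, take the best, bytes moved / time), just with device buffers and command-queue timing.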

cl-mem 2017-01-13 — Benchmark: Read (GB/s, more is better)
  Run 1: 393.6  (SE +/- 0.07, N = 3; Min: 393.5 / Avg: 393.63 / Max: 393.7)
  Run 2: 393.2  (SE +/- 0.36, N = 3; Min: 392.5 / Avg: 393.2 / Max: 393.7)

cl-mem 2017-01-13 — Benchmark: Write (GB/s, more is better)
  Run 1: 380.2  (SE +/- 0.06, N = 3; Min: 380.1 / Avg: 380.2 / Max: 380.3)
  Run 2: 379.9  (SE +/- 0.13, N = 3; Min: 379.6 / Avg: 379.87 / Max: 380)

  Compiler notes (all cl-mem tests): (CC) gcc options: -O2 -flto -lOpenCL

ViennaCL

ViennaCL 1.7.1 — Test: OpenCL BLAS - dGEMV-N (GB/s, more is better)
  Run 1: 222
  Run 2: 220

ViennaCL 1.7.1 — Test: CPU BLAS - sCOPY (GB/s, more is better)
  Run 1: 62.0  (SE +/- 0.84, N = 3; Min: 60.5 / Avg: 62.03 / Max: 63.4)
  Run 2: 63.4  (SE +/- 0.59, N = 3; Min: 62.7 / Avg: 63.43 / Max: 64.6)

ViennaCL 1.7.1 — Test: CPU BLAS - sAXPY (GB/s, more is better)
  Run 1: 94.0  (SE +/- 0.87, N = 3; Min: 92.4 / Avg: 94 / Max: 95.4)
  Run 2: 92.8  (SE +/- 2.27, N = 3; Min: 88.6 / Avg: 92.8 / Max: 96.4)

ViennaCL 1.7.1 — Test: CPU BLAS - sDOT (GB/s, more is better)
  Run 1: 141  (SE +/- 1.15, N = 3; Min: 139 / Avg: 141 / Max: 143)
  Run 2: 139  (SE +/- 3.53, N = 3; Min: 132 / Avg: 138.67 / Max: 144)

ViennaCL 1.7.1 — Test: CPU BLAS - dCOPY (GB/s, more is better)
  Run 1: 23.2  (SE +/- 0.10, N = 3; Min: 23.1 / Avg: 23.2 / Max: 23.4)
  Run 2: 22.6  (SE +/- 0.74, N = 3; Min: 21.1 / Avg: 22.57 / Max: 23.5)

ViennaCL 1.7.1 — Test: CPU BLAS - dDOT (GB/s, more is better)
  Run 1: 44.3  (SE +/- 0.42, N = 3; Min: 43.5 / Avg: 44.3 / Max: 44.9)
  Run 2: 43.5  (SE +/- 0.62, N = 3; Min: 42.3 / Avg: 43.5 / Max: 44.4)

  Compiler notes (all ViennaCL tests): (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL

SHOC Scalable HeterOgeneous Computing

SHOC Scalable HeterOgeneous Computing 2020-04-17 — Target: OpenCL, Benchmark: Texture Read Bandwidth (GB/s, more is better)
  Run 1: 2120.56  (SE +/- 7.35, N = 3; Min: 2111.1 / Avg: 2120.56 / Max: 2135.03)
  Run 2: 2131.28  (SE +/- 5.26, N = 3; Min: 2120.76 / Avg: 2131.28 / Max: 2136.54)
  Compiler notes: (CXX) g++ options: -O2 -lSHOCCommonMPI -lSHOCCommonOpenCL -lSHOCCommon -lOpenCL -lrt -pthread -lmpi_cxx -lmpi

ViennaCL

ViennaCL 1.7.1 — Test: CPU BLAS - dGEMV-N (GB/s, more is better)
  Run 1: 76.5  (SE +/- 0.45, N = 3; Min: 75.9 / Avg: 76.53 / Max: 77.4)
  Run 2: 77.0  (SE +/- 0.49, N = 3; Min: 76.1 / Avg: 77 / Max: 77.8)

ViennaCL 1.7.1 — Test: OpenCL BLAS - dGEMV-T (GB/s, more is better)
  Run 1: 334
  Run 2: 332

ViennaCL 1.7.1 — Test: OpenCL BLAS - sCOPY (GB/s, more is better)
  Run 1: 293
  Run 2: 293
  (Multi-sample run: SE +/- 0.58, N = 3; Min: 292 / Avg: 293 / Max: 294)

ViennaCL 1.7.1 — Test: OpenCL BLAS - sAXPY (GB/s, more is better)
  Run 1: 358
  Run 2: 357

ViennaCL 1.7.1 — Test: OpenCL BLAS - sDOT (GB/s, more is better)
  Run 1: 325
  Run 2: 324

ViennaCL 1.7.1 — Test: OpenCL BLAS - dCOPY (GB/s, more is better)
  Run 1: 365
  Run 2: 364  (SE +/- 0.33, N = 3; Min: 363 / Avg: 363.67 / Max: 364)

ViennaCL 1.7.1 — Test: OpenCL BLAS - dAXPY (GB/s, more is better)
  Run 1: 396
  Run 2: 395

  Compiler notes (all ViennaCL tests): (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL

cl-mem

A basic OpenCL memory benchmark. Learn more via the OpenBenchmarking.org test page.

cl-mem 2017-01-13 — Benchmark: Copy (GB/s, more is better)
  Run 1: 297.6  (SE +/- 0.17, N = 3; Min: 297.3 / Avg: 297.6 / Max: 297.9)
  Run 2: 296.9  (SE +/- 0.20, N = 3; Min: 296.5 / Avg: 296.9 / Max: 297.1)
  Compiler notes: (CC) gcc options: -O2 -flto -lOpenCL

ViennaCL

ViennaCL 1.7.1 — Test: OpenCL BLAS - dDOT (GB/s, more is better)
  Run 1: 397
  Run 2: 396
  Compiler notes: (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL

SHOC Scalable HeterOgeneous Computing

SHOC 2020-04-17 — Target: OpenCL, Benchmark: Bus Speed Readback (GB/s, more is better)
  Run 1: 26.40  (SE +/- 0.01, N = 3; Min: 26.38 / Avg: 26.4 / Max: 26.4)
  Run 2: 26.39  (SE +/- 0.01, N = 3; Min: 26.37 / Avg: 26.39 / Max: 26.4)

SHOC 2020-04-17 — Target: OpenCL, Benchmark: Triad (GB/s, more is better)
  Run 1: 24.67  (SE +/- 0.01, N = 3; Min: 24.65 / Avg: 24.67 / Max: 24.69)
  Run 2: 24.67  (SE +/- 0.01, N = 3; Min: 24.66 / Avg: 24.67 / Max: 24.68)

SHOC 2020-04-17 — Target: OpenCL, Benchmark: Reduction (GB/s, more is better)
  Run 1: 325.67  (SE +/- 0.56, N = 3; Min: 325.08 / Avg: 325.67 / Max: 326.79)
  Run 2: 325.80  (SE +/- 0.46, N = 3; Min: 325.12 / Avg: 325.8 / Max: 326.67)

SHOC 2020-04-17 — Target: OpenCL, Benchmark: Bus Speed Download (GB/s, more is better)
  Run 1: 26.28  (SE +/- 0.03, N = 3; Min: 26.25 / Avg: 26.28 / Max: 26.33)
  Run 2: 26.31  (SE +/- 0.03, N = 3; Min: 26.25 / Avg: 26.31 / Max: 26.34)

  Compiler notes (all SHOC tests): (CXX) g++ options: -O2 -lSHOCCommonMPI -lSHOCCommonOpenCL -lSHOCCommon -lOpenCL -lrt -pthread -lmpi_cxx -lmpi

clpeak

Clpeak is designed to test the peak capabilities of OpenCL devices. Learn more via the OpenBenchmarking.org test page.
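The peak figures clpeak reports can be sanity-checked with simple arithmetic: cores x clock x FLOPs per cycle (2 for a fused multiply-add). The 5888 compute cores come from the OpenCL details above; the ~1.725 GHz boost clock is the advertised RTX 3070 specification, an assumption not recorded in this result file.

```python
def theoretical_peak_gflops(cores, clock_ghz, flops_per_cycle=2):
    """Peak throughput = cores x clock (GHz) x FLOPs per cycle (2 for FMA)."""
    return cores * clock_ghz * flops_per_cycle

# 5888 CUDA cores per the OpenCL details above; 1.725 GHz is the advertised
# RTX 3070 boost clock (an assumption, not logged in this result file).
peak = theoretical_peak_gflops(5888, 1.725)
measured = 20099.75  # clpeak Single-Precision Float result, run 1
print(f"theoretical {peak:.0f} GFLOPS vs measured {measured:.0f} GFLOPS "
      f"({measured / peak:.0%} of peak)")
```

The single-precision result below lands within a few percent of that theoretical figure, which is typical for clpeak's FMA-heavy kernels.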

clpeak — OpenCL Test: Global Memory Bandwidth (GBPS, more is better)
  Run 1: 389.57  (SE +/- 0.02, N = 3; Min: 389.54 / Avg: 389.57 / Max: 389.59)
  Run 2: 389.61  (SE +/- 0.03, N = 3; Min: 389.58 / Avg: 389.61 / Max: 389.68)
  Compiler notes: (CXX) g++ options: -O3 -rdynamic -lOpenCL

SHOC Scalable HeterOgeneous Computing

SHOC 2020-04-17 — Target: OpenCL, Benchmark: Max SP Flops (GFLOPS, more is better)
  Run 1: 23117.8  (SE +/- 31.09, N = 3; Min: 23085.8 / Avg: 23117.83 / Max: 23180)
  Run 2: 23179.0  (SE +/- 49.54, N = 3; Min: 23095.8 / Avg: 23178.97 / Max: 23267.2)

SHOC 2020-04-17 — Target: OpenCL, Benchmark: GEMM SGEMM_N (GFLOPS, more is better)
  Run 1: 3806.57  (SE +/- 10.00, N = 3; Min: 3792.07 / Avg: 3806.57 / Max: 3825.76)
  Run 2: 3769.31  (SE +/- 22.78, N = 3; Min: 3739.88 / Avg: 3769.31 / Max: 3814.15)

  Compiler notes (all SHOC tests): (CXX) g++ options: -O2 -lSHOCCommonMPI -lSHOCCommonOpenCL -lSHOCCommon -lOpenCL -lrt -pthread -lmpi_cxx -lmpi

clpeak

Clpeak is designed to test the peak capabilities of OpenCL devices. Learn more via the OpenBenchmarking.org test page.

clpeak — OpenCL Test: Double-Precision Double (GFLOPS, more is better)
  Run 1: 360.90  (SE +/- 0.04, N = 3; Min: 360.82 / Avg: 360.9 / Max: 360.94)
  Run 2: 364.99  (SE +/- 0.03, N = 3; Min: 364.95 / Avg: 364.99 / Max: 365.04)

clpeak — OpenCL Test: Single-Precision Float (GFLOPS, more is better)
  Run 1: 20099.75  (SE +/- 2.13, N = 3; Min: 20097.33 / Avg: 20099.75 / Max: 20104)
  Run 2: 19991.73  (SE +/- 109.12, N = 3; Min: 19773.49 / Avg: 19991.73 / Max: 20101.64)

  Compiler notes (all clpeak tests): (CXX) g++ options: -O3 -rdynamic -lOpenCL

Mixbench

A benchmark suite for GPUs on mixed operational intensity kernels. Learn more via the OpenBenchmarking.org test page.
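Mixbench sweeps operational intensity (FLOPs per byte of memory traffic), which is the x-axis of the classic roofline model. A minimal sketch of that model, using the approximate peak-compute and memory-bandwidth figures measured elsewhere in this file:

```python
def roofline_gflops(intensity_flops_per_byte, peak_gflops, bandwidth_gbs):
    """Attainable throughput under the roofline model:
    memory-bound (intensity * bandwidth) below the ridge point,
    compute-bound (flat at peak) above it."""
    return min(peak_gflops, intensity_flops_per_byte * bandwidth_gbs)

# Approximate figures from this result file: ~20100 SP GFLOPS and
# ~390 GB/s global memory bandwidth (both from clpeak).
peak, bw = 20100.0, 390.0
ridge = peak / bw  # intensity needed to reach peak, ~51.5 FLOPs/byte
for oi in (1, 8, 64):
    print(f"intensity {oi:>2} FLOPs/byte -> {roofline_gflops(oi, peak, bw):.0f} GFLOPS")
```

Mixbench's mixed-intensity kernels trace out exactly this curve: low-intensity points report bandwidth-limited GFLOPS, high-intensity points approach the compute peak.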

Mixbench 2020-06-23 — Backend: OpenCL, Benchmark: Double Precision (GFLOPS, more is better)
  Run 1: 295.11  (SE +/- 1.86, N = 3; Min: 292.81 / Avg: 295.11 / Max: 298.79)
  Run 2: 299.65  (SE +/- 0.76, N = 3; Min: 298.63 / Avg: 299.65 / Max: 301.13)

Mixbench 2020-06-23 — Backend: OpenCL, Benchmark: Single Precision (GFLOPS, more is better)
  Run 1: 22081.79  (SE +/- 8.94, N = 3; Min: 22063.97 / Avg: 22081.79 / Max: 22092.06)
  Run 2: 21965.66  (SE +/- 109.50, N = 3; Min: 21747.3 / Avg: 21965.66 / Max: 22089.34)

  Compiler notes (all Mixbench tests): (CXX) g++ options: -lm -lstdc++ -lOpenCL -lrt -O2

SHOC Scalable HeterOgeneous Computing

SHOC 2020-04-17 — Target: OpenCL, Benchmark: S3D (GFLOPS, more is better)
  Run 1: 218.44  (SE +/- 0.27, N = 3; Min: 218.11 / Avg: 218.44 / Max: 218.98)
  Run 2: 218.61  (SE +/- 0.28, N = 3; Min: 218.1 / Avg: 218.61 / Max: 219.07)

SHOC 2020-04-17 — Target: OpenCL, Benchmark: FFT SP (GFLOPS, more is better)
  Run 1: 1134.99  (SE +/- 0.71, N = 3; Min: 1133.81 / Avg: 1134.99 / Max: 1136.27)
  Run 2: 1133.82  (SE +/- 0.99, N = 3; Min: 1131.96 / Avg: 1133.82 / Max: 1135.34)

  Compiler notes (all SHOC tests): (CXX) g++ options: -O2 -lSHOCCommonMPI -lSHOCCommonOpenCL -lSHOCCommon -lOpenCL -lrt -pthread -lmpi_cxx -lmpi

ViennaCL

ViennaCL 1.7.1 — Test: CPU BLAS - dGEMM-TT (GFLOPs/s, more is better)
  Run 1: 55.3  (SE +/- 0.15, N = 3; Min: 55.1 / Avg: 55.3 / Max: 55.6)
  Run 2: 54.5  (SE +/- 0.97, N = 3; Min: 52.6 / Avg: 54.53 / Max: 55.6)

ViennaCL 1.7.1 — Test: OpenCL BLAS - dGEMM-NN (GFLOPs/s, more is better)
  Run 1: 342  (SE +/- 1.00, N = 3; Min: 340 / Avg: 342 / Max: 343)
  Run 2: 338  (SE +/- 0.67, N = 3; Min: 337 / Avg: 337.67 / Max: 339)

ViennaCL 1.7.1 — Test: CPU BLAS - dGEMM-TN (GFLOPs/s, more is better)
  Run 1: 57.0  (SE +/- 0.12, N = 3; Min: 56.8 / Avg: 56.97 / Max: 57.2)
  Run 2: 55.8  (SE +/- 1.22, N = 3; Min: 53.4 / Avg: 55.83 / Max: 57.1)

ViennaCL 1.7.1 — Test: CPU BLAS - dGEMM-NT (GFLOPs/s, more is better)
  Run 1: 53.4  (SE +/- 0.15, N = 3; Min: 53.2 / Avg: 53.4 / Max: 53.7)
  Run 2: 52.7  (SE +/- 0.70, N = 3; Min: 51.3 / Avg: 52.7 / Max: 53.4)

ViennaCL 1.7.1 — Test: CPU BLAS - dGEMM-NN (GFLOPs/s, more is better)
  Run 1: 54.7  (SE +/- 0.10, N = 3; Min: 54.6 / Avg: 54.7 / Max: 54.9)
  Run 2: 53.5  (SE +/- 1.00, N = 3; Min: 51.5 / Avg: 53.5 / Max: 54.6)

ViennaCL 1.7.1 — Test: OpenCL BLAS - dGEMM-NT (GFLOPs/s, more is better)
  Run 1: 343  (SE +/- 1.50, N = 2; Min: 341 / Avg: 342.5 / Max: 344)
  Run 2: 340

ViennaCL 1.7.1 — Test: OpenCL BLAS - dGEMM-TN (GFLOPs/s, more is better)
  Run 1: 342
  Run 2: 336

  Compiler notes (all ViennaCL tests): (CXX) g++ options: -fopenmp -O3 -rdynamic -lOpenCL

SHOC Scalable HeterOgeneous Computing

SHOC 2020-04-17 — Target: OpenCL, Benchmark: MD5 Hash (GHash/s, more is better)
  Run 1: 25.47  (SE +/- 0.02, N = 3; Min: 25.45 / Avg: 25.47 / Max: 25.51)
  Run 2: 25.49  (SE +/- 0.04, N = 3; Min: 25.45 / Avg: 25.49 / Max: 25.58)
  Compiler notes: (CXX) g++ options: -O2 -lSHOCCommonMPI -lSHOCCommonOpenCL -lSHOCCommon -lOpenCL -lrt -pthread -lmpi_cxx -lmpi

clpeak

Clpeak is designed to test the peak capabilities of OpenCL devices. Learn more via the OpenBenchmarking.org test page.

clpeak — OpenCL Test: Integer Compute INT (GIOPS, more is better)
  Run 1: 10264.28  (SE +/- 105.22, N = 5; Min: 9998.32 / Avg: 10264.28 / Max: 10619.94)
  Run 2: 10202.39  (SE +/- 86.71, N = 3; Min: 10111.31 / Avg: 10202.39 / Max: 10375.73)
  Compiler notes: (CXX) g++ options: -O3 -rdynamic -lOpenCL

Mixbench

A benchmark suite for GPUs on mixed operational intensity kernels. Learn more via the OpenBenchmarking.org test page.

Mixbench 2020-06-23 — Backend: OpenCL, Benchmark: Integer (GIOPS, more is better)
  Run 1: 11433.74  (SE +/- 3.19, N = 3; Min: 11427.59 / Avg: 11433.74 / Max: 11438.28)
  Run 2: 11336.97  (SE +/- 53.09, N = 3; Min: 11283.29 / Avg: 11336.97 / Max: 11443.15)
  Compiler notes: (CXX) g++ options: -lm -lstdc++ -lOpenCL -lrt -O2

Hashcat

Hashcat is an open-source, advanced password recovery tool supporting GPU acceleration with OpenCL, NVIDIA CUDA, and Radeon ROCm. Learn more via the OpenBenchmarking.org test page.
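For a sense of scale, the H/s metric can be reproduced on the CPU with Python's hashlib: a single-threaded host loop manages on the order of millions of MD5 hashes per second, versus the tens of billions the GPU reaches below. This is an illustrative measurement loop, not how Hashcat works internally.

```python
import hashlib
import time

def md5_hashrate(n_hashes=200_000, candidate=b"password123"):
    """Hash one candidate repeatedly and report hashes per second,
    the same H/s unit Hashcat reports (CPU-only, single-threaded)."""
    t0 = time.perf_counter()
    for _ in range(n_hashes):
        hashlib.md5(candidate).digest()
    return n_hashes / (time.perf_counter() - t0)

print(f"{md5_hashrate():,.0f} H/s")
```

The roughly four-orders-of-magnitude gap between this loop and the ~38.8 GH/s MD5 result below is the point of GPU-accelerated password recovery.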

Hashcat 6.1.1 — Benchmark: MD5 (H/s, more is better)
  Run 1: 38778833333  (SE +/- 49139065.70, N = 3; Min: 38728400000 / Avg: 38778833333.33 / Max: 38877100000)
  Run 2: 38839033333  (SE +/- 42362968.63, N = 3; Min: 38771800000 / Avg: 38839033333.33 / Max: 38917300000)

Hashcat 6.1.1 — Benchmark: SHA1 (H/s, more is better)
  Run 1: 13120133333  (SE +/- 21817526.08, N = 3; Min: 13093600000 / Avg: 13120133333.33 / Max: 13163400000)
  Run 2: 13144266667  (SE +/- 5394235.61, N = 3; Min: 13133600000 / Avg: 13144266666.67 / Max: 13151000000)

Hashcat 6.1.1 — Benchmark: 7-Zip (H/s, more is better)
  Run 1: 686733  (SE +/- 1386.04, N = 3; Min: 684100 / Avg: 686733.33 / Max: 688800)
  Run 2: 686700  (SE +/- 1069.27, N = 3; Min: 684900 / Avg: 686700 / Max: 688600)

Hashcat 6.1.1 — Benchmark: SHA-512 (H/s, more is better)
  Run 1: 1664800000  (SE +/- 723417.81, N = 3; Min: 1663600000 / Avg: 1664800000 / Max: 1666100000)
  Run 2: 1669566667  (SE +/- 866666.67, N = 3; Min: 1668100000 / Avg: 1669566666.67 / Max: 1671100000)

Hashcat 6.1.1 — Benchmark: TrueCrypt RIPEMD160 + XTS (H/s, more is better)
  Run 1: 501033  (SE +/- 1550.63, N = 3; Min: 499100 / Avg: 501033.33 / Max: 504100)
  Run 2: 502833  (SE +/- 1197.68, N = 3; Min: 500600 / Avg: 502833.33 / Max: 504700)

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 — Acceleration: OpenCL GPU, Scene: Supercar (M samples/s, more is better)
  Run 1: 37.26  (SE +/- 0.03, N = 3; Min: 37.21 / Avg: 37.26 / Max: 37.3)
  Run 2: 37.16  (SE +/- 0.02, N = 3; Min: 37.13 / Avg: 37.16 / Max: 37.21)

IndigoBench 4.4 — Acceleration: OpenCL GPU, Scene: Bedroom (M samples/s, more is better)
  Run 1: 12.92  (SE +/- 0.02, N = 3; Min: 12.89 / Avg: 12.92 / Max: 12.95)
  Run 2: 12.88  (SE +/- 0.02, N = 3; Min: 12.86 / Avg: 12.88 / Max: 12.91)

LuxCoreRender OpenCL

LuxCoreRender is an open-source physically based renderer. This test profile is focused on running LuxCoreRender on OpenCL accelerators/GPUs; the alternative luxcorerender test profile targets CPU execution and uses a different set of tests. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender OpenCL 2.3 — Scene: DLSC (M samples/sec, more is better)
  Run 1: 7.97  (SE +/- 0.00, N = 3; Min: 7.97 / Avg: 7.97 / Max: 7.98; sample range MIN: 7.86 / MAX: 8.17)
  Run 2: 7.94  (SE +/- 0.00, N = 3; Min: 7.94 / Avg: 7.94 / Max: 7.95; sample range MIN: 7.82 / MAX: 8.14)

LuxCoreRender OpenCL 2.3 — Scene: Food (M samples/sec, more is better)
  Run 1: 3.39  (SE +/- 0.03, N = 3; Min: 3.35 / Avg: 3.39 / Max: 3.44; sample range MIN: 0.23 / MAX: 4.24)
  Run 2: 3.39  (SE +/- 0.02, N = 3; Min: 3.35 / Avg: 3.39 / Max: 3.43; sample range MIN: 0.26 / MAX: 4.22)

LuxCoreRender OpenCL 2.3 — Scene: LuxCore Benchmark (M samples/sec, more is better)
  Run 1: 6.51  (SE +/- 0.00, N = 3; Min: 6.51 / Avg: 6.51 / Max: 6.51; sample range MIN: 0.27 / MAX: 7.46)
  Run 2: 6.50  (SE +/- 0.01, N = 3; Min: 6.48 / Avg: 6.5 / Max: 6.52; sample range MIN: 0.32 / MAX: 7.45)

LuxCoreRender OpenCL 2.3 — Scene: Rainbow Colors and Prism (M samples/sec, more is better)
  Run 1: 19.18  (SE +/- 0.04, N = 3; Min: 19.09 / Avg: 19.18 / Max: 19.24; sample range MIN: 17.88 / MAX: 20.08)
  Run 2: 19.19  (SE +/- 0.02, N = 3; Min: 19.16 / Avg: 19.19 / Max: 19.23; sample range MIN: 17.89 / MAX: 20.09)

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine powered by neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.26 - Backend: OpenCL (Nodes Per Second; more is better)
  1: 28914 (SE +/- 74.99, N = 3; runs 28766 to 29009)
  2: 29117 (SE +/- 234.03, N = 3; runs 28765 to 29560)
  (CXX) g++ options: -flto -pthread

FAHBench

FAHBench is a Folding@Home benchmark on the GPU. Learn more via the OpenBenchmarking.org test page.

FAHBench 2.3.2 (Ns Per Day; more is better)
  1: 267.09 (SE +/- 0.04, N = 3; runs 267.01 to 267.14)
  2: 265.83 (SE +/- 0.14, N = 3; runs 265.6 to 266.08)

GROMACS

The CUDA version of the Gromacs molecular dynamics package. Learn more via the OpenBenchmarking.org test page.

GROMACS 2020.3 - Water Benchmark (Ns Per Day; more is better)
  1: 8.138 (SE +/- 0.024, N = 3; runs 8.09 to 8.17)
  2: 8.083 (SE +/- 0.016, N = 3; runs 8.05 to 8.1)
  (CXX) g++ options: -O3 -lpthread -ldl -lrt -lm

MandelGPU

MandelGPU is an OpenCL benchmark; this test runs the OpenCL rendering float4 kernel with a maximum of 4096 iterations. Learn more via the OpenBenchmarking.org test page.
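As a rough illustration (not MandelGPU's actual OpenCL kernel), the escape-time iteration such a benchmark evaluates per pixel, capped at 4096 iterations as in this test profile, can be sketched in Python:

```python
# Illustrative sketch only: the Mandelbrot escape-time iteration.
# MandelGPU runs the equivalent as an OpenCL float4 kernel, packing
# four samples into one vector operation per work-item.

MAX_ITERATIONS = 4096  # cap used by this test profile

def mandelbrot_iterations(cx: float, cy: float) -> int:
    """Return the iteration count before z = z^2 + c escapes |z| > 2."""
    zx = zy = 0.0
    for i in range(MAX_ITERATIONS):
        if zx * zx + zy * zy > 4.0:  # |z|^2 > 4 means the point escaped
            return i
        zx, zy = zx * zx - zy * zy + cx, 2.0 * zx * zy + cy
    return MAX_ITERATIONS  # never escaped: treated as inside the set

print(mandelbrot_iterations(0.0, 0.0))  # inside the set -> 4096
print(mandelbrot_iterations(2.0, 2.0))  # escapes on the first update -> 1
```

The benchmark's samples/sec figure is essentially how many of these per-pixel iterations-to-escape evaluations the GPU completes each second.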

MandelGPU 1.3pts1 - OpenCL Device: GPU (Samples/sec; more is better)
  1: 319970658.9 (SE +/- 682590.08, N = 3; runs 318652422.3 to 320937148)
  2: 319688588.0 (SE +/- 1129798.58, N = 3; runs 317903217.9 to 321780720.7)
  (CC) gcc options: -O3 -lm -ftree-vectorize -funroll-loops -lglut -lOpenCL -lGL

OctaneBench

OctaneBench is a test of OctaneRender on the GPU and requires the use of NVIDIA CUDA. Learn more via the OpenBenchmarking.org test page.

OctaneBench 2020.1 - Total Score (Score; more is better)
  1: 410.51
  2: 411.11

Chaos Group V-RAY

This is a test of Chaos Group's V-RAY benchmark. V-RAY is a commercial renderer that can integrate with various creator software products like SketchUp and 3ds Max. The V-RAY benchmark is standalone and supports CPU and NVIDIA CUDA/RTX based rendering. Learn more via the OpenBenchmarking.org test page.

Chaos Group V-RAY 5 - Mode: NVIDIA CUDA GPU (vpaths; more is better)
  1: 1342 (SE +/- 1.20, N = 3; runs 1340 to 1344)
  2: 1337 (SE +/- 0.67, N = 3; runs 1336 to 1338)

Chaos Group V-RAY 5 - Mode: NVIDIA RTX GPU (vrays; more is better)
  1: 1713 (SE +/- 2.00, N = 3; runs 1711 to 1717)
  2: 1710

NAMD CUDA

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. This version of the NAMD test profile uses CUDA GPU acceleration. Learn more via the OpenBenchmarking.org test page.
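NAMD reports its result in days/ns (wall-clock days needed to simulate one nanosecond), so lower is better; a reciprocal converts it to the ns/day figure that GROMACS reports above. A small sketch, using result 1 from this comparison:

```python
# Quick arithmetic sketch: converting NAMD's days/ns metric (fewer is
# better) into simulated nanoseconds per wall-clock day (more is better).

def days_per_ns_to_ns_per_day(days_per_ns: float) -> float:
    """Reciprocal conversion between the two common MD throughput units."""
    return 1.0 / days_per_ns

# Result 1 below, 0.13273 days/ns:
print(round(days_per_ns_to_ns_per_day(0.13273), 2))  # about 7.53 ns/day
```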

NAMD CUDA 2.14 - ATPase Simulation - 327,506 Atoms (days/ns; fewer is better)
  1: 0.13273 (SE +/- 0.00056, N = 3; runs 0.13 to 0.13)
  2: 0.13388 (SE +/- 0.00147, N = 3; runs 0.13 to 0.14)

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: Vulkan GPU - Model: resnet18 (ms; fewer is better)
  1: 13.95 (SE +/- 0.01, N = 3; runs 13.93 to 13.97; sample min 13.09 / max 42.15)
  2: 14.03 (SE +/- 0.07, N = 3; runs 13.89 to 14.12; sample min 12.85 / max 39.19)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

VkResample

VkResample is a Vulkan-based image upscaling library based on VkFFT. The sample input file is upscaling a 4K image to 8K using Vulkan-based GPU acceleration. Learn more via the OpenBenchmarking.org test page.

VkResample 1.0 - Upscale: 2x - Precision: Single (ms; fewer is better)
  1: 17.49 (SE +/- 0.01, N = 3; runs 17.48 to 17.5)
  2: 17.53 (SE +/- 0.01, N = 3; runs 17.51 to 17.55)
  (CXX) g++ options: -O3 -pthread

FinanceBench

FinanceBench is a collection of financial program benchmarks with support for benchmarking on the GPU via OpenCL and on the CPU with OpenMP. The FinanceBench test cases cover the Black-Scholes-Merton process with an analytic European option engine, the QMC (Sobol) Monte Carlo method (equity option example), fixed-rate bonds with a flat forward curve, and repo (securities repurchase agreement) pricing. FinanceBench was originally written by the Cavazos Lab at the University of Delaware. Learn more via the OpenBenchmarking.org test page.
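For context, the closed-form Black-Scholes-Merton European call price that this benchmark evaluates in bulk on the GPU can be sketched as below; the function and parameter names are illustrative, not FinanceBench's own API:

```python
# Hedged sketch of the Black-Scholes-Merton analytic European call
# price. FinanceBench's OpenCL kernel prices many such options in
# parallel; this scalar Python version just shows the formula.
from math import erf, exp, log, sqrt

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(spot: float, strike: float, rate: float,
            vol: float, years: float) -> float:
    """Analytic European call price under Black-Scholes-Merton."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol * vol) * years) / (vol * sqrt(years))
    d2 = d1 - vol * sqrt(years)
    return spot * norm_cdf(d1) - strike * exp(-rate * years) * norm_cdf(d2)

# One at-the-money option; the benchmark times millions of these.
print(round(bs_call(100.0, 100.0, 0.05, 0.2, 1.0), 2))  # about 10.45
```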

FinanceBench 2016-07-25 - Benchmark: Black-Scholes OpenCL (ms; fewer is better)
  1: 10.48 (SE +/- 0.01, N = 3; runs 10.46 to 10.5)
  2: 10.50 (SE +/- 0.04, N = 3; runs 10.46 to 10.57)
  (CXX) g++ options: -O3 -march=native -fopenmp

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: Vulkan GPU - Model: regnety_400m (ms; fewer is better)
  1: 16.75 (SE +/- 0.25, N = 3; runs 16.25 to 17; sample min 15.59 / max 33.2)
  2: 16.87 (SE +/- 0.32, N = 3; runs 16.28 to 17.39; sample min 15.42 / max 40.88)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU - Model: squeezenet_ssd (ms; fewer is better)
  1: 14.49 (SE +/- 0.15, N = 3; runs 14.29 to 14.79; sample min 13.53 / max 35.44)
  2: 15.45 (SE +/- 0.12, N = 3; runs 15.25 to 15.66; sample min 13.94 / max 74.44)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU - Model: yolov4-tiny (ms; fewer is better)
  1: 22.20 (SE +/- 0.01, N = 3; runs 22.19 to 22.22; sample min 21.01 / max 44.66)
  2: 23.73 (SE +/- 1.20, N = 3; runs 22.5 to 26.12; sample min 21.14 / max 142.5)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU - Model: resnet50 (ms; fewer is better)
  1: 24.47 (SE +/- 0.33, N = 3; runs 23.94 to 25.09; sample min 22.75 / max 54.75)
  2: 24.75 (SE +/- 0.37, N = 3; runs 24.08 to 25.36; sample min 22.79 / max 62.12)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU - Model: alexnet (ms; fewer is better)
  1: 11.08 (SE +/- 0.03, N = 3; runs 11.03 to 11.14; sample min 10.26 / max 26.8)
  2: 11.22 (SE +/- 0.07, N = 3; runs 11.09 to 11.29; sample min 10.16 / max 47.07)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

VkResample

VkResample is a Vulkan-based image upscaling library based on VkFFT. The sample input file is upscaling a 4K image to 8K using Vulkan-based GPU acceleration. Learn more via the OpenBenchmarking.org test page.

VkResample 1.0 - Upscale: 2x - Precision: Double (ms; fewer is better)
  1: 220.45 (SE +/- 0.07, N = 3; runs 220.32 to 220.55)
  2: 221.04 (SE +/- 0.07, N = 3; runs 220.9 to 221.12)
  (CXX) g++ options: -O3 -pthread

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: Vulkan GPU - Model: googlenet (ms; fewer is better)
  1: 13.27 (SE +/- 0.08, N = 3; runs 13.12 to 13.4; sample min 12.01 / max 35.62)
  2: 13.17 (SE +/- 0.27, N = 3; runs 12.63 to 13.5; sample min 11.88 / max 33.16)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU - Model: vgg16 (ms; fewer is better)
  1: 55.71 (SE +/- 0.18, N = 3; runs 55.4 to 56.02; sample min 51.94 / max 108.04)
  2: 55.89 (SE +/- 0.11, N = 3; runs 55.67 to 56.05; sample min 52.71 / max 91.56)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU - Model: blazeface (ms; fewer is better)
  1: 1.84 (SE +/- 0.01, N = 3; runs 1.83 to 1.85; sample min 1.75 / max 3.08)
  2: 1.90 (SE +/- 0.06, N = 3; runs 1.81 to 2.02; sample min 1.73 / max 11.77)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU - Model: efficientnet-b0 (ms; fewer is better)
  1: 5.58 (SE +/- 0.10, N = 3; runs 5.4 to 5.75; sample min 5.12 / max 20.98)
  2: 5.62 (SE +/- 0.12, N = 3; runs 5.41 to 5.84; sample min 5.1 / max 21.88)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU - Model: mnasnet (ms; fewer is better)
  1: 4.05 (SE +/- 0.11, N = 3; runs 3.84 to 4.22; sample min 3.65 / max 20.14)
  2: 4.10 (SE +/- 0.01, N = 3; runs 4.09 to 4.11; sample min 3.66 / max 26.14)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU - Model: shufflenet-v2 (ms; fewer is better)
  1: 4.85 (SE +/- 0.06, N = 3; runs 4.78 to 4.97; sample min 4.59 / max 6.05)
  2: 4.92 (SE +/- 0.02, N = 3; runs 4.89 to 4.96; sample min 4.48 / max 25.29)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3 (ms; fewer is better)
  1: 4.22 (SE +/- 0.06, N = 3; runs 4.15 to 4.35; sample min 3.9 / max 30.64)
  2: 4.24 (SE +/- 0.15, N = 3; runs 4 to 4.51; sample min 3.8 / max 24.56)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2 (ms; fewer is better)
  1: 4.39 (SE +/- 0.04, N = 3; runs 4.34 to 4.47; sample min 4.12 / max 5.78)
  2: 4.35 (SE +/- 0.05, N = 3; runs 4.26 to 4.41; sample min 3.99 / max 6.3)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: Vulkan GPU - Model: mobilenet (ms; fewer is better)
  1: 12.85 (SE +/- 0.08, N = 3; runs 12.68 to 12.93; sample min 11.95 / max 34.82)
  2: 13.56 (SE +/- 0.01, N = 3; runs 13.54 to 13.58; sample min 12.13 / max 52.71)
  (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

ArrayFire

ArrayFire is a GPU and CPU numeric processing library; this test uses the built-in CPU and OpenCL ArrayFire benchmarks. Learn more via the OpenBenchmarking.org test page.

ArrayFire 3.7 - Test: Conjugate Gradient OpenCL (ms; fewer is better)
  1: 2.086 (SE +/- 0.000, N = 3; runs 2.09 to 2.09)
  2: 2.094 (SE +/- 0.000, N = 3; runs 2.09 to 2.1)
  (CXX) g++ options: -rdynamic

Betsy GPU Compressor

Betsy is an open-source GPU texture compressor implementing various compression techniques. Betsy is written in GLSL for Vulkan/OpenGL (compute shader) GPU-based texture compression. Learn more via the OpenBenchmarking.org test page.

Betsy GPU Compressor 1.1 Beta - Codec: ETC2 RGB - Quality: Highest (Seconds; fewer is better)
  1: 6.039 (SE +/- 0.025, N = 3; runs 6 to 6.09)
  2: 6.051 (SE +/- 0.015, N = 3; runs 6.02 to 6.07)
  (CXX) g++ options: -O3 -O2 -lpthread -ldl

RealSR-NCNN

RealSR-NCNN is an NCNN neural network implementation of the RealSR project and accelerated using the Vulkan API. RealSR is the Real-World Super Resolution via Kernel Estimation and Noise Injection. NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. This test profile times how long it takes to increase the resolution of a sample image by a scale of 4x with Vulkan. Learn more via the OpenBenchmarking.org test page.

RealSR-NCNN 20200818 - Scale: 4x - TAA: Yes (Seconds; fewer is better)
  1: 50.06 (SE +/- 0.09, N = 3; runs 49.87 to 50.16)
  2: 49.99 (SE +/- 0.07, N = 3; runs 49.85 to 50.1)
  3: 50.18 (SE +/- 0.08, N = 3; runs 50.03 to 50.3)

Betsy GPU Compressor

Betsy is an open-source GPU texture compressor implementing various compression techniques. Betsy is written in GLSL for Vulkan/OpenGL (compute shader) GPU-based texture compression. Learn more via the OpenBenchmarking.org test page.

Betsy GPU Compressor 1.1 Beta - Codec: ETC1 - Quality: Highest (Seconds; fewer is better)
  1: 4.309 (SE +/- 0.010, N = 3; runs 4.3 to 4.33)
  2: 4.300 (SE +/- 0.019, N = 3; runs 4.27 to 4.33)
  (CXX) g++ options: -O3 -O2 -lpthread -ldl

Waifu2x-NCNN Vulkan

Waifu2x-NCNN is an NCNN neural network implementation of the Waifu2x converter project and accelerated using the Vulkan API. NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. This test profile times how long it takes to increase the resolution of a sample image with Vulkan. Learn more via the OpenBenchmarking.org test page.

Waifu2x-NCNN Vulkan 20200818 - Scale: 2x - Denoise: 3 - TAA: Yes (Seconds; fewer is better)
  1: 4.297 (SE +/- 0.003, N = 3; runs 4.29 to 4.3)
  2: 4.303 (SE +/- 0.004, N = 3; runs 4.3 to 4.31)
  3: 4.321 (SE +/- 0.001, N = 3; runs 4.32 to 4.32)

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL, NVIDIA OptiX, and NVIDIA CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.92 - Blend File: Pabellon Barcelona - Compute: CUDA (Seconds; fewer is better)
  1: 190.29 (SE +/- 0.01, N = 3; runs 190.27 to 190.31)
  2: 190.98 (SE +/- 0.01, N = 3; runs 190.96 to 191)

Blender 2.92 - Blend File: Pabellon Barcelona - Compute: NVIDIA OptiX (Seconds; fewer is better)
  1: 76.77 (SE +/- 0.05, N = 3; runs 76.68 to 76.84)
  2: 77.16 (SE +/- 0.04, N = 3; runs 77.08 to 77.21)

RedShift Demo

This is a test of MAXON's RedShift demo build that currently requires NVIDIA GPU acceleration. Learn more via the OpenBenchmarking.org test page.

RedShift Demo 3.0 (Seconds; fewer is better)
  1: 228 (SE +/- 0.88, N = 3; runs 227 to 230)
  2: 228 (SE +/- 1.00, N = 3; runs 227 to 230)

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL, NVIDIA OptiX, and NVIDIA CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.92 - Blend File: Barbershop - Compute: NVIDIA OptiX (Seconds; fewer is better)
  1: 465.14 (SE +/- 1.92, N = 3; runs 462.19 to 468.73)
  2: 464.01 (SE +/- 1.26, N = 3; runs 461.51 to 465.48)

Blender 2.92 - Blend File: Fishy Cat - Compute: NVIDIA OptiX (Seconds; fewer is better)
  1: 35.70 (SE +/- 0.01, N = 3; runs 35.68 to 35.72)
  2: 35.84 (SE +/- 0.03, N = 3; runs 35.79 to 35.88)

Blender 2.92 - Blend File: Classroom - Compute: NVIDIA OptiX (Seconds; fewer is better)
  1: 48.17 (SE +/- 0.02, N = 3; runs 48.14 to 48.19)
  2: 48.26 (SE +/- 0.03, N = 3; runs 48.2 to 48.29)

Blender 2.92 - Blend File: BMW27 - Compute: NVIDIA OptiX (Seconds; fewer is better)
  1: 16.11 (SE +/- 0.04, N = 3; runs 16.06 to 16.19)
  2: 16.17 (SE +/- 0.04, N = 3; runs 16.12 to 16.25)

Blender 2.92 - Blend File: Barbershop - Compute: CUDA (Seconds; fewer is better)
  1: 509.33 (SE +/- 0.43, N = 3; runs 508.51 to 509.98)
  2: 508.38 (SE +/- 0.25, N = 3; runs 507.89 to 508.69)

Blender 2.92 - Blend File: Fishy Cat - Compute: CUDA (Seconds; fewer is better)
  1: 54.20 (SE +/- 0.02, N = 3; runs 54.18 to 54.23)
  2: 54.44 (SE +/- 0.04, N = 3; runs 54.39 to 54.51)

Blender 2.92 - Blend File: Classroom - Compute: CUDA (Seconds; fewer is better)
  1: 76.10 (SE +/- 0.01, N = 3; runs 76.08 to 76.11)
  2: 76.49 (SE +/- 0.02, N = 3; runs 76.46 to 76.51)

Blender 2.92 - Blend File: BMW27 - Compute: CUDA (Seconds; fewer is better)
  1: 29.07 (SE +/- 0.03, N = 3; runs 29.01 to 29.11)
  2: 29.03 (SE +/- 0.01, N = 3; runs 29.02 to 29.04)

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenCL Particle Filter (Seconds; fewer is better)
  1: 5.958 (SE +/- 0.014, N = 3; runs 5.93 to 5.97)
  2: 6.016 (SE +/- 0.023, N = 3; runs 5.99 to 6.06)
  (CXX) g++ options: -m64 -lm -lcuda -lcudart -lcudadevrt -lcudart_static -lrt -lpthread -ldl

RealSR-NCNN

RealSR-NCNN is an NCNN neural network implementation of the RealSR project and accelerated using the Vulkan API. RealSR is the Real-World Super Resolution via Kernel Estimation and Noise Injection. NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. This test profile times how long it takes to increase the resolution of a sample image by a scale of 4x with Vulkan. Learn more via the OpenBenchmarking.org test page.

RealSR-NCNN 20200818 - Scale: 4x - TAA: No (Seconds; fewer is better)
  1: 8.438 (SE +/- 0.085, N = 3; runs 8.34 to 8.61)
  2: 8.427 (SE +/- 0.093, N = 3; runs 8.31 to 8.61)
  3: 8.437 (SE +/- 0.096, N = 3; runs 8.32 to 8.63)