EPYC 7601 July

AMD EPYC 7601 32-Core testing with a TYAN B8026T70AE24HR (V1.02.B10 BIOS) and llvmpipe 126GB on Ubuntu 19.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2007117-NE-EPYC7601J26

Run Management
  Result Identifier: EPYC 7601 - Date: July 10 2020 - Test Run Duration: 10 Hours, 5 Minutes


EPYC 7601 July - OpenBenchmarking.org - Phoronix Test Suite

  Processor: AMD EPYC 7601 32-Core @ 2.20GHz (32 Cores / 64 Threads)
  Motherboard: TYAN B8026T70AE24HR (V1.02.B10 BIOS)
  Chipset: AMD 17h
  Memory: 126GB
  Disk: 280GB INTEL SSDPE21D280GA
  Graphics: llvmpipe 126GB
  Monitor: VE228
  Network: 2 x Broadcom NetXtreme BCM5720 PCIe
  OS: Ubuntu 19.04
  Kernel: 5.5.0-rc7-phx-k10temp6 (x86_64) 20200123
  Desktop: GNOME Shell 3.32.2
  Display Server: X Server 1.20.4
  Display Driver: modesetting 1.20.4
  OpenGL: 3.3 Mesa 19.0.8 (LLVM 8.0 128 bits)
  Compiler: GCC 8.3.0
  File-System: ext4
  Screen Resolution: 1920x1080

EPYC 7601 July Benchmarks - System Logs:
  - --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --enable-libmpx --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib --with-tune=generic --without-cuda-driver -v
  - Scaling Governor: acpi-cpufreq ondemand - CPU Microcode: 0x8001227
  - OpenJDK Runtime Environment (build 11.0.5+10-post-Ubuntu-0ubuntu1.119.04)
  - Python 2.7.16 + Python 3.7.3
  - itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional STIBP: disabled RSB filling + tsx_async_abort: Not affected

Results Overview: bar-chart summary of all 128 benchmark results for the EPYC 7601 system. The individual results follow below; see also the "128 Results Shown" index at the end of this file.

C-Blosc

A simple, compressed, fast and persistent data store library for C. Learn more via the OpenBenchmarking.org test page.
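For orientation, here is a minimal sketch of the kind of in-memory compression this test exercises, written against the classic blosc1-style C API (blosc_init / blosc_set_compressor / blosc_compress). Whether the 2.0 beta build benchmarked here exposes exactly this interface is an assumption, and the buffer contents and compression level are illustrative only.

    // Minimal sketch: compress a buffer with the blosc1-style C API and the
    // blosclz codec exercised by this test profile. Illustrative only.
    #include <blosc.h>
    #include <cstdio>
    #include <vector>

    int main() {
        blosc_init();
        blosc_set_compressor("blosclz");            // codec used by this result
        std::vector<float> src(1000000, 1.0f);      // highly compressible sample data
        size_t nbytes = src.size() * sizeof(float);
        std::vector<char> dst(nbytes + BLOSC_MAX_OVERHEAD);
        int csize = blosc_compress(5 /* clevel */, BLOSC_SHUFFLE, sizeof(float),
                                   nbytes, src.data(), dst.data(), dst.size());
        std::printf("compressed %zu -> %d bytes\n", nbytes, csize);
        blosc_destroy();
        return 0;
    }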

C-Blosc 2.0 Beta 5 (MB/s, more is better):
  Compressor: blosclz - EPYC 7601: 5931.1 (SE +/- 13.58, N = 3)
1. (CXX) g++ options: -rdynamic

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting among the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 (Total Mop/s, more is better):
  Test / Class: BT.C - EPYC 7601: 60942.02 (SE +/- 639.49, N = 3)
  Test / Class: CG.C - EPYC 7601: 17122.98 (SE +/- 13.74, N = 3)
  Test / Class: EP.C - EPYC 7601: 1232.95 (SE +/- 0.59, N = 3)
  Test / Class: EP.D - EPYC 7601: 1233.36 (SE +/- 0.37, N = 3)
  Test / Class: FT.C - EPYC 7601: 41521.07 (SE +/- 271.96, N = 3)
  Test / Class: IS.D - EPYC 7601: 1293.70 (SE +/- 1.73, N = 3)
  Test / Class: LU.C - EPYC 7601: 74855.61 (SE +/- 47.01, N = 3)
  Test / Class: MG.C - EPYC 7601: 41658.66 (SE +/- 308.83, N = 3)
  Test / Class: SP.B - EPYC 7601: 32802.74 (SE +/- 299.91, N = 3)
1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi
2. Open MPI 3.1.3

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 (Seconds, fewer is better):
  Test: OpenMP LavaMD - EPYC 7601: 105.19 (SE +/- 0.30, N = 3)
  Test: OpenMP HotSpot3D - EPYC 7601: 138.05 (SE +/- 0.96, N = 3)
  Test: OpenMP Leukocyte - EPYC 7601: 71.37 (SE +/- 0.42, N = 3)
  Test: OpenMP CFD Solver - EPYC 7601: 10.72 (SE +/- 0.04, N = 3)
  Test: OpenMP Streamcluster - EPYC 7601: 17.31 (SE +/- 0.18, N = 3)
1. (CXX) g++ options: -O2 -lOpenCL

Nebular Empirical Analysis Tool

NEAT is the Nebular Empirical Analysis Tool for empirical analysis of ionised nebulae, with uncertainty propagation. Learn more via the OpenBenchmarking.org test page.

Nebular Empirical Analysis Tool 2020-02-29 (Seconds, fewer is better):
  EPYC 7601: 29.36 (SE +/- 0.37, N = 15)
1. (F9X) gfortran options: -cpp -ffree-line-length-0 -Jsource/ -fopenmp -O3 -fno-backtrace

Algebraic Multi-Grid Benchmark

AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. The driver provided with AMG builds linear systems for various 3-dimensional problems. Learn more via the OpenBenchmarking.org test page.

Algebraic Multi-Grid Benchmark (Figure Of Merit, more is better):
  EPYC 7601: 6583696 (SE +/- 61683.05, N = 3)
1. (CC) gcc options: -lparcsr_ls -lparcsr_mv -lseq_mv -lIJ_mv -lkrylov -lHYPRE_utilities -lm -fopenmp -pthread -lmpi

LULESH

LULESH is the Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics. Learn more via the OpenBenchmarking.org test page.

LULESH 2.0.3 (z/s, more is better):
  EPYC 7601: 2439.11 (SE +/- 7.44, N = 3)
1. (CXX) g++ options: -O3 -fopenmp -lm -pthread -lmpi_cxx -lmpi

Java Gradle Build

This test runs Java software project builds using the Gradle build system. It is intended to give developers an idea as to the build performance for development activities and build servers. Learn more via the OpenBenchmarking.org test page.

Java Gradle Build (Seconds, fewer is better):
  Gradle Build: Reactor - EPYC 7601: 461.34 (SE +/- 5.23, N = 9)

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.
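As context for the two compression-level results below, here is a minimal sketch of one-shot Zstd compression through the library's simple C API (ZSTD_compressBound / ZSTD_compress). The test profile itself compresses a sample Ubuntu ISO, so this illustrates the underlying API rather than the exact invocation; the buffer size and level are placeholders.

    // Minimal sketch: one-shot Zstd compression of an in-memory buffer.
    #include <zstd.h>
    #include <cstdio>
    #include <vector>

    int main() {
        std::vector<char> input(16 * 1024 * 1024, 'x');            // stand-in for file data
        std::vector<char> output(ZSTD_compressBound(input.size())); // worst-case output size
        size_t written = ZSTD_compress(output.data(), output.size(),
                                       input.data(), input.size(),
                                       19 /* compression level, as in the second graph */);
        if (ZSTD_isError(written)) {
            std::fprintf(stderr, "zstd error: %s\n", ZSTD_getErrorName(written));
            return 1;
        }
        std::printf("compressed %zu -> %zu bytes\n", input.size(), written);
        return 0;
    }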

Zstd Compression 1.4.5 (MB/s, more is better):
  Compression Level: 3 - EPYC 7601: 2976.8 (SE +/- 55.81, N = 15)
  Compression Level: 19 - EPYC 7601: 62.9 (SE +/- 0.15, N = 3)
1. (CC) gcc options: -O3 -pthread -lz -llzma

Nettle

GNU Nettle is a low-level cryptographic library. Learn more via the OpenBenchmarking.org test page.
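For context on what the aes256 figure below measures, here is a minimal sketch of raw AES-256 block encryption through Nettle's low-level C API (aes256_set_encrypt_key / aes256_encrypt), assuming the standard Nettle 3.x interface; the all-zero key and single-block input are illustrative only.

    // Minimal sketch: one AES-256 block encryption with Nettle's low-level API.
    #include <nettle/aes.h>
    #include <cstdint>

    int main() {
        uint8_t key[AES256_KEY_SIZE] = {0};            // 32-byte key, zeroed for illustration
        uint8_t in[AES_BLOCK_SIZE]  = {0};             // one 16-byte plaintext block
        uint8_t out[AES_BLOCK_SIZE];

        struct aes256_ctx ctx;
        aes256_set_encrypt_key(&ctx, key);             // expand the key schedule
        aes256_encrypt(&ctx, sizeof(in), out, in);     // encrypt the block
        return 0;
    }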

Nettle 3.5.1 (Mbyte/s, more is better):
  Test: aes256 - EPYC 7601: 4533.65 (SE +/- 1.61, N = 3; MIN: 3322.88 / MAX: 6950.84)
  Test: chacha - EPYC 7601: 695.42 (SE +/- 0.47, N = 3; MIN: 364.14 / MAX: 1841.34)
  Test: sha512 - EPYC 7601: 443.90 (SE +/- 0.28, N = 3)
  Test: poly1305-aes - EPYC 7601: 1837.75 (SE +/- 1.79, N = 3)
1. (CC) gcc options: -O2 -ggdb3 -lnettle -lgmp -lm -lcrypto

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 1.5 (ms, fewer is better):
  Harness: IP Batch 1D - Data Type: f32 - Engine: CPU - EPYC 7601: 4.08529 (SE +/- 0.08086, N = 15; MIN: 2.82)
  Harness: IP Batch All - Data Type: f32 - Engine: CPU - EPYC 7601: 69.88 (SE +/- 0.13, N = 3; MIN: 66.89)
  Harness: IP Batch 1D - Data Type: u8s8f32 - Engine: CPU - EPYC 7601: 2.65204 (SE +/- 0.00638, N = 3; MIN: 2.55)
  Harness: IP Batch All - Data Type: u8s8f32 - Engine: CPU - EPYC 7601: 37.87 (SE +/- 0.52, N = 15; MIN: 33)
  Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU - EPYC 7601: 18.72 (SE +/- 0.28, N = 3; MIN: 17.13)
  Harness: Deconvolution Batch deconv_1d - Data Type: f32 - Engine: CPU - EPYC 7601: 3.78441 (SE +/- 0.02327, N = 3; MIN: 3.46)
  Harness: Deconvolution Batch deconv_3d - Data Type: f32 - Engine: CPU - EPYC 7601: 8.51342 (SE +/- 0.10506, N = 3; MIN: 6.95)
  Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU - EPYC 7601: 20.36 (SE +/- 0.30, N = 4; MIN: 17.64)
  Harness: Deconvolution Batch deconv_1d - Data Type: u8s8f32 - Engine: CPU - EPYC 7601: 4.74844 (SE +/- 0.18762, N = 15; MIN: 3.93)
  Harness: Deconvolution Batch deconv_3d - Data Type: u8s8f32 - Engine: CPU - EPYC 7601: 4.30839 (SE +/- 0.05572, N = 3; MIN: 4.07)
  Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU - EPYC 7601: 436.56 (SE +/- 2.45, N = 3; MIN: 423.33)
  Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU - EPYC 7601: 110.64 (SE +/- 0.57, N = 3; MIN: 108.37)
  Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU - EPYC 7601: 1.73752 (SE +/- 0.02267, N = 3; MIN: 0.97)
  Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU - EPYC 7601: 1.75202 (SE +/- 0.00348, N = 3; MIN: 1.63)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.7.0 (FPS, more is better):
  Video Input: Chimera 1080p - EPYC 7601: 382.61 (SE +/- 0.91, N = 3; MIN: 292.08 / MAX: 482.59)
  Video Input: Summer Nature 4K - EPYC 7601: 173.97 (SE +/- 0.14, N = 3; MIN: 110.45 / MAX: 188.35)
  Video Input: Summer Nature 1080p - EPYC 7601: 401.11 (SE +/- 1.40, N = 3; MIN: 236.84 / MAX: 443.14)
  Video Input: Chimera 1080p 10-bit - EPYC 7601: 89.03 (SE +/- 0.05, N = 3; MIN: 62.64 / MAX: 145.46)
1. (CC) gcc options: -pthread

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0 (Frames Per Second, more is better):
  Encoder Mode: Speed 0 Two-Pass - EPYC 7601: 0.2 (SE +/- 0.00, N = 3)
  Encoder Mode: Speed 4 Two-Pass - EPYC 7601: 1.48 (SE +/- 0.01, N = 3)
  Encoder Mode: Speed 6 Realtime - EPYC 7601: 11.42 (SE +/- 0.05, N = 3)
  Encoder Mode: Speed 6 Two-Pass - EPYC 7601: 2.34 (SE +/- 0.01, N = 3)
  Encoder Mode: Speed 8 Realtime - EPYC 7601: 21.05 (SE +/- 0.25, N = 5)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8 (Frames Per Second, more is better):
  Encoder Mode: Enc Mode 0 - Input: 1080p - EPYC 7601: 0.097 (SE +/- 0.000, N = 3)
  Encoder Mode: Enc Mode 4 - Input: 1080p - EPYC 7601: 4.731 (SE +/- 0.042, N = 3)
  Encoder Mode: Enc Mode 8 - Input: 1080p - EPYC 7601: 37.69 (SE +/- 0.39, N = 3)
1. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.1 (Frames Per Second, more is better):
  Tuning: VMAF Optimized - Input: Bosphorus 1080p - EPYC 7601: 159.65 (SE +/- 1.19, N = 3)
  Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p - EPYC 7601: 158.58 (SE +/- 0.10, N = 3)
  Tuning: Visual Quality Optimized - Input: Bosphorus 1080p - EPYC 7601: 143.16 (SE +/- 2.08, N = 3)
1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray-tracing and is part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

Intel Open Image Denoise 1.2.0 (Images / Sec, more is better):
  Scene: Memorial - EPYC 7601: 7.56 (SE +/- 0.01, N = 3)

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library that offers high-performance volume computation kernels and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 0.9 (Items / Sec, more is better):
  Benchmark: vklBenchmark - EPYC 7601: 207.17 (SE +/- 0.64, N = 3; MIN: 1 / MAX: 742)

LuxCoreRender

LuxCoreRender is an open-source physically based renderer. This test profile is focused on running LuxCoreRender on the CPU as opposed to the OpenCL version. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.3 (M samples/sec, more is better):
  Scene: DLSC - EPYC 7601: 3.76 (SE +/- 0.02, N = 3; MIN: 3.65 / MAX: 3.95)
  Scene: Rainbow Colors and Prism - EPYC 7601: 3.99 (SE +/- 0.01, N = 3; MIN: 3.97 / MAX: 4.02)

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.7.3 (Seconds, fewer is better):
  Encoder Speed: 0 - EPYC 7601: 91.07 (SE +/- 0.31, N = 3)
  Encoder Speed: 2 - EPYC 7601: 55.61 (SE +/- 0.50, N = 3)
  Encoder Speed: 8 - EPYC 7601: 7.379 (SE +/- 0.068, N = 15)
  Encoder Speed: 10 - EPYC 7601: 6.984 (SE +/- 0.055, N = 3)
1. (CXX) g++ options: -O3 -fPIC

Timed Apache Compilation

This test times how long it takes to build the Apache HTTPD web server. Learn more via the OpenBenchmarking.org test page.

Timed Apache Compilation 2.4.41 (Seconds, fewer is better):
  Time To Compile - EPYC 7601: 28.60 (SE +/- 0.05, N = 3)

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.4 (Seconds, fewer is better):
  Time To Compile - EPYC 7601: 43.29 (SE +/- 0.58, N = 5)

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 10.0 (Seconds, fewer is better):
  Time To Compile - EPYC 7601: 315.26 (SE +/- 1.02, N = 3)

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code that offers Cargo-like functionality. Learn more via the OpenBenchmarking.org test page.

Build2 0.12 (Seconds, fewer is better):
  Time To Compile - EPYC 7601: 77.76 (SE +/- 0.26, N = 3)

Montage Astronomical Image Mosaic Engine

Montage is an open-source astronomical image mosaic engine. This BSD-licensed astronomy software is developed by the California Institute of Technology, Pasadena. Learn more via the OpenBenchmarking.org test page.

Montage Astronomical Image Mosaic Engine 6.0 (Seconds, fewer is better):
  Mosaic of M17, K band, 1.5 deg x 1.5 deg - EPYC 7601: 114.92 (SE +/- 0.19, N = 3)
1. (CC) gcc options: -std=gnu99 -lcfitsio -lm -O2

LevelDB

LevelDB is a key-value storage library developed by Google that supports making use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.
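As a reference for the fill/read/seek/delete workloads below, here is a minimal sketch of the basic key-value operations through LevelDB's C++ API (DB::Open, Put, Get, Delete); the database path and key/value strings are placeholders.

    // Minimal sketch: the key-value operations behind the fill/read/delete benchmarks.
    #include <leveldb/db.h>
    #include <cassert>
    #include <string>

    int main() {
        leveldb::DB* db = nullptr;
        leveldb::Options options;
        options.create_if_missing = true;                           // create the store if absent
        leveldb::Status s = leveldb::DB::Open(options, "/tmp/leveldb-demo", &db);
        assert(s.ok());

        s = db->Put(leveldb::WriteOptions(), "key1", "value1");     // fill / overwrite
        std::string value;
        s = db->Get(leveldb::ReadOptions(), "key1", &value);        // random / hot read
        s = db->Delete(leveldb::WriteOptions(), "key1");            // random delete

        delete db;
        return 0;
    }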

LevelDB 1.22:
  Benchmark: Hot Read (Microseconds Per Op, fewer is better) - EPYC 7601: 68.48 (SE +/- 0.62, N = 3)
  Benchmark: Fill Sync (MB/s, more is better) - EPYC 7601: 6.3 (SE +/- 0.03, N = 3)
  Benchmark: Fill Sync (Microseconds Per Op, fewer is better) - EPYC 7601: 1110.54 (SE +/- 9.04, N = 3)
  Benchmark: Overwrite (MB/s, more is better) - EPYC 7601: 10.1 (SE +/- 0.10, N = 3)
  Benchmark: Overwrite (Microseconds Per Op, fewer is better) - EPYC 7601: 699.44 (SE +/- 7.97, N = 3)
  Benchmark: Random Fill (MB/s, more is better) - EPYC 7601: 10.2 (SE +/- 0.03, N = 3)
  Benchmark: Random Fill (Microseconds Per Op, fewer is better) - EPYC 7601: 690.60 (SE +/- 0.69, N = 3)
  Benchmark: Random Read (Microseconds Per Op, fewer is better) - EPYC 7601: 67.18 (SE +/- 0.77, N = 3)
  Benchmark: Seek Random (Microseconds Per Op, fewer is better) - EPYC 7601: 127.66 (SE +/- 1.04, N = 3)
  Benchmark: Random Delete (Microseconds Per Op, fewer is better) - EPYC 7601: 647.79 (SE +/- 2.53, N = 3)
  Benchmark: Sequential Fill (MB/s, more is better) - EPYC 7601: 10.3 (SE +/- 0.12, N = 3)
  Benchmark: Sequential Fill (Microseconds Per Op, fewer is better) - EPYC 7601: 685.76 (SE +/- 7.84, N = 3)
1. (CXX) g++ options: -O3 -lpthread

GROMACS

This is a test of the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package running on the CPU with the water_GMX50 data. Learn more via the OpenBenchmarking.org test page.

GROMACS 2020.1 (Ns Per Day, more is better):
  Water Benchmark - EPYC 7601: 1.996 (SE +/- 0.019, N = 3)
1. (CXX) g++ options: -O3 -pthread -lrt -lpthread -lm

Tensorflow

This is a benchmark of the Tensorflow deep learning framework using the CIFAR10 data set. Learn more via the OpenBenchmarking.org test page.

Tensorflow (Seconds, fewer is better):
  Build: Cifar10 - EPYC 7601: 80.27 (SE +/- 1.02, N = 5)

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12 (Seconds, fewer is better):
  Settings: ETC1S - EPYC 7601: 64.88 (SE +/- 0.20, N = 3)
  Settings: UASTC Level 0 - EPYC 7601: 10.08 (SE +/- 0.01, N = 3)
  Settings: UASTC Level 2 - EPYC 7601: 20.81 (SE +/- 0.12, N = 3)
  Settings: UASTC Level 3 - EPYC 7601: 32.89 (SE +/- 0.08, N = 3)
  Settings: UASTC Level 2 + RDO Post-Processing - EPYC 7601: 909.78 (SE +/- 0.17, N = 3)
1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

RawTherapee

RawTherapee is a cross-platform, open-source multi-threaded RAW image processing program. Learn more via the OpenBenchmarking.org test page.

RawTherapee (Seconds, fewer is better):
  Total Benchmark Time - EPYC 7601: 64.91 (SE +/- 0.39, N = 3)
1. RawTherapee, version 5.5, command line.

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 (Bogo Ops/s, more is better):
  Test: MMAP - EPYC 7601: 2335.63 (SE +/- 7.83, N = 3)
  Test: NUMA - EPYC 7601: 488.92 (SE +/- 1.31, N = 3)
  Test: MEMFD - EPYC 7601: 2952.69 (SE +/- 1.73, N = 3)
  Test: Atomic - EPYC 7601: 189239.29 (SE +/- 265.81, N = 3)
  Test: Crypto - EPYC 7601: 5862.06 (SE +/- 23.62, N = 3)
  Test: Malloc - EPYC 7601: 332163474.05 (SE +/- 114923.91, N = 3)
  Test: Forking - EPYC 7601: 54226.43 (SE +/- 483.71, N = 15)
  Test: SENDFILE - EPYC 7601: 365052.42 (SE +/- 1651.95, N = 3)
  Test: CPU Cache - EPYC 7601: 68.62 (SE +/- 0.77, N = 15)
  Test: CPU Stress - EPYC 7601: 8928.16 (SE +/- 98.51, N = 3)
  Test: Semaphores - EPYC 7601: 5069837.35 (SE +/- 1684.88, N = 3)
  Test: Matrix Math - EPYC 7601: 118905.54 (SE +/- 504.08, N = 3)
  Test: Vector Math - EPYC 7601: 190449.84 (SE +/- 191.81, N = 3)
  Test: Memory Copying - EPYC 7601: 4525.56 (SE +/- 39.24, N = 11)
  Test: Socket Activity - EPYC 7601: 18216.44 (SE +/- 178.61, N = 3)
  Test: Context Switching - EPYC 7601: 13278421.76 (SE +/- 155764.34, N = 15)
  Test: Glibc C String Functions - EPYC 7601: 2520648.89 (SE +/- 42398.84, N = 3)
  Test: Glibc Qsort Data Sorting - EPYC 7601: 383.91 (SE +/- 0.35, N = 3)
  Test: System V Message Passing - EPYC 7601: 11747827.43 (SE +/- 12456.14, N = 3)
1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc

Chaos Group V-RAY

This is a test of Chaos Group's V-RAY benchmark. Learn more via the OpenBenchmarking.org test page.

Chaos Group V-RAY 4.10.07 (Ksamples, more is better):
  Mode: CPU - EPYC 7601: 24535 (SE +/- 272.09, N = 3)

Apache Cassandra

This is a benchmark of the Apache Cassandra NoSQL database management system making use of cassandra-stress. Learn more via the OpenBenchmarking.org test page.

Apache Cassandra 3.11.4 (Op/s, more is better):
  Test: Reads - EPYC 7601: 25851 (SE +/- 9815.11, N = 9)
  Test: Writes - EPYC 7601: 149426 (SE +/- 1627.85, N = 3)
  Test: Mixed 1:1 - EPYC 7601: 3624 (SE +/- 195.48, N = 9)
  Test: Mixed 1:3 - EPYC 7601: 1486 (SE +/- 76.25, N = 9)

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 (Milliseconds, fewer is better):
  Benchmark: go - EPYC 7601: 378 (SE +/- 1.15, N = 3)
  Benchmark: 2to3 - EPYC 7601: 486
  Benchmark: chaos - EPYC 7601: 184
  Benchmark: float - EPYC 7601: 172
  Benchmark: nbody - EPYC 7601: 178 (SE +/- 0.33, N = 3)
  Benchmark: pathlib - EPYC 7601: 27.3 (SE +/- 0.03, N = 3)
  Benchmark: raytrace - EPYC 7601: 776 (SE +/- 1.53, N = 3)
  Benchmark: json_loads - EPYC 7601: 38.8 (SE +/- 0.06, N = 3)
  Benchmark: crypto_pyaes - EPYC 7601: 168 (SE +/- 0.67, N = 3)
  Benchmark: regex_compile - EPYC 7601: 273
  Benchmark: python_startup - EPYC 7601: 18.9 (SE +/- 0.03, N = 3)
  Benchmark: django_template - EPYC 7601: 92.9 (SE +/- 0.00, N = 3)
  Benchmark: pickle_pure_python - EPYC 7601: 775 (SE +/- 0.58, N = 3)

NeatBench

NeatBench is a benchmark of the cross-platform Neat Video software on the CPU and optional GPU (OpenCL / CUDA) support. Learn more via the OpenBenchmarking.org test page.

NeatBench 5 (FPS, more is better):
  Acceleration: CPU - EPYC 7601: 13.1 (SE +/- 0.18, N = 3)

Git

This test measures the time needed to carry out some sample Git operations on an example, static repository that happens to be a copy of the GNOME GTK tool-kit repository. Learn more via the OpenBenchmarking.org test page.

Git (Seconds, fewer is better):
  Time To Complete Common Git Commands - EPYC 7601: 68.49 (SE +/- 0.10, N = 3)
1. git version 2.20.1

Tesseract OCR

Tesseract-OCR is the open-source optical character recognition (OCR) engine for the conversion of text within images to raw text output. This test profile relies upon a system-supplied Tesseract installation. Learn more via the OpenBenchmarking.org test page.

Tesseract OCR 4.0.0 (Seconds, fewer is better):
  Time To OCR 7 Images - EPYC 7601: 42.75 (SE +/- 0.14, N = 3)

BRL-CAD

BRL-CAD 7.28.0 is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.30.8 (VGR Performance Metric, more is better):
  EPYC 7601: 237098
1. (CXX) g++ options: -std=c++11 -pipe -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -rdynamic -lSM -lICE -lXi -lGLU -lGL -lGLdispatch -lX11 -lXext -lXrender -lpthread -ldl -luuid -lm

128 Results Shown

C-Blosc
NAS Parallel Benchmarks:
  BT.C
  CG.C
  EP.C
  EP.D
  FT.C
  IS.D
  LU.C
  MG.C
  SP.B
Rodinia:
  OpenMP LavaMD
  OpenMP HotSpot3D
  OpenMP Leukocyte
  OpenMP CFD Solver
  OpenMP Streamcluster
Nebular Empirical Analysis Tool
Algebraic Multi-Grid Benchmark
LULESH
Java Gradle Build
Zstd Compression:
  3
  19
Nettle:
  aes256
  chacha
  sha512
  poly1305-aes
oneDNN:
  IP Batch 1D - f32 - CPU
  IP Batch All - f32 - CPU
  IP Batch 1D - u8s8f32 - CPU
  IP Batch All - u8s8f32 - CPU
  Convolution Batch Shapes Auto - f32 - CPU
  Deconvolution Batch deconv_1d - f32 - CPU
  Deconvolution Batch deconv_3d - f32 - CPU
  Convolution Batch Shapes Auto - u8s8f32 - CPU
  Deconvolution Batch deconv_1d - u8s8f32 - CPU
  Deconvolution Batch deconv_3d - u8s8f32 - CPU
  Recurrent Neural Network Training - f32 - CPU
  Recurrent Neural Network Inference - f32 - CPU
  Matrix Multiply Batch Shapes Transformer - f32 - CPU
  Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU
dav1d:
  Chimera 1080p
  Summer Nature 4K
  Summer Nature 1080p
  Chimera 1080p 10-bit
AOM AV1:
  Speed 0 Two-Pass
  Speed 4 Two-Pass
  Speed 6 Realtime
  Speed 6 Two-Pass
  Speed 8 Realtime
SVT-AV1:
  Enc Mode 0 - 1080p
  Enc Mode 4 - 1080p
  Enc Mode 8 - 1080p
SVT-VP9:
  VMAF Optimized - Bosphorus 1080p
  PSNR/SSIM Optimized - Bosphorus 1080p
  Visual Quality Optimized - Bosphorus 1080p
Intel Open Image Denoise
OpenVKL
LuxCoreRender:
  DLSC
  Rainbow Colors and Prism
libavif avifenc:
  0
  2
  8
  10
Timed Apache Compilation
Timed Linux Kernel Compilation
Timed LLVM Compilation
Build2
Montage Astronomical Image Mosaic Engine
LevelDB:
  Hot Read
  Fill Sync
  Fill Sync
  Overwrite
  Overwrite
  Rand Fill
  Rand Fill
  Rand Read
  Seek Rand
  Rand Delete
  Seq Fill
  Seq Fill
GROMACS
Tensorflow
Basis Universal:
  ETC1S
  UASTC Level 0
  UASTC Level 2
  UASTC Level 3
  UASTC Level 2 + RDO Post-Processing
RawTherapee
Stress-NG:
  MMAP
  NUMA
  MEMFD
  Atomic
  Crypto
  Malloc
  Forking
  SENDFILE
  CPU Cache
  CPU Stress
  Semaphores
  Matrix Math
  Vector Math
  Memory Copying
  Socket Activity
  Context Switching
  Glibc C String Functions
  Glibc Qsort Data Sorting
  System V Message Passing
Chaos Group V-RAY
Apache Cassandra:
  Reads
  Writes
  Mixed 1:1
  Mixed 1:3
PyPerformance:
  go
  2to3
  chaos
  float
  nbody
  pathlib
  raytrace
  json_loads
  crypto_pyaes
  regex_compile
  python_startup
  django_template
  pickle_pure_python
NeatBench
Git
Tesseract OCR
BRL-CAD