AMD EPYC 7F52

AMD EPYC 7F52 16-Core testing with a Supermicro H11DSi-NT v2.00 (2.1 BIOS) and llvmpipe on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2012294-HA-AMDEPYC7F75

Tests in this result file span the following categories:

Audio Encoding 6 Tests
AV1 4 Tests
Bioinformatics 2 Tests
BLAS (Basic Linear Algebra Sub-Routine) Tests 2 Tests
C++ Boost Tests 2 Tests
Chess Test Suite 3 Tests
Timed Code Compilation 7 Tests
C/C++ Compiler Tests 31 Tests
Compression Tests 4 Tests
CPU Massive 38 Tests
Creator Workloads 37 Tests
Cryptography 3 Tests
Database Test Suite 5 Tests
Encoding 15 Tests
Fortran Tests 4 Tests
Game Development 4 Tests
HPC - High Performance Computing 25 Tests
Imaging 5 Tests
Common Kernel Benchmarks 5 Tests
Machine Learning 15 Tests
Molecular Dynamics 3 Tests
MPI Benchmarks 4 Tests
Multi-Core 39 Tests
NVIDIA GPU Compute 8 Tests
Intel oneAPI 5 Tests
OpenCV Tests 2 Tests
OpenMPI Tests 5 Tests
Productivity 3 Tests
Programmer / Developer System Benchmarks 12 Tests
Python 4 Tests
Raytracing 2 Tests
Renderers 6 Tests
Scientific Computing 9 Tests
Server 9 Tests
Server CPU Tests 20 Tests
Single-Threaded 11 Tests
Speech 3 Tests
Telephony 3 Tests
Video Encoding 9 Tests
Common Workstation Benchmarks 3 Tests

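The Phoronix Test Suite can summarize a result file with overall harmonic and geometric means taken across the normalized per-test scores. A minimal sketch of those two aggregations (the sample values are illustrative, not taken from this file):

```python
from math import prod

def geometric_mean(values):
    """Geometric mean: the n-th root of the product of n values."""
    return prod(values) ** (1 / len(values))

def harmonic_mean(values):
    """Harmonic mean: n divided by the sum of reciprocals."""
    return len(values) / sum(1 / v for v in values)

# Illustrative normalized scores (baseline run = 1.0 for every test)
normalized = [1.645, 1.0, 0.96, 1.04]
print(round(geometric_mean(normalized), 3))  # -> 1.132
print(round(harmonic_mean(normalized), 3))   # -> 1.108
```

The geometric mean damps the influence of a single outlier test, which is why it is the usual choice for cross-test summaries.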

Run Management

Result Identifier   Date Run            Test Duration
EPYC 7F52           December 27 2020    1 Day, 3 Hours, 59 Minutes
Linux 5.10.3        December 28 2020    1 Day, 3 Hours, 47 Minutes
Average                                 1 Day, 3 Hours, 53 Minutes


AMD EPYC 7F52 Benchmarks (OpenBenchmarking.org / Phoronix Test Suite)

Processor: AMD EPYC 7F52 16-Core @ 3.50GHz (16 Cores / 32 Threads)
Motherboard: Supermicro H11DSi-NT v2.00 (2.1 BIOS)
Chipset: AMD Starship/Matisse
Memory: 64GB
Disk: 280GB INTEL SSDPE21D280GA
Graphics: llvmpipe
Monitor: VE228
Network: 2 x Intel 10G X550T
OS: Ubuntu 20.04
Kernels: 5.8.0-050800rc6daily20200721-generic (x86_64) 20200720 / 5.10.3-051003-generic (x86_64)
Desktop: GNOME Shell 3.36.1
Display Server: X Server 1.20.8
Display Driver: modesetting 1.20.8
OpenGL: 3.3 Mesa 20.0.4 (LLVM 9.0.1 128 bits)
Compiler: GCC 9.3.0
File-System: ext4
Screen Resolution: 1920x1080

System Logs:
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: acpi-cpufreq ondemand (Boost: Enabled)
- CPU Microcode: 0x8301034
- OpenJDK Runtime Environment (build 11.0.7+10-post-Ubuntu-3ubuntu1)
- Python 2.7.18rc1 + Python 3.8.2
- Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

EPYC 7F52 vs. Linux 5.10.3 Comparison (Phoronix Test Suite)

Largest relative differences between the two runs (baseline up to +164.4%):

tfft2 (Polyhedron Fortran Benchmarks): 164.5%
IP Shapes 3D - u8s8f32 - CPU (oneDNN): 73%
LPOP (Redis): 56%
C.B.S.A - f32 - CPU (oneDNN): 45.3%
M.M.B.S.T - f32 - CPU (oneDNN): 38.6%
IP Shapes 3D - f32 - CPU (oneDNN): 32.7%
Forking (Stress-NG): 26.8%
S.V.M.P (Stress-NG): 26.4%
IP Shapes 1D - f32 - CPU (oneDNN): 20.1%
D.B.s - f32 - CPU (oneDNN): 17.9%
D.B.s - f32 - CPU (oneDNN): 17.3%
C.B.S.A - u8s8f32 - CPU (oneDNN): 12.1%
python_startup (PyPerformance): 12%
R.N.N.T - u8s8f32 - CPU (oneDNN): 11%
R.N.N.T - f32 - CPU (oneDNN): 10.6%
R.N.N.T - bf16bf16bf16 - CPU (oneDNN): 9.9%
R.N.N.I - f32 - CPU (oneDNN): 9.5%
R.N.N.I - bf16bf16bf16 - CPU (oneDNN): 8.7%
R.N.N.I - u8s8f32 - CPU (oneDNN): 8.7%
GET (Redis): 7.6%

The remaining entries show differences of roughly 2% to 8% across MMAP and other Stress-NG stressors, Redis, GraphicsMagick, OpenVKL, Mobile Neural Network, Kvazaar, SVT-VP9, NCNN, PostgreSQL pgbench, Open Porous Media, PyPerformance, PlaidML, librsvg, ECP-CANDLE, Zstd and LZ4 compression, and other tests.
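The percentages in the comparison above are plain relative differences between the two runs' results. As a sketch, using the Polyhedron tfft2 times from the per-test results in this file (21.79s on the EPYC 7F52 run vs. 57.63s on Linux 5.10.3):

```python
def percent_change(baseline, other):
    """Relative difference of `other` versus `baseline`, in percent."""
    return (other - baseline) / baseline * 100.0

# Polyhedron tfft2 run times in seconds (fewer is better),
# taken from the per-test results in this file
epyc_7f52_s = 21.79
linux_5_10_3_s = 57.63

delta = percent_change(epyc_7f52_s, linux_5_10_3_s)
print(f"{delta:.1f}%")  # -> 164.5%, matching the chart entry for tfft2
```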


PlaidML

This test profile uses the PlaidML deep learning framework, developed by Intel, to run various machine learning benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - FPS, More Is Better
PlaidML - FP16: No - Mode: Inference - Network: NASNet Large - Device: CPU
  Linux 5.10.3: 1.04 (SE +/- 0.00, N = 3) [Min: 1.04 / Avg: 1.04 / Max: 1.04]
  EPYC 7F52:    1.04 (SE +/- 0.00, N = 3) [Min: 1.04 / Avg: 1.04 / Max: 1.05]

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library, which offers high-performance volume computation kernels and is part of the Intel oneAPI Rendering Toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Items / Sec, More Is Better
OpenVKL 0.9 - Benchmark: vklBenchmarkUnstructuredVolume
  EPYC 7F52:    1818093.89 (SE +/- 2560.46, N = 3; MIN: 19110 / MAX: 6113054) [Min: 1813068.69 / Avg: 1818093.89 / Max: 1821459.92]
  Linux 5.10.3: 1817665.47 (SE +/- 1848.12, N = 3; MIN: 19297 / MAX: 6122055) [Min: 1814049.49 / Avg: 1817665.47 / Max: 1820136.9]

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Seconds, Fewer Is Better
Polyhedron Fortran Benchmarks - Benchmark: tfft2
  EPYC 7F52:    21.79
  Linux 5.10.3: 57.63

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Seconds, Fewer Is Better
Numenta Anomaly Benchmark 1.1 - Detector: EXPoSE
  EPYC 7F52:    756.88 (SE +/- 2.18, N = 3) [Min: 753.14 / Avg: 756.88 / Max: 760.71]
  Linux 5.10.3: 778.16 (SE +/- 0.49, N = 3) [Min: 777.34 / Avg: 778.16 / Max: 779.04]
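Each result above is reported as SE +/- x, N = y: the standard error of the mean over N runs. A minimal sketch of that calculation; the min and max below reuse the reported values for the EPYC 7F52 EXPoSE run, while the middle run is reconstructed from the reported average (the actual per-run values are not listed in this file), so the computed SE lands near, not exactly on, the reported 2.18:

```python
from math import sqrt
from statistics import mean, stdev

def standard_error(samples):
    """Standard error of the mean: sample standard deviation / sqrt(n)."""
    return stdev(samples) / sqrt(len(samples))

# Illustrative run times in seconds: reported min and max, with the
# middle value reconstructed so the mean matches the reported 756.88
runs = [753.14, 756.79, 760.71]
print(f"{mean(runs):.2f} (SE +/- {standard_error(runs):.2f}, N = {len(runs)})")
```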

PlaidML

This test profile uses the PlaidML deep learning framework, developed by Intel, to run various machine learning benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - FPS, More Is Better
PlaidML - FP16: No - Mode: Inference - Network: DenseNet 201 - Device: CPU
  Linux 5.10.3: 3.21 (SE +/- 0.01, N = 3) [Min: 3.2 / Avg: 3.21 / Max: 3.22]
  EPYC 7F52:    3.19 (SE +/- 0.01, N = 3) [Min: 3.17 / Avg: 3.19 / Max: 3.21]

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - ns/day, More Is Better
LAMMPS Molecular Dynamics Simulator 29Oct2020 - Model: 20k Atoms
  EPYC 7F52:    12.27 (SE +/- 0.01, N = 3) [Min: 12.27 / Avg: 12.27 / Max: 12.28]
  Linux 5.10.3: 12.26 (SE +/- 0.01, N = 3) [Min: 12.24 / Avg: 12.26 / Max: 12.28]
1. (CXX) g++ options: -O3 -pthread -lm

PlaidML

This test profile uses the PlaidML deep learning framework, developed by Intel, to run various machine learning benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - FPS, More Is Better
PlaidML - FP16: No - Mode: Inference - Network: ResNet 50 - Device: CPU
  EPYC 7F52:    6.14 (SE +/- 0.00, N = 3) [Min: 6.14 / Avg: 6.14 / Max: 6.14]
  Linux 5.10.3: 5.96 (SE +/- 0.08, N = 3) [Min: 5.86 / Avg: 5.96 / Max: 6.12]

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Seconds, Fewer Is Better
Polyhedron Fortran Benchmarks - Benchmark: test_fpu2
  Linux 5.10.3: 31.32
  EPYC 7F52:    32.12

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - ms, Fewer Is Better
Mobile Neural Network 2020-09-17 - Model: inception-v3
  Linux 5.10.3: 32.96 (SE +/- 0.18, N = 15; MIN: 31.71 / MAX: 49.44) [Min: 32.35 / Avg: 32.96 / Max: 35.17]
  EPYC 7F52:    33.53 (SE +/- 0.23, N = 15; MIN: 31.39 / MAX: 50.33) [Min: 32.2 / Avg: 33.53 / Max: 34.67]
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

OpenBenchmarking.org - ms, Fewer Is Better
Mobile Neural Network 2020-09-17 - Model: mobilenet-v1-1.0
  Linux 5.10.3: 6.551 (SE +/- 0.007, N = 15; MIN: 6.45 / MAX: 22.13) [Min: 6.5 / Avg: 6.55 / Max: 6.59]
  EPYC 7F52:    6.575 (SE +/- 0.012, N = 15; MIN: 6.41 / MAX: 20.06) [Min: 6.48 / Avg: 6.57 / Max: 6.64]
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

OpenBenchmarking.org - ms, Fewer Is Better
Mobile Neural Network 2020-09-17 - Model: MobileNetV2_224
  Linux 5.10.3: 6.126 (SE +/- 0.012, N = 15; MIN: 5.97 / MAX: 20.61) [Min: 6.07 / Avg: 6.13 / Max: 6.21]
  EPYC 7F52:    6.208 (SE +/- 0.012, N = 15; MIN: 6.01 / MAX: 21) [Min: 6.11 / Avg: 6.21 / Max: 6.28]
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

OpenBenchmarking.org - ms, Fewer Is Better
Mobile Neural Network 2020-09-17 - Model: resnet-v2-50
  Linux 5.10.3: 33.96 (SE +/- 0.04, N = 15; MIN: 32.06 / MAX: 51.84) [Min: 33.53 / Avg: 33.96 / Max: 34.23]
  EPYC 7F52:    34.55 (SE +/- 0.05, N = 15; MIN: 32.75 / MAX: 67.95) [Min: 34.34 / Avg: 34.55 / Max: 35.13]
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

OpenBenchmarking.org - ms, Fewer Is Better
Mobile Neural Network 2020-09-17 - Model: SqueezeNetV1.0
  Linux 5.10.3: 10.29 (SE +/- 0.11, N = 15; MIN: 9.72 / MAX: 23.37) [Min: 9.79 / Avg: 10.29 / Max: 11.14]
  EPYC 7F52:    10.93 (SE +/- 0.24, N = 15; MIN: 9.63 / MAX: 23.96) [Min: 9.83 / Avg: 10.93 / Max: 12.95]
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Timed Clash Compilation

This test builds the clash-lang Haskell-to-VHDL/Verilog/SystemVerilog compiler with GHC 8.10.1. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Seconds, Fewer Is Better
Timed Clash Compilation - Time To Compile
  Linux 5.10.3: 450.38 (SE +/- 1.29, N = 3) [Min: 447.91 / Avg: 450.38 / Max: 452.27]
  EPYC 7F52:    450.48 (SE +/- 0.56, N = 3) [Min: 449.43 / Avg: 450.48 / Max: 451.37]

BRL-CAD

BRL-CAD 7.28.0 is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - VGR Performance Metric, More Is Better
BRL-CAD 7.30.8 - VGR Performance Metric
  EPYC 7F52:    245516
  Linux 5.10.3: 242323
1. (CXX) g++ options: -std=c++11 -pipe -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -rdynamic -lSM -lICE -lXi -lGLU -lGL -lGLdispatch -lX11 -lXext -lXrender -lpthread -ldl -luuid -lm

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Score, More Is Better
AI Benchmark Alpha 0.1.2 - Device AI Score
  Linux 5.10.3: 3207
  EPYC 7F52:    3189

AI Benchmark Alpha 0.1.2 - Device Training Score
  Linux 5.10.3: 1434
  EPYC 7F52:    1421

AI Benchmark Alpha 0.1.2 - Device Inference Score
  Linux 5.10.3: 1773
  EPYC 7F52:    1768

Open Porous Media

This is a test of Open Porous Media, a set of open-source tools for simulating flow and transport of fluids in porous media. This test profile depends upon MPI/Flow already being installed on the system. Install instructions at https://opm-project.org/?page_id=36. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Seconds, Fewer Is Better
Open Porous Media - OPM Benchmark: Flow MPI Norne - Threads: 1
  Linux 5.10.3: 364.94 (SE +/- 0.73, N = 3) [Min: 364.11 / Avg: 364.94 / Max: 366.39]
  EPYC 7F52:    365.37 (SE +/- 1.92, N = 3) [Min: 363.16 / Avg: 365.37 / Max: 369.19]
1. flow 2020.04

Caffe

This is a benchmark of the Caffe deep learning framework; it currently supports the AlexNet and GoogleNet models, with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Milli-Seconds, Fewer Is Better
Caffe 2020-02-13 - Model: GoogleNet - Acceleration: CPU - Iterations: 200
  Linux 5.10.3: 362841 (SE +/- 312.30, N = 3) [Min: 362407 / Avg: 362841 / Max: 363447]
  EPYC 7F52:    363998 (SE +/- 75.84, N = 3) [Min: 363905 / Avg: 363997.67 / Max: 364148]
1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library, which offers high-performance volume computation kernels and is part of the Intel oneAPI Rendering Toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Items / Sec, More Is Better
OpenVKL 0.9 - Benchmark: vklBenchmark
  Linux 5.10.3: 218.94 (SE +/- 0.38, N = 3; MIN: 1 / MAX: 772) [Min: 218.22 / Avg: 218.94 / Max: 219.5]
  EPYC 7F52:    217.81 (SE +/- 0.60, N = 3; MIN: 1 / MAX: 765) [Min: 216.67 / Avg: 217.81 / Max: 218.67]

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.90 - Blend File: Barbershop - Compute: CPU-Only (Seconds, Fewer Is Better)
EPYC 7F52: 354.80 (SE +/- 0.32, N = 3; Min: 354.16 / Avg: 354.8 / Max: 355.15)
Linux 5.10.3: 355.80 (SE +/- 0.30, N = 3; Min: 355.43 / Avg: 355.8 / Max: 356.39)

Open Porous Media

This is a test of Open Porous Media, a set of open-source tools for simulating the flow and transport of fluids in porous media. This test profile depends upon MPI/Flow already being installed on the system. Install instructions are available at https://opm-project.org/?page_id=36. Learn more via the OpenBenchmarking.org test page.

Open Porous Media - OPM Benchmark: Flow MPI Norne - Threads: 16 (Seconds, Fewer Is Better)
Linux 5.10.3: 348.11 (SE +/- 0.13, N = 3; Min: 347.85 / Avg: 348.1 / Max: 348.24)
EPYC 7F52: 361.92 (SE +/- 0.72, N = 3; Min: 360.75 / Avg: 361.92 / Max: 363.24)
1. flow 2020.04

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: gas_dyn2 (Seconds, Fewer Is Better)
EPYC 7F52: 44.16
Linux 5.10.3: 44.26

ECP-CANDLE

The CANDLE benchmark codes implement deep learning architectures relevant to problems in cancer. These architectures address problems at different biological scales, specifically problems at the molecular, cellular and population scales. Learn more via the OpenBenchmarking.org test page.

ECP-CANDLE 0.3 - Benchmark: P3B2 (Seconds, Fewer Is Better)
Linux 5.10.3: 896.05
EPYC 7F52: 899.42

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: regnety_400m (ms, Fewer Is Better)
EPYC 7F52: 44.51 (SE +/- 0.18, N = 15; MIN: 42.64 / MAX: 117.01; Min: 43.2 / Avg: 44.51 / Max: 45.32)
Linux 5.10.3: 44.79 (SE +/- 0.14, N = 12; MIN: 43.38 / MAX: 124.54; Min: 44.08 / Avg: 44.79 / Max: 45.72)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: squeezenet_ssd (ms, Fewer Is Better)
Linux 5.10.3: 21.06 (SE +/- 0.23, N = 12; MIN: 19.62 / MAX: 77.67; Min: 20.14 / Avg: 21.06 / Max: 21.92)
EPYC 7F52: 21.89 (SE +/- 0.04, N = 15; MIN: 21.44 / MAX: 101.39; Min: 21.66 / Avg: 21.89 / Max: 22.23)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: yolov4-tiny (ms, Fewer Is Better)
Linux 5.10.3: 25.84 (SE +/- 0.21, N = 12; MIN: 24.84 / MAX: 30.66; Min: 25.2 / Avg: 25.84 / Max: 27.61)
EPYC 7F52: 25.94 (SE +/- 0.13, N = 15; MIN: 25.13 / MAX: 86.32; Min: 25.54 / Avg: 25.94 / Max: 26.85)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: resnet50 (ms, Fewer Is Better)
Linux 5.10.3: 20.94 (SE +/- 0.04, N = 12; MIN: 20.35 / MAX: 23.55; Min: 20.67 / Avg: 20.94 / Max: 21.05)
EPYC 7F52: 21.34 (SE +/- 0.05, N = 15; MIN: 20.69 / MAX: 102.24; Min: 21.09 / Avg: 21.34 / Max: 21.87)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: alexnet (ms, Fewer Is Better)
Linux 5.10.3: 7.01 (SE +/- 0.09, N = 12; MIN: 6.57 / MAX: 10.41; Min: 6.63 / Avg: 7.01 / Max: 7.33)
EPYC 7F52: 7.03 (SE +/- 0.08, N = 15; MIN: 6.6 / MAX: 43.31; Min: 6.64 / Avg: 7.03 / Max: 7.44)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: resnet18 (ms, Fewer Is Better)
EPYC 7F52: 10.69 (SE +/- 0.03, N = 15; MIN: 10.34 / MAX: 13.84; Min: 10.56 / Avg: 10.69 / Max: 10.92)
Linux 5.10.3: 10.71 (SE +/- 0.04, N = 12; MIN: 10.34 / MAX: 64.22; Min: 10.58 / Avg: 10.71 / Max: 10.96)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: vgg16 (ms, Fewer Is Better)
Linux 5.10.3: 30.02 (SE +/- 0.04, N = 12; MIN: 29.27 / MAX: 43.79; Min: 29.63 / Avg: 30.02 / Max: 30.13)
EPYC 7F52: 30.17 (SE +/- 0.03, N = 15; MIN: 29.55 / MAX: 90.42; Min: 29.91 / Avg: 30.17 / Max: 30.49)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: googlenet (ms, Fewer Is Better)
EPYC 7F52: 17.65 (SE +/- 0.06, N = 15; MIN: 17.22 / MAX: 117.52; Min: 17.46 / Avg: 17.65 / Max: 18.17)
Linux 5.10.3: 17.70 (SE +/- 0.14, N = 12; MIN: 17.12 / MAX: 260.94; Min: 17.37 / Avg: 17.7 / Max: 19.1)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: blazeface (ms, Fewer Is Better)
Linux 5.10.3: 3.67 (SE +/- 0.02, N = 12; MIN: 3.53 / MAX: 4.35; Min: 3.61 / Avg: 3.67 / Max: 3.79)
EPYC 7F52: 3.69 (SE +/- 0.02, N = 15; MIN: 3.52 / MAX: 75.15; Min: 3.6 / Avg: 3.69 / Max: 3.97)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: efficientnet-b0 (ms, Fewer Is Better)
EPYC 7F52: 11.06 (SE +/- 0.03, N = 15; MIN: 10.67 / MAX: 13.4; Min: 10.91 / Avg: 11.06 / Max: 11.24)
Linux 5.10.3: 11.14 (SE +/- 0.03, N = 12; MIN: 10.78 / MAX: 14.68; Min: 11.02 / Avg: 11.14 / Max: 11.32)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: mnasnet (ms, Fewer Is Better)
EPYC 7F52: 7.60 (SE +/- 0.02, N = 15; MIN: 6.99 / MAX: 10.58; Min: 7.44 / Avg: 7.6 / Max: 7.76)
Linux 5.10.3: 7.60 (SE +/- 0.02, N = 12; MIN: 7.34 / MAX: 8.93; Min: 7.47 / Avg: 7.6 / Max: 7.79)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: shufflenet-v2 (ms, Fewer Is Better)
Linux 5.10.3: 8.97 (SE +/- 0.02, N = 12; MIN: 8.55 / MAX: 22.64; Min: 8.88 / Avg: 8.97 / Max: 9.12)
EPYC 7F52: 8.98 (SE +/- 0.02, N = 15; MIN: 8.73 / MAX: 14.04; Min: 8.85 / Avg: 8.98 / Max: 9.06)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better)
EPYC 7F52: 7.68 (SE +/- 0.02, N = 15; MIN: 7.21 / MAX: 12.54; Min: 7.54 / Avg: 7.68 / Max: 7.78)
Linux 5.10.3: 7.73 (SE +/- 0.02, N = 12; MIN: 7.28 / MAX: 11.83; Min: 7.66 / Avg: 7.73 / Max: 7.88)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better)
Linux 5.10.3: 8.44 (SE +/- 0.07, N = 12; MIN: 6.92 / MAX: 12.58; Min: 7.86 / Avg: 8.44 / Max: 8.72)
EPYC 7F52: 8.49 (SE +/- 0.04, N = 15; MIN: 7.06 / MAX: 72.31; Min: 8.31 / Avg: 8.49 / Max: 8.94)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: mobilenet (ms, Fewer Is Better)
EPYC 7F52: 19.27 (SE +/- 0.15, N = 15; MIN: 17.82 / MAX: 79.15; Min: 18.58 / Avg: 19.27 / Max: 20.47)
Linux 5.10.3: 19.55 (SE +/- 0.27, N = 12; MIN: 17.94 / MAX: 34.33; Min: 18.44 / Avg: 19.55 / Max: 20.7)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

WireGuard + Linux Networking Stack Stress Test

This is a benchmark of the WireGuard secure VPN tunnel and Linux networking stack stress test. The test runs on the local host but does require root permissions to run. It works by creating three namespaces: ns0 has a loopback device, while ns1 and ns2 each have WireGuard devices. Those two WireGuard devices send traffic through the loopback device of ns0. The end result is that the test winds up exercising encryption and decryption at the same time -- a fairly CPU- and scheduler-heavy workload. Learn more via the OpenBenchmarking.org test page.

WireGuard + Linux Networking Stack Stress Test (Seconds, Fewer Is Better)
EPYC 7F52: 293.87 (SE +/- 0.38, N = 3; Min: 293.12 / Avg: 293.87 / Max: 294.34)
Linux 5.10.3: 301.90 (SE +/- 1.23, N = 3; Min: 300.62 / Avg: 301.9 / Max: 304.36)

Darmstadt Automotive Parallel Heterogeneous Suite

DAPHNE is the Darmstadt Automotive Parallel HeterogeNEous Benchmark Suite, with OpenCL / CUDA / OpenMP test cases for evaluating programming models in the context of autonomous vehicle driving capabilities. Learn more via the OpenBenchmarking.org test page.

Darmstadt Automotive Parallel Heterogeneous Suite - Backend: OpenMP - Kernel: Points2Image (Test Cases Per Minute, More Is Better)
EPYC 7F52: 22093.91 (SE +/- 156.61, N = 15; Min: 21280.29 / Avg: 22093.91 / Max: 23658.56)
Linux 5.10.3: 21297.66 (SE +/- 123.09, N = 3; Min: 21056.25 / Avg: 21297.66 / Max: 21460.15)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.90 - Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds, Fewer Is Better)
EPYC 7F52: 266.54 (SE +/- 1.48, N = 3; Min: 264.9 / Avg: 266.54 / Max: 269.5)
Linux 5.10.3: 266.70 (SE +/- 0.33, N = 3; Min: 266.1 / Avg: 266.7 / Max: 267.25)

Numpy Benchmark

This is a test to obtain the general Numpy performance. Learn more via the OpenBenchmarking.org test page.
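A NumPy benchmark of this kind times common array kernels. A minimal sketch of the idea, timing a repeated matrix multiplication; the problem size and iteration count here are arbitrary choices for illustration, not the test profile's actual workload:

```python
import time

import numpy as np

# Time a repeated matrix multiplication, the kind of kernel a NumPy
# benchmark exercises. Size and repeat count are arbitrary for this sketch.
n, iters = 256, 10
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
for _ in range(iters):
    c = a @ b
elapsed = time.perf_counter() - start

# A matmul of two n x n matrices costs roughly 2 * n^3 floating-point ops.
gflops = iters * 2 * n**3 / elapsed / 1e9
print(f"{n}x{n} matmul x{iters}: {elapsed:.4f}s (~{gflops:.2f} GFLOPS)")
```

The actual Numpy Benchmark test aggregates many such kernels into the single Score reported below.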

Numpy Benchmark (Score, More Is Better)
Linux 5.10.3: 368.45 (SE +/- 0.27, N = 3; Min: 368.11 / Avg: 368.45 / Max: 368.99)
EPYC 7F52: 367.10 (SE +/- 1.66, N = 3; Min: 363.81 / Avg: 367.1 / Max: 369.13)

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.90 - Blend File: Classroom - Compute: CPU-Only (Seconds, Fewer Is Better)
Linux 5.10.3: 239.22 (SE +/- 0.12, N = 3; Min: 238.98 / Avg: 239.22 / Max: 239.34)
EPYC 7F52: 239.80 (SE +/- 0.22, N = 3; Min: 239.43 / Avg: 239.8 / Max: 240.2)

Hierarchical INTegration

This test runs the U.S. Department of Energy's Ames Laboratory Hierarchical INTegration (HINT) benchmark. Learn more via the OpenBenchmarking.org test page.

Hierarchical INTegration 1.0 - Test: FLOAT (QUIPs, More Is Better)
EPYC 7F52: 347415262.13 (SE +/- 82650.29, N = 3; Min: 347294652.45 / Avg: 347415262.13 / Max: 347573460.73)
Linux 5.10.3: 347380647.36 (SE +/- 37054.68, N = 3; Min: 347320726.74 / Avg: 347380647.36 / Max: 347448373.9)
1. (CC) gcc options: -O3 -march=native -lm

PlaidML

This test profile uses the PlaidML deep learning framework developed by Intel to offer up various benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML - FP16: No - Mode: Inference - Network: Inception V3 - Device: CPU (FPS, More Is Better)
EPYC 7F52: 10.39 (SE +/- 0.00, N = 3; Min: 10.38 / Avg: 10.39 / Max: 10.39)
Linux 5.10.3: 10.17 (SE +/- 0.04, N = 3; Min: 10.1 / Avg: 10.17 / Max: 10.22)

ECP-CANDLE

The CANDLE benchmark codes implement deep learning architectures relevant to problems in cancer. These architectures address problems at different biological scales, specifically problems at the molecular, cellular and population scales. Learn more via the OpenBenchmarking.org test page.

ECP-CANDLE 0.3 - Benchmark: P3B1 (Seconds, Fewer Is Better)
EPYC 7F52: 648.70
Linux 5.10.3: 662.32

Open Porous Media

This is a test of Open Porous Media, a set of open-source tools for simulating the flow and transport of fluids in porous media. This test profile depends upon MPI/Flow already being installed on the system. Install instructions are available at https://opm-project.org/?page_id=36. Learn more via the OpenBenchmarking.org test page.

Open Porous Media - OPM Benchmark: Flow MPI Norne - Threads: 8 (Seconds, Fewer Is Better)
Linux 5.10.3: 208.91 (SE +/- 0.17, N = 3; Min: 208.62 / Avg: 208.91 / Max: 209.2)
EPYC 7F52: 217.41 (SE +/- 0.40, N = 3; Min: 216.79 / Avg: 217.41 / Max: 218.17)
1. flow 2020.04

Open Porous Media - OPM Benchmark: Flow MPI Norne - Threads: 2 (Seconds, Fewer Is Better)
EPYC 7F52: 212.22 (SE +/- 0.16, N = 3; Min: 211.93 / Avg: 212.22 / Max: 212.48)
Linux 5.10.3: 212.31 (SE +/- 0.34, N = 3; Min: 211.66 / Avg: 212.31 / Max: 212.77)
1. flow 2020.04

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: channel2 (Seconds, Fewer Is Better)
Linux 5.10.3: 42.30
EPYC 7F52: 42.52

Polyhedron Fortran Benchmarks - Benchmark: mp_prop_design (Seconds, Fewer Is Better)
EPYC 7F52: 59.38
Linux 5.10.3: 59.39

Monte Carlo Simulations of Ionised Nebulae

MOCASSIN (Monte Carlo Simulations of Ionised Nebulae) is a fully 3D or 2D photoionisation and dust radiative transfer code which employs a Monte Carlo approach to the transfer of radiation through media of arbitrary geometry and density distribution. Learn more via the OpenBenchmarking.org test page.

Monte Carlo Simulations of Ionised Nebulae 2019-03-24 - Input: Dust 2D tau100.0 (Seconds, Fewer Is Better)
EPYC 7F52: 192
Linux 5.10.3: 192
1. (F9X) gfortran options: -cpp -Jsource/ -ffree-line-length-0 -lm -std=legacy -O3 -O2 -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

Caffe

This is a benchmark of the Caffe deep learning framework that currently supports the AlexNet and GoogleNet models, with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

Caffe 2020-02-13 - Model: GoogleNet - Acceleration: CPU - Iterations: 100 (Milli-Seconds, Fewer Is Better)
Linux 5.10.3: 181008 (SE +/- 172.36, N = 3; Min: 180707 / Avg: 181008 / Max: 181304)
EPYC 7F52: 181652 (SE +/- 222.17, N = 3; Min: 181404 / Avg: 181651.67 / Max: 182095)
1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: fatigue2 (Seconds, Fewer Is Better)
Linux 5.10.3: 52.21
EPYC 7F52: 52.31

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8 - Encoder Mode: Enc Mode 0 - Input: 1080p (Frames Per Second, More Is Better)
EPYC 7F52: 0.117 (SE +/- 0.000, N = 3; Min: 0.12 / Avg: 0.12 / Max: 0.12)
Linux 5.10.3: 0.116 (SE +/- 0.000, N = 3; Min: 0.12 / Avg: 0.12 / Max: 0.12)
1. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

Open Porous Media

This is a test of Open Porous Media, a set of open-source tools for simulating the flow and transport of fluids in porous media. This test profile depends upon MPI/Flow already being installed on the system. Install instructions are available at https://opm-project.org/?page_id=36. Learn more via the OpenBenchmarking.org test page.

Open Porous Media - OPM Benchmark: Flow MPI Norne - Threads: 4 (Seconds, Fewer Is Better)
Linux 5.10.3: 166.54 (SE +/- 0.30, N = 3; Min: 166.22 / Avg: 166.54 / Max: 167.14)
EPYC 7F52: 168.76 (SE +/- 0.45, N = 3; Min: 167.94 / Avg: 168.76 / Max: 169.49)
1. flow 2020.04

asmFish

This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.

asmFish 2018-07-23 - 1024 Hash Memory, 26 Depth (Nodes/second, More Is Better)
Linux 5.10.3: 46441653 (SE +/- 434223.37, N = 3; Min: 45963345 / Avg: 46441653.33 / Max: 47308554)
EPYC 7F52: 46240797 (SE +/- 250444.78, N = 3; Min: 45902281 / Avg: 46240796.67 / Max: 46729778)

PlaidML

This test profile uses the PlaidML deep learning framework developed by Intel to offer up various benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML - FP16: No - Mode: Inference - Network: Mobilenet - Device: CPU (FPS, More Is Better)
EPYC 7F52: 14.53 (SE +/- 0.10, N = 3; Min: 14.34 / Avg: 14.53 / Max: 14.64)
Linux 5.10.3: 14.51 (SE +/- 0.09, N = 3; Min: 14.41 / Avg: 14.51 / Max: 14.69)

Caffe

This is a benchmark of the Caffe deep learning framework that currently supports the AlexNet and GoogleNet models, with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

Caffe 2020-02-13 - Model: AlexNet - Acceleration: CPU - Iterations: 200 (Milli-Seconds, Fewer Is Better)
EPYC 7F52: 143622 (SE +/- 359.19, N = 3; Min: 143151 / Avg: 143621.67 / Max: 144327)
Linux 5.10.3: 144039 (SE +/- 381.87, N = 3; Min: 143399 / Avg: 144039.33 / Max: 144720)
1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Tachyon

This is a test of Tachyon, a threaded parallel ray-tracing system, measuring the time to ray-trace a sample scene. Learn more via the OpenBenchmarking.org test page.

Tachyon 0.99b6 - Total Time (Seconds, Fewer Is Better)
EPYC 7F52: 47.24 (SE +/- 0.36, N = 3; Min: 46.59 / Avg: 47.24 / Max: 47.84)
Linux 5.10.3: 48.13 (SE +/- 0.39, N = 15; Min: 46.86 / Avg: 48.13 / Max: 51.52)
1. (CC) gcc options: -m64 -O3 -fomit-frame-pointer -ffast-math -ltachyon -lm -lpthread

GPAW

GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.

GPAW 20.1 - Input: Carbon Nanotube (Seconds, Fewer Is Better)
Linux 5.10.3: 114.13 (SE +/- 0.05, N = 3; Min: 114.03 / Avg: 114.13 / Max: 114.19)
EPYC 7F52: 117.38 (SE +/- 1.45, N = 4; Min: 114.48 / Avg: 117.38 / Max: 120.52)
1. (CC) gcc options: -pthread -shared -fwrapv -O2 -lxc -lblas -lmpi

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: CPU Cache (Bogo Ops/s, More Is Better)
EPYC 7F52: 44.86 (SE +/- 1.52, N = 12; Min: 35.89 / Avg: 44.86 / Max: 52.99)
Linux 5.10.3: 44.52 (SE +/- 1.40, N = 15; Min: 36.39 / Avg: 44.52 / Max: 54.09)
1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

YafaRay

YafaRay is an open-source physically based Monte Carlo ray-tracing engine. Learn more via the OpenBenchmarking.org test page.

YafaRay 3.4.1 - Total Time For Sample Scene (Seconds, Fewer Is Better)
EPYC 7F52: 130.10 (SE +/- 0.76, N = 3; Min: 128.65 / Avg: 130.1 / Max: 131.24)
Linux 5.10.3: 130.91 (SE +/- 0.50, N = 3; Min: 129.91 / Avg: 130.91 / Max: 131.41)
1. (CXX) g++ options: -std=c++11 -O3 -ffast-math -rdynamic -ldl -lImath -lIlmImf -lIex -lHalf -lz -lIlmThread -lxml2 -lfreetype -lpthread

Timed HMMer Search

This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of the Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.

Timed HMMer Search 3.3.1 - Pfam Database Search (Seconds, Fewer Is Better)
Linux 5.10.3: 127.23 (SE +/- 0.30, N = 3; Min: 126.63 / Avg: 127.23 / Max: 127.54)
EPYC 7F52: 131.05 (SE +/- 0.02, N = 3; Min: 131 / Avg: 131.05 / Max: 131.08)
1. (CC) gcc options: -O3 -pthread -lhmmer -leasel -lm

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.
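The average-latency and TPS results for this profile are two views of the same run: with a fixed number of concurrent clients, throughput is roughly the client count divided by average latency. A quick consistency check against the EPYC 7F52 numbers from this run (112.58 ms average latency, 2227 TPS):

```python
# Consistency check: with 250 concurrent clients, TPS ~= clients / avg latency.
# The latency figure is the EPYC 7F52 result reported for this run.
clients = 250
avg_latency_s = 112.58 / 1000.0  # 112.58 ms

estimated_tps = clients / avg_latency_s
print(f"estimated TPS: {estimated_tps:.0f}")  # close to the measured 2227
```

The small gap between the estimate and the measured value reflects per-transaction overhead outside the averaged latency window.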

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 250 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
EPYC 7F52: 112.58 (SE +/- 1.44, N = 15; Min: 106 / Avg: 112.58 / Max: 123.44)
Linux 5.10.3: 113.78 (SE +/- 0.89, N = 15; Min: 107.67 / Avg: 113.78 / Max: 119.69)
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 250 - Mode: Read Write (TPS, More Is Better)
EPYC 7F52: 2227 (SE +/- 27.65, N = 15; Min: 2026.69 / Avg: 2227.06 / Max: 2360.07)
Linux 5.10.3: 2201 (SE +/- 17.17, N = 15; Min: 2090.35 / Avg: 2200.91 / Max: 2323.68)
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

BYTE Unix Benchmark

This is a test of the BYTE Unix Benchmark. Learn more via the OpenBenchmarking.org test page.

BYTE Unix Benchmark 3.6 - Computational Test: Dhrystone 2 (LPS, More Is Better)
Linux 5.10.3: 41240132.3 (SE +/- 142195.68, N = 3; Min: 40957050.5 / Avg: 41240132.33 / Max: 41405281.3)
EPYC 7F52: 41016145.2 (SE +/- 336403.00, N = 3; Min: 40568980.6 / Avg: 41016145.2 / Max: 41675082.3)

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.7.0 - Video Input: Chimera 1080p 10-bit (FPS, More Is Better)
Linux 5.10.3: 111.44 (SE +/- 0.07, N = 3; MIN: 74.8 / MAX: 220.43; Min: 111.29 / Avg: 111.44 / Max: 111.52)
EPYC 7F52: 110.64 (SE +/- 0.05, N = 3; MIN: 74.39 / MAX: 217.07; Min: 110.56 / Avg: 110.64 / Max: 110.74)
1. (CC) gcc options: -pthread

PlaidML

This test profile uses the PlaidML deep learning framework developed by Intel to offer up various benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML - FP16: No - Mode: Inference - Network: VGG19 - Device: CPU (FPS, More Is Better)
Linux 5.10.3: 20.69 (SE +/- 0.10, N = 3; Min: 20.48 / Avg: 20.69 / Max: 20.8)
EPYC 7F52: 20.27 (SE +/- 0.05, N = 3; Min: 20.19 / Avg: 20.27 / Max: 20.35)

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0 - Preset: Exhaustive (Seconds, Fewer Is Better)
Linux 5.10.3: 108.81 (SE +/- 0.12, N = 3; Min: 108.56 / Avg: 108.81 / Max: 108.94)
EPYC 7F52: 108.83 (SE +/- 0.13, N = 3; Min: 108.57 / Avg: 108.83 / Max: 108.98)
1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.90 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, Fewer Is Better)
EPYC 7F52: 108.00 (SE +/- 0.29, N = 3; Min: 107.64 / Avg: 108 / Max: 108.57)
Linux 5.10.3: 108.33 (SE +/- 0.08, N = 3; Min: 108.24 / Avg: 108.33 / Max: 108.48)

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 3 - Decompression Speed (MB/s, More Is Better)
Linux 5.10.3: 10815.3 (SE +/- 25.66, N = 3; Min: 10767.2 / Avg: 10815.33 / Max: 10854.8)
EPYC 7F52: 10768.2 (SE +/- 18.07, N = 8; Min: 10730 / Avg: 10768.18 / Max: 10859.8)
1. (CC) gcc options: -O3

LZ4 Compression 1.9.3 - Compression Level: 3 - Compression Speed (MB/s, More Is Better)
EPYC 7F52: 53.49 (SE +/- 0.46, N = 8; Min: 52.45 / Avg: 53.49 / Max: 55.11)
Linux 5.10.3: 53.48 (SE +/- 0.44, N = 3; Min: 52.99 / Avg: 53.48 / Max: 54.35)
1. (CC) gcc options: -O3

PlaidML

This test profile uses the PlaidML deep learning framework developed by Intel to offer up various benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML - FP16: No - Mode: Inference - Network: VGG16 - Device: CPU (FPS, More Is Better)
Linux 5.10.3: 24.89 (SE +/- 0.24, N = 3; Min: 24.46 / Avg: 24.89 / Max: 25.29)
EPYC 7F52: 24.35 (SE +/- 0.01, N = 3; Min: 24.33 / Avg: 24.35 / Max: 24.37)

Timed GDB GNU Debugger Compilation

This test times how long it takes to build the GNU Debugger (GDB) in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed GDB GNU Debugger Compilation 9.1 - Time To Compile (Seconds, Fewer Is Better)
EPYC 7F52: 94.18 (SE +/- 0.04, N = 3; Min: 94.1 / Avg: 94.18 / Max: 94.24)
Linux 5.10.3: 96.98 (SE +/- 0.11, N = 3; Min: 96.78 / Avg: 96.98 / Max: 97.16)

Node.js V8 Web Tooling Benchmark

Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, such as Babel, TypeScript, and Babylon. This test profile can test the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.

Node.js V8 Web Tooling Benchmark (runs/s, more is better)
  Linux 5.10.3: 9.35 (SE +/- 0.08, N = 3; Min: 9.2 / Avg: 9.35 / Max: 9.49)
  EPYC 7F52: 9.27 (SE +/- 0.05, N = 3; Min: 9.18 / Avg: 9.27 / Max: 9.37)
  1. Nodejs v10.19.0

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
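The System V message-passing result below counts send/receive round trips per second ("bogo ops"). A rough stdlib sketch of the same measurement idea; note this analogue uses an ordinary OS pipe, not the System V message queues (msgsnd/msgrcv) that the actual stressor exercises:

```python
import os
import time

# Count send/receive round trips per second over an OS pipe. This is
# only an analogue: stress-ng's stressor uses System V message queues
# (msgsnd/msgrcv), not pipes, but the measurement idea is the same.
r, w = os.pipe()
msg = b"x" * 64
ops = 0
deadline = time.monotonic() + 0.2   # short run for illustration
while time.monotonic() < deadline:
    os.write(w, msg)
    os.read(r, len(msg))
    ops += 1
os.close(r)
os.close(w)
print(f"{ops / 0.2:.0f} ops/sec")
```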

Stress-NG 0.11.07 - Test: System V Message Passing (Bogo Ops/s, more is better)
  EPYC 7F52: 10610267.69 (SE +/- 128008.77, N = 15; Min: 9646937.38 / Avg: 10610267.69 / Max: 11308995.65)
  Linux 5.10.3: 8395010.73 (SE +/- 112749.98, N = 3; Min: 8280189.87 / Avg: 8395010.73 / Max: 8620497.93)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.

InfluxDB 1.8.2 - Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, more is better)
  EPYC 7F52: 1211752.0 (SE +/- 1586.66, N = 3; Min: 1209943.2 / Avg: 1211751.97 / Max: 1214914.4)
  Linux 5.10.3: 1198820.3 (SE +/- 1744.82, N = 3; Min: 1196305.6 / Avg: 1198820.33 / Max: 1202173)

Mlpack Benchmark

Mlpack benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

Mlpack Benchmark - Benchmark: scikit_qda (Seconds, fewer is better)
  Linux 5.10.3: 29.96 (SE +/- 0.12, N = 3; Min: 29.72 / Avg: 29.96 / Max: 30.09)
  EPYC 7F52: 30.04 (SE +/- 0.13, N = 3; Min: 29.9 / Avg: 30.04 / Max: 30.29)

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9/WebM format using a sample 1080p video. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.8.2 - Speed: Speed 0 (Frames Per Second, more is better)
  Linux 5.10.3: 7.20 (SE +/- 0.00, N = 3; Min: 7.2 / Avg: 7.2 / Max: 7.2)
  EPYC 7F52: 7.16 (SE +/- 0.01, N = 3; Min: 7.14 / Avg: 7.16 / Max: 7.18)
  1. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=c++11

Timed Eigen Compilation

This test times how long it takes to build all Eigen examples. The Eigen examples are compiled serially. Eigen is a C++ template library for linear algebra. Learn more via the OpenBenchmarking.org test page.

Timed Eigen Compilation 3.3.9 - Time To Compile (Seconds, fewer is better)
  EPYC 7F52: 83.44 (SE +/- 0.03, N = 3; Min: 83.41 / Avg: 83.44 / Max: 83.49)
  Linux 5.10.3: 84.48 (SE +/- 0.00, N = 3; Min: 84.47 / Avg: 84.48 / Max: 84.48)

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.90 - Blend File: BMW27 - Compute: CPU-Only (Seconds, fewer is better)
  Linux 5.10.3: 82.89 (SE +/- 0.05, N = 3; Min: 82.79 / Avg: 82.89 / Max: 82.94)
  EPYC 7F52: 83.52 (SE +/- 0.25, N = 3; Min: 83.17 / Avg: 83.52 / Max: 84.01)

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: raytrace (Milliseconds, fewer is better)
  EPYC 7F52: 475 (SE +/- 0.33, N = 3; Min: 474 / Avg: 474.67 / Max: 475)
  Linux 5.10.3: 476 (SE +/- 0.58, N = 3; Min: 475 / Avg: 476 / Max: 477)

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: induct2 (Seconds, fewer is better)
  EPYC 7F52: 23.79
  Linux 5.10.3: 23.81

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing on the CPU with the water_GMX50 data. Learn more via the OpenBenchmarking.org test page.

GROMACS 2020.3 - Water Benchmark (Ns Per Day, more is better)
  Linux 5.10.3: 2.374 (SE +/- 0.002, N = 3; Min: 2.37 / Avg: 2.37 / Max: 2.38)
  EPYC 7F52: 2.365 (SE +/- 0.003, N = 3; Min: 2.36 / Avg: 2.36 / Max: 2.37)
  1. (CXX) g++ options: -O3 -pthread -lrt -lpthread -lm

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  EPYC 7F52: 1992.60 (SE +/- 6.20, N = 3; Min: 1983.76 / Avg: 1992.6 / Max: 2004.54; MIN: 1974.03)
  Linux 5.10.3: 2211.73 (SE +/- 4.81, N = 3; Min: 2202.89 / Avg: 2211.73 / Max: 2219.44; MIN: 2193.8)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, fewer is better)
  EPYC 7F52: 2006.80 (SE +/- 1.98, N = 3; Min: 2004.52 / Avg: 2006.8 / Max: 2010.75; MIN: 1996.47)
  Linux 5.10.3: 2220.50 (SE +/- 10.97, N = 3; Min: 2201.74 / Avg: 2220.5 / Max: 2239.72; MIN: 2191.85)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  EPYC 7F52: 1994.62 (SE +/- 7.07, N = 3; Min: 1980.48 / Avg: 1994.62 / Max: 2001.72; MIN: 1976.29)
  Linux 5.10.3: 2192.82 (SE +/- 9.42, N = 3; Min: 2180.99 / Avg: 2192.82 / Max: 2211.44; MIN: 2169.84)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

Numenta Anomaly Benchmark 1.1 - Detector: Earthgecko Skyline (Seconds, fewer is better)
  Linux 5.10.3: 76.75 (SE +/- 0.55, N = 3; Min: 75.7 / Avg: 76.75 / Max: 77.57)
  EPYC 7F52: 76.83 (SE +/- 0.72, N = 3; Min: 75.39 / Avg: 76.83 / Max: 77.68)

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: Inception V4 (Microseconds, fewer is better)
  EPYC 7F52: 1494590 (SE +/- 1084.27, N = 3; Min: 1492620 / Avg: 1494590 / Max: 1496360)
  Linux 5.10.3: 1499017 (SE +/- 1266.68, N = 3; Min: 1496660 / Avg: 1499016.67 / Max: 1501000)

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.

InfluxDB 1.8.2 - Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, more is better)
  EPYC 7F52: 1425736.1 (SE +/- 1494.65, N = 3; Min: 1423811.1 / Avg: 1425736.13 / Max: 1428679.2)
  Linux 5.10.3: 1419536.4 (SE +/- 2047.76, N = 3; Min: 1416392.8 / Avg: 1419536.43 / Max: 1423381.6)

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 50 - Mode: Read Only - Average Latency (ms, fewer is better)
  Linux 5.10.3: 0.100 (SE +/- 0.001, N = 15; Min: 0.09 / Avg: 0.1 / Max: 0.1)
  EPYC 7F52: 0.102 (SE +/- 0.001, N = 3; Min: 0.1 / Avg: 0.1 / Max: 0.1)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 50 - Mode: Read Only (TPS, more is better)
  Linux 5.10.3: 501047 (SE +/- 7128.53, N = 15; Min: 481274.39 / Avg: 501046.8 / Max: 586825.76)
  EPYC 7F52: 491827 (SE +/- 5689.06, N = 3; Min: 481284.17 / Avg: 491826.61 / Max: 500804.42)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
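The latency and throughput results above are mutually consistent: for a closed-loop benchmark like pgbench, Little's law says the number of concurrent clients is approximately throughput times average latency. A quick check with the Linux 5.10.3 numbers:

```python
# Little's law sanity check: for a closed-loop benchmark,
# concurrent clients ~= throughput (TPS) * average latency (s).
tps = 501047                    # Linux 5.10.3, read-only
avg_latency_s = 0.100 / 1000    # 0.100 ms reported above
clients = tps * avg_latency_s
print(round(clients, 1))        # ~50.1, matching the 50-client run
```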

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code with Cargo-like features. Learn more via the OpenBenchmarking.org test page.

Build2 0.13 - Time To Compile (Seconds, fewer is better)
  EPYC 7F52: 75.26 (SE +/- 0.10, N = 3; Min: 75.08 / Avg: 75.26 / Max: 75.42)
  Linux 5.10.3: 75.80 (SE +/- 0.35, N = 3; Min: 75.14 / Avg: 75.8 / Max: 76.35)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better)
  EPYC 7F52: 1068.48 (SE +/- 1.05, N = 3; Min: 1067.38 / Avg: 1068.48 / Max: 1070.58; MIN: 1062.66)
  Linux 5.10.3: 1169.57 (SE +/- 1.69, N = 3; Min: 1166.19 / Avg: 1169.57 / Max: 1171.33; MIN: 1161.34)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  EPYC 7F52: 1069.39 (SE +/- 1.57, N = 3; Min: 1066.26 / Avg: 1069.39 / Max: 1071.25; MIN: 1062.1)
  Linux 5.10.3: 1162.78 (SE +/- 9.31, N = 3; Min: 1144.15 / Avg: 1162.78 / Max: 1172.1; MIN: 1139.6)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  EPYC 7F52: 1057.42 (SE +/- 2.50, N = 3; Min: 1052.51 / Avg: 1057.42 / Max: 1060.67; MIN: 1047.72)
  Linux 5.10.3: 1148.93 (SE +/- 11.57, N = 3; Min: 1136.89 / Avg: 1148.93 / Max: 1172.06; MIN: 1133.53)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 0.7.1 - Throughput Test: Kostya (GB/s, more is better)
  Linux 5.10.3: 0.53 (SE +/- 0.00, N = 3; Min: 0.53 / Avg: 0.53 / Max: 0.53)
  EPYC 7F52: 0.52 (SE +/- 0.00, N = 3; Min: 0.52 / Avg: 0.52 / Max: 0.53)
  1. (CXX) g++ options: -O3 -pthread
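A GB/s parsing figure is simply bytes parsed divided by wall time. A stdlib sketch of that computation, using Python's json module on a synthetic numeric-heavy document rather than simdjson or the actual Kostya input:

```python
import json
import time

# Synthetic numeric-heavy document standing in for the Kostya input.
doc = json.dumps([{"x": 1.25, "y": 2.5, "z": 3.75}] * 10000).encode()

start = time.perf_counter()
reps = 10
for _ in range(reps):
    json.loads(doc)
elapsed = time.perf_counter() - start

gb_per_s = len(doc) * reps / elapsed / 1e9
print(f"{gb_per_s:.3f} GB/s")
```

The stdlib parser will be far slower than simdjson; the point is only how the throughput metric is derived.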

Caffe

This is a benchmark of the Caffe deep learning framework; it currently supports the AlexNet and GoogLeNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

Caffe 2020-02-13 - Model: AlexNet - Acceleration: CPU - Iterations: 100 (Milli-Seconds, fewer is better)
  EPYC 7F52: 71667 (SE +/- 136.23, N = 3; Min: 71479 / Avg: 71667.33 / Max: 71932)
  Linux 5.10.3: 72605 (SE +/- 793.90, N = 3; Min: 71557 / Avg: 72605 / Max: 74162)
  1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: python_startup (Milliseconds, fewer is better)
  EPYC 7F52: 7.76 (SE +/- 0.01, N = 3; Min: 7.75 / Avg: 7.76 / Max: 7.77)
  Linux 5.10.3: 8.69 (SE +/- 0.04, N = 3; Min: 8.62 / Avg: 8.69 / Max: 8.74)
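The python_startup benchmark times how long a bare interpreter takes to launch and exit. A minimal re-creation of that measurement (a sketch, not pyperformance's exact harness):

```python
import subprocess
import sys
import time

# Time how long a bare interpreter takes to launch, run "pass",
# and exit -- the quantity python_startup is measuring.
start = time.perf_counter()
subprocess.run([sys.executable, "-c", "pass"], check=True)
ms = (time.perf_counter() - start) * 1000
print(f"{ms:.2f} ms")
```

A real harness would repeat this many times and report the average, as the result above does.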

KeyDB

A benchmark of KeyDB as a multi-threaded fork of the Redis server. The KeyDB benchmark is conducted using memtier-benchmark. Learn more via the OpenBenchmarking.org test page.

KeyDB 6.0.16 (Ops/sec, more is better)
  EPYC 7F52: 432105.73 (SE +/- 3437.31, N = 3; Min: 427512.92 / Avg: 432105.73 / Max: 438832.13)
  Linux 5.10.3: 424609.60 (SE +/- 1060.81, N = 3; Min: 422868.39 / Avg: 424609.6 / Max: 426530)
  1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Mlpack Benchmark

Mlpack benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

Mlpack Benchmark - Benchmark: scikit_linearridgeregression (Seconds, fewer is better)
  Linux 5.10.3: 1.72 (SE +/- 0.01, N = 3; Min: 1.7 / Avg: 1.72 / Max: 1.73)
  EPYC 7F52: 1.73 (SE +/- 0.02, N = 4; Min: 1.69 / Avg: 1.73 / Max: 1.78)

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: Inception ResNet V2 (Microseconds, fewer is better)
  EPYC 7F52: 1343920 (SE +/- 1087.40, N = 3; Min: 1341750 / Avg: 1343920 / Max: 1345130)
  Linux 5.10.3: 1346243 (SE +/- 1103.67, N = 3; Min: 1344040 / Avg: 1346243.33 / Max: 1347460)

OpenVINO

This is a test of Intel OpenVINO, a neural network toolkit, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1 - Model: Person Detection 0106 FP16 - Device: CPU (ms, fewer is better)
  EPYC 7F52: 2582.88 (SE +/- 1.99, N = 3; Min: 2580.21 / Avg: 2582.88 / Max: 2586.78)
  Linux 5.10.3: 2590.33 (SE +/- 3.43, N = 3; Min: 2584.9 / Avg: 2590.33 / Max: 2596.67)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

OpenVINO 2021.1 - Model: Person Detection 0106 FP16 - Device: CPU (FPS, more is better)
  Linux 5.10.3: 3.07 (SE +/- 0.01, N = 3; Min: 3.06 / Avg: 3.07 / Max: 3.08)
  EPYC 7F52: 3.06 (SE +/- 0.00, N = 3; Min: 3.06 / Avg: 3.06 / Max: 3.07)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

OpenVINO 2021.1 - Model: Face Detection 0106 FP16 - Device: CPU (ms, fewer is better)
  EPYC 7F52: 1988.75 (SE +/- 1.96, N = 3; Min: 1986.54 / Avg: 1988.75 / Max: 1992.66)
  Linux 5.10.3: 1988.90 (SE +/- 2.02, N = 3; Min: 1984.89 / Avg: 1988.9 / Max: 1991.26)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

OpenVINO 2021.1 - Model: Face Detection 0106 FP16 - Device: CPU (FPS, more is better)
  EPYC 7F52: 4.02 (SE +/- 0.00, N = 3; Min: 4.01 / Avg: 4.02 / Max: 4.02)
  Linux 5.10.3: 4.01 (SE +/- 0.01, N = 3; Min: 4 / Avg: 4.01 / Max: 4.02)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

OpenVINO 2021.1 - Model: Face Detection 0106 FP32 - Device: CPU (ms, fewer is better)
  EPYC 7F52: 1986.91 (SE +/- 2.66, N = 3; Min: 1981.75 / Avg: 1986.91 / Max: 1990.59)
  Linux 5.10.3: 1989.91 (SE +/- 2.14, N = 3; Min: 1986.12 / Avg: 1989.91 / Max: 1993.54)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

OpenVINO 2021.1 - Model: Face Detection 0106 FP32 - Device: CPU (FPS, more is better)
  Linux 5.10.3: 4.01 (SE +/- 0.00, N = 3; Min: 4.01 / Avg: 4.01 / Max: 4.01)
  EPYC 7F52: 4.01 (SE +/- 0.01, N = 3; Min: 3.99 / Avg: 4.01 / Max: 4.03)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

OpenVINO 2021.1 - Model: Person Detection 0106 FP32 - Device: CPU (ms, fewer is better)
  EPYC 7F52: 2600.35 (SE +/- 3.48, N = 3; Min: 2593.83 / Avg: 2600.35 / Max: 2605.72)
  Linux 5.10.3: 2605.51 (SE +/- 2.25, N = 3; Min: 2601.58 / Avg: 2605.51 / Max: 2609.36)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

OpenVINO 2021.1 - Model: Person Detection 0106 FP32 - Device: CPU (FPS, more is better)
  EPYC 7F52: 3.04 (SE +/- 0.02, N = 3; Min: 3 / Avg: 3.04 / Max: 3.06)
  Linux 5.10.3: 3.03 (SE +/- 0.01, N = 3; Min: 3.02 / Avg: 3.03 / Max: 3.04)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

Chaos Group V-RAY

This is a test of Chaos Group's V-RAY benchmark. Learn more via the OpenBenchmarking.org test page.

Chaos Group V-RAY 4.10.07 - Mode: CPU (Ksamples, more is better)
  EPYC 7F52: 27334 (SE +/- 255.78, N = 3; Min: 27026 / Avg: 27334.33 / Max: 27842)
  Linux 5.10.3: 27110 (SE +/- 33.00, N = 3; Min: 27077 / Avg: 27110 / Max: 27176)

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.
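speedtest1 exercises a broad mix of statements against a single database. As a toy stand-in for one such workload shape, Python's bundled sqlite3 module can time a bulk insert plus an aggregate (an illustration only, not the speedtest1 workload itself):

```python
import sqlite3
import time

# In-memory database; speedtest1 runs many more statement shapes.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b TEXT)")

start = time.perf_counter()
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 ((i, f"row-{i}") for i in range(100_000)))
total, = conn.execute("SELECT count(*) FROM t").fetchone()
elapsed = time.perf_counter() - start

print(total, f"{elapsed:.3f}s")
```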

SQLite Speedtest 3.30 - Timed Time - Size 1,000 (Seconds, fewer is better)
  EPYC 7F52: 66.72 (SE +/- 0.22, N = 3; Min: 66.35 / Avg: 66.72 / Max: 67.11)
  Linux 5.10.3: 67.14 (SE +/- 0.02, N = 3; Min: 67.1 / Avg: 67.14 / Max: 67.17)
  1. (CC) gcc options: -O2 -ldl -lz -lpthread

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library, which offers high-performance volume computation kernels and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 0.9 - Benchmark: vklBenchmarkVdbVolume (Items / Sec, more is better)
  Linux 5.10.3: 16277364.24 (SE +/- 19338.94, N = 3; Min: 16243824.43 / Avg: 16277364.24 / Max: 16310816.3; MIN: 790262 / MAX: 65640384)
  EPYC 7F52: 15263784.67 (SE +/- 100190.60, N = 3; Min: 15133963.18 / Avg: 15263784.67 / Max: 15460885.64; MIN: 798247 / MAX: 56683584)

Mlpack Benchmark

Mlpack benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

Mlpack Benchmark - Benchmark: scikit_ica (Seconds, fewer is better)
  EPYC 7F52: 52.87 (SE +/- 0.56, N = 4; Min: 51.61 / Avg: 52.87 / Max: 54.33)
  Linux 5.10.3: 53.50 (SE +/- 0.36, N = 3; Min: 52.85 / Avg: 53.5 / Max: 54.11)

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: capacita (Seconds, fewer is better)
  Linux 5.10.3: 17.56
  EPYC 7F52: 17.61

Polyhedron Fortran Benchmarks - Benchmark: rnflow (Seconds, fewer is better)
  Linux 5.10.3: 16.59
  EPYC 7F52: 16.60

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14 - ATPase Simulation - 327,506 Atoms (days/ns, fewer is better)
  EPYC 7F52: 1.14226 (SE +/- 0.00082, N = 3; Min: 1.14 / Avg: 1.14 / Max: 1.14)
  Linux 5.10.3: 1.14801 (SE +/- 0.00649, N = 3; Min: 1.14 / Avg: 1.15 / Max: 1.16)
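NAMD's days/ns metric is the number of wall-clock days needed to simulate one nanosecond; the more familiar ns/day figure is simply its reciprocal:

```python
# days/ns is wall-clock days per simulated nanosecond;
# ns/day is simply its reciprocal.
days_per_ns = 1.14226           # EPYC 7F52 result above
ns_per_day = 1 / days_per_ns
print(round(ns_per_day, 3))     # ~0.875 ns/day
```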

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, fewer is better)
  EPYC 7F52: 2.76321 (SE +/- 0.01184, N = 3; Min: 2.75 / Avg: 2.76 / Max: 2.79; MIN: 2.65)
  Linux 5.10.3: 3.25673 (SE +/- 0.04703, N = 15; Min: 3.01 / Avg: 3.26 / Max: 3.5; MIN: 2.89)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

LuxCoreRender

LuxCoreRender is an open-source physically based renderer. This test profile is focused on running LuxCoreRender on the CPU as opposed to the OpenCL version. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.3 - Scene: Rainbow Colors and Prism (M samples/sec, more is better)
  Linux 5.10.3: 3.50 (SE +/- 0.01, N = 3; Min: 3.48 / Avg: 3.5 / Max: 3.52; MIN: 3.43 / MAX: 3.52)
  EPYC 7F52: 3.49 (SE +/- 0.01, N = 3; Min: 3.48 / Avg: 3.49 / Max: 3.5; MIN: 3.42 / MAX: 3.52)

LuxCoreRender 2.3 - Scene: DLSC (M samples/sec, more is better)
  Linux 5.10.3: 3.27 (SE +/- 0.01, N = 3; Min: 3.25 / Avg: 3.27 / Max: 3.29; MIN: 3.17 / MAX: 3.42)
  EPYC 7F52: 3.27 (SE +/- 0.01, N = 3; Min: 3.26 / Avg: 3.27 / Max: 3.28; MIN: 3.12 / MAX: 3.42)

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 - Acceleration: CPU - Scene: Supercar (M samples/s, more is better)
  EPYC 7F52: 7.761 (SE +/- 0.012, N = 3; Min: 7.74 / Avg: 7.76 / Max: 7.78)
  Linux 5.10.3: 7.731 (SE +/- 0.008, N = 3; Min: 7.72 / Avg: 7.73 / Max: 7.74)

IndigoBench 4.4 - Acceleration: CPU - Scene: Bedroom (M samples/s, more is better)
  Linux 5.10.3: 3.577 (SE +/- 0.002, N = 3; Min: 3.57 / Avg: 3.58 / Max: 3.58)
  EPYC 7F52: 3.577 (SE +/- 0.009, N = 3; Min: 3.57 / Avg: 3.58 / Max: 3.59)

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: SqueezeNet (Microseconds, fewer is better)
  EPYC 7F52: 106296 (SE +/- 36.86, N = 3; Min: 106226 / Avg: 106296 / Max: 106351)
  Linux 5.10.3: 106510 (SE +/- 86.53, N = 3; Min: 106367 / Avg: 106510.33 / Max: 106666)

TensorFlow Lite 2020-08-23 - Model: NASNet Mobile (Microseconds, fewer is better)
  Linux 5.10.3: 126619 (SE +/- 388.72, N = 3; Min: 126190 / Avg: 126619 / Max: 127395)
  EPYC 7F52: 127275 (SE +/- 356.27, N = 3; Min: 126904 / Avg: 127274.67 / Max: 127987)

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: 2to3 (Milliseconds, fewer is better)
  Linux 5.10.3: 326 (SE +/- 0.33, N = 3; Min: 326 / Avg: 326.33 / Max: 327)
  EPYC 7F52: 329 (SE +/- 0.33, N = 3; Min: 328 / Avg: 328.67 / Max: 329)

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: Mobilenet Quant (Microseconds, fewer is better)
  EPYC 7F52:    69832.4  [SE +/- 53.57, N = 3]  Min: 69725.3 / Avg: 69832.43 / Max: 69886.1
  Linux 5.10.3: 70037.4  [SE +/- 9.17, N = 3]  Min: 70019.5 / Avg: 70037.43 / Max: 70049.7

TensorFlow Lite 2020-08-23 - Model: Mobilenet Float (Microseconds, fewer is better)
  EPYC 7F52:    68415.0  [SE +/- 46.97, N = 3]  Min: 68327 / Avg: 68414.97 / Max: 68487.5
  Linux 5.10.3: 68630.9  [SE +/- 59.93, N = 3]  Min: 68521.9 / Avg: 68630.87 / Max: 68728.6

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.4 - Time To Compile (Seconds, fewer is better)
  EPYC 7F52:    45.12  [SE +/- 0.50, N = 4]  Min: 44.22 / Avg: 45.12 / Max: 46.53
  Linux 5.10.3: 45.27  [SE +/- 0.51, N = 4]  Min: 44.6 / Avg: 45.27 / Max: 46.78

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1 - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (ms, fewer is better)
  EPYC 7F52:    0.78  [SE +/- 0.00, N = 3]  Min: 0.78 / Avg: 0.78 / Max: 0.78
  Linux 5.10.3: 0.78  [SE +/- 0.00, N = 3]  Min: 0.78 / Avg: 0.78 / Max: 0.79
  Compiler notes: (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

OpenVINO 2021.1 - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (FPS, more is better)
  EPYC 7F52:    9974.70  [SE +/- 6.06, N = 3]  Min: 9962.58 / Avg: 9974.7 / Max: 9981.16
  Linux 5.10.3: 9966.93  [SE +/- 5.41, N = 3]  Min: 9956.6 / Avg: 9966.93 / Max: 9974.86

OpenVINO 2021.1 - Model: Age Gender Recognition Retail 0013 FP32 - Device: CPU (ms, fewer is better)
  Linux 5.10.3: 0.78  [SE +/- 0.00, N = 3]  Min: 0.78 / Avg: 0.78 / Max: 0.78
  EPYC 7F52:    0.79  [SE +/- 0.00, N = 3]  Min: 0.78 / Avg: 0.79 / Max: 0.79

OpenVINO 2021.1 - Model: Age Gender Recognition Retail 0013 FP32 - Device: CPU (FPS, more is better)
  Linux 5.10.3: 9953.07  [SE +/- 5.75, N = 3]  Min: 9945.47 / Avg: 9953.07 / Max: 9964.35
  EPYC 7F52:    9935.55  [SE +/- 16.24, N = 3]  Min: 9913.95 / Avg: 9935.55 / Max: 9967.35

John The Ripper

This is a benchmark of John The Ripper, which is a password cracker. Learn more via the OpenBenchmarking.org test page.

John The Ripper 1.9.0-jumbo-1 - Test: MD5 (Real C/S, more is better)
  Linux 5.10.3: 1728667  [SE +/- 3179.80, N = 3]  Min: 1725000 / Avg: 1728666.67 / Max: 1735000
  EPYC 7F52:    1726333  [SE +/- 2962.73, N = 3]  Min: 1722000 / Avg: 1726333.33 / Max: 1732000
  Compiler notes: (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -pthread -lm -lz -ldl -lcrypt -lbz2
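John The Ripper's "Real C/S" figure is the number of candidate password hashes checked per second. A hedged, single-threaded sketch of the same rate measurement using Python's stdlib hashlib (orders of magnitude slower than John's OpenMP-enabled C code; the candidate list below is made up for illustration):

```python
import hashlib
import time

def md5_rate(candidates):
    """Hash each candidate password once and return checks per second."""
    start = time.perf_counter()
    for pw in candidates:
        hashlib.md5(pw).digest()
    elapsed = time.perf_counter() - start
    return len(candidates) / elapsed

# Hypothetical candidate passwords, just to exercise the loop.
candidates = [b"password%06d" % i for i in range(100_000)]
print(f"{md5_rate(candidates):,.0f} c/s")
```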

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Sharpen (Iterations Per Minute, more is better)
  Linux 5.10.3: 235  [SE +/- 0.33, N = 3]  Min: 235 / Avg: 235.33 / Max: 236
  EPYC 7F52:    235  [SE +/- 0.33, N = 3]  Min: 235 / Avg: 235.33 / Max: 236
  Compiler notes: (CC) gcc options: -fopenmp -O2 -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

GraphicsMagick 1.3.33 - Operation: Noise-Gaussian (Iterations Per Minute, more is better)
  Linux 5.10.3: 428  [SE +/- 0.33, N = 3]  Min: 427 / Avg: 427.67 / Max: 428
  EPYC 7F52:    419  [SE +/- 0.33, N = 3]  Min: 418 / Avg: 418.67 / Max: 419

GraphicsMagick 1.3.33 - Operation: Enhanced (Iterations Per Minute, more is better)
  Linux 5.10.3: 374  [SE +/- 0.33, N = 3]  Min: 374 / Avg: 374.33 / Max: 375
  EPYC 7F52:    374  [SE +/- 0.33, N = 3]  Min: 374 / Avg: 374.33 / Max: 375

GraphicsMagick 1.3.33 - Operation: Rotate (Iterations Per Minute, more is better)
  EPYC 7F52:    619  [SE +/- 5.81, N = 3]  Min: 608 / Avg: 618.67 / Max: 628
  Linux 5.10.3: 614  [SE +/- 4.04, N = 3]  Min: 606 / Avg: 614 / Max: 619

GraphicsMagick 1.3.33 - Operation: Swirl (Iterations Per Minute, more is better)
  EPYC 7F52:    896  [SE +/- 1.20, N = 3]  Min: 894 / Avg: 895.67 / Max: 898
  Linux 5.10.3: 895  [SE +/- 0.33, N = 3]  Min: 895 / Avg: 895.33 / Max: 896

GraphicsMagick 1.3.33 - Operation: Resizing (Iterations Per Minute, more is better)
  EPYC 7F52:    1597  [SE +/- 18.67, N = 3]  Min: 1560 / Avg: 1597.33 / Max: 1616
  Linux 5.10.3: 1591  [SE +/- 9.84, N = 3]  Min: 1573 / Avg: 1590.67 / Max: 1607

GraphicsMagick 1.3.33 - Operation: HWB Color Space (Iterations Per Minute, more is better)
  Linux 5.10.3: 1253  [SE +/- 1.86, N = 3]  Min: 1249 / Avg: 1252.67 / Max: 1255
  EPYC 7F52:    1171  [SE +/- 1.33, N = 3]  Min: 1168 / Avg: 1170.67 / Max: 1172

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.0 - Video Input: Bosphorus 4K - Video Preset: Slow (Frames Per Second, more is better)
  Linux 5.10.3: 10.10  [SE +/- 0.01, N = 3]  Min: 10.09 / Avg: 10.1 / Max: 10.12
  EPYC 7F52:    10.06  [SE +/- 0.01, N = 3]  Min: 10.04 / Avg: 10.06 / Max: 10.08
  Compiler notes: (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 9 - Decompression Speed (MB/s, more is better)
  EPYC 7F52:    10898.4  [SE +/- 40.67, N = 3]  Min: 10854.1 / Avg: 10898.37 / Max: 10979.6
  Linux 5.10.3: 10851.6  [SE +/- 4.82, N = 3]  Min: 10842 / Avg: 10851.63 / Max: 10856.6
  Compiler notes: (CC) gcc options: -O3

LZ4 Compression 1.9.3 - Compression Level: 9 - Compression Speed (MB/s, more is better)
  Linux 5.10.3: 52.81  [SE +/- 0.46, N = 3]  Min: 51.9 / Avg: 52.81 / Max: 53.31
  EPYC 7F52:    51.78  [SE +/- 0.32, N = 3]  Min: 51.46 / Avg: 51.78 / Max: 52.41

Kvazaar


Kvazaar 2.0 - Video Input: Bosphorus 4K - Video Preset: Medium (Frames Per Second, more is better)
  Linux 5.10.3: 10.31  [SE +/- 0.01, N = 3]  Min: 10.3 / Avg: 10.31 / Max: 10.33
  EPYC 7F52:    10.24  [SE +/- 0.01, N = 3]  Min: 10.23 / Avg: 10.24 / Max: 10.26

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library that offers high-performance volume computation kernels and part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 0.9 - Benchmark: vklBenchmarkStructuredVolume (Items / Sec, more is better)
  Linux 5.10.3: 72087200.32  [SE +/- 168051.69, N = 3]  Min: 71796921.92 / Avg: 72087200.32 / Max: 72379063.57  (MIN: 921866 / MAX: 575712792)
  EPYC 7F52:    68692259.88  [SE +/- 788054.78, N = 3]  Min: 67452425.35 / Avg: 68692259.88 / Max: 70154910.05  (MIN: 909007 / MAX: 535870728)

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 0.7.1 - Throughput Test: LargeRandom (GB/s, more is better)
  Linux 5.10.3: 0.39  [SE +/- 0.00, N = 3]  Min: 0.38 / Avg: 0.39 / Max: 0.39
  EPYC 7F52:    0.38  [SE +/- 0.00, N = 3]  Min: 0.38 / Avg: 0.38 / Max: 0.39
  Compiler notes: (CXX) g++ options: -O3 -pthread
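simdjson's GB/s figure is bytes of JSON parsed per second of wall time. A minimal sketch of that throughput calculation using the stdlib json module as a stand-in parser (the synthetic document below is an assumption for illustration, not simdjson's actual test corpus):

```python
import json
import time

def parse_throughput_gbps(doc: bytes, iterations: int = 50) -> float:
    """Repeatedly parse `doc` and return parse throughput in GB/s."""
    start = time.perf_counter()
    for _ in range(iterations):
        json.loads(doc)
    elapsed = time.perf_counter() - start
    return len(doc) * iterations / elapsed / 1e9

# Small synthetic document standing in for simdjson's test files.
doc = json.dumps([{"id": i, "name": f"user{i}"} for i in range(1000)]).encode()
print(f"{parse_throughput_gbps(doc):.3f} GB/s")
```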

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 Alpha - Speed: 5 (Frames Per Second, more is better)
  Linux 5.10.3: 1.094  [SE +/- 0.001, N = 3]  Min: 1.09 / Avg: 1.09 / Max: 1.1
  EPYC 7F52:    1.094  [SE +/- 0.001, N = 3]  Min: 1.09 / Avg: 1.09 / Max: 1.1

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9 - Test: SET (Requests Per Second, more is better)
  EPYC 7F52:    1350619.52  [SE +/- 15975.34, N = 15]  Min: 1241012.38 / Avg: 1350619.52 / Max: 1434949.75
  Linux 5.10.3: 1323358.98  [SE +/- 10427.33, N = 15]  Min: 1279017.88 / Avg: 1323358.98 / Max: 1398735.75
  Compiler notes: (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

rav1e


rav1e 0.4 Alpha - Speed: 1 (Frames Per Second, more is better)
  Linux 5.10.3: 0.370  [SE +/- 0.001, N = 3]  Min: 0.37 / Avg: 0.37 / Max: 0.37
  EPYC 7F52:    0.369  [SE +/- 0.001, N = 3]  Min: 0.37 / Avg: 0.37 / Max: 0.37

simdjson


simdjson 0.7.1 - Throughput Test: PartialTweets (GB/s, more is better)
  Linux 5.10.3: 0.61  [SE +/- 0.00, N = 3]  Min: 0.61 / Avg: 0.61 / Max: 0.61
  EPYC 7F52:    0.61  [SE +/- 0.00, N = 3]  Min: 0.6 / Avg: 0.61 / Max: 0.61

DeepSpeech

Mozilla DeepSpeech is a speech-to-text engine powered by TensorFlow for machine learning and derived from Baidu's Deep Speech research paper. This test profile times the speech-to-text process for a roughly three minute audio recording. Learn more via the OpenBenchmarking.org test page.

DeepSpeech 0.6 - Acceleration: CPU (Seconds, fewer is better)
  Linux 5.10.3: 68.17  [SE +/- 0.08, N = 3]  Min: 68.03 / Avg: 68.17 / Max: 68.3
  EPYC 7F52:    68.29  [SE +/- 0.20, N = 3]  Min: 67.9 / Avg: 68.29 / Max: 68.51

simdjson


simdjson 0.7.1 - Throughput Test: DistinctUserID (GB/s, more is better)
  Linux 5.10.3: 0.62  [SE +/- 0.00, N = 3]  Min: 0.62 / Avg: 0.62 / Max: 0.62
  EPYC 7F52:    0.62  [SE +/- 0.00, N = 3]  Min: 0.62 / Avg: 0.62 / Max: 0.63

Redis


Redis 6.0.9 - Test: GET (Requests Per Second, more is better)
  EPYC 7F52:    1753884.37  [SE +/- 22488.85, N = 15]  Min: 1631634.62 / Avg: 1753884.37 / Max: 1923446.25
  Linux 5.10.3: 1630009.81  [SE +/- 18986.94, N = 15]  Min: 1506265.12 / Avg: 1630009.81 / Max: 1724744.88
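The requests-per-second figure is simply operations completed divided by elapsed wall time. A toy sketch of that calculation using an in-process dict as a stand-in keyspace (a real redis-benchmark run also involves network round-trips and pipelining, which this deliberately ignores):

```python
import time

store = {}  # in-process stand-in for the Redis keyspace

def run_ops(n: int) -> float:
    """Issue n SET then n GET operations and return requests per second."""
    start = time.perf_counter()
    for i in range(n):
        store[f"key:{i}"] = f"value:{i}"          # SET
    for i in range(n):
        assert store[f"key:{i}"] == f"value:{i}"  # GET
    elapsed = time.perf_counter() - start
    return 2 * n / elapsed

print(f"{run_ops(100_000):,.0f} requests/sec")
```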

Hugin

Hugin is an open-source, cross-platform panorama photo stitcher software package. This test profile times how long it takes to run the assistant and panorama photo stitching on a set of images. Learn more via the OpenBenchmarking.org test page.

Hugin - Panorama Photo Assistant + Stitching Time (Seconds, fewer is better)
  Linux 5.10.3: 50.68  [SE +/- 0.25, N = 3]  Min: 50.2 / Avg: 50.68 / Max: 51.03
  EPYC 7F52:    50.70  [SE +/- 0.07, N = 3]  Min: 50.58 / Avg: 50.7 / Max: 50.83

PyPerformance


PyPerformance 1.0.0 - Benchmark: go (Milliseconds, fewer is better)
  Linux 5.10.3: 253
  EPYC 7F52:    254

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: protein (Seconds, fewer is better)
  EPYC 7F52:    13.84
  Linux 5.10.3: 13.84

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.5 - Compression Level: 19 (MB/s, more is better)
  EPYC 7F52:    76.8  [SE +/- 0.17, N = 3]  Min: 76.5 / Avg: 76.8 / Max: 77.1
  Linux 5.10.3: 74.7  [SE +/- 0.03, N = 3]  Min: 74.7 / Avg: 74.73 / Max: 74.8
  Compiler notes: (CC) gcc options: -O3 -pthread -lz -llzma
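A compression level trades encode speed for ratio, which is why zstd at level 19 runs at tens of MB/s here while LZ4 at level 1 runs at thousands. A sketch of measuring that tradeoff, using stdlib zlib as a stand-in for zstd (levels and input data are illustrative, not the Ubuntu ISO the test profile uses):

```python
import time
import zlib

# Repetitive sample data; zlib levels 1-9 stand in for zstd's 1-19/22.
data = b"the quick brown fox jumps over the lazy dog " * 50_000

for level in (1, 6, 9):
    start = time.perf_counter()
    compressed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    mbps = len(data) / elapsed / 1e6
    ratio = len(data) / len(compressed)
    print(f"level {level}: {mbps:8.1f} MB/s, ratio {ratio:.1f}x")
```

Higher levels should show lower MB/s and a higher ratio, matching the pattern between the level 19 result above and the level 1 LZ4 results later in this file.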

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer ISPC - Model: Asian Dragon Obj (Frames Per Second, more is better)
  EPYC 7F52:    19.64  [SE +/- 0.03, N = 3]  Min: 19.6 / Avg: 19.64 / Max: 19.69  (MIN: 18.92 / MAX: 19.94)
  Linux 5.10.3: 19.59  [SE +/- 0.04, N = 3]  Min: 19.51 / Avg: 19.59 / Max: 19.66  (MIN: 18.78 / MAX: 19.96)

Embree 3.9.0 - Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second, more is better)
  EPYC 7F52:    21.18  [SE +/- 0.20, N = 6]  Min: 20.83 / Avg: 21.18 / Max: 22.14  (MIN: 20.68 / MAX: 22.95)
  Linux 5.10.3: 20.89  [SE +/- 0.03, N = 3]  Min: 20.85 / Avg: 20.89 / Max: 20.96  (MIN: 20.71 / MAX: 22.1)

Embree 3.9.0 - Binary: Pathtracer - Model: Asian Dragon Obj (Frames Per Second, more is better)
  Linux 5.10.3: 20.42  [SE +/- 0.03, N = 3]  Min: 20.36 / Avg: 20.42 / Max: 20.47  (MIN: 19.57 / MAX: 20.77)
  EPYC 7F52:    20.41  [SE +/- 0.11, N = 3]  Min: 20.19 / Avg: 20.41 / Max: 20.55  (MIN: 19.42 / MAX: 20.8)

CLOMP

CLOMP is the C version of the Livermore OpenMP benchmark developed to measure OpenMP overheads and other performance impacts due to threading in order to influence future system designs. This particular test profile configuration is currently set to look at the OpenMP static schedule speed-up across all available CPU cores using the recommended test configuration. Learn more via the OpenBenchmarking.org test page.

CLOMP 1.2 - Static OMP Speedup (Speedup, more is better)
  Linux 5.10.3: 50.1  [SE +/- 0.09, N = 3]  Min: 49.9 / Avg: 50.07 / Max: 50.2
  EPYC 7F52:    50.1  [SE +/- 0.21, N = 3]  Min: 49.8 / Avg: 50.1 / Max: 50.5
  Compiler notes: (CC) gcc options: -fopenmp -O3 -lm

Stockfish

This is a test of Stockfish, an advanced C++11 chess benchmark that can scale up to 128 CPU cores. Learn more via the OpenBenchmarking.org test page.

Stockfish 12 - Total Time (Nodes Per Second, more is better)
  EPYC 7F52:    36388251  [SE +/- 300939.62, N = 3]  Min: 35813405 / Avg: 36388251.33 / Max: 36830134
  Linux 5.10.3: 36383816  [SE +/- 178225.83, N = 3]  Min: 36202829 / Avg: 36383815.67 / Max: 36740253
  Compiler notes: (CXX) g++ options: -m64 -lpthread -fno-exceptions -std=c++17 -pedantic -O3 -msse -msse3 -mpopcnt -msse4.1 -mssse3 -msse2 -flto -flto=jobserver
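Stockfish's nodes-per-second figure is the number of positions visited during search divided by wall time. A toy sketch of that metric using a uniform tree in place of a real chess position (branching factor and depth are made up; a real engine prunes most of the tree rather than visiting it exhaustively):

```python
import time

def count_nodes(depth: int, branching: int) -> int:
    """Walk a full game tree and count every node visited (no evaluation)."""
    if depth == 0:
        return 1
    return 1 + sum(count_nodes(depth - 1, branching) for _ in range(branching))

start = time.perf_counter()
nodes = count_nodes(depth=6, branching=8)
elapsed = time.perf_counter() - start
print(f"{nodes} nodes, {nodes / elapsed:,.0f} nodes/sec")
```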

eSpeak-NG Speech Engine

This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.

eSpeak-NG Speech Engine 20200907 - Text-To-Speech Synthesis (Seconds, fewer is better)
  EPYC 7F52:    30.78  [SE +/- 0.26, N = 4]  Min: 30.21 / Avg: 30.78 / Max: 31.23
  Linux 5.10.3: 31.05  [SE +/- 0.07, N = 4]  Min: 30.86 / Avg: 31.05 / Max: 31.18
  Compiler notes: (CC) gcc options: -O2 -std=c99

rav1e


rav1e 0.4 Alpha - Speed: 6 (Frames Per Second, more is better)
  EPYC 7F52:    1.464  [SE +/- 0.003, N = 3]  Min: 1.46 / Avg: 1.46 / Max: 1.47
  Linux 5.10.3: 1.461  [SE +/- 0.001, N = 3]  Min: 1.46 / Avg: 1.46 / Max: 1.46

PyPerformance


PyPerformance 1.0.0 - Benchmark: django_template (Milliseconds, fewer is better)
  Linux 5.10.3: 47.5  [SE +/- 0.27, N = 3]  Min: 47 / Avg: 47.53 / Max: 47.9
  EPYC 7F52:    48.3  [SE +/- 0.32, N = 3]  Min: 47.7 / Avg: 48.27 / Max: 48.8

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless, Highest Compression (Encode Time - Seconds, fewer is better)
  Linux 5.10.3: 36.28  [SE +/- 0.05, N = 3]  Min: 36.18 / Avg: 36.28 / Max: 36.35
  EPYC 7F52:    36.31  [SE +/- 0.04, N = 3]  Min: 36.23 / Avg: 36.31 / Max: 36.36
  Compiler notes: (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

PlaidML

This test profile uses PlaidML deep learning framework developed by Intel for offering up various benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML - FP16: No - Mode: Inference - Network: IMDB LSTM - Device: CPU (FPS, more is better)
  Linux 5.10.3: 666.79  [SE +/- 2.01, N = 3]  Min: 663.75 / Avg: 666.79 / Max: 670.6
  EPYC 7F52:    665.88  [SE +/- 2.99, N = 3]  Min: 659.97 / Avg: 665.88 / Max: 669.68

PyPerformance


PyPerformance 1.0.0 - Benchmark: regex_compile (Milliseconds, fewer is better)
  EPYC 7F52:    173
  Linux 5.10.3: 173
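PyPerformance benchmarks such as regex_compile report average wall time per iteration. A rough stdlib-only sketch in the same spirit using timeit (pyperformance itself uses pyperf with warmup runs and calibration, which this omits; the pattern below is made up):

```python
import re
import timeit

# Compile a non-trivial pattern repeatedly; re.purge() clears the regex
# cache so each call really recompiles instead of returning a cached object.
def compile_pattern():
    re.purge()
    re.compile(r"(?P<year>\d{4})-(?P<month>\d{2})-(?P<day>\d{2})T\d{2}:\d{2}")

n = 200
seconds = timeit.timeit(compile_pattern, number=n)
print(f"{seconds / n * 1e3:.3f} ms per compile (average of {n} runs)")
```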

7-Zip Compression

This is a test of 7-Zip using p7zip with its integrated benchmark feature or upstream 7-Zip for the Windows x64 build. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 16.02 - Compress Speed Test (MIPS, more is better)
  Linux 5.10.3: 108209  [SE +/- 140.42, N = 3]  Min: 107998 / Avg: 108209 / Max: 108475
  EPYC 7F52:    106803  [SE +/- 904.09, N = 3]  Min: 105145 / Avg: 106802.67 / Max: 108257
  Compiler notes: (CXX) g++ options: -pipe -lpthread

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

Timed FFmpeg Compilation 4.2.2 - Time To Compile (Seconds, fewer is better)
  EPYC 7F52:    33.89  [SE +/- 0.08, N = 3]  Min: 33.8 / Avg: 33.89 / Max: 34.05
  Linux 5.10.3: 34.13  [SE +/- 0.06, N = 3]  Min: 34.03 / Avg: 34.13 / Max: 34.24

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0 - Encoder Mode: Speed 0 Two-Pass (Frames Per Second, more is better)
  EPYC 7F52:    0.31  [SE +/- 0.00, N = 3]  Min: 0.31 / Avg: 0.31 / Max: 0.31
  Linux 5.10.3: 0.30  [SE +/- 0.00, N = 3]  Min: 0.3 / Avg: 0.3 / Max: 0.31
  Compiler notes: (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Embree


Embree 3.9.0 - Binary: Pathtracer ISPC - Model: Crown (Frames Per Second, more is better)
  EPYC 7F52:    18.77  [SE +/- 0.12, N = 3]  Min: 18.59 / Avg: 18.77 / Max: 18.99  (MIN: 18.46 / MAX: 19.39)
  Linux 5.10.3: 18.62  [SE +/- 0.16, N = 3]  Min: 18.31 / Avg: 18.62 / Max: 18.81  (MIN: 17.94 / MAX: 19.11)

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

PHPBench 0.8.1 - PHP Benchmark Suite (Score, more is better)
  Linux 5.10.3: 625441  [SE +/- 627.03, N = 3]  Min: 624291 / Avg: 625441.33 / Max: 626449
  EPYC 7F52:    618552  [SE +/- 1384.96, N = 3]  Min: 616209 / Avg: 618552 / Max: 621003

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.

LibRaw 0.20 - Post-Processing Benchmark (Mpix/sec, more is better)
  Linux 5.10.3: 38.56  [SE +/- 0.07, N = 3]  Min: 38.45 / Avg: 38.56 / Max: 38.69
  EPYC 7F52:    38.18  [SE +/- 0.04, N = 3]  Min: 38.09 / Avg: 38.18 / Max: 38.23
  Compiler notes: (CXX) g++ options: -O2 -fopenmp -ljpeg -lz -lm

Redis


Redis 6.0.9 - Test: LPOP (Requests Per Second, more is better)
  EPYC 7F52:    1915545.96  [SE +/- 19724.55, N = 15]  Min: 1782759.38 / Avg: 1915545.96 / Max: 2012394.38
  Linux 5.10.3: 1228236.41  [SE +/- 7581.95, N = 3]  Min: 1213747.62 / Avg: 1228236.41 / Max: 1239355.62

AOM AV1


AOM AV1 2.0 - Encoder Mode: Speed 6 Realtime (Frames Per Second, more is better)
  Linux 5.10.3: 19.40  [SE +/- 0.08, N = 3]  Min: 19.27 / Avg: 19.4 / Max: 19.55
  EPYC 7F52:    19.16  [SE +/- 0.09, N = 3]  Min: 18.99 / Avg: 19.16 / Max: 19.31

Embree


Embree 3.9.0 - Binary: Pathtracer - Model: Crown (Frames Per Second, more is better)
  EPYC 7F52:    19.78  [SE +/- 0.07, N = 3]  Min: 19.65 / Avg: 19.78 / Max: 19.87  (MIN: 19.53 / MAX: 20.18)
  Linux 5.10.3: 19.53  [SE +/- 0.08, N = 3]  Min: 19.38 / Avg: 19.53 / Max: 19.64  (MIN: 19.27 / MAX: 19.84)

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.
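The benchmark times compression and decompression of a large file and reports throughput in MB/s. The measurement pattern can be sketched with Python's stdlib `zlib` (used here purely as a stand-in, since LZ4 has no standard-library binding; the real test links liblz4 directly):

```python
import time
import zlib

# Throughput measurement pattern: compress a buffer at a given
# level, decompress it back, and report MB/s for each direction.
data = b"phoronix " * 1_000_000   # ~9 MB of compressible input

t0 = time.perf_counter()
compressed = zlib.compress(data, level=1)
t1 = time.perf_counter()
restored = zlib.decompress(compressed)
t2 = time.perf_counter()

assert restored == data           # round trip must be lossless
mb = len(data) / 1e6
print(f"compress:   {mb / (t1 - t0):.0f} MB/s")
print(f"decompress: {mb / (t2 - t1):.0f} MB/s")
```

As in the LZ4 results below, decompression is typically much faster than compression at low levels.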

LZ4 Compression 1.9.3 - Compression Level: 1 - Decompression Speed (MB/s, More Is Better)
  Linux 5.10.3: 11490.6 (SE +/- 30.69, N = 3; Min: 11430.3 / Max: 11530.6)
  EPYC 7F52:    11455.8 (SE +/- 42.92, N = 3; Min: 11380.2 / Max: 11528.8)
  1. (CC) gcc options: -O3

LZ4 Compression 1.9.3 - Compression Level: 1 - Compression Speed (MB/s, More Is Better)
  Linux 5.10.3: 10009.49 (SE +/- 59.35, N = 3; Min: 9891.06 / Max: 10075.62)
  EPYC 7F52:    9947.80 (SE +/- 51.25, N = 3; Min: 9892.02 / Max: 10050.17)
  1. (CC) gcc options: -O3

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: aermod (Seconds, Fewer Is Better)
  EPYC 7F52:    6.12
  Linux 5.10.3: 6.16

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
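Stress-NG reports throughput in "bogo ops/s": the number of iterations of a stressor's work loop completed per second. A toy illustration of that metric (not Stress-NG's actual stressor code, which is written in C):

```python
import math
import time

def cpu_stress(duration=0.5):
    """Run a small fixed unit of work in a loop and count iterations,
    the same shape as a Stress-NG 'bogo ops/s' figure."""
    ops = 0
    deadline = time.perf_counter() + duration
    while time.perf_counter() < deadline:
        math.sqrt(ops + 1.0)   # stand-in unit of work
        ops += 1
    return ops / duration      # bogo ops per second

print(f"{cpu_stress():.0f} bogo ops/s")
```

Because a "bogo op" is defined per stressor, the figures are only comparable within one test (e.g. CPU Stress vs. CPU Stress), never across different stressors.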

Stress-NG 0.11.07 - Test: CPU Stress (Bogo Ops/s, More Is Better)
  Linux 5.10.3: 6266.84 (SE +/- 5.69, N = 3; Min: 6257.59 / Max: 6277.21)
  EPYC 7F52:    6244.33 (SE +/- 22.12, N = 3; Min: 6200.79 / Max: 6272.91)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

John The Ripper

This is a benchmark of John The Ripper, which is a password cracker. Learn more via the OpenBenchmarking.org test page.

John The Ripper 1.9.0-jumbo-1 - Test: Blowfish (Real C/S, More Is Better)
  Linux 5.10.3: 26397 (SE +/- 6.17, N = 3; Min: 26390 / Max: 26409)
  EPYC 7F52:    26390 (SE +/- 5.78, N = 3; Min: 26380 / Max: 26400)
  1. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -pthread -lm -lz -ldl -lcrypt -lbz2

Aircrack-ng

Aircrack-ng is a tool for assessing WiFi/WLAN network security. Learn more via the OpenBenchmarking.org test page.

Aircrack-ng 1.5.2 (k/s, More Is Better)
  EPYC 7F52:    56912.74 (SE +/- 72.75, N = 3; Min: 56771 / Max: 57012.05)
  Linux 5.10.3: 56766.00 (SE +/- 97.98, N = 3; Min: 56667.44 / Max: 56961.95)
  1. (CXX) g++ options: -O3 -fvisibility=hidden -masm=intel -fcommon -rdynamic -lpthread -lz -lcrypto -lhwloc -ldl -lm -pthread

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: NUMA (Bogo Ops/s, More Is Better)
  Linux 5.10.3: 416.60 (SE +/- 0.07, N = 3; Min: 416.48 / Max: 416.72)
  EPYC 7F52:    409.25 (SE +/- 2.52, N = 3; Min: 404.24 / Max: 412.31)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

Stress-NG 0.11.07 - Test: Malloc (Bogo Ops/s, More Is Better)
  EPYC 7F52:    332554816.83 (SE +/- 811855.41, N = 3; Min: 331019569.63 / Max: 333780250.03)
  Linux 5.10.3: 332331122.53 (SE +/- 693009.74, N = 3; Min: 330945590.83 / Max: 333055730.62)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

Stress-NG 0.11.07 - Test: MEMFD (Bogo Ops/s, More Is Better)
  Linux 5.10.3: 712.74 (SE +/- 0.29, N = 3; Min: 712.18 / Max: 713.15)
  EPYC 7F52:    680.78 (SE +/- 0.22, N = 3; Min: 680.4 / Max: 681.16)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

Stress-NG 0.11.07 - Test: Matrix Math (Bogo Ops/s, More Is Better)
  EPYC 7F52:    77530.48 (SE +/- 117.38, N = 3; Min: 77300.25 / Max: 77685.33)
  Linux 5.10.3: 76518.79 (SE +/- 608.49, N = 3; Min: 75895.92 / Max: 77735.65)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

Stress-NG 0.11.07 - Test: Crypto (Bogo Ops/s, More Is Better)
  EPYC 7F52:    4565.97 (SE +/- 0.84, N = 3; Min: 4565.13 / Max: 4567.64)
  Linux 5.10.3: 4555.43 (SE +/- 5.74, N = 3; Min: 4543.96 / Max: 4561.41)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

Stress-NG 0.11.07 - Test: Glibc C String Functions (Bogo Ops/s, More Is Better)
  EPYC 7F52:    1144375.85 (SE +/- 2051.22, N = 3; Min: 1141681.39 / Max: 1148402.16)
  Linux 5.10.3: 1143670.00 (SE +/- 2853.73, N = 3; Min: 1140177.61 / Max: 1149325.64)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

Stress-NG 0.11.07 - Test: Context Switching (Bogo Ops/s, More Is Better)
  EPYC 7F52:    8409881.77 (SE +/- 27679.79, N = 3; Min: 8362250.79 / Max: 8458130.44)
  Linux 5.10.3: 8245888.97 (SE +/- 21287.84, N = 3; Min: 8205143.3 / Max: 8276955.71)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

Stress-NG 0.11.07 - Test: SENDFILE (Bogo Ops/s, More Is Better)
  EPYC 7F52:    297122.81 (SE +/- 100.47, N = 3; Min: 296962.2 / Max: 297307.7)
  Linux 5.10.3: 280154.74 (SE +/- 302.83, N = 3; Min: 279577.24 / Max: 280601.56)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

Stress-NG 0.11.07 - Test: Memory Copying (Bogo Ops/s, More Is Better)
  EPYC 7F52:    6435.73 (SE +/- 58.89, N = 3; Min: 6317.95 / Max: 6495.19)
  Linux 5.10.3: 6274.43 (SE +/- 3.47, N = 3; Min: 6267.56 / Max: 6278.7)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

Stress-NG 0.11.07 - Test: Vector Math (Bogo Ops/s, More Is Better)
  EPYC 7F52:    142981.97 (SE +/- 6.50, N = 3; Min: 142971.08 / Max: 142993.55)
  Linux 5.10.3: 142907.67 (SE +/- 19.82, N = 3; Min: 142868.09 / Max: 142929.44)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

Stress-NG 0.11.07 - Test: MMAP (Bogo Ops/s, More Is Better)
  Linux 5.10.3: 248.17 (SE +/- 0.19, N = 3; Min: 247.79 / Max: 248.37)
  EPYC 7F52:    229.80 (SE +/- 0.32, N = 3; Min: 229.19 / Max: 230.29)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

Stress-NG 0.11.07 - Test: Socket Activity (Bogo Ops/s, More Is Better)
  EPYC 7F52:    10784.40 (SE +/- 43.37, N = 3; Min: 10710.44 / Max: 10860.64)
  Linux 5.10.3: 10348.91 (SE +/- 36.73, N = 3; Min: 10288.33 / Max: 10415.18)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

Stress-NG 0.11.07 - Test: Forking (Bogo Ops/s, More Is Better)
  EPYC 7F52:    56181.28 (SE +/- 229.33, N = 3; Min: 55872.64 / Max: 56629.43)
  Linux 5.10.3: 44312.12 (SE +/- 139.19, N = 3; Min: 44117.52 / Max: 44581.81)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

Stress-NG 0.11.07 - Test: Atomic (Bogo Ops/s, More Is Better)
  EPYC 7F52:    512936.21 (SE +/- 436.80, N = 3; Min: 512387.81 / Max: 513799.33)
  Linux 5.10.3: 510793.92 (SE +/- 203.22, N = 3; Min: 510444.26 / Max: 511148.2)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

Stress-NG 0.11.07 - Test: Glibc Qsort Data Sorting (Bogo Ops/s, More Is Better)
  EPYC 7F52:    269.57 (SE +/- 0.99, N = 3; Min: 267.9 / Max: 271.33)
  Linux 5.10.3: 268.94 (SE +/- 0.93, N = 3; Min: 268 / Max: 270.8)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

Stress-NG 0.11.07 - Test: Semaphores (Bogo Ops/s, More Is Better)
  EPYC 7F52:    2314681.13 (SE +/- 14921.24, N = 3; Min: 2284970.57 / Max: 2331963.83)
  Linux 5.10.3: 2278162.65 (SE +/- 2645.51, N = 3; Min: 2273647.24 / Max: 2282808.76)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.5 - Compression Level: 3 (MB/s, More Is Better)
  EPYC 7F52:    8221.5 (SE +/- 33.83, N = 3; Min: 8157.8 / Max: 8273.1)
  Linux 5.10.3: 8027.8 (SE +/- 75.92, N = 3; Min: 7924.3 / Max: 8175.8)
  1. (CC) gcc options: -O3 -pthread -lz -llzma

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.
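PyPerformance's micro-benchmarks (pathlib, json_loads, chaos, float, and so on below) follow the usual timing-loop pattern: run a small workload many times and keep the best of several repetitions. A minimal sketch of such a loop, using a hypothetical pathlib workload rather than PyPerformance's actual harness:

```python
import timeit
from pathlib import PurePosixPath

def workload():
    # Small path-manipulation workload, the kind of thing the
    # pathlib micro-benchmark exercises repeatedly.
    p = PurePosixPath("/usr/share/doc/readme.txt")
    return (p.parent, p.name, p.suffix, p.with_suffix(".md"))

# Best-of-3 timing over 10k calls, reported in milliseconds.
best = min(timeit.repeat(workload, number=10_000, repeat=3))
print(f"best of 3: {best * 1000:.2f} ms per 10k calls")
```

Taking the minimum of several repetitions filters out scheduler noise, which is why the reported SE values for these tests are so small.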

PyPerformance 1.0.0 - Benchmark: pathlib (Milliseconds, Fewer Is Better)
  EPYC 7F52:    17.1 (SE +/- 0.00, N = 3; Min: 17.1 / Max: 17.1)
  Linux 5.10.3: 17.5 (SE +/- 0.00, N = 3; Min: 17.5 / Max: 17.5)

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer - Model: Asian Dragon (Frames Per Second, More Is Better)
  Linux 5.10.3: 21.06 (SE +/- 0.21, N = 3; Min: 20.65 / Max: 21.31; MIN: 20.53 / MAX: 22.46)
  EPYC 7F52:    20.97 (SE +/- 0.05, N = 3; Min: 20.88 / Max: 21.06; MIN: 20.82 / MAX: 22.27)

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0 - Encoder Mode: Speed 6 Two-Pass (Frames Per Second, More Is Better)
  Linux 5.10.3: 3.75 (SE +/- 0.01, N = 3; Min: 3.74 / Max: 3.77)
  EPYC 7F52:    3.74 (SE +/- 0.01, N = 3; Min: 3.73 / Max: 3.75)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

x265

This is a simple test of the x265 encoder, run on the CPU with 1080p and 4K options, measuring H.265 video encode performance. Learn more via the OpenBenchmarking.org test page.

x265 3.4 - Video Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Linux 5.10.3: 21.22 (SE +/- 0.03, N = 3; Min: 21.17 / Max: 21.25)
  EPYC 7F52:    20.93 (SE +/- 0.07, N = 3; Min: 20.83 / Max: 21.07)
  1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 Alpha - Speed: 10 (Frames Per Second, More Is Better)
  Linux 5.10.3: 3.192 (SE +/- 0.003, N = 3; Min: 3.19 / Max: 3.2)
  EPYC 7F52:    3.186 (SE +/- 0.002, N = 3; Min: 3.18 / Max: 3.19)

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: pickle_pure_python (Milliseconds, Fewer Is Better)
  Linux 5.10.3: 470 (SE +/- 2.08, N = 3; Min: 466 / Max: 473)
  EPYC 7F52:    477 (SE +/- 0.67, N = 3; Min: 476 / Max: 478)

Darmstadt Automotive Parallel Heterogeneous Suite

DAPHNE is the Darmstadt Automotive Parallel HeterogeNEous benchmark suite, with OpenCL / CUDA / OpenMP test cases for automotive workloads, used for evaluating programming models in the context of vehicle autonomous-driving capabilities. Learn more via the OpenBenchmarking.org test page.

Darmstadt Automotive Parallel Heterogeneous Suite - Backend: OpenMP - Kernel: NDT Mapping (Test Cases Per Minute, More Is Better)
  EPYC 7F52:    977.65 (SE +/- 3.32, N = 3; Min: 971.61 / Max: 983.06)
  Linux 5.10.3: 969.38 (SE +/- 6.61, N = 3; Min: 957.7 / Max: 980.58)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: doduc (Seconds, Fewer Is Better)
  EPYC 7F52:    7.26
  Linux 5.10.3: 7.26

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

Numenta Anomaly Benchmark 1.1 - Detector: Bayesian Changepoint (Seconds, Fewer Is Better)
  EPYC 7F52:    27.21 (SE +/- 0.25, N = 3; Min: 26.74 / Max: 27.58)
  Linux 5.10.3: 27.80 (SE +/- 0.09, N = 3; Min: 27.7 / Max: 27.98)

Unpacking Firefox

This simple test profile measures how long it takes to extract the .tar.xz source package of the Mozilla Firefox Web Browser. Learn more via the OpenBenchmarking.org test page.
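Extracting a .tar.xz archive is dominated by single-threaded xz decompression, which is why this test barely separates the two runs. The timed operation can be sketched with the stdlib `tarfile` module against a small synthetic archive (the real test uses the Firefox source tarball, not this hypothetical `sample.bin`):

```python
import io
import tarfile
import time

# Build a small synthetic .tar.xz in memory, then time its extraction --
# the same operation the benchmark performs on the Firefox source tarball.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:xz") as tar:
    payload = b"x" * 1_000_000
    info = tarfile.TarInfo(name="sample.bin")
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))

buf.seek(0)
t0 = time.perf_counter()
with tarfile.open(fileobj=buf, mode="r:xz") as tar:
    members = tar.getmembers()
    data = tar.extractfile(members[0]).read()
elapsed = time.perf_counter() - t0

assert data == payload
print(f"extracted {len(data)} bytes in {elapsed:.3f} s")
```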

Unpacking Firefox 84.0 - Extracting: firefox-84.0.source.tar.xz (Seconds, Fewer Is Better)
  EPYC 7F52:    20.43 (SE +/- 0.06, N = 4; Min: 20.27 / Max: 20.54)
  Linux 5.10.3: 20.48 (SE +/- 0.03, N = 4; Min: 20.39 / Max: 20.55)

Mlpack Benchmark

Mlpack benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

Mlpack Benchmark - Benchmark: scikit_svm (Seconds, Fewer Is Better)
  Linux 5.10.3: 23.02 (SE +/- 0.01, N = 3; Min: 23.01 / Max: 23.04)
  EPYC 7F52:    23.36 (SE +/- 0.27, N = 3; Min: 23.08 / Max: 23.9)

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9/WebM format using a sample 1080p video. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.8.2 - Speed: Speed 5 (Frames Per Second, More Is Better)
  Linux 5.10.3: 23.40 (SE +/- 0.10, N = 3; Min: 23.21 / Max: 23.51)
  EPYC 7F52:    23.08 (SE +/- 0.05, N = 3; Min: 22.99 / Max: 23.14)
  1. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=c++11

Crafty

This is a performance test of Crafty, an advanced open-source chess engine. Learn more via the OpenBenchmarking.org test page.

Crafty 25.2 - Elapsed Time (Nodes Per Second, More Is Better)
  EPYC 7F52:    7776189 (SE +/- 19965.32, N = 3; Min: 7739919 / Max: 7808788)
  Linux 5.10.3: 7633247 (SE +/- 30182.11, N = 3; Min: 7576064 / Max: 7678585)
  1. (CC) gcc options: -pthread -lstdc++ -fprofile-use -lm

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench to facilitate the database benchmarks. Learn more via the OpenBenchmarking.org test page.
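In a closed-loop benchmark like pgbench, average latency and TPS are two views of the same measurement: with C concurrent clients, each issuing its next transaction as soon as the previous one finishes, latency in milliseconds is approximately C / TPS * 1000. A sanity-check sketch against the numbers reported below:

```python
def avg_latency_ms(clients, tps):
    """Closed-loop relation: each of `clients` connections runs
    transactions back-to-back, so avg latency ~= clients / TPS."""
    return clients / tps * 1000.0

# Read Write, scaling factor 1, 100 clients, EPYC 7F52 run:
# 100 clients at ~3332 TPS implies ~30.01 ms, matching the
# ~30.04 ms average latency reported for that configuration.
print(round(avg_latency_ms(100, 3332), 2))
```

The same check works for the 50-client read-write pair (50 / 4231 TPS implies about 11.8 ms, against a reported 11.82 ms).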

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 100 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
  EPYC 7F52:    30.04 (SE +/- 0.22, N = 3; Min: 29.59 / Max: 30.26)
  Linux 5.10.3: 30.34 (SE +/- 0.32, N = 3; Min: 29.84 / Max: 30.95)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 100 - Mode: Read Write (TPS, More Is Better)
  EPYC 7F52:    3332 (SE +/- 24.88, N = 3; Min: 3306.87 / Max: 3381.76)
  Linux 5.10.3: 3300 (SE +/- 35.07, N = 3; Min: 3233.96 / Max: 3353.67)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 250 - Mode: Read Only - Average Latency (ms, Fewer Is Better)
  EPYC 7F52:    0.449 (SE +/- 0.000, N = 3; Min: 0.45 / Max: 0.45)
  Linux 5.10.3: 0.467 (SE +/- 0.001, N = 3; Min: 0.46 / Max: 0.47)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 250 - Mode: Read Only (TPS, More Is Better)
  EPYC 7F52:    556825 (SE +/- 733.28, N = 3; Min: 555375.67 / Max: 557745.6)
  Linux 5.10.3: 536412 (SE +/- 1662.47, N = 3; Min: 534022.19 / Max: 539608.74)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 50 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
  EPYC 7F52:    11.82 (SE +/- 0.01, N = 3; Min: 11.8 / Max: 11.84)
  Linux 5.10.3: 12.05 (SE +/- 0.00, N = 3; Min: 12.04 / Max: 12.05)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 50 - Mode: Read Write (TPS, More Is Better)
  EPYC 7F52:    4231 (SE +/- 4.77, N = 3; Min: 4222.66 / Max: 4239.14)
  Linux 5.10.3: 4151 (SE +/- 1.61, N = 3; Min: 4149.57 / Max: 4154.47)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 100 - Mode: Read Only - Average Latency (ms, Fewer Is Better)
  EPYC 7F52:    0.195 (SE +/- 0.001, N = 3; Min: 0.19 / Max: 0.2)
  Linux 5.10.3: 0.198 (SE +/- 0.001, N = 3; Min: 0.2 / Max: 0.2)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 100 - Mode: Read Only (TPS, More Is Better)
  EPYC 7F52:    514307 (SE +/- 3559.12, N = 3; Min: 509047.1 / Max: 521090.34)
  Linux 5.10.3: 507161 (SE +/- 1764.57, N = 3; Min: 503639.44 / Max: 509125.5)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 1 - Mode: Read Only - Average Latency (ms, Fewer Is Better)
  EPYC 7F52:    0.035 (SE +/- 0.000, N = 3; Min: 0.04 / Max: 0.04)
  Linux 5.10.3: 0.036 (SE +/- 0.000, N = 3; Min: 0.04 / Max: 0.04)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 1 - Mode: Read Only (TPS, More Is Better)
  EPYC 7F52:    28273 (SE +/- 403.02, N = 3; Min: 27526.61 / Max: 28909.88)
  Linux 5.10.3: 27896 (SE +/- 272.58, N = 3; Min: 27516.31 / Max: 28424.33)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 1 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
  EPYC 7F52:    0.263 (SE +/- 0.002, N = 3; Min: 0.26 / Max: 0.27)
  Linux 5.10.3: 0.264 (SE +/- 0.001, N = 3; Min: 0.26 / Max: 0.27)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 1 - Mode: Read Write (TPS, More Is Better)
  EPYC 7F52:    3803 (SE +/- 24.75, N = 3; Min: 3777.18 / Max: 3852.36)
  Linux 5.10.3: 3782 (SE +/- 16.52, N = 3; Min: 3761 / Max: 3814.53)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

Sunflow Rendering System

This test runs benchmarks of the Sunflow Rendering System. The Sunflow Rendering System is an open-source render engine for photo-realistic image synthesis with a ray-tracing core. Learn more via the OpenBenchmarking.org test page.

Sunflow Rendering System 0.07.2 - Global Illumination + Image Synthesis (Seconds, Fewer Is Better)
  Linux 5.10.3: 0.818 (SE +/- 0.013, N = 15; Min: 0.73 / Max: 0.96; MIN: 0.58 / MAX: 1.49)
  EPYC 7F52:    0.820 (SE +/- 0.008, N = 3; Min: 0.81 / Max: 0.84; MIN: 0.56 / MAX: 1.43)

librsvg

RSVG/librsvg is an SVG vector graphics library. This test profile times how long it takes to complete various operations by rsvg-convert. Learn more via the OpenBenchmarking.org test page.

librsvg - Operation: SVG Files To PNG (Seconds, Fewer Is Better)
  EPYC 7F52:    24.21 (SE +/- 0.05, N = 3; Min: 24.11 / Max: 24.3)
  Linux 5.10.3: 25.08 (SE +/- 0.11, N = 3; Min: 24.94 / Max: 25.29)
  1. rsvg-convert version 2.48.2

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.0 - Video Input: Bosphorus 4K - Video Preset: Very Fast (Frames Per Second, More Is Better)
  Linux 5.10.3: 24.56 (SE +/- 0.02, N = 3; Min: 24.52 / Max: 24.58)
  EPYC 7F52:    24.44 (SE +/- 0.03, N = 3; Min: 24.39 / Max: 24.48)
  1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: json_loads (Milliseconds, Fewer Is Better)
  Linux 5.10.3: 24.7 (SE +/- 0.00, N = 3; Min: 24.7 / Max: 24.7)
  EPYC 7F52:    24.9 (SE +/- 0.03, N = 3; Min: 24.8 / Max: 24.9)

WavPack Audio Encoding

This test times how long it takes to encode a sample WAV file to WavPack format with very high quality settings. Learn more via the OpenBenchmarking.org test page.

WavPack Audio Encoding 5.3 - WAV To WavPack (Seconds, Fewer Is Better)
  Linux 5.10.3: 13.74 (SE +/- 0.01, N = 5; Min: 13.73 / Max: 13.78)
  EPYC 7F52:    13.75 (SE +/- 0.01, N = 5; Min: 13.73 / Max: 13.8)
  1. (CXX) g++ options: -rdynamic

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: chaos (Milliseconds, Fewer Is Better)
Linux 5.10.3: 112
EPYC 7F52: 113

PyPerformance 1.0.0 - Benchmark: float (Milliseconds, Fewer Is Better)
Linux 5.10.3: 116 (SE +/- 0.33, N = 3; Min: 116 / Avg: 116.33 / Max: 117)
EPYC 7F52: 120

Timed Apache Compilation

This test times how long it takes to build the Apache HTTPD web server. Learn more via the OpenBenchmarking.org test page.

Timed Apache Compilation 2.4.41 - Time To Compile (Seconds, Fewer Is Better)
EPYC 7F52: 22.40 (SE +/- 0.03, N = 3; Min: 22.36 / Avg: 22.4 / Max: 22.45)
Linux 5.10.3: 22.68 (SE +/- 0.04, N = 3; Min: 22.62 / Avg: 22.68 / Max: 22.75)

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: linpk (Seconds, Fewer Is Better)
EPYC 7F52: 3.18
Linux 5.10.3: 3.18

Coremark

This is a test of EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

Coremark 1.0 - CoreMark Size 666 - Iterations Per Second (Iterations/Sec, More Is Better)
Linux 5.10.3: 694463.10 (SE +/- 3903.98, N = 3; Min: 688468.16 / Avg: 694463.1 / Max: 701792.86)
EPYC 7F52: 688169.93 (SE +/- 1877.36, N = 3; Min: 685702.04 / Avg: 688169.93 / Max: 691854.49)
1. (CC) gcc options: -O2 -lrt" -lrt

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: ac (Seconds, Fewer Is Better)
Linux 5.10.3: 6.51
EPYC 7F52: 6.53

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: nbody (Milliseconds, Fewer Is Better)
EPYC 7F52: 113
Linux 5.10.3: 113

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
EPYC 7F52: 5.52280 (SE +/- 0.02675, N = 3; Min: 5.49 / Avg: 5.52 / Max: 5.58; MIN: 5.32)
Linux 5.10.3: 5.52758 (SE +/- 0.01913, N = 3; Min: 5.5 / Avg: 5.53 / Max: 5.56; MIN: 5.38)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

XZ Compression

This test measures the time needed to compress a sample file (an Ubuntu file-system image) using XZ compression. Learn more via the OpenBenchmarking.org test page.

XZ Compression 5.2.4 - Compressing ubuntu-16.04.3-server-i386.img, Compression Level 9 (Seconds, Fewer Is Better)
EPYC 7F52: 20.99 (SE +/- 0.05, N = 3; Min: 20.95 / Avg: 20.99 / Max: 21.09)
Linux 5.10.3: 21.23 (SE +/- 0.06, N = 3; Min: 21.17 / Avg: 21.23 / Max: 21.35)
1. (CC) gcc options: -pthread -fvisibility=hidden -O2
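The LZMA algorithm that the xz CLI applies at level 9 is also exposed through Python's standard lzma module. A minimal round-trip sketch at that preset — the benchmark itself compresses a full Ubuntu filesystem image with the xz tool, so the byte string below is only a stand-in:

```python
import lzma

# Compressible sample data standing in for the Ubuntu filesystem image
data = b"phoronix test suite " * 10_000

# preset=9 corresponds to the CLI's `xz -9` (highest standard compression level)
compressed = lzma.compress(data, preset=9)
restored = lzma.decompress(compressed)

assert restored == data
print(f"{len(data)} -> {len(compressed)} bytes")
```

Higher presets trade encode time for ratio, which is exactly why this test is timed at level 9 rather than the default level 6.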

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: crypto_pyaes (Milliseconds, Fewer Is Better)
EPYC 7F52: 109
Linux 5.10.3: 110

Monkey Audio Encoding

This test times how long it takes to encode a sample WAV file to Monkey's Audio APE format. Learn more via the OpenBenchmarking.org test page.

Monkey Audio Encoding 3.99.6 - WAV To APE (Seconds, Fewer Is Better)
Linux 5.10.3: 12.50 (SE +/- 0.01, N = 5; Min: 12.48 / Avg: 12.5 / Max: 12.52)
EPYC 7F52: 12.51 (SE +/- 0.01, N = 5; Min: 12.49 / Avg: 12.51 / Max: 12.52)
1. (CXX) g++ options: -O3 -pedantic -rdynamic -lrt

Ogg Audio Encoding

This test times how long it takes to encode a sample WAV file to Ogg format using the reference Xiph.org tools/libraries. Learn more via the OpenBenchmarking.org test page.

Ogg Audio Encoding 1.3.4 - WAV To Ogg (Seconds, Fewer Is Better)
EPYC 7F52: 20.60 (SE +/- 0.03, N = 3; Min: 20.56 / Avg: 20.6 / Max: 20.65)
Linux 5.10.3: 20.69 (SE +/- 0.03, N = 3; Min: 20.65 / Avg: 20.69 / Max: 20.76)
1. (CC) gcc options: -O2 -ffast-math -fsigned-char

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0 - Encoder Mode: Speed 4 Two-Pass (Frames Per Second, More Is Better)
EPYC 7F52: 2.42 (SE +/- 0.00, N = 3; Min: 2.41 / Avg: 2.42 / Max: 2.42)
Linux 5.10.3: 2.41 (SE +/- 0.00, N = 3; Min: 2.4 / Avg: 2.41 / Max: 2.41)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Timed MPlayer Compilation

This test times how long it takes to build the MPlayer open-source media player program. Learn more via the OpenBenchmarking.org test page.

Timed MPlayer Compilation 1.4 - Time To Compile (Seconds, Fewer Is Better)
EPYC 7F52: 20.38 (SE +/- 0.05, N = 3; Min: 20.29 / Avg: 20.38 / Max: 20.47)
Linux 5.10.3: 20.43 (SE +/- 0.02, N = 3; Min: 20.39 / Avg: 20.43 / Max: 20.46)

Darmstadt Automotive Parallel Heterogeneous Suite

DAPHNE is the Darmstadt Automotive Parallel HeterogeNEous Benchmark Suite, with OpenCL / CUDA / OpenMP test cases covering automotive workloads for evaluating programming models in the context of vehicle autonomous driving capabilities. Learn more via the OpenBenchmarking.org test page.

Darmstadt Automotive Parallel Heterogeneous Suite - Backend: OpenMP - Kernel: Euclidean Cluster (Test Cases Per Minute, More Is Better)
EPYC 7F52: 1090.64 (SE +/- 1.60, N = 3; Min: 1087.77 / Avg: 1090.64 / Max: 1093.31)
Linux 5.10.3: 1085.36 (SE +/- 1.78, N = 3; Min: 1083.3 / Avg: 1085.36 / Max: 1088.91)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
EPYC 7F52: 2.00519 (SE +/- 0.01148, N = 3; Min: 1.98 / Avg: 2.01 / Max: 2.02; MIN: 1.87)
Linux 5.10.3: 2.40810 (SE +/- 0.02630, N = 5; Min: 2.31 / Avg: 2.41 / Max: 2.47; MIN: 2.25)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

RNNoise

RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26 minute long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.

RNNoise 2020-06-28 (Seconds, Fewer Is Better)
Linux 5.10.3: 20.10 (SE +/- 0.02, N = 3; Min: 20.08 / Avg: 20.1 / Max: 20.15)
EPYC 7F52: 20.13 (SE +/- 0.00, N = 3; Min: 20.12 / Avg: 20.13 / Max: 20.14)
1. (CC) gcc options: -O2 -pedantic -fvisibility=hidden

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test measures the RSA 4096-bit performance of OpenSSL. Learn more via the OpenBenchmarking.org test page.

OpenSSL 1.1.1 - RSA 4096-bit Performance (Signs Per Second, More Is Better)
Linux 5.10.3: 4579.8 (SE +/- 0.71, N = 3; Min: 4578.5 / Avg: 4579.83 / Max: 4580.9)
EPYC 7F52: 4571.4 (SE +/- 0.76, N = 3; Min: 4570 / Avg: 4571.4 / Max: 4572.6)
1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

OCRMyPDF

OCRMyPDF adds an optical character recognition (OCR) text layer to scanned PDF files, producing new PDFs with the text now selectable/searchable/copy-paste capable. OCRMyPDF leverages the Tesseract OCR engine and is written in Python. Learn more via the OpenBenchmarking.org test page.

OCRMyPDF 9.6.0+dfsg - Processing 60 Page PDF Document (Seconds, Fewer Is Better)
EPYC 7F52: 19.52 (SE +/- 0.07, N = 3; Min: 19.41 / Avg: 19.51 / Max: 19.64)
Linux 5.10.3: 19.55 (SE +/- 0.04, N = 3; Min: 19.47 / Avg: 19.55 / Max: 19.6)

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.7.0 - Video Input: Chimera 1080p (FPS, More Is Better)
Linux 5.10.3: 581.26 (SE +/- 1.10, N = 3; Min: 579.17 / Avg: 581.26 / Max: 582.88; MIN: 460.79 / MAX: 716.22)
EPYC 7F52: 574.78 (SE +/- 1.17, N = 3; Min: 572.44 / Avg: 574.78 / Max: 576.02; MIN: 454.24 / MAX: 710.14)
1. (CC) gcc options: -pthread

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3 - Target: CPU - Model: MobileNet v2 (ms, Fewer Is Better)
EPYC 7F52: 274.97 (SE +/- 0.53, N = 3; Min: 274.23 / Avg: 274.97 / Max: 275.99; MIN: 272.73 / MAX: 289.81)
Linux 5.10.3: 275.51 (SE +/- 0.42, N = 3; Min: 274.94 / Avg: 275.51 / Max: 276.32; MIN: 272.98 / MAX: 294.91)
1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0 - Preset: Thorough (Seconds, Fewer Is Better)
EPYC 7F52: 13.79 (SE +/- 0.01, N = 3; Min: 13.77 / Avg: 13.79 / Max: 13.81)
Linux 5.10.3: 13.79 (SE +/- 0.01, N = 3; Min: 13.77 / Avg: 13.79 / Max: 13.8)
1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3 - Target: CPU - Model: SqueezeNet v1.1 (ms, Fewer Is Better)
EPYC 7F52: 263.04 (SE +/- 0.77, N = 3; Min: 261.76 / Avg: 263.04 / Max: 264.43; MIN: 260.98 / MAX: 265.86)
Linux 5.10.3: 264.36 (SE +/- 0.25, N = 3; Min: 264.07 / Avg: 264.36 / Max: 264.86; MIN: 261.25 / MAX: 266.06)
1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0 - Encoder Mode: Speed 8 Realtime (Frames Per Second, More Is Better)
EPYC 7F52: 34.12 (SE +/- 0.23, N = 3; Min: 33.66 / Avg: 34.12 / Max: 34.36)
Linux 5.10.3: 33.98 (SE +/- 0.06, N = 3; Min: 33.86 / Avg: 33.98 / Max: 34.04)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless (Encode Time - Seconds, Fewer Is Better)
EPYC 7F52: 17.50 (SE +/- 0.07, N = 3; Min: 17.37 / Avg: 17.5 / Max: 17.59)
Linux 5.10.3: 17.57 (SE +/- 0.08, N = 3; Min: 17.41 / Avg: 17.57 / Max: 17.68)
1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and was developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.0 - Video Input: Bosphorus 1080p - Video Preset: Slow (Frames Per Second, More Is Better)
Linux 5.10.3: 35.35 (SE +/- 0.02, N = 3; Min: 35.32 / Avg: 35.35 / Max: 35.38)
EPYC 7F52: 35.05 (SE +/- 0.02, N = 3; Min: 35.03 / Avg: 35.05 / Max: 35.09)
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Kvazaar 2.0 - Video Input: Bosphorus 1080p - Video Preset: Medium (Frames Per Second, More Is Better)
Linux 5.10.3: 36.27 (SE +/- 0.15, N = 3; Min: 36.09 / Avg: 36.27 / Max: 36.57)
EPYC 7F52: 35.97 (SE +/- 0.02, N = 3; Min: 35.93 / Avg: 35.97 / Max: 36.01)
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8 - Encoder Mode: Enc Mode 4 - Input: 1080p (Frames Per Second, More Is Better)
Linux 5.10.3: 5.388 (SE +/- 0.009, N = 3; Min: 5.38 / Avg: 5.39 / Max: 5.41)
EPYC 7F52: 5.360 (SE +/- 0.023, N = 3; Min: 5.32 / Avg: 5.36 / Max: 5.4)
1. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9 - Test: LPUSH (Requests Per Second, More Is Better)
EPYC 7F52: 1216222.50 (SE +/- 14085.60, N = 3; Min: 1192238.38 / Avg: 1216222.5 / Max: 1241012.38)
Linux 5.10.3: 1174489.56 (SE +/- 11678.97, N = 6; Min: 1132684 / Avg: 1174489.56 / Max: 1216700.75)
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.7.0 - Video Input: Summer Nature 4K (FPS, More Is Better)
EPYC 7F52: 227.67 (SE +/- 0.89, N = 3; Min: 226.22 / Avg: 227.67 / Max: 229.29; MIN: 160.75 / MAX: 250.13)
Linux 5.10.3: 227.34 (SE +/- 0.24, N = 3; Min: 226.89 / Avg: 227.34 / Max: 227.72; MIN: 166.45 / MAX: 246.55)
1. (CC) gcc options: -pthread

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: mdbx (Seconds, Fewer Is Better)
EPYC 7F52: 4.72
Linux 5.10.3: 4.72

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
EPYC 7F52: 1.51115 (SE +/- 0.00269, N = 3; Min: 1.51 / Avg: 1.51 / Max: 1.52; MIN: 1.48)
Linux 5.10.3: 1.52220 (SE +/- 0.00375, N = 3; Min: 1.52 / Avg: 1.52 / Max: 1.53; MIN: 1.49)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and was developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.0 - Video Input: Bosphorus 4K - Video Preset: Ultra Fast (Frames Per Second, More Is Better)
Linux 5.10.3: 41.27 (SE +/- 0.05, N = 3; Min: 41.2 / Avg: 41.27 / Max: 41.36)
EPYC 7F52: 40.41 (SE +/- 0.06, N = 3; Min: 40.3 / Avg: 40.41 / Max: 40.47)
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It is comprised of over 50 labeled real-world and artificial timeseries data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

Numenta Anomaly Benchmark 1.1 - Detector: Relative Entropy (Seconds, Fewer Is Better)
EPYC 7F52: 14.47 (SE +/- 0.02, N = 3; Min: 14.43 / Avg: 14.47 / Max: 14.51)
Linux 5.10.3: 14.48 (SE +/- 0.17, N = 3; Min: 14.21 / Avg: 14.48 / Max: 14.78)

FLAC Audio Encoding

This test times how long it takes to encode a sample WAV file to FLAC format five times. Learn more via the OpenBenchmarking.org test page.

FLAC Audio Encoding 1.3.2 - WAV To FLAC (Seconds, Fewer Is Better)
EPYC 7F52: 8.562 (SE +/- 0.013, N = 5; Min: 8.51 / Avg: 8.56 / Max: 8.59)
Linux 5.10.3: 8.610 (SE +/- 0.005, N = 5; Min: 8.6 / Avg: 8.61 / Max: 8.63)
1. (CXX) g++ options: -O2 -fvisibility=hidden -logg -lm

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray-tracing and part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

Intel Open Image Denoise 1.2.0 - Scene: Memorial (Images / Sec, More Is Better)
Linux 5.10.3: 14.22 (SE +/- 0.04, N = 3; Min: 14.14 / Avg: 14.22 / Max: 14.29)
EPYC 7F52: 14.22 (SE +/- 0.04, N = 3; Min: 14.15 / Avg: 14.22 / Max: 14.3)

Opus Codec Encoding

Opus is an open audio codec. Opus is a lossy audio compression format designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.

Opus Codec Encoding 1.3.1 - WAV To Opus Encode (Seconds, Fewer Is Better)
Linux 5.10.3: 7.978 (SE +/- 0.012, N = 5; Min: 7.96 / Avg: 7.98 / Max: 8.03)
EPYC 7F52: 7.980 (SE +/- 0.016, N = 5; Min: 7.96 / Avg: 7.98 / Max: 8.04)
1. (CXX) g++ options: -fvisibility=hidden -logg -lm

ECP-CANDLE

The CANDLE benchmark codes implement deep learning architectures relevant to problems in cancer. These architectures address problems at different biological scales, specifically problems at the molecular, cellular and population scales. Learn more via the OpenBenchmarking.org test page.

ECP-CANDLE 0.3 - Benchmark: P1B2 (Seconds, Fewer Is Better)
Linux 5.10.3: 37.72
EPYC 7F52: 38.59

GNU Octave Benchmark

This test profile measures how long it takes to complete several reference GNU Octave files via octave-benchmark. GNU Octave is used for numerical computations and is an open-source alternative to MATLAB. Learn more via the OpenBenchmarking.org test page.

GNU Octave Benchmark 5.2.0 (Seconds, Fewer Is Better)
EPYC 7F52: 7.402 (SE +/- 0.044, N = 5; Min: 7.33 / Avg: 7.4 / Max: 7.57)
Linux 5.10.3: 7.492 (SE +/- 0.050, N = 5; Min: 7.35 / Avg: 7.49 / Max: 7.6)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
EPYC 7F52: 0.675915 (SE +/- 0.003167, N = 3; Min: 0.67 / Avg: 0.68 / Max: 0.68; MIN: 0.64)
Linux 5.10.3: 0.937089 (SE +/- 0.009336, N = 3; Min: 0.92 / Avg: 0.94 / Max: 0.95; MIN: 0.89)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
Linux 5.10.3: 1.82157 (SE +/- 0.00255, N = 3; Min: 1.82 / Avg: 1.82 / Max: 1.83; MIN: 1.78)
EPYC 7F52: 1.83844 (SE +/- 0.00177, N = 3; Min: 1.84 / Avg: 1.84 / Max: 1.84; MIN: 1.81)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

LibreOffice

Various benchmarking operations with the LibreOffice open-source office suite. Learn more via the OpenBenchmarking.org test page.

LibreOffice - Test: 20 Documents To PDF (Seconds, Fewer Is Better)
EPYC 7F52: 7.158 (SE +/- 0.077, N = 5; Min: 7.04 / Avg: 7.16 / Max: 7.46)
Linux 5.10.3: 7.208 (SE +/- 0.029, N = 5; Min: 7.15 / Avg: 7.21 / Max: 7.32)
1. LibreOffice 6.4.3.2 40(Build:2)

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 29Oct2020 - Model: Rhodopsin Protein (ns/day, More Is Better)
EPYC 7F52: 11.68 (SE +/- 0.16, N = 15; Min: 9.99 / Avg: 11.68 / Max: 12.23)
Linux 5.10.3: 11.52 (SE +/- 0.24, N = 15; Min: 9.44 / Avg: 11.52 / Max: 12.23)
1. (CXX) g++ options: -O3 -pthread -lm

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9 - Test: SADD (Requests Per Second, More Is Better)
EPYC 7F52: 1565518.88 (SE +/- 16740.24, N = 3; Min: 1541177.25 / Avg: 1565518.88 / Max: 1597597.5)
Linux 5.10.3: 1503178.00 (SE +/- 11341.57, N = 3; Min: 1481481.5 / Avg: 1503178 / Max: 1519756.88)
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance with x265. Learn more via the OpenBenchmarking.org test page.

x265 3.4 - Video Input: Bosphorus 1080p (Frames Per Second, More Is Better)
Linux 5.10.3: 62.27 (SE +/- 0.12, N = 3; Min: 62.05 / Avg: 62.27 / Max: 62.44)
EPYC 7F52: 61.76 (SE +/- 0.06, N = 3; Min: 61.65 / Avg: 61.76 / Max: 61.86)
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8 - Encoder Mode: Enc Mode 8 - Input: 1080p (Frames Per Second, More Is Better)
Linux 5.10.3: 39.01 (SE +/- 0.06, N = 3; Min: 38.94 / Avg: 39.01 / Max: 39.13)
EPYC 7F52: 38.53 (SE +/- 0.07, N = 3; Min: 38.39 / Avg: 38.53 / Max: 38.63)
1. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
EPYC 7F52: 0.772058 (SE +/- 0.010953, N = 3; Min: 0.75 / Avg: 0.77 / Max: 0.79; MIN: 0.72)
Linux 5.10.3: 1.335930 (SE +/- 0.004643, N = 3; Min: 1.33 / Avg: 1.34 / Max: 1.35; MIN: 1.28)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
EPYC 7F52: 2.36752 (SE +/- 0.01483, N = 3; Min: 2.34 / Avg: 2.37 / Max: 2.39; MIN: 2.3)
Linux 5.10.3: 3.14201 (SE +/- 0.00548, N = 3; Min: 3.13 / Avg: 3.14 / Max: 3.15; MIN: 3.1)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
EPYC 7F52: 4.03801 (SE +/- 0.05484, N = 3; Min: 3.94 / Avg: 4.04 / Max: 4.13; MIN: 3.84)
Linux 5.10.3: 4.73706 (SE +/- 0.04867, N = 15; Min: 4.51 / Avg: 4.74 / Max: 5.07; MIN: 4.42)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Timed MAFFT Alignment

This test performs an alignment of 100 pyruvate decarboxylase sequences. Learn more via the OpenBenchmarking.org test page.
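MAFFT itself uses FFT-accelerated progressive alignment; as background, the pairwise dynamic-programming alignment that multiple-sequence aligners build on can be sketched as a minimal Needleman-Wunsch scorer. This is illustrative only, not MAFFT's algorithm, and the match/mismatch/gap scores are arbitrary assumptions:

```python
def nw_score(a, b, match=1, mismatch=-1, gap=-2):
    """Global (Needleman-Wunsch) alignment score of two sequences."""
    rows, cols = len(a) + 1, len(b) + 1
    # dp[i][j] = best score aligning a[:i] with b[:j]
    dp = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):
        dp[i][0] = i * gap
    for j in range(1, cols):
        dp[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = dp[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            dp[i][j] = max(diag, dp[i - 1][j] + gap, dp[i][j - 1] + gap)
    return dp[-1][-1]

print(nw_score("GATTACA", "GCATGCU"))
```

The benchmark's runtime comes from doing this kind of O(n*m) work, plus guide-tree construction, across many long sequences.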

Timed MAFFT Alignment 7.471 - Multiple Sequence Alignment - LSU RNA (Seconds, fewer is better)
  EPYC 7F52:    9.009 (SE +/- 0.079, N = 3; Min: 8.91 / Avg: 9.01 / Max: 9.17)
  Linux 5.10.3: 9.033 (SE +/- 0.021, N = 3; Min: 8.99 / Avg: 9.03 / Max: 9.06)
1. (CC) gcc options: -std=c99 -O3 -lm -lpthread

Kvazaar

This is a test of Kvazaar, a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar won the 2016 ACM Open-Source Software Competition and is developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.0 - Video Input: Bosphorus 1080p - Video Preset: Very Fast (Frames Per Second, more is better)
  Linux 5.10.3: 71.05 (SE +/- 0.30, N = 3; Min: 70.58 / Avg: 71.05 / Max: 71.61)
  EPYC 7F52:    68.39 (SE +/- 0.10, N = 3; Min: 68.19 / Avg: 68.39 / Max: 68.53)
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Highest Compression (Encode Time - Seconds, fewer is better)
  Linux 5.10.3: 7.716 (SE +/- 0.006, N = 3; Min: 7.71 / Avg: 7.72 / Max: 7.73)
  EPYC 7F52:    7.732 (SE +/- 0.007, N = 3; Min: 7.72 / Avg: 7.73 / Max: 7.74)
1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

LAME MP3 Encoding

LAME is an MP3 encoder licensed under the LGPL. This test measures the time required to encode a WAV file to MP3 format. Learn more via the OpenBenchmarking.org test page.

LAME MP3 Encoding 3.100 - WAV To MP3 (Seconds, fewer is better)
  EPYC 7F52:    7.933 (SE +/- 0.004, N = 3; Min: 7.93 / Avg: 7.93 / Max: 7.94)
  Linux 5.10.3: 7.939 (SE +/- 0.008, N = 3; Min: 7.93 / Avg: 7.94 / Max: 7.95)
1. (CC) gcc options: -O3 -ffast-math -funroll-loops -fschedule-insns2 -fbranch-count-reg -fforce-addr -pipe -lncurses -lm

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.
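The Windowed Gaussian detector measured below is one of NAB's simplest: it models recent values with a sliding-window normal distribution and scores each new point by how far it falls in the tails. A minimal sketch of that idea (the window size and exact scoring function here are illustrative assumptions, not NAB's precise implementation):

```python
import math
from collections import deque

def windowed_gaussian_scores(series, window=32):
    """Anomaly score per point: 1 minus the two-sided normal tail probability
    of the point under a Gaussian fitted to the preceding window, so higher
    means more anomalous."""
    history = deque(maxlen=window)
    scores = []
    for x in series:
        if len(history) < 2:
            scores.append(0.0)  # not enough data to fit a distribution yet
        else:
            mean = sum(history) / len(history)
            var = sum((v - mean) ** 2 for v in history) / (len(history) - 1)
            std = math.sqrt(var) or 1e-9  # guard against a constant window
            z = abs(x - mean) / std
            # two-sided tail probability via the complementary error function
            tail = math.erfc(z / math.sqrt(2.0))
            scores.append(1.0 - tail)
        history.append(x)
    return scores

# A flat series with one spike: the spike should score near 1.0.
data = [10.0] * 40 + [50.0] + [10.0] * 10
scores = windowed_gaussian_scores(data)
```

The benchmark times how long running a detector like this over NAB's full corpus takes, so the result is dominated by per-point scoring throughput.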

Numenta Anomaly Benchmark 1.1 - Detector: Windowed Gaussian (Seconds, fewer is better)
  Linux 5.10.3: 7.446 (SE +/- 0.027, N = 3; Min: 7.39 / Avg: 7.45 / Max: 7.48)
  EPYC 7F52:    7.530 (SE +/- 0.023, N = 3; Min: 7.48 / Avg: 7.53 / Max: 7.56)

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks are a suite for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: air (Seconds, fewer is better)
  EPYC 7F52:    1.77
  Linux 5.10.3: 1.77

ASTC Encoder

ASTC Encoder (astcenc) is an encoder for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile times both compression and decompression. Learn more via the OpenBenchmarking.org test page.
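ASTC's defining property is that every compressed block occupies exactly 128 bits regardless of its footprint, so the bitrate is set purely by the chosen block dimensions. A quick sketch of the resulting bits-per-pixel:

```python
def astc_bits_per_pixel(block_w, block_h):
    """ASTC stores every block in 128 bits, so bpp = 128 / pixels per block."""
    return 128 / (block_w * block_h)

# 4x4 is the highest-bitrate 2D footprint; 8x8 is a common aggressive setting.
print(astc_bits_per_pixel(4, 4))  # 8.0 bpp
print(astc_bits_per_pixel(8, 8))  # 2.0 bpp
```

The encoder presets benchmarked here (Fast, Medium, etc.) trade search effort for quality at a fixed bitrate, which is why they affect runtime rather than output size.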

ASTC Encoder 2.0 - Preset: Medium (Seconds, fewer is better)
  EPYC 7F52:    6.89 (SE +/- 0.01, N = 3; Min: 6.88 / Avg: 6.89 / Max: 6.92)
  Linux 5.10.3: 6.91 (SE +/- 0.01, N = 3; Min: 6.9 / Avg: 6.91 / Max: 6.92)
1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.7.0 - Video Input: Summer Nature 1080p (FPS, more is better)
  Linux 5.10.3: 541.83 (SE +/- 2.05, N = 3; Min: 537.75 / Avg: 541.83 / Max: 544.28; MIN: 374.84 / MAX: 590.34)
  EPYC 7F52:    533.80 (SE +/- 1.44, N = 3; Min: 531.39 / Avg: 533.8 / Max: 536.38; MIN: 341.27 / MAX: 581.44)
1. (CC) gcc options: -pthread

oneDNN

oneDNN 2.0 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  EPYC 7F52:    5.55682 (SE +/- 0.07217, N = 3; Min: 5.41 / Avg: 5.56 / Max: 5.63; MIN: 5.13)
  Linux 5.10.3: 6.22877 (SE +/- 0.01174, N = 3; Min: 6.21 / Avg: 6.23 / Max: 6.25; MIN: 6.14)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better)
  EPYC 7F52:    3.29403 (SE +/- 0.01620, N = 3; Min: 3.28 / Avg: 3.29 / Max: 3.33; MIN: 3.12)
  Linux 5.10.3: 4.78691 (SE +/- 0.02803, N = 3; Min: 4.74 / Avg: 4.79 / Max: 4.84; MIN: 4.62)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Kvazaar

Kvazaar 2.0 - Video Input: Bosphorus 1080p - Video Preset: Ultra Fast (Frames Per Second, more is better)
  Linux 5.10.3: 110.36 (SE +/- 0.45, N = 3; Min: 109.57 / Avg: 110.36 / Max: 111.14)
  EPYC 7F52:    105.12 (SE +/- 0.58, N = 3; Min: 104.16 / Avg: 105.12 / Max: 106.18)
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

ASTC Encoder

ASTC Encoder 2.0 - Preset: Fast (Seconds, fewer is better)
  EPYC 7F52:    5.35 (SE +/- 0.01, N = 3; Min: 5.33 / Avg: 5.35 / Max: 5.36)
  Linux 5.10.3: 5.35 (SE +/- 0.00, N = 3; Min: 5.35 / Avg: 5.35 / Max: 5.35)
1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

x264

This is a simple test of the x264 encoder run on the CPU (OpenCL support disabled) with a sample video file. Learn more via the OpenBenchmarking.org test page.

x264 2019-12-17 - H.264 Video Encoding (Frames Per Second, more is better)
  Linux 5.10.3: 163.65 (SE +/- 0.78, N = 3; Min: 162.08 / Avg: 163.65 / Max: 164.44)
  EPYC 7F52:    162.77 (SE +/- 1.00, N = 3; Min: 160.93 / Avg: 162.77 / Max: 164.39)
1. (CC) gcc options: -ldl -lavformat -lavcodec -lavutil -lswscale -m64 -lm -lpthread -O3 -ffast-math -std=gnu99 -fPIC -fomit-frame-pointer -fno-tree-vectorize

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.1 - Tuning: Visual Quality Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better)
  Linux 5.10.3: 212.07 (SE +/- 0.66, N = 3; Min: 210.75 / Avg: 212.07 / Max: 212.84)
  EPYC 7F52:    203.98 (SE +/- 1.03, N = 3; Min: 202.84 / Avg: 203.98 / Max: 206.04)
1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

oneDNN

oneDNN 2.0 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  EPYC 7F52:    2.86890 (SE +/- 0.00136, N = 3; Min: 2.87 / Avg: 2.87 / Max: 2.87; MIN: 2.83)
  Linux 5.10.3: 2.97264 (SE +/- 0.00487, N = 3; Min: 2.97 / Avg: 2.97 / Max: 2.98; MIN: 2.93)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

SVT-VP9

SVT-VP9 0.1 - Tuning: VMAF Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better)
  Linux 5.10.3: 255.72 (SE +/- 2.05, N = 3; Min: 251.68 / Avg: 255.72 / Max: 258.4)
  EPYC 7F52:    248.38 (SE +/- 0.71, N = 3; Min: 247.12 / Avg: 248.38 / Max: 249.58)
1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

SVT-VP9 0.1 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better)
  Linux 5.10.3: 264.01 (SE +/- 0.75, N = 3; Min: 263.04 / Avg: 264.01 / Max: 265.49)
  EPYC 7F52:    252.18 (SE +/- 0.99, N = 3; Min: 250.21 / Avg: 252.18 / Max: 253.27)
1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

WebP Image Encode

WebP Image Encode 1.1 - Encode Settings: Quality 100 (Encode Time - Seconds, fewer is better)
  Linux 5.10.3: 2.491 (SE +/- 0.000, N = 3; Min: 2.49 / Avg: 2.49 / Max: 2.49)
  EPYC 7F52:    2.498 (SE +/- 0.001, N = 3; Min: 2.5 / Avg: 2.5 / Max: 2.5)
1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

FFTE

FFTE is a package by Daisuke Takahashi to compute Discrete Fourier Transforms of 1-, 2-, and 3-dimensional sequences of length (2^p)*(3^q)*(5^r). Learn more via the OpenBenchmarking.org test page.
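Per the description above, FFTE handles transform lengths of the form (2^p)*(3^q)*(5^r), i.e. 5-smooth numbers. A small helper to check whether a given length qualifies:

```python
def is_ffte_length(n):
    """True if n factors as (2^p)*(3^q)*(5^r), i.e. n is 5-smooth."""
    if n < 1:
        return False
    for prime in (2, 3, 5):
        while n % prime == 0:
            n //= prime
    return n == 1

print(is_ffte_length(256))  # True: 2^8, the N used in this benchmark
print(is_ffte_length(100))  # True: 2^2 * 5^2
print(is_ffte_length(7))    # False
```

Restricting lengths to small prime factors is what lets FFT libraries like FFTE use only highly optimized radix-2/3/5 butterfly kernels.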

FFTE 7.0 - N=256, 3D Complex FFT Routine (MFLOPS, more is better)
  Linux 5.10.3: 100233.06 (SE +/- 226.53, N = 3; Min: 99913.34 / Avg: 100233.06 / Max: 100670.9)
  EPYC 7F52:    99888.44 (SE +/- 100.09, N = 3; Min: 99724.72 / Avg: 99888.44 / Max: 100070.07)
1. (F9X) gfortran options: -O3 -fomit-frame-pointer -fopenmp

WebP Image Encode

WebP Image Encode 1.1 - Encode Settings: Default (Encode Time - Seconds, fewer is better)
  EPYC 7F52:    1.618 (SE +/- 0.001, N = 3; Min: 1.62 / Avg: 1.62 / Max: 1.62)
  Linux 5.10.3: 1.618 (SE +/- 0.001, N = 3; Min: 1.62 / Avg: 1.62 / Max: 1.62)
1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

301 Results Shown

PlaidML
OpenVKL
Polyhedron Fortran Benchmarks
Numenta Anomaly Benchmark
PlaidML
LAMMPS Molecular Dynamics Simulator
PlaidML
Polyhedron Fortran Benchmarks
Mobile Neural Network:
  inception-v3
  mobilenet-v1-1.0
  MobileNetV2_224
  resnet-v2-50
  SqueezeNetV1.0
Timed Clash Compilation
BRL-CAD
AI Benchmark Alpha:
  Device AI Score
  Device Training Score
  Device Inference Score
Open Porous Media
Caffe
OpenVKL
Blender
Open Porous Media
Polyhedron Fortran Benchmarks
ECP-CANDLE
NCNN:
  CPU - regnety_400m
  CPU - squeezenet_ssd
  CPU - yolov4-tiny
  CPU - resnet50
  CPU - alexnet
  CPU - resnet18
  CPU - vgg16
  CPU - googlenet
  CPU - blazeface
  CPU - efficientnet-b0
  CPU - mnasnet
  CPU - shufflenet-v2
  CPU-v3-v3 - mobilenet-v3
  CPU-v2-v2 - mobilenet-v2
  CPU - mobilenet
WireGuard + Linux Networking Stack Stress Test
Darmstadt Automotive Parallel Heterogeneous Suite
Blender
Numpy Benchmark
Blender
Hierarchical INTegration
PlaidML
ECP-CANDLE
Open Porous Media:
  Flow MPI Norne - 8
  Flow MPI Norne - 2
Polyhedron Fortran Benchmarks:
  channel2
  mp_prop_design
Monte Carlo Simulations of Ionised Nebulae
Caffe
Polyhedron Fortran Benchmarks
SVT-AV1
Open Porous Media
asmFish
PlaidML
Caffe
Tachyon
GPAW
Stress-NG
YafaRay
Timed HMMer Search
PostgreSQL pgbench:
  1 - 250 - Read Write - Average Latency
  1 - 250 - Read Write
BYTE Unix Benchmark
dav1d
PlaidML
ASTC Encoder
Blender
LZ4 Compression:
  3 - Decompression Speed
  3 - Compression Speed
PlaidML
Timed GDB GNU Debugger Compilation
Node.js V8 Web Tooling Benchmark
Stress-NG
InfluxDB
Mlpack Benchmark
VP9 libvpx Encoding
Timed Eigen Compilation
Blender
PyPerformance
Polyhedron Fortran Benchmarks
GROMACS
oneDNN:
  Recurrent Neural Network Training - u8s8f32 - CPU
  Recurrent Neural Network Training - f32 - CPU
  Recurrent Neural Network Training - bf16bf16bf16 - CPU
Numenta Anomaly Benchmark
TensorFlow Lite
InfluxDB
PostgreSQL pgbench:
  1 - 50 - Read Only - Average Latency
  1 - 50 - Read Only
Build2
oneDNN:
  Recurrent Neural Network Inference - f32 - CPU
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU
  Recurrent Neural Network Inference - u8s8f32 - CPU
simdjson
Caffe
PyPerformance
KeyDB
Mlpack Benchmark
TensorFlow Lite
OpenVINO:
  Person Detection 0106 FP16 - CPU:
    ms
    FPS
  Face Detection 0106 FP16 - CPU:
    ms
    FPS
  Face Detection 0106 FP32 - CPU:
    ms
    FPS
  Person Detection 0106 FP32 - CPU:
    ms
    FPS
Chaos Group V-RAY
SQLite Speedtest
OpenVKL
Mlpack Benchmark
Polyhedron Fortran Benchmarks:
  capacita
  rnflow
NAMD
oneDNN
LuxCoreRender:
  Rainbow Colors and Prism
  DLSC
IndigoBench:
  CPU - Supercar
  CPU - Bedroom
TensorFlow Lite:
  SqueezeNet
  NASNet Mobile
PyPerformance
TensorFlow Lite:
  Mobilenet Quant
  Mobilenet Float
Timed Linux Kernel Compilation
OpenVINO:
  Age Gender Recognition Retail 0013 FP16 - CPU:
    ms
    FPS
  Age Gender Recognition Retail 0013 FP32 - CPU:
    ms
    FPS
John The Ripper
GraphicsMagick:
  Sharpen
  Noise-Gaussian
  Enhanced
  Rotate
  Swirl
  Resizing
  HWB Color Space
Kvazaar
LZ4 Compression:
  9 - Decompression Speed
  9 - Compression Speed
Kvazaar
OpenVKL
simdjson
rav1e
Redis
rav1e
simdjson
DeepSpeech
simdjson
Redis
Hugin
PyPerformance
Polyhedron Fortran Benchmarks
Zstd Compression
Embree:
  Pathtracer ISPC - Asian Dragon Obj
  Pathtracer ISPC - Asian Dragon
  Pathtracer - Asian Dragon Obj
CLOMP
Stockfish
eSpeak-NG Speech Engine
rav1e
PyPerformance
WebP Image Encode
PlaidML
PyPerformance
7-Zip Compression
Timed FFmpeg Compilation
AOM AV1
Embree
PHPBench
LibRaw
Redis
AOM AV1
Embree
LZ4 Compression:
  1 - Decompression Speed
  1 - Compression Speed
Polyhedron Fortran Benchmarks
Stress-NG
John The Ripper
Aircrack-ng
Stress-NG:
  NUMA
  Malloc
  MEMFD
  Matrix Math
  Crypto
  Glibc C String Functions
  Context Switching
  SENDFILE
  Memory Copying
  Vector Math
  MMAP
  Socket Activity
  Forking
  Atomic
  Glibc Qsort Data Sorting
  Semaphores
Zstd Compression
PyPerformance
Embree
AOM AV1
x265
rav1e
PyPerformance
Darmstadt Automotive Parallel Heterogeneous Suite
Polyhedron Fortran Benchmarks
Numenta Anomaly Benchmark
Unpacking Firefox
Mlpack Benchmark
VP9 libvpx Encoding
Crafty
PostgreSQL pgbench:
  1 - 100 - Read Write - Average Latency
  1 - 100 - Read Write
  1 - 250 - Read Only - Average Latency
  1 - 250 - Read Only
  1 - 50 - Read Write - Average Latency
  1 - 50 - Read Write
  1 - 100 - Read Only - Average Latency
  1 - 100 - Read Only
  1 - 1 - Read Only - Average Latency
  1 - 1 - Read Only
  1 - 1 - Read Write - Average Latency
  1 - 1 - Read Write
Sunflow Rendering System
librsvg
Kvazaar
PyPerformance
WavPack Audio Encoding
PyPerformance:
  chaos
  float
Timed Apache Compilation
Polyhedron Fortran Benchmarks
Coremark
Polyhedron Fortran Benchmarks
PyPerformance
oneDNN
XZ Compression
PyPerformance
Monkey Audio Encoding
Ogg Audio Encoding
AOM AV1
Timed MPlayer Compilation
Darmstadt Automotive Parallel Heterogeneous Suite
oneDNN
RNNoise
OpenSSL
OCRMyPDF
dav1d
TNN
ASTC Encoder
TNN
AOM AV1
WebP Image Encode
Kvazaar:
  Bosphorus 1080p - Slow
  Bosphorus 1080p - Medium
SVT-AV1
Redis
dav1d
Polyhedron Fortran Benchmarks
oneDNN
Kvazaar
Numenta Anomaly Benchmark
FLAC Audio Encoding
Intel Open Image Denoise
Opus Codec Encoding
ECP-CANDLE
GNU Octave Benchmark
oneDNN:
  Matrix Multiply Batch Shapes Transformer - f32 - CPU
  Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU
LibreOffice
LAMMPS Molecular Dynamics Simulator
Redis
x265
SVT-AV1
oneDNN:
  IP Shapes 3D - u8s8f32 - CPU
  IP Shapes 3D - f32 - CPU
  Deconvolution Batch shapes_3d - f32 - CPU
Timed MAFFT Alignment
Kvazaar
WebP Image Encode
LAME MP3 Encoding
Numenta Anomaly Benchmark
Polyhedron Fortran Benchmarks
ASTC Encoder
dav1d
oneDNN:
  Convolution Batch Shapes Auto - u8s8f32 - CPU
  Convolution Batch Shapes Auto - f32 - CPU
Kvazaar
ASTC Encoder
x264
SVT-VP9
oneDNN
SVT-VP9:
  VMAF Optimized - Bosphorus 1080p
  PSNR/SSIM Optimized - Bosphorus 1080p
WebP Image Encode
FFTE
WebP Image Encode