AMD EPYC 7F52

AMD EPYC 7F52 16-Core testing with a Supermicro H11DSi-NT v2.00 (2.1 BIOS) and llvmpipe on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2012294-HA-AMDEPYC7F75
Test categories covered by this result file:

Audio Encoding 6 Tests
AV1 4 Tests
Bioinformatics 2 Tests
BLAS (Basic Linear Algebra Sub-Routine) Tests 2 Tests
C++ Boost Tests 2 Tests
Chess Test Suite 3 Tests
Timed Code Compilation 7 Tests
C/C++ Compiler Tests 31 Tests
Compression Tests 4 Tests
CPU Massive 38 Tests
Creator Workloads 37 Tests
Cryptography 3 Tests
Database Test Suite 5 Tests
Encoding 15 Tests
Fortran Tests 4 Tests
Game Development 4 Tests
HPC - High Performance Computing 25 Tests
Imaging 5 Tests
Common Kernel Benchmarks 5 Tests
Machine Learning 15 Tests
Molecular Dynamics 3 Tests
MPI Benchmarks 4 Tests
Multi-Core 39 Tests
NVIDIA GPU Compute 8 Tests
Intel oneAPI 5 Tests
OpenCV Tests 2 Tests
OpenMPI Tests 5 Tests
Productivity 3 Tests
Programmer / Developer System Benchmarks 12 Tests
Python 4 Tests
Raytracing 2 Tests
Renderers 6 Tests
Scientific Computing 9 Tests
Server 9 Tests
Server CPU Tests 20 Tests
Single-Threaded 11 Tests
Speech 3 Tests
Telephony 3 Tests
Video Encoding 9 Tests
Common Workstation Benchmarks 3 Tests

Result identifiers, test dates, and run durations:
EPYC 7F52: December 27 2020 (1 Day, 3 Hours, 59 Minutes)
Linux 5.10.3: December 28 2020 (1 Day, 3 Hours, 47 Minutes)
Average run duration: 1 Day, 3 Hours, 53 Minutes


AMD EPYC 7F52 Benchmarks - System Details (OpenBenchmarking.org / Phoronix Test Suite)

Processor: AMD EPYC 7F52 16-Core @ 3.50GHz (16 Cores / 32 Threads)
Motherboard: Supermicro H11DSi-NT v2.00 (2.1 BIOS)
Chipset: AMD Starship/Matisse
Memory: 64GB
Disk: 280GB INTEL SSDPE21D280GA
Graphics: llvmpipe
Monitor: VE228
Network: 2 x Intel 10G X550T
OS: Ubuntu 20.04
Kernels: 5.8.0-050800rc6daily20200721-generic (x86_64) 20200720 and 5.10.3-051003-generic (x86_64)
Desktop: GNOME Shell 3.36.1
Display Server: X Server 1.20.8
Display Driver: modesetting 1.20.8
OpenGL: 3.3 Mesa 20.0.4 (LLVM 9.0.1 128 bits)
Compiler: GCC 9.3.0
File-System: ext4
Screen Resolution: 1920x1080

System Logs:
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: acpi-cpufreq ondemand (Boost: Enabled)
- CPU Microcode: 0x8301034
- OpenJDK Runtime Environment (build 11.0.7+10-post-Ubuntu-3ubuntu1)
- Python 2.7.18rc1 + Python 3.8.2
- Security mitigations: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

EPYC 7F52 vs. Linux 5.10.3 Comparison (Phoronix Test Suite): per-test percentage differences between the two runs. The largest swings were in Polyhedron tfft2 (164.5%), oneDNN IP Shapes 3D - u8s8f32 - CPU (73%), Redis LPOP (56%), and several other oneDNN convolution/deconvolution and Stress-NG workloads; most of the remaining tests differed by roughly 2-8% or less.

AMD EPYC 7F52 Benchmarks: condensed summary table listing every test run (7-Zip, AI Benchmark, Aircrack-ng, AOM AV1, oneDNN, Redis, Stress-NG, PostgreSQL pgbench, and the rest) with the corresponding values for the EPYC 7F52 and Linux 5.10.3 configurations. The individual results are broken out per test below.

7-Zip Compression

This is a test of 7-Zip using p7zip with its integrated benchmark feature or upstream 7-Zip for the Windows x64 build. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 16.02 - Compress Speed Test (MIPS, more is better):
  EPYC 7F52: 106803 (SE +/- 904.09, N = 3; Min: 105145 / Avg: 106802.67 / Max: 108257)
  Linux 5.10.3: 108209 (SE +/- 140.42, N = 3; Min: 107998 / Avg: 108209 / Max: 108475)
  1. (CXX) g++ options: -pipe -lpthread
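The per-run spread reported throughout this file can be tied back to the "SE +/-" figures. Below is a minimal Python sketch, under the assumption that the Phoronix Test Suite reports the standard error as the sample standard deviation divided by sqrt(N), and that with three runs the middle value can be recovered from the average; applied to the EPYC 7F52 7-Zip result above it reproduces the reported 904.09.

# Minimal sketch: reproduce the reported "SE +/-" value for the EPYC 7F52
# 7-Zip result from its Min/Avg/Max over N = 3 runs. Assumption: SE is the
# sample standard deviation divided by sqrt(N).
import math

n = 3
run_min, run_avg, run_max = 105145, 106802.67, 108257

# With three runs, the middle value is recoverable from the average.
run_mid = n * run_avg - run_min - run_max
runs = [run_min, run_mid, run_max]

mean = sum(runs) / n
sample_var = sum((x - mean) ** 2 for x in runs) / (n - 1)
standard_error = math.sqrt(sample_var) / math.sqrt(n)

print(f"middle run ~ {run_mid:.0f}, SE ~ {standard_error:.2f}")  # ~904, matching SE +/- 904.09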

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.
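For reference, the scores below are typically produced along the following lines. This is a hedged sketch assuming the ai-benchmark PyPI package (pip install ai-benchmark) and an installed TensorFlow; the exact result attribute names may differ between package versions.

# Hedged sketch of running AI Benchmark Alpha from Python.
from ai_benchmark import AIBenchmark

benchmark = AIBenchmark()
results = benchmark.run()  # runs both the inference and training workloads

# The three figures reported below correspond to these scores
# (attribute names assumed; check the installed package version).
print("Device Inference Score:", results.inference_score)
print("Device Training Score:", results.training_score)
print("Device AI Score:", results.ai_score)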

AI Benchmark Alpha 0.1.2 - Device Inference Score (Score, more is better): EPYC 7F52: 1768; Linux 5.10.3: 1773

AI Benchmark Alpha 0.1.2 - Device Training Score (Score, more is better): EPYC 7F52: 1421; Linux 5.10.3: 1434

AI Benchmark Alpha 0.1.2 - Device AI Score (Score, more is better): EPYC 7F52: 3189; Linux 5.10.3: 3207

Aircrack-ng

Aircrack-ng is a tool for assessing WiFi/WLAN network security. Learn more via the OpenBenchmarking.org test page.

Aircrack-ng 1.5.2 (k/s, more is better):
  EPYC 7F52: 56912.74 (SE +/- 72.75, N = 3; Min: 56771 / Avg: 56912.74 / Max: 57012.05)
  Linux 5.10.3: 56766.00 (SE +/- 97.98, N = 3; Min: 56667.44 / Avg: 56766 / Max: 56961.95)
  1. (CXX) g++ options: -O3 -fvisibility=hidden -masm=intel -fcommon -rdynamic -lpthread -lz -lcrypto -lhwloc -ldl -lm -pthread

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0 - Encoder Mode: Speed 0 Two-Pass (Frames Per Second, more is better):
  EPYC 7F52: 0.31 (SE +/- 0.00, N = 3; Min: 0.31 / Avg: 0.31 / Max: 0.31)
  Linux 5.10.3: 0.30 (SE +/- 0.00, N = 3; Min: 0.3 / Avg: 0.3 / Max: 0.31)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

AOM AV1 2.0 - Encoder Mode: Speed 4 Two-Pass (Frames Per Second, more is better):
  EPYC 7F52: 2.42 (SE +/- 0.00, N = 3; Min: 2.41 / Avg: 2.42 / Max: 2.42)
  Linux 5.10.3: 2.41 (SE +/- 0.00, N = 3; Min: 2.4 / Avg: 2.41 / Max: 2.41)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

AOM AV1 2.0 - Encoder Mode: Speed 6 Realtime (Frames Per Second, more is better):
  EPYC 7F52: 19.16 (SE +/- 0.09, N = 3; Min: 18.99 / Avg: 19.16 / Max: 19.31)
  Linux 5.10.3: 19.40 (SE +/- 0.08, N = 3; Min: 19.27 / Avg: 19.4 / Max: 19.55)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

AOM AV1 2.0 - Encoder Mode: Speed 6 Two-Pass (Frames Per Second, more is better):
  EPYC 7F52: 3.74 (SE +/- 0.01, N = 3; Min: 3.73 / Avg: 3.74 / Max: 3.75)
  Linux 5.10.3: 3.75 (SE +/- 0.01, N = 3; Min: 3.74 / Avg: 3.75 / Max: 3.77)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

AOM AV1 2.0 - Encoder Mode: Speed 8 Realtime (Frames Per Second, more is better):
  EPYC 7F52: 34.12 (SE +/- 0.23, N = 3; Min: 33.66 / Avg: 34.12 / Max: 34.36)
  Linux 5.10.3: 33.98 (SE +/- 0.06, N = 3; Min: 33.86 / Avg: 33.98 / Max: 34.04)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

asmFish

This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.

asmFish 2018-07-23 - 1024 Hash Memory, 26 Depth (Nodes/second, more is better):
  EPYC 7F52: 46240797 (SE +/- 250444.78, N = 3; Min: 45902281 / Avg: 46240796.67 / Max: 46729778)
  Linux 5.10.3: 46441653 (SE +/- 434223.37, N = 3; Min: 45963345 / Avg: 46441653.33 / Max: 47308554)

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0 - Preset: Fast (Seconds, fewer is better):
  EPYC 7F52: 5.35 (SE +/- 0.01, N = 3; Min: 5.33 / Avg: 5.35 / Max: 5.36)
  Linux 5.10.3: 5.35 (SE +/- 0.00, N = 3; Min: 5.35 / Avg: 5.35 / Max: 5.35)
  1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

ASTC Encoder 2.0 - Preset: Medium (Seconds, fewer is better):
  EPYC 7F52: 6.89 (SE +/- 0.01, N = 3; Min: 6.88 / Avg: 6.89 / Max: 6.92)
  Linux 5.10.3: 6.91 (SE +/- 0.01, N = 3; Min: 6.9 / Avg: 6.91 / Max: 6.92)
  1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

ASTC Encoder 2.0 - Preset: Thorough (Seconds, fewer is better):
  EPYC 7F52: 13.79 (SE +/- 0.01, N = 3; Min: 13.77 / Avg: 13.79 / Max: 13.81)
  Linux 5.10.3: 13.79 (SE +/- 0.01, N = 3; Min: 13.77 / Avg: 13.79 / Max: 13.8)
  1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

ASTC Encoder 2.0 - Preset: Exhaustive (Seconds, fewer is better):
  EPYC 7F52: 108.83 (SE +/- 0.13, N = 3; Min: 108.57 / Avg: 108.83 / Max: 108.98)
  Linux 5.10.3: 108.81 (SE +/- 0.12, N = 3; Min: 108.56 / Avg: 108.81 / Max: 108.94)
  1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.90 - Blend File: BMW27 - Compute: CPU-Only (Seconds, fewer is better):
  EPYC 7F52: 83.52 (SE +/- 0.25, N = 3; Min: 83.17 / Avg: 83.52 / Max: 84.01)
  Linux 5.10.3: 82.89 (SE +/- 0.05, N = 3; Min: 82.79 / Avg: 82.89 / Max: 82.94)

Blender 2.90 - Blend File: Classroom - Compute: CPU-Only (Seconds, fewer is better):
  EPYC 7F52: 239.80 (SE +/- 0.22, N = 3; Min: 239.43 / Avg: 239.8 / Max: 240.2)
  Linux 5.10.3: 239.22 (SE +/- 0.12, N = 3; Min: 238.98 / Avg: 239.22 / Max: 239.34)

Blender 2.90 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, fewer is better):
  EPYC 7F52: 108.00 (SE +/- 0.29, N = 3; Min: 107.64 / Avg: 108 / Max: 108.57)
  Linux 5.10.3: 108.33 (SE +/- 0.08, N = 3; Min: 108.24 / Avg: 108.33 / Max: 108.48)

Blender 2.90 - Blend File: Barbershop - Compute: CPU-Only (Seconds, fewer is better):
  EPYC 7F52: 354.80 (SE +/- 0.32, N = 3; Min: 354.16 / Avg: 354.8 / Max: 355.15)
  Linux 5.10.3: 355.80 (SE +/- 0.30, N = 3; Min: 355.43 / Avg: 355.8 / Max: 356.39)

Blender 2.90 - Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds, fewer is better):
  EPYC 7F52: 266.54 (SE +/- 1.48, N = 3; Min: 264.9 / Avg: 266.54 / Max: 269.5)
  Linux 5.10.3: 266.70 (SE +/- 0.33, N = 3; Min: 266.1 / Avg: 266.7 / Max: 267.25)

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with a built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.30.8 - VGR Performance Metric (more is better):
  EPYC 7F52: 245516
  Linux 5.10.3: 242323
  1. (CXX) g++ options: -std=c++11 -pipe -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -rdynamic -lSM -lICE -lXi -lGLU -lGL -lGLdispatch -lX11 -lXext -lXrender -lpthread -ldl -luuid -lm

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code with Cargo-like features. Learn more via the OpenBenchmarking.org test page.

Build2 0.13 - Time To Compile (Seconds, fewer is better):
  EPYC 7F52: 75.26 (SE +/- 0.10, N = 3; Min: 75.08 / Avg: 75.26 / Max: 75.42)
  Linux 5.10.3: 75.80 (SE +/- 0.35, N = 3; Min: 75.14 / Avg: 75.8 / Max: 76.35)

BYTE Unix Benchmark

This is a test of the BYTE Unix Benchmark. Learn more via the OpenBenchmarking.org test page.

BYTE Unix Benchmark 3.6 - Computational Test: Dhrystone 2 (LPS, more is better):
  EPYC 7F52: 41016145.2 (SE +/- 336403.00, N = 3; Min: 40568980.6 / Avg: 41016145.2 / Max: 41675082.3)
  Linux 5.10.3: 41240132.3 (SE +/- 142195.68, N = 3; Min: 40957050.5 / Avg: 41240132.33 / Max: 41405281.3)

Caffe

This is a benchmark of the Caffe deep learning framework and currently supports the AlexNet and GoogleNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

Caffe 2020-02-13 - Model: AlexNet - Acceleration: CPU - Iterations: 100 (Milli-Seconds, fewer is better):
  EPYC 7F52: 71667 (SE +/- 136.23, N = 3; Min: 71479 / Avg: 71667.33 / Max: 71932)
  Linux 5.10.3: 72605 (SE +/- 793.90, N = 3; Min: 71557 / Avg: 72605 / Max: 74162)
  1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Caffe 2020-02-13 - Model: AlexNet - Acceleration: CPU - Iterations: 200 (Milli-Seconds, fewer is better):
  EPYC 7F52: 143622 (SE +/- 359.19, N = 3; Min: 143151 / Avg: 143621.67 / Max: 144327)
  Linux 5.10.3: 144039 (SE +/- 381.87, N = 3; Min: 143399 / Avg: 144039.33 / Max: 144720)
  1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Caffe 2020-02-13 - Model: GoogleNet - Acceleration: CPU - Iterations: 100 (Milli-Seconds, fewer is better):
  EPYC 7F52: 181652 (SE +/- 222.17, N = 3; Min: 181404 / Avg: 181651.67 / Max: 182095)
  Linux 5.10.3: 181008 (SE +/- 172.36, N = 3; Min: 180707 / Avg: 181008 / Max: 181304)
  1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Caffe 2020-02-13 - Model: GoogleNet - Acceleration: CPU - Iterations: 200 (Milli-Seconds, fewer is better):
  EPYC 7F52: 363998 (SE +/- 75.84, N = 3; Min: 363905 / Avg: 363997.67 / Max: 364148)
  Linux 5.10.3: 362841 (SE +/- 312.30, N = 3; Min: 362407 / Avg: 362841 / Max: 363447)
  1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Chaos Group V-RAY

This is a test of Chaos Group's V-RAY benchmark. Learn more via the OpenBenchmarking.org test page.

Chaos Group V-RAY 4.10.07 - Mode: CPU (Ksamples, more is better):
  EPYC 7F52: 27334 (SE +/- 255.78, N = 3; Min: 27026 / Avg: 27334.33 / Max: 27842)
  Linux 5.10.3: 27110 (SE +/- 33.00, N = 3; Min: 27077 / Avg: 27110 / Max: 27176)

CLOMP

CLOMP is the C version of the Livermore OpenMP benchmark developed to measure OpenMP overheads and other performance impacts due to threading in order to influence future system designs. This particular test profile configuration is currently set to look at the OpenMP static schedule speed-up across all available CPU cores using the recommended test configuration. Learn more via the OpenBenchmarking.org test page.

CLOMP 1.2 - Static OMP Speedup (Speedup, more is better):
  EPYC 7F52: 50.1 (SE +/- 0.21, N = 3; Min: 49.8 / Avg: 50.1 / Max: 50.5)
  Linux 5.10.3: 50.1 (SE +/- 0.09, N = 3; Min: 49.9 / Avg: 50.07 / Max: 50.2)
  1. (CC) gcc options: -fopenmp -O3 -lm

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

Coremark 1.0 - CoreMark Size 666 - Iterations Per Second (Iterations/Sec, more is better):
  EPYC 7F52: 688169.93 (SE +/- 1877.36, N = 3; Min: 685702.04 / Avg: 688169.93 / Max: 691854.49)
  Linux 5.10.3: 694463.10 (SE +/- 3903.98, N = 3; Min: 688468.16 / Avg: 694463.1 / Max: 701792.86)
  1. (CC) gcc options: -O2 -lrt" -lrt

Crafty

This is a performance test of Crafty, an advanced open-source chess engine. Learn more via the OpenBenchmarking.org test page.

Crafty 25.2 - Elapsed Time (Nodes Per Second, more is better):
  EPYC 7F52: 7776189 (SE +/- 19965.32, N = 3; Min: 7739919 / Avg: 7776189 / Max: 7808788)
  Linux 5.10.3: 7633247 (SE +/- 30182.11, N = 3; Min: 7576064 / Avg: 7633247.33 / Max: 7678585)
  1. (CC) gcc options: -pthread -lstdc++ -fprofile-use -lm

Darmstadt Automotive Parallel Heterogeneous Suite

DAPHNE is the Darmstadt Automotive Parallel HeterogeNEous benchmark suite, with OpenCL / CUDA / OpenMP test cases for evaluating programming models in the context of autonomous-driving workloads. Learn more via the OpenBenchmarking.org test page.

Darmstadt Automotive Parallel Heterogeneous Suite - Backend: OpenMP - Kernel: NDT Mapping (Test Cases Per Minute, more is better):
  EPYC 7F52: 977.65 (SE +/- 3.32, N = 3; Min: 971.61 / Avg: 977.65 / Max: 983.06)
  Linux 5.10.3: 969.38 (SE +/- 6.61, N = 3; Min: 957.7 / Avg: 969.38 / Max: 980.58)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp

Darmstadt Automotive Parallel Heterogeneous Suite - Backend: OpenMP - Kernel: Points2Image (Test Cases Per Minute, more is better):
  EPYC 7F52: 22093.91 (SE +/- 156.61, N = 15; Min: 21280.29 / Avg: 22093.91 / Max: 23658.56)
  Linux 5.10.3: 21297.66 (SE +/- 123.09, N = 3; Min: 21056.25 / Avg: 21297.66 / Max: 21460.15)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp

Darmstadt Automotive Parallel Heterogeneous Suite - Backend: OpenMP - Kernel: Euclidean Cluster (Test Cases Per Minute, more is better):
  EPYC 7F52: 1090.64 (SE +/- 1.60, N = 3; Min: 1087.77 / Avg: 1090.64 / Max: 1093.31)
  Linux 5.10.3: 1085.36 (SE +/- 1.78, N = 3; Min: 1083.3 / Avg: 1085.36 / Max: 1088.91)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.7.0 - Video Input: Chimera 1080p (FPS, more is better):
  EPYC 7F52: 574.78 (SE +/- 1.17, N = 3; MIN: 454.24 / MAX: 710.14; run Min: 572.44 / Avg: 574.78 / Max: 576.02)
  Linux 5.10.3: 581.26 (SE +/- 1.10, N = 3; MIN: 460.79 / MAX: 716.22; run Min: 579.17 / Avg: 581.26 / Max: 582.88)
  1. (CC) gcc options: -pthread

dav1d 0.7.0 - Video Input: Summer Nature 4K (FPS, more is better):
  EPYC 7F52: 227.67 (SE +/- 0.89, N = 3; MIN: 160.75 / MAX: 250.13; run Min: 226.22 / Avg: 227.67 / Max: 229.29)
  Linux 5.10.3: 227.34 (SE +/- 0.24, N = 3; MIN: 166.45 / MAX: 246.55; run Min: 226.89 / Avg: 227.34 / Max: 227.72)
  1. (CC) gcc options: -pthread

dav1d 0.7.0 - Video Input: Summer Nature 1080p (FPS, more is better):
  EPYC 7F52: 533.80 (SE +/- 1.44, N = 3; MIN: 341.27 / MAX: 581.44; run Min: 531.39 / Avg: 533.8 / Max: 536.38)
  Linux 5.10.3: 541.83 (SE +/- 2.05, N = 3; MIN: 374.84 / MAX: 590.34; run Min: 537.75 / Avg: 541.83 / Max: 544.28)
  1. (CC) gcc options: -pthread

dav1d 0.7.0 - Video Input: Chimera 1080p 10-bit (FPS, more is better):
  EPYC 7F52: 110.64 (SE +/- 0.05, N = 3; MIN: 74.39 / MAX: 217.07; run Min: 110.56 / Avg: 110.64 / Max: 110.74)
  Linux 5.10.3: 111.44 (SE +/- 0.07, N = 3; MIN: 74.8 / MAX: 220.43; run Min: 111.29 / Avg: 111.44 / Max: 111.52)
  1. (CC) gcc options: -pthread

DeepSpeech

Mozilla DeepSpeech is a speech-to-text engine powered by TensorFlow for machine learning and derived from Baidu's Deep Speech research paper. This test profile times the speech-to-text process for a roughly three minute audio recording. Learn more via the OpenBenchmarking.org test page.
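The timed workload is essentially "load a pre-trained model, feed it a WAV file, and measure the wall-clock transcription time." A hedged Python sketch follows, assuming the deepspeech Python package and a released model file; the exact Model() and stt() signatures differ slightly between DeepSpeech releases, and the file names here are illustrative only.

# Hedged sketch of timing CPU speech-to-text with DeepSpeech.
import time
import wave
import numpy as np
from deepspeech import Model

MODEL_PATH = "deepspeech-model.pbmm"   # hypothetical path to a released model file
AUDIO_PATH = "recording.wav"           # ~3 minute 16 kHz mono 16-bit WAV

model = Model(MODEL_PATH)              # some releases also take a beam-width argument here

with wave.open(AUDIO_PATH, "rb") as wav:
    audio = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)

start = time.time()
text = model.stt(audio)                # CPU inference, the step this profile times
print(f"Transcribed in {time.time() - start:.2f} seconds")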

DeepSpeech 0.6 - Acceleration: CPU (Seconds, fewer is better):
  EPYC 7F52: 68.29 (SE +/- 0.20, N = 3; Min: 67.9 / Avg: 68.29 / Max: 68.51)
  Linux 5.10.3: 68.17 (SE +/- 0.08, N = 3; Min: 68.03 / Avg: 68.17 / Max: 68.3)

ECP-CANDLE

The CANDLE benchmark codes implement deep learning architectures relevant to problems in cancer. These architectures address problems at different biological scales, specifically problems at the molecular, cellular and population scales. Learn more via the OpenBenchmarking.org test page.

ECP-CANDLE 0.3 - Benchmark: P1B2 (Seconds, fewer is better): EPYC 7F52: 38.59; Linux 5.10.3: 37.72

ECP-CANDLE 0.3 - Benchmark: P3B1 (Seconds, fewer is better): EPYC 7F52: 648.70; Linux 5.10.3: 662.32

ECP-CANDLE 0.3 - Benchmark: P3B2 (Seconds, fewer is better): EPYC 7F52: 899.42; Linux 5.10.3: 896.05

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer - Model: Crown (Frames Per Second, more is better):
  EPYC 7F52: 19.78 (SE +/- 0.07, N = 3; MIN: 19.53 / MAX: 20.18; run Min: 19.65 / Avg: 19.78 / Max: 19.87)
  Linux 5.10.3: 19.53 (SE +/- 0.08, N = 3; MIN: 19.27 / MAX: 19.84; run Min: 19.38 / Avg: 19.53 / Max: 19.64)

Embree 3.9.0 - Binary: Pathtracer ISPC - Model: Crown (Frames Per Second, more is better):
  EPYC 7F52: 18.77 (SE +/- 0.12, N = 3; MIN: 18.46 / MAX: 19.39; run Min: 18.59 / Avg: 18.77 / Max: 18.99)
  Linux 5.10.3: 18.62 (SE +/- 0.16, N = 3; MIN: 17.94 / MAX: 19.11; run Min: 18.31 / Avg: 18.62 / Max: 18.81)

Embree 3.9.0 - Binary: Pathtracer - Model: Asian Dragon (Frames Per Second, more is better):
  EPYC 7F52: 20.97 (SE +/- 0.05, N = 3; MIN: 20.82 / MAX: 22.27; run Min: 20.88 / Avg: 20.97 / Max: 21.06)
  Linux 5.10.3: 21.06 (SE +/- 0.21, N = 3; MIN: 20.53 / MAX: 22.46; run Min: 20.65 / Avg: 21.06 / Max: 21.31)

Embree 3.9.0 - Binary: Pathtracer - Model: Asian Dragon Obj (Frames Per Second, more is better):
  EPYC 7F52: 20.41 (SE +/- 0.11, N = 3; MIN: 19.42 / MAX: 20.8; run Min: 20.19 / Avg: 20.41 / Max: 20.55)
  Linux 5.10.3: 20.42 (SE +/- 0.03, N = 3; MIN: 19.57 / MAX: 20.77; run Min: 20.36 / Avg: 20.42 / Max: 20.47)

Embree 3.9.0 - Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second, more is better):
  EPYC 7F52: 21.18 (SE +/- 0.20, N = 6; MIN: 20.68 / MAX: 22.95; run Min: 20.83 / Avg: 21.18 / Max: 22.14)
  Linux 5.10.3: 20.89 (SE +/- 0.03, N = 3; MIN: 20.71 / MAX: 22.1; run Min: 20.85 / Avg: 20.89 / Max: 20.96)

Embree 3.9.0 - Binary: Pathtracer ISPC - Model: Asian Dragon Obj (Frames Per Second, more is better):
  EPYC 7F52: 19.64 (SE +/- 0.03, N = 3; MIN: 18.92 / MAX: 19.94; run Min: 19.6 / Avg: 19.64 / Max: 19.69)
  Linux 5.10.3: 19.59 (SE +/- 0.04, N = 3; MIN: 18.78 / MAX: 19.96; run Min: 19.51 / Avg: 19.59 / Max: 19.66)

eSpeak-NG Speech Engine

This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.

eSpeak-NG Speech Engine 20200907 - Text-To-Speech Synthesis (Seconds, fewer is better):
  EPYC 7F52: 30.78 (SE +/- 0.26, N = 4; Min: 30.21 / Avg: 30.78 / Max: 31.23)
  Linux 5.10.3: 31.05 (SE +/- 0.07, N = 4; Min: 30.86 / Avg: 31.05 / Max: 31.18)
  1. (CC) gcc options: -O2 -std=c99

FFTE

FFTE is a package by Daisuke Takahashi to compute Discrete Fourier Transforms of 1-, 2- and 3- dimensional sequences of length (2^p)*(3^q)*(5^r). Learn more via the OpenBenchmarking.org test page.
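A small illustration of that length constraint: a transform size must factor entirely into powers of 2, 3, and 5. The N=256 size used in this benchmark is simply 2^8. The helper below is a sketch of the check, not part of FFTE itself.

# Check whether a transform length satisfies FFTE's (2^p)*(3^q)*(5^r) constraint.
def is_ffte_length(n: int) -> bool:
    """Return True if n has no prime factors other than 2, 3, and 5."""
    if n < 1:
        return False
    for p in (2, 3, 5):
        while n % p == 0:
            n //= p
    return n == 1

for n in (256, 360, 257):
    print(n, is_ffte_length(n))   # 256 -> True, 360 -> True, 257 -> False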

FFTE 7.0 - N=256, 3D Complex FFT Routine (MFLOPS, more is better):
  EPYC 7F52: 99888.44 (SE +/- 100.09, N = 3; Min: 99724.72 / Avg: 99888.44 / Max: 100070.07)
  Linux 5.10.3: 100233.06 (SE +/- 226.53, N = 3; Min: 99913.34 / Avg: 100233.06 / Max: 100670.9)
  1. (F9X) gfortran options: -O3 -fomit-frame-pointer -fopenmp

FLAC Audio Encoding

This test times how long it takes to encode a sample WAV file to FLAC format five times. Learn more via the OpenBenchmarking.org test page.

FLAC Audio Encoding 1.3.2 - WAV To FLAC (Seconds, fewer is better):
  EPYC 7F52: 8.562 (SE +/- 0.013, N = 5; Min: 8.51 / Avg: 8.56 / Max: 8.59)
  Linux 5.10.3: 8.610 (SE +/- 0.005, N = 5; Min: 8.6 / Avg: 8.61 / Max: 8.63)
  1. (CXX) g++ options: -O2 -fvisibility=hidden -logg -lm

GNU Octave Benchmark

This test profile measures how long it takes to complete several reference GNU Octave files via octave-benchmark. GNU Octave is used for numerical computations and is an open-source alternative to MATLAB. Learn more via the OpenBenchmarking.org test page.

GNU Octave Benchmark 5.2.0 (Seconds, fewer is better):
  EPYC 7F52: 7.402 (SE +/- 0.044, N = 5; Min: 7.33 / Avg: 7.4 / Max: 7.57)
  Linux 5.10.3: 7.492 (SE +/- 0.050, N = 5; Min: 7.35 / Avg: 7.49 / Max: 7.6)

GPAW

GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.
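For orientation, a GPAW run is driven from Python through ASE. The sketch below is a hedged, minimal example using a tiny stand-in system rather than the Carbon Nanotube input of this test profile; molecule(), GPAW(), and get_potential_energy() are standard ASE/GPAW entry points, but the specific parameters are illustrative assumptions.

# Hedged sketch of a small GPAW/ASE DFT calculation.
from ase.build import molecule
from gpaw import GPAW

atoms = molecule("H2O")          # tiny stand-in system, not the benchmark input
atoms.center(vacuum=3.0)         # place the molecule in a padded box

atoms.calc = GPAW(mode="lcao", xc="PBE", txt="gpaw.log")
energy = atoms.get_potential_energy()   # runs the self-consistent DFT loop
print(f"Total energy: {energy:.3f} eV")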

GPAW 20.1 - Input: Carbon Nanotube (Seconds, fewer is better):
  EPYC 7F52: 117.38 (SE +/- 1.45, N = 4; Min: 114.48 / Avg: 117.38 / Max: 120.52)
  Linux 5.10.3: 114.13 (SE +/- 0.05, N = 3; Min: 114.03 / Avg: 114.13 / Max: 114.19)
  1. (CC) gcc options: -pthread -shared -fwrapv -O2 -lxc -lblas -lmpi

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Swirl (Iterations Per Minute, more is better):
  EPYC 7F52: 896 (SE +/- 1.20, N = 3; Min: 894 / Avg: 895.67 / Max: 898)
  Linux 5.10.3: 895 (SE +/- 0.33, N = 3; Min: 895 / Avg: 895.33 / Max: 896)
  1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

GraphicsMagick 1.3.33 - Operation: Rotate (Iterations Per Minute, more is better):
  EPYC 7F52: 619 (SE +/- 5.81, N = 3; Min: 608 / Avg: 618.67 / Max: 628)
  Linux 5.10.3: 614 (SE +/- 4.04, N = 3; Min: 606 / Avg: 614 / Max: 619)
  1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

GraphicsMagick 1.3.33 - Operation: Sharpen (Iterations Per Minute, more is better):
  EPYC 7F52: 235 (SE +/- 0.33, N = 3; Min: 235 / Avg: 235.33 / Max: 236)
  Linux 5.10.3: 235 (SE +/- 0.33, N = 3; Min: 235 / Avg: 235.33 / Max: 236)
  1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

GraphicsMagick 1.3.33 - Operation: Enhanced (Iterations Per Minute, more is better):
  EPYC 7F52: 374 (SE +/- 0.33, N = 3; Min: 374 / Avg: 374.33 / Max: 375)
  Linux 5.10.3: 374 (SE +/- 0.33, N = 3; Min: 374 / Avg: 374.33 / Max: 375)
  1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

GraphicsMagick 1.3.33 - Operation: Resizing (Iterations Per Minute, more is better):
  EPYC 7F52: 1597 (SE +/- 18.67, N = 3; Min: 1560 / Avg: 1597.33 / Max: 1616)
  Linux 5.10.3: 1591 (SE +/- 9.84, N = 3; Min: 1573 / Avg: 1590.67 / Max: 1607)
  1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

GraphicsMagick 1.3.33 - Operation: Noise-Gaussian (Iterations Per Minute, more is better):
  EPYC 7F52: 419 (SE +/- 0.33, N = 3; Min: 418 / Avg: 418.67 / Max: 419)
  Linux 5.10.3: 428 (SE +/- 0.33, N = 3; Min: 427 / Avg: 427.67 / Max: 428)
  1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

GraphicsMagick 1.3.33 - Operation: HWB Color Space (Iterations Per Minute, more is better):
  EPYC 7F52: 1171 (SE +/- 1.33, N = 3; Min: 1168 / Avg: 1170.67 / Max: 1172)
  Linux 5.10.3: 1253 (SE +/- 1.86, N = 3; Min: 1249 / Avg: 1252.67 / Max: 1255)
  1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

GROMACS

This test runs the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package on the CPU with the water_GMX50 data set. Learn more via the OpenBenchmarking.org test page.

GROMACS 2020.3 - Water Benchmark (Ns Per Day, more is better):
  EPYC 7F52: 2.365 (SE +/- 0.003, N = 3; Min: 2.36 / Avg: 2.36 / Max: 2.37)
  Linux 5.10.3: 2.374 (SE +/- 0.002, N = 3; Min: 2.37 / Avg: 2.37 / Max: 2.38)
  1. (CXX) g++ options: -O3 -pthread -lrt -lpthread -lm

Hierarchical INTegration

This test runs the U.S. Department of Energy's Ames Laboratory Hierarchical INTegration (HINT) benchmark. Learn more via the OpenBenchmarking.org test page.

Hierarchical INTegration 1.0 - Test: FLOAT (QUIPs, more is better):
  EPYC 7F52: 347415262.13 (SE +/- 82650.29, N = 3; Min: 347294652.45 / Avg: 347415262.13 / Max: 347573460.73)
  Linux 5.10.3: 347380647.36 (SE +/- 37054.68, N = 3; Min: 347320726.74 / Avg: 347380647.36 / Max: 347448373.9)
  1. (CC) gcc options: -O3 -march=native -lm

Hugin

Hugin is an open-source, cross-platform panorama photo stitcher software package. This test profile times how long it takes to run the assistant and panorama photo stitching on a set of images. Learn more via the OpenBenchmarking.org test page.

Hugin - Panorama Photo Assistant + Stitching Time (Seconds, fewer is better):
  EPYC 7F52: 50.70 (SE +/- 0.07, N = 3; Min: 50.58 / Avg: 50.7 / Max: 50.83)
  Linux 5.10.3: 50.68 (SE +/- 0.25, N = 3; Min: 50.2 / Avg: 50.68 / Max: 51.03)

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 - Acceleration: CPU - Scene: Bedroom (M samples/s, more is better):
  EPYC 7F52: 3.577 (SE +/- 0.009, N = 3; Min: 3.57 / Avg: 3.58 / Max: 3.59)
  Linux 5.10.3: 3.577 (SE +/- 0.002, N = 3; Min: 3.57 / Avg: 3.58 / Max: 3.58)

IndigoBench 4.4 - Acceleration: CPU - Scene: Supercar (M samples/s, more is better):
  EPYC 7F52: 7.761 (SE +/- 0.012, N = 3; Min: 7.74 / Avg: 7.76 / Max: 7.78)
  Linux 5.10.3: 7.731 (SE +/- 0.008, N = 3; Min: 7.72 / Avg: 7.73 / Max: 7.74)

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.
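To make the workload parameters concrete (concurrent streams, batch size, tags, points per series), here is a hedged sketch of what a batched, tagged point write looks like from Python using the influxdb client library. The benchmark itself drives the Go-based InfluxDB Inch tool rather than this client, and the host, database, and tag names below are illustrative assumptions.

# Hedged sketch of batched, tagged point writes against InfluxDB.
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="benchmark")

points = [
    {
        "measurement": "m0",
        "tags": {"tag0": f"value-{i % 2}", "tag1": f"value-{i % 5000}"},
        "fields": {"v0": float(i)},
    }
    for i in range(10_000)          # one batch of 10,000 points, as in the test
]

# write_points() submits the batch; the benchmark issues such batches from
# 4 or 64 concurrent streams.
client.write_points(points, batch_size=10_000)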

InfluxDB 1.8.2 - Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, more is better):
  EPYC 7F52: 1211752.0 (SE +/- 1586.66, N = 3; Min: 1209943.2 / Avg: 1211751.97 / Max: 1214914.4)
  Linux 5.10.3: 1198820.3 (SE +/- 1744.82, N = 3; Min: 1196305.6 / Avg: 1198820.33 / Max: 1202173)

InfluxDB 1.8.2 - Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, more is better):
  EPYC 7F52: 1425736.1 (SE +/- 1494.65, N = 3; Min: 1423811.1 / Avg: 1425736.13 / Max: 1428679.2)
  Linux 5.10.3: 1419536.4 (SE +/- 2047.76, N = 3; Min: 1416392.8 / Avg: 1419536.43 / Max: 1423381.6)

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray-tracing and part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgImages / Sec, More Is BetterIntel Open Image Denoise 1.2.0Scene: MemorialEPYC 7F52Linux 5.10.348121620SE +/- 0.04, N = 3SE +/- 0.04, N = 314.2214.22
OpenBenchmarking.orgImages / Sec, More Is BetterIntel Open Image Denoise 1.2.0Scene: MemorialEPYC 7F52Linux 5.10.348121620Min: 14.15 / Avg: 14.22 / Max: 14.3Min: 14.14 / Avg: 14.22 / Max: 14.29

John The Ripper

This is a benchmark of John The Ripper, a password cracker. Learn more via the OpenBenchmarking.org test page.
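The Blowfish and MD5 results below come from John The Ripper's built-in self-benchmark. A minimal sketch of invoking it from Python follows; it assumes a jumbo build of john is on the PATH and that the format names used are available in that build (an assumption).

import subprocess

# Run john's built-in benchmark for selected hash formats.
# Format names ("bcrypt", "Raw-MD5") are assumptions based on common jumbo builds.
for fmt in ("bcrypt", "Raw-MD5"):
    subprocess.run(["john", "--test", f"--format={fmt}"], check=True)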

OpenBenchmarking.orgReal C/S, More Is BetterJohn The Ripper 1.9.0-jumbo-1Test: BlowfishEPYC 7F52Linux 5.10.36K12K18K24K30KSE +/- 5.78, N = 3SE +/- 6.17, N = 326390263971. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -pthread -lm -lz -ldl -lcrypt -lbz2
OpenBenchmarking.orgReal C/S, More Is BetterJohn The Ripper 1.9.0-jumbo-1Test: BlowfishEPYC 7F52Linux 5.10.35K10K15K20K25KMin: 26380 / Avg: 26390.33 / Max: 26400Min: 26390 / Avg: 26396.67 / Max: 264091. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -pthread -lm -lz -ldl -lcrypt -lbz2

OpenBenchmarking.orgReal C/S, More Is BetterJohn The Ripper 1.9.0-jumbo-1Test: MD5EPYC 7F52Linux 5.10.3400K800K1200K1600K2000KSE +/- 2962.73, N = 3SE +/- 3179.80, N = 3172633317286671. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -pthread -lm -lz -ldl -lcrypt -lbz2
OpenBenchmarking.orgReal C/S, More Is BetterJohn The Ripper 1.9.0-jumbo-1Test: MD5EPYC 7F52Linux 5.10.3300K600K900K1200K1500KMin: 1722000 / Avg: 1726333.33 / Max: 1732000Min: 1725000 / Avg: 1728666.67 / Max: 17350001. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -pthread -lm -lz -ldl -lcrypt -lbz2

KeyDB

A benchmark of KeyDB, a multi-threaded fork of the Redis server. The KeyDB benchmark is conducted using memtier-benchmark. Learn more via the OpenBenchmarking.org test page.
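KeyDB speaks the Redis protocol, so any Redis client can exercise it. The sketch below is a much simpler stand-in for memtier-benchmark: it assumes a KeyDB server on localhost:6379 and the redis-py package, and simply times a burst of SET/GET operations on a single connection.

import time
import redis  # pip install redis

r = redis.Redis(host="localhost", port=6379)  # assumes a local KeyDB instance

n = 100_000
start = time.perf_counter()
for i in range(n):
    r.set(f"key:{i}", "value")
    r.get(f"key:{i}")
elapsed = time.perf_counter() - start
print(f"{2 * n / elapsed:,.0f} ops/sec (single connection, unpipelined)")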

OpenBenchmarking.orgOps/sec, More Is BetterKeyDB 6.0.16EPYC 7F52Linux 5.10.390K180K270K360K450KSE +/- 3437.31, N = 3SE +/- 1060.81, N = 3432105.73424609.601. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.orgOps/sec, More Is BetterKeyDB 6.0.16EPYC 7F52Linux 5.10.370K140K210K280K350KMin: 427512.92 / Avg: 432105.73 / Max: 438832.13Min: 422868.39 / Avg: 424609.6 / Max: 4265301. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Kvazaar

This is a test of Kvazaar, a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and was developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.
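The preset and resolution in each result title below map onto Kvazaar command-line options. A rough sketch of one such encode driven from Python follows; the input file name is a placeholder, and the option names are assumptions that should be checked against kvazaar --help for the installed version.

import subprocess, time

start = time.perf_counter()
subprocess.run([
    "kvazaar",
    "-i", "bosphorus_1080p.yuv",      # placeholder raw YUV input
    "--input-res", "1920x1080",
    "--preset", "slow",               # or medium / veryfast / ultrafast
    "-o", "out.hevc",
], check=True)
print(f"encode wall time: {time.perf_counter() - start:.1f}s")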

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 4K - Video Preset: SlowEPYC 7F52Linux 5.10.33691215SE +/- 0.01, N = 3SE +/- 0.01, N = 310.0610.101. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 4K - Video Preset: SlowEPYC 7F52Linux 5.10.33691215Min: 10.04 / Avg: 10.06 / Max: 10.08Min: 10.09 / Avg: 10.1 / Max: 10.121. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 4K - Video Preset: MediumEPYC 7F52Linux 5.10.33691215SE +/- 0.01, N = 3SE +/- 0.01, N = 310.2410.311. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 4K - Video Preset: MediumEPYC 7F52Linux 5.10.33691215Min: 10.23 / Avg: 10.24 / Max: 10.26Min: 10.3 / Avg: 10.31 / Max: 10.331. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 1080p - Video Preset: SlowEPYC 7F52Linux 5.10.3816243240SE +/- 0.02, N = 3SE +/- 0.02, N = 335.0535.351. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 1080p - Video Preset: SlowEPYC 7F52Linux 5.10.3816243240Min: 35.03 / Avg: 35.05 / Max: 35.09Min: 35.32 / Avg: 35.35 / Max: 35.381. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 1080p - Video Preset: MediumEPYC 7F52Linux 5.10.3816243240SE +/- 0.02, N = 3SE +/- 0.15, N = 335.9736.271. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 1080p - Video Preset: MediumEPYC 7F52Linux 5.10.3816243240Min: 35.93 / Avg: 35.97 / Max: 36.01Min: 36.09 / Avg: 36.27 / Max: 36.571. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 4K - Video Preset: Very FastEPYC 7F52Linux 5.10.3612182430SE +/- 0.03, N = 3SE +/- 0.02, N = 324.4424.561. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 4K - Video Preset: Very FastEPYC 7F52Linux 5.10.3612182430Min: 24.39 / Avg: 24.44 / Max: 24.48Min: 24.52 / Avg: 24.56 / Max: 24.581. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 4K - Video Preset: Ultra FastEPYC 7F52Linux 5.10.3918273645SE +/- 0.06, N = 3SE +/- 0.05, N = 340.4141.271. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 4K - Video Preset: Ultra FastEPYC 7F52Linux 5.10.3918273645Min: 40.3 / Avg: 40.41 / Max: 40.47Min: 41.2 / Avg: 41.27 / Max: 41.361. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 1080p - Video Preset: Very FastEPYC 7F52Linux 5.10.31632486480SE +/- 0.10, N = 3SE +/- 0.30, N = 368.3971.051. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 1080p - Video Preset: Very FastEPYC 7F52Linux 5.10.31428425670Min: 68.19 / Avg: 68.39 / Max: 68.53Min: 70.58 / Avg: 71.05 / Max: 71.611. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 1080p - Video Preset: Ultra FastEPYC 7F52Linux 5.10.320406080100SE +/- 0.58, N = 3SE +/- 0.45, N = 3105.12110.361. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 1080p - Video Preset: Ultra FastEPYC 7F52Linux 5.10.320406080100Min: 104.16 / Avg: 105.12 / Max: 106.18Min: 109.57 / Avg: 110.36 / Max: 111.141. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

LAME MP3 Encoding

LAME is an MP3 encoder licensed under the LGPL. This test measures the time required to encode a WAV file to MP3 format. Learn more via the OpenBenchmarking.org test page.
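A minimal sketch of the same kind of timed WAV-to-MP3 encode, assuming the lame binary is installed and sample.wav is a placeholder input file:

import subprocess
import time

start = time.perf_counter()
subprocess.run(["lame", "sample.wav", "sample.mp3"], check=True)
print(f"WAV to MP3 encode time: {time.perf_counter() - start:.3f}s")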

OpenBenchmarking.orgSeconds, Fewer Is BetterLAME MP3 Encoding 3.100WAV To MP3EPYC 7F52Linux 5.10.3246810SE +/- 0.004, N = 3SE +/- 0.008, N = 37.9337.9391. (CC) gcc options: -O3 -ffast-math -funroll-loops -fschedule-insns2 -fbranch-count-reg -fforce-addr -pipe -lncurses -lm
OpenBenchmarking.orgSeconds, Fewer Is BetterLAME MP3 Encoding 3.100WAV To MP3EPYC 7F52Linux 5.10.33691215Min: 7.93 / Avg: 7.93 / Max: 7.94Min: 7.93 / Avg: 7.94 / Max: 7.951. (CC) gcc options: -O3 -ffast-math -funroll-loops -fschedule-insns2 -fbranch-count-reg -fforce-addr -pipe -lncurses -lm

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgns/day, More Is BetterLAMMPS Molecular Dynamics Simulator 29Oct2020Model: 20k AtomsEPYC 7F52Linux 5.10.33691215SE +/- 0.01, N = 3SE +/- 0.01, N = 312.2712.261. (CXX) g++ options: -O3 -pthread -lm
OpenBenchmarking.orgns/day, More Is BetterLAMMPS Molecular Dynamics Simulator 29Oct2020Model: 20k AtomsEPYC 7F52Linux 5.10.348121620Min: 12.27 / Avg: 12.27 / Max: 12.28Min: 12.24 / Avg: 12.26 / Max: 12.281. (CXX) g++ options: -O3 -pthread -lm

OpenBenchmarking.orgns/day, More Is BetterLAMMPS Molecular Dynamics Simulator 29Oct2020Model: Rhodopsin ProteinEPYC 7F52Linux 5.10.33691215SE +/- 0.16, N = 15SE +/- 0.24, N = 1511.6811.521. (CXX) g++ options: -O3 -pthread -lm
OpenBenchmarking.orgns/day, More Is BetterLAMMPS Molecular Dynamics Simulator 29Oct2020Model: Rhodopsin ProteinEPYC 7F52Linux 5.10.33691215Min: 9.99 / Avg: 11.68 / Max: 12.23Min: 9.44 / Avg: 11.52 / Max: 12.231. (CXX) g++ options: -O3 -pthread -lm

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMpix/sec, More Is BetterLibRaw 0.20Post-Processing BenchmarkEPYC 7F52Linux 5.10.3918273645SE +/- 0.04, N = 3SE +/- 0.07, N = 338.1838.561. (CXX) g++ options: -O2 -fopenmp -ljpeg -lz -lm
OpenBenchmarking.orgMpix/sec, More Is BetterLibRaw 0.20Post-Processing BenchmarkEPYC 7F52Linux 5.10.3816243240Min: 38.09 / Avg: 38.18 / Max: 38.23Min: 38.45 / Avg: 38.56 / Max: 38.691. (CXX) g++ options: -O2 -fopenmp -ljpeg -lz -lm

LibreOffice

Various benchmarking operations with the LibreOffice open-source office suite. Learn more via the OpenBenchmarking.org test page.
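The documents-to-PDF operation can be reproduced with LibreOffice's headless conversion mode. A sketch follows, assuming soffice is on the PATH and docs/ is a placeholder directory holding the input documents:

import glob
import subprocess
import time

files = glob.glob("docs/*.odt")  # placeholder input set
start = time.perf_counter()
subprocess.run(
    ["soffice", "--headless", "--convert-to", "pdf", "--outdir", "pdf_out"] + files,
    check=True,
)
print(f"converted {len(files)} documents in {time.perf_counter() - start:.2f}s")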

OpenBenchmarking.orgSeconds, Fewer Is BetterLibreOfficeTest: 20 Documents To PDFEPYC 7F52Linux 5.10.3246810SE +/- 0.077, N = 5SE +/- 0.029, N = 57.1587.2081. LibreOffice 6.4.3.2 40(Build:2)
OpenBenchmarking.orgSeconds, Fewer Is BetterLibreOfficeTest: 20 Documents To PDFEPYC 7F52Linux 5.10.33691215Min: 7.04 / Avg: 7.16 / Max: 7.46Min: 7.15 / Avg: 7.21 / Max: 7.321. LibreOffice 6.4.3.2 40(Build:2)

librsvg

RSVG/librsvg is an SVG vector graphics library. This test profile times how long it takes to complete various operations by rsvg-convert. Learn more via the OpenBenchmarking.org test page.
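A minimal sketch of timing rsvg-convert over a directory of SVG files, assuming rsvg-convert is installed and svgs/ is a placeholder input directory:

import glob
import subprocess
import time

start = time.perf_counter()
for svg in glob.glob("svgs/*.svg"):
    subprocess.run(["rsvg-convert", "-o", svg + ".png", svg], check=True)
print(f"SVG to PNG conversion time: {time.perf_counter() - start:.2f}s")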

OpenBenchmarking.orgSeconds, Fewer Is BetterlibrsvgOperation: SVG Files To PNGEPYC 7F52Linux 5.10.3612182430SE +/- 0.05, N = 3SE +/- 0.11, N = 324.2125.081. rsvg-convert version 2.48.2
OpenBenchmarking.orgSeconds, Fewer Is BetterlibrsvgOperation: SVG Files To PNGEPYC 7F52Linux 5.10.3612182430Min: 24.11 / Avg: 24.21 / Max: 24.3Min: 24.94 / Avg: 25.08 / Max: 25.291. rsvg-convert version 2.48.2

LuxCoreRender

LuxCoreRender is an open-source physically based renderer. This test profile is focused on running LuxCoreRender on the CPU as opposed to the OpenCL version. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgM samples/sec, More Is BetterLuxCoreRender 2.3Scene: DLSCEPYC 7F52Linux 5.10.30.73581.47162.20742.94323.679SE +/- 0.01, N = 3SE +/- 0.01, N = 33.273.27MIN: 3.12 / MAX: 3.42MIN: 3.17 / MAX: 3.42
OpenBenchmarking.orgM samples/sec, More Is BetterLuxCoreRender 2.3Scene: DLSCEPYC 7F52Linux 5.10.3246810Min: 3.26 / Avg: 3.27 / Max: 3.28Min: 3.25 / Avg: 3.27 / Max: 3.29

OpenBenchmarking.orgM samples/sec, More Is BetterLuxCoreRender 2.3Scene: Rainbow Colors and PrismEPYC 7F52Linux 5.10.30.78751.5752.36253.153.9375SE +/- 0.01, N = 3SE +/- 0.01, N = 33.493.50MIN: 3.42 / MAX: 3.52MIN: 3.43 / MAX: 3.52
OpenBenchmarking.orgM samples/sec, More Is BetterLuxCoreRender 2.3Scene: Rainbow Colors and PrismEPYC 7F52Linux 5.10.3246810Min: 3.48 / Avg: 3.49 / Max: 3.5Min: 3.48 / Avg: 3.5 / Max: 3.52

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.
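A similar compress/decompress cycle can be sketched with the python-lz4 bindings (frame format) rather than the reference CLI; this assumes the lz4 package is installed and ubuntu.iso is a placeholder sample file, so the throughput it reports is only illustrative.

import time
import lz4.frame  # pip install lz4

data = open("ubuntu.iso", "rb").read()  # placeholder sample file

start = time.perf_counter()
compressed = lz4.frame.compress(data, compression_level=1)  # also try 3 or 9
c_time = time.perf_counter() - start

start = time.perf_counter()
lz4.frame.decompress(compressed)
d_time = time.perf_counter() - start

mb = len(data) / 1e6
print(f"compress: {mb / c_time:.0f} MB/s, decompress: {mb / d_time:.0f} MB/s")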

OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 1 - Compression SpeedEPYC 7F52Linux 5.10.32K4K6K8K10KSE +/- 51.25, N = 3SE +/- 59.35, N = 39947.8010009.491. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 1 - Compression SpeedEPYC 7F52Linux 5.10.32K4K6K8K10KMin: 9892.02 / Avg: 9947.8 / Max: 10050.17Min: 9891.06 / Avg: 10009.49 / Max: 10075.621. (CC) gcc options: -O3

OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 1 - Decompression SpeedEPYC 7F52Linux 5.10.32K4K6K8K10KSE +/- 42.92, N = 3SE +/- 30.69, N = 311455.811490.61. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 1 - Decompression SpeedEPYC 7F52Linux 5.10.32K4K6K8K10KMin: 11380.2 / Avg: 11455.77 / Max: 11528.8Min: 11430.3 / Avg: 11490.63 / Max: 11530.61. (CC) gcc options: -O3

OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 3 - Compression SpeedEPYC 7F52Linux 5.10.31224364860SE +/- 0.46, N = 8SE +/- 0.44, N = 353.4953.481. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 3 - Compression SpeedEPYC 7F52Linux 5.10.31122334455Min: 52.45 / Avg: 53.49 / Max: 55.11Min: 52.99 / Avg: 53.48 / Max: 54.351. (CC) gcc options: -O3

OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 3 - Decompression SpeedEPYC 7F52Linux 5.10.32K4K6K8K10KSE +/- 18.07, N = 8SE +/- 25.66, N = 310768.210815.31. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 3 - Decompression SpeedEPYC 7F52Linux 5.10.32K4K6K8K10KMin: 10730 / Avg: 10768.18 / Max: 10859.8Min: 10767.2 / Avg: 10815.33 / Max: 10854.81. (CC) gcc options: -O3

OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 9 - Compression SpeedEPYC 7F52Linux 5.10.31224364860SE +/- 0.32, N = 3SE +/- 0.46, N = 351.7852.811. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 9 - Compression SpeedEPYC 7F52Linux 5.10.31122334455Min: 51.46 / Avg: 51.78 / Max: 52.41Min: 51.9 / Avg: 52.81 / Max: 53.311. (CC) gcc options: -O3

OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 9 - Decompression SpeedEPYC 7F52Linux 5.10.32K4K6K8K10KSE +/- 40.67, N = 3SE +/- 4.82, N = 310898.410851.61. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 9 - Decompression SpeedEPYC 7F52Linux 5.10.32K4K6K8K10KMin: 10854.1 / Avg: 10898.37 / Max: 10979.6Min: 10842 / Avg: 10851.63 / Max: 10856.61. (CC) gcc options: -O3

Mlpack Benchmark

Mlpack benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.
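The scikit_* results below time scikit-learn implementations of the corresponding algorithms. A simplified sketch with synthetic data is given here; the real benchmark scripts and datasets differ, so this only illustrates the shape of the workload.

import time
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 50))
y = (X[:, 0] > 0).astype(int)

for name, fit in [
    ("ica", lambda: FastICA(n_components=20, max_iter=200).fit(X)),
    ("qda", lambda: QuadraticDiscriminantAnalysis().fit(X, y)),
    ("svm", lambda: SVC().fit(X, y)),
    ("linearridgeregression", lambda: Ridge().fit(X, y)),
]:
    start = time.perf_counter()
    fit()
    print(f"scikit_{name}: {time.perf_counter() - start:.2f}s")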

OpenBenchmarking.orgSeconds, Fewer Is BetterMlpack BenchmarkBenchmark: scikit_icaEPYC 7F52Linux 5.10.31224364860SE +/- 0.56, N = 4SE +/- 0.36, N = 352.8753.50
OpenBenchmarking.orgSeconds, Fewer Is BetterMlpack BenchmarkBenchmark: scikit_icaEPYC 7F52Linux 5.10.31122334455Min: 51.61 / Avg: 52.87 / Max: 54.33Min: 52.85 / Avg: 53.5 / Max: 54.11

OpenBenchmarking.orgSeconds, Fewer Is BetterMlpack BenchmarkBenchmark: scikit_qdaEPYC 7F52Linux 5.10.3714212835SE +/- 0.13, N = 3SE +/- 0.12, N = 330.0429.96
OpenBenchmarking.orgSeconds, Fewer Is BetterMlpack BenchmarkBenchmark: scikit_qdaEPYC 7F52Linux 5.10.3714212835Min: 29.9 / Avg: 30.04 / Max: 30.29Min: 29.72 / Avg: 29.96 / Max: 30.09

OpenBenchmarking.orgSeconds, Fewer Is BetterMlpack BenchmarkBenchmark: scikit_svmEPYC 7F52Linux 5.10.3612182430SE +/- 0.27, N = 3SE +/- 0.01, N = 323.3623.02
OpenBenchmarking.orgSeconds, Fewer Is BetterMlpack BenchmarkBenchmark: scikit_svmEPYC 7F52Linux 5.10.3510152025Min: 23.08 / Avg: 23.36 / Max: 23.9Min: 23.01 / Avg: 23.02 / Max: 23.04

OpenBenchmarking.orgSeconds, Fewer Is BetterMlpack BenchmarkBenchmark: scikit_linearridgeregressionEPYC 7F52Linux 5.10.30.38930.77861.16791.55721.9465SE +/- 0.02, N = 4SE +/- 0.01, N = 31.731.72
OpenBenchmarking.orgSeconds, Fewer Is BetterMlpack BenchmarkBenchmark: scikit_linearridgeregressionEPYC 7F52Linux 5.10.3246810Min: 1.69 / Avg: 1.73 / Max: 1.78Min: 1.7 / Avg: 1.72 / Max: 1.73

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2020-09-17Model: SqueezeNetV1.0EPYC 7F52Linux 5.10.33691215SE +/- 0.24, N = 15SE +/- 0.11, N = 1510.9310.29MIN: 9.63 / MAX: 23.96MIN: 9.72 / MAX: 23.371. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2020-09-17Model: SqueezeNetV1.0EPYC 7F52Linux 5.10.33691215Min: 9.83 / Avg: 10.93 / Max: 12.95Min: 9.79 / Avg: 10.29 / Max: 11.141. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2020-09-17Model: resnet-v2-50EPYC 7F52Linux 5.10.3816243240SE +/- 0.05, N = 15SE +/- 0.04, N = 1534.5533.96MIN: 32.75 / MAX: 67.95MIN: 32.06 / MAX: 51.841. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2020-09-17Model: resnet-v2-50EPYC 7F52Linux 5.10.3714212835Min: 34.34 / Avg: 34.55 / Max: 35.13Min: 33.53 / Avg: 33.96 / Max: 34.231. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2020-09-17Model: MobileNetV2_224EPYC 7F52Linux 5.10.3246810SE +/- 0.012, N = 15SE +/- 0.012, N = 156.2086.126MIN: 6.01 / MAX: 21MIN: 5.97 / MAX: 20.611. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2020-09-17Model: MobileNetV2_224EPYC 7F52Linux 5.10.3246810Min: 6.11 / Avg: 6.21 / Max: 6.28Min: 6.07 / Avg: 6.13 / Max: 6.211. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2020-09-17Model: mobilenet-v1-1.0EPYC 7F52Linux 5.10.3246810SE +/- 0.012, N = 15SE +/- 0.007, N = 156.5756.551MIN: 6.41 / MAX: 20.06MIN: 6.45 / MAX: 22.131. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2020-09-17Model: mobilenet-v1-1.0EPYC 7F52Linux 5.10.33691215Min: 6.48 / Avg: 6.57 / Max: 6.64Min: 6.5 / Avg: 6.55 / Max: 6.591. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2020-09-17Model: inception-v3EPYC 7F52Linux 5.10.3816243240SE +/- 0.23, N = 15SE +/- 0.18, N = 1533.5332.96MIN: 31.39 / MAX: 50.33MIN: 31.71 / MAX: 49.441. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2020-09-17Model: inception-v3EPYC 7F52Linux 5.10.3714212835Min: 32.2 / Avg: 33.53 / Max: 34.67Min: 32.35 / Avg: 32.96 / Max: 35.171. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Monkey Audio Encoding

This test times how long it takes to encode a sample WAV file to Monkey's Audio APE format. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterMonkey Audio Encoding 3.99.6WAV To APEEPYC 7F52Linux 5.10.33691215SE +/- 0.01, N = 5SE +/- 0.01, N = 512.5112.501. (CXX) g++ options: -O3 -pedantic -rdynamic -lrt
OpenBenchmarking.orgSeconds, Fewer Is BetterMonkey Audio Encoding 3.99.6WAV To APEEPYC 7F52Linux 5.10.348121620Min: 12.49 / Avg: 12.51 / Max: 12.52Min: 12.48 / Avg: 12.5 / Max: 12.521. (CXX) g++ options: -O3 -pedantic -rdynamic -lrt

Monte Carlo Simulations of Ionised Nebulae

MOCASSIN (Monte Carlo Simulations of Ionised Nebulae) is a fully 3D or 2D photoionisation and dust radiative transfer code which employs a Monte Carlo approach to the transfer of radiation through media of arbitrary geometry and density distribution. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterMonte Carlo Simulations of Ionised Nebulae 2019-03-24Input: Dust 2D tau100.0EPYC 7F52Linux 5.10.340801201602001921921. (F9X) gfortran options: -cpp -Jsource/ -ffree-line-length-0 -lm -std=legacy -O3 -O2 -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgdays/ns, Fewer Is BetterNAMD 2.14ATPase Simulation - 327,506 AtomsEPYC 7F52Linux 5.10.30.25830.51660.77491.03321.2915SE +/- 0.00082, N = 3SE +/- 0.00649, N = 31.142261.14801
OpenBenchmarking.orgdays/ns, Fewer Is BetterNAMD 2.14ATPase Simulation - 327,506 AtomsEPYC 7F52Linux 5.10.3246810Min: 1.14 / Avg: 1.14 / Max: 1.14Min: 1.14 / Avg: 1.15 / Max: 1.16

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: mobilenetEPYC 7F52Linux 5.10.3510152025SE +/- 0.15, N = 15SE +/- 0.27, N = 1219.2719.55MIN: 17.82 / MAX: 79.15MIN: 17.94 / MAX: 34.331. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: mobilenetEPYC 7F52Linux 5.10.3510152025Min: 18.58 / Avg: 19.27 / Max: 20.47Min: 18.44 / Avg: 19.55 / Max: 20.71. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU-v2-v2 - Model: mobilenet-v2EPYC 7F52Linux 5.10.3246810SE +/- 0.04, N = 15SE +/- 0.07, N = 128.498.44MIN: 7.06 / MAX: 72.31MIN: 6.92 / MAX: 12.581. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU-v2-v2 - Model: mobilenet-v2EPYC 7F52Linux 5.10.33691215Min: 8.31 / Avg: 8.49 / Max: 8.94Min: 7.86 / Avg: 8.44 / Max: 8.721. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU-v3-v3 - Model: mobilenet-v3EPYC 7F52Linux 5.10.3246810SE +/- 0.02, N = 15SE +/- 0.02, N = 127.687.73MIN: 7.21 / MAX: 12.54MIN: 7.28 / MAX: 11.831. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU-v3-v3 - Model: mobilenet-v3EPYC 7F52Linux 5.10.33691215Min: 7.54 / Avg: 7.68 / Max: 7.78Min: 7.66 / Avg: 7.73 / Max: 7.881. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: shufflenet-v2EPYC 7F52Linux 5.10.33691215SE +/- 0.02, N = 15SE +/- 0.02, N = 128.988.97MIN: 8.73 / MAX: 14.04MIN: 8.55 / MAX: 22.641. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: shufflenet-v2EPYC 7F52Linux 5.10.33691215Min: 8.85 / Avg: 8.98 / Max: 9.06Min: 8.88 / Avg: 8.97 / Max: 9.121. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: mnasnetEPYC 7F52Linux 5.10.3246810SE +/- 0.02, N = 15SE +/- 0.02, N = 127.607.60MIN: 6.99 / MAX: 10.58MIN: 7.34 / MAX: 8.931. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: mnasnetEPYC 7F52Linux 5.10.33691215Min: 7.44 / Avg: 7.6 / Max: 7.76Min: 7.47 / Avg: 7.6 / Max: 7.791. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: efficientnet-b0EPYC 7F52Linux 5.10.33691215SE +/- 0.03, N = 15SE +/- 0.03, N = 1211.0611.14MIN: 10.67 / MAX: 13.4MIN: 10.78 / MAX: 14.681. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: efficientnet-b0EPYC 7F52Linux 5.10.33691215Min: 10.91 / Avg: 11.06 / Max: 11.24Min: 11.02 / Avg: 11.14 / Max: 11.321. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: blazefaceEPYC 7F52Linux 5.10.30.83031.66062.49093.32124.1515SE +/- 0.02, N = 15SE +/- 0.02, N = 123.693.67MIN: 3.52 / MAX: 75.15MIN: 3.53 / MAX: 4.351. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: blazefaceEPYC 7F52Linux 5.10.3246810Min: 3.6 / Avg: 3.69 / Max: 3.97Min: 3.61 / Avg: 3.67 / Max: 3.791. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: googlenetEPYC 7F52Linux 5.10.348121620SE +/- 0.06, N = 15SE +/- 0.14, N = 1217.6517.70MIN: 17.22 / MAX: 117.52MIN: 17.12 / MAX: 260.941. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: googlenetEPYC 7F52Linux 5.10.348121620Min: 17.46 / Avg: 17.65 / Max: 18.17Min: 17.37 / Avg: 17.7 / Max: 19.11. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: vgg16EPYC 7F52Linux 5.10.3714212835SE +/- 0.03, N = 15SE +/- 0.04, N = 1230.1730.02MIN: 29.55 / MAX: 90.42MIN: 29.27 / MAX: 43.791. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: vgg16EPYC 7F52Linux 5.10.3714212835Min: 29.91 / Avg: 30.17 / Max: 30.49Min: 29.63 / Avg: 30.02 / Max: 30.131. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: resnet18EPYC 7F52Linux 5.10.33691215SE +/- 0.03, N = 15SE +/- 0.04, N = 1210.6910.71MIN: 10.34 / MAX: 13.84MIN: 10.34 / MAX: 64.221. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: resnet18EPYC 7F52Linux 5.10.33691215Min: 10.56 / Avg: 10.69 / Max: 10.92Min: 10.58 / Avg: 10.71 / Max: 10.961. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: alexnetEPYC 7F52Linux 5.10.3246810SE +/- 0.08, N = 15SE +/- 0.09, N = 127.037.01MIN: 6.6 / MAX: 43.31MIN: 6.57 / MAX: 10.411. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: alexnetEPYC 7F52Linux 5.10.33691215Min: 6.64 / Avg: 7.03 / Max: 7.44Min: 6.63 / Avg: 7.01 / Max: 7.331. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: resnet50EPYC 7F52Linux 5.10.3510152025SE +/- 0.05, N = 15SE +/- 0.04, N = 1221.3420.94MIN: 20.69 / MAX: 102.24MIN: 20.35 / MAX: 23.551. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: resnet50EPYC 7F52Linux 5.10.3510152025Min: 21.09 / Avg: 21.34 / Max: 21.87Min: 20.67 / Avg: 20.94 / Max: 21.051. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: yolov4-tinyEPYC 7F52Linux 5.10.3612182430SE +/- 0.13, N = 15SE +/- 0.21, N = 1225.9425.84MIN: 25.13 / MAX: 86.32MIN: 24.84 / MAX: 30.661. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: yolov4-tinyEPYC 7F52Linux 5.10.3612182430Min: 25.54 / Avg: 25.94 / Max: 26.85Min: 25.2 / Avg: 25.84 / Max: 27.611. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: squeezenet_ssdEPYC 7F52Linux 5.10.3510152025SE +/- 0.04, N = 15SE +/- 0.23, N = 1221.8921.06MIN: 21.44 / MAX: 101.39MIN: 19.62 / MAX: 77.671. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: squeezenet_ssdEPYC 7F52Linux 5.10.3510152025Min: 21.66 / Avg: 21.89 / Max: 22.23Min: 20.14 / Avg: 21.06 / Max: 21.921. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: regnety_400mEPYC 7F52Linux 5.10.31020304050SE +/- 0.18, N = 15SE +/- 0.14, N = 1244.5144.79MIN: 42.64 / MAX: 117.01MIN: 43.38 / MAX: 124.541. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: regnety_400mEPYC 7F52Linux 5.10.3918273645Min: 43.2 / Avg: 44.51 / Max: 45.32Min: 44.08 / Avg: 44.79 / Max: 45.721. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Node.js V8 Web Tooling Benchmark

Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, such as Babel, TypeScript, and Babylon. This test profile can test the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgruns/s, More Is BetterNode.js V8 Web Tooling BenchmarkEPYC 7F52Linux 5.10.33691215SE +/- 0.05, N = 3SE +/- 0.08, N = 39.279.351. Nodejs v10.19.0
OpenBenchmarking.orgruns/s, More Is BetterNode.js V8 Web Tooling BenchmarkEPYC 7F52Linux 5.10.33691215Min: 9.18 / Avg: 9.27 / Max: 9.37Min: 9.2 / Avg: 9.35 / Max: 9.491. Nodejs v10.19.0

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterNumenta Anomaly Benchmark 1.1Detector: EXPoSEEPYC 7F52Linux 5.10.32004006008001000SE +/- 2.18, N = 3SE +/- 0.49, N = 3756.88778.16
OpenBenchmarking.orgSeconds, Fewer Is BetterNumenta Anomaly Benchmark 1.1Detector: EXPoSEEPYC 7F52Linux 5.10.3140280420560700Min: 753.14 / Avg: 756.88 / Max: 760.71Min: 777.34 / Avg: 778.16 / Max: 779.04

OpenBenchmarking.orgSeconds, Fewer Is BetterNumenta Anomaly Benchmark 1.1Detector: Relative EntropyEPYC 7F52Linux 5.10.348121620SE +/- 0.02, N = 3SE +/- 0.17, N = 314.4714.48
OpenBenchmarking.orgSeconds, Fewer Is BetterNumenta Anomaly Benchmark 1.1Detector: Relative EntropyEPYC 7F52Linux 5.10.348121620Min: 14.43 / Avg: 14.47 / Max: 14.51Min: 14.21 / Avg: 14.48 / Max: 14.78

OpenBenchmarking.orgSeconds, Fewer Is BetterNumenta Anomaly Benchmark 1.1Detector: Windowed GaussianEPYC 7F52Linux 5.10.3246810SE +/- 0.023, N = 3SE +/- 0.027, N = 37.5307.446
OpenBenchmarking.orgSeconds, Fewer Is BetterNumenta Anomaly Benchmark 1.1Detector: Windowed GaussianEPYC 7F52Linux 5.10.33691215Min: 7.48 / Avg: 7.53 / Max: 7.56Min: 7.39 / Avg: 7.45 / Max: 7.48

OpenBenchmarking.orgSeconds, Fewer Is BetterNumenta Anomaly Benchmark 1.1Detector: Earthgecko SkylineEPYC 7F52Linux 5.10.320406080100SE +/- 0.72, N = 3SE +/- 0.55, N = 376.8376.75
OpenBenchmarking.orgSeconds, Fewer Is BetterNumenta Anomaly Benchmark 1.1Detector: Earthgecko SkylineEPYC 7F52Linux 5.10.31530456075Min: 75.39 / Avg: 76.83 / Max: 77.68Min: 75.7 / Avg: 76.75 / Max: 77.57

OpenBenchmarking.orgSeconds, Fewer Is BetterNumenta Anomaly Benchmark 1.1Detector: Bayesian ChangepointEPYC 7F52Linux 5.10.3714212835SE +/- 0.25, N = 3SE +/- 0.09, N = 327.2127.80
OpenBenchmarking.orgSeconds, Fewer Is BetterNumenta Anomaly Benchmark 1.1Detector: Bayesian ChangepointEPYC 7F52Linux 5.10.3612182430Min: 26.74 / Avg: 27.21 / Max: 27.58Min: 27.7 / Avg: 27.8 / Max: 27.98

Numpy Benchmark

This is a test to obtain the general Numpy performance. Learn more via the OpenBenchmarking.org test page.
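A trivial sketch of the kind of kernels such a Numpy score aggregates (the actual benchmark runs a broader mix of operations):

import time
import numpy as np

a = np.random.rand(2000, 2000)
b = np.random.rand(2000, 2000)

for name, op in [
    ("matmul", lambda: a @ b),
    ("svd", lambda: np.linalg.svd(a[:500, :500])),
    ("fft", lambda: np.fft.fft2(a)),
]:
    start = time.perf_counter()
    op()
    print(f"{name}: {time.perf_counter() - start:.3f}s")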

OpenBenchmarking.orgScore, More Is BetterNumpy BenchmarkEPYC 7F52Linux 5.10.380160240320400SE +/- 1.66, N = 3SE +/- 0.27, N = 3367.10368.45
OpenBenchmarking.orgScore, More Is BetterNumpy BenchmarkEPYC 7F52Linux 5.10.370140210280350Min: 363.81 / Avg: 367.1 / Max: 369.13Min: 368.11 / Avg: 368.45 / Max: 368.99

OCRMyPDF

OCRMyPDF adds an optical character recognition (OCR) text layer to scanned PDF files, producing new PDFs with text that is selectable, searchable, and copy-paste capable. OCRMyPDF leverages the Tesseract OCR engine and is written in Python. Learn more via the OpenBenchmarking.org test page.
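A minimal sketch of timing the same kind of OCR pass from Python, assuming ocrmypdf is installed and scanned.pdf is a placeholder input document:

import subprocess
import time

start = time.perf_counter()
subprocess.run(["ocrmypdf", "scanned.pdf", "searchable.pdf"], check=True)
print(f"OCR processing time: {time.perf_counter() - start:.2f}s")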

OpenBenchmarking.orgSeconds, Fewer Is BetterOCRMyPDF 9.6.0+dfsgProcessing 60 Page PDF DocumentEPYC 7F52Linux 5.10.3510152025SE +/- 0.07, N = 3SE +/- 0.04, N = 319.5219.55
OpenBenchmarking.orgSeconds, Fewer Is BetterOCRMyPDF 9.6.0+dfsgProcessing 60 Page PDF DocumentEPYC 7F52Linux 5.10.3510152025Min: 19.41 / Avg: 19.51 / Max: 19.64Min: 19.47 / Avg: 19.55 / Max: 19.6

Ogg Audio Encoding

This test times how long it takes to encode a sample WAV file to Ogg format using the reference Xiph.org tools/libraries. Learn more via the OpenBenchmarking.org test page.
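A minimal sketch of the timed WAV-to-Ogg encode using the reference oggenc tool, assuming it is installed and sample.wav is a placeholder input:

import subprocess
import time

start = time.perf_counter()
subprocess.run(["oggenc", "sample.wav", "-o", "sample.ogg"], check=True)
print(f"WAV to Ogg encode time: {time.perf_counter() - start:.3f}s")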

OpenBenchmarking.orgSeconds, Fewer Is BetterOgg Audio Encoding 1.3.4WAV To OggEPYC 7F52Linux 5.10.3510152025SE +/- 0.03, N = 3SE +/- 0.03, N = 320.6020.691. (CC) gcc options: -O2 -ffast-math -fsigned-char
OpenBenchmarking.orgSeconds, Fewer Is BetterOgg Audio Encoding 1.3.4WAV To OggEPYC 7F52Linux 5.10.3510152025Min: 20.56 / Avg: 20.6 / Max: 20.65Min: 20.65 / Avg: 20.69 / Max: 20.761. (CC) gcc options: -O2 -ffast-math -fsigned-char

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total performance time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: IP Shapes 1D - Data Type: f32 - Engine: CPUEPYC 7F52Linux 5.10.30.54181.08361.62542.16722.709SE +/- 0.01148, N = 3SE +/- 0.02630, N = 52.005192.40810MIN: 1.87MIN: 2.251. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: IP Shapes 1D - Data Type: f32 - Engine: CPUEPYC 7F52Linux 5.10.3246810Min: 1.98 / Avg: 2.01 / Max: 2.02Min: 2.31 / Avg: 2.41 / Max: 2.471. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: IP Shapes 3D - Data Type: f32 - Engine: CPUEPYC 7F52Linux 5.10.30.7071.4142.1212.8283.535SE +/- 0.01483, N = 3SE +/- 0.00548, N = 32.367523.14201MIN: 2.3MIN: 3.11. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: IP Shapes 3D - Data Type: f32 - Engine: CPUEPYC 7F52Linux 5.10.3246810Min: 2.34 / Avg: 2.37 / Max: 2.39Min: 3.13 / Avg: 3.14 / Max: 3.151. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPUEPYC 7F52Linux 5.10.30.34250.6851.02751.371.7125SE +/- 0.00269, N = 3SE +/- 0.00375, N = 31.511151.52220MIN: 1.48MIN: 1.491. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPUEPYC 7F52Linux 5.10.3246810Min: 1.51 / Avg: 1.51 / Max: 1.52Min: 1.52 / Avg: 1.52 / Max: 1.531. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPUEPYC 7F52Linux 5.10.30.30060.60120.90181.20241.503SE +/- 0.010953, N = 3SE +/- 0.004643, N = 30.7720581.335930MIN: 0.72MIN: 1.281. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPUEPYC 7F52Linux 5.10.3246810Min: 0.75 / Avg: 0.77 / Max: 0.79Min: 1.33 / Avg: 1.34 / Max: 1.351. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPUEPYC 7F52Linux 5.10.31.07712.15423.23134.30845.3855SE +/- 0.01620, N = 3SE +/- 0.02803, N = 33.294034.78691MIN: 3.12MIN: 4.621. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPUEPYC 7F52Linux 5.10.3246810Min: 3.28 / Avg: 3.29 / Max: 3.33Min: 4.74 / Avg: 4.79 / Max: 4.841. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPUEPYC 7F52Linux 5.10.30.73281.46562.19842.93123.664SE +/- 0.01184, N = 3SE +/- 0.04703, N = 152.763213.25673MIN: 2.65MIN: 2.891. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPUEPYC 7F52Linux 5.10.3246810Min: 2.75 / Avg: 2.76 / Max: 2.79Min: 3.01 / Avg: 3.26 / Max: 3.51. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPUEPYC 7F52Linux 5.10.31.06582.13163.19744.26325.329SE +/- 0.05484, N = 3SE +/- 0.04867, N = 154.038014.73706MIN: 3.84MIN: 4.421. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPUEPYC 7F52Linux 5.10.3246810Min: 3.94 / Avg: 4.04 / Max: 4.13Min: 4.51 / Avg: 4.74 / Max: 5.071. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPUEPYC 7F52Linux 5.10.3246810SE +/- 0.07217, N = 3SE +/- 0.01174, N = 35.556826.22877MIN: 5.13MIN: 6.141. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPUEPYC 7F52Linux 5.10.3246810Min: 5.41 / Avg: 5.56 / Max: 5.63Min: 6.21 / Avg: 6.23 / Max: 6.251. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPUEPYC 7F52Linux 5.10.31.24372.48743.73114.97486.2185SE +/- 0.02675, N = 3SE +/- 0.01913, N = 35.522805.52758MIN: 5.32MIN: 5.381. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPUEPYC 7F52Linux 5.10.3246810Min: 5.49 / Avg: 5.52 / Max: 5.58Min: 5.5 / Avg: 5.53 / Max: 5.561. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPUEPYC 7F52Linux 5.10.30.66881.33762.00642.67523.344SE +/- 0.00136, N = 3SE +/- 0.00487, N = 32.868902.97264MIN: 2.83MIN: 2.931. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPUEPYC 7F52Linux 5.10.3246810Min: 2.87 / Avg: 2.87 / Max: 2.87Min: 2.97 / Avg: 2.97 / Max: 2.981. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPUEPYC 7F52Linux 5.10.35001000150020002500SE +/- 1.98, N = 3SE +/- 10.97, N = 32006.802220.50MIN: 1996.47MIN: 2191.851. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPUEPYC 7F52Linux 5.10.3400800120016002000Min: 2004.52 / Avg: 2006.8 / Max: 2010.75Min: 2201.74 / Avg: 2220.5 / Max: 2239.721. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPUEPYC 7F52Linux 5.10.330060090012001500SE +/- 1.05, N = 3SE +/- 1.69, N = 31068.481169.57MIN: 1062.66MIN: 1161.341. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPUEPYC 7F52Linux 5.10.32004006008001000Min: 1067.38 / Avg: 1068.48 / Max: 1070.58Min: 1166.19 / Avg: 1169.57 / Max: 1171.331. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPUEPYC 7F52Linux 5.10.35001000150020002500SE +/- 6.20, N = 3SE +/- 4.81, N = 31992.602211.73MIN: 1974.03MIN: 2193.81. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPUEPYC 7F52Linux 5.10.3400800120016002000Min: 1983.76 / Avg: 1992.6 / Max: 2004.54Min: 2202.89 / Avg: 2211.73 / Max: 2219.441. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPUEPYC 7F52Linux 5.10.32004006008001000SE +/- 2.50, N = 3SE +/- 11.57, N = 31057.421148.93MIN: 1047.72MIN: 1133.531. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPUEPYC 7F52Linux 5.10.32004006008001000Min: 1052.51 / Avg: 1057.42 / Max: 1060.67Min: 1136.89 / Avg: 1148.93 / Max: 1172.061. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPUEPYC 7F52Linux 5.10.30.21080.42160.63240.84321.054SE +/- 0.003167, N = 3SE +/- 0.009336, N = 30.6759150.937089MIN: 0.64MIN: 0.891. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPUEPYC 7F52Linux 5.10.3246810Min: 0.67 / Avg: 0.68 / Max: 0.68Min: 0.92 / Avg: 0.94 / Max: 0.951. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPUEPYC 7F52Linux 5.10.35001000150020002500SE +/- 7.07, N = 3SE +/- 9.42, N = 31994.622192.82MIN: 1976.29MIN: 2169.841. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPUEPYC 7F52Linux 5.10.3400800120016002000Min: 1980.48 / Avg: 1994.62 / Max: 2001.72Min: 2180.99 / Avg: 2192.82 / Max: 2211.441. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPUEPYC 7F52Linux 5.10.330060090012001500SE +/- 1.57, N = 3SE +/- 9.31, N = 31069.391162.78MIN: 1062.1MIN: 1139.61. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPUEPYC 7F52Linux 5.10.32004006008001000Min: 1066.26 / Avg: 1069.39 / Max: 1071.25Min: 1144.15 / Avg: 1162.78 / Max: 1172.11. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPUEPYC 7F52Linux 5.10.30.41360.82721.24081.65442.068SE +/- 0.00177, N = 3SE +/- 0.00255, N = 31.838441.82157MIN: 1.81MIN: 1.781. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPUEPYC 7F52Linux 5.10.3246810Min: 1.84 / Avg: 1.84 / Max: 1.84Min: 1.82 / Avg: 1.82 / Max: 1.831. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Open Porous Media

This is a test of Open Porous Media, a set of open-source tools concerning simulation of flow and transport of fluids in porous media. This test profile depends upon MPI/Flow already being installed on the system. Install instructions at https://opm-project.org/?page_id=36. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterOpen Porous MediaOPM Benchmark: Flow MPI Norne - Threads: 1EPYC 7F52Linux 5.10.380160240320400SE +/- 1.92, N = 3SE +/- 0.73, N = 3365.37364.941. flow 2020.04
OpenBenchmarking.orgSeconds, Fewer Is BetterOpen Porous MediaOPM Benchmark: Flow MPI Norne - Threads: 1EPYC 7F52Linux 5.10.370140210280350Min: 363.16 / Avg: 365.37 / Max: 369.19Min: 364.11 / Avg: 364.94 / Max: 366.391. flow 2020.04

OpenBenchmarking.orgSeconds, Fewer Is BetterOpen Porous MediaOPM Benchmark: Flow MPI Norne - Threads: 2EPYC 7F52Linux 5.10.350100150200250SE +/- 0.16, N = 3SE +/- 0.34, N = 3212.22212.311. flow 2020.04
OpenBenchmarking.orgSeconds, Fewer Is BetterOpen Porous MediaOPM Benchmark: Flow MPI Norne - Threads: 2EPYC 7F52Linux 5.10.34080120160200Min: 211.93 / Avg: 212.22 / Max: 212.48Min: 211.66 / Avg: 212.31 / Max: 212.771. flow 2020.04

OpenBenchmarking.orgSeconds, Fewer Is BetterOpen Porous MediaOPM Benchmark: Flow MPI Norne - Threads: 4EPYC 7F52Linux 5.10.34080120160200SE +/- 0.45, N = 3SE +/- 0.30, N = 3168.76166.541. flow 2020.04
OpenBenchmarking.orgSeconds, Fewer Is BetterOpen Porous MediaOPM Benchmark: Flow MPI Norne - Threads: 4EPYC 7F52Linux 5.10.3306090120150Min: 167.94 / Avg: 168.76 / Max: 169.49Min: 166.22 / Avg: 166.54 / Max: 167.141. flow 2020.04

OpenBenchmarking.orgSeconds, Fewer Is BetterOpen Porous MediaOPM Benchmark: Flow MPI Norne - Threads: 8EPYC 7F52Linux 5.10.350100150200250SE +/- 0.40, N = 3SE +/- 0.17, N = 3217.41208.911. flow 2020.04
OpenBenchmarking.orgSeconds, Fewer Is BetterOpen Porous MediaOPM Benchmark: Flow MPI Norne - Threads: 8EPYC 7F52Linux 5.10.34080120160200Min: 216.79 / Avg: 217.41 / Max: 218.17Min: 208.62 / Avg: 208.91 / Max: 209.21. flow 2020.04

OpenBenchmarking.orgSeconds, Fewer Is BetterOpen Porous MediaOPM Benchmark: Flow MPI Norne - Threads: 16EPYC 7F52Linux 5.10.380160240320400SE +/- 0.72, N = 3SE +/- 0.13, N = 3361.92348.111. flow 2020.04
OpenBenchmarking.orgSeconds, Fewer Is BetterOpen Porous MediaOPM Benchmark: Flow MPI Norne - Threads: 16EPYC 7F52Linux 5.10.360120180240300Min: 360.75 / Avg: 361.92 / Max: 363.24Min: 347.85 / Avg: 348.1 / Max: 348.241. flow 2020.04

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test measures the RSA 4096-bit performance of OpenSSL. Learn more via the OpenBenchmarking.org test page.
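OpenSSL's own speed tool reports signs-per-second figures like those below; a minimal sketch of invoking it from Python, assuming the openssl binary is installed:

import subprocess

# "openssl speed rsa4096" prints sign/verify throughput for 4096-bit RSA.
result = subprocess.run(["openssl", "speed", "rsa4096"],
                        capture_output=True, text=True, check=True)
print(result.stdout)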

OpenBenchmarking.orgSigns Per Second, More Is BetterOpenSSL 1.1.1RSA 4096-bit PerformanceEPYC 7F52Linux 5.10.310002000300040005000SE +/- 0.76, N = 3SE +/- 0.71, N = 34571.44579.81. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl
OpenBenchmarking.orgSigns Per Second, More Is BetterOpenSSL 1.1.1RSA 4096-bit PerformanceEPYC 7F52Linux 5.10.38001600240032004000Min: 4570 / Avg: 4571.4 / Max: 4572.6Min: 4578.5 / Avg: 4579.83 / Max: 4580.91. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network inference, using its built-in benchmarking support and analyzing the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.
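OpenVINO's bundled benchmark_app produces FPS and latency numbers like those below. A hedged sketch of invoking it from Python follows; the model path is only a placeholder and the flag names are assumptions that should be verified against the installed OpenVINO release.

import subprocess

# Hypothetical benchmark_app invocation; "-m" (model XML) and "-d" (device)
# are assumed flag names, and the model file below is only a placeholder.
subprocess.run([
    "benchmark_app",
    "-m", "face-detection-0106.xml",
    "-d", "CPU",
], check=True)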

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2021.1Model: Face Detection 0106 FP16 - Device: CPUEPYC 7F52Linux 5.10.30.90451.8092.71353.6184.5225SE +/- 0.00, N = 3SE +/- 0.01, N = 34.024.011. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2021.1Model: Face Detection 0106 FP16 - Device: CPUEPYC 7F52Linux 5.10.3246810Min: 4.01 / Avg: 4.02 / Max: 4.02Min: 4 / Avg: 4.01 / Max: 4.021. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2021.1Model: Face Detection 0106 FP16 - Device: CPUEPYC 7F52Linux 5.10.3400800120016002000SE +/- 1.96, N = 3SE +/- 2.02, N = 31988.751988.901. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2021.1Model: Face Detection 0106 FP16 - Device: CPUEPYC 7F52Linux 5.10.330060090012001500Min: 1986.54 / Avg: 1988.75 / Max: 1992.66Min: 1984.89 / Avg: 1988.9 / Max: 1991.261. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2021.1Model: Face Detection 0106 FP32 - Device: CPUEPYC 7F52Linux 5.10.30.90231.80462.70693.60924.5115SE +/- 0.01, N = 3SE +/- 0.00, N = 34.014.011. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2021.1Model: Face Detection 0106 FP32 - Device: CPUEPYC 7F52Linux 5.10.3246810Min: 3.99 / Avg: 4.01 / Max: 4.03Min: 4.01 / Avg: 4.01 / Max: 4.011. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2021.1Model: Face Detection 0106 FP32 - Device: CPUEPYC 7F52Linux 5.10.3400800120016002000SE +/- 2.66, N = 3SE +/- 2.14, N = 31986.911989.911. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2021.1Model: Face Detection 0106 FP32 - Device: CPUEPYC 7F52Linux 5.10.330060090012001500Min: 1981.75 / Avg: 1986.91 / Max: 1990.59Min: 1986.12 / Avg: 1989.91 / Max: 1993.541. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2021.1Model: Person Detection 0106 FP16 - Device: CPUEPYC 7F52Linux 5.10.30.69081.38162.07242.76323.454SE +/- 0.00, N = 3SE +/- 0.01, N = 33.063.071. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2021.1Model: Person Detection 0106 FP16 - Device: CPUEPYC 7F52Linux 5.10.3246810Min: 3.06 / Avg: 3.06 / Max: 3.07Min: 3.06 / Avg: 3.07 / Max: 3.081. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2021.1Model: Person Detection 0106 FP16 - Device: CPUEPYC 7F52Linux 5.10.36001200180024003000SE +/- 1.99, N = 3SE +/- 3.43, N = 32582.882590.331. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2021.1Model: Person Detection 0106 FP16 - Device: CPUEPYC 7F52Linux 5.10.35001000150020002500Min: 2580.21 / Avg: 2582.88 / Max: 2586.78Min: 2584.9 / Avg: 2590.33 / Max: 2596.671. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2021.1Model: Person Detection 0106 FP32 - Device: CPUEPYC 7F52Linux 5.10.30.6841.3682.0522.7363.42SE +/- 0.02, N = 3SE +/- 0.01, N = 33.043.031. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2021.1Model: Person Detection 0106 FP32 - Device: CPUEPYC 7F52Linux 5.10.3246810Min: 3 / Avg: 3.04 / Max: 3.06Min: 3.02 / Avg: 3.03 / Max: 3.041. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2021.1Model: Person Detection 0106 FP32 - Device: CPUEPYC 7F52Linux 5.10.36001200180024003000SE +/- 3.48, N = 3SE +/- 2.25, N = 32600.352605.511. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2021.1Model: Person Detection 0106 FP32 - Device: CPUEPYC 7F52Linux 5.10.35001000150020002500Min: 2593.83 / Avg: 2600.35 / Max: 2605.72Min: 2601.58 / Avg: 2605.51 / Max: 2609.361. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2021.1Model: Age Gender Recognition Retail 0013 FP16 - Device: CPUEPYC 7F52Linux 5.10.32K4K6K8K10KSE +/- 6.06, N = 3SE +/- 5.41, N = 39974.709966.931. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2021.1Model: Age Gender Recognition Retail 0013 FP16 - Device: CPUEPYC 7F52Linux 5.10.32K4K6K8K10KMin: 9962.58 / Avg: 9974.7 / Max: 9981.16Min: 9956.6 / Avg: 9966.93 / Max: 9974.861. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2021.1Model: Age Gender Recognition Retail 0013 FP16 - Device: CPUEPYC 7F52Linux 5.10.30.17550.3510.52650.7020.8775SE +/- 0.00, N = 3SE +/- 0.00, N = 30.780.781. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2021.1Model: Age Gender Recognition Retail 0013 FP16 - Device: CPUEPYC 7F52Linux 5.10.3246810Min: 0.78 / Avg: 0.78 / Max: 0.78Min: 0.78 / Avg: 0.78 / Max: 0.791. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2021.1Model: Age Gender Recognition Retail 0013 FP32 - Device: CPUEPYC 7F52Linux 5.10.32K4K6K8K10KSE +/- 16.24, N = 3SE +/- 5.75, N = 39935.559953.071. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2021.1Model: Age Gender Recognition Retail 0013 FP32 - Device: CPUEPYC 7F52Linux 5.10.32K4K6K8K10KMin: 9913.95 / Avg: 9935.55 / Max: 9967.35Min: 9945.47 / Avg: 9953.07 / Max: 9964.351. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2021.1Model: Age Gender Recognition Retail 0013 FP32 - Device: CPUEPYC 7F52Linux 5.10.30.17780.35560.53340.71120.889SE +/- 0.00, N = 3SE +/- 0.00, N = 30.790.781. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2021.1Model: Age Gender Recognition Retail 0013 FP32 - Device: CPUEPYC 7F52Linux 5.10.3246810Min: 0.78 / Avg: 0.79 / Max: 0.79Min: 0.78 / Avg: 0.78 / Max: 0.781. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library that offers high-performance volume computation kernels and is part of the Intel oneAPI Rendering Toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgItems / Sec, More Is BetterOpenVKL 0.9Benchmark: vklBenchmarkEPYC 7F52Linux 5.10.350100150200250SE +/- 0.60, N = 3SE +/- 0.38, N = 3217.81218.94MIN: 1 / MAX: 765MIN: 1 / MAX: 772
OpenBenchmarking.orgItems / Sec, More Is BetterOpenVKL 0.9Benchmark: vklBenchmarkEPYC 7F52Linux 5.10.34080120160200Min: 216.67 / Avg: 217.81 / Max: 218.67Min: 218.22 / Avg: 218.94 / Max: 219.5

OpenBenchmarking.orgItems / Sec, More Is BetterOpenVKL 0.9Benchmark: vklBenchmarkVdbVolumeEPYC 7F52Linux 5.10.33M6M9M12M15MSE +/- 100190.60, N = 3SE +/- 19338.94, N = 315263784.6716277364.24MIN: 798247 / MAX: 56683584MIN: 790262 / MAX: 65640384
OpenBenchmarking.orgItems / Sec, More Is BetterOpenVKL 0.9Benchmark: vklBenchmarkVdbVolumeEPYC 7F52Linux 5.10.33M6M9M12M15MMin: 15133963.18 / Avg: 15263784.67 / Max: 15460885.64Min: 16243824.43 / Avg: 16277364.24 / Max: 16310816.3

OpenBenchmarking.orgItems / Sec, More Is BetterOpenVKL 0.9Benchmark: vklBenchmarkStructuredVolumeEPYC 7F52Linux 5.10.315M30M45M60M75MSE +/- 788054.78, N = 3SE +/- 168051.69, N = 368692259.8872087200.32MIN: 909007 / MAX: 535870728MIN: 921866 / MAX: 575712792
OpenBenchmarking.orgItems / Sec, More Is BetterOpenVKL 0.9Benchmark: vklBenchmarkStructuredVolumeEPYC 7F52Linux 5.10.312M24M36M48M60MMin: 67452425.35 / Avg: 68692259.88 / Max: 70154910.05Min: 71796921.92 / Avg: 72087200.32 / Max: 72379063.57

OpenBenchmarking.orgItems / Sec, More Is BetterOpenVKL 0.9Benchmark: vklBenchmarkUnstructuredVolumeEPYC 7F52Linux 5.10.3400K800K1200K1600K2000KSE +/- 2560.46, N = 3SE +/- 1848.12, N = 31818093.891817665.47MIN: 19110 / MAX: 6113054MIN: 19297 / MAX: 6122055
OpenBenchmarking.orgItems / Sec, More Is BetterOpenVKL 0.9Benchmark: vklBenchmarkUnstructuredVolumeEPYC 7F52Linux 5.10.3300K600K900K1200K1500KMin: 1813068.69 / Avg: 1818093.89 / Max: 1821459.92Min: 1814049.49 / Avg: 1817665.47 / Max: 1820136.9

Opus Codec Encoding

Opus is an open, lossy audio codec designed primarily for interactive real-time applications over the Internet. This test uses opus-tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.
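
As a rough illustration of what this test does, the following Python sketch times a WAV-to-Opus encode by invoking the opusenc tool from opus-tools. The file names are placeholders and this is not the exact command line used by the test profile.

    import subprocess, time

    # opusenc ships with opus-tools; file names here are placeholders.
    start = time.perf_counter()
    subprocess.run(["opusenc", "--quiet", "sample.wav", "sample.opus"], check=True)
    print(f"WAV to Opus encode time: {time.perf_counter() - start:.3f} s")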

OpenBenchmarking.orgSeconds, Fewer Is BetterOpus Codec Encoding 1.3.1WAV To Opus EncodeEPYC 7F52Linux 5.10.3246810SE +/- 0.016, N = 5SE +/- 0.012, N = 57.9807.9781. (CXX) g++ options: -fvisibility=hidden -logg -lm
OpenBenchmarking.orgSeconds, Fewer Is BetterOpus Codec Encoding 1.3.1WAV To Opus EncodeEPYC 7F52Linux 5.10.33691215Min: 7.96 / Avg: 7.98 / Max: 8.04Min: 7.96 / Avg: 7.98 / Max: 8.031. (CXX) g++ options: -fvisibility=hidden -logg -lm

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to benchmark various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgScore, More Is BetterPHPBench 0.8.1PHP Benchmark SuiteEPYC 7F52Linux 5.10.3130K260K390K520K650KSE +/- 1384.96, N = 3SE +/- 627.03, N = 3618552625441
OpenBenchmarking.orgScore, More Is BetterPHPBench 0.8.1PHP Benchmark SuiteEPYC 7F52Linux 5.10.3110K220K330K440K550KMin: 616209 / Avg: 618552 / Max: 621003Min: 624291 / Avg: 625441.33 / Max: 626449

PlaidML

This test profile uses the PlaidML deep learning framework, developed by Intel, to run various inference benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterPlaidMLFP16: No - Mode: Inference - Network: VGG16 - Device: CPUEPYC 7F52Linux 5.10.3612182430SE +/- 0.01, N = 3SE +/- 0.24, N = 324.3524.89
OpenBenchmarking.orgFPS, More Is BetterPlaidMLFP16: No - Mode: Inference - Network: VGG16 - Device: CPUEPYC 7F52Linux 5.10.3612182430Min: 24.33 / Avg: 24.35 / Max: 24.37Min: 24.46 / Avg: 24.89 / Max: 25.29

OpenBenchmarking.orgFPS, More Is BetterPlaidMLFP16: No - Mode: Inference - Network: VGG19 - Device: CPUEPYC 7F52Linux 5.10.3510152025SE +/- 0.05, N = 3SE +/- 0.10, N = 320.2720.69
OpenBenchmarking.orgFPS, More Is BetterPlaidMLFP16: No - Mode: Inference - Network: VGG19 - Device: CPUEPYC 7F52Linux 5.10.3510152025Min: 20.19 / Avg: 20.27 / Max: 20.35Min: 20.48 / Avg: 20.69 / Max: 20.8

OpenBenchmarking.orgFPS, More Is BetterPlaidMLFP16: No - Mode: Inference - Network: IMDB LSTM - Device: CPUEPYC 7F52Linux 5.10.3140280420560700SE +/- 2.99, N = 3SE +/- 2.01, N = 3665.88666.79
OpenBenchmarking.orgFPS, More Is BetterPlaidMLFP16: No - Mode: Inference - Network: IMDB LSTM - Device: CPUEPYC 7F52Linux 5.10.3120240360480600Min: 659.97 / Avg: 665.88 / Max: 669.68Min: 663.75 / Avg: 666.79 / Max: 670.6

OpenBenchmarking.orgFPS, More Is BetterPlaidMLFP16: No - Mode: Inference - Network: Mobilenet - Device: CPUEPYC 7F52Linux 5.10.348121620SE +/- 0.10, N = 3SE +/- 0.09, N = 314.5314.51
OpenBenchmarking.orgFPS, More Is BetterPlaidMLFP16: No - Mode: Inference - Network: Mobilenet - Device: CPUEPYC 7F52Linux 5.10.348121620Min: 14.34 / Avg: 14.53 / Max: 14.64Min: 14.41 / Avg: 14.51 / Max: 14.69

OpenBenchmarking.orgFPS, More Is BetterPlaidMLFP16: No - Mode: Inference - Network: ResNet 50 - Device: CPUEPYC 7F52Linux 5.10.3246810SE +/- 0.00, N = 3SE +/- 0.08, N = 36.145.96
OpenBenchmarking.orgFPS, More Is BetterPlaidMLFP16: No - Mode: Inference - Network: ResNet 50 - Device: CPUEPYC 7F52Linux 5.10.3246810Min: 6.14 / Avg: 6.14 / Max: 6.14Min: 5.86 / Avg: 5.96 / Max: 6.12

OpenBenchmarking.orgFPS, More Is BetterPlaidMLFP16: No - Mode: Inference - Network: DenseNet 201 - Device: CPUEPYC 7F52Linux 5.10.30.72231.44462.16692.88923.6115SE +/- 0.01, N = 3SE +/- 0.01, N = 33.193.21
OpenBenchmarking.orgFPS, More Is BetterPlaidMLFP16: No - Mode: Inference - Network: DenseNet 201 - Device: CPUEPYC 7F52Linux 5.10.3246810Min: 3.17 / Avg: 3.19 / Max: 3.21Min: 3.2 / Avg: 3.21 / Max: 3.22

OpenBenchmarking.orgFPS, More Is BetterPlaidMLFP16: No - Mode: Inference - Network: Inception V3 - Device: CPUEPYC 7F52Linux 5.10.33691215SE +/- 0.00, N = 3SE +/- 0.04, N = 310.3910.17
OpenBenchmarking.orgFPS, More Is BetterPlaidMLFP16: No - Mode: Inference - Network: Inception V3 - Device: CPUEPYC 7F52Linux 5.10.33691215Min: 10.38 / Avg: 10.39 / Max: 10.39Min: 10.1 / Avg: 10.17 / Max: 10.22

OpenBenchmarking.orgFPS, More Is BetterPlaidMLFP16: No - Mode: Inference - Network: NASNet Large - Device: CPUEPYC 7F52Linux 5.10.30.2340.4680.7020.9361.17SE +/- 0.00, N = 3SE +/- 0.00, N = 31.041.04
OpenBenchmarking.orgFPS, More Is BetterPlaidMLFP16: No - Mode: Inference - Network: NASNet Large - Device: CPUEPYC 7F52Linux 5.10.3246810Min: 1.04 / Avg: 1.04 / Max: 1.05Min: 1.04 / Avg: 1.04 / Max: 1.04

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks are a standard suite for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterPolyhedron Fortran BenchmarksBenchmark: acEPYC 7F52Linux 5.10.32468106.536.51

OpenBenchmarking.orgSeconds, Fewer Is BetterPolyhedron Fortran BenchmarksBenchmark: airEPYC 7F52Linux 5.10.30.39830.79661.19491.59321.99151.771.77

OpenBenchmarking.orgSeconds, Fewer Is BetterPolyhedron Fortran BenchmarksBenchmark: mdbxEPYC 7F52Linux 5.10.31.0622.1243.1864.2485.314.724.72

OpenBenchmarking.orgSeconds, Fewer Is BetterPolyhedron Fortran BenchmarksBenchmark: doducEPYC 7F52Linux 5.10.32468107.267.26

OpenBenchmarking.orgSeconds, Fewer Is BetterPolyhedron Fortran BenchmarksBenchmark: linpkEPYC 7F52Linux 5.10.30.71551.4312.14652.8623.57753.183.18

OpenBenchmarking.orgSeconds, Fewer Is BetterPolyhedron Fortran BenchmarksBenchmark: tfft2EPYC 7F52Linux 5.10.3132639526521.7957.63

OpenBenchmarking.orgSeconds, Fewer Is BetterPolyhedron Fortran BenchmarksBenchmark: aermodEPYC 7F52Linux 5.10.32468106.126.16

OpenBenchmarking.orgSeconds, Fewer Is BetterPolyhedron Fortran BenchmarksBenchmark: rnflowEPYC 7F52Linux 5.10.34812162016.6016.59

OpenBenchmarking.orgSeconds, Fewer Is BetterPolyhedron Fortran BenchmarksBenchmark: induct2EPYC 7F52Linux 5.10.361218243023.7923.81

OpenBenchmarking.orgSeconds, Fewer Is BetterPolyhedron Fortran BenchmarksBenchmark: proteinEPYC 7F52Linux 5.10.34812162013.8413.84

OpenBenchmarking.orgSeconds, Fewer Is BetterPolyhedron Fortran BenchmarksBenchmark: capacitaEPYC 7F52Linux 5.10.34812162017.6117.56

OpenBenchmarking.orgSeconds, Fewer Is BetterPolyhedron Fortran BenchmarksBenchmark: channel2EPYC 7F52Linux 5.10.3102030405042.5242.30

OpenBenchmarking.orgSeconds, Fewer Is BetterPolyhedron Fortran BenchmarksBenchmark: fatigue2EPYC 7F52Linux 5.10.3122436486052.3152.21

OpenBenchmarking.orgSeconds, Fewer Is BetterPolyhedron Fortran BenchmarksBenchmark: gas_dyn2EPYC 7F52Linux 5.10.3102030405044.1644.26

OpenBenchmarking.orgSeconds, Fewer Is BetterPolyhedron Fortran BenchmarksBenchmark: test_fpu2EPYC 7F52Linux 5.10.371421283532.1231.32

OpenBenchmarking.orgSeconds, Fewer Is BetterPolyhedron Fortran BenchmarksBenchmark: mp_prop_designEPYC 7F52Linux 5.10.3132639526559.3859.39

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench to drive the database workloads. Learn more via the OpenBenchmarking.org test page.
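
The TPS and average-latency figures below are two views of the same closed-loop measurement: with a fixed number of clients, average latency is roughly clients divided by TPS. A small sketch of that arithmetic, using values from this result file:

    # For a closed-loop pgbench run, average latency (ms) ~= clients / TPS * 1000.
    def approx_latency_ms(clients, tps):
        return clients / tps * 1000.0

    print(round(approx_latency_ms(1, 28273), 3))   # ~0.035 ms, 1 client, read only
    print(round(approx_latency_ms(50, 4231), 1))   # ~11.8 ms, 50 clients, read write
    print(round(approx_latency_ms(250, 2227), 1))  # ~112 ms, 250 clients, read write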

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 1 - Mode: Read OnlyEPYC 7F52Linux 5.10.36K12K18K24K30KSE +/- 403.02, N = 3SE +/- 272.58, N = 328273278961. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 1 - Mode: Read OnlyEPYC 7F52Linux 5.10.35K10K15K20K25KMin: 27526.61 / Avg: 28272.78 / Max: 28909.88Min: 27516.31 / Avg: 27895.54 / Max: 28424.331. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 1 - Mode: Read Only - Average LatencyEPYC 7F52Linux 5.10.30.00810.01620.02430.03240.0405SE +/- 0.000, N = 3SE +/- 0.000, N = 30.0350.0361. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 1 - Mode: Read Only - Average LatencyEPYC 7F52Linux 5.10.312345Min: 0.04 / Avg: 0.04 / Max: 0.04Min: 0.04 / Avg: 0.04 / Max: 0.041. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 1 - Mode: Read WriteEPYC 7F52Linux 5.10.38001600240032004000SE +/- 24.75, N = 3SE +/- 16.52, N = 3380337821. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 1 - Mode: Read WriteEPYC 7F52Linux 5.10.37001400210028003500Min: 3777.18 / Avg: 3802.87 / Max: 3852.36Min: 3761 / Avg: 3781.92 / Max: 3814.531. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 1 - Mode: Read Write - Average LatencyEPYC 7F52Linux 5.10.30.05940.11880.17820.23760.297SE +/- 0.002, N = 3SE +/- 0.001, N = 30.2630.2641. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 1 - Mode: Read Write - Average LatencyEPYC 7F52Linux 5.10.312345Min: 0.26 / Avg: 0.26 / Max: 0.27Min: 0.26 / Avg: 0.26 / Max: 0.271. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 50 - Mode: Read OnlyEPYC 7F52Linux 5.10.3110K220K330K440K550KSE +/- 5689.06, N = 3SE +/- 7128.53, N = 154918275010471. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 50 - Mode: Read OnlyEPYC 7F52Linux 5.10.390K180K270K360K450KMin: 481284.17 / Avg: 491826.61 / Max: 500804.42Min: 481274.39 / Avg: 501046.8 / Max: 586825.761. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 50 - Mode: Read Only - Average LatencyEPYC 7F52Linux 5.10.30.0230.0460.0690.0920.115SE +/- 0.001, N = 3SE +/- 0.001, N = 150.1020.1001. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 50 - Mode: Read Only - Average LatencyEPYC 7F52Linux 5.10.312345Min: 0.1 / Avg: 0.1 / Max: 0.1Min: 0.09 / Avg: 0.1 / Max: 0.11. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 100 - Mode: Read OnlyEPYC 7F52Linux 5.10.3110K220K330K440K550KSE +/- 3559.12, N = 3SE +/- 1764.57, N = 35143075071611. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 100 - Mode: Read OnlyEPYC 7F52Linux 5.10.390K180K270K360K450KMin: 509047.1 / Avg: 514306.68 / Max: 521090.34Min: 503639.44 / Avg: 507160.7 / Max: 509125.51. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 100 - Mode: Read Only - Average LatencyEPYC 7F52Linux 5.10.30.04460.08920.13380.17840.223SE +/- 0.001, N = 3SE +/- 0.001, N = 30.1950.1981. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 100 - Mode: Read Only - Average LatencyEPYC 7F52Linux 5.10.312345Min: 0.19 / Avg: 0.19 / Max: 0.2Min: 0.2 / Avg: 0.2 / Max: 0.21. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 250 - Mode: Read OnlyEPYC 7F52Linux 5.10.3120K240K360K480K600KSE +/- 733.28, N = 3SE +/- 1662.47, N = 35568255364121. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 250 - Mode: Read OnlyEPYC 7F52Linux 5.10.3100K200K300K400K500KMin: 555375.67 / Avg: 556824.57 / Max: 557745.6Min: 534022.19 / Avg: 536411.72 / Max: 539608.741. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 250 - Mode: Read Only - Average LatencyEPYC 7F52Linux 5.10.30.10510.21020.31530.42040.5255SE +/- 0.000, N = 3SE +/- 0.001, N = 30.4490.4671. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 250 - Mode: Read Only - Average LatencyEPYC 7F52Linux 5.10.312345Min: 0.45 / Avg: 0.45 / Max: 0.45Min: 0.46 / Avg: 0.47 / Max: 0.471. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 50 - Mode: Read WriteEPYC 7F52Linux 5.10.39001800270036004500SE +/- 4.77, N = 3SE +/- 1.61, N = 3423141511. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 50 - Mode: Read WriteEPYC 7F52Linux 5.10.37001400210028003500Min: 4222.66 / Avg: 4230.61 / Max: 4239.14Min: 4149.57 / Avg: 4151.25 / Max: 4154.471. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 50 - Mode: Read Write - Average LatencyEPYC 7F52Linux 5.10.33691215SE +/- 0.01, N = 3SE +/- 0.00, N = 311.8212.051. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 50 - Mode: Read Write - Average LatencyEPYC 7F52Linux 5.10.348121620Min: 11.8 / Avg: 11.82 / Max: 11.84Min: 12.04 / Avg: 12.05 / Max: 12.051. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 100 - Mode: Read WriteEPYC 7F52Linux 5.10.37001400210028003500SE +/- 24.88, N = 3SE +/- 35.07, N = 3333233001. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 100 - Mode: Read WriteEPYC 7F52Linux 5.10.36001200180024003000Min: 3306.87 / Avg: 3331.99 / Max: 3381.76Min: 3233.96 / Avg: 3299.79 / Max: 3353.671. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 100 - Mode: Read Write - Average LatencyEPYC 7F52Linux 5.10.3714212835SE +/- 0.22, N = 3SE +/- 0.32, N = 330.0430.341. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 100 - Mode: Read Write - Average LatencyEPYC 7F52Linux 5.10.3714212835Min: 29.59 / Avg: 30.04 / Max: 30.26Min: 29.84 / Avg: 30.34 / Max: 30.951. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 250 - Mode: Read WriteEPYC 7F52Linux 5.10.35001000150020002500SE +/- 27.65, N = 15SE +/- 17.17, N = 15222722011. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 250 - Mode: Read WriteEPYC 7F52Linux 5.10.3400800120016002000Min: 2026.69 / Avg: 2227.06 / Max: 2360.07Min: 2090.35 / Avg: 2200.91 / Max: 2323.681. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 250 - Mode: Read Write - Average LatencyEPYC 7F52Linux 5.10.3306090120150SE +/- 1.44, N = 15SE +/- 0.89, N = 15112.58113.781. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 250 - Mode: Read Write - Average LatencyEPYC 7F52Linux 5.10.320406080100Min: 106 / Avg: 112.58 / Max: 123.44Min: 107.67 / Avg: 113.78 / Max: 119.691. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.
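
The numbers below are mean wall-clock times per benchmark in milliseconds. A minimal sketch of the same idea, timing one small workload with the standard library; PyPerformance itself uses the pyperf harness with calibration and multiple worker processes, so this is only an illustration.

    import timeit

    # Time a small float-heavy loop and report milliseconds per run.
    ms = min(timeit.repeat("sum(i * 1.1 for i in range(100_000))",
                           repeat=5, number=10)) / 10 * 1000
    print(f"{ms:.1f} ms per run")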

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: goEPYC 7F52Linux 5.10.360120180240300254253

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: 2to3EPYC 7F52Linux 5.10.370140210280350SE +/- 0.33, N = 3SE +/- 0.33, N = 3329326
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: 2to3EPYC 7F52Linux 5.10.360120180240300Min: 328 / Avg: 328.67 / Max: 329Min: 326 / Avg: 326.33 / Max: 327

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: chaosEPYC 7F52Linux 5.10.3306090120150113112

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: floatEPYC 7F52Linux 5.10.3306090120150SE +/- 0.33, N = 3120116
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: floatEPYC 7F52Linux 5.10.320406080100Min: 116 / Avg: 116.33 / Max: 117

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: nbodyEPYC 7F52Linux 5.10.3306090120150113113

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: pathlibEPYC 7F52Linux 5.10.348121620SE +/- 0.00, N = 3SE +/- 0.00, N = 317.117.5
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: pathlibEPYC 7F52Linux 5.10.348121620Min: 17.1 / Avg: 17.1 / Max: 17.1Min: 17.5 / Avg: 17.5 / Max: 17.5

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: raytraceEPYC 7F52Linux 5.10.3100200300400500SE +/- 0.33, N = 3SE +/- 0.58, N = 3475476
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: raytraceEPYC 7F52Linux 5.10.380160240320400Min: 474 / Avg: 474.67 / Max: 475Min: 475 / Avg: 476 / Max: 477

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: json_loadsEPYC 7F52Linux 5.10.3612182430SE +/- 0.03, N = 3SE +/- 0.00, N = 324.924.7
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: json_loadsEPYC 7F52Linux 5.10.3612182430Min: 24.8 / Avg: 24.87 / Max: 24.9Min: 24.7 / Avg: 24.7 / Max: 24.7

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: crypto_pyaesEPYC 7F52Linux 5.10.320406080100109110

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: regex_compileEPYC 7F52Linux 5.10.34080120160200173173

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: python_startupEPYC 7F52Linux 5.10.3246810SE +/- 0.01, N = 3SE +/- 0.04, N = 37.768.69
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: python_startupEPYC 7F52Linux 5.10.33691215Min: 7.75 / Avg: 7.76 / Max: 7.77Min: 8.62 / Avg: 8.69 / Max: 8.74

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: django_templateEPYC 7F52Linux 5.10.31122334455SE +/- 0.32, N = 3SE +/- 0.27, N = 348.347.5
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: django_templateEPYC 7F52Linux 5.10.31020304050Min: 47.7 / Avg: 48.27 / Max: 48.8Min: 47 / Avg: 47.53 / Max: 47.9

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: pickle_pure_pythonEPYC 7F52Linux 5.10.3100200300400500SE +/- 0.67, N = 3SE +/- 2.08, N = 3477470
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: pickle_pure_pythonEPYC 7F52Linux 5.10.380160240320400Min: 476 / Avg: 476.67 / Max: 478Min: 466 / Avg: 470 / Max: 473

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is Betterrav1e 0.4 AlphaSpeed: 1EPYC 7F52Linux 5.10.30.08330.16660.24990.33320.4165SE +/- 0.001, N = 3SE +/- 0.001, N = 30.3690.370
OpenBenchmarking.orgFrames Per Second, More Is Betterrav1e 0.4 AlphaSpeed: 1EPYC 7F52Linux 5.10.312345Min: 0.37 / Avg: 0.37 / Max: 0.37Min: 0.37 / Avg: 0.37 / Max: 0.37

OpenBenchmarking.orgFrames Per Second, More Is Betterrav1e 0.4 AlphaSpeed: 5EPYC 7F52Linux 5.10.30.24620.49240.73860.98481.231SE +/- 0.001, N = 3SE +/- 0.001, N = 31.0941.094
OpenBenchmarking.orgFrames Per Second, More Is Betterrav1e 0.4 AlphaSpeed: 5EPYC 7F52Linux 5.10.3246810Min: 1.09 / Avg: 1.09 / Max: 1.1Min: 1.09 / Avg: 1.09 / Max: 1.1

OpenBenchmarking.orgFrames Per Second, More Is Betterrav1e 0.4 AlphaSpeed: 6EPYC 7F52Linux 5.10.30.32940.65880.98821.31761.647SE +/- 0.003, N = 3SE +/- 0.001, N = 31.4641.461
OpenBenchmarking.orgFrames Per Second, More Is Betterrav1e 0.4 AlphaSpeed: 6EPYC 7F52Linux 5.10.3246810Min: 1.46 / Avg: 1.46 / Max: 1.47Min: 1.46 / Avg: 1.46 / Max: 1.46

OpenBenchmarking.orgFrames Per Second, More Is Betterrav1e 0.4 AlphaSpeed: 10EPYC 7F52Linux 5.10.30.71821.43642.15462.87283.591SE +/- 0.002, N = 3SE +/- 0.003, N = 33.1863.192
OpenBenchmarking.orgFrames Per Second, More Is Betterrav1e 0.4 AlphaSpeed: 10EPYC 7F52Linux 5.10.3246810Min: 3.18 / Avg: 3.19 / Max: 3.19Min: 3.19 / Avg: 3.19 / Max: 3.2

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.
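
The tests below exercise individual Redis commands (SET, GET, LPUSH, LPOP, SADD). A minimal sketch of those operations with the redis-py client, assuming a local server on the default port; the benchmark itself drives the server with its own load generator, not this client.

    import redis

    r = redis.Redis(host="localhost", port=6379)
    r.set("key", "value")          # SET
    r.get("key")                   # GET
    r.lpush("mylist", "a", "b")    # LPUSH
    r.lpop("mylist")               # LPOP
    r.sadd("myset", "x", "y")      # SADD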

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: LPOPEPYC 7F52Linux 5.10.3400K800K1200K1600K2000KSE +/- 19724.55, N = 15SE +/- 7581.95, N = 31915545.961228236.411. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: LPOPEPYC 7F52Linux 5.10.3300K600K900K1200K1500KMin: 1782759.38 / Avg: 1915545.96 / Max: 2012394.38Min: 1213747.62 / Avg: 1228236.41 / Max: 1239355.621. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: SADDEPYC 7F52Linux 5.10.3300K600K900K1200K1500KSE +/- 16740.24, N = 3SE +/- 11341.57, N = 31565518.881503178.001. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: SADDEPYC 7F52Linux 5.10.3300K600K900K1200K1500KMin: 1541177.25 / Avg: 1565518.88 / Max: 1597597.5Min: 1481481.5 / Avg: 1503178 / Max: 1519756.881. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: LPUSHEPYC 7F52Linux 5.10.3300K600K900K1200K1500KSE +/- 14085.60, N = 3SE +/- 11678.97, N = 61216222.501174489.561. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: LPUSHEPYC 7F52Linux 5.10.3200K400K600K800K1000KMin: 1192238.38 / Avg: 1216222.5 / Max: 1241012.38Min: 1132684 / Avg: 1174489.56 / Max: 1216700.751. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: GETEPYC 7F52Linux 5.10.3400K800K1200K1600K2000KSE +/- 22488.85, N = 15SE +/- 18986.94, N = 151753884.371630009.811. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: GETEPYC 7F52Linux 5.10.3300K600K900K1200K1500KMin: 1631634.62 / Avg: 1753884.37 / Max: 1923446.25Min: 1506265.12 / Avg: 1630009.81 / Max: 1724744.881. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: SETEPYC 7F52Linux 5.10.3300K600K900K1200K1500KSE +/- 15975.34, N = 15SE +/- 10427.33, N = 151350619.521323358.981. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: SETEPYC 7F52Linux 5.10.3200K400K600K800K1000KMin: 1241012.38 / Avg: 1350619.52 / Max: 1434949.75Min: 1279017.88 / Avg: 1323358.98 / Max: 1398735.751. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

RNNoise

RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This single-threaded test measures the time to denoise a sample 26-minute 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.
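
Since the input clip is roughly 26 minutes of audio and the score is wall-clock seconds, the result can be converted to a real-time factor. A quick sketch with the values reported below:

    # Real-time factor = audio duration / processing time.
    audio_seconds = 26 * 60          # roughly 1560 s of audio in the sample file
    processing_seconds = 20.13       # EPYC 7F52 result below
    print(f"~{audio_seconds / processing_seconds:.0f}x faster than real time")  # ~77x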

OpenBenchmarking.orgSeconds, Fewer Is BetterRNNoise 2020-06-28EPYC 7F52Linux 5.10.3510152025SE +/- 0.00, N = 3SE +/- 0.02, N = 320.1320.101. (CC) gcc options: -O2 -pedantic -fvisibility=hidden
OpenBenchmarking.orgSeconds, Fewer Is BetterRNNoise 2020-06-28EPYC 7F52Linux 5.10.3510152025Min: 20.12 / Avg: 20.13 / Max: 20.14Min: 20.08 / Avg: 20.1 / Max: 20.151. (CC) gcc options: -O2 -pedantic -fvisibility=hidden

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
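
The scores below are parse throughput in GB/s, i.e. bytes of JSON parsed per second of wall-clock time. A minimal sketch of that calculation, using Python's built-in json module as a stand-in parser (simdjson itself is a C++ library, and the file name is a placeholder):

    import json, time

    data = open("twitter.json", "rb").read()      # placeholder input document
    start = time.perf_counter()
    json.loads(data)
    elapsed = time.perf_counter() - start
    print(f"{len(data) / elapsed / 1e9:.2f} GB/s")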

OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.7.1Throughput Test: KostyaEPYC 7F52Linux 5.10.30.11930.23860.35790.47720.5965SE +/- 0.00, N = 3SE +/- 0.00, N = 30.520.531. (CXX) g++ options: -O3 -pthread
OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.7.1Throughput Test: KostyaEPYC 7F52Linux 5.10.3246810Min: 0.52 / Avg: 0.52 / Max: 0.53Min: 0.53 / Avg: 0.53 / Max: 0.531. (CXX) g++ options: -O3 -pthread

OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.7.1Throughput Test: LargeRandomEPYC 7F52Linux 5.10.30.08780.17560.26340.35120.439SE +/- 0.00, N = 3SE +/- 0.00, N = 30.380.391. (CXX) g++ options: -O3 -pthread
OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.7.1Throughput Test: LargeRandomEPYC 7F52Linux 5.10.312345Min: 0.38 / Avg: 0.38 / Max: 0.39Min: 0.38 / Avg: 0.39 / Max: 0.391. (CXX) g++ options: -O3 -pthread

OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.7.1Throughput Test: PartialTweetsEPYC 7F52Linux 5.10.30.13730.27460.41190.54920.6865SE +/- 0.00, N = 3SE +/- 0.00, N = 30.610.611. (CXX) g++ options: -O3 -pthread
OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.7.1Throughput Test: PartialTweetsEPYC 7F52Linux 5.10.3246810Min: 0.6 / Avg: 0.61 / Max: 0.61Min: 0.61 / Avg: 0.61 / Max: 0.611. (CXX) g++ options: -O3 -pthread

OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.7.1Throughput Test: DistinctUserIDEPYC 7F52Linux 5.10.30.13950.2790.41850.5580.6975SE +/- 0.00, N = 3SE +/- 0.00, N = 30.620.621. (CXX) g++ options: -O3 -pthread
OpenBenchmarking.orgGB/s, More Is Bettersimdjson 0.7.1Throughput Test: DistinctUserIDEPYC 7F52Linux 5.10.3246810Min: 0.62 / Avg: 0.62 / Max: 0.63Min: 0.62 / Avg: 0.62 / Max: 0.621. (CXX) g++ options: -O3 -pthread

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.
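
speedtest1 runs a fixed mix of inserts, selects, updates, and index operations and reports total wall-clock time. A minimal, illustrative sketch of timing a simple insert/select workload with Python's built-in sqlite3 module; this is not the actual speedtest1 workload.

    import sqlite3, time

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")
    start = time.perf_counter()
    con.executemany("INSERT INTO t (v) VALUES (?)",
                    ((f"row-{i}",) for i in range(100_000)))
    con.commit()
    rows = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
    print(rows, f"rows in {time.perf_counter() - start:.2f} s")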

OpenBenchmarking.orgSeconds, Fewer Is BetterSQLite Speedtest 3.30Timed Time - Size 1,000EPYC 7F52Linux 5.10.31530456075SE +/- 0.22, N = 3SE +/- 0.02, N = 366.7267.141. (CC) gcc options: -O2 -ldl -lz -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterSQLite Speedtest 3.30Timed Time - Size 1,000EPYC 7F52Linux 5.10.31326395265Min: 66.35 / Avg: 66.72 / Max: 67.11Min: 67.1 / Avg: 67.14 / Max: 67.171. (CC) gcc options: -O2 -ldl -lz -lpthread

Stockfish

This is a test of Stockfish, an advanced C++11 chess engine benchmark that can scale up to 128 CPU cores. Learn more via the OpenBenchmarking.org test page.
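
The score is nodes searched per second across all threads. As a hedged sketch only, the same figure can be read from a running Stockfish binary via the python-chess package; this assumes both are installed and is not how the test profile itself measures.

    import chess
    import chess.engine

    # Assumes python-chess and a "stockfish" binary on PATH.
    engine = chess.engine.SimpleEngine.popen_uci("stockfish")
    engine.configure({"Threads": 32})     # match the 16-core / 32-thread CPU here
    info = engine.analyse(chess.Board(), chess.engine.Limit(time=10))
    print("nodes per second:", info.get("nps"))
    engine.quit()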

OpenBenchmarking.orgNodes Per Second, More Is BetterStockfish 12Total TimeEPYC 7F52Linux 5.10.38M16M24M32M40MSE +/- 300939.62, N = 3SE +/- 178225.83, N = 336388251363838161. (CXX) g++ options: -m64 -lpthread -fno-exceptions -std=c++17 -pedantic -O3 -msse -msse3 -mpopcnt -msse4.1 -mssse3 -msse2 -flto -flto=jobserver
OpenBenchmarking.orgNodes Per Second, More Is BetterStockfish 12Total TimeEPYC 7F52Linux 5.10.36M12M18M24M30MMin: 35813405 / Avg: 36388251.33 / Max: 36830134Min: 36202829 / Avg: 36383815.67 / Max: 367402531. (CXX) g++ options: -m64 -lpthread -fno-exceptions -std=c++17 -pedantic -O3 -msse -msse3 -mpopcnt -msse4.1 -mssse3 -msse2 -flto -flto=jobserver

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: MMAPEPYC 7F52Linux 5.10.350100150200250SE +/- 0.32, N = 3SE +/- 0.19, N = 3229.80248.171. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: MMAPEPYC 7F52Linux 5.10.34080120160200Min: 229.19 / Avg: 229.8 / Max: 230.29Min: 247.79 / Avg: 248.17 / Max: 248.371. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: NUMAEPYC 7F52Linux 5.10.390180270360450SE +/- 2.52, N = 3SE +/- 0.07, N = 3409.25416.601. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: NUMAEPYC 7F52Linux 5.10.370140210280350Min: 404.24 / Avg: 409.25 / Max: 412.31Min: 416.48 / Avg: 416.6 / Max: 416.721. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: MEMFDEPYC 7F52Linux 5.10.3150300450600750SE +/- 0.22, N = 3SE +/- 0.29, N = 3680.78712.741. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: MEMFDEPYC 7F52Linux 5.10.3130260390520650Min: 680.4 / Avg: 680.78 / Max: 681.16Min: 712.18 / Avg: 712.74 / Max: 713.151. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: AtomicEPYC 7F52Linux 5.10.3110K220K330K440K550KSE +/- 436.80, N = 3SE +/- 203.22, N = 3512936.21510793.921. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: AtomicEPYC 7F52Linux 5.10.390K180K270K360K450KMin: 512387.81 / Avg: 512936.21 / Max: 513799.33Min: 510444.26 / Avg: 510793.92 / Max: 511148.21. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: CryptoEPYC 7F52Linux 5.10.310002000300040005000SE +/- 0.84, N = 3SE +/- 5.74, N = 34565.974555.431. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: CryptoEPYC 7F52Linux 5.10.38001600240032004000Min: 4565.13 / Avg: 4565.97 / Max: 4567.64Min: 4543.96 / Avg: 4555.43 / Max: 4561.411. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: MallocEPYC 7F52Linux 5.10.370M140M210M280M350MSE +/- 811855.41, N = 3SE +/- 693009.74, N = 3332554816.83332331122.531. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: MallocEPYC 7F52Linux 5.10.360M120M180M240M300MMin: 331019569.63 / Avg: 332554816.83 / Max: 333780250.03Min: 330945590.83 / Avg: 332331122.53 / Max: 333055730.621. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: ForkingEPYC 7F52Linux 5.10.312K24K36K48K60KSE +/- 229.33, N = 3SE +/- 139.19, N = 356181.2844312.121. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: ForkingEPYC 7F52Linux 5.10.310K20K30K40K50KMin: 55872.64 / Avg: 56181.28 / Max: 56629.43Min: 44117.52 / Avg: 44312.12 / Max: 44581.811. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: SENDFILEEPYC 7F52Linux 5.10.360K120K180K240K300KSE +/- 100.47, N = 3SE +/- 302.83, N = 3297122.81280154.741. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: SENDFILEEPYC 7F52Linux 5.10.350K100K150K200K250KMin: 296962.2 / Avg: 297122.81 / Max: 297307.7Min: 279577.24 / Avg: 280154.74 / Max: 280601.561. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: CPU CacheEPYC 7F52Linux 5.10.31020304050SE +/- 1.52, N = 12SE +/- 1.40, N = 1544.8644.521. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: CPU CacheEPYC 7F52Linux 5.10.3918273645Min: 35.89 / Avg: 44.86 / Max: 52.99Min: 36.39 / Avg: 44.52 / Max: 54.091. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: CPU StressEPYC 7F52Linux 5.10.313002600390052006500SE +/- 22.12, N = 3SE +/- 5.69, N = 36244.336266.841. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: CPU StressEPYC 7F52Linux 5.10.311002200330044005500Min: 6200.79 / Avg: 6244.33 / Max: 6272.91Min: 6257.59 / Avg: 6266.84 / Max: 6277.211. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: SemaphoresEPYC 7F52Linux 5.10.3500K1000K1500K2000K2500KSE +/- 14921.24, N = 3SE +/- 2645.51, N = 32314681.132278162.651. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: SemaphoresEPYC 7F52Linux 5.10.3400K800K1200K1600K2000KMin: 2284970.57 / Avg: 2314681.13 / Max: 2331963.83Min: 2273647.24 / Avg: 2278162.65 / Max: 2282808.761. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: Matrix MathEPYC 7F52Linux 5.10.317K34K51K68K85KSE +/- 117.38, N = 3SE +/- 608.49, N = 377530.4876518.791. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: Matrix MathEPYC 7F52Linux 5.10.313K26K39K52K65KMin: 77300.25 / Avg: 77530.48 / Max: 77685.33Min: 75895.92 / Avg: 76518.79 / Max: 77735.651. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: Vector MathEPYC 7F52Linux 5.10.330K60K90K120K150KSE +/- 6.50, N = 3SE +/- 19.82, N = 3142981.97142907.671. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: Vector MathEPYC 7F52Linux 5.10.320K40K60K80K100KMin: 142971.08 / Avg: 142981.97 / Max: 142993.55Min: 142868.09 / Avg: 142907.67 / Max: 142929.441. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: Memory CopyingEPYC 7F52Linux 5.10.314002800420056007000SE +/- 58.89, N = 3SE +/- 3.47, N = 36435.736274.431. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: Memory CopyingEPYC 7F52Linux 5.10.311002200330044005500Min: 6317.95 / Avg: 6435.73 / Max: 6495.19Min: 6267.56 / Avg: 6274.43 / Max: 6278.71. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: Socket ActivityEPYC 7F52Linux 5.10.32K4K6K8K10KSE +/- 43.37, N = 3SE +/- 36.73, N = 310784.4010348.911. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: Socket ActivityEPYC 7F52Linux 5.10.32K4K6K8K10KMin: 10710.44 / Avg: 10784.4 / Max: 10860.64Min: 10288.33 / Avg: 10348.91 / Max: 10415.181. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: Context SwitchingEPYC 7F52Linux 5.10.32M4M6M8M10MSE +/- 27679.79, N = 3SE +/- 21287.84, N = 38409881.778245888.971. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: Context SwitchingEPYC 7F52Linux 5.10.31.5M3M4.5M6M7.5MMin: 8362250.79 / Avg: 8409881.77 / Max: 8458130.44Min: 8205143.3 / Avg: 8245888.97 / Max: 8276955.711. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: Glibc C String FunctionsEPYC 7F52Linux 5.10.3200K400K600K800K1000KSE +/- 2051.22, N = 3SE +/- 2853.73, N = 31144375.851143670.001. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: Glibc C String FunctionsEPYC 7F52Linux 5.10.3200K400K600K800K1000KMin: 1141681.39 / Avg: 1144375.85 / Max: 1148402.16Min: 1140177.61 / Avg: 1143670 / Max: 1149325.641. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: Glibc Qsort Data SortingEPYC 7F52Linux 5.10.360120180240300SE +/- 0.99, N = 3SE +/- 0.93, N = 3269.57268.941. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: Glibc Qsort Data SortingEPYC 7F52Linux 5.10.350100150200250Min: 267.9 / Avg: 269.57 / Max: 271.33Min: 268 / Avg: 268.94 / Max: 270.81. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: System V Message PassingEPYC 7F52Linux 5.10.32M4M6M8M10MSE +/- 128008.77, N = 15SE +/- 112749.98, N = 310610267.698395010.731. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: System V Message PassingEPYC 7F52Linux 5.10.32M4M6M8M10MMin: 9646937.38 / Avg: 10610267.69 / Max: 11308995.65Min: 8280189.87 / Avg: 8395010.73 / Max: 8620497.931. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

Sunflow Rendering System

This test runs benchmarks of the Sunflow Rendering System. The Sunflow Rendering System is an open-source render engine for photo-realistic image synthesis with a ray-tracing core. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterSunflow Rendering System 0.07.2Global Illumination + Image SynthesisEPYC 7F52Linux 5.10.30.18450.3690.55350.7380.9225SE +/- 0.008, N = 3SE +/- 0.013, N = 150.8200.818MIN: 0.56 / MAX: 1.43MIN: 0.58 / MAX: 1.49
OpenBenchmarking.orgSeconds, Fewer Is BetterSunflow Rendering System 0.07.2Global Illumination + Image SynthesisEPYC 7F52Linux 5.10.3246810Min: 0.81 / Avg: 0.82 / Max: 0.84Min: 0.73 / Avg: 0.82 / Max: 0.96

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 0.8Encoder Mode: Enc Mode 0 - Input: 1080pEPYC 7F52Linux 5.10.30.02630.05260.07890.10520.1315SE +/- 0.000, N = 3SE +/- 0.000, N = 30.1170.1161. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie
OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 0.8Encoder Mode: Enc Mode 0 - Input: 1080pEPYC 7F52Linux 5.10.312345Min: 0.12 / Avg: 0.12 / Max: 0.12Min: 0.12 / Avg: 0.12 / Max: 0.121. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 0.8Encoder Mode: Enc Mode 4 - Input: 1080pEPYC 7F52Linux 5.10.31.21232.42463.63694.84926.0615SE +/- 0.023, N = 3SE +/- 0.009, N = 35.3605.3881. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie
OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 0.8Encoder Mode: Enc Mode 4 - Input: 1080pEPYC 7F52Linux 5.10.3246810Min: 5.32 / Avg: 5.36 / Max: 5.4Min: 5.38 / Avg: 5.39 / Max: 5.411. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 0.8Encoder Mode: Enc Mode 8 - Input: 1080pEPYC 7F52Linux 5.10.3918273645SE +/- 0.07, N = 3SE +/- 0.06, N = 338.5339.011. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie
OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 0.8Encoder Mode: Enc Mode 8 - Input: 1080pEPYC 7F52Linux 5.10.3816243240Min: 38.39 / Avg: 38.53 / Max: 38.63Min: 38.94 / Avg: 39.01 / Max: 39.131. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.1Tuning: VMAF Optimized - Input: Bosphorus 1080pEPYC 7F52Linux 5.10.360120180240300SE +/- 0.71, N = 3SE +/- 2.05, N = 3248.38255.721. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.1Tuning: VMAF Optimized - Input: Bosphorus 1080pEPYC 7F52Linux 5.10.350100150200250Min: 247.12 / Avg: 248.38 / Max: 249.58Min: 251.68 / Avg: 255.72 / Max: 258.41. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.1Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080pEPYC 7F52Linux 5.10.360120180240300SE +/- 0.99, N = 3SE +/- 0.75, N = 3252.18264.011. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.1Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080pEPYC 7F52Linux 5.10.350100150200250Min: 250.21 / Avg: 252.18 / Max: 253.27Min: 263.04 / Avg: 264.01 / Max: 265.491. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.1Tuning: Visual Quality Optimized - Input: Bosphorus 1080pEPYC 7F52Linux 5.10.350100150200250SE +/- 1.03, N = 3SE +/- 0.66, N = 3203.98212.071. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.1Tuning: Visual Quality Optimized - Input: Bosphorus 1080pEPYC 7F52Linux 5.10.34080120160200Min: 202.84 / Avg: 203.98 / Max: 206.04Min: 210.75 / Avg: 212.07 / Max: 212.841. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

Tachyon

This is a test of Tachyon, a parallel ray-tracing system, run in its threaded mode and measuring the time to ray-trace a sample scene. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTachyon 0.99b6Total TimeEPYC 7F52Linux 5.10.31122334455SE +/- 0.36, N = 3SE +/- 0.39, N = 1547.2448.131. (CC) gcc options: -m64 -O3 -fomit-frame-pointer -ffast-math -ltachyon -lm -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterTachyon 0.99b6Total TimeEPYC 7F52Linux 5.10.31020304050Min: 46.59 / Avg: 47.24 / Max: 47.84Min: 46.86 / Avg: 48.13 / Max: 51.521. (CC) gcc options: -m64 -O3 -fomit-frame-pointer -ffast-math -ltachyon -lm -lpthread

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
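
The figures below are average inference times in microseconds for each model. A minimal sketch of timing a single TensorFlow Lite inference from Python; the model path and input are placeholders, and the test profile's own measurement method may differ.

    import time
    import numpy as np
    import tensorflow as tf

    # Placeholder model file; any .tflite model with a single input works the same way.
    interpreter = tf.lite.Interpreter(model_path="mobilenet_v1_1.0_224.tflite")
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
    start = time.perf_counter()
    interpreter.invoke()
    print(f"{(time.perf_counter() - start) * 1e6:.0f} us per inference")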

OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: SqueezeNetEPYC 7F52Linux 5.10.320K40K60K80K100KSE +/- 36.86, N = 3SE +/- 86.53, N = 3106296106510
OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: SqueezeNetEPYC 7F52Linux 5.10.320K40K60K80K100KMin: 106226 / Avg: 106296 / Max: 106351Min: 106367 / Avg: 106510.33 / Max: 106666

OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: Inception V4EPYC 7F52Linux 5.10.3300K600K900K1200K1500KSE +/- 1084.27, N = 3SE +/- 1266.68, N = 314945901499017
OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: Inception V4EPYC 7F52Linux 5.10.3300K600K900K1200K1500KMin: 1492620 / Avg: 1494590 / Max: 1496360Min: 1496660 / Avg: 1499016.67 / Max: 1501000

OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: NASNet MobileEPYC 7F52Linux 5.10.330K60K90K120K150KSE +/- 356.27, N = 3SE +/- 388.72, N = 3127275126619
OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: NASNet MobileEPYC 7F52Linux 5.10.320K40K60K80K100KMin: 126904 / Avg: 127274.67 / Max: 127987Min: 126190 / Avg: 126619 / Max: 127395

OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: Mobilenet FloatEPYC 7F52Linux 5.10.315K30K45K60K75KSE +/- 46.97, N = 3SE +/- 59.93, N = 368415.068630.9
OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: Mobilenet FloatEPYC 7F52Linux 5.10.312K24K36K48K60KMin: 68327 / Avg: 68414.97 / Max: 68487.5Min: 68521.9 / Avg: 68630.87 / Max: 68728.6

OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: Mobilenet QuantEPYC 7F52Linux 5.10.315K30K45K60K75KSE +/- 53.57, N = 3SE +/- 9.17, N = 369832.470037.4
OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: Mobilenet QuantEPYC 7F52Linux 5.10.312K24K36K48K60KMin: 69725.3 / Avg: 69832.43 / Max: 69886.1Min: 70019.5 / Avg: 70037.43 / Max: 70049.7

OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: Inception ResNet V2EPYC 7F52Linux 5.10.3300K600K900K1200K1500KSE +/- 1087.40, N = 3SE +/- 1103.67, N = 313439201346243
OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: Inception ResNet V2EPYC 7F52Linux 5.10.3200K400K600K800K1000KMin: 1341750 / Avg: 1343920 / Max: 1345130Min: 1344040 / Avg: 1346243.33 / Max: 1347460

Timed Apache Compilation

This test times how long it takes to build the Apache HTTPD web server. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Apache Compilation 2.4.41Time To CompileEPYC 7F52Linux 5.10.3510152025SE +/- 0.03, N = 3SE +/- 0.04, N = 322.4022.68
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Apache Compilation 2.4.41Time To CompileEPYC 7F52Linux 5.10.3510152025Min: 22.36 / Avg: 22.4 / Max: 22.45Min: 22.62 / Avg: 22.68 / Max: 22.75

Timed Clash Compilation

This test builds the clash-lang Haskell to VHDL/Verilog/SystemVerilog compiler with GHC 8.10.1. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Clash CompilationTime To CompileEPYC 7F52Linux 5.10.3100200300400500SE +/- 0.56, N = 3SE +/- 1.29, N = 3450.48450.38
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Clash CompilationTime To CompileEPYC 7F52Linux 5.10.380160240320400Min: 449.43 / Avg: 450.48 / Max: 451.37Min: 447.91 / Avg: 450.38 / Max: 452.27

Timed Eigen Compilation

This test times how long it takes to build all Eigen examples. The Eigen examples are compiled serially. Eigen is a C++ template library for linear algebra. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Eigen Compilation 3.3.9Time To CompileEPYC 7F52Linux 5.10.320406080100SE +/- 0.03, N = 3SE +/- 0.00, N = 383.4484.48
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Eigen Compilation 3.3.9Time To CompileEPYC 7F52Linux 5.10.31632486480Min: 83.41 / Avg: 83.44 / Max: 83.49Min: 84.47 / Avg: 84.48 / Max: 84.48

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed FFmpeg Compilation 4.2.2Time To CompileEPYC 7F52Linux 5.10.3816243240SE +/- 0.08, N = 3SE +/- 0.06, N = 333.8934.13
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed FFmpeg Compilation 4.2.2Time To CompileEPYC 7F52Linux 5.10.3714212835Min: 33.8 / Avg: 33.89 / Max: 34.05Min: 34.03 / Avg: 34.13 / Max: 34.24

Timed GDB GNU Debugger Compilation

This test times how long it takes to build the GNU Debugger (GDB) in a default configuration. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed GDB GNU Debugger Compilation 9.1Time To CompileEPYC 7F52Linux 5.10.320406080100SE +/- 0.04, N = 3SE +/- 0.11, N = 394.1896.98
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed GDB GNU Debugger Compilation 9.1Time To CompileEPYC 7F52Linux 5.10.320406080100Min: 94.1 / Avg: 94.18 / Max: 94.24Min: 96.78 / Avg: 96.98 / Max: 97.16

Timed HMMer Search

This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of the Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed HMMer Search 3.3.1Pfam Database SearchEPYC 7F52Linux 5.10.3306090120150SE +/- 0.02, N = 3SE +/- 0.30, N = 3131.05127.231. (CC) gcc options: -O3 -pthread -lhmmer -leasel -lm
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed HMMer Search 3.3.1Pfam Database SearchEPYC 7F52Linux 5.10.320406080100Min: 131 / Avg: 131.05 / Max: 131.08Min: 126.63 / Avg: 127.23 / Max: 127.541. (CC) gcc options: -O3 -pthread -lhmmer -leasel -lm

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Linux Kernel Compilation 5.4Time To CompileEPYC 7F52Linux 5.10.31020304050SE +/- 0.50, N = 4SE +/- 0.51, N = 445.1245.27
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Linux Kernel Compilation 5.4Time To CompileEPYC 7F52Linux 5.10.3918273645Min: 44.22 / Avg: 45.12 / Max: 46.53Min: 44.6 / Avg: 45.27 / Max: 46.78

Timed MAFFT Alignment

This test performs an alignment of 100 pyruvate decarboxylase sequences. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed MAFFT Alignment 7.471Multiple Sequence Alignment - LSU RNAEPYC 7F52Linux 5.10.33691215SE +/- 0.079, N = 3SE +/- 0.021, N = 39.0099.0331. (CC) gcc options: -std=c99 -O3 -lm -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed MAFFT Alignment 7.471Multiple Sequence Alignment - LSU RNAEPYC 7F52Linux 5.10.33691215Min: 8.91 / Avg: 9.01 / Max: 9.17Min: 8.99 / Avg: 9.03 / Max: 9.061. (CC) gcc options: -std=c99 -O3 -lm -lpthread

Timed MPlayer Compilation

This test times how long it takes to build the MPlayer open-source media player program. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed MPlayer Compilation 1.4Time To CompileEPYC 7F52Linux 5.10.3510152025SE +/- 0.05, N = 3SE +/- 0.02, N = 320.3820.43
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed MPlayer Compilation 1.4Time To CompileEPYC 7F52Linux 5.10.3510152025Min: 20.29 / Avg: 20.38 / Max: 20.47Min: 20.39 / Avg: 20.43 / Max: 20.46

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterTNN 0.2.3Target: CPU - Model: MobileNet v2EPYC 7F52Linux 5.10.360120180240300SE +/- 0.53, N = 3SE +/- 0.42, N = 3274.97275.51MIN: 272.73 / MAX: 289.81MIN: 272.98 / MAX: 294.911. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl
OpenBenchmarking.orgms, Fewer Is BetterTNN 0.2.3Target: CPU - Model: MobileNet v2EPYC 7F52Linux 5.10.350100150200250Min: 274.23 / Avg: 274.97 / Max: 275.99Min: 274.94 / Avg: 275.51 / Max: 276.321. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

OpenBenchmarking.orgms, Fewer Is BetterTNN 0.2.3Target: CPU - Model: SqueezeNet v1.1EPYC 7F52Linux 5.10.360120180240300SE +/- 0.77, N = 3SE +/- 0.25, N = 3263.04264.36MIN: 260.98 / MAX: 265.86MIN: 261.25 / MAX: 266.061. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl
OpenBenchmarking.orgms, Fewer Is BetterTNN 0.2.3Target: CPU - Model: SqueezeNet v1.1EPYC 7F52Linux 5.10.350100150200250Min: 261.76 / Avg: 263.04 / Max: 264.43Min: 264.07 / Avg: 264.36 / Max: 264.861. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

Unpacking Firefox

This simple test profile measures how long it takes to extract the .tar.xz source package of the Mozilla Firefox Web Browser. Learn more via the OpenBenchmarking.org test page.

Unpacking Firefox 84.0 - Extracting: firefox-84.0.source.tar.xz (Seconds, Fewer Is Better)
  EPYC 7F52:    20.43  (SE +/- 0.06, N = 4; Min: 20.27 / Avg: 20.43 / Max: 20.54)
  Linux 5.10.3: 20.48  (SE +/- 0.03, N = 4; Min: 20.39 / Avg: 20.48 / Max: 20.55)
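This workload is easy to approximate with nothing but the Python standard library: extract the .tar.xz archive and measure the wall time, as in the sketch below. The archive and destination paths are placeholders.

    # Hedged sketch: time extraction of a .tar.xz source archive.
    import tarfile, time

    archive = "firefox-84.0.source.tar.xz"   # placeholder path to the archive
    start = time.perf_counter()
    with tarfile.open(archive, "r:xz") as tar:
        tar.extractall(path="firefox-src")
    print(f"Extraction time: {time.perf_counter() - start:.2f} s")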

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9/WebM format using a sample 1080p video. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.8.2 - Speed: Speed 0 (Frames Per Second, More Is Better)
  EPYC 7F52:    7.16  (SE +/- 0.01, N = 3; Min: 7.14 / Avg: 7.16 / Max: 7.18)
  Linux 5.10.3: 7.20  (SE +/- 0.00, N = 3; Min: 7.2 / Avg: 7.2 / Max: 7.2)
  1. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=c++11

VP9 libvpx Encoding 1.8.2 - Speed: Speed 5 (Frames Per Second, More Is Better)
  EPYC 7F52:    23.08  (SE +/- 0.05, N = 3; Min: 22.99 / Avg: 23.08 / Max: 23.14)
  Linux 5.10.3: 23.40  (SE +/- 0.10, N = 3; Min: 23.21 / Avg: 23.4 / Max: 23.51)
  1. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=c++11
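The two speed settings above map to vpxenc's --cpu-used control. The sketch below shows one plausible invocation and how frames per second could be derived from wall time; the input clip, frame count, and remaining options are assumptions, not the test profile's exact command line.

    # Hedged sketch: time a VP9 encode with vpxenc and report FPS.
    # Assumes vpxenc (libvpx) is installed and a 1080p Y4M clip is available.
    import subprocess, time

    clip, frames = "sample_1080p.y4m", 600   # placeholder input and frame count
    for speed in (0, 5):
        cmd = ["vpxenc", "--codec=vp9", "--good", f"--cpu-used={speed}",
               "-o", f"out_speed{speed}.webm", clip]
        start = time.perf_counter()
        subprocess.run(cmd, check=True)
        print(f"Speed {speed}: {frames / (time.perf_counter() - start):.2f} FPS")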

WavPack Audio Encoding

This test times how long it takes to encode a sample WAV file to WavPack format with very high quality settings. Learn more via the OpenBenchmarking.org test page.

WavPack Audio Encoding 5.3 - WAV To WavPack (Seconds, Fewer Is Better)
  EPYC 7F52:    13.75  (SE +/- 0.01, N = 5; Min: 13.73 / Avg: 13.75 / Max: 13.8)
  Linux 5.10.3: 13.74  (SE +/- 0.01, N = 5; Min: 13.73 / Avg: 13.74 / Max: 13.78)
  1. (CXX) g++ options: -rdynamic

WebP Image Encode

This is a test of Google's libwebp using the cwebp image encode utility with a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Default (Encode Time - Seconds, Fewer Is Better)
  EPYC 7F52:    1.618  (SE +/- 0.001, N = 3; Min: 1.62 / Avg: 1.62 / Max: 1.62)
  Linux 5.10.3: 1.618  (SE +/- 0.001, N = 3; Min: 1.62 / Avg: 1.62 / Max: 1.62)
  1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

WebP Image Encode 1.1 - Encode Settings: Quality 100 (Encode Time - Seconds, Fewer Is Better)
  EPYC 7F52:    2.498  (SE +/- 0.001, N = 3; Min: 2.5 / Avg: 2.5 / Max: 2.5)
  Linux 5.10.3: 2.491  (SE +/- 0.000, N = 3; Min: 2.49 / Avg: 2.49 / Max: 2.49)
  1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless (Encode Time - Seconds, Fewer Is Better)
  EPYC 7F52:    17.50  (SE +/- 0.07, N = 3; Min: 17.37 / Avg: 17.5 / Max: 17.59)
  Linux 5.10.3: 17.57  (SE +/- 0.08, N = 3; Min: 17.41 / Avg: 17.57 / Max: 17.68)
  1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

WebP Image Encode 1.1 - Encode Settings: Quality 100, Highest Compression (Encode Time - Seconds, Fewer Is Better)
  EPYC 7F52:    7.732  (SE +/- 0.007, N = 3; Min: 7.72 / Avg: 7.73 / Max: 7.74)
  Linux 5.10.3: 7.716  (SE +/- 0.006, N = 3; Min: 7.71 / Avg: 7.72 / Max: 7.73)
  1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless, Highest Compression (Encode Time - Seconds, Fewer Is Better)
  EPYC 7F52:    36.31  (SE +/- 0.04, N = 3; Min: 36.23 / Avg: 36.31 / Max: 36.36)
  Linux 5.10.3: 36.28  (SE +/- 0.05, N = 3; Min: 36.18 / Avg: 36.28 / Max: 36.35)
  1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff
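The five encode settings above correspond to different cwebp options. The sketch below times one plausible mapping (quality, lossless mode, and compression method -m 6 for "highest compression"); the input image path is a placeholder and the test profile's exact flags may differ.

    # Hedged sketch: time cwebp under several encode settings.
    # Assumes cwebp (libwebp) is installed; sample.jpg is a placeholder input.
    import subprocess, time

    settings = {
        "Default": [],
        "Quality 100": ["-q", "100"],
        "Quality 100, Lossless": ["-q", "100", "-lossless"],
        "Quality 100, Highest Compression": ["-q", "100", "-m", "6"],
        "Quality 100, Lossless, Highest Compression": ["-q", "100", "-lossless", "-m", "6"],
    }
    for name, flags in settings.items():
        start = time.perf_counter()
        subprocess.run(["cwebp", *flags, "sample.jpg", "-o", "out.webp"], check=True)
        print(f"{name}: {time.perf_counter() - start:.3f} s")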

WireGuard + Linux Networking Stack Stress Test

This is a benchmark of the WireGuard secure VPN tunnel and a stress test of the Linux networking stack. The test runs on the local host but requires root permissions. It creates three network namespaces: ns0 has a loopback device, while ns1 and ns2 each have a WireGuard device. The two WireGuard devices send traffic through the loopback device of ns0, so the test exercises encryption and decryption at the same time -- a fairly CPU- and scheduler-heavy workload. Learn more via the OpenBenchmarking.org test page.

WireGuard + Linux Networking Stack Stress Test (Seconds, Fewer Is Better)
  EPYC 7F52:    293.87  (SE +/- 0.38, N = 3; Min: 293.12 / Avg: 293.87 / Max: 294.34)
  Linux 5.10.3: 301.90  (SE +/- 1.23, N = 3; Min: 300.62 / Avg: 301.9 / Max: 304.36)
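The topology described above can be approximated with iproute2 and wireguard-tools. The sketch below is a simplified illustration under the assumptions stated in its comments (the upstream netns.sh stress test does considerably more); the key detail it mirrors is that each WireGuard device is created in ns0 before being moved, so its UDP socket stays bound in ns0 and the tunnel traffic transits ns0's loopback.

    # Hedged sketch: rebuild the described three-namespace WireGuard topology.
    # Requires root plus iproute2 and wireguard-tools; names, addresses and
    # ports are illustrative. Each wg device is created in ns0 and then moved,
    # so its UDP socket stays in ns0 and tunnel traffic crosses ns0's loopback.
    import subprocess, tempfile

    def sh(*cmd, **kw):
        return subprocess.run(cmd, check=True, capture_output=True, text=True, **kw).stdout.strip()

    for ns in ("ns0", "ns1", "ns2"):
        sh("ip", "netns", "add", ns)
    sh("ip", "-n", "ns0", "link", "set", "lo", "up")

    keys = {ns: sh("wg", "genkey") for ns in ("ns1", "ns2")}
    pubs = {ns: sh("wg", "pubkey", input=keys[ns]) for ns in ("ns1", "ns2")}

    peers = {"ns1": ("wg1", "10.0.0.1/24", "10000", "ns2", "20000"),
             "ns2": ("wg2", "10.0.0.2/24", "20000", "ns1", "10000")}
    for ns, (dev, addr, port, peer, peer_port) in peers.items():
        sh("ip", "-n", "ns0", "link", "add", dev, "type", "wireguard")
        sh("ip", "-n", "ns0", "link", "set", dev, "netns", ns)
        sh("ip", "-n", ns, "addr", "add", addr, "dev", dev)
        with tempfile.NamedTemporaryFile("w") as keyfile:
            keyfile.write(keys[ns]); keyfile.flush()
            sh("ip", "netns", "exec", ns, "wg", "set", dev,
               "listen-port", port, "private-key", keyfile.name,
               "peer", pubs[peer], "allowed-ips", "10.0.0.0/24",
               "endpoint", f"127.0.0.1:{peer_port}")
        sh("ip", "-n", ns, "link", "set", dev, "up")
    # A traffic generator (e.g. iperf3 between 10.0.0.1 and 10.0.0.2) would follow here.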

x264

This is a simple test of the x264 encoder run on the CPU (OpenCL support disabled) with a sample video file. Learn more via the OpenBenchmarking.org test page.

x264 2019-12-17 - H.264 Video Encoding (Frames Per Second, More Is Better)
  EPYC 7F52:    162.77  (SE +/- 1.00, N = 3; Min: 160.93 / Avg: 162.77 / Max: 164.39)
  Linux 5.10.3: 163.65  (SE +/- 0.78, N = 3; Min: 162.08 / Avg: 163.65 / Max: 164.44)
  1. (CC) gcc options: -ldl -lavformat -lavcodec -lavutil -lswscale -m64 -lm -lpthread -O3 -ffast-math -std=gnu99 -fPIC -fomit-frame-pointer -fno-tree-vectorize

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K inputs to gauge H.265 video encode performance. Learn more via the OpenBenchmarking.org test page.

x265 3.4 - Video Input: Bosphorus 4K (Frames Per Second, More Is Better)
  EPYC 7F52:    20.93  (SE +/- 0.07, N = 3; Min: 20.83 / Avg: 20.93 / Max: 21.07)
  Linux 5.10.3: 21.22  (SE +/- 0.03, N = 3; Min: 21.17 / Avg: 21.22 / Max: 21.25)
  1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

x265 3.4 - Video Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  EPYC 7F52:    61.76  (SE +/- 0.06, N = 3; Min: 61.65 / Avg: 61.76 / Max: 61.86)
  Linux 5.10.3: 62.27  (SE +/- 0.12, N = 3; Min: 62.05 / Avg: 62.27 / Max: 62.44)
  1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
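A comparable standalone run of the x265 CLI is sketched below; the clip file names, frame counts, and preset are placeholders rather than the test profile's exact arguments.

    # Hedged sketch: time x265 encodes of 1080p and 4K clips and report FPS.
    # Assumes the x265 CLI is installed and Y4M versions of the clips exist.
    import subprocess, time

    clips = {"Bosphorus 1080p": ("Bosphorus_1080p.y4m", 600),
             "Bosphorus 4K": ("Bosphorus_4K.y4m", 600)}   # placeholder frame counts
    for name, (path, frames) in clips.items():
        start = time.perf_counter()
        subprocess.run(["x265", "--preset", "medium",
                        "--input", path, "--output", "out.hevc"], check=True)
        print(f"{name}: {frames / (time.perf_counter() - start):.2f} FPS")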

XZ Compression

This test measures the time needed to compress a sample file (an Ubuntu file-system image) using XZ compression. Learn more via the OpenBenchmarking.org test page.

XZ Compression 5.2.4 - Compressing ubuntu-16.04.3-server-i386.img, Compression Level 9 (Seconds, Fewer Is Better)
  EPYC 7F52:    20.99  (SE +/- 0.05, N = 3; Min: 20.95 / Avg: 20.99 / Max: 21.09)
  Linux 5.10.3: 21.23  (SE +/- 0.06, N = 3; Min: 21.17 / Avg: 21.23 / Max: 21.35)
  1. (CC) gcc options: -pthread -fvisibility=hidden -O2
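The same operation can be approximated directly from Python's standard library, which wraps liblzma: read the image and compress it at preset 9 while timing the call. The input path is a placeholder, and the xz CLI the test drives may parallelize differently than this single-stream sketch.

    # Hedged sketch: time single-stream XZ (LZMA) compression at preset 9.
    # Uses Python's lzma module (liblzma); the input path is a placeholder.
    import lzma, time

    with open("ubuntu-16.04.3-server-i386.img", "rb") as f:
        data = f.read()
    start = time.perf_counter()
    compressed = lzma.compress(data, preset=9)
    elapsed = time.perf_counter() - start
    print(f"Compressed {len(data)} -> {len(compressed)} bytes in {elapsed:.2f} s")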

YafaRay

YafaRay is an open-source, physically based Monte Carlo ray-tracing engine. Learn more via the OpenBenchmarking.org test page.

YafaRay 3.4.1 - Total Time For Sample Scene (Seconds, Fewer Is Better)
  EPYC 7F52:    130.10  (SE +/- 0.76, N = 3; Min: 128.65 / Avg: 130.1 / Max: 131.24)
  Linux 5.10.3: 130.91  (SE +/- 0.50, N = 3; Min: 129.91 / Avg: 130.91 / Max: 131.41)
  1. (CXX) g++ options: -std=c++11 -O3 -ffast-math -rdynamic -ldl -lImath -lIlmImf -lIex -lHalf -lz -lIlmThread -lxml2 -lfreetype -lpthread

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.5 - Compression Level: 3 (MB/s, More Is Better)
  EPYC 7F52:    8221.5  (SE +/- 33.83, N = 3; Min: 8157.8 / Avg: 8221.5 / Max: 8273.1)
  Linux 5.10.3: 8027.8  (SE +/- 75.92, N = 3; Min: 7924.3 / Avg: 8027.83 / Max: 8175.8)
  1. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression 1.4.5 - Compression Level: 19 (MB/s, More Is Better)
  EPYC 7F52:    76.8  (SE +/- 0.17, N = 3; Min: 76.5 / Avg: 76.8 / Max: 77.1)
  Linux 5.10.3: 74.7  (SE +/- 0.03, N = 3; Min: 74.7 / Avg: 74.73 / Max: 74.8)
  1. (CC) gcc options: -O3 -pthread -lz -llzma
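The zstd CLI ships a built-in benchmark mode that reports MB/s much like the results above, so a comparable measurement can be scripted as below; -b benchmarks a given level in place and -T0 uses all threads. The sample file name is a placeholder, and whether the test profile itself uses this mode is an assumption.

    # Hedged sketch: use zstd's built-in benchmark mode to measure MB/s.
    # Assumes the zstd CLI is installed; the ISO path is a placeholder sample file.
    import subprocess

    iso = "ubuntu-sample.iso"   # placeholder input file
    for level in (3, 19):
        # -b<level> benchmarks compression of the file, -T0 uses all threads
        result = subprocess.run(["zstd", f"-b{level}", "-T0", iso],
                                capture_output=True, text=True, check=True)
        print(f"Level {level}:", result.stdout.strip() or result.stderr.strip())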

301 Results Shown

7-Zip Compression
AI Benchmark Alpha:
  Device Inference Score
  Device Training Score
  Device AI Score
Aircrack-ng
AOM AV1:
  Speed 0 Two-Pass
  Speed 4 Two-Pass
  Speed 6 Realtime
  Speed 6 Two-Pass
  Speed 8 Realtime
asmFish
ASTC Encoder:
  Fast
  Medium
  Thorough
  Exhaustive
Blender:
  BMW27 - CPU-Only
  Classroom - CPU-Only
  Fishy Cat - CPU-Only
  Barbershop - CPU-Only
  Pabellon Barcelona - CPU-Only
BRL-CAD
Build2
BYTE Unix Benchmark
Caffe:
  AlexNet - CPU - 100
  AlexNet - CPU - 200
  GoogleNet - CPU - 100
  GoogleNet - CPU - 200
Chaos Group V-RAY
CLOMP
Coremark
Crafty
Darmstadt Automotive Parallel Heterogeneous Suite:
  OpenMP - NDT Mapping
  OpenMP - Points2Image
  OpenMP - Euclidean Cluster
dav1d:
  Chimera 1080p
  Summer Nature 4K
  Summer Nature 1080p
  Chimera 1080p 10-bit
DeepSpeech
ECP-CANDLE:
  P1B2
  P3B1
  P3B2
Embree:
  Pathtracer - Crown
  Pathtracer ISPC - Crown
  Pathtracer - Asian Dragon
  Pathtracer - Asian Dragon Obj
  Pathtracer ISPC - Asian Dragon
  Pathtracer ISPC - Asian Dragon Obj
eSpeak-NG Speech Engine
FFTE
FLAC Audio Encoding
GNU Octave Benchmark
GPAW
GraphicsMagick:
  Swirl
  Rotate
  Sharpen
  Enhanced
  Resizing
  Noise-Gaussian
  HWB Color Space
GROMACS
Hierarchical INTegration
Hugin
IndigoBench:
  CPU - Bedroom
  CPU - Supercar
InfluxDB:
  4 - 10000 - 2,5000,1 - 10000
  64 - 10000 - 2,5000,1 - 10000
Intel Open Image Denoise
John The Ripper:
  Blowfish
  MD5
KeyDB
Kvazaar:
  Bosphorus 4K - Slow
  Bosphorus 4K - Medium
  Bosphorus 1080p - Slow
  Bosphorus 1080p - Medium
  Bosphorus 4K - Very Fast
  Bosphorus 4K - Ultra Fast
  Bosphorus 1080p - Very Fast
  Bosphorus 1080p - Ultra Fast
LAME MP3 Encoding
LAMMPS Molecular Dynamics Simulator:
  20k Atoms
  Rhodopsin Protein
LibRaw
LibreOffice
librsvg
LuxCoreRender:
  DLSC
  Rainbow Colors and Prism
LZ4 Compression:
  1 - Compression Speed
  1 - Decompression Speed
  3 - Compression Speed
  3 - Decompression Speed
  9 - Compression Speed
  9 - Decompression Speed
Mlpack Benchmark:
  scikit_ica
  scikit_qda
  scikit_svm
  scikit_linearridgeregression
Mobile Neural Network:
  SqueezeNetV1.0
  resnet-v2-50
  MobileNetV2_224
  mobilenet-v1-1.0
  inception-v3
Monkey Audio Encoding
Monte Carlo Simulations of Ionised Nebulae
NAMD
NCNN:
  CPU - mobilenet
  CPU-v2-v2 - mobilenet-v2
  CPU-v3-v3 - mobilenet-v3
  CPU - shufflenet-v2
  CPU - mnasnet
  CPU - efficientnet-b0
  CPU - blazeface
  CPU - googlenet
  CPU - vgg16
  CPU - resnet18
  CPU - alexnet
  CPU - resnet50
  CPU - yolov4-tiny
  CPU - squeezenet_ssd
  CPU - regnety_400m
Node.js V8 Web Tooling Benchmark
Numenta Anomaly Benchmark:
  EXPoSE
  Relative Entropy
  Windowed Gaussian
  Earthgecko Skyline
  Bayesian Changepoint
Numpy Benchmark
OCRMyPDF
Ogg Audio Encoding
oneDNN:
  IP Shapes 1D - f32 - CPU
  IP Shapes 3D - f32 - CPU
  IP Shapes 1D - u8s8f32 - CPU
  IP Shapes 3D - u8s8f32 - CPU
  Convolution Batch Shapes Auto - f32 - CPU
  Deconvolution Batch shapes_1d - f32 - CPU
  Deconvolution Batch shapes_3d - f32 - CPU
  Convolution Batch Shapes Auto - u8s8f32 - CPU
  Deconvolution Batch shapes_1d - u8s8f32 - CPU
  Deconvolution Batch shapes_3d - u8s8f32 - CPU
  Recurrent Neural Network Training - f32 - CPU
  Recurrent Neural Network Inference - f32 - CPU
  Recurrent Neural Network Training - u8s8f32 - CPU
  Recurrent Neural Network Inference - u8s8f32 - CPU
  Matrix Multiply Batch Shapes Transformer - f32 - CPU
  Recurrent Neural Network Training - bf16bf16bf16 - CPU
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU
  Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU
Open Porous Media:
  Flow MPI Norne - 1
  Flow MPI Norne - 2
  Flow MPI Norne - 4
  Flow MPI Norne - 8
  Flow MPI Norne - 16
OpenSSL
OpenVINO:
  Face Detection 0106 FP16 - CPU:
    FPS
    ms
  Face Detection 0106 FP32 - CPU:
    FPS
    ms
  Person Detection 0106 FP16 - CPU:
    FPS
    ms
  Person Detection 0106 FP32 - CPU:
    FPS
    ms
  Age Gender Recognition Retail 0013 FP16 - CPU:
    FPS
    ms
  Age Gender Recognition Retail 0013 FP32 - CPU:
    FPS
    ms
OpenVKL:
  vklBenchmark
  vklBenchmarkVdbVolume
  vklBenchmarkStructuredVolume
  vklBenchmarkUnstructuredVolume
Opus Codec Encoding
PHPBench
PlaidML:
  No - Inference - VGG16 - CPU
  No - Inference - VGG19 - CPU
  No - Inference - IMDB LSTM - CPU
  No - Inference - Mobilenet - CPU
  No - Inference - ResNet 50 - CPU
  No - Inference - DenseNet 201 - CPU
  No - Inference - Inception V3 - CPU
  No - Inference - NASNer Large - CPU
Polyhedron Fortran Benchmarks:
  ac
  air
  mdbx
  doduc
  linpk
  tfft2
  aermod
  rnflow
  induct2
  protein
  capacita
  channel2
  fatigue2
  gas_dyn2
  test_fpu2
  mp_prop_design
PostgreSQL pgbench:
  1 - 1 - Read Only
  1 - 1 - Read Only - Average Latency
  1 - 1 - Read Write
  1 - 1 - Read Write - Average Latency
  1 - 50 - Read Only
  1 - 50 - Read Only - Average Latency
  1 - 100 - Read Only
  1 - 100 - Read Only - Average Latency
  1 - 250 - Read Only
  1 - 250 - Read Only - Average Latency
  1 - 50 - Read Write
  1 - 50 - Read Write - Average Latency
  1 - 100 - Read Write
  1 - 100 - Read Write - Average Latency
  1 - 250 - Read Write
  1 - 250 - Read Write - Average Latency
PyPerformance:
  go
  2to3
  chaos
  float
  nbody
  pathlib
  raytrace
  json_loads
  crypto_pyaes
  regex_compile
  python_startup
  django_template
  pickle_pure_python
rav1e:
  1
  5
  6
  10
Redis:
  LPOP
  SADD
  LPUSH
  GET
  SET
RNNoise
simdjson:
  Kostya
  LargeRand
  PartialTweets
  DistinctUserID
SQLite Speedtest
Stockfish
Stress-NG:
  MMAP
  NUMA
  MEMFD
  Atomic
  Crypto
  Malloc
  Forking
  SENDFILE
  CPU Cache
  CPU Stress
  Semaphores
  Matrix Math
  Vector Math
  Memory Copying
  Socket Activity
  Context Switching
  Glibc C String Functions
  Glibc Qsort Data Sorting
  System V Message Passing
Sunflow Rendering System
SVT-AV1:
  Enc Mode 0 - 1080p
  Enc Mode 4 - 1080p
  Enc Mode 8 - 1080p
SVT-VP9:
  VMAF Optimized - Bosphorus 1080p
  PSNR/SSIM Optimized - Bosphorus 1080p
  Visual Quality Optimized - Bosphorus 1080p
Tachyon
TensorFlow Lite:
  SqueezeNet
  Inception V4
  NASNet Mobile
  Mobilenet Float
  Mobilenet Quant
  Inception ResNet V2
Timed Apache Compilation
Timed Clash Compilation
Timed Eigen Compilation
Timed FFmpeg Compilation
Timed GDB GNU Debugger Compilation
Timed HMMer Search
Timed Linux Kernel Compilation
Timed MAFFT Alignment
Timed MPlayer Compilation
TNN:
  CPU - MobileNet v2
  CPU - SqueezeNet v1.1
Unpacking Firefox
VP9 libvpx Encoding:
  Speed 0
  Speed 5
WavPack Audio Encoding
WebP Image Encode:
  Default
  Quality 100
  Quality 100, Lossless
  Quality 100, Highest Compression
  Quality 100, Lossless, Highest Compression
WireGuard + Linux Networking Stack Stress Test
x264
x265:
  Bosphorus 4K
  Bosphorus 1080p
XZ Compression
YafaRay
Zstd Compression:
  3
  19