AMD EPYC 7F52

AMD EPYC 7F52 16-Core testing with a Supermicro H11DSi-NT v2.00 (2.1 BIOS) and llvmpipe on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2012294-HA-AMDEPYC7F75
Test categories represented in this result file:

Audio Encoding 6 Tests
AV1 4 Tests
Bioinformatics 2 Tests
BLAS (Basic Linear Algebra Sub-Routine) Tests 2 Tests
C++ Boost Tests 2 Tests
Chess Test Suite 3 Tests
Timed Code Compilation 7 Tests
C/C++ Compiler Tests 31 Tests
Compression Tests 4 Tests
CPU Massive 38 Tests
Creator Workloads 37 Tests
Cryptography 3 Tests
Database Test Suite 5 Tests
Encoding 15 Tests
Fortran Tests 4 Tests
Game Development 4 Tests
HPC - High Performance Computing 25 Tests
Imaging 5 Tests
Common Kernel Benchmarks 5 Tests
Machine Learning 15 Tests
Molecular Dynamics 3 Tests
MPI Benchmarks 4 Tests
Multi-Core 39 Tests
NVIDIA GPU Compute 8 Tests
Intel oneAPI 5 Tests
OpenCV Tests 2 Tests
OpenMPI Tests 5 Tests
Productivity 3 Tests
Programmer / Developer System Benchmarks 12 Tests
Python 4 Tests
Raytracing 2 Tests
Renderers 6 Tests
Scientific Computing 9 Tests
Server 9 Tests
Server CPU Tests 20 Tests
Single-Threaded 11 Tests
Speech 3 Tests
Telephony 3 Tests
Video Encoding 9 Tests
Common Workstation Benchmarks 3 Tests

Runs in this result file:

EPYC 7F52: tested December 27 2020; test duration 1 Day, 3 Hours, 59 Minutes
Linux 5.10.3: tested December 28 2020; test duration 1 Day, 3 Hours, 47 Minutes


AMD EPYC 7F52 Benchmarks (OpenBenchmarking.org, Phoronix Test Suite)

Processor: AMD EPYC 7F52 16-Core @ 3.50GHz (16 Cores / 32 Threads)
Motherboard: Supermicro H11DSi-NT v2.00 (2.1 BIOS)
Chipset: AMD Starship/Matisse
Memory: 64GB
Disk: 280GB INTEL SSDPE21D280GA
Graphics: llvmpipe
Monitor: VE228
Network: 2 x Intel 10G X550T
OS: Ubuntu 20.04
Kernels: 5.8.0-050800rc6daily20200721-generic (x86_64) 20200720, 5.10.3-051003-generic (x86_64)
Desktop: GNOME Shell 3.36.1
Display Server: X Server 1.20.8
Display Driver: modesetting 1.20.8
OpenGL: 3.3 Mesa 20.0.4 (LLVM 9.0.1 128 bits)
Compiler: GCC 9.3.0
File-System: ext4
Screen Resolution: 1920x1080

System Logs:
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: acpi-cpufreq ondemand (Boost: Enabled)
- CPU Microcode: 0x8301034
- OpenJDK Runtime Environment (build 11.0.7+10-post-Ubuntu-3ubuntu1)
- Python 2.7.18rc1 + Python 3.8.2
- Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

[Comparison chart: EPYC 7F52 vs. Linux 5.10.3, relative deltas from baseline up to roughly +164%. Largest differences: Polyhedron tfft2 164.5%, oneDNN IP Shapes 3D u8s8f32 73%, Redis LPOP 56%, oneDNN Convolution Batch Shapes Auto f32 45.3%, oneDNN Matrix Multiply Batch Shapes Transformer f32 38.6%, oneDNN IP Shapes 3D f32 32.7%, Stress-NG Forking 26.8%, Stress-NG System V Message Passing 26.4%, oneDNN IP Shapes 1D f32 20.1%, oneDNN Deconvolution Batch shapes_1d/shapes_3d f32 17.9%/17.3%, oneDNN Convolution Batch Shapes Auto u8s8f32 12.1%, PyPerformance python_startup 12%, oneDNN Recurrent Neural Network Training 9.9-11%, oneDNN Recurrent Neural Network Inference 8.7-9.5%, Stress-NG MMAP 8%, Redis GET 7.6%, GraphicsMagick HWB Color Space 7%, with many further results differing by roughly 2-6%.]
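The percentage deltas in the chart above are simple relative ratios between the two runs' averages. A minimal sketch of the calculation, using the Polyhedron tfft2 result from this file (57.63 seconds on Linux 5.10.3 vs 21.79 seconds on the EPYC 7F52 run):

```python
def percent_delta(value, baseline):
    """Relative difference of `value` against `baseline`, in percent."""
    return (value / baseline - 1.0) * 100.0

# Polyhedron tfft2 (seconds, fewer is better): Linux 5.10.3 vs EPYC 7F52 run
delta = percent_delta(57.63, 21.79)
print(round(delta, 1))  # -> 164.5, matching the chart's "tfft2 164.5%"
```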

[Detailed results listing: per-test values for every benchmark in this comparison (Polyhedron Fortran, oneDNN, Redis, Stress-NG, PyPerformance, GraphicsMagick, OpenVKL, Kvazaar, SVT-VP9, PostgreSQL pgbench, NCNN, PlaidML, Mobile Neural Network, Blender, OpenVINO, TensorFlow Lite, and many others) for the EPYC 7F52 and Linux 5.10.3 runs. The individual graphs below repeat these values with standard errors and run counts.]
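The Phoronix Test Suite can condense a result set like this into a single overall geometric mean per run; the geometric mean is used rather than an arithmetic mean because individual tests report on wildly different scales. A minimal sketch (the sample values are made up):

```python
import math

def geometric_mean(values):
    """Geometric mean via logs, robust for values spanning many magnitudes."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# hypothetical normalized scores from three tests
print(round(geometric_mean([2.0, 8.0, 4.0]), 6))  # -> 4.0
```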

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks are a suite for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org, Seconds, Fewer Is Better
Polyhedron Fortran Benchmarks, Benchmark: tfft2
  Linux 5.10.3: 57.63
  EPYC 7F52: 21.79

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result reported is the total perf time. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library), and before that MKL-DNN, prior to being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org, ms, Fewer Is Better
oneDNN 2.0, Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU
  Linux 5.10.3: 1.335930 (SE +/- 0.004643, N = 3; Min: 1.33 / Avg: 1.34 / Max: 1.35; MIN: 1.28)
  EPYC 7F52: 0.772058 (SE +/- 0.010953, N = 3; Min: 0.75 / Avg: 0.77 / Max: 0.79; MIN: 0.72)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
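The SE +/- figures attached to each result are standard errors of the mean over N runs. A minimal sketch of the computation (the three sample values are hypothetical, not the actual run data):

```python
import math
import statistics

def standard_error(samples):
    """Standard error of the mean: sample standard deviation over sqrt(N)."""
    return statistics.stdev(samples) / math.sqrt(len(samples))

# three hypothetical run times in ms
print(round(standard_error([10.0, 11.0, 12.0]), 4))  # -> 0.5774
```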

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org, Requests Per Second, More Is Better
Redis 6.0.9, Test: LPOP
  Linux 5.10.3: 1228236.41 (SE +/- 7581.95, N = 3; Min: 1213747.62 / Avg: 1228236.41 / Max: 1239355.62)
  EPYC 7F52: 1915545.96 (SE +/- 19724.55, N = 15; Min: 1782759.38 / Avg: 1915545.96 / Max: 2012394.38)
  1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result reported is the total perf time. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library), and before that MKL-DNN, prior to being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org, ms, Fewer Is Better
oneDNN 2.0, Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU
  Linux 5.10.3: 4.78691 (SE +/- 0.02803, N = 3; Min: 4.74 / Avg: 4.79 / Max: 4.84; MIN: 4.62)
  EPYC 7F52: 3.29403 (SE +/- 0.01620, N = 3; Min: 3.28 / Avg: 3.29 / Max: 3.33; MIN: 3.12)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.org, ms, Fewer Is Better
oneDNN 2.0, Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU
  Linux 5.10.3: 0.937089 (SE +/- 0.009336, N = 3; Min: 0.92 / Avg: 0.94 / Max: 0.95; MIN: 0.89)
  EPYC 7F52: 0.675915 (SE +/- 0.003167, N = 3; Min: 0.67 / Avg: 0.68 / Max: 0.68; MIN: 0.64)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.org, ms, Fewer Is Better
oneDNN 2.0, Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU
  Linux 5.10.3: 3.14201 (SE +/- 0.00548, N = 3; Min: 3.13 / Avg: 3.14 / Max: 3.15; MIN: 3.1)
  EPYC 7F52: 2.36752 (SE +/- 0.01483, N = 3; Min: 2.34 / Avg: 2.37 / Max: 2.39; MIN: 2.3)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org, Bogo Ops/s, More Is Better
Stress-NG 0.11.07, Test: Forking
  Linux 5.10.3: 44312.12 (SE +/- 139.19, N = 3; Min: 44117.52 / Avg: 44312.12 / Max: 44581.81)
  EPYC 7F52: 56181.28 (SE +/- 229.33, N = 3; Min: 55872.64 / Avg: 56181.28 / Max: 56629.43)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc
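Stress-NG's Forking stressor measures how fast the kernel can create and reap processes. A rough Python sketch of the same fork/wait loop (POSIX-only; the resulting rate is nowhere near the tuned C stressor, it merely exercises the same syscalls):

```python
import os
import time

def fork_rate(duration=0.2):
    """Fork and immediately reap children for `duration` seconds; return forks/s."""
    count = 0
    end = time.monotonic() + duration
    while time.monotonic() < end:
        pid = os.fork()
        if pid == 0:
            os._exit(0)        # child: exit immediately without interpreter cleanup
        os.waitpid(pid, 0)     # parent: reap the child
        count += 1
    return count / duration
```

Calling fork_rate() on Linux gives a ballpark forks-per-second figure to set against the Bogo Ops/s numbers above.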

OpenBenchmarking.org, Bogo Ops/s, More Is Better
Stress-NG 0.11.07, Test: System V Message Passing
  Linux 5.10.3: 8395010.73 (SE +/- 112749.98, N = 3; Min: 8280189.87 / Avg: 8395010.73 / Max: 8620497.93)
  EPYC 7F52: 10610267.69 (SE +/- 128008.77, N = 15; Min: 9646937.38 / Avg: 10610267.69 / Max: 11308995.65)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result reported is the total perf time. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library), and before that MKL-DNN, prior to being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org, ms, Fewer Is Better
oneDNN 2.0, Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU
  Linux 5.10.3: 2.40810 (SE +/- 0.02630, N = 5; Min: 2.31 / Avg: 2.41 / Max: 2.47; MIN: 2.25)
  EPYC 7F52: 2.00519 (SE +/- 0.01148, N = 3; Min: 1.98 / Avg: 2.01 / Max: 2.02; MIN: 1.87)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.org, ms, Fewer Is Better
oneDNN 2.0, Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU
  Linux 5.10.3: 3.25673 (SE +/- 0.04703, N = 15; Min: 3.01 / Avg: 3.26 / Max: 3.5; MIN: 2.89)
  EPYC 7F52: 2.76321 (SE +/- 0.01184, N = 3; Min: 2.75 / Avg: 2.76 / Max: 2.79; MIN: 2.65)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.org, ms, Fewer Is Better
oneDNN 2.0, Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU
  Linux 5.10.3: 4.73706 (SE +/- 0.04867, N = 15; Min: 4.51 / Avg: 4.74 / Max: 5.07; MIN: 4.42)
  EPYC 7F52: 4.03801 (SE +/- 0.05484, N = 3; Min: 3.94 / Avg: 4.04 / Max: 4.13; MIN: 3.84)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.org, ms, Fewer Is Better
oneDNN 2.0, Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU
  Linux 5.10.3: 6.22877 (SE +/- 0.01174, N = 3; Min: 6.21 / Avg: 6.23 / Max: 6.25; MIN: 6.14)
  EPYC 7F52: 5.55682 (SE +/- 0.07217, N = 3; Min: 5.41 / Avg: 5.56 / Max: 5.63; MIN: 5.13)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org, Milliseconds, Fewer Is Better
PyPerformance 1.0.0, Benchmark: python_startup
  Linux 5.10.3: 8.69 (SE +/- 0.04, N = 3; Min: 8.62 / Avg: 8.69 / Max: 8.74)
  EPYC 7F52: 7.76 (SE +/- 0.01, N = 3; Min: 7.75 / Avg: 7.76 / Max: 7.77)
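PyPerformance's python_startup benchmark times how long a bare interpreter takes to start and exit. A minimal sketch of the same measurement using the subprocess module:

```python
import subprocess
import sys
import time

def python_startup_ms(runs=3):
    """Best-of-N wall time, in ms, for `python -c pass` to start and exit."""
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run([sys.executable, "-c", "pass"], check=True)
        best = min(best, time.perf_counter() - start)
    return best * 1000.0
```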

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result reported is the total perf time. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library), and before that MKL-DNN, prior to being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org, ms, Fewer Is Better
oneDNN 2.0, Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU
  Linux 5.10.3: 2211.73 (SE +/- 4.81, N = 3; Min: 2202.89 / Avg: 2211.73 / Max: 2219.44; MIN: 2193.8)
  EPYC 7F52: 1992.60 (SE +/- 6.20, N = 3; Min: 1983.76 / Avg: 1992.6 / Max: 2004.54; MIN: 1974.03)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.org, ms, Fewer Is Better
oneDNN 2.0, Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU
  Linux 5.10.3: 2220.50 (SE +/- 10.97, N = 3; Min: 2201.74 / Avg: 2220.5 / Max: 2239.72; MIN: 2191.85)
  EPYC 7F52: 2006.80 (SE +/- 1.98, N = 3; Min: 2004.52 / Avg: 2006.8 / Max: 2010.75; MIN: 1996.47)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.org, ms, Fewer Is Better
oneDNN 2.0, Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU
  Linux 5.10.3: 2192.82 (SE +/- 9.42, N = 3; Min: 2180.99 / Avg: 2192.82 / Max: 2211.44; MIN: 2169.84)
  EPYC 7F52: 1994.62 (SE +/- 7.07, N = 3; Min: 1980.48 / Avg: 1994.62 / Max: 2001.72; MIN: 1976.29)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.org, ms, Fewer Is Better
oneDNN 2.0, Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU
  Linux 5.10.3: 1169.57 (SE +/- 1.69, N = 3; Min: 1166.19 / Avg: 1169.57 / Max: 1171.33; MIN: 1161.34)
  EPYC 7F52: 1068.48 (SE +/- 1.05, N = 3; Min: 1067.38 / Avg: 1068.48 / Max: 1070.58; MIN: 1062.66)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.org, ms, Fewer Is Better
oneDNN 2.0, Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU
  Linux 5.10.3: 1162.78 (SE +/- 9.31, N = 3; Min: 1144.15 / Avg: 1162.78 / Max: 1172.1; MIN: 1139.6)
  EPYC 7F52: 1069.39 (SE +/- 1.57, N = 3; Min: 1066.26 / Avg: 1069.39 / Max: 1071.25; MIN: 1062.1)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.org, ms, Fewer Is Better
oneDNN 2.0, Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU
  Linux 5.10.3: 1148.93 (SE +/- 11.57, N = 3; Min: 1136.89 / Avg: 1148.93 / Max: 1172.06; MIN: 1133.53)
  EPYC 7F52: 1057.42 (SE +/- 2.50, N = 3; Min: 1052.51 / Avg: 1057.42 / Max: 1060.67; MIN: 1047.72)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: MMAP (Bogo Ops/s, more is better)
  Linux 5.10.3: 248.17 (SE +/- 0.19, N = 3; Min: 247.79 / Avg: 248.17 / Max: 248.37)
  EPYC 7F52: 229.80 (SE +/- 0.32, N = 3; Min: 229.19 / Avg: 229.8 / Max: 230.29)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9 - Test: GET (Requests Per Second, more is better)
  Linux 5.10.3: 1630009.81 (SE +/- 18986.94, N = 15; Min: 1506265.12 / Avg: 1630009.81 / Max: 1724744.88)
  EPYC 7F52: 1753884.37 (SE +/- 22488.85, N = 15; Min: 1631634.62 / Avg: 1753884.37 / Max: 1923446.25)
  1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
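A hedged sketch of quantifying the gap between the two configurations from the reported averages (the figures are the Redis GET results above):

```python
# Reported Redis 6.0.9 GET averages, in requests per second.
linux_5_10_3 = 1630009.81
epyc_7f52 = 1753884.37

# Relative advantage of the faster configuration over the slower one.
gain_pct = (epyc_7f52 - linux_5_10_3) / linux_5_10_3 * 100
print(f"{gain_pct:.1f}%")  # 7.6%
```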

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: HWB Color Space (Iterations Per Minute, more is better)
  Linux 5.10.3: 1253 (SE +/- 1.86, N = 3; Min: 1249 / Avg: 1252.67 / Max: 1255)
  EPYC 7F52: 1171 (SE +/- 1.33, N = 3; Min: 1168 / Avg: 1170.67 / Max: 1172)
  1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library, which offers high-performance volume computation kernels and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 0.9 - Benchmark: vklBenchmarkVdbVolume (Items / Sec, more is better)
  Linux 5.10.3: 16277364.24 (SE +/- 19338.94, N = 3; Min: 16243824.43 / Avg: 16277364.24 / Max: 16310816.3; MIN: 790262 / MAX: 65640384)
  EPYC 7F52: 15263784.67 (SE +/- 100190.60, N = 3; Min: 15133963.18 / Avg: 15263784.67 / Max: 15460885.64; MIN: 798247 / MAX: 56683584)

Stress-NG

Stress-NG 0.11.07 - Test: SENDFILE (Bogo Ops/s, more is better)
  Linux 5.10.3: 280154.74 (SE +/- 302.83, N = 3; Min: 279577.24 / Avg: 280154.74 / Max: 280601.56)
  EPYC 7F52: 297122.81 (SE +/- 100.47, N = 3; Min: 296962.2 / Avg: 297122.81 / Max: 297307.7)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.0 - Video Input: Bosphorus 1080p - Video Preset: Ultra Fast (Frames Per Second, more is better)
  Linux 5.10.3: 110.36 (SE +/- 0.45, N = 3; Min: 109.57 / Avg: 110.36 / Max: 111.14)
  EPYC 7F52: 105.12 (SE +/- 0.58, N = 3; Min: 104.16 / Avg: 105.12 / Max: 106.18)
  1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

OpenVKL

OpenVKL 0.9 - Benchmark: vklBenchmarkStructuredVolume (Items / Sec, more is better)
  Linux 5.10.3: 72087200.32 (SE +/- 168051.69, N = 3; Min: 71796921.92 / Avg: 72087200.32 / Max: 72379063.57; MIN: 921866 / MAX: 575712792)
  EPYC 7F52: 68692259.88 (SE +/- 788054.78, N = 3; Min: 67452425.35 / Avg: 68692259.88 / Max: 70154910.05; MIN: 909007 / MAX: 535870728)

Stress-NG

Stress-NG 0.11.07 - Test: MEMFD (Bogo Ops/s, more is better)
  Linux 5.10.3: 712.74 (SE +/- 0.29, N = 3; Min: 712.18 / Avg: 712.74 / Max: 713.15)
  EPYC 7F52: 680.78 (SE +/- 0.22, N = 3; Min: 680.4 / Avg: 680.78 / Max: 681.16)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

SVT-VP9

This is a test of SVT-VP9, the Intel Open Visual Cloud Scalable Video Technology CPU-based multi-threaded video encoder for the VP9 video format, with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.1 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better)
  Linux 5.10.3: 264.01 (SE +/- 0.75, N = 3; Min: 263.04 / Avg: 264.01 / Max: 265.49)
  EPYC 7F52: 252.18 (SE +/- 0.99, N = 3; Min: 250.21 / Avg: 252.18 / Max: 253.27)
  1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

Stress-NG

Stress-NG 0.11.07 - Test: Socket Activity (Bogo Ops/s, more is better)
  Linux 5.10.3: 10348.91 (SE +/- 36.73, N = 3; Min: 10288.33 / Avg: 10348.91 / Max: 10415.18)
  EPYC 7F52: 10784.40 (SE +/- 43.37, N = 3; Min: 10710.44 / Avg: 10784.4 / Max: 10860.64)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

Redis

Redis 6.0.9 - Test: SADD (Requests Per Second, more is better)
  Linux 5.10.3: 1503178.00 (SE +/- 11341.57, N = 3; Min: 1481481.5 / Avg: 1503178 / Max: 1519756.88)
  EPYC 7F52: 1565518.88 (SE +/- 16740.24, N = 3; Min: 1541177.25 / Avg: 1565518.88 / Max: 1597597.5)
  1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Open Porous Media

This is a test of Open Porous Media, a set of open-source tools for simulating the flow and transport of fluids in porous media. This test profile depends upon MPI/Flow already being installed on the system. Install instructions at https://opm-project.org/?page_id=36. Learn more via the OpenBenchmarking.org test page.

Open Porous Media - OPM Benchmark: Flow MPI Norne - Threads: 8 (Seconds, fewer is better)
  Linux 5.10.3: 208.91 (SE +/- 0.17, N = 3; Min: 208.62 / Avg: 208.91 / Max: 209.2)
  EPYC 7F52: 217.41 (SE +/- 0.40, N = 3; Min: 216.79 / Avg: 217.41 / Max: 218.17)
  1. flow 2020.04

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench to facilitate the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 250 - Mode: Read Only - Average Latency (ms, fewer is better)
  Linux 5.10.3: 0.467 (SE +/- 0.001, N = 3; Min: 0.46 / Avg: 0.47 / Max: 0.47)
  EPYC 7F52: 0.449 (SE +/- 0.000, N = 3; Min: 0.45 / Avg: 0.45 / Max: 0.45)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

Open Porous Media

Open Porous Media - OPM Benchmark: Flow MPI Norne - Threads: 16 (Seconds, fewer is better)
  Linux 5.10.3: 348.11 (SE +/- 0.13, N = 3; Min: 347.85 / Avg: 348.1 / Max: 348.24)
  EPYC 7F52: 361.92 (SE +/- 0.72, N = 3; Min: 360.75 / Avg: 361.92 / Max: 363.24)
  1. flow 2020.04

SVT-VP9

SVT-VP9 0.1 - Tuning: Visual Quality Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better)
  Linux 5.10.3: 212.07 (SE +/- 0.66, N = 3; Min: 210.75 / Avg: 212.07 / Max: 212.84)
  EPYC 7F52: 203.98 (SE +/- 1.03, N = 3; Min: 202.84 / Avg: 203.98 / Max: 206.04)
  1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: squeezenet_ssd (ms, fewer is better)
  Linux 5.10.3: 21.06 (SE +/- 0.23, N = 12; Min: 20.14 / Avg: 21.06 / Max: 21.92; MIN: 19.62 / MAX: 77.67)
  EPYC 7F52: 21.89 (SE +/- 0.04, N = 15; Min: 21.66 / Avg: 21.89 / Max: 22.23; MIN: 21.44 / MAX: 101.39)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Kvazaar

Kvazaar 2.0 - Video Input: Bosphorus 1080p - Video Preset: Very Fast (Frames Per Second, more is better)
  Linux 5.10.3: 71.05 (SE +/- 0.30, N = 3; Min: 70.58 / Avg: 71.05 / Max: 71.61)
  EPYC 7F52: 68.39 (SE +/- 0.10, N = 3; Min: 68.19 / Avg: 68.39 / Max: 68.53)
  1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

PostgreSQL pgbench

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 250 - Mode: Read Only (TPS, more is better)
  Linux 5.10.3: 536412 (SE +/- 1662.47, N = 3; Min: 534022.19 / Avg: 536411.72 / Max: 539608.74)
  EPYC 7F52: 556825 (SE +/- 733.28, N = 3; Min: 555375.67 / Avg: 556824.57 / Max: 557745.6)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
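With a fixed number of clients, pgbench's TPS and average-latency results are two views of the same measurement: average latency is approximately clients / TPS. A quick cross-check (a sketch using the 250-client read-only figures reported on this page: 556,825 TPS and 0.449 ms for the EPYC 7F52 run):

```python
clients = 250
tps = 556825  # reported read-only transactions per second

# Each transaction occupies one of the 250 client connections,
# so the average per-transaction latency is clients / TPS.
latency_ms = clients / tps * 1000
print(round(latency_ms, 3))  # 0.449 -- matches the reported average latency
```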

Darmstadt Automotive Parallel Heterogeneous Suite

DAPHNE is the Darmstadt Automotive Parallel HeterogeNEous Benchmark Suite, providing OpenCL / CUDA / OpenMP test cases for automotive workloads used to evaluate programming models in the context of vehicle autonomous-driving capabilities. Learn more via the OpenBenchmarking.org test page.

Darmstadt Automotive Parallel Heterogeneous Suite - Backend: OpenMP - Kernel: Points2Image (Test Cases Per Minute, more is better)
  Linux 5.10.3: 21297.66 (SE +/- 123.09, N = 3; Min: 21056.25 / Avg: 21297.66 / Max: 21460.15)
  EPYC 7F52: 22093.91 (SE +/- 156.61, N = 15; Min: 21280.29 / Avg: 22093.91 / Max: 23658.56)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  Linux 5.10.3: 2.97264 (SE +/- 0.00487, N = 3; Min: 2.97 / Avg: 2.97 / Max: 2.98; MIN: 2.93)
  EPYC 7F52: 2.86890 (SE +/- 0.00136, N = 3; Min: 2.87 / Avg: 2.87 / Max: 2.87; MIN: 2.83)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

librsvg

RSVG/librsvg is an SVG vector graphics library. This test profile times how long it takes to complete various operations by rsvg-convert. Learn more via the OpenBenchmarking.org test page.

librsvg - Operation: SVG Files To PNG (Seconds, fewer is better)
  Linux 5.10.3: 25.08 (SE +/- 0.11, N = 3; Min: 24.94 / Avg: 25.08 / Max: 25.29)
  EPYC 7F52: 24.21 (SE +/- 0.05, N = 3; Min: 24.11 / Avg: 24.21 / Max: 24.3)
  1. rsvg-convert version 2.48.2

Redis

Redis 6.0.9 - Test: LPUSH (Requests Per Second, more is better)
  Linux 5.10.3: 1174489.56 (SE +/- 11678.97, N = 6; Min: 1132684 / Avg: 1174489.56 / Max: 1216700.75)
  EPYC 7F52: 1216222.50 (SE +/- 14085.60, N = 3; Min: 1192238.38 / Avg: 1216222.5 / Max: 1241012.38)
  1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: float (Milliseconds, fewer is better)
  Linux 5.10.3: 116 (SE +/- 0.33, N = 3; Min: 116 / Avg: 116.33 / Max: 117)
  EPYC 7F52: 120

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0 - Encoder Mode: Speed 0 Two-Pass (Frames Per Second, more is better)
  Linux 5.10.3: 0.30 (SE +/- 0.00, N = 3; Min: 0.3 / Avg: 0.3 / Max: 0.31)
  EPYC 7F52: 0.31 (SE +/- 0.00, N = 3; Min: 0.31 / Avg: 0.31 / Max: 0.31)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

PlaidML

This test profile uses the PlaidML deep learning framework, developed by Intel, to offer up various benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML - FP16: No - Mode: Inference - Network: ResNet 50 - Device: CPU (FPS, more is better)
  Linux 5.10.3: 5.96 (SE +/- 0.08, N = 3; Min: 5.86 / Avg: 5.96 / Max: 6.12)
  EPYC 7F52: 6.14 (SE +/- 0.00, N = 3; Min: 6.14 / Avg: 6.14 / Max: 6.14)

Timed HMMer Search

This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of the Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.

Timed HMMer Search 3.3.1 - Pfam Database Search (Seconds, fewer is better)
  Linux 5.10.3: 127.23 (SE +/- 0.30, N = 3; Min: 126.63 / Avg: 127.23 / Max: 127.54)
  EPYC 7F52: 131.05 (SE +/- 0.02, N = 3; Min: 131 / Avg: 131.05 / Max: 131.08)
  1. (CC) gcc options: -O3 -pthread -lhmmer -leasel -lm

Timed GDB GNU Debugger Compilation

This test times how long it takes to build the GNU Debugger (GDB) in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed GDB GNU Debugger Compilation 9.1 - Time To Compile (Seconds, fewer is better)
  Linux 5.10.3: 96.98 (SE +/- 0.11, N = 3; Min: 96.78 / Avg: 96.98 / Max: 97.16)
  EPYC 7F52: 94.18 (SE +/- 0.04, N = 3; Min: 94.1 / Avg: 94.18 / Max: 94.24)

SVT-VP9

SVT-VP9 0.1 - Tuning: VMAF Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better)
  Linux 5.10.3: 255.72 (SE +/- 2.05, N = 3; Min: 251.68 / Avg: 255.72 / Max: 258.4)
  EPYC 7F52: 248.38 (SE +/- 0.71, N = 3; Min: 247.12 / Avg: 248.38 / Max: 249.58)
  1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

PostgreSQL pgbench

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 1 - Mode: Read Only - Average Latency (ms, fewer is better)
  Linux 5.10.3: 0.036 (SE +/- 0.000, N = 3; Min: 0.04 / Avg: 0.04 / Max: 0.04)
  EPYC 7F52: 0.035 (SE +/- 0.000, N = 3; Min: 0.04 / Avg: 0.04 / Max: 0.04)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

GPAW

GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.

GPAW 20.1 - Input: Carbon Nanotube (Seconds, fewer is better)
  Linux 5.10.3: 114.13 (SE +/- 0.05, N = 3; Min: 114.03 / Avg: 114.13 / Max: 114.19)
  EPYC 7F52: 117.38 (SE +/- 1.45, N = 4; Min: 114.48 / Avg: 117.38 / Max: 120.52)
  1. (CC) gcc options: -pthread -shared -fwrapv -O2 -lxc -lblas -lmpi

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial timeseries data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

Numenta Anomaly Benchmark 1.1 - Detector: EXPoSE (Seconds, fewer is better)
  Linux 5.10.3: 778.16 (SE +/- 0.49, N = 3; Min: 777.34 / Avg: 778.16 / Max: 779.04)
  EPYC 7F52: 756.88 (SE +/- 2.18, N = 3; Min: 753.14 / Avg: 756.88 / Max: 760.71)

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.5 - Compression Level: 19 (MB/s, more is better)
  Linux 5.10.3: 74.7 (SE +/- 0.03, N = 3; Min: 74.7 / Avg: 74.73 / Max: 74.8)
  EPYC 7F52: 76.8 (SE +/- 0.17, N = 3; Min: 76.5 / Avg: 76.8 / Max: 77.1)
  1. (CC) gcc options: -O3 -pthread -lz -llzma

WireGuard + Linux Networking Stack Stress Test

This is a benchmark of the WireGuard secure VPN tunnel and a Linux networking stack stress test. The test runs on the local host but does require root permissions. It creates three namespaces: ns0 has a loopback device, while ns1 and ns2 each have WireGuard devices that send traffic through the loopback device of ns0. The end result is that the test exercises encryption and decryption at the same time -- a rather CPU- and scheduler-heavy workload. Learn more via the OpenBenchmarking.org test page.

WireGuard + Linux Networking Stack Stress Test (Seconds, fewer is better)
  Linux 5.10.3: 301.90 (SE +/- 1.23, N = 3; Min: 300.62 / Avg: 301.9 / Max: 304.36)
  EPYC 7F52: 293.87 (SE +/- 0.38, N = 3; Min: 293.12 / Avg: 293.87 / Max: 294.34)

simdjson

This is a benchmark of simdjson, a high-performance JSON parser. simdjson aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 0.7.1 - Throughput Test: LargeRandom (GB/s, more is better)
  Linux 5.10.3: 0.39 (SE +/- 0.00, N = 3; Min: 0.38 / Avg: 0.39 / Max: 0.39)
  EPYC 7F52: 0.38 (SE +/- 0.00, N = 3; Min: 0.38 / Avg: 0.38 / Max: 0.39)
  1. (CXX) g++ options: -O3 -pthread

Stress-NG

Stress-NG 0.11.07 - Test: Memory Copying (Bogo Ops/s, more is better)
  Linux 5.10.3: 6274.43 (SE +/- 3.47, N = 3; Min: 6267.56 / Avg: 6274.43 / Max: 6278.7)
  EPYC 7F52: 6435.73 (SE +/- 58.89, N = 3; Min: 6317.95 / Avg: 6435.73 / Max: 6495.19)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks are for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: test_fpu2 (Seconds, fewer is better)
  Linux 5.10.3: 31.32
  EPYC 7F52: 32.12

Zstd Compression

Zstd Compression 1.4.5 - Compression Level: 3 (MB/s, more is better)
  Linux 5.10.3: 8027.8 (SE +/- 75.92, N = 3; Min: 7924.3 / Avg: 8027.83 / Max: 8175.8)
  EPYC 7F52: 8221.5 (SE +/- 33.83, N = 3; Min: 8157.8 / Avg: 8221.5 / Max: 8273.1)
  1. (CC) gcc options: -O3 -pthread -lz -llzma
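The "Show Overall Geometric Mean" option at the top of this page aggregates results across tests with different units by normalizing each test to a ratio and taking the geometric mean, so no single test dominates the summary. A minimal sketch using only the two Zstd results (EPYC 7F52 relative to Linux 5.10.3; picking just these two tests is illustrative):

```python
import math

# Throughput ratios (EPYC 7F52 / Linux 5.10.3) for Zstd levels 19 and 3.
ratios = [76.8 / 74.7, 8221.5 / 8027.8]

# Geometric mean: nth root of the product of the n ratios.
geo_mean = math.prod(ratios) ** (1 / len(ratios))
print(round(geo_mean, 3))  # 1.026
```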

PyPerformance

PyPerformance 1.0.0 - Benchmark: pathlib (Milliseconds, fewer is better)
  Linux 5.10.3: 17.5 (SE +/- 0.00, N = 3)
  EPYC 7F52: 17.1 (SE +/- 0.00, N = 3)

ECP-CANDLE

The CANDLE benchmark codes implement deep learning architectures relevant to problems in cancer. These architectures address problems at different biological scales, specifically problems at the molecular, cellular and population scales. Learn more via the OpenBenchmarking.org test page.

ECP-CANDLE 0.3 - Benchmark: P1B2 (Seconds, fewer is better)
  Linux 5.10.3: 37.72
  EPYC 7F52: 38.59

PlaidML

PlaidML - FP16: No - Mode: Inference - Network: VGG16 - Device: CPU (FPS, more is better)
  Linux 5.10.3: 24.89 (SE +/- 0.24, N = 3; Min: 24.46 / Avg: 24.89 / Max: 25.29)
  EPYC 7F52: 24.35 (SE +/- 0.01, N = 3; Min: 24.33 / Avg: 24.35 / Max: 24.37)

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark 1.1 - Detector: Bayesian Changepoint (Seconds, fewer is better)
  Linux 5.10.3: 27.80 (SE +/- 0.09, N = 3; Min: 27.7 / Avg: 27.8 / Max: 27.98)
  EPYC 7F52: 27.21 (SE +/- 0.25, N = 3; Min: 26.74 / Avg: 27.21 / Max: 27.58)

PlaidML

PlaidML - FP16: No - Mode: Inference - Network: Inception V3 - Device: CPU (FPS, more is better)
  Linux 5.10.3: 10.17 (SE +/- 0.04, N = 3; Min: 10.1 / Avg: 10.17 / Max: 10.22)
  EPYC 7F52: 10.39 (SE +/- 0.00, N = 3; Min: 10.38 / Avg: 10.39 / Max: 10.39)

GraphicsMagick

GraphicsMagick 1.3.33 - Operation: Noise-Gaussian (Iterations Per Minute, more is better)
  Linux 5.10.3: 428 (SE +/- 0.33, N = 3; Min: 427 / Avg: 427.67 / Max: 428)
  EPYC 7F52: 419 (SE +/- 0.33, N = 3; Min: 418 / Avg: 418.67 / Max: 419)
  1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

Kvazaar

Kvazaar 2.0 - Video Input: Bosphorus 4K - Video Preset: Ultra Fast (Frames Per Second, more is better)
  Linux 5.10.3: 41.27 (SE +/- 0.05, N = 3; Min: 41.2 / Avg: 41.27 / Max: 41.36)
  EPYC 7F52: 40.41 (SE +/- 0.06, N = 3; Min: 40.3 / Avg: 40.41 / Max: 40.47)
  1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

ECP-CANDLE

ECP-CANDLE 0.3 - Benchmark: P3B1 (Seconds, fewer is better)
  Linux 5.10.3: 662.32
  EPYC 7F52: 648.70

PlaidML

PlaidML - FP16: No - Mode: Inference - Network: VGG19 - Device: CPU (FPS, more is better):
  Linux 5.10.3: 20.69 (SE +/- 0.10, N = 3; Min: 20.48 / Avg: 20.69 / Max: 20.8)
  EPYC 7F52: 20.27 (SE +/- 0.05, N = 3; Min: 20.19 / Avg: 20.27 / Max: 20.35)

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9 - Test: SET (Requests Per Second, more is better):
  Linux 5.10.3: 1323358.98 (SE +/- 10427.33, N = 15; Min: 1279017.88 / Avg: 1323358.98 / Max: 1398735.75)
  EPYC 7F52: 1350619.52 (SE +/- 15975.34, N = 15; Min: 1241012.38 / Avg: 1350619.52 / Max: 1434949.75)
  1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.
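For closed-loop pgbench runs like these, throughput and average latency are tied together: each of the C clients always has exactly one transaction in flight, so average latency is roughly C / TPS. A small sketch of that relationship, using figures from the read-only results in this report:

```python
def avg_latency_ms(clients, tps):
    # Closed-loop approximation: clients / throughput, converted to ms.
    return clients / tps * 1000.0

# 50 clients at ~501,047 TPS (scaling factor 1, read only) implies
# ~0.100 ms, consistent with the average-latency result in this report.
print(round(avg_latency_ms(50, 501047), 3))
```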

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 50 - Mode: Read Only - Average Latency (ms, fewer is better):
  Linux 5.10.3: 0.100 (SE +/- 0.001, N = 15; Min: 0.09 / Avg: 0.1 / Max: 0.1)
  EPYC 7F52: 0.102 (SE +/- 0.001, N = 3; Min: 0.1 / Avg: 0.1 / Max: 0.1)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.
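The compression-speed metric here is input bytes processed per second. A minimal sketch of that measurement, using Python's stdlib zlib as a stand-in (the actual test drives the LZ4 library, and the payload below is a hypothetical buffer rather than the Ubuntu ISO the test uses):

```python
import time
import zlib

def compress_speed_mbs(data, level):
    # Throughput in MB/s of input consumed, the same style of metric
    # the benchmark reports.
    start = time.perf_counter()
    zlib.compress(data, level)
    return len(data) / (time.perf_counter() - start) / 1e6

payload = bytes(range(256)) * 40_000  # ~10 MB of synthetic data
print(f"{compress_speed_mbs(payload, 9):.1f} MB/s")
```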

LZ4 Compression 1.9.3 - Compression Level: 9 - Compression Speed (MB/s, more is better):
  Linux 5.10.3: 52.81 (SE +/- 0.46, N = 3; Min: 51.9 / Avg: 52.81 / Max: 53.31)
  EPYC 7F52: 51.78 (SE +/- 0.32, N = 3; Min: 51.46 / Avg: 51.78 / Max: 52.41)
  1. (CC) gcc options: -O3

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
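Stress-NG's counters are "bogo ops": iterations of a synthetic stressor loop per unit time, meaningful only for relative comparisons. A toy illustration of the idea (not stress-ng's actual implementation): count semaphore round trips between two Python threads over a fixed window.

```python
import threading
import time

# Two semaphores let the main thread and a partner thread ping-pong;
# each completed round trip is one "bogo op".
a, b = threading.Semaphore(1), threading.Semaphore(0)
ops = 0
done = False

def partner():
    while not done:
        if b.acquire(timeout=0.1):
            a.release()

t = threading.Thread(target=partner)
t.start()
deadline = time.perf_counter() + 0.5
while time.perf_counter() < deadline:
    a.acquire()
    b.release()
    ops += 1
done = True
t.join()
print(f"{ops / 0.5:.0f} bogo ops/s")
```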

Stress-NG 0.11.07 - Test: Context Switching (Bogo Ops/s, more is better):
  Linux 5.10.3: 8245888.97 (SE +/- 21287.84, N = 3; Min: 8205143.3 / Avg: 8245888.97 / Max: 8276955.71)
  EPYC 7F52: 8409881.77 (SE +/- 27679.79, N = 3; Min: 8362250.79 / Avg: 8409881.77 / Max: 8458130.44)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 50 - Mode: Read Write (TPS, more is better):
  Linux 5.10.3: 4151 (SE +/- 1.61, N = 3; Min: 4149.57 / Avg: 4151.25 / Max: 4154.47)
  EPYC 7F52: 4231 (SE +/- 4.77, N = 3; Min: 4222.66 / Avg: 4230.61 / Max: 4239.14)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

simdjson

This is a benchmark of simdjson, a high-performance JSON parser. simdjson aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
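The GB/s figure measures bytes of JSON parsed per second of wall time. A self-contained sketch of that throughput computation, with Python's stdlib json standing in for simdjson and a hypothetical document in place of the kostya sample file:

```python
import json
import time

def parse_throughput_gbs(doc_bytes, iterations=50):
    # Parse the same document repeatedly; report GB/s of input consumed.
    start = time.perf_counter()
    for _ in range(iterations):
        json.loads(doc_bytes)
    elapsed = time.perf_counter() - start
    return len(doc_bytes) * iterations / elapsed / 1e9

# Hypothetical document standing in for the benchmark's sample input.
doc = json.dumps(
    [{"id": i, "name": f"item-{i}", "tags": ["a", "b"]} for i in range(2000)]
).encode()
print(f"{parse_throughput_gbs(doc):.2f} GB/s")
```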

simdjson 0.7.1 - Throughput Test: Kostya (GB/s, more is better):
  Linux 5.10.3: 0.53 (SE +/- 0.00, N = 3; Min: 0.53 / Avg: 0.53 / Max: 0.53)
  EPYC 7F52: 0.52 (SE +/- 0.00, N = 3; Min: 0.52 / Avg: 0.52 / Max: 0.53)
  1. (CXX) g++ options: -O3 -pthread

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 50 - Mode: Read Write - Average Latency (ms, fewer is better):
  Linux 5.10.3: 12.05 (SE +/- 0.00, N = 3; Min: 12.04 / Avg: 12.05 / Max: 12.05)
  EPYC 7F52: 11.82 (SE +/- 0.01, N = 3; Min: 11.8 / Avg: 11.82 / Max: 11.84)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: resnet50 (ms, fewer is better):
  Linux 5.10.3: 20.94 (SE +/- 0.04, N = 12; Min: 20.67 / Avg: 20.94 / Max: 21.05; observed MIN: 20.35 / MAX: 23.55)
  EPYC 7F52: 21.34 (SE +/- 0.05, N = 15; Min: 21.09 / Avg: 21.34 / Max: 21.87; observed MIN: 20.69 / MAX: 102.24)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Tachyon

This is a test of the threaded Tachyon, a parallel ray-tracing system, measuring the time to ray-trace a sample scene. Learn more via the OpenBenchmarking.org test page.

Tachyon 0.99b6 - Total Time (Seconds, fewer is better):
  Linux 5.10.3: 48.13 (SE +/- 0.39, N = 15; Min: 46.86 / Avg: 48.13 / Max: 51.52)
  EPYC 7F52: 47.24 (SE +/- 0.36, N = 3; Min: 46.59 / Avg: 47.24 / Max: 47.84)
  1. (CC) gcc options: -m64 -O3 -fomit-frame-pointer -ffast-math -ltachyon -lm -lpthread

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 50 - Mode: Read Only (TPS, more is better):
  Linux 5.10.3: 501047 (SE +/- 7128.53, N = 15; Min: 481274.39 / Avg: 501046.8 / Max: 586825.76)
  EPYC 7F52: 491827 (SE +/- 5689.06, N = 3; Min: 481284.17 / Avg: 491826.61 / Max: 500804.42)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

Crafty

This is a performance test of Crafty, an advanced open-source chess engine. Learn more via the OpenBenchmarking.org test page.

Crafty 25.2 - Elapsed Time (Nodes Per Second, more is better):
  Linux 5.10.3: 7633247 (SE +/- 30182.11, N = 3; Min: 7576064 / Avg: 7633247.33 / Max: 7678585)
  EPYC 7F52: 7776189 (SE +/- 19965.32, N = 3; Min: 7739919 / Avg: 7776189 / Max: 7808788)
  1. (CC) gcc options: -pthread -lstdc++ -fprofile-use -lm

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: NUMA (Bogo Ops/s, more is better):
  Linux 5.10.3: 416.60 (SE +/- 0.07, N = 3; Min: 416.48 / Avg: 416.6 / Max: 416.72)
  EPYC 7F52: 409.25 (SE +/- 2.52, N = 3; Min: 404.24 / Avg: 409.25 / Max: 412.31)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

KeyDB

A benchmark of KeyDB as a multi-threaded fork of the Redis server. The KeyDB benchmark is conducted using memtier-benchmark. Learn more via the OpenBenchmarking.org test page.

KeyDB 6.0.16 (Ops/sec, more is better):
  Linux 5.10.3: 424609.60 (SE +/- 1060.81, N = 3; Min: 422868.39 / Avg: 424609.6 / Max: 426530)
  EPYC 7F52: 432105.73 (SE +/- 3437.31, N = 3; Min: 427512.92 / Avg: 432105.73 / Max: 438832.13)
  1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: resnet-v2-50 (ms, fewer is better):
  Linux 5.10.3: 33.96 (SE +/- 0.04, N = 15; Min: 33.53 / Avg: 33.96 / Max: 34.23; observed MIN: 32.06 / MAX: 51.84)
  EPYC 7F52: 34.55 (SE +/- 0.05, N = 15; Min: 34.34 / Avg: 34.55 / Max: 35.13; observed MIN: 32.75 / MAX: 67.95)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2020-09-17 - Model: inception-v3 (ms, fewer is better):
  Linux 5.10.3: 32.96 (SE +/- 0.18, N = 15; Min: 32.35 / Avg: 32.96 / Max: 35.17; observed MIN: 31.71 / MAX: 49.44)
  EPYC 7F52: 33.53 (SE +/- 0.23, N = 15; Min: 32.2 / Avg: 33.53 / Max: 34.67; observed MIN: 31.39 / MAX: 50.33)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: django_template (Milliseconds, fewer is better):
  Linux 5.10.3: 47.5 (SE +/- 0.27, N = 3; Min: 47 / Avg: 47.53 / Max: 47.9)
  EPYC 7F52: 48.3 (SE +/- 0.32, N = 3; Min: 47.7 / Avg: 48.27 / Max: 48.8)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: Semaphores (Bogo Ops/s, more is better):
  Linux 5.10.3: 2278162.65 (SE +/- 2645.51, N = 3; Min: 2273647.24 / Avg: 2278162.65 / Max: 2282808.76)
  EPYC 7F52: 2314681.13 (SE +/- 14921.24, N = 3; Min: 2284970.57 / Avg: 2314681.13 / Max: 2331963.83)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 100 - Mode: Read Only - Average Latency (ms, fewer is better):
  Linux 5.10.3: 0.198 (SE +/- 0.001, N = 3; Min: 0.2 / Avg: 0.2 / Max: 0.2)
  EPYC 7F52: 0.195 (SE +/- 0.001, N = 3; Min: 0.19 / Avg: 0.19 / Max: 0.2)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.7.0 - Video Input: Summer Nature 1080p (FPS, more is better):
  Linux 5.10.3: 541.83 (SE +/- 2.05, N = 3; Min: 537.75 / Avg: 541.83 / Max: 544.28; observed MIN: 374.84 / MAX: 590.34)
  EPYC 7F52: 533.80 (SE +/- 1.44, N = 3; Min: 531.39 / Avg: 533.8 / Max: 536.38; observed MIN: 341.27 / MAX: 581.44)
  1. (CC) gcc options: -pthread

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: pickle_pure_python (Milliseconds, fewer is better):
  Linux 5.10.3: 470 (SE +/- 2.08, N = 3; Min: 466 / Avg: 470 / Max: 473)
  EPYC 7F52: 477 (SE +/- 0.67, N = 3; Min: 476 / Avg: 476.67 / Max: 478)

Mlpack Benchmark

Mlpack provides benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

Mlpack Benchmark - Benchmark: scikit_svm (Seconds, fewer is better):
  Linux 5.10.3: 23.02 (SE +/- 0.01, N = 3; Min: 23.01 / Avg: 23.02 / Max: 23.04)
  EPYC 7F52: 23.36 (SE +/- 0.27, N = 3; Min: 23.08 / Avg: 23.36 / Max: 23.9)

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: mobilenet (ms, fewer is better):
  Linux 5.10.3: 19.55 (SE +/- 0.27, N = 12; Min: 18.44 / Avg: 19.55 / Max: 20.7; observed MIN: 17.94 / MAX: 34.33)
  EPYC 7F52: 19.27 (SE +/- 0.15, N = 15; Min: 18.58 / Avg: 19.27 / Max: 20.47; observed MIN: 17.82 / MAX: 79.15)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 100 - Mode: Read Only (TPS, more is better):
  Linux 5.10.3: 507161 (SE +/- 1764.57, N = 3; Min: 503639.44 / Avg: 507160.7 / Max: 509125.5)
  EPYC 7F52: 514307 (SE +/- 3559.12, N = 3; Min: 509047.1 / Avg: 514306.68 / Max: 521090.34)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9/WebM format using a sample 1080p video. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.8.2 - Speed: Speed 5 (Frames Per Second, more is better):
  Linux 5.10.3: 23.40 (SE +/- 0.10, N = 3; Min: 23.21 / Avg: 23.4 / Max: 23.51)
  EPYC 7F52: 23.08 (SE +/- 0.05, N = 3; Min: 22.99 / Avg: 23.08 / Max: 23.14)
  1. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=c++11

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance with x265. Learn more via the OpenBenchmarking.org test page.

x265 3.4 - Video Input: Bosphorus 4K (Frames Per Second, more is better):
  Linux 5.10.3: 21.22 (SE +/- 0.03, N = 3; Min: 21.17 / Avg: 21.22 / Max: 21.25)
  EPYC 7F52: 20.93 (SE +/- 0.07, N = 3; Min: 20.83 / Avg: 20.93 / Max: 21.07)
  1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second, more is better):
  Linux 5.10.3: 20.89 (SE +/- 0.03, N = 3; Min: 20.85 / Avg: 20.89 / Max: 20.96; observed MIN: 20.71 / MAX: 22.1)
  EPYC 7F52: 21.18 (SE +/- 0.20, N = 6; Min: 20.83 / Avg: 21.18 / Max: 22.14; observed MIN: 20.68 / MAX: 22.95)

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 1 - Mode: Read Only (TPS, more is better):
  Linux 5.10.3: 27896 (SE +/- 272.58, N = 3; Min: 27516.31 / Avg: 27895.54 / Max: 28424.33)
  EPYC 7F52: 28273 (SE +/- 403.02, N = 3; Min: 27526.61 / Avg: 28272.78 / Max: 28909.88)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: MobileNetV2_224 (ms, fewer is better):
  Linux 5.10.3: 6.126 (SE +/- 0.012, N = 15; Min: 6.07 / Avg: 6.13 / Max: 6.21; observed MIN: 5.97 / MAX: 20.61)
  EPYC 7F52: 6.208 (SE +/- 0.012, N = 15; Min: 6.11 / Avg: 6.21 / Max: 6.28; observed MIN: 6.01 / MAX: 21)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Open Porous Media

This is a test of Open Porous Media, a set of open-source tools concerning simulation of flow and transport of fluids in porous media. This test profile depends upon MPI/Flow already being installed on the system. Install instructions at https://opm-project.org/?page_id=36. Learn more via the OpenBenchmarking.org test page.

Open Porous Media - OPM Benchmark: Flow MPI Norne - Threads: 4 (Seconds, fewer is better):
  Linux 5.10.3: 166.54 (SE +/- 0.30, N = 3; Min: 166.22 / Avg: 166.54 / Max: 167.14)
  EPYC 7F52: 168.76 (SE +/- 0.45, N = 3; Min: 167.94 / Avg: 168.76 / Max: 169.49)
  1. flow 2020.04

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: Matrix Math (Bogo Ops/s, more is better):
  Linux 5.10.3: 76518.79 (SE +/- 608.49, N = 3; Min: 75895.92 / Avg: 76518.79 / Max: 77735.65)
  EPYC 7F52: 77530.48 (SE +/- 117.38, N = 3; Min: 77300.25 / Avg: 77530.48 / Max: 77685.33)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with a built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.30.8 - VGR Performance Metric (more is better):
  Linux 5.10.3: 242323
  EPYC 7F52: 245516
  1. (CXX) g++ options: -std=c++11 -pipe -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -rdynamic -lSM -lICE -lXi -lGLU -lGL -lGLdispatch -lX11 -lXext -lXrender -lpthread -ldl -luuid -lm

7-Zip Compression

This is a test of 7-Zip using p7zip with its integrated benchmark feature or upstream 7-Zip for the Windows x64 build. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 16.02 - Compress Speed Test (MIPS, more is better):
  Linux 5.10.3: 108209 (SE +/- 140.42, N = 3; Min: 107998 / Avg: 108209 / Max: 108475)
  EPYC 7F52: 106803 (SE +/- 904.09, N = 3; Min: 105145 / Avg: 106802.67 / Max: 108257)
  1. (CXX) g++ options: -pipe -lpthread

Caffe

This is a benchmark of the Caffe deep learning framework; it currently supports the AlexNet and GoogleNet models, with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

Caffe 2020-02-13 - Model: AlexNet - Acceleration: CPU - Iterations: 100 (Milli-Seconds, fewer is better):
  Linux 5.10.3: 72605 (SE +/- 793.90, N = 3; Min: 71557 / Avg: 72605 / Max: 74162)
  EPYC 7F52: 71667 (SE +/- 136.23, N = 3; Min: 71479 / Avg: 71667.33 / Max: 71932)
  1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer - Model: Crown (Frames Per Second, more is better):
  Linux 5.10.3: 19.53 (SE +/- 0.08, N = 3; Min: 19.38 / Avg: 19.53 / Max: 19.64; observed MIN: 19.27 / MAX: 19.84)
  EPYC 7F52: 19.78 (SE +/- 0.07, N = 3; Min: 19.65 / Avg: 19.78 / Max: 19.87; observed MIN: 19.53 / MAX: 20.18)

Timed Apache Compilation

This test times how long it takes to build the Apache HTTPD web server. Learn more via the OpenBenchmarking.org test page.

Timed Apache Compilation 2.4.41 - Time To Compile (Seconds, fewer is better):
  Linux 5.10.3: 22.68 (SE +/- 0.04, N = 3; Min: 22.62 / Avg: 22.68 / Max: 22.75)
  EPYC 7F52: 22.40 (SE +/- 0.03, N = 3; Min: 22.36 / Avg: 22.4 / Max: 22.45)

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1 - Model: Age Gender Recognition Retail 0013 FP32 - Device: CPU (ms, fewer is better):
  Linux 5.10.3: 0.78 (SE +/- 0.00, N = 3; Min: 0.78 / Avg: 0.78 / Max: 0.78)
  EPYC 7F52: 0.79 (SE +/- 0.00, N = 3; Min: 0.78 / Avg: 0.79 / Max: 0.79)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0 - Encoder Mode: Speed 6 Realtime (Frames Per Second, more is better):
  Linux 5.10.3: 19.40 (SE +/- 0.08, N = 3; Min: 19.27 / Avg: 19.4 / Max: 19.55)
  EPYC 7F52: 19.16 (SE +/- 0.09, N = 3; Min: 18.99 / Avg: 19.16 / Max: 19.31)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Timed Eigen Compilation

This test times how long it takes to build all Eigen examples. The Eigen examples are compiled serially. Eigen is a C++ template library for linear algebra. Learn more via the OpenBenchmarking.org test page.

Timed Eigen Compilation 3.3.9 - Time To Compile (Seconds, fewer is better):
  Linux 5.10.3: 84.48 (SE +/- 0.00, N = 3; Min: 84.47 / Avg: 84.48 / Max: 84.48)
  EPYC 7F52: 83.44 (SE +/- 0.03, N = 3; Min: 83.41 / Avg: 83.44 / Max: 83.49)

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8 - Encoder Mode: Enc Mode 8 - Input: 1080p (Frames Per Second, more is better):
  Linux 5.10.3: 39.01 (SE +/- 0.06, N = 3; Min: 38.94 / Avg: 39.01 / Max: 39.13)
  EPYC 7F52: 38.53 (SE +/- 0.07, N = 3; Min: 38.39 / Avg: 38.53 / Max: 38.63)
  1. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

GNU Octave Benchmark

This test profile measures how long it takes to complete several reference GNU Octave files via octave-benchmark. GNU Octave is used for numerical computations and is an open-source alternative to MATLAB. Learn more via the OpenBenchmarking.org test page.

GNU Octave Benchmark 5.2.0 (Seconds, fewer is better):
  Linux 5.10.3: 7.492 (SE +/- 0.050, N = 5; Min: 7.35 / Avg: 7.49 / Max: 7.6)
  EPYC 7F52: 7.402 (SE +/- 0.044, N = 5; Min: 7.33 / Avg: 7.4 / Max: 7.57)

Mlpack Benchmark

Mlpack provides benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

Mlpack Benchmark - Benchmark: scikit_ica (Seconds, fewer is better):
  Linux 5.10.3: 53.50 (SE +/- 0.36, N = 3; Min: 52.85 / Avg: 53.5 / Max: 54.11)
  EPYC 7F52: 52.87 (SE +/- 0.56, N = 4; Min: 51.61 / Avg: 52.87 / Max: 54.33)

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 250 - Mode: Read Write (TPS, more is better):
  Linux 5.10.3: 2201 (SE +/- 17.17, N = 15; Min: 2090.35 / Avg: 2200.91 / Max: 2323.68)
  EPYC 7F52: 2227 (SE +/- 27.65, N = 15; Min: 2026.69 / Avg: 2227.06 / Max: 2360.07)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

XZ Compression

This test measures the time needed to compress a sample file (an Ubuntu file-system image) using XZ compression. Learn more via the OpenBenchmarking.org test page.
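The same style of measurement can be sketched with Python's stdlib lzma module, which wraps liblzma, the library behind the xz tool. The payload here is a small hypothetical buffer rather than the Ubuntu server image the test uses, and preset 6 keeps the sketch light where the benchmark runs level 9:

```python
import lzma
import time

def xz_compress_seconds(data, preset=6):
    # Time a single-shot XZ (LZMA2) compression at the given preset.
    start = time.perf_counter()
    lzma.compress(data, preset=preset)
    return time.perf_counter() - start

payload = b"phoronix " * 500_000  # ~4.5 MB of compressible data
print(f"{xz_compress_seconds(payload):.2f} s")
```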

XZ Compression 5.2.4 - Compressing ubuntu-16.04.3-server-i386.img, Compression Level 9 (Seconds, fewer is better):
  Linux 5.10.3: 21.23 (SE +/- 0.06, N = 3; Min: 21.17 / Avg: 21.23 / Max: 21.35)
  EPYC 7F52: 20.99 (SE +/- 0.05, N = 3; Min: 20.95 / Avg: 20.99 / Max: 21.09)
  1. (CC) gcc options: -pthread -fvisibility=hidden -O2

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial timeseries data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

Numenta Anomaly Benchmark 1.1 - Detector: Windowed Gaussian (Seconds, fewer is better):
  Linux 5.10.3: 7.446 (SE +/- 0.027, N = 3; Min: 7.39 / Avg: 7.45 / Max: 7.48)
  EPYC 7F52: 7.530 (SE +/- 0.023, N = 3; Min: 7.48 / Avg: 7.53 / Max: 7.56)

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.7.0 - Video Input: Chimera 1080p (FPS, more is better):
  Linux 5.10.3: 581.26 (SE +/- 1.10, N = 3; Min: 579.17 / Avg: 581.26 / Max: 582.88; observed MIN: 460.79 / MAX: 716.22)
  EPYC 7F52: 574.78 (SE +/- 1.17, N = 3; Min: 572.44 / Avg: 574.78 / Max: 576.02; observed MIN: 454.24 / MAX: 710.14)
  1. (CC) gcc options: -pthread

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

PHPBench 0.8.1 - PHP Benchmark Suite (Score, More Is Better)
  Linux 5.10.3: 625441 (SE +/- 627.03, N = 3; Min 624291 / Avg 625441.33 / Max 626449)
  EPYC 7F52: 618552 (SE +/- 1384.96, N = 3; Min 616209 / Avg 618552 / Max 621003)

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.

InfluxDB 1.8.2 - Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, More Is Better)
  Linux 5.10.3: 1198820.3 (SE +/- 1744.82, N = 3; Min 1196305.6 / Avg 1198820.33 / Max 1202173)
  EPYC 7F52: 1211752.0 (SE +/- 1586.66, N = 3; Min 1209943.2 / Avg 1211751.97 / Max 1214914.4)

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 250 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
  Linux 5.10.3: 113.78 (SE +/- 0.89, N = 15; Min 107.67 / Avg 113.78 / Max 119.69)
  EPYC 7F52: 112.58 (SE +/- 1.44, N = 15; Min 106 / Avg 112.58 / Max 123.44)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 100 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
  Linux 5.10.3: 30.34 (SE +/- 0.32, N = 3; Min 29.84 / Avg 30.34 / Max 30.95)
  EPYC 7F52: 30.04 (SE +/- 0.22, N = 3; Min 29.59 / Avg 30.04 / Max 30.26)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.

LibRaw 0.20 - Post-Processing Benchmark (Mpix/sec, More Is Better)
  Linux 5.10.3: 38.56 (SE +/- 0.07, N = 3; Min 38.45 / Avg 38.56 / Max 38.69)
  EPYC 7F52: 38.18 (SE +/- 0.04, N = 3; Min 38.09 / Avg 38.18 / Max 38.23)
  1. (CXX) g++ options: -O2 -fopenmp -ljpeg -lz -lm

PostgreSQL pgbench

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 100 - Mode: Read Write (TPS, More Is Better)
  Linux 5.10.3: 3300 (SE +/- 35.07, N = 3; Min 3233.96 / Avg 3299.79 / Max 3353.67)
  EPYC 7F52: 3332 (SE +/- 24.88, N = 3; Min 3306.87 / Avg 3331.99 / Max 3381.76)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
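
For a closed-loop benchmark like pgbench, average latency and TPS are two views of the same measurement: with C clients each waiting for its own transaction to complete, latency is approximately C / TPS (Little's law). Checking that against the 100-client numbers reported above:

```python
clients = 100
tps = 3300           # 100-client read-write TPS (Linux 5.10.3 run above)
avg_latency_ms = 30.34

# Little's law for a closed system: latency = clients / throughput.
predicted_ms = clients / tps * 1000
print(f"predicted {predicted_ms:.2f} ms vs reported {avg_latency_ms} ms")
assert abs(predicted_ms - avg_latency_ms) / avg_latency_ms < 0.02
```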

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  Linux 5.10.3: 1.82157 (SE +/- 0.00255, N = 3; Min 1.82 / Avg 1.82 / Max 1.83; MIN 1.78)
  EPYC 7F52: 1.83844 (SE +/- 0.00177, N = 3; Min 1.84 / Avg 1.84 / Max 1.84; MIN 1.81)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: 2to3 (Milliseconds, Fewer Is Better)
  Linux 5.10.3: 326 (SE +/- 0.33, N = 3; Min 326 / Avg 326.33 / Max 327)
  EPYC 7F52: 329 (SE +/- 0.33, N = 3; Min 328 / Avg 328.67 / Max 329)

PyPerformance 1.0.0 - Benchmark: crypto_pyaes (Milliseconds, Fewer Is Better)
  Linux 5.10.3: 110
  EPYC 7F52: 109

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2 - Device Training Score (Score, More Is Better)
  Linux 5.10.3: 1434
  EPYC 7F52: 1421

Coremark

This is a test of EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

Coremark 1.0 - CoreMark Size 666 - Iterations Per Second (Iterations/Sec, More Is Better)
  Linux 5.10.3: 694463.10 (SE +/- 3903.98, N = 3; Min 688468.16 / Avg 694463.1 / Max 701792.86)
  EPYC 7F52: 688169.93 (SE +/- 1877.36, N = 3; Min 685702.04 / Avg 688169.93 / Max 691854.49)
  1. (CC) gcc options: -O2 -lrt" -lrt
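
With results this close, it helps to express the gap as a percentage; a quick helper (applied here to the CoreMark scores above):

```python
def percent_faster(a, b):
    """How much faster score `a` is than score `b`, for more-is-better results."""
    return (a - b) / b * 100

delta = percent_faster(694463.10, 688169.93)
print(f"Linux 5.10.3 leads by {delta:.2f}%")
```

A sub-1% delta like this sits within the run-to-run spread shown by the Min/Max values, so it should not be read as a meaningful kernel regression or gain.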

PyPerformance

PyPerformance 1.0.0 - Benchmark: chaos (Milliseconds, Fewer Is Better)
  Linux 5.10.3: 112
  EPYC 7F52: 113

eSpeak-NG Speech Engine

This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.

eSpeak-NG Speech Engine 20200907 - Text-To-Speech Synthesis (Seconds, Fewer Is Better)
  Linux 5.10.3: 31.05 (SE +/- 0.07, N = 4; Min 30.86 / Avg 31.05 / Max 31.18)
  EPYC 7F52: 30.78 (SE +/- 0.26, N = 4; Min 30.21 / Avg 30.78 / Max 31.23)
  1. (CC) gcc options: -O2 -std=c99

Node.js V8 Web Tooling Benchmark

Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, such as Babel, TypeScript, and Babylon. This test profile can test the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.

Node.js V8 Web Tooling Benchmark (runs/s, More Is Better)
  Linux 5.10.3: 9.35 (SE +/- 0.08, N = 3; Min 9.2 / Avg 9.35 / Max 9.49)
  EPYC 7F52: 9.27 (SE +/- 0.05, N = 3; Min 9.18 / Avg 9.27 / Max 9.37)
  1. Nodejs v10.19.0

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8 - Encoder Mode: Enc Mode 0 - Input: 1080p (Frames Per Second, More Is Better)
  Linux 5.10.3: 0.116 (SE +/- 0.000, N = 3; Min 0.12 / Avg 0.12 / Max 0.12)
  EPYC 7F52: 0.117 (SE +/- 0.000, N = 3; Min 0.12 / Avg 0.12 / Max 0.12)
  1. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.0 - Video Input: Bosphorus 1080p - Video Preset: Slow (Frames Per Second, More Is Better)
  Linux 5.10.3: 35.35 (SE +/- 0.02, N = 3; Min 35.32 / Avg 35.35 / Max 35.38)
  EPYC 7F52: 35.05 (SE +/- 0.02, N = 3; Min 35.03 / Avg 35.05 / Max 35.09)
  1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Darmstadt Automotive Parallel Heterogeneous Suite

DAPHNE is the Darmstadt Automotive Parallel HeterogeNEous Benchmark Suite with OpenCL / CUDA / OpenMP test cases for these automotive benchmarks for evaluating programming models in context to vehicle autonomous driving capabilities. Learn more via the OpenBenchmarking.org test page.

Darmstadt Automotive Parallel Heterogeneous Suite - Backend: OpenMP - Kernel: NDT Mapping (Test Cases Per Minute, More Is Better)
  Linux 5.10.3: 969.38 (SE +/- 6.61, N = 3; Min 957.7 / Avg 969.38 / Max 980.58)
  EPYC 7F52: 977.65 (SE +/- 3.32, N = 3; Min 971.61 / Avg 977.65 / Max 983.06)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp

Kvazaar

Kvazaar 2.0 - Video Input: Bosphorus 1080p - Video Preset: Medium (Frames Per Second, More Is Better)
  Linux 5.10.3: 36.27 (SE +/- 0.15, N = 3; Min 36.09 / Avg 36.27 / Max 36.57)
  EPYC 7F52: 35.97 (SE +/- 0.02, N = 3; Min 35.93 / Avg 35.97 / Max 36.01)
  1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Chaos Group V-RAY

This is a test of Chaos Group's V-RAY benchmark. Learn more via the OpenBenchmarking.org test page.

Chaos Group V-RAY 4.10.07 - Mode: CPU (Ksamples, More Is Better)
  Linux 5.10.3: 27110 (SE +/- 33.00, N = 3; Min 27077 / Avg 27110 / Max 27176)
  EPYC 7F52: 27334 (SE +/- 255.78, N = 3; Min 27026 / Avg 27334.33 / Max 27842)

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance. Learn more via the OpenBenchmarking.org test page.

x265 3.4 - Video Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  Linux 5.10.3: 62.27 (SE +/- 0.12, N = 3; Min 62.05 / Avg 62.27 / Max 62.44)
  EPYC 7F52: 61.76 (SE +/- 0.06, N = 3; Min 61.65 / Avg 61.76 / Max 61.86)
  1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Rotate (Iterations Per Minute, More Is Better)
  Linux 5.10.3: 614 (SE +/- 4.04, N = 3; Min 606 / Avg 614 / Max 619)
  EPYC 7F52: 619 (SE +/- 5.81, N = 3; Min 608 / Avg 618.67 / Max 628)
  1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

PyPerformance

PyPerformance 1.0.0 - Benchmark: json_loads (Milliseconds, Fewer Is Better)
  Linux 5.10.3: 24.7 (SE +/- 0.00, N = 3; Min 24.7 / Avg 24.7 / Max 24.7)
  EPYC 7F52: 24.9 (SE +/- 0.03, N = 3; Min 24.8 / Avg 24.87 / Max 24.9)

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer ISPC - Model: Crown (Frames Per Second, More Is Better)
  Linux 5.10.3: 18.62 (SE +/- 0.16, N = 3; Min 18.31 / Avg 18.62 / Max 18.81; MIN 17.94 / MAX 19.11)
  EPYC 7F52: 18.77 (SE +/- 0.12, N = 3; Min 18.59 / Avg 18.77 / Max 18.99; MIN 18.46 / MAX 19.39)

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.90 - Blend File: BMW27 - Compute: CPU-Only (Seconds, Fewer Is Better)
  Linux 5.10.3: 82.89 (SE +/- 0.05, N = 3; Min 82.79 / Avg 82.89 / Max 82.94)
  EPYC 7F52: 83.52 (SE +/- 0.25, N = 3; Min 83.17 / Avg 83.52 / Max 84.01)

oneDNN

oneDNN 2.0 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  Linux 5.10.3: 1.52220 (SE +/- 0.00375, N = 3; Min 1.52 / Avg 1.52 / Max 1.53; MIN 1.49)
  EPYC 7F52: 1.51115 (SE +/- 0.00269, N = 3; Min 1.51 / Avg 1.51 / Max 1.52; MIN 1.48)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code that offers Cargo-like features. Learn more via the OpenBenchmarking.org test page.

Build2 0.13 - Time To Compile (Seconds, Fewer Is Better)
  Linux 5.10.3: 75.80 (SE +/- 0.35, N = 3; Min 75.14 / Avg 75.8 / Max 76.35)
  EPYC 7F52: 75.26 (SE +/- 0.10, N = 3; Min 75.08 / Avg 75.26 / Max 75.42)

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: efficientnet-b0 (ms, Fewer Is Better)
  Linux 5.10.3: 11.14 (SE +/- 0.03, N = 12; Min 11.02 / Avg 11.14 / Max 11.32; MIN 10.78 / MAX 14.68)
  EPYC 7F52: 11.06 (SE +/- 0.03, N = 15; Min 10.91 / Avg 11.06 / Max 11.24; MIN 10.67 / MAX 13.4)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

dav1d

dav1d 0.7.0 - Video Input: Chimera 1080p 10-bit (FPS, More Is Better)
  Linux 5.10.3: 111.44 (SE +/- 0.07, N = 3; Min 111.29 / Avg 111.44 / Max 111.52; MIN 74.8 / MAX 220.43)
  EPYC 7F52: 110.64 (SE +/- 0.05, N = 3; Min 110.56 / Avg 110.64 / Max 110.74; MIN 74.39 / MAX 217.07)
  1. (CC) gcc options: -pthread

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

Timed FFmpeg Compilation 4.2.2 - Time To Compile (Seconds, Fewer Is Better)
  Linux 5.10.3: 34.13 (SE +/- 0.06, N = 3; Min 34.03 / Avg 34.13 / Max 34.24)
  EPYC 7F52: 33.89 (SE +/- 0.08, N = 3; Min 33.8 / Avg 33.89 / Max 34.05)

LibreOffice

Various benchmarking operations with the LibreOffice open-source office suite. Learn more via the OpenBenchmarking.org test page.

LibreOffice - Test: 20 Documents To PDF (Seconds, Fewer Is Better)
  Linux 5.10.3: 7.208 (SE +/- 0.029, N = 5; Min 7.15 / Avg 7.21 / Max 7.32)
  EPYC 7F52: 7.158 (SE +/- 0.077, N = 5; Min 7.04 / Avg 7.16 / Max 7.46)
  1. LibreOffice 6.4.3.2 40(Build:2)

Kvazaar

Kvazaar 2.0 - Video Input: Bosphorus 4K - Video Preset: Medium (Frames Per Second, More Is Better)
  Linux 5.10.3: 10.31 (SE +/- 0.01, N = 3; Min 10.3 / Avg 10.31 / Max 10.33)
  EPYC 7F52: 10.24 (SE +/- 0.01, N = 3; Min 10.23 / Avg 10.24 / Max 10.26)
  1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: aermod (Seconds, Fewer Is Better)
  Linux 5.10.3: 6.16
  EPYC 7F52: 6.12

NCNN

NCNN 20201218 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better)
  Linux 5.10.3: 7.73 (SE +/- 0.02, N = 12; Min 7.66 / Avg 7.73 / Max 7.88; MIN 7.28 / MAX 11.83)
  EPYC 7F52: 7.68 (SE +/- 0.02, N = 15; Min 7.54 / Avg 7.68 / Max 7.78; MIN 7.21 / MAX 12.54)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU - Model: regnety_400m (ms, Fewer Is Better)
  Linux 5.10.3: 44.79 (SE +/- 0.14, N = 12; Min 44.08 / Avg 44.79 / Max 45.72; MIN 43.38 / MAX 124.54)
  EPYC 7F52: 44.51 (SE +/- 0.18, N = 15; Min 43.2 / Avg 44.51 / Max 45.32; MIN 42.64 / MAX 117.01)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

PlaidML

This test profile uses the PlaidML deep learning framework, developed by Intel, for offering up various benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML - FP16: No - Mode: Inference - Network: DenseNet 201 - Device: CPU (FPS, More Is Better)
  Linux 5.10.3: 3.21 (SE +/- 0.01, N = 3; Min 3.2 / Avg 3.21 / Max 3.22)
  EPYC 7F52: 3.19 (SE +/- 0.01, N = 3; Min 3.17 / Avg 3.19 / Max 3.21)

SQLite Speedtest

This is a test of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.

SQLite Speedtest 3.30 - Timed Time - Size 1,000 (Seconds, Fewer Is Better)
  Linux 5.10.3: 67.14 (SE +/- 0.02, N = 3; Min 67.1 / Avg 67.14 / Max 67.17)
  EPYC 7F52: 66.72 (SE +/- 0.22, N = 3; Min 66.35 / Avg 66.72 / Max 67.11)
  1. (CC) gcc options: -O2 -ldl -lz -lpthread
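
A miniature version of the kind of work speedtest1 exercises can be sketched with Python's built-in sqlite3 module: timed bulk inserts inside a single transaction, then a query. The table and row count here are arbitrary and far smaller than speedtest1's actual workload:

```python
import sqlite3
import time

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")

start = time.perf_counter()
with con:  # one transaction, as speedtest1 batches its statements
    con.executemany("INSERT INTO t (val) VALUES (?)",
                    ((f"row-{i}",) for i in range(100_000)))
elapsed = time.perf_counter() - start

(count,) = con.execute("SELECT COUNT(*) FROM t").fetchone()
print(f"{count} rows inserted in {elapsed:.3f}s")
```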

YafaRay

YafaRay is an open-source physically based montecarlo ray-tracing engine. Learn more via the OpenBenchmarking.org test page.

YafaRay 3.4.1 - Total Time For Sample Scene (Seconds, Fewer Is Better)
  Linux 5.10.3: 130.91 (SE +/- 0.50, N = 3; Min 129.91 / Avg 130.91 / Max 131.41)
  EPYC 7F52: 130.10 (SE +/- 0.76, N = 3; Min 128.65 / Avg 130.1 / Max 131.24)
  1. (CXX) g++ options: -std=c++11 -O3 -ffast-math -rdynamic -ldl -lImath -lIlmImf -lIex -lHalf -lz -lIlmThread -lxml2 -lfreetype -lpthread

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 1 - Compression Speed (MB/s, More Is Better)
  Linux 5.10.3: 10009.49 (SE +/- 59.35, N = 3; Min 9891.06 / Avg 10009.49 / Max 10075.62)
  EPYC 7F52: 9947.80 (SE +/- 51.25, N = 3; Min 9892.02 / Avg 9947.8 / Max 10050.17)
  1. (CC) gcc options: -O3
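
Compression-speed figures like the MB/s above are simply bytes in divided by wall time. LZ4 bindings are not in Python's standard library, so this sketch times zlib at its fastest level purely to show the measurement method, not to reproduce LZ4's throughput:

```python
import time
import zlib

data = b"abcd" * 4_000_000  # ~16 MB of highly compressible input (arbitrary)

start = time.perf_counter()
compressed = zlib.compress(data, 1)  # fastest zlib level, roughly analogous to LZ4 level 1
elapsed = time.perf_counter() - start

mb_per_s = len(data) / 1e6 / elapsed
print(f"{mb_per_s:.0f} MB/s, ratio {len(compressed) / len(data):.3f}")
```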

NCNN

NCNN 20201218 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better)
  Linux 5.10.3: 8.44 (SE +/- 0.07, N = 12; Min 7.86 / Avg 8.44 / Max 8.72; MIN 6.92 / MAX 12.58)
  EPYC 7F52: 8.49 (SE +/- 0.04, N = 15; Min 8.31 / Avg 8.49 / Max 8.94; MIN 7.06 / MAX 72.31)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Mlpack Benchmark

Mlpack benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

Mlpack Benchmark - Benchmark: scikit_linearridgeregression (Seconds, Fewer Is Better)
  Linux 5.10.3: 1.72 (SE +/- 0.01, N = 3; Min 1.7 / Avg 1.72 / Max 1.73)
  EPYC 7F52: 1.73 (SE +/- 0.02, N = 4; Min 1.69 / Avg 1.73 / Max 1.78)

AI Benchmark Alpha

AI Benchmark Alpha 0.1.2 - Device AI Score (Score, More Is Better)
  Linux 5.10.3: 3207
  EPYC 7F52: 3189

FLAC Audio Encoding

This test times how long it takes to encode a sample WAV file to FLAC format five times. Learn more via the OpenBenchmarking.org test page.

FLAC Audio Encoding 1.3.2 - WAV To FLAC (Seconds, Fewer Is Better)
  Linux 5.10.3: 8.610 (SE +/- 0.005, N = 5; Min 8.6 / Avg 8.61 / Max 8.63)
  EPYC 7F52: 8.562 (SE +/- 0.013, N = 5; Min 8.51 / Avg 8.56 / Max 8.59)
  1. (CXX) g++ options: -O2 -fvisibility=hidden -logg -lm

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9/WebM format using a sample 1080p video. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.8.2 - Speed: Speed 0 (Frames Per Second, More Is Better)
  Linux 5.10.3: 7.20 (SE +/- 0.00, N = 3; Min 7.2 / Avg 7.2 / Max 7.2)
  EPYC 7F52: 7.16 (SE +/- 0.01, N = 3; Min 7.14 / Avg 7.16 / Max 7.18)
  1. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=c++11

PostgreSQL pgbench

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 1 - Mode: Read Write (TPS, More Is Better)
  Linux 5.10.3: 3782 (SE +/- 16.52, N = 3; Min 3761 / Avg 3781.92 / Max 3814.53)
  EPYC 7F52: 3803 (SE +/- 24.75, N = 3; Min 3777.18 / Avg 3802.87 / Max 3852.36)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

BYTE Unix Benchmark

This is a test of the BYTE Unix Benchmark. Learn more via the OpenBenchmarking.org test page.

BYTE Unix Benchmark 3.6 - Computational Test: Dhrystone 2 (LPS, More Is Better)
  Linux 5.10.3: 41240132.3 (SE +/- 142195.68, N = 3; Min 40957050.5 / Avg 41240132.33 / Max 41405281.3)
  EPYC 7F52: 41016145.2 (SE +/- 336403.00, N = 3; Min 40568980.6 / Avg 41016145.2 / Max 41675082.3)

NCNN

NCNN 20201218 - Target: CPU - Model: blazeface (ms, Fewer Is Better)
  Linux 5.10.3: 3.67 (SE +/- 0.02, N = 12; Min 3.61 / Avg 3.67 / Max 3.79; MIN 3.53 / MAX 4.35)
  EPYC 7F52: 3.69 (SE +/- 0.02, N = 15; Min 3.6 / Avg 3.69 / Max 3.97; MIN 3.52 / MAX 75.15)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

x264

This is a simple test of the x264 encoder run on the CPU (OpenCL support disabled) with a sample video file. Learn more via the OpenBenchmarking.org test page.

x264 2019-12-17 - H.264 Video Encoding (Frames Per Second, More Is Better)
  Linux 5.10.3: 163.65 (SE +/- 0.78, N = 3; Min 162.08 / Avg 163.65 / Max 164.44)
  EPYC 7F52: 162.77 (SE +/- 1.00, N = 3; Min 160.93 / Avg 162.77 / Max 164.39)
  1. (CC) gcc options: -ldl -lavformat -lavcodec -lavutil -lswscale -m64 -lm -lpthread -O3 -ffast-math -std=gnu99 -fPIC -fomit-frame-pointer -fno-tree-vectorize

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8 - Encoder Mode: Enc Mode 4 - Input: 1080p (Frames Per Second, More Is Better)
  Linux 5.10.3: 5.388 (SE +/- 0.009, N = 3; Min 5.38 / Avg 5.39 / Max 5.41)
  EPYC 7F52: 5.360 (SE +/- 0.023, N = 3; Min 5.32 / Avg 5.36 / Max 5.4)
  1. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

Polyhedron Fortran Benchmarks

Polyhedron Fortran Benchmarks - Benchmark: channel2 (Seconds, Fewer Is Better)
  Linux 5.10.3: 42.30
  EPYC 7F52: 42.52

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library that offers high-performance volume computation kernels and part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 0.9 - Benchmark: vklBenchmark (Items / Sec, More Is Better)
  Linux 5.10.3: 218.94 (SE +/- 0.38, N = 3; Min 218.22 / Avg 218.94 / Max 219.5; MIN 1 / MAX 772)
  EPYC 7F52: 217.81 (SE +/- 0.60, N = 3; Min 216.67 / Avg 217.81 / Max 218.67; MIN 1 / MAX 765)

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: NASNet MobileLinux 5.10.3EPYC 7F5230K60K90K120K150KSE +/- 388.72, N = 3SE +/- 356.27, N = 3126619127275
OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: NASNet MobileLinux 5.10.3EPYC 7F5220K40K60K80K100KMin: 126190 / Avg: 126619 / Max: 127395Min: 126904 / Avg: 127274.67 / Max: 127987
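Since the TensorFlow Lite results are average inference times in microseconds, throughput is simply the reciprocal. A small conversion sketch using the NASNet Mobile average from the Linux 5.10.3 run above:

```python
# Average inference time in microseconds (from the graph above).
avg_inference_us = 126619

# Throughput is the reciprocal of the per-inference time.
inferences_per_second = 1_000_000 / avg_inference_us

print(f"{inferences_per_second:.2f} inferences/sec")
```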

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgdays/ns, Fewer Is BetterNAMD 2.14ATPase Simulation - 327,506 AtomsLinux 5.10.3EPYC 7F520.25830.51660.77491.03321.2915SE +/- 0.00649, N = 3SE +/- 0.00082, N = 31.148011.14226
OpenBenchmarking.orgdays/ns, Fewer Is BetterNAMD 2.14ATPase Simulation - 327,506 AtomsLinux 5.10.3EPYC 7F52246810Min: 1.14 / Avg: 1.15 / Max: 1.16Min: 1.14 / Avg: 1.14 / Max: 1.14
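NAMD reports days/ns (fewer is better), while molecular dynamics throughput is often quoted as ns/day instead; the two are simply reciprocals. Using the EPYC 7F52 average from the graph above:

```python
# NAMD result: wall-clock days required to simulate 1 ns (lower is better).
days_per_ns = 1.14226

# Equivalent throughput figure: nanoseconds simulated per day of runtime.
ns_per_day = 1.0 / days_per_ns

print(f"{ns_per_day:.3f} ns/day")
```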

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterTNN 0.2.3Target: CPU - Model: SqueezeNet v1.1Linux 5.10.3EPYC 7F5260120180240300SE +/- 0.25, N = 3SE +/- 0.77, N = 3264.36263.04MIN: 261.25 / MAX: 266.06MIN: 260.98 / MAX: 265.861. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl
OpenBenchmarking.orgms, Fewer Is BetterTNN 0.2.3Target: CPU - Model: SqueezeNet v1.1Linux 5.10.3EPYC 7F5250100150200250Min: 264.07 / Avg: 264.36 / Max: 264.86Min: 261.76 / Avg: 263.04 / Max: 264.431. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: vgg16Linux 5.10.3EPYC 7F52714212835SE +/- 0.04, N = 12SE +/- 0.03, N = 1530.0230.17MIN: 29.27 / MAX: 43.79MIN: 29.55 / MAX: 90.421. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: vgg16Linux 5.10.3EPYC 7F52714212835Min: 29.63 / Avg: 30.02 / Max: 30.13Min: 29.91 / Avg: 30.17 / Max: 30.491. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 4K - Video Preset: Very FastLinux 5.10.3EPYC 7F52612182430SE +/- 0.02, N = 3SE +/- 0.03, N = 324.5624.441. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 4K - Video Preset: Very FastLinux 5.10.3EPYC 7F52612182430Min: 24.52 / Avg: 24.56 / Max: 24.58Min: 24.39 / Avg: 24.44 / Max: 24.481. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Darmstadt Automotive Parallel Heterogeneous Suite

DAPHNE is the Darmstadt Automotive Parallel HeterogeNEous Benchmark Suite with OpenCL / CUDA / OpenMP test cases for these automotive benchmarks for evaluating programming models in context to vehicle autonomous driving capabilities. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTest Cases Per Minute, More Is BetterDarmstadt Automotive Parallel Heterogeneous SuiteBackend: OpenMP - Kernel: Euclidean ClusterLinux 5.10.3EPYC 7F522004006008001000SE +/- 1.78, N = 3SE +/- 1.60, N = 31085.361090.641. (CXX) g++ options: -O3 -std=c++11 -fopenmp
OpenBenchmarking.orgTest Cases Per Minute, More Is BetterDarmstadt Automotive Parallel Heterogeneous SuiteBackend: OpenMP - Kernel: Euclidean ClusterLinux 5.10.3EPYC 7F522004006008001000Min: 1083.3 / Avg: 1085.36 / Max: 1088.91Min: 1087.77 / Avg: 1090.64 / Max: 1093.311. (CXX) g++ options: -O3 -std=c++11 -fopenmp

Ogg Audio Encoding

This test times how long it takes to encode a sample WAV file to Ogg format using the reference Xiph.org tools/libraries. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterOgg Audio Encoding 1.3.4WAV To OggLinux 5.10.3EPYC 7F52510152025SE +/- 0.03, N = 3SE +/- 0.03, N = 320.6920.601. (CC) gcc options: -O2 -ffast-math -fsigned-char
OpenBenchmarking.orgSeconds, Fewer Is BetterOgg Audio Encoding 1.3.4WAV To OggLinux 5.10.3EPYC 7F52510152025Min: 20.65 / Avg: 20.69 / Max: 20.76Min: 20.56 / Avg: 20.6 / Max: 20.651. (CC) gcc options: -O2 -ffast-math -fsigned-char

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 3 - Decompression SpeedLinux 5.10.3EPYC 7F522K4K6K8K10KSE +/- 25.66, N = 3SE +/- 18.07, N = 810815.310768.21. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 3 - Decompression SpeedLinux 5.10.3EPYC 7F522K4K6K8K10KMin: 10767.2 / Avg: 10815.33 / Max: 10854.8Min: 10730 / Avg: 10768.18 / Max: 10859.81. (CC) gcc options: -O3

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgval/sec, More Is BetterInfluxDB 1.8.2Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000Linux 5.10.3EPYC 7F52300K600K900K1200K1500KSE +/- 2047.76, N = 3SE +/- 1494.65, N = 31419536.41425736.1
OpenBenchmarking.orgval/sec, More Is BetterInfluxDB 1.8.2Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000Linux 5.10.3EPYC 7F52200K400K600K800K1000KMin: 1416392.8 / Avg: 1419536.43 / Max: 1423381.6Min: 1423811.1 / Avg: 1425736.13 / Max: 1428679.2

asmFish

This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNodes/second, More Is BetterasmFish 2018-07-231024 Hash Memory, 26 DepthLinux 5.10.3EPYC 7F5210M20M30M40M50MSE +/- 434223.37, N = 3SE +/- 250444.78, N = 34644165346240797
OpenBenchmarking.orgNodes/second, More Is BetterasmFish 2018-07-231024 Hash Memory, 26 DepthLinux 5.10.3EPYC 7F528M16M24M32M40MMin: 45963345 / Avg: 46441653.33 / Max: 47308554Min: 45902281 / Avg: 46240796.67 / Max: 46729778

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 9 - Decompression SpeedLinux 5.10.3EPYC 7F522K4K6K8K10KSE +/- 4.82, N = 3SE +/- 40.67, N = 310851.610898.41. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 9 - Decompression SpeedLinux 5.10.3EPYC 7F522K4K6K8K10KMin: 10842 / Avg: 10851.63 / Max: 10856.6Min: 10854.1 / Avg: 10898.37 / Max: 10979.61. (CC) gcc options: -O3

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 3.9.0Binary: Pathtracer - Model: Asian DragonLinux 5.10.3EPYC 7F52510152025SE +/- 0.21, N = 3SE +/- 0.05, N = 321.0620.97MIN: 20.53 / MAX: 22.46MIN: 20.82 / MAX: 22.27
OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 3.9.0Binary: Pathtracer - Model: Asian DragonLinux 5.10.3EPYC 7F52510152025Min: 20.65 / Avg: 21.06 / Max: 21.31Min: 20.88 / Avg: 20.97 / Max: 21.06

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: AtomicLinux 5.10.3EPYC 7F52110K220K330K440K550KSE +/- 203.22, N = 3SE +/- 436.80, N = 3510793.92512936.211. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: AtomicLinux 5.10.3EPYC 7F5290K180K270K360K450KMin: 510444.26 / Avg: 510793.92 / Max: 511148.2Min: 512387.81 / Avg: 512936.21 / Max: 513799.331. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 2.0Encoder Mode: Speed 4 Two-PassLinux 5.10.3EPYC 7F520.54451.0891.63352.1782.7225SE +/- 0.00, N = 3SE +/- 0.00, N = 32.412.421. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 2.0Encoder Mode: Speed 4 Two-PassLinux 5.10.3EPYC 7F52246810Min: 2.4 / Avg: 2.41 / Max: 2.41Min: 2.41 / Avg: 2.42 / Max: 2.421. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 2.0Encoder Mode: Speed 8 RealtimeLinux 5.10.3EPYC 7F52816243240SE +/- 0.06, N = 3SE +/- 0.23, N = 333.9834.121. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 2.0Encoder Mode: Speed 8 RealtimeLinux 5.10.3EPYC 7F52714212835Min: 33.86 / Avg: 33.98 / Max: 34.04Min: 33.66 / Avg: 34.12 / Max: 34.361. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgEncode Time - Seconds, Fewer Is BetterWebP Image Encode 1.1Encode Settings: Quality 100, LosslessLinux 5.10.3EPYC 7F5248121620SE +/- 0.08, N = 3SE +/- 0.07, N = 317.5717.501. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff
OpenBenchmarking.orgEncode Time - Seconds, Fewer Is BetterWebP Image Encode 1.1Encode Settings: Quality 100, LosslessLinux 5.10.3EPYC 7F5248121620Min: 17.41 / Avg: 17.57 / Max: 17.68Min: 17.37 / Avg: 17.5 / Max: 17.591. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 4K - Video Preset: SlowLinux 5.10.3EPYC 7F523691215SE +/- 0.01, N = 3SE +/- 0.01, N = 310.1010.061. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 4K - Video Preset: SlowLinux 5.10.3EPYC 7F523691215Min: 10.09 / Avg: 10.1 / Max: 10.12Min: 10.04 / Avg: 10.06 / Max: 10.081. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: goLinux 5.10.3EPYC 7F5260120180240300253254

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgM samples/s, More Is BetterIndigoBench 4.4Acceleration: CPU - Scene: SupercarLinux 5.10.3EPYC 7F52246810SE +/- 0.008, N = 3SE +/- 0.012, N = 37.7317.761
OpenBenchmarking.orgM samples/s, More Is BetterIndigoBench 4.4Acceleration: CPU - Scene: SupercarLinux 5.10.3EPYC 7F523691215Min: 7.72 / Avg: 7.73 / Max: 7.74Min: 7.74 / Avg: 7.76 / Max: 7.78

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: yolov4-tinyLinux 5.10.3EPYC 7F52612182430SE +/- 0.21, N = 12SE +/- 0.13, N = 1525.8425.94MIN: 24.84 / MAX: 30.66MIN: 25.13 / MAX: 86.321. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: yolov4-tinyLinux 5.10.3EPYC 7F52612182430Min: 25.2 / Avg: 25.84 / Max: 27.61Min: 25.54 / Avg: 25.94 / Max: 26.851. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

GROMACS

This is a test of the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package on the CPU with the water_GMX50 data. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNs Per Day, More Is BetterGROMACS 2020.3Water BenchmarkLinux 5.10.3EPYC 7F520.53421.06841.60262.13682.671SE +/- 0.002, N = 3SE +/- 0.003, N = 32.3742.3651. (CXX) g++ options: -O3 -pthread -lrt -lpthread -lm
OpenBenchmarking.orgNs Per Day, More Is BetterGROMACS 2020.3Water BenchmarkLinux 5.10.3EPYC 7F52246810Min: 2.37 / Avg: 2.37 / Max: 2.38Min: 2.36 / Avg: 2.36 / Max: 2.371. (CXX) g++ options: -O3 -pthread -lrt -lpthread -lm

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 1 - Mode: Read Write - Average LatencyLinux 5.10.3EPYC 7F520.05940.11880.17820.23760.297SE +/- 0.001, N = 3SE +/- 0.002, N = 30.2640.2631. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 1 - Mode: Read Write - Average LatencyLinux 5.10.3EPYC 7F5212345Min: 0.26 / Avg: 0.26 / Max: 0.27Min: 0.26 / Avg: 0.26 / Max: 0.271. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
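With a single client issuing transactions serially, the average latency bounds the achievable transaction rate: TPS is roughly 1000 ms divided by the per-transaction latency. This is only a rough estimate, since it ignores client think time and connection overhead:

```python
# pgbench read-write average latency for 1 client (from the graph above).
avg_latency_ms = 0.263

# One serial client can complete at most ~1/latency transactions per second.
approx_tps = 1000.0 / avg_latency_ms

print(f"~{approx_tps:.0f} transactions/sec")
```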

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgIterations Per Minute, More Is BetterGraphicsMagick 1.3.33Operation: ResizingLinux 5.10.3EPYC 7F5230060090012001500SE +/- 9.84, N = 3SE +/- 18.67, N = 3159115971. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
OpenBenchmarking.orgIterations Per Minute, More Is BetterGraphicsMagick 1.3.33Operation: ResizingLinux 5.10.3EPYC 7F5230060090012001500Min: 1573 / Avg: 1590.67 / Max: 1607Min: 1560 / Avg: 1597.33 / Max: 16161. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

ECP-CANDLE

The CANDLE benchmark codes implement deep learning architectures relevant to problems in cancer. These architectures address problems at different biological scales, specifically problems at the molecular, cellular and population scales. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterECP-CANDLE 0.3Benchmark: P3B2Linux 5.10.3EPYC 7F522004006008001000896.05899.42

Numpy Benchmark

This is a test to obtain the general Numpy performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgScore, More Is BetterNumpy BenchmarkLinux 5.10.3EPYC 7F5280160240320400SE +/- 0.27, N = 3SE +/- 1.66, N = 3368.45367.10
OpenBenchmarking.orgScore, More Is BetterNumpy BenchmarkLinux 5.10.3EPYC 7F5270140210280350Min: 368.11 / Avg: 368.45 / Max: 368.99Min: 363.81 / Avg: 367.1 / Max: 369.13

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2020-09-17Model: mobilenet-v1-1.0Linux 5.10.3EPYC 7F52246810SE +/- 0.007, N = 15SE +/- 0.012, N = 156.5516.575MIN: 6.45 / MAX: 22.13MIN: 6.41 / MAX: 20.061. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2020-09-17Model: mobilenet-v1-1.0Linux 5.10.3EPYC 7F523691215Min: 6.5 / Avg: 6.55 / Max: 6.59Min: 6.48 / Avg: 6.57 / Max: 6.641. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: CPU StressLinux 5.10.3EPYC 7F5213002600390052006500SE +/- 5.69, N = 3SE +/- 22.12, N = 36266.846244.331. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.11.07Test: CPU StressLinux 5.10.3EPYC 7F5211002200330044005500Min: 6257.59 / Avg: 6266.84 / Max: 6277.21Min: 6200.79 / Avg: 6244.33 / Max: 6272.911. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

Caffe

This is a benchmark of the Caffe deep learning framework that currently supports the AlexNet and GoogleNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilli-Seconds, Fewer Is BetterCaffe 2020-02-13Model: GoogleNet - Acceleration: CPU - Iterations: 100Linux 5.10.3EPYC 7F5240K80K120K160K200KSE +/- 172.36, N = 3SE +/- 222.17, N = 31810081816521. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas
OpenBenchmarking.orgMilli-Seconds, Fewer Is BetterCaffe 2020-02-13Model: GoogleNet - Acceleration: CPU - Iterations: 100Linux 5.10.3EPYC 7F5230K60K90K120K150KMin: 180707 / Avg: 181008 / Max: 181304Min: 181404 / Avg: 181651.67 / Max: 1820951. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

FFTE

FFTE is a package by Daisuke Takahashi to compute Discrete Fourier Transforms of 1-, 2- and 3-dimensional sequences of length (2^p)*(3^q)*(5^r). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMFLOPS, More Is BetterFFTE 7.0N=256, 3D Complex FFT RoutineLinux 5.10.3EPYC 7F5220K40K60K80K100KSE +/- 226.53, N = 3SE +/- 100.09, N = 3100233.0699888.441. (F9X) gfortran options: -O3 -fomit-frame-pointer -fopenmp
OpenBenchmarking.orgMFLOPS, More Is BetterFFTE 7.0N=256, 3D Complex FFT RoutineLinux 5.10.3EPYC 7F5220K40K60K80K100KMin: 99913.34 / Avg: 100233.06 / Max: 100670.9Min: 99724.72 / Avg: 99888.44 / Max: 100070.071. (F9X) gfortran options: -O3 -fomit-frame-pointer -fopenmp
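FFT benchmarks conventionally rate performance by assuming a complex FFT of n points costs about 5 n log2(n) floating-point operations; under that assumption, the reported MFLOPS figure can be turned back into an approximate per-transform runtime. A sketch for FFTE's N=256 3D case (n = 256^3 points):

```python
import math

# Total points in the 3D transform.
n = 256 ** 3

# Conventional FFT operation-count estimate: 5 * n * log2(n) flops.
flops = 5 * n * math.log2(n)

# Reported result from the Linux 5.10.3 run above, in MFLOPS.
mflops_reported = 100233.06

# Approximate wall-clock time for one transform at that rate.
seconds_per_transform = flops / (mflops_reported * 1e6)
```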

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Linux Kernel Compilation 5.4Time To CompileLinux 5.10.3EPYC 7F521020304050SE +/- 0.51, N = 4SE +/- 0.50, N = 445.2745.12
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Linux Kernel Compilation 5.4Time To CompileLinux 5.10.3EPYC 7F52918273645Min: 44.6 / Avg: 45.27 / Max: 46.78Min: 44.22 / Avg: 45.12 / Max: 46.53

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural networks, using its built-in benchmarking support and analyzing the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2021.1Model: Person Detection 0106 FP32 - Device: CPULinux 5.10.3EPYC 7F520.6841.3682.0522.7363.42SE +/- 0.01, N = 3SE +/- 0.02, N = 33.033.041. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2021.1Model: Person Detection 0106 FP32 - Device: CPULinux 5.10.3EPYC 7F52246810Min: 3.02 / Avg: 3.03 / Max: 3.04Min: 3 / Avg: 3.04 / Max: 3.061. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2021.1Model: Person Detection 0106 FP16 - Device: CPULinux 5.10.3EPYC 7F520.69081.38162.07242.76323.454SE +/- 0.01, N = 3SE +/- 0.00, N = 33.073.061. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2021.1Model: Person Detection 0106 FP16 - Device: CPULinux 5.10.3EPYC 7F52246810Min: 3.06 / Avg: 3.07 / Max: 3.08Min: 3.06 / Avg: 3.06 / Max: 3.071. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

Caffe

This is a benchmark of the Caffe deep learning framework that currently supports the AlexNet and GoogleNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilli-Seconds, Fewer Is BetterCaffe 2020-02-13Model: GoogleNet - Acceleration: CPU - Iterations: 200Linux 5.10.3EPYC 7F5280K160K240K320K400KSE +/- 312.30, N = 3SE +/- 75.84, N = 33628413639981. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas
OpenBenchmarking.orgMilli-Seconds, Fewer Is BetterCaffe 2020-02-13Model: GoogleNet - Acceleration: CPU - Iterations: 200Linux 5.10.3EPYC 7F5260K120K180K240K300KMin: 362407 / Avg: 362841 / Max: 363447Min: 363905 / Avg: 363997.67 / Max: 3641481. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: Mobilenet FloatLinux 5.10.3EPYC 7F5215K30K45K60K75KSE +/- 59.93, N = 3SE +/- 46.97, N = 368630.968415.0
OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: Mobilenet FloatLinux 5.10.3EPYC 7F5212K24K36K48K60KMin: 68521.9 / Avg: 68630.87 / Max: 68728.6Min: 68327 / Avg: 68414.97 / Max: 68487.5

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks are for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterPolyhedron Fortran BenchmarksBenchmark: acLinux 5.10.3EPYC 7F522468106.516.53

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 2.90Blend File: Fishy Cat - Compute: CPU-OnlyLinux 5.10.3EPYC 7F5220406080100SE +/- 0.08, N = 3SE +/- 0.29, N = 3108.33108.00
OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 2.90Blend File: Fishy Cat - Compute: CPU-OnlyLinux 5.10.3EPYC 7F5220406080100Min: 108.24 / Avg: 108.33 / Max: 108.48Min: 107.64 / Avg: 108 / Max: 108.57

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 1 - Decompression SpeedLinux 5.10.3EPYC 7F522K4K6K8K10KSE +/- 30.69, N = 3SE +/- 42.92, N = 311490.611455.81. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 1 - Decompression SpeedLinux 5.10.3EPYC 7F522K4K6K8K10KMin: 11430.3 / Avg: 11490.63 / Max: 11530.6Min: 11380.2 / Avg: 11455.77 / Max: 11528.81. (CC) gcc options: -O3

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: Inception V4Linux 5.10.3EPYC 7F52300K600K900K1200K1500KSE +/- 1266.68, N = 3SE +/- 1084.27, N = 314990171494590
OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: Inception V4Linux 5.10.3EPYC 7F52300K600K900K1200K1500KMin: 1496660 / Avg: 1499016.67 / Max: 1501000Min: 1492620 / Avg: 1494590 / Max: 1496360

OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: Mobilenet QuantLinux 5.10.3EPYC 7F5215K30K45K60K75KSE +/- 9.17, N = 3SE +/- 53.57, N = 370037.469832.4
OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: Mobilenet QuantLinux 5.10.3EPYC 7F5212K24K36K48K60KMin: 70019.5 / Avg: 70037.43 / Max: 70049.7Min: 69725.3 / Avg: 69832.43 / Max: 69886.1

Caffe

This is a benchmark of the Caffe deep learning framework that currently supports the AlexNet and GoogleNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilli-Seconds, Fewer Is BetterCaffe 2020-02-13Model: AlexNet - Acceleration: CPU - Iterations: 200Linux 5.10.3EPYC 7F5230K60K90K120K150KSE +/- 381.87, N = 3SE +/- 359.19, N = 31440391436221. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas
OpenBenchmarking.orgMilli-Seconds, Fewer Is BetterCaffe 2020-02-13Model: AlexNet - Acceleration: CPU - Iterations: 200Linux 5.10.3EPYC 7F5220K40K60K80K100KMin: 143399 / Avg: 144039.33 / Max: 144720Min: 143151 / Avg: 143621.67 / Max: 1443271. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterASTC Encoder 2.0Preset: MediumLinux 5.10.3EPYC 7F52246810SE +/- 0.01, N = 3SE +/- 0.01, N = 36.916.891. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterASTC Encoder 2.0Preset: MediumLinux 5.10.3EPYC 7F523691215Min: 6.9 / Avg: 6.91 / Max: 6.92Min: 6.88 / Avg: 6.89 / Max: 6.921. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural networks, using its built-in benchmarking support and analyzing the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2021.1Model: Person Detection 0106 FP16 - Device: CPULinux 5.10.3EPYC 7F526001200180024003000SE +/- 3.43, N = 3SE +/- 1.99, N = 32590.332582.881. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2021.1Model: Person Detection 0106 FP16 - Device: CPULinux 5.10.3EPYC 7F525001000150020002500Min: 2584.9 / Avg: 2590.33 / Max: 2596.67Min: 2580.21 / Avg: 2582.88 / Max: 2586.781. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

LuxCoreRender

LuxCoreRender is an open-source physically based renderer. This test profile is focused on running LuxCoreRender on the CPU as opposed to the OpenCL version. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgM samples/sec, More Is BetterLuxCoreRender 2.3Scene: Rainbow Colors and PrismLinux 5.10.3EPYC 7F520.78751.5752.36253.153.9375SE +/- 0.01, N = 3SE +/- 0.01, N = 33.503.49MIN: 3.43 / MAX: 3.52MIN: 3.42 / MAX: 3.52
OpenBenchmarking.orgM samples/sec, More Is BetterLuxCoreRender 2.3Scene: Rainbow Colors and PrismLinux 5.10.3EPYC 7F52246810Min: 3.48 / Avg: 3.5 / Max: 3.52Min: 3.48 / Avg: 3.49 / Max: 3.5

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: alexnetLinux 5.10.3EPYC 7F52246810SE +/- 0.09, N = 12SE +/- 0.08, N = 157.017.03MIN: 6.57 / MAX: 10.41MIN: 6.6 / MAX: 43.311. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: alexnetLinux 5.10.3EPYC 7F523691215Min: 6.63 / Avg: 7.01 / Max: 7.33Min: 6.64 / Avg: 7.03 / Max: 7.441. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks are for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterPolyhedron Fortran BenchmarksBenchmark: capacitaLinux 5.10.3EPYC 7F524812162017.5617.61

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: googlenetLinux 5.10.3EPYC 7F5248121620SE +/- 0.14, N = 12SE +/- 0.06, N = 1517.7017.65MIN: 17.12 / MAX: 260.94MIN: 17.22 / MAX: 117.521. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20201218Target: CPU - Model: googlenetLinux 5.10.3EPYC 7F5248121620Min: 17.37 / Avg: 17.7 / Max: 19.1Min: 17.46 / Avg: 17.65 / Max: 18.171. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgScore, More Is BetterAI Benchmark Alpha 0.1.2Device Inference ScoreLinux 5.10.3EPYC 7F5240080012001600200017731768

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 2.90Blend File: Barbershop - Compute: CPU-OnlyLinux 5.10.3EPYC 7F5280160240320400SE +/- 0.30, N = 3SE +/- 0.32, N = 3355.80354.80
OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 2.90Blend File: Barbershop - Compute: CPU-OnlyLinux 5.10.3EPYC 7F5260120180240300Min: 355.43 / Avg: 355.8 / Max: 356.39Min: 354.16 / Avg: 354.8 / Max: 355.15

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100 (Encode Time - Seconds; fewer is better):
  Linux 5.10.3: 2.491 (SE +/- 0.000, N = 3; Min: 2.49 / Avg: 2.49 / Max: 2.49)
  EPYC 7F52: 2.498 (SE +/- 0.001, N = 3; Min: 2.5 / Avg: 2.5 / Max: 2.5)
  1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 Alpha - Speed: 1 (Frames Per Second; more is better):
  Linux 5.10.3: 0.370 (SE +/- 0.001, N = 3; Min: 0.37 / Avg: 0.37 / Max: 0.37)
  EPYC 7F52: 0.369 (SE +/- 0.001, N = 3; Min: 0.37 / Avg: 0.37 / Max: 0.37)

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0 - Encoder Mode: Speed 6 Two-Pass (Frames Per Second; more is better):
  Linux 5.10.3: 3.75 (SE +/- 0.01, N = 3; Min: 3.74 / Avg: 3.75 / Max: 3.77)
  EPYC 7F52: 3.74 (SE +/- 0.01, N = 3; Min: 3.73 / Avg: 3.74 / Max: 3.75)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Mlpack Benchmark

Mlpack provides benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

Mlpack Benchmark - Benchmark: scikit_qda (Seconds; fewer is better):
  Linux 5.10.3: 29.96 (SE +/- 0.12, N = 3; Min: 29.72 / Avg: 29.96 / Max: 30.09)
  EPYC 7F52: 30.04 (SE +/- 0.13, N = 3; Min: 29.9 / Avg: 30.04 / Max: 30.29)

Timed MAFFT Alignment

This test performs an alignment of 100 pyruvate decarboxylase sequences. Learn more via the OpenBenchmarking.org test page.

Timed MAFFT Alignment 7.471 - Multiple Sequence Alignment - LSU RNA (Seconds; fewer is better):
  Linux 5.10.3: 9.033 (SE +/- 0.021, N = 3; Min: 8.99 / Avg: 9.03 / Max: 9.06)
  EPYC 7F52: 9.009 (SE +/- 0.079, N = 3; Min: 8.91 / Avg: 9.01 / Max: 9.17)
  1. (CC) gcc options: -std=c99 -O3 -lm -lpthread

Aircrack-ng

Aircrack-ng is a tool for assessing WiFi/WLAN network security. Learn more via the OpenBenchmarking.org test page.

Aircrack-ng 1.5.2 (k/s; more is better):
  Linux 5.10.3: 56766.00 (SE +/- 97.98, N = 3; Min: 56667.44 / Avg: 56766 / Max: 56961.95)
  EPYC 7F52: 56912.74 (SE +/- 72.75, N = 3; Min: 56771 / Avg: 56912.74 / Max: 57012.05)
  1. (CXX) g++ options: -O3 -fvisibility=hidden -masm=intel -fcommon -rdynamic -lpthread -lz -lcrypto -lhwloc -ldl -lm -pthread
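This file mixes "fewer is better" timings (e.g. the MAFFT seconds above) with "more is better" throughput figures (e.g. the Aircrack-ng k/s). A hedged sketch of how two runs can be compared across such mixed metrics, by converting each result to a ratio against a baseline and taking the geometric mean, similar in spirit to the "Show Overall Geometric Mean" view:

```python
import math

def relative_score(value, baseline, higher_is_better):
    """Normalize one result so a ratio above 1.0 beats the baseline,
    regardless of whether the metric is a time or a throughput."""
    return value / baseline if higher_is_better else baseline / value

def geo_mean(ratios):
    """Geometric mean: the conventional way to average ratios."""
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

# EPYC 7F52 run vs. the Linux 5.10.3 run as baseline, using two results above
results = [
    (9.009, 9.033, False),       # Timed MAFFT Alignment, seconds (fewer is better)
    (56912.74, 56766.00, True),  # Aircrack-ng, k/s (more is better)
]
ratios = [relative_score(v, b, hib) for v, b, hib in results]
print(f"Overall ratio: {geo_mean(ratios):.4f}")  # above 1.0 favors the EPYC 7F52 run
```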

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1 - Model: Face Detection 0106 FP16 - Device: CPU (FPS; more is better):
  Linux 5.10.3: 4.01 (SE +/- 0.01, N = 3; Min: 4 / Avg: 4.01 / Max: 4.02)
  EPYC 7F52: 4.02 (SE +/- 0.00, N = 3; Min: 4.01 / Avg: 4.02 / Max: 4.02)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.90 - Blend File: Classroom - Compute: CPU-Only (Seconds; fewer is better):
  Linux 5.10.3: 239.22 (SE +/- 0.12, N = 3; Min: 238.98 / Avg: 239.22 / Max: 239.34)
  EPYC 7F52: 239.80 (SE +/- 0.22, N = 3; Min: 239.43 / Avg: 239.8 / Max: 240.2)

Timed MPlayer Compilation

This test times how long it takes to build the MPlayer open-source media player program. Learn more via the OpenBenchmarking.org test page.

Timed MPlayer Compilation 1.4 - Time To Compile (Seconds; fewer is better):
  Linux 5.10.3: 20.43 (SE +/- 0.02, N = 3; Min: 20.39 / Avg: 20.43 / Max: 20.46)
  EPYC 7F52: 20.38 (SE +/- 0.05, N = 3; Min: 20.29 / Avg: 20.38 / Max: 20.47)

Unpacking Firefox

This simple test profile measures how long it takes to extract the .tar.xz source package of the Mozilla Firefox Web Browser. Learn more via the OpenBenchmarking.org test page.

Unpacking Firefox 84.0 - Extracting: firefox-84.0.source.tar.xz (Seconds; fewer is better):
  Linux 5.10.3: 20.48 (SE +/- 0.03, N = 4; Min: 20.39 / Avg: 20.48 / Max: 20.55)
  EPYC 7F52: 20.43 (SE +/- 0.06, N = 4; Min: 20.27 / Avg: 20.43 / Max: 20.54)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: Glibc Qsort Data Sorting (Bogo Ops/s; more is better):
  Linux 5.10.3: 268.94 (SE +/- 0.93, N = 3; Min: 268 / Avg: 268.94 / Max: 270.8)
  EPYC 7F52: 269.57 (SE +/- 0.99, N = 3; Min: 267.9 / Avg: 269.57 / Max: 271.33)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc
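Stress-NG reports bogo ops/s: how many iterations of the stressor (here, sorting with glibc's qsort) complete per second. This is not Stress-NG's implementation, just a hedged Python sketch of the same style of metric, with sorted() standing in for qsort:

```python
import random
import time

def qsort_bogo_ops(seconds=0.2, size=1000):
    """Repeatedly shuffle and sort a list for a fixed interval;
    return completed sorts per second (a 'bogo ops/s'-style figure)."""
    data = list(range(size))
    ops = 0
    start = time.perf_counter()
    while time.perf_counter() - start < seconds:
        random.shuffle(data)
        sorted(data)
        ops += 1
    return ops / (time.perf_counter() - start)

print(f"~{qsort_bogo_ops():.0f} sorts/sec (illustrative only)")
```

Bogo ops are deliberately crude ("bogus operations"): they are comparable between runs of the same stressor, not across different stressors.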

Stress-NG 0.11.07 - Test: Crypto (Bogo Ops/s; more is better):
  Linux 5.10.3: 4555.43 (SE +/- 5.74, N = 3; Min: 4543.96 / Avg: 4555.43 / Max: 4561.41)
  EPYC 7F52: 4565.97 (SE +/- 0.84, N = 3; Min: 4565.13 / Avg: 4565.97 / Max: 4567.64)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer ISPC - Model: Asian Dragon Obj (Frames Per Second; more is better):
  Linux 5.10.3: 19.59 (SE +/- 0.04, N = 3; Min: 19.51 / Avg: 19.59 / Max: 19.66; MIN: 18.78 / MAX: 19.96)
  EPYC 7F52: 19.64 (SE +/- 0.03, N = 3; Min: 19.6 / Avg: 19.64 / Max: 19.69; MIN: 18.92 / MAX: 19.94)

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: gas_dyn2 (Seconds; fewer is better):
  Linux 5.10.3: 44.26
  EPYC 7F52: 44.16

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: raytrace (Milliseconds; fewer is better):
  Linux 5.10.3: 476 (SE +/- 0.58, N = 3; Min: 475 / Avg: 476 / Max: 477)
  EPYC 7F52: 475 (SE +/- 0.33, N = 3; Min: 474 / Avg: 474.67 / Max: 475)

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Highest Compression (Encode Time - Seconds; fewer is better):
  Linux 5.10.3: 7.716 (SE +/- 0.006, N = 3; Min: 7.71 / Avg: 7.72 / Max: 7.73)
  EPYC 7F52: 7.732 (SE +/- 0.007, N = 3; Min: 7.72 / Avg: 7.73 / Max: 7.74)
  1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 Alpha - Speed: 6 (Frames Per Second; more is better):
  Linux 5.10.3: 1.461 (SE +/- 0.001, N = 3; Min: 1.46 / Avg: 1.46 / Max: 1.46)
  EPYC 7F52: 1.464 (SE +/- 0.003, N = 3; Min: 1.46 / Avg: 1.46 / Max: 1.47)

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: SqueezeNet (Microseconds; fewer is better):
  Linux 5.10.3: 106510 (SE +/- 86.53, N = 3; Min: 106367 / Avg: 106510.33 / Max: 106666)
  EPYC 7F52: 106296 (SE +/- 36.86, N = 3; Min: 106226 / Avg: 106296 / Max: 106351)

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1 - Model: Person Detection 0106 FP32 - Device: CPU (ms; fewer is better):
  Linux 5.10.3: 2605.51 (SE +/- 2.25, N = 3; Min: 2601.58 / Avg: 2605.51 / Max: 2609.36)
  EPYC 7F52: 2600.35 (SE +/- 3.48, N = 3; Min: 2593.83 / Avg: 2600.35 / Max: 2605.72)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3 - Target: CPU - Model: MobileNet v2 (ms; fewer is better):
  Linux 5.10.3: 275.51 (SE +/- 0.42, N = 3; Min: 274.94 / Avg: 275.51 / Max: 276.32; MIN: 272.98 / MAX: 294.91)
  EPYC 7F52: 274.97 (SE +/- 0.53, N = 3; Min: 274.23 / Avg: 274.97 / Max: 275.99; MIN: 272.73 / MAX: 289.81)
  1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: fatigue2 (Seconds; fewer is better):
  Linux 5.10.3: 52.21
  EPYC 7F52: 52.31

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 Alpha - Speed: 10 (Frames Per Second; more is better):
  Linux 5.10.3: 3.192 (SE +/- 0.003, N = 3; Min: 3.19 / Avg: 3.19 / Max: 3.2)
  EPYC 7F52: 3.186 (SE +/- 0.002, N = 3; Min: 3.18 / Avg: 3.19 / Max: 3.19)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: resnet18 (ms; fewer is better):
  Linux 5.10.3: 10.71 (SE +/- 0.04, N = 12; Min: 10.58 / Avg: 10.71 / Max: 10.96; MIN: 10.34 / MAX: 64.22)
  EPYC 7F52: 10.69 (SE +/- 0.03, N = 15; Min: 10.56 / Avg: 10.69 / Max: 10.92; MIN: 10.34 / MAX: 13.84)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

DeepSpeech

Mozilla DeepSpeech is a speech-to-text engine powered by TensorFlow for machine learning and derived from Baidu's Deep Speech research paper. This test profile times the speech-to-text process for a roughly three minute audio recording. Learn more via the OpenBenchmarking.org test page.

DeepSpeech 0.6 - Acceleration: CPU (Seconds; fewer is better):
  Linux 5.10.3: 68.17 (SE +/- 0.08, N = 3; Min: 68.03 / Avg: 68.17 / Max: 68.3)
  EPYC 7F52: 68.29 (SE +/- 0.20, N = 3; Min: 67.9 / Avg: 68.29 / Max: 68.51)

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test measures the RSA 4096-bit performance of OpenSSL. Learn more via the OpenBenchmarking.org test page.

OpenSSL 1.1.1 - RSA 4096-bit Performance (Signs Per Second; more is better):
  Linux 5.10.3: 4579.8 (SE +/- 0.71, N = 3; Min: 4578.5 / Avg: 4579.83 / Max: 4580.9)
  EPYC 7F52: 4571.4 (SE +/- 0.76, N = 3; Min: 4570 / Avg: 4571.4 / Max: 4572.6)
  1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1 - Model: Age Gender Recognition Retail 0013 FP32 - Device: CPU (FPS; more is better):
  Linux 5.10.3: 9953.07 (SE +/- 5.75, N = 3; Min: 9945.47 / Avg: 9953.07 / Max: 9964.35)
  EPYC 7F52: 9935.55 (SE +/- 16.24, N = 3; Min: 9913.95 / Avg: 9935.55 / Max: 9967.35)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: Inception ResNet V2 (Microseconds; fewer is better):
  Linux 5.10.3: 1346243 (SE +/- 1103.67, N = 3; Min: 1344040 / Avg: 1346243.33 / Max: 1347460)
  EPYC 7F52: 1343920 (SE +/- 1087.40, N = 3; Min: 1341750 / Avg: 1343920 / Max: 1345130)

OCRMyPDF

OCRMyPDF adds an optical character recognition (OCR) text layer to scanned PDF files, producing new PDFs with text that is selectable, searchable, and copy-paste capable. OCRMyPDF leverages the Tesseract OCR engine and is written in Python. Learn more via the OpenBenchmarking.org test page.

OCRMyPDF 9.6.0+dfsg - Processing 60 Page PDF Document (Seconds; fewer is better):
  Linux 5.10.3: 19.55 (SE +/- 0.04, N = 3; Min: 19.47 / Avg: 19.55 / Max: 19.6)
  EPYC 7F52: 19.52 (SE +/- 0.07, N = 3; Min: 19.41 / Avg: 19.51 / Max: 19.64)

RNNoise

RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26 minute long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.

RNNoise 2020-06-28 (Seconds; fewer is better):
  Linux 5.10.3: 20.10 (SE +/- 0.02, N = 3; Min: 20.08 / Avg: 20.1 / Max: 20.15)
  EPYC 7F52: 20.13 (SE +/- 0.00, N = 3; Min: 20.12 / Avg: 20.13 / Max: 20.14)
  1. (CC) gcc options: -O2 -pedantic -fvisibility=hidden

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1 - Model: Face Detection 0106 FP32 - Device: CPU (ms; fewer is better):
  Linux 5.10.3: 1989.91 (SE +/- 2.14, N = 3; Min: 1986.12 / Avg: 1989.91 / Max: 1993.54)
  EPYC 7F52: 1986.91 (SE +/- 2.66, N = 3; Min: 1981.75 / Avg: 1986.91 / Max: 1990.59)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 29Oct2020 - Model: 20k Atoms (ns/day; more is better):
  Linux 5.10.3: 12.26 (SE +/- 0.01, N = 3; Min: 12.24 / Avg: 12.26 / Max: 12.28)
  EPYC 7F52: 12.27 (SE +/- 0.01, N = 3; Min: 12.27 / Avg: 12.27 / Max: 12.28)
  1. (CXX) g++ options: -O3 -pthread -lm

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.7.0 - Video Input: Summer Nature 4K (FPS; more is better):
  Linux 5.10.3: 227.34 (SE +/- 0.24, N = 3; Min: 226.89 / Avg: 227.34 / Max: 227.72; MIN: 166.45 / MAX: 246.55)
  EPYC 7F52: 227.67 (SE +/- 0.89, N = 3; Min: 226.22 / Avg: 227.67 / Max: 229.29; MIN: 160.75 / MAX: 250.13)
  1. (CC) gcc options: -pthread

PlaidML

This test profile uses the PlaidML deep learning framework developed by Intel to offer up various benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML - FP16: No - Mode: Inference - Network: Mobilenet - Device: CPU (FPS; more is better):
  Linux 5.10.3: 14.51 (SE +/- 0.09, N = 3; Min: 14.41 / Avg: 14.51 / Max: 14.69)
  EPYC 7F52: 14.53 (SE +/- 0.10, N = 3; Min: 14.34 / Avg: 14.53 / Max: 14.64)

PlaidML - FP16: No - Mode: Inference - Network: IMDB LSTM - Device: CPU (FPS; more is better):
  Linux 5.10.3: 666.79 (SE +/- 2.01, N = 3; Min: 663.75 / Avg: 666.79 / Max: 670.6)
  EPYC 7F52: 665.88 (SE +/- 2.99, N = 3; Min: 659.97 / Avg: 665.88 / Max: 669.68)

John The Ripper

This is a benchmark of John The Ripper, which is a password cracker. Learn more via the OpenBenchmarking.org test page.

John The Ripper 1.9.0-jumbo-1 - Test: MD5 (Real C/S; more is better):
  Linux 5.10.3: 1728667 (SE +/- 3179.80, N = 3; Min: 1725000 / Avg: 1728666.67 / Max: 1735000)
  EPYC 7F52: 1726333 (SE +/- 2962.73, N = 3; Min: 1722000 / Avg: 1726333.33 / Max: 1732000)
  1. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -pthread -lm -lz -ldl -lcrypt -lbz2
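John the Ripper's "Real C/S" figure counts candidate passwords hashed per second. The core loop being measured can be sketched with Python's hashlib; this is orders of magnitude slower than JtR's vectorized MD5 code and is only meant to illustrate the metric (the candidate string format here is made up):

```python
import hashlib
import time

def md5_candidates_per_sec(seconds=0.2):
    """Hash numbered candidate strings for a short interval and
    return the hashing rate (candidates/second)."""
    count = 0
    start = time.perf_counter()
    while time.perf_counter() - start < seconds:
        hashlib.md5(f"candidate{count}".encode()).hexdigest()
        count += 1
    return count / (time.perf_counter() - start)

print(f"~{md5_candidates_per_sec():,.0f} MD5 candidates/sec (pure Python)")
```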

Open Porous Media

This is a test of Open Porous Media, a set of open-source tools concerning simulation of flow and transport of fluids in porous media. This test profile depends upon MPI/Flow already being installed on the system. Install instructions at https://opm-project.org/?page_id=36. Learn more via the OpenBenchmarking.org test page.

Open Porous Media - OPM Benchmark: Flow MPI Norne - Threads: 1 (Seconds; fewer is better):
  Linux 5.10.3: 364.94 (SE +/- 0.73, N = 3; Min: 364.11 / Avg: 364.94 / Max: 366.39)
  EPYC 7F52: 365.37 (SE +/- 1.92, N = 3; Min: 363.16 / Avg: 365.37 / Max: 369.19)
  1. flow 2020.04

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Swirl (Iterations Per Minute; more is better):
  Linux 5.10.3: 895 (SE +/- 0.33, N = 3; Min: 895 / Avg: 895.33 / Max: 896)
  EPYC 7F52: 896 (SE +/- 1.20, N = 3; Min: 894 / Avg: 895.67 / Max: 898)
  1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: shufflenet-v2 (ms; fewer is better):
  Linux 5.10.3: 8.97 (SE +/- 0.02, N = 12; Min: 8.88 / Avg: 8.97 / Max: 9.12; MIN: 8.55 / MAX: 22.64)
  EPYC 7F52: 8.98 (SE +/- 0.02, N = 15; Min: 8.85 / Avg: 8.98 / Max: 9.06; MIN: 8.73 / MAX: 14.04)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It is comprised of over 50 labeled real-world and artificial timeseries data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

Numenta Anomaly Benchmark 1.1 - Detector: Earthgecko Skyline (Seconds; fewer is better):
  Linux 5.10.3: 76.75 (SE +/- 0.55, N = 3; Min: 75.7 / Avg: 76.75 / Max: 77.57)
  EPYC 7F52: 76.83 (SE +/- 0.72, N = 3; Min: 75.39 / Avg: 76.83 / Max: 77.68)

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer - Model: Asian Dragon Obj (Frames Per Second; more is better):
  Linux 5.10.3: 20.42 (SE +/- 0.03, N = 3; Min: 20.36 / Avg: 20.42 / Max: 20.47; MIN: 19.57 / MAX: 20.77)
  EPYC 7F52: 20.41 (SE +/- 0.11, N = 3; Min: 20.19 / Avg: 20.41 / Max: 20.55; MIN: 19.42 / MAX: 20.8)

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless, Highest Compression (Encode Time - Seconds; fewer is better):
  Linux 5.10.3: 36.28 (SE +/- 0.05, N = 3; Min: 36.18 / Avg: 36.28 / Max: 36.35)
  EPYC 7F52: 36.31 (SE +/- 0.04, N = 3; Min: 36.23 / Avg: 36.31 / Max: 36.36)
  1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better):
  Linux 5.10.3: 5.52758 (SE +/- 0.01913, N = 3; Min: 5.5 / Avg: 5.53 / Max: 5.56; MIN: 5.38)
  EPYC 7F52: 5.52280 (SE +/- 0.02675, N = 3; Min: 5.49 / Avg: 5.52 / Max: 5.58; MIN: 5.32)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

Polyhedron Fortran Benchmarks - Benchmark: induct2 (Seconds; fewer is better):
  Linux 5.10.3: 23.81
  EPYC 7F52: 23.79

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It is comprised of over 50 labeled real-world and artificial timeseries data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

Numenta Anomaly Benchmark 1.1 - Detector: Relative Entropy (Seconds; fewer is better):
  Linux 5.10.3: 14.48 (SE +/- 0.17, N = 3; Min: 14.21 / Avg: 14.48 / Max: 14.78)
  EPYC 7F52: 14.47 (SE +/- 0.02, N = 3; Min: 14.43 / Avg: 14.47 / Max: 14.51)

WavPack Audio Encoding

This test times how long it takes to encode a sample WAV file to WavPack format with very high quality settings. Learn more via the OpenBenchmarking.org test page.

WavPack Audio Encoding 5.3 - WAV To WavPack (Seconds; fewer is better):
  Linux 5.10.3: 13.74 (SE +/- 0.01, N = 5; Min: 13.73 / Avg: 13.74 / Max: 13.78)
  EPYC 7F52: 13.75 (SE +/- 0.01, N = 5; Min: 13.73 / Avg: 13.75 / Max: 13.8)
  1. (CXX) g++ options: -rdynamic

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1 - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (FPS; more is better):
  Linux 5.10.3: 9966.93 (SE +/- 5.41, N = 3; Min: 9956.6 / Avg: 9966.93 / Max: 9974.86)
  EPYC 7F52: 9974.70 (SE +/- 6.06, N = 3; Min: 9962.58 / Avg: 9974.7 / Max: 9981.16)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

LAME MP3 Encoding

LAME is an MP3 encoder licensed under the LGPL. This test measures the time required to encode a WAV file to MP3 format. Learn more via the OpenBenchmarking.org test page.

LAME MP3 Encoding 3.100 - WAV To MP3 (Seconds; fewer is better):
  Linux 5.10.3: 7.939 (SE +/- 0.008, N = 3; Min: 7.93 / Avg: 7.94 / Max: 7.95)
  EPYC 7F52: 7.933 (SE +/- 0.004, N = 3; Min: 7.93 / Avg: 7.93 / Max: 7.94)
  1. (CC) gcc options: -O3 -ffast-math -funroll-loops -fschedule-insns2 -fbranch-count-reg -fforce-addr -pipe -lncurses -lm

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: Malloc (Bogo Ops/s; more is better):
  Linux 5.10.3: 332331122.53 (SE +/- 693009.74, N = 3; Min: 330945590.83 / Avg: 332331122.53 / Max: 333055730.62)
  EPYC 7F52: 332554816.83 (SE +/- 811855.41, N = 3; Min: 331019569.63 / Avg: 332554816.83 / Max: 333780250.03)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

Stress-NG 0.11.07 - Test: Glibc C String Functions (Bogo Ops/s; more is better):
  Linux 5.10.3: 1143670.00 (SE +/- 2853.73, N = 3; Min: 1140177.61 / Avg: 1143670 / Max: 1149325.64)
  EPYC 7F52: 1144375.85 (SE +/- 2051.22, N = 3; Min: 1141681.39 / Avg: 1144375.85 / Max: 1148402.16)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks are a set of benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Polyhedron Fortran Benchmarks - Benchmark: rnflow (Seconds, Fewer Is Better)
  Linux 5.10.3: 16.59
  EPYC 7F52: 16.60

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Blender 2.90 - Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds, Fewer Is Better)
  Linux 5.10.3: 266.70 (SE +/- 0.33, N = 3; Min: 266.1 / Avg: 266.7 / Max: 267.25)
  EPYC 7F52: 266.54 (SE +/- 1.48, N = 3; Min: 264.9 / Avg: 266.54 / Max: 269.5)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Stress-NG 0.11.07 - Test: Vector Math (Bogo Ops/s, More Is Better)
  Linux 5.10.3: 142907.67 (SE +/- 19.82, N = 3; Min: 142868.09 / Avg: 142907.67 / Max: 142929.44)
  EPYC 7F52: 142981.97 (SE +/- 6.50, N = 3; Min: 142971.08 / Avg: 142981.97 / Max: 142993.55)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

Hugin

Hugin is an open-source, cross-platform panorama photo stitcher software package. This test profile times how long it takes to run the assistant and panorama photo stitching on a set of images. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Hugin - Panorama Photo Assistant + Stitching Time (Seconds, Fewer Is Better)
  Linux 5.10.3: 50.68 (SE +/- 0.25, N = 3; Min: 50.2 / Avg: 50.68 / Max: 51.03)
  EPYC 7F52: 50.70 (SE +/- 0.07, N = 3; Min: 50.58 / Avg: 50.7 / Max: 50.83)

Open Porous Media

This is a test of Open Porous Media, a set of open-source tools for simulating the flow and transport of fluids in porous media. This test profile depends upon MPI/Flow already being installed on the system. Install instructions at https://opm-project.org/?page_id=36. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Open Porous Media - OPM Benchmark: Flow MPI Norne - Threads: 2 (Seconds, Fewer Is Better)
  Linux 5.10.3: 212.31 (SE +/- 0.34, N = 3; Min: 211.66 / Avg: 212.31 / Max: 212.77)
  EPYC 7F52: 212.22 (SE +/- 0.16, N = 3; Min: 211.93 / Avg: 212.22 / Max: 212.48)
  1. flow 2020.04

Monkey Audio Encoding

This test times how long it takes to encode a sample WAV file to Monkey's Audio APE format. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Monkey Audio Encoding 3.99.6 - WAV To APE (Seconds, Fewer Is Better)
  Linux 5.10.3: 12.50 (SE +/- 0.01, N = 5; Min: 12.48 / Avg: 12.5 / Max: 12.52)
  EPYC 7F52: 12.51 (SE +/- 0.01, N = 5; Min: 12.49 / Avg: 12.51 / Max: 12.52)
  1. (CXX) g++ options: -O3 -pedantic -rdynamic -lrt

John The Ripper

This is a benchmark of John The Ripper, which is a password cracker. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - John The Ripper 1.9.0-jumbo-1 - Test: Blowfish (Real C/S, More Is Better)
  Linux 5.10.3: 26397 (SE +/- 6.17, N = 3; Min: 26390 / Avg: 26396.67 / Max: 26409)
  EPYC 7F52: 26390 (SE +/- 5.78, N = 3; Min: 26380 / Avg: 26390.33 / Max: 26400)
  1. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -pthread -lm -lz -ldl -lcrypt -lbz2

Opus Codec Encoding

Opus is an open audio codec. Opus is a lossy audio compression format designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Opus Codec Encoding 1.3.1 - WAV To Opus Encode (Seconds, Fewer Is Better)
  Linux 5.10.3: 7.978 (SE +/- 0.012, N = 5; Min: 7.96 / Avg: 7.98 / Max: 8.03)
  EPYC 7F52: 7.980 (SE +/- 0.016, N = 5; Min: 7.96 / Avg: 7.98 / Max: 8.04)
  1. (CXX) g++ options: -fvisibility=hidden -logg -lm

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library that offers high-performance volume computation kernels and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - OpenVKL 0.9 - Benchmark: vklBenchmarkUnstructuredVolume (Items / Sec, More Is Better)
  Linux 5.10.3: 1817665.47 (SE +/- 1848.12, N = 3; Min: 1814049.49 / Avg: 1817665.47 / Max: 1820136.9; MIN: 19297 / MAX: 6122055)
  EPYC 7F52: 1818093.89 (SE +/- 2560.46, N = 3; Min: 1813068.69 / Avg: 1818093.89 / Max: 1821459.92; MIN: 19110 / MAX: 6113054)

Timed Clash Compilation

Build the clash-lang Haskell to VHDL/Verilog/SystemVerilog compiler with GHC 8.10.1. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Timed Clash Compilation - Time To Compile (Seconds, Fewer Is Better)
  Linux 5.10.3: 450.38 (SE +/- 1.29, N = 3; Min: 447.91 / Avg: 450.38 / Max: 452.27)
  EPYC 7F52: 450.48 (SE +/- 0.56, N = 3; Min: 449.43 / Avg: 450.48 / Max: 451.37)

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - LZ4 Compression 1.9.3 - Compression Level: 3 - Compression Speed (MB/s, More Is Better)
  Linux 5.10.3: 53.48 (SE +/- 0.44, N = 3; Min: 52.99 / Avg: 53.48 / Max: 54.35)
  EPYC 7F52: 53.49 (SE +/- 0.46, N = 8; Min: 52.45 / Avg: 53.49 / Max: 55.11)
  1. (CC) gcc options: -O3
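The compression-speed metric is just input bytes processed per second of wall-clock time. A sketch of that measurement using Python's stdlib zlib as a stand-in for LZ4 (the absolute numbers will be nothing like LZ4's; this only illustrates how the MB/s figure is obtained):

```python
import time
import zlib

# ~2.2 MB of highly compressible sample data.
data = b"the quick brown fox jumps over the lazy dog " * 50_000

start = time.perf_counter()
compressed = zlib.compress(data, level=3)
elapsed = time.perf_counter() - start

assert zlib.decompress(compressed) == data  # round-trip sanity check
print(f"{len(data) / elapsed / 1e6:.1f} MB/s, ratio {len(data) / len(compressed):.2f}")
```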

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - ASTC Encoder 2.0 - Preset: Exhaustive (Seconds, Fewer Is Better)
  Linux 5.10.3: 108.81 (SE +/- 0.12, N = 3; Min: 108.56 / Avg: 108.81 / Max: 108.94)
  EPYC 7F52: 108.83 (SE +/- 0.13, N = 3; Min: 108.57 / Avg: 108.83 / Max: 108.98)
  1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks are a set of benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Polyhedron Fortran Benchmarks - Benchmark: mp_prop_design (Seconds, Fewer Is Better)
  Linux 5.10.3: 59.39
  EPYC 7F52: 59.38

Stockfish

This is a test of Stockfish, an advanced C++11 chess engine that can scale up to 128 CPU cores. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Stockfish 12 - Total Time (Nodes Per Second, More Is Better)
  Linux 5.10.3: 36383816 (SE +/- 178225.83, N = 3; Min: 36202829 / Avg: 36383815.67 / Max: 36740253)
  EPYC 7F52: 36388251 (SE +/- 300939.62, N = 3; Min: 35813405 / Avg: 36388251.33 / Max: 36830134)
  1. (CXX) g++ options: -m64 -lpthread -fno-exceptions -std=c++17 -pedantic -O3 -msse -msse3 -mpopcnt -msse4.1 -mssse3 -msse2 -flto -flto=jobserver

Hierarchical INTegration

This test runs the U.S. Department of Energy's Ames Laboratory Hierarchical INTegration (HINT) benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Hierarchical INTegration 1.0 - Test: FLOAT (QUIPs, More Is Better)
  Linux 5.10.3: 347380647.36 (SE +/- 37054.68, N = 3; Min: 347320726.74 / Avg: 347380647.36 / Max: 347448373.9)
  EPYC 7F52: 347415262.13 (SE +/- 82650.29, N = 3; Min: 347294652.45 / Avg: 347415262.13 / Max: 347573460.73)
  1. (CC) gcc options: -O3 -march=native -lm

OpenVINO

This is a test of the Intel OpenVINO toolkit for neural network inference, using its built-in benchmarking support and analyzing the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - OpenVINO 2021.1 - Model: Face Detection 0106 FP16 - Device: CPU (ms, Fewer Is Better)
  Linux 5.10.3: 1988.90 (SE +/- 2.02, N = 3; Min: 1984.89 / Avg: 1988.9 / Max: 1991.26)
  EPYC 7F52: 1988.75 (SE +/- 1.96, N = 3; Min: 1986.54 / Avg: 1988.75 / Max: 1992.66)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - PyPerformance 1.0.0 - Benchmark: regex_compile (Milliseconds, Fewer Is Better)
  Linux 5.10.3: 173
  EPYC 7F52: 173

OpenBenchmarking.org - PyPerformance 1.0.0 - Benchmark: nbody (Milliseconds, Fewer Is Better)
  Linux 5.10.3: 113
  EPYC 7F52: 113

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - IndigoBench 4.4 - Acceleration: CPU - Scene: Bedroom (M samples/s, More Is Better)
  Linux 5.10.3: 3.577 (SE +/- 0.002, N = 3; Min: 3.57 / Avg: 3.58 / Max: 3.58)
  EPYC 7F52: 3.577 (SE +/- 0.009, N = 3; Min: 3.57 / Avg: 3.58 / Max: 3.59)

OpenVINO

This is a test of the Intel OpenVINO toolkit for neural network inference, using its built-in benchmarking support and analyzing the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - OpenVINO 2021.1 - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (ms, Fewer Is Better)
  Linux 5.10.3: 0.78 (SE +/- 0.00, N = 3; Min: 0.78 / Avg: 0.78 / Max: 0.79)
  EPYC 7F52: 0.78 (SE +/- 0.00, N = 3; Min: 0.78 / Avg: 0.78 / Max: 0.78)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

OpenBenchmarking.org - OpenVINO 2021.1 - Model: Face Detection 0106 FP32 - Device: CPU (FPS, More Is Better)
  Linux 5.10.3: 4.01 (SE +/- 0.00, N = 3; Min: 4.01 / Avg: 4.01 / Max: 4.01)
  EPYC 7F52: 4.01 (SE +/- 0.01, N = 3; Min: 3.99 / Avg: 4.01 / Max: 4.03)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread

PlaidML

This test profile uses the PlaidML deep learning framework developed by Intel to run various benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - PlaidML - FP16: No - Mode: Inference - Network: NASNet Large - Device: CPU (FPS, More Is Better)
  Linux 5.10.3: 1.04 (SE +/- 0.00, N = 3; Min: 1.04 / Avg: 1.04 / Max: 1.04)
  EPYC 7F52: 1.04 (SE +/- 0.00, N = 3; Min: 1.04 / Avg: 1.04 / Max: 1.05)

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - NCNN 20201218 - Target: CPU - Model: mnasnet (ms, Fewer Is Better)
  Linux 5.10.3: 7.60 (SE +/- 0.02, N = 12; Min: 7.47 / Avg: 7.6 / Max: 7.79; MIN: 7.34 / MAX: 8.93)
  EPYC 7F52: 7.60 (SE +/- 0.02, N = 15; Min: 7.44 / Avg: 7.6 / Max: 7.76; MIN: 6.99 / MAX: 10.58)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - ASTC Encoder 2.0 - Preset: Thorough (Seconds, Fewer Is Better)
  Linux 5.10.3: 13.79 (SE +/- 0.01, N = 3; Min: 13.77 / Avg: 13.79 / Max: 13.8)
  EPYC 7F52: 13.79 (SE +/- 0.01, N = 3; Min: 13.77 / Avg: 13.79 / Max: 13.81)
  1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

OpenBenchmarking.org - ASTC Encoder 2.0 - Preset: Fast (Seconds, Fewer Is Better)
  Linux 5.10.3: 5.35 (SE +/- 0.00, N = 3; Min: 5.35 / Avg: 5.35 / Max: 5.35)
  EPYC 7F52: 5.35 (SE +/- 0.01, N = 3; Min: 5.33 / Avg: 5.35 / Max: 5.36)
  1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

LuxCoreRender

LuxCoreRender is an open-source physically based renderer. This test profile is focused on running LuxCoreRender on the CPU as opposed to the OpenCL version. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - LuxCoreRender 2.3 - Scene: DLSC (M samples/sec, More Is Better)
  Linux 5.10.3: 3.27 (SE +/- 0.01, N = 3; Min: 3.25 / Avg: 3.27 / Max: 3.29; MIN: 3.17 / MAX: 3.42)
  EPYC 7F52: 3.27 (SE +/- 0.01, N = 3; Min: 3.26 / Avg: 3.27 / Max: 3.28; MIN: 3.12 / MAX: 3.42)

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray-tracing and is part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Intel Open Image Denoise 1.2.0 - Scene: Memorial (Images / Sec, More Is Better)
  Linux 5.10.3: 14.22 (SE +/- 0.04, N = 3; Min: 14.14 / Avg: 14.22 / Max: 14.29)
  EPYC 7F52: 14.22 (SE +/- 0.04, N = 3; Min: 14.15 / Avg: 14.22 / Max: 14.3)

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - rav1e 0.4 Alpha - Speed: 5 (Frames Per Second, More Is Better)
  Linux 5.10.3: 1.094 (SE +/- 0.001, N = 3; Min: 1.09 / Avg: 1.09 / Max: 1.1)
  EPYC 7F52: 1.094 (SE +/- 0.001, N = 3; Min: 1.09 / Avg: 1.09 / Max: 1.1)

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - GraphicsMagick 1.3.33 - Operation: Enhanced (Iterations Per Minute, More Is Better)
  Linux 5.10.3: 374 (SE +/- 0.33, N = 3; Min: 374 / Avg: 374.33 / Max: 375)
  EPYC 7F52: 374 (SE +/- 0.33, N = 3; Min: 374 / Avg: 374.33 / Max: 375)
  1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

OpenBenchmarking.org - GraphicsMagick 1.3.33 - Operation: Sharpen (Iterations Per Minute, More Is Better)
  Linux 5.10.3: 235 (SE +/- 0.33, N = 3; Min: 235 / Avg: 235.33 / Max: 236)
  EPYC 7F52: 235 (SE +/- 0.33, N = 3; Min: 235 / Avg: 235.33 / Max: 236)
  1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

simdjson

This is a benchmark of simdjson, a high-performance JSON parser. simdjson aims to be the fastest JSON parser and is used by projects such as Microsoft FishStore, Yandex ClickHouse, and Shopify. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - simdjson 0.7.1 - Throughput Test: DistinctUserID (GB/s, More Is Better)
  Linux 5.10.3: 0.62 (SE +/- 0.00, N = 3; Min: 0.62 / Avg: 0.62 / Max: 0.62)
  EPYC 7F52: 0.62 (SE +/- 0.00, N = 3; Min: 0.62 / Avg: 0.62 / Max: 0.63)
  1. (CXX) g++ options: -O3 -pthread

OpenBenchmarking.org - simdjson 0.7.1 - Throughput Test: PartialTweets (GB/s, More Is Better)
  Linux 5.10.3: 0.61 (SE +/- 0.00, N = 3; Min: 0.61 / Avg: 0.61 / Max: 0.61)
  EPYC 7F52: 0.61 (SE +/- 0.00, N = 3; Min: 0.6 / Avg: 0.61 / Max: 0.61)
  1. (CXX) g++ options: -O3 -pthread
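The GB/s figure here is input bytes parsed per second of wall time. A sketch of the same measurement with the Python stdlib json module standing in for simdjson (expect far lower throughput than a SIMD parser; the document shape below is a made-up stand-in, not simdjson's test corpus):

```python
import json
import time

# Synthetic document loosely shaped like a set of tweet records.
doc = {"statuses": [{"id": i, "user": {"id_str": str(i)}, "text": "hello"}
                    for i in range(20_000)]}
payload = json.dumps(doc).encode()

start = time.perf_counter()
parsed = json.loads(payload)
elapsed = time.perf_counter() - start

print(f"{len(payload) / elapsed / 1e9:.3f} GB/s over {len(payload)} bytes")
```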

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - WebP Image Encode 1.1 - Encode Settings: Default (Encode Time - Seconds, Fewer Is Better)
  Linux 5.10.3: 1.618 (SE +/- 0.001, N = 3; Min: 1.62 / Avg: 1.62 / Max: 1.62)
  EPYC 7F52: 1.618 (SE +/- 0.001, N = 3; Min: 1.62 / Avg: 1.62 / Max: 1.62)
  1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

Monte Carlo Simulations of Ionised Nebulae

MOCASSIN (Monte Carlo Simulations of Ionised Nebulae) is a fully 3D or 2D photoionisation and dust radiative transfer code which employs a Monte Carlo approach to the transfer of radiation through media of arbitrary geometry and density distribution. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Monte Carlo Simulations of Ionised Nebulae 2019-03-24 - Input: Dust 2D tau100.0 (Seconds, Fewer Is Better)
  Linux 5.10.3: 192
  EPYC 7F52: 192
  1. (F9X) gfortran options: -cpp -Jsource/ -ffree-line-length-0 -lm -std=legacy -O3 -O2 -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

Polyhedron Fortran Benchmarks

The Fortran.uk Polyhedron Fortran Benchmarks are a set of benchmarks for comparing Fortran compiler performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Polyhedron Fortran Benchmarks - Benchmark: protein (Seconds, Fewer Is Better)
  Linux 5.10.3: 13.84
  EPYC 7F52: 13.84

OpenBenchmarking.org - Polyhedron Fortran Benchmarks - Benchmark: linpk (Seconds, Fewer Is Better)
  Linux 5.10.3: 3.18
  EPYC 7F52: 3.18

OpenBenchmarking.org - Polyhedron Fortran Benchmarks - Benchmark: doduc (Seconds, Fewer Is Better)
  Linux 5.10.3: 7.26
  EPYC 7F52: 7.26

OpenBenchmarking.org - Polyhedron Fortran Benchmarks - Benchmark: mdbx (Seconds, Fewer Is Better)
  Linux 5.10.3: 4.72
  EPYC 7F52: 4.72

OpenBenchmarking.org - Polyhedron Fortran Benchmarks - Benchmark: air (Seconds, Fewer Is Better)
  Linux 5.10.3: 1.77
  EPYC 7F52: 1.77

CLOMP

CLOMP is the C version of the Livermore OpenMP benchmark developed to measure OpenMP overheads and other performance impacts due to threading in order to influence future system designs. This particular test profile configuration is currently set to look at the OpenMP static schedule speed-up across all available CPU cores using the recommended test configuration. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - CLOMP 1.2 - Static OMP Speedup (Speedup, More Is Better)
  Linux 5.10.3: 50.1 (SE +/- 0.09, N = 3; Min: 49.9 / Avg: 50.07 / Max: 50.2)
  EPYC 7F52: 50.1 (SE +/- 0.21, N = 3; Min: 49.8 / Avg: 50.1 / Max: 50.5)
  1. (CC) gcc options: -fopenmp -O3 -lm
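The speedup reported by CLOMP is the classic ratio of serial runtime to threaded runtime. The arithmetic behind such a figure, with hypothetical timings and a hypothetical thread count (this dual 7F52 system exposes 64 hardware threads, but the numbers below are illustrative only):

```python
def omp_speedup(t_serial, t_parallel):
    """Classic speedup: how many times faster the threaded run completes."""
    return t_serial / t_parallel

# Hypothetical timings: 100 s serial, 2 s with all threads engaged.
s = omp_speedup(100.0, 2.0)
efficiency = s / 64  # parallel efficiency across 64 hardware threads
print(f"speedup {s:.1f}x, efficiency {efficiency:.2f}")
```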

Sunflow Rendering System

This test runs benchmarks of the Sunflow Rendering System, an open-source render engine for photo-realistic image synthesis with a ray-tracing core. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Sunflow Rendering System 0.07.2 - Global Illumination + Image Synthesis (Seconds, Fewer Is Better)
  Linux 5.10.3: 0.818 (SE +/- 0.013, N = 15; Min: 0.73 / Avg: 0.82 / Max: 0.96; MIN: 0.58 / MAX: 1.49)
  EPYC 7F52: 0.820 (SE +/- 0.008, N = 3; Min: 0.81 / Avg: 0.82 / Max: 0.84; MIN: 0.56 / MAX: 1.43)

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient and lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Mobile Neural Network 2020-09-17 - Model: SqueezeNetV1.0 (ms, Fewer Is Better)
  Linux 5.10.3: 10.29 (SE +/- 0.11, N = 15; Min: 9.79 / Avg: 10.29 / Max: 11.14; MIN: 9.72 / MAX: 23.37)
  EPYC 7F52: 10.93 (SE +/- 0.24, N = 15; Min: 9.83 / Avg: 10.93 / Max: 12.95; MIN: 9.63 / MAX: 23.96)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Stress-NG 0.11.07 - Test: CPU Cache (Bogo Ops/s, More Is Better)
  Linux 5.10.3: 44.52 (SE +/- 1.40, N = 15; Min: 36.39 / Avg: 44.52 / Max: 54.09)
  EPYC 7F52: 44.86 (SE +/- 1.52, N = 12; Min: 35.89 / Avg: 44.86 / Max: 52.99)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - LAMMPS Molecular Dynamics Simulator 29Oct2020 - Model: Rhodopsin Protein (ns/day, More Is Better)
  Linux 5.10.3: 11.52 (SE +/- 0.24, N = 15; Min: 9.44 / Avg: 11.52 / Max: 12.23)
  EPYC 7F52: 11.68 (SE +/- 0.16, N = 15; Min: 9.99 / Avg: 11.68 / Max: 12.23)
  1. (CXX) g++ options: -O3 -pthread -lm
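The ns/day unit used by molecular dynamics codes is the amount of simulated time advanced per day of wall-clock time. A sketch of the conversion; the timestep, step count, and wall time below are hypothetical, not the parameters of this run:

```python
def ns_per_day(timestep_fs, steps_done, wall_seconds):
    """Simulated nanoseconds advanced per day of wall-clock time."""
    simulated_ns = timestep_fs * steps_done / 1e6  # 1 ns = 1e6 fs
    return simulated_ns / wall_seconds * 86_400    # seconds per day

# Hypothetical: 2 fs timestep, 10,000 steps completed in 150 s of wall time.
print(f"{ns_per_day(2.0, 10_000, 150.0):.2f} ns/day")
```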

301 Results Shown

Polyhedron Fortran Benchmarks
oneDNN
Redis
oneDNN:
  Convolution Batch Shapes Auto - f32 - CPU
  Matrix Multiply Batch Shapes Transformer - f32 - CPU
  IP Shapes 3D - f32 - CPU
Stress-NG:
  Forking
  System V Message Passing
oneDNN:
  IP Shapes 1D - f32 - CPU
  Deconvolution Batch shapes_1d - f32 - CPU
  Deconvolution Batch shapes_3d - f32 - CPU
  Convolution Batch Shapes Auto - u8s8f32 - CPU
PyPerformance
oneDNN:
  Recurrent Neural Network Training - u8s8f32 - CPU
  Recurrent Neural Network Training - f32 - CPU
  Recurrent Neural Network Training - bf16bf16bf16 - CPU
  Recurrent Neural Network Inference - f32 - CPU
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU
  Recurrent Neural Network Inference - u8s8f32 - CPU
Stress-NG
Redis
GraphicsMagick
OpenVKL
Stress-NG
Kvazaar
OpenVKL
Stress-NG
SVT-VP9
Stress-NG
Redis
Open Porous Media
PostgreSQL pgbench
Open Porous Media
SVT-VP9
NCNN
Kvazaar
PostgreSQL pgbench
Darmstadt Automotive Parallel Heterogeneous Suite
oneDNN
librsvg
Redis
PyPerformance
AOM AV1
PlaidML
Timed HMMer Search
Timed GDB GNU Debugger Compilation
SVT-VP9
PostgreSQL pgbench
GPAW
Numenta Anomaly Benchmark
Zstd Compression
WireGuard + Linux Networking Stack Stress Test
simdjson
Stress-NG
Polyhedron Fortran Benchmarks
Zstd Compression
PyPerformance
ECP-CANDLE
PlaidML
Numenta Anomaly Benchmark
PlaidML
GraphicsMagick
Kvazaar
ECP-CANDLE
PlaidML
Redis
PostgreSQL pgbench
LZ4 Compression
Stress-NG
PostgreSQL pgbench
simdjson
PostgreSQL pgbench
NCNN
Tachyon
PostgreSQL pgbench
Crafty
Stress-NG
KeyDB
Mobile Neural Network:
  resnet-v2-50
  inception-v3
PyPerformance
Stress-NG
PostgreSQL pgbench
dav1d
PyPerformance
Mlpack Benchmark
NCNN
PostgreSQL pgbench
VP9 libvpx Encoding
x265
Embree
PostgreSQL pgbench
Mobile Neural Network
Open Porous Media
Stress-NG
BRL-CAD
7-Zip Compression
Caffe
Embree
Timed Apache Compilation
OpenVINO
AOM AV1
Timed Eigen Compilation
SVT-AV1
GNU Octave Benchmark
Mlpack Benchmark
PostgreSQL pgbench
XZ Compression
Numenta Anomaly Benchmark
dav1d
PHPBench
InfluxDB
PostgreSQL pgbench:
  1 - 250 - Read Write - Average Latency
  1 - 100 - Read Write - Average Latency
LibRaw
PostgreSQL pgbench
oneDNN
PyPerformance:
  2to3
  crypto_pyaes
AI Benchmark Alpha
Coremark
PyPerformance
eSpeak-NG Speech Engine
Node.js V8 Web Tooling Benchmark
SVT-AV1
Kvazaar
Darmstadt Automotive Parallel Heterogeneous Suite
Kvazaar
Chaos Group V-RAY
x265
GraphicsMagick
PyPerformance
Embree
Blender
oneDNN
Build2
NCNN
dav1d
Timed FFmpeg Compilation
LibreOffice
Kvazaar
Polyhedron Fortran Benchmarks
NCNN:
  CPU-v3-v3 - mobilenet-v3
  CPU - regnety_400m
PlaidML
SQLite Speedtest
YafaRay
LZ4 Compression
NCNN
Mlpack Benchmark
AI Benchmark Alpha
FLAC Audio Encoding
VP9 libvpx Encoding
PostgreSQL pgbench
BYTE Unix Benchmark
NCNN
x264
SVT-AV1
Polyhedron Fortran Benchmarks
OpenVKL
TensorFlow Lite
NAMD
TNN
NCNN
Kvazaar
Darmstadt Automotive Parallel Heterogeneous Suite
Ogg Audio Encoding
LZ4 Compression
InfluxDB
asmFish
LZ4 Compression
Embree
Stress-NG
AOM AV1:
  Speed 4 Two-Pass
  Speed 8 Realtime
WebP Image Encode
Kvazaar
PyPerformance
IndigoBench
NCNN
GROMACS
PostgreSQL pgbench
GraphicsMagick
ECP-CANDLE
Numpy Benchmark
Mobile Neural Network
Stress-NG
Caffe
FFTE
Timed Linux Kernel Compilation
OpenVINO:
  Person Detection 0106 FP32 - CPU
  Person Detection 0106 FP16 - CPU
Caffe
TensorFlow Lite
Polyhedron Fortran Benchmarks
Blender
LZ4 Compression
TensorFlow Lite:
  Inception V4
  Mobilenet Quant
Caffe
ASTC Encoder
OpenVINO
LuxCoreRender
NCNN
Polyhedron Fortran Benchmarks
NCNN
AI Benchmark Alpha
Blender
WebP Image Encode
rav1e
AOM AV1
Mlpack Benchmark
Timed MAFFT Alignment
Aircrack-ng
OpenVINO
Blender
Timed MPlayer Compilation
Unpacking Firefox
Stress-NG:
  Glibc Qsort Data Sorting
  Crypto
Embree
Polyhedron Fortran Benchmarks
PyPerformance
WebP Image Encode
rav1e
TensorFlow Lite
OpenVINO
TNN
Polyhedron Fortran Benchmarks
rav1e
NCNN
DeepSpeech
OpenSSL
OpenVINO
TensorFlow Lite
OCRMyPDF
RNNoise
OpenVINO
LAMMPS Molecular Dynamics Simulator
dav1d
PlaidML:
  No - Inference - Mobilenet - CPU
  No - Inference - IMDB LSTM - CPU
John The Ripper
Open Porous Media
GraphicsMagick
NCNN
Numenta Anomaly Benchmark
Embree
WebP Image Encode
oneDNN
Polyhedron Fortran Benchmarks
Numenta Anomaly Benchmark
WavPack Audio Encoding
OpenVINO
LAME MP3 Encoding
Stress-NG:
  Malloc
  Glibc C String Functions
Polyhedron Fortran Benchmarks
Blender
Stress-NG
Hugin
Open Porous Media
Monkey Audio Encoding
John The Ripper
Opus Codec Encoding
OpenVKL
Timed Clash Compilation
LZ4 Compression
ASTC Encoder
Polyhedron Fortran Benchmarks
Stockfish
Hierarchical INTegration
OpenVINO
PyPerformance:
  regex_compile
  nbody
IndigoBench
OpenVINO:
  Age Gender Recognition Retail 0013 FP16 - CPU
  Face Detection 0106 FP32 - CPU
PlaidML
NCNN
ASTC Encoder:
  Thorough
  Fast
LuxCoreRender
Intel Open Image Denoise
rav1e
GraphicsMagick:
  Enhanced
  Sharpen
simdjson:
  DistinctUserID
  PartialTweets
WebP Image Encode
Monte Carlo Simulations of Ionised Nebulae
Polyhedron Fortran Benchmarks:
  protein
  linpk
  doduc
  mdbx
  air
CLOMP
Sunflow Rendering System
Mobile Neural Network
Stress-NG
LAMMPS Molecular Dynamics Simulator