AMD EPYC 7F72 2P Linux 5.11 Perf Governor

2 x AMD EPYC 7F72 24-Core testing looking at CPU frequency invariance on Linux 5.11 with the relevant patch applied. CPU power consumption was monitored via the AMD_Energy interface at 1-second polling. Additional data with the CPUFreq performance governor is also included.
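
The AMD_Energy readings come from the kernel's amd_energy hwmon driver, which exposes per-core and per-socket energy counters (in microjoules) under sysfs. As a rough sketch of the 1-second polling used here, assuming the driver is loaded and that a counter labelled Esocket0 exists on your system, average socket power can be derived from successive counter reads:

    # Locate the amd_energy hwmon device (the hwmonN index varies between boots/systems).
    HWMON=$(grep -l '^amd_energy$' /sys/class/hwmon/hwmon*/name | xargs -r dirname)

    # Pick the counter labelled for socket 0; amd_energy reports energy in microjoules.
    COUNTER=$(grep -l '^Esocket0$' "$HWMON"/energy*_label | sed 's/_label$/_input/')

    # Poll once per second and print the average power over each interval in Watts.
    PREV=$(cat "$COUNTER")
    while sleep 1; do
        CUR=$(cat "$COUNTER")
        echo "$(( (CUR - PREV) / 1000000 )) W"
        PREV="$CUR"
    done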

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2101253-HA-AMDEPYC7F96
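
With the Phoronix Test Suite installed, the same comparison can be reproduced locally against this public result file:

    # Fetches the referenced result file, installs the needed test profiles, and runs the comparison.
    phoronix-test-suite benchmark 2101253-HA-AMDEPYC7F96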

Result Identifier       Date          Test Duration
Linux 5.11 Git          January 22    15 Hours
Linux 5.11 Patched      January 23    15 Hours, 14 Minutes
CPUFreq Performance     January 24    16 Hours, 38 Minutes


AMD EPYC 7F72 2P Linux 5.11 Perf Governor - System Details

    Processor: 2 x AMD EPYC 7F72 24-Core @ 3.20GHz (48 Cores / 96 Threads)
    Motherboard: Supermicro H11DSi-NT v2.00 (2.1 BIOS)
    Chipset: AMD Starship/Matisse
    Memory: 16 x 8192 MB DDR4-3200MT/s HMA81GR7CJR8N-XN
    Disk: 1000GB Western Digital WD_BLACK SN850 1TB
    Graphics: ASPEED
    Network: 2 x Intel 10G X550T
    Monitor: VE228
    OS: Ubuntu 20.10
    Kernel: 5.11.0-051100rc4daily20210122-generic (x86_64) 20210121 [Linux 5.11 Git]; 5.11.0-rc4-max-boost-inv-patch (x86_64) 20210121 [Linux 5.11 Patched, CPUFreq Performance]
    Desktop: GNOME Shell 3.38.1
    Display Server: X Server 1.20.9
    Display Driver: modesetting 1.20.9
    Compiler: GCC 10.2.0
    File-System: ext4
    Screen Resolution: 1920x1080

Kernel Details
    Transparent Huge Pages: madvise

Compiler Details
    --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Disk Details
    NONE / errors=remount-ro,relatime,rw / Block Size: 4096

Processor Details
    Linux 5.11 Git: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0x8301034
    Linux 5.11 Patched: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0x8301034
    CPUFreq Performance: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0x8301034

Java Details
    OpenJDK Runtime Environment (build 11.0.9.1+1-Ubuntu-0ubuntu1.20.10)

Python Details
    Python 3.8.6

Security Details
    itlb_multihit: Not affected
    l1tf: Not affected
    mds: Not affected
    meltdown: Not affected
    spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp
    spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization
    spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling
    srbds: Not affected
    tsx_async_abort: Not affected
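
The CPUFreq Performance run uses the same hardware but the acpi-cpufreq performance governor instead of schedutil (see Processor Details above). On a system using the acpi-cpufreq driver, that switch is a sysfs write; a minimal sketch, assuming root privileges and that the performance governor is available on the running kernel:

    # List the governors offered by the driver on this kernel.
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors

    # Switch every CPU to the performance governor (write schedutil back to revert).
    echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor > /dev/null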

[Result Overview chart - Phoronix Test Suite 10.2.2: relative performance of Linux 5.11 Git, Linux 5.11 Patched, and CPUFreq Performance across the tested workloads.]

[Per Watt Result Overview chart - Phoronix Test Suite 10.2.2: relative performance-per-Watt of the three runs across the workloads with CPU power monitoring.]

[Condensed results table: per-test values for Linux 5.11 Git, Linux 5.11 Patched, and CPUFreq Performance across all benchmarks in this comparison; the individual results are detailed below.]

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the processor across a variety of cryptocurrencies. The benchmark reports the hash rate achieved by the CPU for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.15.5 - Algorithm: LBC, LBRY Credits (kH/s, more is better):
    Linux 5.11 Git:      132477 (SE +/- 1036.73, N = 3; Min: 130710 / Avg: 132476.67 / Max: 134300)
    Linux 5.11 Patched:  139037 (SE +/- 1380.06, N = 3; Min: 136670 / Avg: 139036.67 / Max: 141450)
    CPUFreq Performance: 194087 (SE +/- 861.90, N = 3; Min: 192690 / Avg: 194086.67 / Max: 195660)
    1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
Performance per Watt (kH/s per Watt, more is better):
    Linux 5.11 Git: 1052.71 | Linux 5.11 Patched: 1091.62 | CPUFreq Performance: 1315.46
CPU Power Consumption Monitor (Watts, Min / Avg / Max):
    Linux 5.11 Git: 120.3 / 125.8 / 163.16 | Linux 5.11 Patched: 119.92 / 127.4 / 169.08 | CPUFreq Performance: 120.58 / 147.5 / 245.01

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.1 - Video Input: Chimera 1080p 10-bit (FPS, more is better):
    Linux 5.11 Git:      130.61 (SE +/- 0.23, N = 3; MIN: 90.23 / MAX: 199.74; Min: 130.35 / Avg: 130.61 / Max: 131.07)
    Linux 5.11 Patched:  133.37 (SE +/- 0.14, N = 3; MIN: 92.59 / MAX: 205.11; Min: 133.23 / Avg: 133.37 / Max: 133.65)
    CPUFreq Performance: 181.95 (SE +/- 0.23, N = 3; MIN: 125.32 / MAX: 275.36; Min: 181.5 / Avg: 181.95 / Max: 182.28)
    1. (CC) gcc options: -pthread
Performance per Watt (FPS per Watt, more is better):
    Linux 5.11 Git: 0.91 | Linux 5.11 Patched: 0.92 | CPUFreq Performance: 1.21
CPU Power Consumption Monitor (Watts, Min / Avg / Max):
    Linux 5.11 Git: 67.41 / 143.3 / 181.43 | Linux 5.11 Patched: 119.65 / 144.8 / 190.24 | CPUFreq Performance: 119.6 / 150.2 / 248.83

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray 1.8.5 - Demo: Magnetic Reconnection - Renderer: Path Tracer (FPS, more is better):
    Linux 5.11 Git:      250.00 (MIN: 90.91 / MAX: 500)
    Linux 5.11 Patched:  250.00 (MIN: 90.91 / MAX: 333.33)
    CPUFreq Performance: 333.33 (SE +/- 0.00, N = 11; MIN: 100 / MAX: 500; Min: 333.33 / Avg: 333.33 / Max: 333.33)
Performance per Watt (FPS per Watt, more is better):
    Linux 5.11 Git: 1.61 | Linux 5.11 Patched: 1.65 | CPUFreq Performance: 1.99
CPU Power Consumption Monitor (Watts, Min / Avg / Max):
    Linux 5.11 Git: 120.89 / 154.8 / 256.66 | Linux 5.11 Patched: 120.24 / 151.8 / 245.88 | CPUFreq Performance: 120.4 / 167.3 / 303.25

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance with x265. Learn more via the OpenBenchmarking.org test page.

x265 3.4 - Video Input: Bosphorus 1080p (Frames Per Second, more is better):
    Linux 5.11 Git:      47.66 (SE +/- 0.42, N = 7; Min: 45.27 / Avg: 47.66 / Max: 48.53)
    Linux 5.11 Patched:  49.45 (SE +/- 0.52, N = 4; Min: 48.01 / Avg: 49.45 / Max: 50.31)
    CPUFreq Performance: 62.29 (SE +/- 0.73, N = 15; Min: 58.9 / Avg: 62.29 / Max: 69.02)
    1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
Performance per Watt (Frames Per Second per Watt, more is better):
    Linux 5.11 Git: 0.30 | Linux 5.11 Patched: 0.29 | CPUFreq Performance: 0.31
CPU Power Consumption Monitor (Watts, Min / Avg / Max):
    Linux 5.11 Git: 121.34 / 158.7 / 187.18 | Linux 5.11 Patched: 121.07 / 169.9 / 202.05 | CPUFreq Performance: 120.63 / 200.8 / 253.76

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: Tradebeans (msec, fewer is better):
    Linux 5.11 Git:      5954 (SE +/- 50.83, N = 20; Min: 5457 / Avg: 5954.45 / Max: 6300)
    Linux 5.11 Patched:  5591 (SE +/- 66.39, N = 20; Min: 5113 / Avg: 5590.8 / Max: 6277)
    CPUFreq Performance: 4671 (SE +/- 52.34, N = 20; Min: 4214 / Avg: 4670.95 / Max: 5110)
CPU Power Consumption Monitor (Watts, Min / Avg / Max):
    Linux 5.11 Git: 120.58 / 140.2 / 197.99 | Linux 5.11 Patched: 119.93 / 140.5 / 200.89 | CPUFreq Performance: 119.81 / 149.7 / 216.02

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.

InfluxDB 1.8.2 - Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, more is better):
    Linux 5.11 Git:      807463.1 (SE +/- 2183.04, N = 3; Min: 804402.8 / Avg: 807463.13 / Max: 811690.1)
    Linux 5.11 Patched:  812193.6 (SE +/- 1525.09, N = 3; Min: 810252.3 / Avg: 812193.6 / Max: 815201.7)
    CPUFreq Performance: 956189.4 (SE +/- 2401.68, N = 3; Min: 952399.9 / Avg: 956189.37 / Max: 960640.3)
Performance per Watt (val/sec per Watt, more is better):
    Linux 5.11 Git: 5060.30 | Linux 5.11 Patched: 4857.31 | CPUFreq Performance: 4963.60
CPU Power Consumption Monitor (Watts, Min / Avg / Max):
    Linux 5.11 Git: 120.89 / 159.6 / 179.42 | Linux 5.11 Patched: 120.51 / 167.2 / 187.45 | CPUFreq Performance: 120.75 / 192.6 / 218.26

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.1 - Video Input: Summer Nature 4K (FPS, more is better):
    Linux 5.11 Git:      308.29 (SE +/- 1.86, N = 3; MIN: 163.13 / MAX: 334.13; Min: 304.76 / Avg: 308.29 / Max: 311.07)
    Linux 5.11 Patched:  317.45 (SE +/- 0.53, N = 3; MIN: 173.69 / MAX: 340.43; Min: 316.41 / Avg: 317.45 / Max: 318.17)
    CPUFreq Performance: 363.33 (SE +/- 3.54, N = 15; MIN: 186.32 / MAX: 403.05; Min: 333.89 / Avg: 363.33 / Max: 377.56)
    1. (CC) gcc options: -pthread
Performance per Watt (FPS per Watt, more is better):
    Linux 5.11 Git: 1.77 | Linux 5.11 Patched: 1.72 | CPUFreq Performance: 1.84
CPU Power Consumption Monitor (Watts, Min / Avg / Max):
    Linux 5.11 Git: 120.37 / 174.0 / 311.99 | Linux 5.11 Patched: 119.77 / 184.4 / 344.42 | CPUFreq Performance: 119.89 / 197.5 / 386.86

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 29Oct2020 - Model: Rhodopsin Protein (ns/day, more is better):
    Linux 5.11 Git:      21.13 (SE +/- 0.23, N = 15; Min: 20.01 / Avg: 21.13 / Max: 23.09)
    Linux 5.11 Patched:  23.79 (SE +/- 0.17, N = 12; Min: 23 / Avg: 23.79 / Max: 25.06)
    CPUFreq Performance: 24.64 (SE +/- 0.19, N = 15; Min: 22.96 / Avg: 24.64 / Max: 25.7)
    1. (CXX) g++ options: -O3 -pthread -lm
Performance per Watt (ns/day per Watt, more is better):
    Linux 5.11 Git: 0.12 | Linux 5.11 Patched: 0.13 | CPUFreq Performance: 0.14
CPU Power Consumption Monitor (Watts, Min / Avg / Max):
    Linux 5.11 Git: 120.65 / 178.3 / 411.37 | Linux 5.11 Patched: 120.38 / 176.6 / 409.04 | CPUFreq Performance: 120.49 / 176.0 / 401.41

Timed GDB GNU Debugger Compilation

This test times how long it takes to build the GNU Debugger (GDB) in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed GDB GNU Debugger Compilation 9.1 - Time To Compile (Seconds, fewer is better):
    Linux 5.11 Git:      97.64 (SE +/- 0.40, N = 3; Min: 97.05 / Avg: 97.64 / Max: 98.41)
    Linux 5.11 Patched:  92.92 (SE +/- 0.43, N = 3; Min: 92.05 / Avg: 92.92 / Max: 93.37)
    CPUFreq Performance: 85.28 (SE +/- 0.14, N = 3; Min: 85.13 / Avg: 85.28 / Max: 85.56)
CPU Power Consumption Monitor (Watts, Min / Avg / Max):
    Linux 5.11 Git: 121 / 157.1 / 492.25 | Linux 5.11 Patched: 120.58 / 159.2 / 492.25 | CPUFreq Performance: 120.67 / 167.0 / 492.09

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: Tradesoap (msec, fewer is better):
    Linux 5.11 Git:      5170 (SE +/- 44.82, N = 4; Min: 5044 / Avg: 5170 / Max: 5238)
    Linux 5.11 Patched:  5148 (SE +/- 61.21, N = 4; Min: 4978 / Avg: 5147.75 / Max: 5264)
    CPUFreq Performance: 4621 (SE +/- 42.72, N = 5; Min: 4484 / Avg: 4621.4 / Max: 4730)
CPU Power Consumption Monitor (Watts, Min / Avg / Max):
    Linux 5.11 Git: 120.76 / 163.6 / 307.79 | Linux 5.11 Patched: 120.1 / 165.5 / 309.01 | CPUFreq Performance: 120.15 / 171.7 / 314.65

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance with x265. Learn more via the OpenBenchmarking.org test page.

x265 3.4 - Video Input: Bosphorus 4K (Frames Per Second, more is better):
    Linux 5.11 Git:      18.63 (SE +/- 0.10, N = 3; Min: 18.46 / Avg: 18.63 / Max: 18.82)
    Linux 5.11 Patched:  19.74 (SE +/- 0.14, N = 3; Min: 19.52 / Avg: 19.74 / Max: 19.99)
    CPUFreq Performance: 20.75 (SE +/- 0.09, N = 3; Min: 20.59 / Avg: 20.75 / Max: 20.9)
    1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
Performance per Watt (Frames Per Second per Watt, more is better):
    Linux 5.11 Git: 0.08 | Linux 5.11 Patched: 0.08 | CPUFreq Performance: 0.08
CPU Power Consumption Monitor (Watts, Min / Avg / Max):
    Linux 5.11 Git: 121.03 / 221.5 / 281.65 | Linux 5.11 Patched: 120.55 / 238.0 / 293.24 | CPUFreq Performance: 120.3 / 250.4 / 301.78

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.

InfluxDB 1.8.2 - Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, more is better):
    Linux 5.11 Git:      1231991.2 (SE +/- 6204.63, N = 3; Min: 1221574.2 / Avg: 1231991.2 / Max: 1243039.8)
    Linux 5.11 Patched:  1256112.1 (SE +/- 2545.78, N = 3; Min: 1251965 / Avg: 1256112.13 / Max: 1260743.8)
    CPUFreq Performance: 1360163.0 (SE +/- 9433.94, N = 3; Min: 1341751.8 / Avg: 1360163.03 / Max: 1372941.8)
Performance per Watt (val/sec per Watt, more is better):
    Linux 5.11 Git: 6781.37 | Linux 5.11 Patched: 6672.14 | CPUFreq Performance: 6626.07
CPU Power Consumption Monitor (Watts, Min / Avg / Max):
    Linux 5.11 Git: 120.92 / 181.7 / 213.6 | Linux 5.11 Patched: 120.34 / 188.3 / 217.2 | CPUFreq Performance: 120.87 / 205.3 / 231.98

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: Inception V4 (Microseconds, fewer is better):
    Linux 5.11 Git:      894640 (SE +/- 2435.29, N = 3; Min: 889793 / Avg: 894640 / Max: 897478)
    Linux 5.11 Patched:  810750 (SE +/- 1163.43, N = 3; Min: 808813 / Avg: 810749.67 / Max: 812835)
    CPUFreq Performance: 818887 (SE +/- 4685.69, N = 3; Min: 809516 / Avg: 818887.33 / Max: 823600)
CPU Power Consumption Monitor (Watts, Min / Avg / Max):
    Linux 5.11 Git: 122.23 / 414.2 / 453.88 | Linux 5.11 Patched: 121.54 / 431.6 / 460.9 | CPUFreq Performance: 121.73 / 429.0 / 458.63

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 - Speed: 10 (Frames Per Second, more is better):
    Linux 5.11 Git:      2.902 (SE +/- 0.016, N = 3; Min: 2.87 / Avg: 2.9 / Max: 2.93)
    Linux 5.11 Patched:  3.054 (SE +/- 0.008, N = 3; Min: 3.04 / Avg: 3.05 / Max: 3.07)
    CPUFreq Performance: 3.177 (SE +/- 0.018, N = 3; Min: 3.15 / Avg: 3.18 / Max: 3.21)
Performance per Watt (Frames Per Second per Watt, more is better):
    Linux 5.11 Git: 0.02 | Linux 5.11 Patched: 0.02 | CPUFreq Performance: 0.02
CPU Power Consumption Monitor (Watts, Min / Avg / Max):
    Linux 5.11 Git: 120.83 / 137.3 / 145.92 | Linux 5.11 Patched: 120.22 / 136.5 / 145.83 | CPUFreq Performance: 119.87 / 138.6 / 146.51

CLOMP

CLOMP is the C version of the Livermore OpenMP benchmark developed to measure OpenMP overheads and other performance impacts due to threading in order to influence future system designs. This particular test profile configuration is currently set to look at the OpenMP static schedule speed-up across all available CPU cores using the recommended test configuration. Learn more via the OpenBenchmarking.org test page.

CLOMP 1.2 - Static OMP Speedup (Speedup, more is better):
    Linux 5.11 Git:      43.9 (SE +/- 0.60, N = 3; Min: 43.2 / Avg: 43.9 / Max: 45.1)
    Linux 5.11 Patched:  47.8 (SE +/- 0.47, N = 3; Min: 47.2 / Avg: 47.77 / Max: 48.7)
    CPUFreq Performance: 47.4 (SE +/- 0.55, N = 3; Min: 46.4 / Avg: 47.37 / Max: 48.3)
    1. (CC) gcc options: -fopenmp -O3 -lm
Performance per Watt (Speedup per Watt, more is better):
    Linux 5.11 Git: 0.29 | Linux 5.11 Patched: 0.31 | CPUFreq Performance: 0.31
CPU Power Consumption Monitor (Watts, Min / Avg / Max):
    Linux 5.11 Git: 120.36 / 153.4 / 298.43 | Linux 5.11 Patched: 119.88 / 152.6 / 321.53 | CPUFreq Performance: 119.91 / 151.3 / 312.47

IOR

IOR is a parallel I/O storage benchmark making use of MPI with a particular focus on HPC (High Performance Computing) systems. IOR is developed at the Lawrence Livermore National Laboratory (LLNL). Learn more via the OpenBenchmarking.org test page.

IOR 3.3.0 - Block Size: 2MB - Disk Target: Default Test Directory (MB/s, more is better):
    Linux 5.11 Git:      505.19 (SE +/- 1.77, N = 3; MIN: 457.62 / MAX: 951.11; Min: 501.85 / Avg: 505.19 / Max: 507.87)
    Linux 5.11 Patched:  475.25 (SE +/- 2.06, N = 3; MIN: 400.96 / MAX: 971.55; Min: 473.03 / Avg: 475.25 / Max: 479.37)
    CPUFreq Performance: 517.33 (SE +/- 5.48, N = 3; MIN: 463.44 / MAX: 1007.52; Min: 507.13 / Avg: 517.33 / Max: 525.92)
    1. (CC) gcc options: -O2 -lm -pthread -lmpi
Performance per Watt (MB/s per Watt, more is better):
    Linux 5.11 Git: 2.56 | Linux 5.11 Patched: 2.43 | CPUFreq Performance: 2.64
CPU Power Consumption Monitor (Watts, Min / Avg / Max):
    Linux 5.11 Git: 120.55 / 197.5 / 218.12 | Linux 5.11 Patched: 119.94 / 195.2 / 214.21 | CPUFreq Performance: 119.8 / 196.2 / 215.22

FFTW

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions. Learn more via the OpenBenchmarking.org test page.

FFTW 3.3.6 - Build: Float + SSE - Size: 2D FFT Size 4096 (Mflops, more is better):
    Linux 5.11 Git:      18468 (SE +/- 24.98, N = 3; Min: 18432 / Avg: 18468 / Max: 18516)
    Linux 5.11 Patched:  17015 (SE +/- 213.45, N = 3; Min: 16653 / Avg: 17015.33 / Max: 17392)
    CPUFreq Performance: 17335 (SE +/- 199.64, N = 9; Min: 16583 / Avg: 17334.89 / Max: 18665)
    1. (CC) gcc options: -pthread -O3 -fomit-frame-pointer -mtune=native -malign-double -fstrict-aliasing -fno-schedule-insns -ffast-math -lm
Performance per Watt (Mflops per Watt, more is better):
    Linux 5.11 Git: 138.49 | Linux 5.11 Patched: 127.83 | CPUFreq Performance: 131.21
CPU Power Consumption Monitor (Watts, Min / Avg / Max):
    Linux 5.11 Git: 68.24 / 133.4 / 146.37 | Linux 5.11 Patched: 120.07 / 133.1 / 142.83 | CPUFreq Performance: 65.17 / 132.1 / 150.06

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, fewer is better):
    Linux 5.11 Git:      0.881348 (SE +/- 0.005127, N = 5; MIN: 0.71; Min: 0.87 / Avg: 0.88 / Max: 0.9)
    Linux 5.11 Patched:  0.849248 (SE +/- 0.004000, N = 5; MIN: 0.73; Min: 0.84 / Avg: 0.85 / Max: 0.86)
    CPUFreq Performance: 0.813628 (SE +/- 0.005456, N = 5; MIN: 0.69; Min: 0.8 / Avg: 0.81 / Max: 0.83)
    1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
CPU Power Consumption Monitor (Watts, Min / Avg / Max):
    Linux 5.11 Git: 121.62 / 235.5 / 411.7 | Linux 5.11 Patched: 120.69 / 235.0 / 415.01 | CPUFreq Performance: 121.08 / 239.4 / 430.23

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2 - Device Training Score (Score, more is better):
    Linux 5.11 Git: 1059 | Linux 5.11 Patched: 1067 | CPUFreq Performance: 1133

Quantum ESPRESSO

Quantum ESPRESSO is an integrated suite of Open-Source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials. Learn more via the OpenBenchmarking.org test page.

Quantum ESPRESSO 6.7 - Input: AUSURF112 (Seconds, fewer is better):
    Linux 5.11 Git:      1217.49 (SE +/- 11.28, N = 3; Min: 1194.93 / Avg: 1217.49 / Max: 1228.78)
    Linux 5.11 Patched:  1171.03 (SE +/- 12.21, N = 4; Min: 1148.76 / Avg: 1171.03 / Max: 1205.51)
    CPUFreq Performance: 1249.06 (SE +/- 19.05, N = 9; Min: 1163.48 / Avg: 1249.06 / Max: 1316.52)
    1. (F9X) gfortran options: -lopenblas -lFoX_dom -lFoX_sax -lFoX_wxml -lFoX_common -lFoX_utils -lFoX_fsys -lfftw3 -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent -levent_pthreads -lutil -lm -lrt -lz
CPU Power Consumption Monitor (Watts, Min / Avg / Max):
    Linux 5.11 Git: 73.52 / 387.82 / 492.31 | Linux 5.11 Patched: 120.56 / 385.69 / 492.5 | CPUFreq Performance: 70.27 / 389.94 / 492.33

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.1 - Tuning: VMAF Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better):
    Linux 5.11 Git:      369.01 (SE +/- 1.11, N = 10; Min: 364.08 / Avg: 369.01 / Max: 375.94)
    Linux 5.11 Patched:  364.81 (SE +/- 0.91, N = 10; Min: 361.66 / Avg: 364.81 / Max: 369.91)
    CPUFreq Performance: 346.71 (SE +/- 2.58, N = 15; Min: 338.22 / Avg: 346.71 / Max: 380.95)
    1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
Performance per Watt (Frames Per Second per Watt, more is better):
    Linux 5.11 Git: 2.09 | Linux 5.11 Patched: 2.08 | CPUFreq Performance: 1.97
CPU Power Consumption Monitor (Watts, Min / Avg / Max):
    Linux 5.11 Git: 120.82 / 176.2 / 359.18 | Linux 5.11 Patched: 119.81 / 175.7 / 360.6 | CPUFreq Performance: 119.86 / 175.7 / 352.54

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, fewer is better):
    Linux 5.11 Git:      0.547674 (SE +/- 0.005010, N = 4; MIN: 0.43; Min: 0.54 / Avg: 0.55 / Max: 0.56)
    Linux 5.11 Patched:  0.521968 (SE +/- 0.004601, N = 4; MIN: 0.43; Min: 0.51 / Avg: 0.52 / Max: 0.54)
    CPUFreq Performance: 0.514604 (SE +/- 0.005906, N = 4; MIN: 0.43; Min: 0.51 / Avg: 0.51 / Max: 0.53)
    1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
CPU Power Consumption Monitor (Watts, Min / Avg / Max):
    Linux 5.11 Git: 122.2 / 271.2 / 438.68 | Linux 5.11 Patched: 120.6 / 274.8 / 455.7 | CPUFreq Performance: 120.73 / 278.8 / 480.49

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: SqueezeNet (Microseconds, fewer is better):
    Linux 5.11 Git:      65193.0 (SE +/- 690.93, N = 3; Min: 63856.6 / Avg: 65192.97 / Max: 66165.7)
    Linux 5.11 Patched:  62195.4 (SE +/- 412.91, N = 15; Min: 59468.9 / Avg: 62195.4 / Max: 65210.3)
    CPUFreq Performance: 61347.2 (SE +/- 715.70, N = 4; Min: 60282.3 / Avg: 61347.15 / Max: 63426.1)
CPU Power Consumption Monitor (Watts, Min / Avg / Max):
    Linux 5.11 Git: 123.35 / 441.7 / 476.85 | Linux 5.11 Patched: 121.35 / 449.5 / 481.23 | CPUFreq Performance: 121.89 / 451.3 / 481.68

TTSIOD 3D Renderer

A portable GPL 3D software renderer that supports OpenMP and Intel Threading Building Blocks with many different rendering modes. This version does not use OpenGL but is entirely CPU/software based. Learn more via the OpenBenchmarking.org test page.

TTSIOD 3D Renderer 2.3b - Phong Rendering With Soft-Shadow Mapping (FPS, more is better):
    Linux 5.11 Git:      627.21 (SE +/- 9.04, N = 15; Min: 572.05 / Avg: 627.21 / Max: 682.22)
    Linux 5.11 Patched:  655.23 (SE +/- 3.22, N = 3; Min: 651.25 / Avg: 655.23 / Max: 661.59)
    CPUFreq Performance: 665.45 (SE +/- 5.92, N = 15; Min: 632.51 / Avg: 665.45 / Max: 717.26)
    1. (CXX) g++ options: -O3 -fomit-frame-pointer -ffast-math -mtune=native -flto -msse -mrecip -mfpmath=sse -msse2 -mssse3 -lSDL -fopenmp -fwhole-program -lstdc++
Performance per Watt (FPS per Watt, more is better):
    Linux 5.11 Git: 2.59 | Linux 5.11 Patched: 2.70 | CPUFreq Performance: 2.74
CPU Power Consumption Monitor (Watts, Min / Avg / Max):
    Linux 5.11 Git: 122.28 / 242.6 / 267.93 | Linux 5.11 Patched: 120.7 / 242.4 / 266.29 | CPUFreq Performance: 121.04 / 243.0 / 271.32

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better):
    Linux 5.11 Git:      0.914198 (SE +/- 0.006064, N = 7; MIN: 0.78; Min: 0.9 / Avg: 0.91 / Max: 0.95)
    Linux 5.11 Patched:  0.863782 (SE +/- 0.001510, N = 7; MIN: 0.79; Min: 0.86 / Avg: 0.86 / Max: 0.87)
    CPUFreq Performance: 0.867545 (SE +/- 0.001697, N = 7; MIN: 0.78; Min: 0.86 / Avg: 0.87 / Max: 0.88)
    1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
CPU Power Consumption Monitor (Watts, Min / Avg / Max):
    Linux 5.11 Git: 122.12 / 326.0 / 492.02 | Linux 5.11 Patched: 121.56 / 331.1 / 492.01 | CPUFreq Performance: 121.82 / 332.1 / 492.35

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.6 - Model: yolov4 - Device: OpenMP CPU (Inferences Per Minute, more is better):
    Linux 5.11 Git:      175 (SE +/- 1.60, N = 12; Min: 166 / Avg: 174.58 / Max: 184.5)
    Linux 5.11 Patched:  181 (SE +/- 1.86, N = 3; Min: 177.5 / Avg: 181.17 / Max: 183.5)
    CPUFreq Performance: 185 (SE +/- 2.62, N = 3; Min: 181 / Avg: 185.17 / Max: 190)
    1. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt
Performance per Watt (Inferences Per Minute per Watt, more is better):
    Linux 5.11 Git: 0.67 | Linux 5.11 Patched: 0.69 | CPUFreq Performance: 0.70
CPU Power Consumption Monitor (Watts, Min / Avg / Max):
    Linux 5.11 Git: 122.51 / 262.95 / 279.9 | Linux 5.11 Patched: 121.06 / 263.88 / 279.86 | CPUFreq Performance: 121.25 / 263.54 / 278.66

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray 1.8.5 - Demo: San Miguel - Renderer: SciVis (FPS, more is better):
    Linux 5.11 Git:      52.63 (SE +/- 0.00, N = 3; MIN: 27.03 / MAX: 58.82; Min: 52.63 / Avg: 52.63 / Max: 52.63)
    Linux 5.11 Patched:  54.97 (SE +/- 0.58, N = 5; MIN: 31.25 / MAX: 58.82; Min: 52.63 / Avg: 54.97 / Max: 55.56)
    CPUFreq Performance: 55.56 (SE +/- 0.00, N = 3; MIN: 33.33 / MAX: 58.82; Min: 55.56 / Avg: 55.56 / Max: 55.56)
Performance per Watt (FPS per Watt, more is better):
    Linux 5.11 Git: 0.30 | Linux 5.11 Patched: 0.32 | CPUFreq Performance: 0.33
CPU Power Consumption Monitor (Watts, Min / Avg / Max):
    Linux 5.11 Git: 121.41 / 173.5 / 481.49 | Linux 5.11 Patched: 120.69 / 171.8 / 482.19 | CPUFreq Performance: 120.62 / 168.2 / 481.96

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 - Speed: 6 (Frames Per Second, more is better):
    Linux 5.11 Git:      1.370 (SE +/- 0.002, N = 3; Min: 1.37 / Avg: 1.37 / Max: 1.38)
    Linux 5.11 Patched:  1.408 (SE +/- 0.003, N = 3; Min: 1.4 / Avg: 1.41 / Max: 1.41)
    CPUFreq Performance: 1.446 (SE +/- 0.001, N = 3; Min: 1.45 / Avg: 1.45 / Max: 1.45)
Performance per Watt (Frames Per Second per Watt, more is better):
    Linux 5.11 Git: 0.01 | Linux 5.11 Patched: 0.01 | CPUFreq Performance: 0.01
CPU Power Consumption Monitor (Watts, Min / Avg / Max):
    Linux 5.11 Git: 120.68 / 142.0 / 151.56 | Linux 5.11 Patched: 74.91 / 142.0 / 151.61 | CPUFreq Performance: 74.72 / 143.0 / 152.17

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2 - Device AI Score (Score, more is better):
    Linux 5.11 Git: 2756 | Linux 5.11 Patched: 2787 | CPUFreq Performance: 2908
Performance per Watt (Score per Watt, more is better):
    Linux 5.11 Git: 12.95 | Linux 5.11 Patched: 12.98 | CPUFreq Performance: 11.59
CPU Power Consumption Monitor (Watts, Min / Avg / Max):
    Linux 5.11 Git: 120.43 / 212.9 / 368.5 | Linux 5.11 Patched: 119.91 / 214.7 / 378.22 | CPUFreq Performance: 119.81 / 250.8 / 388.86

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9 - Test: SET (Requests Per Second, more is better):
    Linux 5.11 Git:      1380890.22 (SE +/- 10410.66, N = 15; Min: 1271941 / Avg: 1380890.22 / Max: 1431871.62)
    Linux 5.11 Patched:  1427348.10 (SE +/- 13176.39, N = 15; Min: 1355752.75 / Avg: 1427348.1 / Max: 1519526)
    CPUFreq Performance: 1454741.42 (SE +/- 10017.79, N = 13; Min: 1381801.25 / Avg: 1454741.42 / Max: 1521606.75)
    1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
Performance per Watt (Requests Per Second per Watt, more is better):
    Linux 5.11 Git: 11015.92 | Linux 5.11 Patched: 11500.61 | CPUFreq Performance: 11683.85
CPU Power Consumption Monitor (Watts, Min / Avg / Max):
    Linux 5.11 Git: 120.52 / 125.4 / 135.35 | Linux 5.11 Patched: 119.72 / 124.1 / 134.55 | CPUFreq Performance: 119.66 / 124.5 / 135.4

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP Leukocyte (Seconds, fewer is better):
    Linux 5.11 Git:      53.86 (SE +/- 0.35, N = 3; Min: 53.31 / Avg: 53.86 / Max: 54.49)
    Linux 5.11 Patched:  52.68 (SE +/- 0.69, N = 3; Min: 51.76 / Avg: 52.68 / Max: 54.03)
    CPUFreq Performance: 51.29 (SE +/- 0.19, N = 3; Min: 50.95 / Avg: 51.29 / Max: 51.6)
    1. (CXX) g++ options: -O2 -lOpenCL
CPU Power Consumption Monitor (Watts, Min / Avg / Max):
    Linux 5.11 Git: 122.43 / 274.5 / 324.64 | Linux 5.11 Patched: 121.02 / 281.6 / 334.82 | CPUFreq Performance: 121.37 / 285.4 / 336.93

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.1 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better):
    Linux 5.11 Git:      381.08 (SE +/- 2.00, N = 10; Min: 364.74 / Avg: 381.08 / Max: 385.85)
    Linux 5.11 Patched:  371.48 (SE +/- 1.70, N = 9; Min: 363.2 / Avg: 371.48 / Max: 378.55)
    CPUFreq Performance: 363.18 (SE +/- 1.84, N = 9; Min: 350.88 / Avg: 363.18 / Max: 369.69)
    1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
Performance per Watt (Frames Per Second per Watt, more is better):
    Linux 5.11 Git: 2.16 | Linux 5.11 Patched: 2.11 | CPUFreq Performance: 2.07
CPU Power Consumption Monitor (Watts, Min / Avg / Max):
    Linux 5.11 Git: 120.72 / 176.7 / 367.85 | Linux 5.11 Patched: 119.66 / 176.2 / 363.09 | CPUFreq Performance: 119.74 / 175.3 / 361

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.6 - Model: super-resolution-10 - Device: OpenMP CPU (Inferences Per Minute, more is better):
    Linux 5.11 Git:      4393 (SE +/- 78.33, N = 9; Min: 4142 / Avg: 4393 / Max: 4922)
    Linux 5.11 Patched:  4210 (SE +/- 44.10, N = 3; Min: 4127 / Avg: 4210.33 / Max: 4277)
    CPUFreq Performance: 4190 (SE +/- 68.07, N = 12; Min: 3763 / Avg: 4189.54 / Max: 4500)
    1. (CXX) g++ options: -fopenmp -ffunction-sections -fdata-sections -O3 -ldl -lrt
Performance per Watt (Inferences Per Minute per Watt, more is better):
    Linux 5.11 Git: 15.72 | Linux 5.11 Patched: 15.26 | CPUFreq Performance: 15.49
CPU Power Consumption Monitor (Watts, Min / Avg / Max):
    Linux 5.11 Git: 61.08 / 279.5 / 301.53 | Linux 5.11 Patched: 121.22 / 275.8 / 301.54 | CPUFreq Performance: 121.3 / 270.4 / 293.8

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 - Speed: 5 (Frames Per Second, more is better):
    Linux 5.11 Git:      1.045 (SE +/- 0.001, N = 3; Min: 1.04 / Avg: 1.04 / Max: 1.05)
    Linux 5.11 Patched:  1.068 (SE +/- 0.001, N = 3; Min: 1.07 / Avg: 1.07 / Max: 1.07)
    CPUFreq Performance: 1.095 (SE +/- 0.003, N = 3; Min: 1.09 / Avg: 1.1 / Max: 1.1)
Performance per Watt (Frames Per Second per Watt, more is better):
    Linux 5.11 Git: 0.01 | Linux 5.11 Patched: 0.01 | CPUFreq Performance: 0.01
CPU Power Consumption Monitor (Watts, Min / Avg / Max):
    Linux 5.11 Git: 120.44 / 143.1 / 157.63 | Linux 5.11 Patched: 119.97 / 143.3 / 155.18 | CPUFreq Performance: 119.75 / 143.6 / 155.63

TNN

TNN is an open-source deep learning reasoning framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3 - Target: CPU - Model: MobileNet v2 (ms, fewer is better):
    Linux 5.11 Git:      303.45 (SE +/- 3.80, N = 3; MIN: 284.51 / MAX: 461.21; Min: 299.58 / Avg: 303.45 / Max: 311.05)
    Linux 5.11 Patched:  289.76 (SE +/- 2.83, N = 3; MIN: 283.65 / MAX: 458.79; Min: 285.31 / Avg: 289.76 / Max: 295)
    CPUFreq Performance: 297.13 (SE +/- 0.07, N = 3; MIN: 295.49 / MAX: 320.4; Min: 296.99 / Avg: 297.13 / Max: 297.23)
    1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl
CPU Power Consumption Monitor (Watts, Min / Avg / Max):
    Linux 5.11 Git: 121.15 / 144.5 / 155.08 | Linux 5.11 Patched: 73.73 / 143.3 / 175.31 | CPUFreq Performance: 120.61 / 155.3 / 164.96

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: LU.C (Total Mop/s, more is better):
    Linux 5.11 Git:      147443.86 (SE +/- 1780.52, N = 15; Min: 130785.22 / Avg: 147443.86 / Max: 153153.3)
    Linux 5.11 Patched:  154376.76 (SE +/- 509.59, N = 4; Min: 153161.9 / Avg: 154376.76 / Max: 155556.23)
    CPUFreq Performance: 153770.57 (SE +/- 121.33, N = 4; Min: 153434.98 / Avg: 153770.57 / Max: 153998.55)
    1. (F9X) gfortran options: -O3 -march=native -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -ldl -levent -levent_pthreads -lutil -lm -lrt -lz
    2. Open MPI 4.0.3
Performance per Watt (Total Mop/s per Watt, more is better):
    Linux 5.11 Git: 440.23 | Linux 5.11 Patched: 469.62 | CPUFreq Performance: 466.44
CPU Power Consumption Monitor (Watts, Min / Avg / Max):
    Linux 5.11 Git: 122.56 / 334.9 / 407.9 | Linux 5.11 Patched: 121.04 / 328.7 / 407.89 | CPUFreq Performance: 121.12 / 329.7 / 405.46

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9 - Test: SADD (Requests Per Second, More Is Better)
Linux 5.11 Git:       1539146.21  (SE +/- 16361.41, N = 3; runs Min: 1521665.25 / Avg: 1539146.21 / Max: 1571842.88)
Linux 5.11 Patched:   1611164.34  (SE +/- 15585.71, N = 4; runs Min: 1584073.5 / Avg: 1611164.34 / Max: 1649898.12)
CPUFreq Performance:  1610484.50  (SE +/- 17675.25, N = 15; runs Min: 1516070.25 / Avg: 1610484.5 / Max: 1736747.5)
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Redis 6.0.9 - Test: SADD (Requests Per Second Per Watt, More Is Better)
Linux 5.11 Git:       12420.91
Linux 5.11 Patched:   13036.25
CPUFreq Performance:  12966.74

Redis 6.0.9 - CPU Power Consumption Monitor (Watts, Fewer Is Better)
Linux 5.11 Git:       Min: 66.7 / Avg: 123.9 / Max: 133.71
Linux 5.11 Patched:   Min: 119.6 / Avg: 123.6 / Max: 132.75
CPUFreq Performance:  Min: 119.69 / Avg: 124.2 / Max: 136.01
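The SADD results come from Redis' bundled benchmarking client. A comparable standalone measurement can be taken against a locally running server with, for example, "redis-benchmark -t sadd -n 1000000 -P 16"; the request count and pipelining depth here are illustrative assumptions and not necessarily the parameters this test profile uses.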

John The Ripper

This is a benchmark of John The Ripper, which is a password cracker. Learn more via the OpenBenchmarking.org test page.

John The Ripper 1.9.0-jumbo-1 - Test: MD5 (Real C/S, More Is Better)
Linux 5.11 Git:       4550333  (SE +/- 49184.46, N = 3; runs Min: 4455000 / Avg: 4550333.33 / Max: 4619000)
Linux 5.11 Patched:   4612308  (SE +/- 54344.04, N = 13; runs Min: 4069000 / Avg: 4612307.69 / Max: 4771000)
CPUFreq Performance:  4762000  (SE +/- 7371.11, N = 3; runs Min: 4751000 / Avg: 4762000 / Max: 4776000)
1. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -pthread -lm -lz -ldl -lcrypt -lbz2

John The Ripper 1.9.0-jumbo-1 - Test: MD5 (Real C/S Per Watt, More Is Better)
Linux 5.11 Git:       10022.92
Linux 5.11 Patched:   10084.88
CPUFreq Performance:  10378.20

John The Ripper 1.9.0-jumbo-1 - CPU Power Consumption Monitor (Watts, Fewer Is Better)
Linux 5.11 Git:       Min: 123.99 / Avg: 454.0 / Max: 492.03
Linux 5.11 Patched:   Min: 121.95 / Avg: 457.3 / Max: 492.3
CPUFreq Performance:  Min: 122.39 / Avg: 458.8 / Max: 492.08
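John the Ripper's built-in self-benchmark can also be invoked directly for a rough sanity check, e.g. "john --test --format=md5crypt"; the exact hash format name used by this MD5 test profile is an assumption, and in the OpenMP-enabled jumbo build the thread count follows OMP_NUM_THREADS.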

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2 - Device Inference Score (Score, More Is Better)
Linux 5.11 Git:       1697
Linux 5.11 Patched:   1720
CPUFreq Performance:  1775
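Since AI Benchmark Alpha is a pip-installable Python library, the inference-only score above can also be gathered outside the test suite with a few lines. This is a sketch assuming the package's AIBenchmark class and its run_inference() method, along with a working (here CPU-only) TensorFlow installation:

    # Minimal sketch: run only the device inference portion of AI Benchmark Alpha.
    # Assumes the ai_benchmark package (pip install ai-benchmark) and TensorFlow.
    from ai_benchmark import AIBenchmark

    benchmark = AIBenchmark()
    results = benchmark.run_inference()  # prints per-model timings and an inference score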

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
Linux 5.11 Git:       2.40549  (SE +/- 0.03372, N = 3; runs Min: 2.35 / Avg: 2.41 / Max: 2.46; MIN: 1.92)
Linux 5.11 Patched:   2.33290  (SE +/- 0.01587, N = 3; runs Min: 2.31 / Avg: 2.33 / Max: 2.36; MIN: 2)
CPUFreq Performance:  2.30467  (SE +/- 0.02560, N = 15; runs Min: 2.11 / Avg: 2.3 / Max: 2.43; MIN: 1.88)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - CPU Power Consumption Monitor (Watts, Fewer Is Better)
Linux 5.11 Git:       Min: 123.12 / Avg: 282.9 / Max: 453.78
Linux 5.11 Patched:   Min: 120.8 / Avg: 285.6 / Max: 468.65
CPUFreq Performance:  Min: 121 / Avg: 284.2 / Max: 491.77

oneDNN 2.0 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
Linux 5.11 Git:       1.62088  (SE +/- 0.01518, N = 4; runs Min: 1.58 / Avg: 1.62 / Max: 1.66; MIN: 1.31)
Linux 5.11 Patched:   1.55447  (SE +/- 0.01340, N = 4; runs Min: 1.53 / Avg: 1.55 / Max: 1.59; MIN: 1.29)
CPUFreq Performance:  1.57963  (SE +/- 0.01359, N = 4; runs Min: 1.55 / Avg: 1.58 / Max: 1.61; MIN: 1.31)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

oneDNN 2.0 - CPU Power Consumption Monitor (Watts, Fewer Is Better)
Linux 5.11 Git:       Min: 122.4 / Avg: 278.1 / Max: 477.04
Linux 5.11 Patched:   Min: 120.54 / Avg: 279.6 / Max: 482.66
CPUFreq Performance:  Min: 120.95 / Avg: 280.4 / Max: 481.14

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
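TensorFlow Lite ships a benchmark_model utility that reports average inference time in microseconds, matching the unit used below; a representative standalone invocation might look like "benchmark_model --graph=inception_resnet_v2.tflite --num_threads=96", where the model filename and thread count are illustrative and not necessarily what this test profile passes.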

TensorFlow Lite 2020-08-23 - Model: Inception ResNet V2 (Microseconds, Fewer Is Better)
Linux 5.11 Git:       765726  (SE +/- 4257.59, N = 3; runs Min: 757283 / Avg: 765726.33 / Max: 770904)
Linux 5.11 Patched:   736285  (SE +/- 5824.36, N = 9; runs Min: 718390 / Avg: 736285 / Max: 779525)
CPUFreq Performance:  737993  (SE +/- 2132.19, N = 3; runs Min: 733830 / Avg: 737993 / Max: 740875)

TensorFlow Lite 2020-08-23 - CPU Power Consumption Monitor (Watts, Fewer Is Better)
Linux 5.11 Git:       Min: 123.26 / Avg: 436.2 / Max: 470.2
Linux 5.11 Patched:   Min: 121.57 / Avg: 442.7 / Max: 471.43
CPUFreq Performance:  Min: 121.82 / Avg: 439.4 / Max: 470.49

ASKAP

This is a benchmark of ATNF's ASKAP Benchmark suite, here using the CPU-based tConvolve MPI sub-test. Learn more via the OpenBenchmarking.org test page.

ASKAP 2018-11-10 - Test: tConvolve MPI - Degridding (Million Grid Points Per Second, More Is Better)
Linux 5.11 Git:       11870.3  (SE +/- 7.33, N = 3; runs Min: 11855.6 / Avg: 11870.27 / Max: 11877.6)
Linux 5.11 Patched:   11944.2  (SE +/- 6.47, N = 3; runs Min: 11933 / Avg: 11944.2 / Max: 11955.4)
CPUFreq Performance:  11492.9  (SE +/- 137.09, N = 3; runs Min: 11230.5 / Avg: 11492.93 / Max: 11692.9)
1. (CXX) g++ options: -lpthread

ASKAP 2018-11-10 - Test: tConvolve MPI - Degridding (Million Grid Points Per Second Per Watt, More Is Better)
Linux 5.11 Git:       36.17
Linux 5.11 Patched:   40.51
CPUFreq Performance:  33.50

ASKAP 2018-11-10 - CPU Power Consumption Monitor (Watts, Fewer Is Better)
Linux 5.11 Git:       Min: 122.71 / Avg: 328.2 / Max: 440.55
Linux 5.11 Patched:   Min: 120.77 / Avg: 294.9 / Max: 460.53
CPUFreq Performance:  Min: 120.5 / Avg: 343.1 / Max: 458.48

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.
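For a comparable standalone run, lc0 provides a benchmark mode; something like "lc0 benchmark --backend=eigen --weights=<network file>" would exercise the same CPU-based Eigen backend measured below, with the weights path standing in for whatever network file is supplied (hypothetical here).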

LeelaChessZero 0.26 - Backend: Eigen (Nodes Per Second, More Is Better)
Linux 5.11 Git:       4284  (SE +/- 49.20, N = 4)
Linux 5.11 Patched:   4433  (SE +/- 36.23, N = 3)
CPUFreq Performance:  4450  (SE +/- 26.71, N = 3)
1. (CXX) g++ options: -flto -pthread

LeelaChessZero 0.26 - Backend: Eigen (Nodes Per Second Per Watt, More Is Better)
Linux 5.11 Git:       10.08
Linux 5.11 Patched:   10.45
CPUFreq Performance:  10.39

LeelaChessZero 0.26 - CPU Power Consumption Monitor (Watts, Fewer Is Better)