AMD EPYC Genoa Memory Scaling

Benchmarks by Michael Larabel for a future article.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2212240-NE-AMDEPYCGE62
Result Identifiers

12c - Run Date: December 21 2022 - Test Duration: 11 Hours, 55 Minutes
10c - Run Date: December 21 2022 - Test Duration: 12 Hours, 59 Minutes
8c - Run Date: December 22 2022 - Test Duration: 13 Hours, 22 Minutes
6c - Run Date: December 23 2022 - Test Duration: 15 Hours, 14 Minutes



AMD EPYC Genoa Memory Scaling Benchmarks - System Configuration

Processor: 2 x AMD EPYC 9654 96-Core @ 2.40GHz (192 Cores / 384 Threads)
Motherboard: AMD Titanite_4G (RTI1002E BIOS)
Chipset: AMD Device 14a4
Memory: 1520GB / 1264GB / 1008GB / 768GB (varies by run)
Disk: 800GB INTEL SSDPF21Q800GB
Graphics: ASPEED
Monitor: VGA HDMI
Network: Broadcom NetXtreme BCM5720 PCIe
OS: Ubuntu 22.10
Kernel: 6.1.0-phx (x86_64)
Desktop: GNOME Shell 43.0
Display Server: X Server 1.21.1.4
Vulkan: 1.3.224
Compiler: GCC 12.2.0 + Clang 15.0.2-1
File-System: ext4
Screen Resolution: 1920x1080

System Logs
- Transparent Huge Pages: madvise
- Compiler configure options: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: acpi-cpufreq performance (Boost: Enabled)
- CPU Microcode: 0xa10110d
- OpenJDK Runtime Environment (build 11.0.17+8-post-Ubuntu-1ubuntu2)
- Python 3.10.7
- Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (12c / 10c / 8c / 6c; relative performance spanning roughly 100% to 278%) covering: Xcompact3d Incompact3d, High Performance Conjugate Gradient, OpenFOAM, RELION, WRF, Graph500, NAS Parallel Benchmarks, nekRS, TensorFlow, SVT-AV1, GPAW, Neural Magic DeepSparse, OpenVKL, Intel Open Image Denoise, 7-Zip Compression, oneDNN, Rodinia, Apache Cassandra, GROMACS, Timed GDB GNU Debugger Compilation, Timed Gem5 Compilation, Embree, Kvazaar, Xmrig, nginx, Timed Linux Kernel Compilation, Blender, OpenVINO, Timed LLVM Compilation, Timed Node.js Compilation, ONNX Runtime, NWChem, LuxCoreRender, CockroachDB, Timed Apache Compilation, Timed Godot Game Engine Compilation, OSPRay, ACES DGEMM, libavif avifenc, Timed MPlayer Compilation, ASTC Encoder, miniBUDE, Build2, Timed Mesa Compilation, NAMD, Stargate Digital Audio Workstation, Timed PHP Compilation, simdjson, OpenRadioss, Liquid-DSP, DaCapo Benchmark.

[Condensed result table omitted: the combined values for the 12c / 10c / 8c / 6c runs across all benchmark profiles in this comparison; individual results are broken out per test below.]

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient benchmark, a scientific benchmark from Sandia National Labs focused on supercomputer testing with modern real-world workloads compared to HPCC. Learn more via the OpenBenchmarking.org test page.

High Performance Conjugate Gradient 3.1 - GFLOP/s, More Is Better
  12c: 86.81 (SE +/- 1.12, N = 12)
  10c: 48.29 (SE +/- 3.31, N = 9)
  8c:  45.00 (SE +/- 0.49, N = 9)
  6c:  36.54 (SE +/- 0.99, N = 9)
1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi_cxx -lmpi
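HPCG's workload is a preconditioned conjugate gradient solve over a sparse system, which is why it is so sensitive to memory bandwidth. As an illustration only (not the benchmark's actual implementation), a minimal unpreconditioned CG iteration in pure Python:

```python
def conjugate_gradient(A, b, iters=50, tol=1e-10):
    """Solve A x = b for a symmetric positive-definite A (list of lists)."""
    n = len(b)
    x = [0.0] * n
    r = b[:]              # residual r = b - A@x, with x starting at zero
    p = r[:]              # initial search direction
    rs_old = sum(ri * ri for ri in r)
    for _ in range(iters):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new ** 0.5 < tol:   # converged
            break
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x

# Tiny 2x2 example system (hypothetical, for illustration)
x = conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```

In exact arithmetic CG converges in at most n iterations for an n x n system; HPCG runs this pattern at scale against a sparse multigrid-preconditioned operator, where the sparse matrix-vector products stream far more data than they compute, making memory channels the bottleneck.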

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting among the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: CG.C - Total Mop/s, More Is Better
  12c: 80225.01 (SE +/- 812.04, N = 15)
  10c: 81179.00 (SE +/- 899.80, N = 15)
  8c:  79784.15 (SE +/- 907.72, N = 15)
  6c:  71662.28 (SE +/- 554.69, N = 3)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.4

NAS Parallel Benchmarks 3.4 - Test / Class: IS.D - Total Mop/s, More Is Better
  12c: 8491.01 (SE +/- 84.88, N = 3)
  10c: 7124.92 (SE +/- 206.91, N = 12)
  8c:  6675.71 (SE +/- 134.50, N = 15)
  6c:  5690.01 (SE +/- 158.57, N = 12)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.4

NAS Parallel Benchmarks 3.4 - Test / Class: LU.C - Total Mop/s, More Is Better
  12c: 489164.65 (SE +/- 5489.08, N = 4)
  10c: 489995.20 (SE +/- 2546.14, N = 3)
  8c:  466769.54 (SE +/- 5095.33, N = 5)
  6c:  454360.62 (SE +/- 4680.97, N = 5)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.4

NAS Parallel Benchmarks 3.4 - Test / Class: MG.C - Total Mop/s, More Is Better
  12c: 209846.76 (SE +/- 2393.90, N = 3)
  10c: 177097.42 (SE +/- 2631.10, N = 15)
  8c:  153458.78 (SE +/- 2089.98, N = 15)
  6c:  117733.57 (SE +/- 1626.80, N = 15)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.4

NAS Parallel Benchmarks 3.4 - Test / Class: SP.C - Total Mop/s, More Is Better
  12c: 260471.50 (SE +/- 1589.72, N = 3)
  10c: 239496.01 (SE +/- 726.36, N = 3)
  8c:  208535.23 (SE +/- 1630.30, N = 3)
  6c:  167474.70 (SE +/- 1838.44, N = 3)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.4

miniBUDE

MiniBUDE is a mini application for the core computation of the Bristol University Docking Engine (BUDE). This test profile currently makes use of the OpenMP implementation of miniBUDE for CPU benchmarking. Learn more via the OpenBenchmarking.org test page.

miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM2 - GFInst/s, More Is Better
  12c: 8640.31 (SE +/- 27.15, N = 3)
  10c: 8666.98 (SE +/- 31.49, N = 3)
  8c:  8615.97 (SE +/- 63.13, N = 3)
  6c:  8651.92 (SE +/- 96.81, N = 3)
1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM2 - Billion Interactions/s, More Is Better
  12c: 345.61 (SE +/- 1.09, N = 3)
  10c: 346.68 (SE +/- 1.26, N = 3)
  8c:  344.64 (SE +/- 2.53, N = 3)
  6c:  346.08 (SE +/- 3.87, N = 3)
1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP CFD Solver - Seconds, Fewer Is Better
  12c: 6.050 (SE +/- 0.031, N = 3)
  10c: 6.074 (SE +/- 0.014, N = 3)
  8c:  5.970 (SE +/- 0.016, N = 3)
  6c:  6.152 (SE +/- 0.024, N = 3)
1. (CXX) g++ options: -O2 -lOpenCL

Rodinia 3.1 - Test: OpenMP Streamcluster - Seconds, Fewer Is Better
  12c: 6.001 (SE +/- 0.089, N = 15)
  10c: 6.285 (SE +/- 0.079, N = 15)
  8c:  6.018 (SE +/- 0.078, N = 15)
  6c:  6.409 (SE +/- 0.050, N = 3)
1. (CXX) g++ options: -O2 -lOpenCL

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14 - ATPase Simulation - 327,506 Atoms - days/ns, Fewer Is Better
  12c: 0.12783 (SE +/- 0.00009, N = 3)
  10c: 0.12759 (SE +/- 0.00007, N = 3)
  8c:  0.12768 (SE +/- 0.00046, N = 3)
  6c:  0.12820 (SE +/- 0.00009, N = 3)

nekRS

nekRS is an open-source Navier-Stokes solver based on the spectral element method. NekRS supports both CPU and GPU/accelerator execution, though this test profile is currently configured for CPU execution. NekRS is part of Nek5000 of the Mathematics and Computer Science (MCS) division at Argonne National Laboratory. This nekRS benchmark is primarily relevant to large core count HPC servers and otherwise may be very time consuming. Learn more via the OpenBenchmarking.org test page.

nekRS 22.0 - Input: TurboPipe Periodic - FLOP/s, More Is Better
  12c: 821462000000 (SE +/- 9551971733.63, N = 3)
  10c: 786258000000 (SE +/- 7825985326.68, N = 3)
  8c:  740247000000 (SE +/- 5892587066.25, N = 3)
  6c:  659554333333 (SE +/- 1934071468.29, N = 3)
1. (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -lmpi_cxx -lmpi

NWChem

NWChem is an open-source high performance computational chemistry package. Per NWChem's documentation, "NWChem aims to provide its users with computational chemistry tools that are scalable both in their ability to treat large scientific computational chemistry problems efficiently, and in their use of available parallel computing resources from high-performance parallel supercomputers to conventional workstation clusters." Learn more via the OpenBenchmarking.org test page.

NWChem 7.0.2 - Input: C240 Buckyball - Seconds, Fewer Is Better
  12c: 1537.1
  10c: 1531.0
  8c:  1519.6
  6c:  1517.9
1. (F9X) gfortran options: -lnwctask -lccsd -lmcscf -lselci -lmp2 -lmoints -lstepper -ldriver -loptim -lnwdft -lgradients -lcphf -lesp -lddscf -ldangchang -lguess -lhessian -lvib -lnwcutil -lrimp2 -lproperty -lsolvation -lnwints -lprepar -lnwmd -lnwpw -lofpw -lpaw -lpspw -lband -lnwpwlib -lcafe -lspace -lanalyze -lqhop -lpfft -ldplot -ldrdy -lvscf -lqmmm -lqmd -letrans -ltce -lbq -lmm -lcons -lperfm -ldntmc -lccca -ldimqm -lga -larmci -lpeigs -l64to32 -lopenblas -lpthread -lrt -llapack -lnwcblas -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz -lcomex -m64 -ffast-math -std=legacy -fdefault-integer-8 -finline-functions -O2

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.

Xcompact3d Incompact3d 2021-03-11 - Input: X3D-benchmarking input.i3d - Seconds, Fewer Is Better
  12c: 125.53 (SE +/- 0.14, N = 3)
  10c: 146.29 (SE +/- 0.11, N = 3)
  8c:  270.09 (SE +/- 2.69, N = 9)
  6c:  348.88 (SE +/- 4.79, N = 9)
1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenFOAM 10 - Input: drivaerFastback, Medium Mesh Size - Execution Time - Seconds, Fewer Is Better
  12c: 109.54
  10c: 117.94
  8c:  166.15
  6c:  227.90
1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfiniteVolume -lmeshTools -lparallel -llagrangian -lregionModels -lgenericPatchFields -lOpenFOAM -ldl -lm

OpenRadioss

OpenRadioss is an open-source AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13 - Model: Bumper Beam - Seconds, Fewer Is Better
  12c: 79.86 (SE +/- 0.79, N = 3)
  10c: 79.70 (SE +/- 0.75, N = 3)
  8c:  79.20 (SE +/- 0.70, N = 3)
  6c:  79.62 (SE +/- 0.71, N = 3)

OpenRadioss 2022.10.13 - Model: Bird Strike on Windshield - Seconds, Fewer Is Better
  12c: 216.88 (SE +/- 0.38, N = 3)
  10c: 218.22 (SE +/- 0.54, N = 3)
  8c:  219.45 (SE +/- 0.19, N = 3)
  6c:  219.10 (SE +/- 0.14, N = 3)

OpenRadioss 2022.10.13 - Model: INIVOL and Fluid Structure Interaction Drop Container - Seconds, Fewer Is Better
  12c: 81.57 (SE +/- 0.14, N = 3)
  10c: 81.15 (SE +/- 0.08, N = 3)
  8c:  81.09 (SE +/- 0.12, N = 3)
  6c:  80.81 (SE +/- 0.08, N = 3)

RELION

RELION - REgularised LIkelihood OptimisatioN - is a stand-alone computer program for Maximum A Posteriori refinement of (multiple) 3D reconstructions or 2D class averages in cryo-electron microscopy (cryo-EM). It is developed in the research group of Sjors Scheres at the MRC Laboratory of Molecular Biology. Learn more via the OpenBenchmarking.org test page.

RELION 3.1.1 - Test: Basic - Device: CPU - Seconds, Fewer Is Better
  12c: 128.10 (SE +/- 1.38, N = 5)
  10c: 151.40 (SE +/- 1.86, N = 4)
  8c:  221.34 (SE +/- 2.88, N = 3)
  6c:  258.50 (SE +/- 2.59, N = 6)
1. (CXX) g++ options: -fopenmp -std=c++0x -O3 -rdynamic -ldl -ltiff -lfftw3f -lfftw3 -lpng -lmpi_cxx -lmpi

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 2.0 - Throughput Test: Kostya - GB/s, More Is Better
  12c: 4.11 (SE +/- 0.01, N = 3)
  10c: 4.11 (SE +/- 0.01, N = 3)
  8c:  4.11 (SE +/- 0.00, N = 3)
  6c:  4.11 (SE +/- 0.01, N = 3)
1. (CXX) g++ options: -O3

simdjson 2.0 - Throughput Test: TopTweet - GB/s, More Is Better
  12c: 6.59 (SE +/- 0.01, N = 3)
  10c: 6.49 (SE +/- 0.07, N = 6)
  8c:  6.57 (SE +/- 0.01, N = 3)
  6c:  6.55 (SE +/- 0.00, N = 3)
1. (CXX) g++ options: -O3

simdjson 2.0 - Throughput Test: LargeRandom - GB/s, More Is Better
  12c: 1.25 (SE +/- 0.00, N = 3)
  10c: 1.25 (SE +/- 0.00, N = 3)
  8c:  1.25 (SE +/- 0.00, N = 3)
  6c:  1.24 (SE +/- 0.00, N = 3)
1. (CXX) g++ options: -O3

simdjson 2.0 - Throughput Test: PartialTweets - GB/s, More Is Better
  12c: 5.65 (SE +/- 0.01, N = 3)
  10c: 5.67 (SE +/- 0.02, N = 3)
  8c:  5.66 (SE +/- 0.01, N = 3)
  6c:  5.69 (SE +/- 0.01, N = 3)
1. (CXX) g++ options: -O3

simdjson 2.0 - Throughput Test: DistinctUserID - GB/s, More Is Better
  12c: 6.86 (SE +/- 0.02, N = 3)
  10c: 6.84 (SE +/- 0.02, N = 3)
  8c:  6.86 (SE +/- 0.01, N = 3)
  6c:  6.83 (SE +/- 0.02, N = 3)
1. (CXX) g++ options: -O3
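The GB/s figures here are bytes of JSON parsed per second. A rough sketch of how such a throughput number can be derived, using the Python stdlib json module rather than simdjson (so the absolute numbers are not comparable to the benchmark's, and the payload below is a made-up example document):

```python
import json
import time

# Hypothetical JSON payload, loosely in the spirit of simdjson's test files
doc = json.dumps([{"id": i, "user": "u%d" % i, "ok": True} for i in range(5000)])
payload = doc.encode("utf-8")

reps = 20
t0 = time.perf_counter()
for _ in range(reps):
    parsed = json.loads(payload)        # the work being timed
elapsed = time.perf_counter() - t0

# Throughput = total bytes parsed / wall time, reported in GB/s
gb_per_s = reps * len(payload) / elapsed / 1e9
print("%.3f GB/s over %d bytes" % (gb_per_s, reps * len(payload)))
```

simdjson reaches the multi-GB/s range seen above by validating and parsing with SIMD instructions; a stdlib parser run this way will land one to two orders of magnitude lower, but the accounting is the same.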

Xmrig

Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight, and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

Xmrig 6.18.1 - Variant: Monero - Hash Count: 1M - H/s, More Is Better
  12c: 104604.6 (SE +/- 328.13, N = 3)
  10c: 102599.6 (SE +/- 152.19, N = 3)
  8c:  101953.5 (SE +/- 383.60, N = 3)
  6c:  100446.2 (SE +/- 214.10, N = 3)
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Xmrig 6.18.1 - Variant: Wownero - Hash Count: 1M - H/s, More Is Better
  12c: 126465.6 (SE +/- 849.90, N = 3)
  10c: 127226.6 (SE +/- 70.55, N = 3)
  8c:  127081.2 (SE +/- 122.05, N = 3)
  6c:  126057.7 (SE +/- 349.73, N = 3)
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: H2 - msec, Fewer Is Better
  12c: 4802 (SE +/- 53.17, N = 20)
  10c: 4832 (SE +/- 39.79, N = 20)
  8c:  4731 (SE +/- 40.50, N = 20)
  6c:  4830 (SE +/- 36.16, N = 20)

DaCapo Benchmark 9.12-MR1 - Java Test: Jython - msec, Fewer Is Better
  12c: 3380 (SE +/- 29.26, N = 4)
  10c: 3329 (SE +/- 18.49, N = 4)
  8c:  3369 (SE +/- 35.24, N = 4)
  6c:  3345 (SE +/- 21.34, N = 4)

LuxCoreRender

LuxCoreRender is an open-source 3D physically based renderer formerly known as LuxRender. LuxCoreRender supports CPU-based rendering as well as GPU acceleration via OpenCL, NVIDIA CUDA, and NVIDIA OptiX interfaces. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.6 - Scene: Danish Mood - Acceleration: CPU - M samples/sec, More Is Better
  12c: 9.69 (SE +/- 0.09, N = 15; MIN: 4 / MAX: 12.39)
  10c: 9.62 (SE +/- 0.17, N = 12; MIN: 3.97 / MAX: 12.9)
  8c:  9.56 (SE +/- 0.11, N = 15; MIN: 3.94 / MAX: 12.41)
  6c:  9.49 (SE +/- 0.14, N = 12; MIN: 3.85 / MAX: 12.15)

LuxCoreRender 2.6 - Scene: Orange Juice - Acceleration: CPU - M samples/sec, More Is Better
  12c: 28.82 (SE +/- 0.63, N = 15; MIN: 23.01 / MAX: 45.86)
  10c: 28.19 (SE +/- 0.29, N = 3; MIN: 23.3 / MAX: 45.65)
  8c:  29.04 (SE +/- 0.72, N = 15; MIN: 22.62 / MAX: 45.48)
  6c:  28.90 (SE +/- 0.71, N = 15; MIN: 22.4 / MAX: 44.91)

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 3.13 - Binary: Pathtracer ISPC - Model: Crown - Frames Per Second, More Is Better
  12c: 182.45 (SE +/- 1.01, N = 3; MIN: 128.42 / MAX: 209.42)
  10c: 184.73 (SE +/- 0.47, N = 3; MIN: 137.82 / MAX: 210.21)
  8c:  185.49 (SE +/- 0.36, N = 3; MIN: 134.45 / MAX: 211.64)
  6c:  187.61 (SE +/- 0.33, N = 3; MIN: 146.69 / MAX: 208.25)

Embree 3.13 - Binary: Pathtracer ISPC - Model: Asian Dragon - Frames Per Second, More Is Better
  12c: 213.75 (SE +/- 0.13, N = 3; MIN: 209.16 / MAX: 225.43)
  10c: 214.31 (SE +/- 0.47, N = 3; MIN: 209.11 / MAX: 223.97)
  8c:  217.41 (SE +/- 0.39, N = 3; MIN: 211.73 / MAX: 230.1)
  6c:  221.29 (SE +/- 0.46, N = 3; MIN: 215.19 / MAX: 233.21)

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.1 - Video Input: Bosphorus 4K - Video Preset: Medium - Frames Per Second, More Is Better
  12c: 62.56 (SE +/- 0.68, N = 3)
  10c: 62.23 (SE +/- 0.11, N = 3)
  8c:  61.81 (SE +/- 0.73, N = 3)
  6c:  61.40 (SE +/- 0.53, N = 3)
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Kvazaar 2.1 - Video Input: Bosphorus 4K - Video Preset: Very Fast - Frames Per Second, More Is Better
  12c: 73.44 (SE +/- 0.58, N = 10)
  10c: 75.35 (SE +/- 0.74, N = 3)
  8c:  73.04 (SE +/- 1.04, N = 3)
  6c:  71.41 (SE +/- 0.77, N = 3)
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Kvazaar 2.1 - Video Input: Bosphorus 4K - Video Preset: Ultra Fast - Frames Per Second, More Is Better
  12c: 77.83 (SE +/- 0.66, N = 3)
  10c: 77.30 (SE +/- 1.02, N = 3)
  8c:  76.84 (SE +/- 0.71, N = 3)
  6c:  75.86 (SE +/- 0.63, N = 3)
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.4 - Encoder Mode: Preset 12 - Input: Bosphorus 4K - Frames Per Second, More Is Better
  12c: 251.77 (SE +/- 7.35, N = 15)
  10c: 241.37 (SE +/- 7.16, N = 15)
  8c:  227.90 (SE +/- 7.53, N = 15)
  6c:  221.16 (SE +/- 9.18, N = 13)

ACES DGEMM

This is a multi-threaded DGEMM benchmark. Learn more via the OpenBenchmarking.org test page.

ACES DGEMM 1.0 - Sustained Floating-Point Rate - GFLOP/s, More Is Better
  12c: 70.41 (SE +/- 0.33, N = 3)
  10c: 70.61 (SE +/- 0.02, N = 3)
  8c:  71.01 (SE +/- 0.05, N = 3)
  6c:  70.90 (SE +/- 0.13, N = 3)
1. (CC) gcc options: -O3 -march=native -fopenmp
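DGEMM is the dense matrix-multiply update C = alpha*A*B + beta*C, and GFLOP/s is conventionally counted as 2*M*N*K floating-point operations per multiply (one multiply plus one add per inner-product term). A naive Python sketch of that accounting, illustrative only — the benchmark itself is an optimized, multi-threaded native implementation:

```python
import time

def dgemm(A, B, C, alpha=1.0, beta=1.0):
    """Naive triple-loop BLAS-style update: C <- alpha*A@B + beta*C."""
    m, k, n = len(A), len(B), len(B[0])
    for i in range(m):
        for j in range(n):
            acc = 0.0
            for p in range(k):
                acc += A[i][p] * B[p][j]
            C[i][j] = alpha * acc + beta * C[i][j]
    return C

# Small square case (N chosen arbitrarily for the sketch)
N = 64
A = [[1.0] * N for _ in range(N)]
B = [[1.0] * N for _ in range(N)]
C = [[0.0] * N for _ in range(N)]

t0 = time.perf_counter()
dgemm(A, B, C)
# 2*N^3 flops for an N x N x N multiply, divided by wall time
gflops = 2.0 * N * N * N / (time.perf_counter() - t0) / 1e9
```

Because DGEMM reuses each matrix element N times, it is compute-bound rather than bandwidth-bound, which is consistent with the nearly flat ~70 GFLOP/s results across the 6 to 12 channel configurations above.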

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray-tracing and part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

Intel Open Image Denoise 1.4.0 - Run: RT.hdr_alb_nrm.3840x2160 - Images / Sec, More Is Better
  12c: 3.52 (SE +/- 0.02, N = 3)
  10c: 3.44 (SE +/- 0.02, N = 3)
  8c:  3.47 (SE +/- 0.02, N = 3)
  6c:  3.29 (SE +/- 0.02, N = 3)

Intel Open Image Denoise 1.4.0 - Run: RTLightmap.hdr.4096x4096 - Images / Sec, More Is Better
  12c: 1.65 (SE +/- 0.00, N = 3)
  10c: 1.63 (SE +/- 0.00, N = 3)
  8c:  1.64 (SE +/- 0.01, N = 3)
  6c:  1.54 (SE +/- 0.01, N = 3)

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library that offers high-performance volume computation kernels and part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 1.3.1 - Benchmark: vklBenchmark ISPC - Items / Sec, More Is Better
  12c: 1325 (SE +/- 6.93, N = 3; MIN: 329 / MAX: 4553)
  10c: 1317 (SE +/- 11.03, N = 9; MIN: 327 / MAX: 5660)
  8c:  1325 (SE +/- 8.82, N = 3; MIN: 330 / MAX: 5664)
  6c:  1212 (SE +/- 15.59, N = 3; MIN: 328 / MAX: 4115)

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10, Benchmark: particle_volume/ao/real_time
Items Per Second, More Is Better:
  12c: 43.71 (SE +/- 0.04, N = 3)
  10c: 43.03 (SE +/- 0.04, N = 3)
  8c:  43.97 (SE +/- 0.01, N = 3)
  6c:  43.36 (SE +/- 0.04, N = 3)

OSPRay 2.10, Benchmark: particle_volume/scivis/real_time
Items Per Second, More Is Better:
  12c: 42.80 (SE +/- 0.05, N = 3)
  10c: 43.00 (SE +/- 0.01, N = 3)
  8c:  43.84 (SE +/- 0.03, N = 3)
  6c:  43.24 (SE +/- 0.06, N = 3)

OSPRay 2.10, Benchmark: particle_volume/pathtracer/real_time
Items Per Second, More Is Better:
  12c: 229.27 (SE +/- 1.54, N = 3)
  10c: 230.28 (SE +/- 1.94, N = 3)
  8c:  228.58 (SE +/- 1.74, N = 3)
  6c:  230.44 (SE +/- 0.59, N = 3)

OSPRay 2.10, Benchmark: gravity_spheres_volume/dim_512/ao/real_time
Items Per Second, More Is Better:
  12c: 43.98 (SE +/- 0.13, N = 3)
  10c: 44.00 (SE +/- 0.04, N = 3)
  8c:  44.23 (SE +/- 0.10, N = 3)
  6c:  44.27 (SE +/- 0.07, N = 3)

OSPRay 2.10, Benchmark: gravity_spheres_volume/dim_512/scivis/real_time
Items Per Second, More Is Better:
  12c: 43.13 (SE +/- 0.15, N = 3)
  10c: 43.33 (SE +/- 0.12, N = 3)
  8c:  43.43 (SE +/- 0.13, N = 3)
  6c:  43.29 (SE +/- 0.15, N = 3)

OSPRay 2.10, Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time
Items Per Second, More Is Better:
  12c: 53.77 (SE +/- 0.50, N = 3)
  10c: 54.41 (SE +/- 0.12, N = 3)
  8c:  54.51 (SE +/- 0.08, N = 3)
  6c:  54.61 (SE +/- 0.04, N = 3)

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 22.01, Test: Compression Rating
MIPS, More Is Better:
  12c: 923176 (SE +/- 6636.11, N = 3)
  10c: 893433 (SE +/- 2580.44, N = 3)
  8c:  879430 (SE +/- 3797.71, N = 3)
  6c:  824926 (SE +/- 7292.38, N = 3)

7-Zip Compression 22.01, Test: Decompression Rating
MIPS, More Is Better:
  12c: 1181435 (SE +/- 3305.67, N = 3)
  10c: 1171627 (SE +/- 5138.86, N = 3)
  8c:  1159901 (SE +/- 9235.88, N = 3)
  6c:  1177484 (SE +/- 2020.82, N = 3)
1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package offering "a unique and carefully curated experience" that scales from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 22.11.5, Sample Rate: 96000 - Buffer Size: 1024
Render Ratio, More Is Better:
  12c: 4.345890 (SE +/- 0.023689, N = 3)
  10c: 4.354556 (SE +/- 0.010431, N = 3)
  8c:  4.351402 (SE +/- 0.008144, N = 3)
  6c:  4.364767 (SE +/- 0.002133, N = 3)

Stargate Digital Audio Workstation 22.11.5, Sample Rate: 192000 - Buffer Size: 1024
Render Ratio, More Is Better:
  12c: 2.829061 (SE +/- 0.001919, N = 3)
  10c: 2.806190 (SE +/- 0.017291, N = 3)
  8c:  2.811555 (SE +/- 0.019484, N = 3)
  6c:  2.824814 (SE +/- 0.004057, N = 3)
1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11, Encoder Speed: 0
Seconds, Fewer Is Better:
  12c: 63.25 (SE +/- 0.18, N = 3)
  10c: 63.25 (SE +/- 0.27, N = 3)
  8c:  62.96 (SE +/- 0.03, N = 3)
  6c:  63.80 (SE +/- 0.47, N = 3)

libavif avifenc 0.11, Encoder Speed: 2
Seconds, Fewer Is Better:
  12c: 34.85 (SE +/- 0.14, N = 3)
  10c: 34.91 (SE +/- 0.08, N = 3)
  8c:  34.69 (SE +/- 0.10, N = 3)
  6c:  34.87 (SE +/- 0.14, N = 3)

libavif avifenc 0.11, Encoder Speed: 6
Seconds, Fewer Is Better:
  12c: 2.459 (SE +/- 0.016, N = 3)
  10c: 2.411 (SE +/- 0.003, N = 3)
  8c:  2.420 (SE +/- 0.017, N = 3)
  6c:  2.435 (SE +/- 0.004, N = 3)

libavif avifenc 0.11, Encoder Speed: 6, Lossless
Seconds, Fewer Is Better:
  12c: 5.287 (SE +/- 0.076, N = 3)
  10c: 5.286 (SE +/- 0.044, N = 3)
  8c:  5.270 (SE +/- 0.034, N = 3)
  6c:  5.330 (SE +/- 0.055, N = 3)

libavif avifenc 0.11, Encoder Speed: 10, Lossless
Seconds, Fewer Is Better:
  12c: 4.241 (SE +/- 0.024, N = 3)
  10c: 4.337 (SE +/- 0.055, N = 3)
  8c:  4.252 (SE +/- 0.009, N = 3)
  6c:  4.250 (SE +/- 0.043, N = 3)
1. (CXX) g++ options: -O3 -fPIC -lm

Timed Apache Compilation

This test times how long it takes to build the Apache HTTPD web server. Learn more via the OpenBenchmarking.org test page.

Timed Apache Compilation 2.4.41, Time To Compile
Seconds, Fewer Is Better:
  12c: 20.46 (SE +/- 0.01, N = 3)
  10c: 20.48 (SE +/- 0.01, N = 3)
  8c:  20.59 (SE +/- 0.01, N = 3)
  6c:  20.72 (SE +/- 0.01, N = 3)

Timed GDB GNU Debugger Compilation

This test times how long it takes to build the GNU Debugger (GDB) in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed GDB GNU Debugger Compilation 10.2, Time To Compile
Seconds, Fewer Is Better:
  12c: 41.71 (SE +/- 0.17, N = 3)
  10c: 42.41 (SE +/- 0.08, N = 3)
  8c:  42.41 (SE +/- 0.03, N = 3)
  6c:  43.25 (SE +/- 0.12, N = 3)

Timed Gem5 Compilation

This test times how long it takes to compile Gem5, a simulator for computer system architecture research that is widely used across industry and academia. Learn more via the OpenBenchmarking.org test page.

Timed Gem5 Compilation 21.2, Time To Compile
Seconds, Fewer Is Better:
  12c: 139.24 (SE +/- 0.16, N = 3)
  10c: 134.37 (SE +/- 0.36, N = 3)
  8c:  136.79 (SE +/- 0.77, N = 3)
  6c:  134.70 (SE +/- 0.57, N = 3)

Timed Godot Game Engine Compilation

This test times how long it takes to compile the Godot Game Engine. Godot is a popular, open-source, cross-platform 2D/3D game engine and is built using the SCons build system and targeting the X11 platform. Learn more via the OpenBenchmarking.org test page.

Timed Godot Game Engine Compilation 3.2.3, Time To Compile
Seconds, Fewer Is Better:
  12c: 34.03 (SE +/- 0.40, N = 4)
  10c: 33.62 (SE +/- 0.04, N = 3)
  8c:  33.91 (SE +/- 0.19, N = 3)
  6c:  33.67 (SE +/- 0.11, N = 3)

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested or alternatively an allmodconfig for building all possible kernel modules for the build. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 6.1, Build: defconfig
Seconds, Fewer Is Better:
  12c: 25.50 (SE +/- 0.19, N = 11)
  10c: 25.41 (SE +/- 0.21, N = 14)
  8c:  25.53 (SE +/- 0.21, N = 9)
  6c:  24.75 (SE +/- 0.22, N = 7)

Timed Linux Kernel Compilation 6.1, Build: allmodconfig
Seconds, Fewer Is Better:
  12c: 147.15 (SE +/- 0.90, N = 3)
  10c: 145.41 (SE +/- 0.72, N = 3)
  8c:  147.38 (SE +/- 1.03, N = 3)
  6c:  145.77 (SE +/- 0.14, N = 3)

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 13.0, Build System: Ninja
Seconds, Fewer Is Better:
  12c: 75.66 (SE +/- 0.23, N = 3)
  10c: 75.44 (SE +/- 0.21, N = 3)
  8c:  75.73 (SE +/- 0.09, N = 3)
  6c:  76.75 (SE +/- 0.06, N = 3)

Timed Mesa Compilation

This test profile times how long it takes to compile Mesa with Meson/Ninja. To minimize build dependencies and avoid versioning conflicts, this is just the core Mesa build without LLVM or the extra Gallium3D/Mesa drivers enabled. Learn more via the OpenBenchmarking.org test page.

Timed Mesa Compilation 21.0, Time To Compile
Seconds, Fewer Is Better:
  12c: 20.12 (SE +/- 0.07, N = 3)
  10c: 20.21 (SE +/- 0.06, N = 3)
  8c:  20.11 (SE +/- 0.07, N = 3)
  6c:  20.16 (SE +/- 0.05, N = 3)

Timed MPlayer Compilation

This test times how long it takes to build the MPlayer open-source media player program. Learn more via the OpenBenchmarking.org test page.

Timed MPlayer Compilation 1.5, Time To Compile
Seconds, Fewer Is Better:
  12c: 7.777 (SE +/- 0.033, N = 3)
  10c: 7.755 (SE +/- 0.034, N = 3)
  8c:  7.808 (SE +/- 0.023, N = 3)
  6c:  7.773 (SE +/- 0.010, N = 3)

Timed Node.js Compilation

This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built on the Chrome V8 JavaScript engine, while Node.js itself is written in C/C++. Learn more via the OpenBenchmarking.org test page.

Timed Node.js Compilation 18.8, Time To Compile
Seconds, Fewer Is Better:
  12c: 101.47 (SE +/- 0.26, N = 3)
  10c: 101.94 (SE +/- 0.29, N = 3)
  8c:  101.15 (SE +/- 0.22, N = 3)
  6c:  102.78 (SE +/- 0.06, N = 3)

Timed PHP Compilation

This test times how long it takes to build PHP. Learn more via the OpenBenchmarking.org test page.

Timed PHP Compilation 8.1.9, Time To Compile
Seconds, Fewer Is Better:
  12c: 44.52 (SE +/- 0.06, N = 3)
  10c: 44.61 (SE +/- 0.08, N = 3)
  8c:  44.58 (SE +/- 0.04, N = 3)
  6c:  44.70 (SE +/- 0.07, N = 3)

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code that offers Cargo-like functionality. Learn more via the OpenBenchmarking.org test page.

Build2 0.13, Time To Compile
Seconds, Fewer Is Better:
  12c: 49.92 (SE +/- 0.04, N = 3)
  10c: 49.80 (SE +/- 0.02, N = 3)
  8c:  49.87 (SE +/- 0.20, N = 3)
  6c:  50.08 (SE +/- 0.28, N = 3)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0, Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU
ms, Fewer Is Better:
  12c: 3.95471 (SE +/- 0.02537, N = 3; MIN: 3.05)
  10c: 4.00938 (SE +/- 0.05885, N = 12; MIN: 2.96)
  8c:  3.99305 (SE +/- 0.08932, N = 12; MIN: 2.67)
  6c:  3.96488 (SE +/- 0.01788, N = 3; MIN: 2.99)

oneDNN 3.0, Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU
ms, Fewer Is Better:
  12c: 1968.70 (SE +/- 31.84, N = 15; MIN: 1632.62)
  10c: 2030.72 (SE +/- 14.89, N = 3; MIN: 1981.15)
  8c:  1982.15 (SE +/- 28.30, N = 3; MIN: 1911.33)
  6c:  2072.57 (SE +/- 16.27, N = 10; MIN: 1942.14)

oneDNN 3.0, Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU
ms, Fewer Is Better:
  12c: 2344.29 (SE +/- 21.01, N = 3; MIN: 2288.85)
  10c: 2438.00 (SE +/- 30.76, N = 3; MIN: 2353.97)
  8c:  2375.45 (SE +/- 21.41, N = 3; MIN: 2319.45)
  6c:  2479.62 (SE +/- 25.74, N = 15; MIN: 2293.49)

oneDNN 3.0, Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU
ms, Fewer Is Better:
  12c: 2275.86 (SE +/- 24.22, N = 3; MIN: 2213.34)
  10c: 2325.71 (SE +/- 25.04, N = 15; MIN: 2171.69)
  8c:  2371.78 (SE +/- 25.14, N = 15; MIN: 2234.23)
  6c:  2471.57 (SE +/- 31.16, N = 3; MIN: 2410.73)

oneDNN 3.0, Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU
ms, Fewer Is Better:
  12c: 0.446930 (SE +/- 0.005042, N = 3; MIN: 0.38)
  10c: 0.463454 (SE +/- 0.005241, N = 4; MIN: 0.38)
  8c:  0.465796 (SE +/- 0.006374, N = 3; MIN: 0.38)
  6c:  0.465059 (SE +/- 0.005815, N = 3; MIN: 0.38)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31, Threads: 256 - Buffer Length: 256 - Filter Length: 57
samples/s, More Is Better:
  12c: 10347000000 (SE +/- 4618802.15, N = 3)
  10c: 10340000000 (SE +/- 5196152.42, N = 3)
  8c:  10337666667 (SE +/- 4333333.33, N = 3)
  6c:  10340333333 (SE +/- 3844187.53, N = 3)

Liquid-DSP 2021.01.31, Threads: 384 - Buffer Length: 256 - Filter Length: 57
samples/s, More Is Better:
  12c: 10347000000 (SE +/- 4582575.69, N = 3)
  10c: 10352666667 (SE +/- 4409585.52, N = 3)
  8c:  10349666667 (SE +/- 5783117.19, N = 3)
  6c:  10349000000 (SE +/- 3214550.25, N = 3)
1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

CockroachDB

CockroachDB is a cloud-native, distributed SQL database for data-intensive applications. This test profile uses a server-less CockroachDB configuration to test various CockroachDB workloads on the local host with a single node. Learn more via the OpenBenchmarking.org test page.

CockroachDB 22.2, Workload: MoVR - Concurrency: 512
ops/s, More Is Better:
  12c: 948.5 (SE +/- 3.38, N = 3)
  10c: 949.6 (SE +/- 3.66, N = 3)
  8c:  960.3 (SE +/- 9.03, N = 3)
  6c:  954.7 (SE +/- 4.87, N = 3)

CockroachDB 22.2, Workload: MoVR - Concurrency: 1024
ops/s, More Is Better:
  12c: 953.8 (SE +/- 1.42, N = 3)
  10c: 949.5 (SE +/- 0.58, N = 3)
  8c:  946.9 (SE +/- 3.18, N = 3)
  6c:  952.7 (SE +/- 1.56, N = 3)

CockroachDB 22.2, Workload: KV, 10% Reads - Concurrency: 512
ops/s, More Is Better:
  12c: 35970.0 (SE +/- 343.66, N = 15)
  10c: 35993.1 (SE +/- 270.36, N = 15)
  8c:  34832.9 (SE +/- 351.71, N = 6)
  6c:  35742.3 (SE +/- 438.30, N = 15)

CockroachDB 22.2, Workload: KV, 50% Reads - Concurrency: 512
ops/s, More Is Better:
  12c: 47621.9 (SE +/- 464.03, N = 15)
  10c: 49102.7 (SE +/- 514.54, N = 3)
  8c:  47596.6 (SE +/- 454.84, N = 15)
  6c:  47428.0 (SE +/- 32.88, N = 3)

CockroachDB 22.2, Workload: KV, 60% Reads - Concurrency: 512
ops/s, More Is Better:
  12c: 52330.1 (SE +/- 268.61, N = 3)
  10c: 51748.8 (SE +/- 620.92, N = 15)
  8c:  52515.2 (SE +/- 411.73, N = 13)
  6c:  51275.1 (SE +/- 555.56, N = 15)

CockroachDB 22.2, Workload: KV, 95% Reads - Concurrency: 512
ops/s, More Is Better:
  12c: 64467.6 (SE +/- 702.29, N = 3)
  10c: 60769.7 (SE +/- 1044.13, N = 15)
  8c:  64111.9 (SE +/- 890.57, N = 3)
  6c:  62666.5 (SE +/- 813.26, N = 15)

CockroachDB 22.2, Workload: KV, 10% Reads - Concurrency: 1024
ops/s, More Is Better:
  12c: 36846.9 (SE +/- 155.07, N = 3)
  10c: 35776.8 (SE +/- 346.25, N = 3)
  8c:  36685.7 (SE +/- 322.68, N = 3)
  6c:  36329.6 (SE +/- 206.35, N = 3)

CockroachDB 22.2, Workload: KV, 50% Reads - Concurrency: 1024
ops/s, More Is Better:
  12c: 47465.5 (SE +/- 366.75, N = 15)
  10c: 48449.0 (SE +/- 380.16, N = 3)
  8c:  47498.1 (SE +/- 468.66, N = 15)
  6c:  47593.9 (SE +/- 391.13, N = 9)

CockroachDB 22.2, Workload: KV, 60% Reads - Concurrency: 1024
ops/s, More Is Better:
  12c: 52573.3 (SE +/- 239.52, N = 3)
  10c: 51959.5 (SE +/- 400.61, N = 10)
  8c:  52559.0 (SE +/- 447.89, N = 3)
  6c:  52626.4 (SE +/- 448.33, N = 3)

CockroachDB 22.2, Workload: KV, 95% Reads - Concurrency: 1024
ops/s, More Is Better:
  12c: 64661.8 (SE +/- 575.30, N = 3)
  10c: 62029.8 (SE +/- 1142.40, N = 15)
  8c:  58195.5 (SE +/- 1317.65, N = 15)
  6c:  60137.3 (SE +/- 1310.27, N = 15)

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.
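For reading the MT/s numbers below: astcenc reports throughput in megatexels per second, so an image's encode time follows from its texel count divided by the rate. A hedged arithmetic sketch, reusing the Thorough-preset result from the 12c run below:

```python
# Assumption: MT/s means millions of texels processed per second.
width, height = 3840, 2160          # a 4K frame, chosen for illustration
texels = width * height             # 8,294,400 texels
rate_mts = 106.57                   # MT/s, Preset: Thorough, 12c result

seconds = texels / (rate_mts * 1e6)
print(f"{texels / 1e6:.2f} Mtexels -> {seconds * 1e3:.1f} ms per frame")
```

At roughly 107 MT/s the Thorough preset works through a 4K frame in under a tenth of a second, which is why the four memory configurations are hard to separate on this compute-bound test.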

ASTC Encoder 4.0, Preset: Thorough
MT/s, More Is Better:
  12c: 106.57 (SE +/- 0.05, N = 3)
  10c: 106.85 (SE +/- 0.05, N = 3)
  8c:  107.11 (SE +/- 0.04, N = 3)
  6c:  106.51 (SE +/- 0.10, N = 3)

ASTC Encoder 4.0, Preset: Exhaustive
MT/s, More Is Better:
  12c: 11.73 (SE +/- 0.03, N = 3)
  10c: 11.76 (SE +/- 0.00, N = 3)
  8c:  11.81 (SE +/- 0.01, N = 3)
  6c:  11.82 (SE +/- 0.00, N = 3)
1. (CXX) g++ options: -O3 -flto -pthread

Graph500

This is a benchmark of the reference implementation of Graph500, an HPC benchmark focused on data intensive loads and commonly tested on supercomputers for complex data problems. Graph500 primarily stresses the communication subsystem of the hardware under test. Learn more via the OpenBenchmarking.org test page.
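To put the TEPS (traversed edges per second) figures in perspective, the problem size follows directly from the scale parameter. A back-of-the-envelope sketch, assuming the Graph500 reference defaults (edgefactor 16):

```python
scale = 26
edgefactor = 16                     # Graph500 reference-code default
vertices = 2 ** scale               # 67,108,864 vertices at scale 26
edges = edgefactor * vertices       # 1,073,741,824 edges

teps_12c = 565_152_000              # sssp median_TEPS from the 12c run below
print(f"{vertices:,} vertices, {edges:,} edges")
print(f"one sweep over every edge ~ {edges / teps_12c:.2f} s")
```

The scale-26 graph therefore holds about a billion edges, and the memory-channel count visibly matters: the 6c configuration lands around 30% below the 12c result.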

Graph500 3.0, Scale: 26
sssp median_TEPS, More Is Better:
  12c: 565152000
  10c: 574018000
  8c:  531854000
  6c:  392496000
1. (CC) gcc options: -fcommon -O3 -lpthread -lm -lmpi

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.

GROMACS 2022.1, Implementation: MPI CPU - Input: water_GMX50_bare
Ns Per Day, More Is Better:
  12c: 18.71 (SE +/- 0.03, N = 3)
  10c: 18.68 (SE +/- 0.01, N = 3)
  8c:  18.68 (SE +/- 0.01, N = 3)
  6c:  17.94 (SE +/- 0.03, N = 3)
1. (CXX) g++ options: -O3

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10, Device: CPU - Batch Size: 256 - Model: ResNet-50
images/sec, More Is Better:
  12c: 109.13 (SE +/- 0.48, N = 3)
  10c: 105.91 (SE +/- 0.36, N = 3)
  8c:  105.01 (SE +/- 0.48, N = 3)
  6c:  95.67 (SE +/- 0.26, N = 3)

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1, Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream
items/sec, More Is Better:
  12c: 84.35 (SE +/- 0.18, N = 3)
  10c: 84.48 (SE +/- 0.04, N = 3)
  8c:  84.21 (SE +/- 0.04, N = 3)
  6c:  82.49 (SE +/- 0.31, N = 3)

Neural Magic DeepSparse 1.1, Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream
ms/batch, Fewer Is Better:
  12c: 1133.28 (SE +/- 0.82, N = 3)
  10c: 1133.18 (SE +/- 0.20, N = 3)
  8c:  1136.85 (SE +/- 0.88, N = 3)
  6c:  1148.50 (SE +/- 0.67, N = 3)

Neural Magic DeepSparse 1.1, Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream
items/sec, More Is Better:
  12c: 761.49 (SE +/- 0.72, N = 3)
  10c: 742.80 (SE +/- 2.41, N = 3)
  8c:  705.71 (SE +/- 2.11, N = 3)
  6c:  575.75 (SE +/- 6.13, N = 15)

Neural Magic DeepSparse 1.1, Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream
ms/batch, Fewer Is Better:
  12c: 125.72 (SE +/- 0.11, N = 3)
  10c: 128.92 (SE +/- 0.43, N = 3)
  8c:  135.62 (SE +/- 0.38, N = 3)
  6c:  166.43 (SE +/- 1.66, N = 15)

Neural Magic DeepSparse 1.1, Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream
items/sec, More Is Better:
  12c: 856.02 (SE +/- 0.57, N = 3)
  10c: 844.43 (SE +/- 0.53, N = 3)
  8c:  773.07 (SE +/- 1.22, N = 3)
  6c:  635.02 (SE +/- 6.69, N = 15)

Neural Magic DeepSparse 1.1, Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream
ms/batch, Fewer Is Better:
  12c: 111.89 (SE +/- 0.07, N = 3)
  10c: 113.41 (SE +/- 0.08, N = 3)
  8c:  123.86 (SE +/- 0.19, N = 3)
  6c:  150.92 (SE +/- 1.56, N = 15)

Neural Magic DeepSparse 1.1, Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream
items/sec, More Is Better:
  12c: 1964.27 (SE +/- 4.95, N = 3)
  10c: 1965.56 (SE +/- 1.61, N = 3)
  8c:  1954.12 (SE +/- 1.56, N = 3)
  6c:  1930.33 (SE +/- 8.40, N = 3)

Neural Magic DeepSparse 1.1, Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream
ms/batch, Fewer Is Better:
  12c: 48.77 (SE +/- 0.12, N = 3)
  10c: 48.74 (SE +/- 0.04, N = 3)
  8c:  49.00 (SE +/- 0.04, N = 3)
  6c:  49.63 (SE +/- 0.21, N = 3)

Neural Magic DeepSparse 1.1, Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream
items/sec, More Is Better:
  12c: 1195.91 (SE +/- 4.04, N = 3)
  10c: 1201.14 (SE +/- 0.69, N = 3)
  8c:  1201.98 (SE +/- 3.22, N = 3)
  6c:  1190.53 (SE +/- 1.21, N = 3)

Neural Magic DeepSparse 1.1, Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream
ms/batch, Fewer Is Better:
  12c: 80.08 (SE +/- 0.27, N = 3)
  10c: 79.71 (SE +/- 0.03, N = 3)
  8c:  79.69 (SE +/- 0.20, N = 3)
  6c:  80.44 (SE +/- 0.07, N = 3)

Neural Magic DeepSparse 1.1, Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream
items/sec, More Is Better:
  12c: 615.45 (SE +/- 1.72, N = 3)
  10c: 611.29 (SE +/- 2.48, N = 3)
  8c:  614.61 (SE +/- 1.32, N = 3)
  6c:  608.53 (SE +/- 2.24, N = 3)

Neural Magic DeepSparse 1.1, Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream
ms/batch, Fewer Is Better:
  12c: 155.48 (SE +/- 0.46, N = 3)
  10c: 156.54 (SE +/- 0.55, N = 3)
  8c:  155.82 (SE +/- 0.27, N = 3)
  6c:  157.22 (SE +/- 0.58, N = 3)

Neural Magic DeepSparse 1.1, Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream
items/sec, More Is Better:
  12c: 84.25 (SE +/- 0.21, N = 3)
  10c: 84.27 (SE +/- 0.03, N = 3)
  8c:  84.15 (SE +/- 0.16, N = 3)
  6c:  82.26 (SE +/- 0.25, N = 3)

Neural Magic DeepSparse 1.1, Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream
ms/batch, Fewer Is Better:
  12c: 1133.48 (SE +/- 1.25, N = 3)
  10c: 1135.18 (SE +/- 1.00, N = 3)
  8c:  1137.51 (SE +/- 1.67, N = 3)
  6c:  1148.33 (SE +/- 1.05, N = 3)

WRF

WRF, the Weather Research and Forecasting Model, is a "next-generation mesoscale numerical weather prediction system designed for both atmospheric research and operational forecasting applications. It features two dynamical cores, a data assimilation system, and a software architecture supporting parallel computation and system extensibility." Learn more via the OpenBenchmarking.org test page.

WRF 4.2.2, Input: conus 2.5km
Seconds, Fewer Is Better:
  12c: 4070.19
  10c: 4563.18
  8c:  6551.88
  6c:  7432.66
1. (F9X) gfortran options: -O2 -ftree-vectorize -funroll-loops -ffree-form -fconvert=big-endian -frecord-marker=4 -fallow-invalid-boz -lesmf_time -lwrfio_nf -lnetcdff -lnetcdf -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

GPAW

GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.

GPAW 22.1, Input: Carbon Nanotube
Seconds, Fewer Is Better:
  12c: 23.15 (SE +/- 0.23, N = 5)
  10c: 23.37 (SE +/- 0.13, N = 3)
  8c:  24.60 (SE +/- 0.18, N = 3)
  6c:  26.31 (SE +/- 0.20, N = 3)
1. (CC) gcc options: -shared -fwrapv -O2 -lxc -lblas -lmpi

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

Blender 3.4, Blend File: BMW27 - Compute: CPU-Only
Seconds, Fewer Is Better:
  12c: 8.58 (SE +/- 0.06, N = 3)
  10c: 8.42 (SE +/- 0.04, N = 3)
  8c:  8.34 (SE +/- 0.02, N = 3)
  6c:  8.33 (SE +/- 0.06, N = 3)

Blender 3.4, Blend File: Classroom - Compute: CPU-Only
Seconds, Fewer Is Better:
  12c: 20.92 (SE +/- 0.00, N = 3)
  10c: 20.76 (SE +/- 0.09, N = 3)
  8c:  20.68 (SE +/- 0.06, N = 3)
  6c:  20.71 (SE +/- 0.04, N = 3)

Blender 3.4, Blend File: Barbershop - Compute: CPU-Only
Seconds, Fewer Is Better:
  12c: 81.03 (SE +/- 0.21, N = 3)
  10c: 80.37 (SE +/- 0.15, N = 3)
  8c:  80.18 (SE +/- 0.24, N = 3)
  6c:  79.93 (SE +/- 0.31, N = 3)

Apache Cassandra

This is a benchmark of the Apache Cassandra NoSQL database management system making use of cassandra-stress. Learn more via the OpenBenchmarking.org test page.

Apache Cassandra 4.0, Test: Writes
Op/s, More Is Better:
  12c: 251793 (SE +/- 3742.45, N = 12)
  10c: 243603 (SE +/- 2429.87, N = 3)
  8c:  240854 (SE +/- 1899.17, N = 3)
  6c:  246882 (SE +/- 2957.03, N = 3)

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web server. This Nginx web server benchmark test profile makes use of the wrk program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients/connections. HTTPS with a self-signed OpenSSL certificate is used by this test for local benchmarking. Learn more via the OpenBenchmarking.org test page.

nginx 1.23.2, Connections: 500
Requests Per Second, More Is Better:
  12c: 201032.06 (SE +/- 291.63, N = 3)
  10c: 198858.66 (SE +/- 335.64, N = 3)
  8c:  197081.98 (SE +/- 453.48, N = 3)
  6c:  196805.30 (SE +/- 113.87, N = 3)
1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11, Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
Inferences Per Minute, More Is Better:
  12c: 254 (SE +/- 2.33, N = 7)
  10c: 255 (SE +/- 3.09, N = 3)
  8c:  257 (SE +/- 2.84, N = 5)
  6c:  253 (SE +/- 2.17, N = 12)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Face Detection FP16 - Device: CPU12c10c8c6c20406080100SE +/- 0.08, N = 3SE +/- 0.03, N = 3SE +/- 0.04, N = 3SE +/- 0.06, N = 3101.74102.01101.26101.081. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Face Detection FP16 - Device: CPU12c10c8c6c100200300400500SE +/- 0.21, N = 3SE +/- 0.10, N = 3SE +/- 0.27, N = 3SE +/- 0.14, N = 3470.98469.43472.84473.69MIN: 451.07 / MAX: 556.04MIN: 432.92 / MAX: 555.25MIN: 394.37 / MAX: 553.15MIN: 423.34 / MAX: 579.411. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Person Detection FP16 - Device: CPU12c10c8c6c1020304050SE +/- 0.13, N = 3SE +/- 0.12, N = 3SE +/- 0.15, N = 3SE +/- 0.17, N = 342.9842.9442.5941.331. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Person Detection FP16 - Device: CPU12c10c8c6c2004006008001000SE +/- 3.30, N = 3SE +/- 2.71, N = 3SE +/- 3.60, N = 3SE +/- 4.42, N = 31109.451110.441119.791153.70MIN: 810.74 / MAX: 1835.01MIN: 769.04 / MAX: 1860.23MIN: 808.33 / MAX: 1875.91MIN: 853.88 / MAX: 1939.061. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Person Detection FP32 - Device: CPU12c10c8c6c1020304050SE +/- 0.32, N = 3SE +/- 0.20, N = 3SE +/- 0.01, N = 3SE +/- 0.07, N = 342.9543.1842.2241.441. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Person Detection FP32 - Device: CPU12c10c8c6c2004006008001000SE +/- 8.73, N = 3SE +/- 5.36, N = 3SE +/- 0.54, N = 3SE +/- 1.87, N = 31110.681104.591129.011150.54MIN: 833.53 / MAX: 1865.19MIN: 807.38 / MAX: 1818.79MIN: 850.94 / MAX: 1870.94MIN: 870.26 / MAX: 1902.461. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Vehicle Detection FP16 - Device: CPU12c10c8c6c16003200480064008000SE +/- 2.30, N = 3SE +/- 13.32, N = 3SE +/- 6.27, N = 3SE +/- 4.59, N = 37394.657425.107389.007306.471. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Vehicle Detection FP16 - Device: CPU12c10c8c6c246810SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.00, N = 36.486.456.496.56MIN: 5.06 / MAX: 59.88MIN: 4.97 / MAX: 59.86MIN: 4.93 / MAX: 59.51MIN: 4.99 / MAX: 59.461. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Face Detection FP16-INT8 - Device: CPU12c10c8c6c4080120160200SE +/- 0.21, N = 3SE +/- 0.03, N = 3SE +/- 0.48, N = 3SE +/- 0.09, N = 3191.43192.30192.25191.291. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Face Detection FP16-INT8 - Device: CPU12c10c8c6c50100150200250SE +/- 0.32, N = 3SE +/- 0.03, N = 3SE +/- 0.69, N = 3SE +/- 0.13, N = 3250.34249.12249.26250.49MIN: 222.95 / MAX: 301.42MIN: 209.28 / MAX: 311.3MIN: 207.76 / MAX: 340.53MIN: 213.3 / MAX: 307.841. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Vehicle Detection FP16-INT8 - Device: CPU12c10c8c6c2K4K6K8K10KSE +/- 1.42, N = 3SE +/- 3.30, N = 3SE +/- 1.79, N = 3SE +/- 1.79, N = 311018.3711066.1611108.1611150.321. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Vehicle Detection FP16-INT8 - Device: CPU12c10c8c6c0.97881.95762.93643.91524.894SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 34.354.334.314.30MIN: 3.52 / MAX: 41.44MIN: 3.51 / MAX: 41.25MIN: 3.51 / MAX: 43.89MIN: 3.52 / MAX: 43.571. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Weld Porosity Detection FP16 - Device: CPU12c10c8c6c2K4K6K8K10KSE +/- 2.57, N = 3SE +/- 2.08, N = 3SE +/- 7.50, N = 3SE +/- 3.42, N = 39867.419900.479931.499959.381. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO 2022.3 - Model: Weld Porosity Detection FP16 - Device: CPU
ms, Fewer Is Better (OpenBenchmarking.org)
  12c: 4.85   SE +/- 0.00, N = 3   MIN: 4.06 / MAX: 28.62
  10c: 4.83   SE +/- 0.00, N = 3   MIN: 4.08 / MAX: 28.68
  8c:  4.82   SE +/- 0.00, N = 3   MIN: 3.98 / MAX: 28.83
  6c:  4.81   SE +/- 0.00, N = 3   MIN: 4.14 / MAX: 27.29

OpenVINO 2022.3 - Model: Machine Translation EN To DE FP16 - Device: CPU
FPS, More Is Better (OpenBenchmarking.org)
  12c: 959.16   SE +/- 2.32, N = 3
  10c: 934.71   SE +/- 1.48, N = 3
  8c:  875.39   SE +/- 8.79, N = 6
  6c:  817.27   SE +/- 5.14, N = 3

OpenVINO 2022.3 - Model: Machine Translation EN To DE FP16 - Device: CPU
ms, Fewer Is Better (OpenBenchmarking.org)
  12c: 49.98   SE +/- 0.12, N = 3   MIN: 38.24 / MAX: 187.97
  10c: 51.29   SE +/- 0.08, N = 3   MIN: 40.28 / MAX: 292.83
  8c:  54.80   SE +/- 0.57, N = 6   MIN: 40.7 / MAX: 276.86
  6c:  58.67   SE +/- 0.37, N = 3   MIN: 43.56 / MAX: 315.05

OpenVINO 2022.3 - Model: Weld Porosity Detection FP16-INT8 - Device: CPU
FPS, More Is Better (OpenBenchmarking.org)
  12c: 19171.51   SE +/- 12.43, N = 3
  10c: 19254.08   SE +/- 30.88, N = 3
  8c:  19278.93   SE +/- 31.30, N = 3
  6c:  19314.04   SE +/- 33.95, N = 3

OpenVINO 2022.3 - Model: Weld Porosity Detection FP16-INT8 - Device: CPU
ms, Fewer Is Better (OpenBenchmarking.org)
  12c: 9.95   SE +/- 0.00, N = 3   MIN: 8.42 / MAX: 52.38
  10c: 9.91   SE +/- 0.02, N = 3   MIN: 8.4 / MAX: 50.42
  8c:  9.90   SE +/- 0.02, N = 3   MIN: 8.39 / MAX: 56.99
  6c:  9.89   SE +/- 0.02, N = 3   MIN: 8.35 / MAX: 32.16

OpenVINO 2022.3 - Model: Person Vehicle Bike Detection FP16 - Device: CPU
FPS, More Is Better (OpenBenchmarking.org)
  12c: 9038.47   SE +/- 9.96, N = 3
  10c: 9063.84   SE +/- 5.19, N = 3
  8c:  9113.11   SE +/- 2.85, N = 3
  6c:  9081.73   SE +/- 7.67, N = 3

OpenVINO 2022.3 - Model: Person Vehicle Bike Detection FP16 - Device: CPU
ms, Fewer Is Better (OpenBenchmarking.org)
  12c: 5.30   SE +/- 0.01, N = 3   MIN: 4.42 / MAX: 40.66
  10c: 5.28   SE +/- 0.00, N = 3   MIN: 4.37 / MAX: 41.23
  8c:  5.26   SE +/- 0.00, N = 3   MIN: 4.42 / MAX: 42.93
  6c:  5.28   SE +/- 0.00, N = 3   MIN: 4.34 / MAX: 38.93

OpenVINO 2022.3 - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU
FPS, More Is Better (OpenBenchmarking.org)
  12c: 147769.26   SE +/- 745.28, N = 3
  10c: 147717.32   SE +/- 1134.97, N = 10
  8c:  152292.39   SE +/- 994.61, N = 3
  6c:  151213.17   SE +/- 365.43, N = 3

OpenVINO 2022.3 - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU
ms, Fewer Is Better (OpenBenchmarking.org)
  12c: 0.55   SE +/- 0.00, N = 3    MIN: 0.5 / MAX: 34.71
  10c: 0.55   SE +/- 0.00, N = 10   MIN: 0.5 / MAX: 41.23
  8c:  0.55   SE +/- 0.00, N = 3    MIN: 0.5 / MAX: 30.68
  6c:  0.54   SE +/- 0.00, N = 3    MIN: 0.5 / MAX: 34.19

OpenVINO 2022.3 - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU
FPS, More Is Better (OpenBenchmarking.org)
  12c: 119606.21   SE +/- 1214.59, N = 3
  10c: 122938.23   SE +/- 815.42, N = 3
  8c:  123571.68   SE +/- 1158.58, N = 3
  6c:  121027.25   SE +/- 681.80, N = 3

OpenVINO 2022.3 - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU
ms, Fewer Is Better (OpenBenchmarking.org)
  12c: 0.97   SE +/- 0.00, N = 3   MIN: 0.85 / MAX: 22.9
  10c: 0.98   SE +/- 0.00, N = 3   MIN: 0.85 / MAX: 39.82
  8c:  0.98   SE +/- 0.00, N = 3   MIN: 0.86 / MAX: 39.58
  6c:  0.97   SE +/- 0.00, N = 3   MIN: 0.86 / MAX: 33.82
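Of the OpenVINO workloads above, Machine Translation EN To DE FP16 shows the clearest spread across the run configurations, while most of the detection models are essentially flat. A minimal sketch of how such scaling can be quantified, using the FPS figures copied from that chart (the 12c/10c/8c/6c labels are simply the run identifiers from this result file):

```python
# FPS results for OpenVINO 2022.3 Machine Translation EN To DE FP16 - CPU,
# taken from the result table above, keyed by run identifier.
fps = {"12c": 959.16, "10c": 934.71, "8c": 875.39, "6c": 817.27}

# Express each configuration as a percentage gain over the slowest run (6c).
baseline = fps["6c"]
for cfg, value in fps.items():
    gain = (value / baseline - 1) * 100
    print(f"{cfg}: {value:.2f} FPS ({gain:+.1f}% vs 6c)")
```

Under this comparison the 12c run comes out roughly 17% faster than the 6c run on this workload, whereas applying the same arithmetic to, say, the Face Detection FP16-INT8 figures yields well under 1% of spread.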

135 Results Shown

High Performance Conjugate Gradient
NAS Parallel Benchmarks:
  CG.C
  IS.D
  LU.C
  MG.C
  SP.C
miniBUDE:
  OpenMP - BM2:
    GFInst/s
    Billion Interactions/s
Rodinia:
  OpenMP CFD Solver
  OpenMP Streamcluster
NAMD
nekRS
NWChem
Xcompact3d Incompact3d
OpenFOAM
OpenRadioss:
  Bumper Beam
  Bird Strike on Windshield
  INIVOL and Fluid Structure Interaction Drop Container
RELION
simdjson:
  Kostya
  TopTweet
  LargeRand
  PartialTweets
  DistinctUserID
Xmrig:
  Monero - 1M
  Wownero - 1M
DaCapo Benchmark:
  H2
  Jython
LuxCoreRender:
  Danish Mood - CPU
  Orange Juice - CPU
Embree:
  Pathtracer ISPC - Crown
  Pathtracer ISPC - Asian Dragon
Kvazaar:
  Bosphorus 4K - Medium
  Bosphorus 4K - Very Fast
  Bosphorus 4K - Ultra Fast
SVT-AV1
ACES DGEMM
Intel Open Image Denoise:
  RT.hdr_alb_nrm.3840x2160
  RTLightmap.hdr.4096x4096
OpenVKL
OSPRay:
  particle_volume/ao/real_time
  particle_volume/scivis/real_time
  particle_volume/pathtracer/real_time
  gravity_spheres_volume/dim_512/ao/real_time
  gravity_spheres_volume/dim_512/scivis/real_time
  gravity_spheres_volume/dim_512/pathtracer/real_time
7-Zip Compression:
  Compression Rating
  Decompression Rating
Stargate Digital Audio Workstation:
  96000 - 1024
  192000 - 1024
libavif avifenc:
  0
  2
  6
  6, Lossless
  10, Lossless
Timed Apache Compilation
Timed GDB GNU Debugger Compilation
Timed Gem5 Compilation
Timed Godot Game Engine Compilation
Timed Linux Kernel Compilation:
  defconfig
  allmodconfig
Timed LLVM Compilation
Timed Mesa Compilation
Timed MPlayer Compilation
Timed Node.js Compilation
Timed PHP Compilation
Build2
oneDNN:
  IP Shapes 3D - bf16bf16bf16 - CPU
  Recurrent Neural Network Training - u8s8f32 - CPU
  Recurrent Neural Network Inference - u8s8f32 - CPU
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU
  Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU
Liquid-DSP:
  256 - 256 - 57
  384 - 256 - 57
CockroachDB:
  MoVR - 512
  MoVR - 1024
  KV, 10% Reads - 512
  KV, 50% Reads - 512
  KV, 60% Reads - 512
  KV, 95% Reads - 512
  KV, 10% Reads - 1024
  KV, 50% Reads - 1024
  KV, 60% Reads - 1024
  KV, 95% Reads - 1024
ASTC Encoder:
  Thorough
  Exhaustive
Graph500
GROMACS
TensorFlow
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
WRF
GPAW
Blender:
  BMW27 - CPU-Only
  Classroom - CPU-Only
  Barbershop - CPU-Only
Apache Cassandra
nginx
ONNX Runtime
OpenVINO:
  Face Detection FP16 - CPU:
    FPS
    ms
  Person Detection FP16 - CPU:
    FPS
    ms
  Person Detection FP32 - CPU:
    FPS
    ms
  Vehicle Detection FP16 - CPU:
    FPS
    ms
  Face Detection FP16-INT8 - CPU:
    FPS
    ms
  Vehicle Detection FP16-INT8 - CPU:
    FPS
    ms
  Weld Porosity Detection FP16 - CPU:
    FPS
    ms
  Machine Translation EN To DE FP16 - CPU:
    FPS
    ms
  Weld Porosity Detection FP16-INT8 - CPU:
    FPS
    ms
  Person Vehicle Bike Detection FP16 - CPU:
    FPS
    ms
  Age Gender Recognition Retail 0013 FP16 - CPU:
    FPS
    ms
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU:
    FPS
    ms