AMD EPYC Genoa Memory Scaling

Benchmarks by Michael Larabel for a future article.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2212240-NE-AMDEPYCGE62

Result Identifiers

Identifier   Date Run            Test Duration
12c          December 21 2022    11 Hours, 55 Minutes
10c          December 21 2022    12 Hours, 59 Minutes
8c           December 22 2022    13 Hours, 22 Minutes
6c           December 23 2022    15 Hours, 14 Minutes



AMD EPYC Genoa Memory Scaling Benchmarks - OpenBenchmarking.org / Phoronix Test Suite

Processor: 2 x AMD EPYC 9654 96-Core @ 2.40GHz (192 Cores / 384 Threads)
Motherboard: AMD Titanite_4G (RTI1002E BIOS)
Chipset: AMD Device 14a4
Memory: 1520GB / 1264GB / 1008GB / 768GB
Disk: 800GB INTEL SSDPF21Q800GB
Graphics: ASPEED
Monitor: VGA HDMI
Network: Broadcom NetXtreme BCM5720 PCIe
OS: Ubuntu 22.10
Kernel: 6.1.0-phx (x86_64)
Desktop: GNOME Shell 43.0
Display Server: X Server 1.21.1.4
Vulkan: 1.3.224
Compiler: GCC 12.2.0 + Clang 15.0.2-1
File-System: ext4
Screen Resolution: 1920x1080

System Logs:
- Transparent Huge Pages: madvise
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: acpi-cpufreq performance (Boost: Enabled)
- CPU Microcode: 0xa10110d
- OpenJDK Runtime Environment (build 11.0.17+8-post-Ubuntu-1ubuntu2)
- Python 3.10.7
- Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

Result Overview chart (12c / 10c / 8c / 6c relative performance, 100% to 278%). Tests covered: Xcompact3d Incompact3d, High Performance Conjugate Gradient, OpenFOAM, RELION, WRF, Graph500, NAS Parallel Benchmarks, nekRS, TensorFlow, SVT-AV1, GPAW, Neural Magic DeepSparse, OpenVKL, Intel Open Image Denoise, 7-Zip Compression, oneDNN, Rodinia, Apache Cassandra, GROMACS, Timed GDB GNU Debugger Compilation, Timed Gem5 Compilation, Embree, Kvazaar, Xmrig, nginx, Timed Linux Kernel Compilation, Blender, OpenVINO, Timed LLVM Compilation, Timed Node.js Compilation, ONNX Runtime, NWChem, LuxCoreRender, CockroachDB, Timed Apache Compilation, Timed Godot Game Engine Compilation, OSPRay, ACES DGEMM, libavif avifenc, Timed MPlayer Compilation, ASTC Encoder, miniBUDE, Build2, Timed Mesa Compilation, NAMD, Stargate Digital Audio Workstation, Timed PHP Compilation, simdjson, OpenRadioss, Liquid-DSP, DaCapo Benchmark.


WRF

WRF, the Weather Research and Forecasting Model, is a "next-generation mesoscale numerical weather prediction system designed for both atmospheric research and operational forecasting applications. It features two dynamical cores, a data assimilation system, and a software architecture supporting parallel computation and system extensibility." Learn more via the OpenBenchmarking.org test page.

WRF 4.2.2 - Input: conus 2.5km (Seconds, Fewer Is Better)
12c: 4070.19
10c: 4563.18
8c: 6551.88
6c: 7432.66
1. (F9X) gfortran options: -O2 -ftree-vectorize -funroll-loops -ffree-form -fconvert=big-endian -frecord-marker=4 -fallow-invalid-boz -lesmf_time -lwrfio_nf -lnetcdff -lnetcdf -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
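Since every configuration runs the same conus 2.5km input, the wall-clock times above can be restated as speedups relative to the slowest (6-channel) configuration. A minimal sketch of that calculation, using the figures from this result file:

```python
# WRF conus 2.5km wall-clock times (seconds) from the result file above,
# keyed by memory-channel configuration.
wrf_seconds = {"12c": 4070.19, "10c": 4563.18, "8c": 6551.88, "6c": 7432.66}

# Speedup of each configuration relative to the 6-channel baseline.
baseline = wrf_seconds["6c"]
speedup = {cfg: round(baseline / secs, 2) for cfg, secs in wrf_seconds.items()}
print(speedup)  # {'12c': 1.83, '10c': 1.63, '8c': 1.13, '6c': 1.0}
```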

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient benchmark, a scientific benchmark from Sandia National Labs focused on supercomputer testing with modern, real-world workloads compared to HPCC. Learn more via the OpenBenchmarking.org test page.
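At its core the benchmark iterates the conjugate gradient method on a sparse symmetric positive-definite system A x = b. A toy dense-matrix sketch of plain, unpreconditioned conjugate gradient, far simpler than HPCG's multigrid-preconditioned kernel:

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Solve A x = b for a symmetric positive-definite A (plain lists)."""
    n = len(b)
    x = [0.0] * n
    r = list(b)              # residual r = b - A x (x starts at zero)
    p = list(r)              # initial search direction
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x

# Toy system: 4x + y = 1, x + 3y = 2  ->  x = 1/11, y = 7/11
x = conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```

In exact arithmetic CG converges in at most n iterations for an n-dimensional system, which is why the 2x2 example above finishes in two steps.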

High Performance Conjugate Gradient 3.1 (GFLOP/s, More Is Better)
12c: 86.81 (SE +/- 1.12, N = 12)
10c: 48.29 (SE +/- 3.31, N = 9)
8c: 45.00 (SE +/- 0.49, N = 9)
6c: 36.54 (SE +/- 0.99, N = 9)
1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi_cxx -lmpi

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library that offers high-performance volume computation kernels and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 1.3.1 - Benchmark: vklBenchmark ISPC (Items / Sec, More Is Better)
12c: 1325 (SE +/- 6.93, N = 3, MIN: 329 / MAX: 4553)
10c: 1317 (SE +/- 11.03, N = 9, MIN: 327 / MAX: 5660)
8c: 1325 (SE +/- 8.82, N = 3, MIN: 330 / MAX: 5664)
6c: 1212 (SE +/- 15.59, N = 3, MIN: 328 / MAX: 4115)

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite difference high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.
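As a sense of what a finite-difference code does, here is a toy explicit step for the 1D diffusion equation using a central difference for the second derivative; this is illustrative only and far simpler than Xcompact3d's high-order compact schemes:

```python
def diffusion_step(u, nu, dx, dt):
    """One explicit step of du/dt = nu * d2u/dx2 with central differences."""
    n = len(u)
    new = list(u)
    for i in range(1, n - 1):
        # Central difference: d2u/dx2 ~ (u[i+1] - 2 u[i] + u[i-1]) / dx^2
        new[i] = u[i] + nu * dt / dx**2 * (u[i+1] - 2*u[i] + u[i-1])
    return new  # endpoints held fixed (Dirichlet boundaries)

u = [0.0, 0.0, 1.0, 0.0, 0.0]   # initial spike
u = diffusion_step(u, nu=1.0, dx=1.0, dt=0.25)
print(u)  # [0.0, 0.25, 0.5, 0.25, 0.0] -- the spike diffuses outward
```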

Xcompact3d Incompact3d 2021-03-11 - Input: X3D-benchmarking input.i3d (Seconds, Fewer Is Better)
12c: 125.53 (SE +/- 0.14, N = 3)
10c: 146.29 (SE +/- 0.11, N = 3)
8c: 270.09 (SE +/- 2.69, N = 9)
6c: 348.88 (SE +/- 4.79, N = 9)
1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

NWChem

NWChem is an open-source high performance computational chemistry package. Per NWChem's documentation, "NWChem aims to provide its users with computational chemistry tools that are scalable both in their ability to treat large scientific computational chemistry problems efficiently, and in their use of available parallel computing resources from high-performance parallel supercomputers to conventional workstation clusters." Learn more via the OpenBenchmarking.org test page.

NWChem 7.0.2 - Input: C240 Buckyball (Seconds, Fewer Is Better)
12c: 1537.1
10c: 1531.0
8c: 1519.6
6c: 1517.9
1. (F9X) gfortran options: -lnwctask -lccsd -lmcscf -lselci -lmp2 -lmoints -lstepper -ldriver -loptim -lnwdft -lgradients -lcphf -lesp -lddscf -ldangchang -lguess -lhessian -lvib -lnwcutil -lrimp2 -lproperty -lsolvation -lnwints -lprepar -lnwmd -lnwpw -lofpw -lpaw -lpspw -lband -lnwpwlib -lcafe -lspace -lanalyze -lqhop -lpfft -ldplot -ldrdy -lvscf -lqmmm -lqmd -letrans -ltce -lbq -lmm -lcons -lperfm -ldntmc -lccca -ldimqm -lga -larmci -lpeigs -l64to32 -lopenblas -lpthread -lrt -llapack -lnwcblas -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz -lcomex -m64 -ffast-math -std=legacy -fdefault-integer-8 -finline-functions -O2

CockroachDB

CockroachDB is a cloud-native, distributed SQL database for data-intensive applications. This test profile uses a server-less CockroachDB configuration to test various CockroachDB workloads on the local host with a single node. Learn more via the OpenBenchmarking.org test page.
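The workload labels below, such as "KV, 10% Reads", describe the mix of operations the load generator issues against the database. A hypothetical sketch of building such a mix (not CockroachDB's actual workload generator):

```python
import random

def kv_operation_mix(read_pct, total_ops, seed=42):
    """Build a shuffled list of 'read'/'write' ops matching a percentage mix."""
    reads = total_ops * read_pct // 100
    ops = ["read"] * reads + ["write"] * (total_ops - reads)
    random.Random(seed).shuffle(ops)   # deterministic shuffle for the example
    return ops

# "KV, 10% Reads": out of 1000 operations, 100 are reads and 900 are writes.
ops = kv_operation_mix(read_pct=10, total_ops=1000)
```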

CockroachDB 22.2 - Workload: KV, 10% Reads - Concurrency: 512 (ops/s, More Is Better)
12c: 35970.0 (SE +/- 343.66, N = 15)
10c: 35993.1 (SE +/- 270.36, N = 15)
8c: 34832.9 (SE +/- 351.71, N = 6)
6c: 35742.3 (SE +/- 438.30, N = 15)

CockroachDB 22.2 - Workload: KV, 95% Reads - Concurrency: 1024 (ops/s, More Is Better)
12c: 64661.8 (SE +/- 575.30, N = 3)
10c: 62029.8 (SE +/- 1142.40, N = 15)
8c: 58195.5 (SE +/- 1317.65, N = 15)
6c: 60137.3 (SE +/- 1310.27, N = 15)

CockroachDB 22.2 - Workload: KV, 60% Reads - Concurrency: 512 (ops/s, More Is Better)
12c: 52330.1 (SE +/- 268.61, N = 3)
10c: 51748.8 (SE +/- 620.92, N = 15)
8c: 52515.2 (SE +/- 411.73, N = 13)
6c: 51275.1 (SE +/- 555.56, N = 15)

CockroachDB 22.2 - Workload: KV, 50% Reads - Concurrency: 1024 (ops/s, More Is Better)
12c: 47465.5 (SE +/- 366.75, N = 15)
10c: 48449.0 (SE +/- 380.16, N = 3)
8c: 47498.1 (SE +/- 468.66, N = 15)
6c: 47593.9 (SE +/- 391.13, N = 9)

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10 - Benchmark: particle_volume/scivis/real_time (Items Per Second, More Is Better)
12c: 42.80 (SE +/- 0.05, N = 3)
10c: 43.00 (SE +/- 0.01, N = 3)
8c: 43.84 (SE +/- 0.03, N = 3)
6c: 43.24 (SE +/- 0.06, N = 3)

CockroachDB

CockroachDB is a cloud-native, distributed SQL database for data-intensive applications. This test profile uses a server-less CockroachDB configuration to test various CockroachDB workloads on the local host with a single node. Learn more via the OpenBenchmarking.org test page.

CockroachDB 22.2 - Workload: KV, 50% Reads - Concurrency: 512 (ops/s, More Is Better)
12c: 47621.9 (SE +/- 464.03, N = 15)
10c: 49102.7 (SE +/- 514.54, N = 3)
8c: 47596.6 (SE +/- 454.84, N = 15)
6c: 47428.0 (SE +/- 32.88, N = 3)

CockroachDB 22.2 - Workload: KV, 95% Reads - Concurrency: 512 (ops/s, More Is Better)
12c: 64467.6 (SE +/- 702.29, N = 3)
10c: 60769.7 (SE +/- 1044.13, N = 15)
8c: 64111.9 (SE +/- 890.57, N = 3)
6c: 62666.5 (SE +/- 813.26, N = 15)

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10 - Benchmark: particle_volume/pathtracer/real_time (Items Per Second, More Is Better)
12c: 229.27 (SE +/- 1.54, N = 3)
10c: 230.28 (SE +/- 1.94, N = 3)
8c: 228.58 (SE +/- 1.74, N = 3)
6c: 230.44 (SE +/- 0.59, N = 3)

RELION

RELION - REgularised LIkelihood OptimisatioN - is a stand-alone computer program for Maximum A Posteriori refinement of (multiple) 3D reconstructions or 2D class averages in cryo-electron microscopy (cryo-EM). It is developed in the research group of Sjors Scheres at the MRC Laboratory of Molecular Biology. Learn more via the OpenBenchmarking.org test page.

RELION 3.1.1 - Test: Basic - Device: CPU (Seconds, Fewer Is Better)
12c: 128.10 (SE +/- 1.38, N = 5)
10c: 151.40 (SE +/- 1.86, N = 4)
8c: 221.34 (SE +/- 2.88, N = 3)
6c: 258.50 (SE +/- 2.59, N = 6)
1. (CXX) g++ options: -fopenmp -std=c++0x -O3 -rdynamic -ldl -ltiff -lfftw3f -lfftw3 -lpng -lmpi_cxx -lmpi

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 256 - Model: ResNet-50 (images/sec, More Is Better)
12c: 109.13 (SE +/- 0.48, N = 3)
10c: 105.91 (SE +/- 0.36, N = 3)
8c: 105.01 (SE +/- 0.48, N = 3)
6c: 95.67 (SE +/- 0.26, N = 3)

LuxCoreRender

LuxCoreRender is an open-source 3D physically based renderer formerly known as LuxRender. LuxCoreRender supports CPU-based rendering as well as GPU acceleration via OpenCL, NVIDIA CUDA, and NVIDIA OptiX interfaces. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.6 - Scene: Danish Mood - Acceleration: CPU (M samples/sec, More Is Better)
12c: 9.69 (SE +/- 0.09, N = 15, MIN: 4 / MAX: 12.39)
10c: 9.62 (SE +/- 0.17, N = 12, MIN: 3.97 / MAX: 12.9)
8c: 9.56 (SE +/- 0.11, N = 15, MIN: 3.94 / MAX: 12.41)
6c: 9.49 (SE +/- 0.14, N = 12, MIN: 3.85 / MAX: 12.15)

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inferences Per Minute, More Is Better)
12c: 254 (SE +/- 2.33, N = 7)
10c: 255 (SE +/- 3.09, N = 3)
8c: 257 (SE +/- 2.84, N = 5)
6c: 253 (SE +/- 2.17, N = 12)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

LuxCoreRender

LuxCoreRender is an open-source 3D physically based renderer formerly known as LuxRender. LuxCoreRender supports CPU-based rendering as well as GPU acceleration via OpenCL, NVIDIA CUDA, and NVIDIA OptiX interfaces. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.6 - Scene: Orange Juice - Acceleration: CPU (M samples/sec, More Is Better)
12c: 28.82 (SE +/- 0.63, N = 15, MIN: 23.01 / MAX: 45.86)
10c: 28.19 (SE +/- 0.29, N = 3, MIN: 23.3 / MAX: 45.65)
8c: 29.04 (SE +/- 0.72, N = 15, MIN: 22.62 / MAX: 45.48)
6c: 28.90 (SE +/- 0.71, N = 15, MIN: 22.4 / MAX: 44.91)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
12c: 2275.86 (SE +/- 24.22, N = 3, MIN: 2213.34)
10c: 2325.71 (SE +/- 25.04, N = 15, MIN: 2171.69)
8c: 2371.78 (SE +/- 25.14, N = 15, MIN: 2234.23)
6c: 2471.57 (SE +/- 31.16, N = 3, MIN: 2410.73)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

OpenRadioss

OpenRadioss is an open-source AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss, which was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13 - Model: Bird Strike on Windshield (Seconds, Fewer Is Better)
12c: 216.88 (SE +/- 0.38, N = 3)
10c: 218.22 (SE +/- 0.54, N = 3)
8c: 219.45 (SE +/- 0.19, N = 3)
6c: 219.10 (SE +/- 0.14, N = 3)

Apache Cassandra

This is a benchmark of the Apache Cassandra NoSQL database management system making use of cassandra-stress. Learn more via the OpenBenchmarking.org test page.

Apache Cassandra 4.0 - Test: Writes (Op/s, More Is Better)
12c: 251793 (SE +/- 3742.45, N = 12)
10c: 243603 (SE +/- 2429.87, N = 3)
8c: 240854 (SE +/- 1899.17, N = 3)
6c: 246882 (SE +/- 2957.03, N = 3)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
12c: 1968.70 (SE +/- 31.84, N = 15, MIN: 1632.62)
10c: 2030.72 (SE +/- 14.89, N = 3, MIN: 1981.15)
8c: 1982.15 (SE +/- 28.30, N = 3, MIN: 1911.33)
6c: 2072.57 (SE +/- 16.27, N = 10, MIN: 1942.14)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10 - Benchmark: particle_volume/ao/real_time (Items Per Second, More Is Better)
12c: 43.71 (SE +/- 0.04, N = 3)
10c: 43.03 (SE +/- 0.04, N = 3)
8c: 43.97 (SE +/- 0.01, N = 3)
6c: 43.36 (SE +/- 0.04, N = 3)

CockroachDB

CockroachDB is a cloud-native, distributed SQL database for data-intensive applications. This test profile uses a server-less CockroachDB configuration to test various CockroachDB workloads on the local host with a single node. Learn more via the OpenBenchmarking.org test page.

CockroachDB 22.2 - Workload: KV, 60% Reads - Concurrency: 1024 (ops/s, More Is Better)
12c: 52573.3 (SE +/- 239.52, N = 3)
10c: 51959.5 (SE +/- 400.61, N = 10)
8c: 52559.0 (SE +/- 447.89, N = 3)
6c: 52626.4 (SE +/- 448.33, N = 3)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
12c: 2344.29 (SE +/- 21.01, N = 3, MIN: 2288.85)
10c: 2438.00 (SE +/- 30.76, N = 3, MIN: 2353.97)
8c: 2375.45 (SE +/- 21.41, N = 3, MIN: 2319.45)
6c: 2479.62 (SE +/- 25.74, N = 15, MIN: 2293.49)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Graph500

This is a benchmark of the reference implementation of Graph500, an HPC benchmark focused on data-intensive loads and commonly tested on supercomputers for complex data problems. Graph500 primarily stresses the communication subsystem of the hardware under test. Learn more via the OpenBenchmarking.org test page.
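TEPS means traversed edges per second: a search kernel (BFS or SSSP) runs over the generated graph, and the number of edges it scans is divided by the elapsed time. A minimal BFS-based illustration (not the reference implementation):

```python
import time
from collections import deque

def bfs_teps(adj, root):
    """Breadth-first search; returns (edges traversed, TEPS estimate)."""
    visited = {root}
    queue = deque([root])
    edges = 0
    start = time.perf_counter()
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            edges += 1                  # every scanned edge counts
            if w not in visited:
                visited.add(w)
                queue.append(w)
    elapsed = time.perf_counter() - start
    return edges, edges / elapsed

# Tiny undirected graph (each edge stored in both directions): 0-1, 0-2, 1-2, 2-3
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
edges, teps = bfs_teps(adj, root=0)
```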

Graph500 3.0 - Scale: 26 (sssp median_TEPS, More Is Better)
12c: 565152000
10c: 574018000
8c: 531854000
6c: 392496000
1. (CC) gcc options: -fcommon -O3 -lpthread -lm -lmpi

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested or alternatively an allmodconfig for building all possible kernel modules for the build. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 6.1 - Build: allmodconfig (Seconds, Fewer Is Better)
12c: 147.15 (SE +/- 0.90, N = 3)
10c: 145.41 (SE +/- 0.72, N = 3)
8c: 147.38 (SE +/- 1.03, N = 3)
6c: 145.77 (SE +/- 0.14, N = 3)

Timed Gem5 Compilation

This test times how long it takes to compile Gem5. Gem5 is a simulator for computer system architecture research. Gem5 is widely used for computer architecture research within the industry, academia, and more. Learn more via the OpenBenchmarking.org test page.

Timed Gem5 Compilation 21.2 - Time To Compile (Seconds, Fewer Is Better)
12c: 139.24 (SE +/- 0.16, N = 3)
10c: 134.37 (SE +/- 0.36, N = 3)
8c: 136.79 (SE +/- 0.77, N = 3)
6c: 134.70 (SE +/- 0.57, N = 3)

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10 - Benchmark: gravity_spheres_volume/dim_512/scivis/real_time (Items Per Second, More Is Better)
12c: 43.13 (SE +/- 0.15, N = 3)
10c: 43.33 (SE +/- 0.12, N = 3)
8c: 43.43 (SE +/- 0.13, N = 3)
6c: 43.29 (SE +/- 0.15, N = 3)

OSPRay 2.10 - Benchmark: gravity_spheres_volume/dim_512/ao/real_time (Items Per Second, More Is Better)
12c: 43.98 (SE +/- 0.13, N = 3)
10c: 44.00 (SE +/- 0.04, N = 3)
8c: 44.23 (SE +/- 0.10, N = 3)
6c: 44.27 (SE +/- 0.07, N = 3)

OSPRay 2.10 - Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time (Items Per Second, More Is Better)
12c: 53.77 (SE +/- 0.50, N = 3)
10c: 54.41 (SE +/- 0.12, N = 3)
8c: 54.51 (SE +/- 0.08, N = 3)
6c: 54.61 (SE +/- 0.04, N = 3)

CockroachDB

CockroachDB is a cloud-native, distributed SQL database for data-intensive applications. This test profile uses a server-less CockroachDB configuration to test various CockroachDB workloads on the local host with a single node. Learn more via the OpenBenchmarking.org test page.

CockroachDB 22.2 - Workload: KV, 10% Reads - Concurrency: 1024 (ops/s, More Is Better)
12c: 36846.9 (SE +/- 155.07, N = 3)
10c: 35776.8 (SE +/- 346.25, N = 3)
8c: 36685.7 (SE +/- 322.68, N = 3)
6c: 36329.6 (SE +/- 206.35, N = 3)

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
12c: 125.72 (SE +/- 0.11, N = 3)
10c: 128.92 (SE +/- 0.43, N = 3)
8c: 135.62 (SE +/- 0.38, N = 3)
6c: 166.43 (SE +/- 1.66, N = 15)

Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
12c: 761.49 (SE +/- 0.72, N = 3)
10c: 742.80 (SE +/- 2.41, N = 3)
8c: 705.71 (SE +/- 2.11, N = 3)
6c: 575.75 (SE +/- 6.13, N = 15)

Timed Node.js Compilation

This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built on the Chrome V8 JavaScript engine, while Node.js itself is written in C/C++. Learn more via the OpenBenchmarking.org test page.

Timed Node.js Compilation 18.8 - Time To Compile (Seconds, Fewer Is Better)
12c: 101.47 (SE +/- 0.26, N = 3)
10c: 101.94 (SE +/- 0.29, N = 3)
8c: 101.15 (SE +/- 0.22, N = 3)
6c: 102.78 (SE +/- 0.06, N = 3)

OpenRadioss

OpenRadioss is an open-source AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss, which was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13 - Model: INIVOL and Fluid Structure Interaction Drop Container
Seconds, Fewer Is Better (OpenBenchmarking.org)
  12c: 81.57  (SE +/- 0.14, N = 3)
  10c: 81.15  (SE +/- 0.08, N = 3)
   8c: 81.09  (SE +/- 0.12, N = 3)
   6c: 80.81  (SE +/- 0.08, N = 3)

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU
ms, Fewer Is Better (OpenBenchmarking.org)
  12c: 0.55  (SE +/- 0.00, N = 3;  MIN: 0.5 / MAX: 34.71)
  10c: 0.55  (SE +/- 0.00, N = 10; MIN: 0.5 / MAX: 41.23)
   8c: 0.55  (SE +/- 0.00, N = 3;  MIN: 0.5 / MAX: 30.68)
   6c: 0.54  (SE +/- 0.00, N = 3;  MIN: 0.5 / MAX: 34.19)
1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO 2022.3 - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU
FPS, More Is Better (OpenBenchmarking.org)
  12c: 147769.26  (SE +/- 745.28, N = 3)
  10c: 147717.32  (SE +/- 1134.97, N = 10)
   8c: 152292.39  (SE +/- 994.61, N = 3)
   6c: 151213.17  (SE +/- 365.43, N = 3)
1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

CockroachDB

CockroachDB is a cloud-native, distributed SQL database for data-intensive applications. This test profile uses a server-less CockroachDB configuration to test various CockroachDB workloads on the local host with a single node. Learn more via the OpenBenchmarking.org test page.

CockroachDB 22.2 - Workload: MoVR - Concurrency: 512
ops/s, More Is Better (OpenBenchmarking.org)
  12c: 948.5  (SE +/- 3.38, N = 3)
  10c: 949.6  (SE +/- 3.66, N = 3)
   8c: 960.3  (SE +/- 9.03, N = 3)
   6c: 954.7  (SE +/- 4.87, N = 3)

CockroachDB 22.2 - Workload: MoVR - Concurrency: 1024
ops/s, More Is Better (OpenBenchmarking.org)
  12c: 953.8  (SE +/- 1.42, N = 3)
  10c: 949.5  (SE +/- 0.58, N = 3)
   8c: 946.9  (SE +/- 3.18, N = 3)
   6c: 952.7  (SE +/- 1.56, N = 3)

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenFOAM 10 - Input: drivaerFastback, Medium Mesh Size - Execution Time
Seconds, Fewer Is Better (OpenBenchmarking.org)
  12c: 109.54
  10c: 117.94
   8c: 166.15
   6c: 227.90
1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfiniteVolume -lmeshTools -lparallel -llagrangian -lregionModels -lgenericPatchFields -lOpenFOAM -ldl -lm
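The OpenFOAM execution times above scale almost linearly with the number of populated memory channels (the 12c through 6c identifiers presumably denote twelve down to six DDR5 channels per socket). As a hedged back-of-the-envelope sketch, assuming DDR5-4800 as used by EPYC Genoa and a 64-bit data path per channel, theoretical peak bandwidth per socket works out as channels times transfer rate times 8 bytes; these are paper figures, not measured numbers:

```python
# Rough theoretical peak DDR5 bandwidth per socket.
# Assumptions: DDR5-4800 (4800 MT/s), 64-bit (8-byte) data path per channel.
# Real sustained bandwidth is meaningfully lower than this ceiling.
def peak_bandwidth_gbs(channels: int, mt_per_s: int = 4800) -> float:
    # channels * mega-transfers/s * 8 bytes per transfer -> GB/s
    return channels * mt_per_s * 8 / 1000

for ch in (12, 10, 8, 6):
    print(f"{ch} channels: ~{peak_bandwidth_gbs(ch):.1f} GB/s theoretical peak")
```

The 6-channel peak is exactly half the 12-channel peak, which roughly matches the 227.90 s versus 109.54 s OpenFOAM result above for this bandwidth-bound solver.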

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web server. This Nginx web server benchmark test profile makes use of the wrk program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients/connections. HTTPS with a self-signed OpenSSL certificate is used by this test for local benchmarking. Learn more via the OpenBenchmarking.org test page.

nginx 1.23.2 - Connections: 500
Requests Per Second, More Is Better (OpenBenchmarking.org)
  12c: 201032.06  (SE +/- 291.63, N = 3)
  10c: 198858.66  (SE +/- 335.64, N = 3)
   8c: 197081.98  (SE +/- 453.48, N = 3)
   6c: 196805.30  (SE +/- 113.87, N = 3)
1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested or alternatively an allmodconfig for building all possible kernel modules for the build. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 6.1 - Build: defconfig
Seconds, Fewer Is Better (OpenBenchmarking.org)
  12c: 25.50  (SE +/- 0.19, N = 11)
  10c: 25.41  (SE +/- 0.21, N = 14)
   8c: 25.53  (SE +/- 0.21, N = 9)
   6c: 24.75  (SE +/- 0.22, N = 7)

simdjson

This is a benchmark of simdjson, a high-performance JSON parser. simdjson aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
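The simdjson results below are reported in GB/s, i.e. bytes of JSON input parsed per second of wall time. A minimal sketch of how such a throughput figure is derived, using Python's stdlib `json` as a stand-in parser (the document and iteration count here are hypothetical, not from the benchmark):

```python
import json
import time

def parse_throughput_gbs(payload: bytes, iterations: int = 200) -> float:
    """Parse the same document repeatedly; report GB/s of input consumed."""
    start = time.perf_counter()
    for _ in range(iterations):
        json.loads(payload)  # stand-in for a simdjson parse call
    elapsed = time.perf_counter() - start
    # total bytes processed / elapsed seconds, scaled to gigabytes
    return len(payload) * iterations / elapsed / 1e9

# Hypothetical tweet-like document, loosely echoing the TopTweet test data.
doc = json.dumps({"tweets": [{"id": i, "text": "x" * 64} for i in range(500)]}).encode()
print(f"~{parse_throughput_gbs(doc):.3f} GB/s (Python's json; simdjson is far faster)")
```

The benchmark's multi-GB/s figures come from the same arithmetic applied to simdjson's SIMD-accelerated parse loop.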

simdjson 2.0 - Throughput Test: TopTweet
GB/s, More Is Better (OpenBenchmarking.org)
  12c: 6.59  (SE +/- 0.01, N = 3)
  10c: 6.49  (SE +/- 0.07, N = 6)
   8c: 6.57  (SE +/- 0.01, N = 3)
   6c: 6.55  (SE +/- 0.00, N = 3)
1. (CXX) g++ options: -O3

OpenRadioss

OpenRadioss is an open-source AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss, which was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13 - Model: Bumper Beam
Seconds, Fewer Is Better (OpenBenchmarking.org)
  12c: 79.86  (SE +/- 0.79, N = 3)
  10c: 79.70  (SE +/- 0.75, N = 3)
   8c: 79.20  (SE +/- 0.70, N = 3)
   6c: 79.62  (SE +/- 0.71, N = 3)

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

Blender 3.4 - Blend File: Barbershop - Compute: CPU-Only
Seconds, Fewer Is Better (OpenBenchmarking.org)
  12c: 81.03  (SE +/- 0.21, N = 3)
  10c: 80.37  (SE +/- 0.15, N = 3)
   8c: 80.18  (SE +/- 0.24, N = 3)
   6c: 79.93  (SE +/- 0.31, N = 3)

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream
ms/batch, Fewer Is Better (OpenBenchmarking.org)
  12c: 111.89  (SE +/- 0.07, N = 3)
  10c: 113.41  (SE +/- 0.08, N = 3)
   8c: 123.86  (SE +/- 0.19, N = 3)
   6c: 150.92  (SE +/- 1.56, N = 15)

Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream
items/sec, More Is Better (OpenBenchmarking.org)
  12c: 856.02  (SE +/- 0.57, N = 3)
  10c: 844.43  (SE +/- 0.53, N = 3)
   8c: 773.07  (SE +/- 1.22, N = 3)
   6c: 635.02  (SE +/- 6.69, N = 15)

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Machine Translation EN To DE FP16 - Device: CPU
ms, Fewer Is Better (OpenBenchmarking.org)
  12c: 49.98  (SE +/- 0.12, N = 3; MIN: 38.24 / MAX: 187.97)
  10c: 51.29  (SE +/- 0.08, N = 3; MIN: 40.28 / MAX: 292.83)
   8c: 54.80  (SE +/- 0.57, N = 6; MIN: 40.7 / MAX: 276.86)
   6c: 58.67  (SE +/- 0.37, N = 3; MIN: 43.56 / MAX: 315.05)
1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO 2022.3 - Model: Machine Translation EN To DE FP16 - Device: CPU
FPS, More Is Better (OpenBenchmarking.org)
  12c: 959.16  (SE +/- 2.32, N = 3)
  10c: 934.71  (SE +/- 1.48, N = 3)
   8c: 875.39  (SE +/- 8.79, N = 6)
   6c: 817.27  (SE +/- 5.14, N = 3)
1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 13.0 - Build System: Ninja
Seconds, Fewer Is Better (OpenBenchmarking.org)
  12c: 75.66  (SE +/- 0.23, N = 3)
  10c: 75.44  (SE +/- 0.21, N = 3)
   8c: 75.73  (SE +/- 0.09, N = 3)
   6c: 76.75  (SE +/- 0.06, N = 3)

simdjson

This is a benchmark of simdjson, a high-performance JSON parser. simdjson aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 2.0 - Throughput Test: DistinctUserID
GB/s, More Is Better (OpenBenchmarking.org)
  12c: 6.86  (SE +/- 0.02, N = 3)
  10c: 6.84  (SE +/- 0.02, N = 3)
   8c: 6.86  (SE +/- 0.01, N = 3)
   6c: 6.83  (SE +/- 0.02, N = 3)
1. (CXX) g++ options: -O3

simdjson 2.0 - Throughput Test: PartialTweets
GB/s, More Is Better (OpenBenchmarking.org)
  12c: 5.65  (SE +/- 0.01, N = 3)
  10c: 5.67  (SE +/- 0.02, N = 3)
   8c: 5.66  (SE +/- 0.01, N = 3)
   6c: 5.69  (SE +/- 0.01, N = 3)
1. (CXX) g++ options: -O3

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Person Detection FP16 - Device: CPU
ms, Fewer Is Better (OpenBenchmarking.org)
  12c: 1109.45  (SE +/- 3.30, N = 3; MIN: 810.74 / MAX: 1835.01)
  10c: 1110.44  (SE +/- 2.71, N = 3; MIN: 769.04 / MAX: 1860.23)
   8c: 1119.79  (SE +/- 3.60, N = 3; MIN: 808.33 / MAX: 1875.91)
   6c: 1153.70  (SE +/- 4.42, N = 3; MIN: 853.88 / MAX: 1939.06)
1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO 2022.3 - Model: Person Detection FP16 - Device: CPU
FPS, More Is Better (OpenBenchmarking.org)
  12c: 42.98  (SE +/- 0.13, N = 3)
  10c: 42.94  (SE +/- 0.12, N = 3)
   8c: 42.59  (SE +/- 0.15, N = 3)
   6c: 41.33  (SE +/- 0.17, N = 3)
1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO 2022.3 - Model: Person Detection FP32 - Device: CPU
ms, Fewer Is Better (OpenBenchmarking.org)
  12c: 1110.68  (SE +/- 8.73, N = 3; MIN: 833.53 / MAX: 1865.19)
  10c: 1104.59  (SE +/- 5.36, N = 3; MIN: 807.38 / MAX: 1818.79)
   8c: 1129.01  (SE +/- 0.54, N = 3; MIN: 850.94 / MAX: 1870.94)
   6c: 1150.54  (SE +/- 1.87, N = 3; MIN: 870.26 / MAX: 1902.46)
1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO 2022.3 - Model: Person Detection FP32 - Device: CPU
FPS, More Is Better (OpenBenchmarking.org)
  12c: 42.95  (SE +/- 0.32, N = 3)
  10c: 43.18  (SE +/- 0.20, N = 3)
   8c: 42.22  (SE +/- 0.01, N = 3)
   6c: 41.44  (SE +/- 0.07, N = 3)
1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO 2022.3 - Model: Face Detection FP16-INT8 - Device: CPU
ms, Fewer Is Better (OpenBenchmarking.org)
  12c: 250.34  (SE +/- 0.32, N = 3; MIN: 222.95 / MAX: 301.42)
  10c: 249.12  (SE +/- 0.03, N = 3; MIN: 209.28 / MAX: 311.3)
   8c: 249.26  (SE +/- 0.69, N = 3; MIN: 207.76 / MAX: 340.53)
   6c: 250.49  (SE +/- 0.13, N = 3; MIN: 213.3 / MAX: 307.84)
1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO 2022.3 - Model: Face Detection FP16-INT8 - Device: CPU
FPS, More Is Better (OpenBenchmarking.org)
  12c: 191.43  (SE +/- 0.21, N = 3)
  10c: 192.30  (SE +/- 0.03, N = 3)
   8c: 192.25  (SE +/- 0.48, N = 3)
   6c: 191.29  (SE +/- 0.09, N = 3)
1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO 2022.3 - Model: Face Detection FP16 - Device: CPU
ms, Fewer Is Better (OpenBenchmarking.org)
  12c: 470.98  (SE +/- 0.21, N = 3; MIN: 451.07 / MAX: 556.04)
  10c: 469.43  (SE +/- 0.10, N = 3; MIN: 432.92 / MAX: 555.25)
   8c: 472.84  (SE +/- 0.27, N = 3; MIN: 394.37 / MAX: 553.15)
   6c: 473.69  (SE +/- 0.14, N = 3; MIN: 423.34 / MAX: 579.41)
1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO 2022.3 - Model: Face Detection FP16 - Device: CPU
FPS, More Is Better (OpenBenchmarking.org)
  12c: 101.74  (SE +/- 0.08, N = 3)
  10c: 102.01  (SE +/- 0.03, N = 3)
   8c: 101.26  (SE +/- 0.04, N = 3)
   6c: 101.08  (SE +/- 0.06, N = 3)
1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11 - Encoder Speed: 0
Seconds, Fewer Is Better (OpenBenchmarking.org)
  12c: 63.25  (SE +/- 0.18, N = 3)
  10c: 63.25  (SE +/- 0.27, N = 3)
   8c: 62.96  (SE +/- 0.03, N = 3)
   6c: 63.80  (SE +/- 0.47, N = 3)
1. (CXX) g++ options: -O3 -fPIC -lm

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14 - ATPase Simulation - 327,506 Atoms
days/ns, Fewer Is Better (OpenBenchmarking.org)
  12c: 0.12783  (SE +/- 0.00009, N = 3)
  10c: 0.12759  (SE +/- 0.00007, N = 3)
   8c: 0.12768  (SE +/- 0.00046, N = 3)
   6c: 0.12820  (SE +/- 0.00009, N = 3)
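NAMD reports days/ns, i.e. the number of wall-clock days needed to simulate one nanosecond, which is why lower is better; its reciprocal is the more familiar ns/day figure used by other molecular dynamics results (such as the GROMACS numbers later in this file). A quick conversion of the results above:

```python
def ns_per_day(days_per_ns: float) -> float:
    # days/ns and ns/day are reciprocals of one another
    return 1.0 / days_per_ns

# NAMD ATPase results from this result file, in days/ns.
for ident, result in [("12c", 0.12783), ("10c", 0.12759),
                      ("8c", 0.12768), ("6c", 0.12820)]:
    print(f"{ident}: {ns_per_day(result):.2f} ns/day")
```

All four configurations land around 7.8 ns/day, so this workload is compute-bound rather than memory-bandwidth-bound.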

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Person Vehicle Bike Detection FP16 - Device: CPU
ms, Fewer Is Better (OpenBenchmarking.org)
  12c: 5.30  (SE +/- 0.01, N = 3; MIN: 4.42 / MAX: 40.66)
  10c: 5.28  (SE +/- 0.00, N = 3; MIN: 4.37 / MAX: 41.23)
   8c: 5.26  (SE +/- 0.00, N = 3; MIN: 4.42 / MAX: 42.93)
   6c: 5.28  (SE +/- 0.00, N = 3; MIN: 4.34 / MAX: 38.93)
1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO 2022.3 - Model: Person Vehicle Bike Detection FP16 - Device: CPU
FPS, More Is Better (OpenBenchmarking.org)
  12c: 9038.47  (SE +/- 9.96, N = 3)
  10c: 9063.84  (SE +/- 5.19, N = 3)
   8c: 9113.11  (SE +/- 2.85, N = 3)
   6c: 9081.73  (SE +/- 7.67, N = 3)
1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO 2022.3 - Model: Weld Porosity Detection FP16-INT8 - Device: CPU
ms, Fewer Is Better (OpenBenchmarking.org)
  12c: 9.95  (SE +/- 0.00, N = 3; MIN: 8.42 / MAX: 52.38)
  10c: 9.91  (SE +/- 0.02, N = 3; MIN: 8.4 / MAX: 50.42)
   8c: 9.90  (SE +/- 0.02, N = 3; MIN: 8.39 / MAX: 56.99)
   6c: 9.89  (SE +/- 0.02, N = 3; MIN: 8.35 / MAX: 32.16)
1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO 2022.3 - Model: Weld Porosity Detection FP16-INT8 - Device: CPU
FPS, More Is Better (OpenBenchmarking.org)
  12c: 19171.51  (SE +/- 12.43, N = 3)
  10c: 19254.08  (SE +/- 30.88, N = 3)
   8c: 19278.93  (SE +/- 31.30, N = 3)
   6c: 19314.04  (SE +/- 33.95, N = 3)
1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO 2022.3 - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU
ms, Fewer Is Better (OpenBenchmarking.org)
  12c: 0.97  (SE +/- 0.00, N = 3; MIN: 0.85 / MAX: 22.9)
  10c: 0.98  (SE +/- 0.00, N = 3; MIN: 0.85 / MAX: 39.82)
   8c: 0.98  (SE +/- 0.00, N = 3; MIN: 0.86 / MAX: 39.58)
   6c: 0.97  (SE +/- 0.00, N = 3; MIN: 0.86 / MAX: 33.82)
1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO 2022.3 - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU
FPS, More Is Better (OpenBenchmarking.org)
  12c: 119606.21  (SE +/- 1214.59, N = 3)
  10c: 122938.23  (SE +/- 815.42, N = 3)
   8c: 123571.68  (SE +/- 1158.58, N = 3)
   6c: 121027.25  (SE +/- 681.80, N = 3)
1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 192000 - Buffer Size: 1024
Render Ratio, More Is Better (OpenBenchmarking.org)
  12c: 2.829061  (SE +/- 0.001919, N = 3)
  10c: 2.806190  (SE +/- 0.017291, N = 3)
   8c: 2.811555  (SE +/- 0.019484, N = 3)
   6c: 2.824814  (SE +/- 0.004057, N = 3)
1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions
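Render Ratio is presumably the length of audio rendered divided by the wall-clock time taken to render it, so values above 1.0 mean faster than real time; this interpretation is an assumption, as the test profile does not spell it out here. Under that assumption, a ratio around 2.83 would mean rendering roughly 2.83 seconds of 192 kHz audio per second of compute:

```python
def render_ratio(audio_seconds: float, render_seconds: float) -> float:
    # Assumed definition: rendered-audio duration / wall-clock render time.
    # Ratios > 1.0 mean the DAW renders faster than real time.
    return audio_seconds / render_seconds

# Hypothetical figures chosen to land near the 192 kHz results above.
print(f"{render_ratio(60.0, 21.2):.2f}")
```

Note the ratios barely move across memory configurations, so this workload is not bandwidth-sensitive.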

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Vehicle Detection FP16-INT8 - Device: CPU
ms, Fewer Is Better (OpenBenchmarking.org)
  12c: 4.35  (SE +/- 0.00, N = 3; MIN: 3.52 / MAX: 41.44)
  10c: 4.33  (SE +/- 0.00, N = 3; MIN: 3.51 / MAX: 41.25)
   8c: 4.31  (SE +/- 0.00, N = 3; MIN: 3.51 / MAX: 43.89)
   6c: 4.30  (SE +/- 0.00, N = 3; MIN: 3.52 / MAX: 43.57)
1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO 2022.3 - Model: Vehicle Detection FP16-INT8 - Device: CPU
FPS, More Is Better (OpenBenchmarking.org)
  12c: 11018.37  (SE +/- 1.42, N = 3)
  10c: 11066.16  (SE +/- 3.30, N = 3)
   8c: 11108.16  (SE +/- 1.79, N = 3)
   6c: 11150.32  (SE +/- 1.79, N = 3)
1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO 2022.3 - Model: Vehicle Detection FP16 - Device: CPU
ms, Fewer Is Better (OpenBenchmarking.org)
  12c: 6.48  (SE +/- 0.00, N = 3; MIN: 5.06 / MAX: 59.88)
  10c: 6.45  (SE +/- 0.01, N = 3; MIN: 4.97 / MAX: 59.86)
   8c: 6.49  (SE +/- 0.01, N = 3; MIN: 4.93 / MAX: 59.51)
   6c: 6.56  (SE +/- 0.00, N = 3; MIN: 4.99 / MAX: 59.46)
1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO 2022.3 - Model: Vehicle Detection FP16 - Device: CPU
FPS, More Is Better (OpenBenchmarking.org)
  12c: 7394.65  (SE +/- 2.30, N = 3)
  10c: 7425.10  (SE +/- 13.32, N = 3)
   8c: 7389.00  (SE +/- 6.27, N = 3)
   6c: 7306.47  (SE +/- 4.59, N = 3)
1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream
ms/batch, Fewer Is Better (OpenBenchmarking.org)
  12c: 1133.28  (SE +/- 0.82, N = 3)
  10c: 1133.18  (SE +/- 0.20, N = 3)
   8c: 1136.85  (SE +/- 0.88, N = 3)
   6c: 1148.50  (SE +/- 0.67, N = 3)

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream
items/sec, More Is Better (OpenBenchmarking.org)
  12c: 84.35  (SE +/- 0.18, N = 3)
  10c: 84.48  (SE +/- 0.04, N = 3)
   8c: 84.21  (SE +/- 0.04, N = 3)
   6c: 82.49  (SE +/- 0.31, N = 3)

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Weld Porosity Detection FP16 - Device: CPU
ms, Fewer Is Better (OpenBenchmarking.org)
  12c: 4.85  (SE +/- 0.00, N = 3; MIN: 4.06 / MAX: 28.62)
  10c: 4.83  (SE +/- 0.00, N = 3; MIN: 4.08 / MAX: 28.68)
   8c: 4.82  (SE +/- 0.00, N = 3; MIN: 3.98 / MAX: 28.83)
   6c: 4.81  (SE +/- 0.00, N = 3; MIN: 4.14 / MAX: 27.29)
1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO 2022.3 - Model: Weld Porosity Detection FP16 - Device: CPU
FPS, More Is Better (OpenBenchmarking.org)
  12c: 9867.41  (SE +/- 2.57, N = 3)
  10c: 9900.47  (SE +/- 2.08, N = 3)
   8c: 9931.49  (SE +/- 7.50, N = 3)
   6c: 9959.38  (SE +/- 3.42, N = 3)
1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream
ms/batch, Fewer Is Better (OpenBenchmarking.org)
  12c: 1133.48  (SE +/- 1.25, N = 3)
  10c: 1135.18  (SE +/- 1.00, N = 3)
   8c: 1137.51  (SE +/- 1.67, N = 3)
   6c: 1148.33  (SE +/- 1.05, N = 3)

Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream
items/sec, More Is Better (OpenBenchmarking.org)
  12c: 84.25  (SE +/- 0.21, N = 3)
  10c: 84.27  (SE +/- 0.03, N = 3)
   8c: 84.15  (SE +/- 0.16, N = 3)
   6c: 82.26  (SE +/- 0.25, N = 3)

simdjson

This is a benchmark of simdjson, a high-performance JSON parser. simdjson aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 2.0 - Throughput Test: Kostya
GB/s, More Is Better (OpenBenchmarking.org)
  12c: 4.11  (SE +/- 0.01, N = 3)
  10c: 4.11  (SE +/- 0.01, N = 3)
   8c: 4.11  (SE +/- 0.00, N = 3)
   6c: 4.11  (SE +/- 0.01, N = 3)
1. (CXX) g++ options: -O3

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream
ms/batch, Fewer Is Better (OpenBenchmarking.org)
  12c: 155.48  (SE +/- 0.46, N = 3)
  10c: 156.54  (SE +/- 0.55, N = 3)
   8c: 155.82  (SE +/- 0.27, N = 3)
   6c: 157.22  (SE +/- 0.58, N = 3)

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream
items/sec, More Is Better (OpenBenchmarking.org)
  12c: 615.45  (SE +/- 1.72, N = 3)
  10c: 611.29  (SE +/- 2.48, N = 3)
   8c: 614.61  (SE +/- 1.32, N = 3)
   6c: 608.53  (SE +/- 2.24, N = 3)

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: H2
msec, Fewer Is Better (OpenBenchmarking.org)
  12c: 4802  (SE +/- 53.17, N = 20)
  10c: 4832  (SE +/- 39.79, N = 20)
   8c: 4731  (SE +/- 40.50, N = 20)
   6c: 4830  (SE +/- 36.16, N = 20)

simdjson

This is a benchmark of simdjson, a high-performance JSON parser. simdjson aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 2.0 - Throughput Test: LargeRandom
GB/s, More Is Better (OpenBenchmarking.org)
  12c: 1.25  (SE +/- 0.00, N = 3)
  10c: 1.25  (SE +/- 0.00, N = 3)
   8c: 1.25  (SE +/- 0.00, N = 3)
   6c: 1.24  (SE +/- 0.00, N = 3)
1. (CXX) g++ options: -O3

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code with Cargo-like features. Learn more via the OpenBenchmarking.org test page.

Build2 0.13 - Time To Compile
Seconds, Fewer Is Better (OpenBenchmarking.org)
  12c: 49.92  (SE +/- 0.04, N = 3)
  10c: 49.80  (SE +/- 0.02, N = 3)
   8c: 49.87  (SE +/- 0.20, N = 3)
   6c: 50.08  (SE +/- 0.28, N = 3)

Timed PHP Compilation

This test times how long it takes to build PHP. Learn more via the OpenBenchmarking.org test page.

Timed PHP Compilation 8.1.9 - Time To Compile
Seconds, Fewer Is Better (OpenBenchmarking.org)
  12c: 44.52  (SE +/- 0.06, N = 3)
  10c: 44.61  (SE +/- 0.08, N = 3)
   8c: 44.58  (SE +/- 0.04, N = 3)
   6c: 44.70  (SE +/- 0.07, N = 3)

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream
ms/batch, Fewer Is Better (OpenBenchmarking.org)
  12c: 80.08  (SE +/- 0.27, N = 3)
  10c: 79.71  (SE +/- 0.03, N = 3)
   8c: 79.69  (SE +/- 0.20, N = 3)
   6c: 80.44  (SE +/- 0.07, N = 3)

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream
items/sec, More Is Better (OpenBenchmarking.org)
  12c: 1195.91  (SE +/- 4.04, N = 3)
  10c: 1201.14  (SE +/- 0.69, N = 3)
   8c: 1201.98  (SE +/- 3.22, N = 3)
   6c: 1190.53  (SE +/- 1.21, N = 3)

Timed GDB GNU Debugger Compilation

This test times how long it takes to build the GNU Debugger (GDB) in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed GDB GNU Debugger Compilation 10.2 - Time To Compile
Seconds, Fewer Is Better (OpenBenchmarking.org)
  12c: 41.71  (SE +/- 0.17, N = 3)
  10c: 42.41  (SE +/- 0.08, N = 3)
   8c: 42.41  (SE +/- 0.03, N = 3)
   6c: 43.25  (SE +/- 0.12, N = 3)

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream
ms/batch, Fewer Is Better (OpenBenchmarking.org)
  12c: 48.77  (SE +/- 0.12, N = 3)
  10c: 48.74  (SE +/- 0.04, N = 3)
   8c: 49.00  (SE +/- 0.04, N = 3)
   6c: 49.63  (SE +/- 0.21, N = 3)

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream
items/sec, More Is Better (OpenBenchmarking.org)
  12c: 1964.27  (SE +/- 4.95, N = 3)
  10c: 1965.56  (SE +/- 1.61, N = 3)
   8c: 1954.12  (SE +/- 1.56, N = 3)
   6c: 1930.33  (SE +/- 8.40, N = 3)

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 22.01 - Test: Decompression Rating
MIPS, More Is Better (OpenBenchmarking.org)
  12c: 1181435  (SE +/- 3305.67, N = 3)
  10c: 1171627  (SE +/- 5138.86, N = 3)
   8c: 1159901  (SE +/- 9235.88, N = 3)
   6c: 1177484  (SE +/- 2020.82, N = 3)
1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

7-Zip Compression 22.01 - Test: Compression Rating
MIPS, More Is Better (OpenBenchmarking.org)
  12c: 923176  (SE +/- 6636.11, N = 3)
  10c: 893433  (SE +/- 2580.44, N = 3)
   8c: 879430  (SE +/- 3797.71, N = 3)
   6c: 824926  (SE +/- 7292.38, N = 3)
1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

nekRS

nekRS is an open-source Navier-Stokes solver based on the spectral element method. nekRS supports both CPU and GPU/accelerator execution, though this test profile is currently configured for CPU execution. nekRS is part of Nek5000 from the Mathematics and Computer Science (MCS) division at Argonne National Laboratory. This nekRS benchmark is primarily relevant to large core count HPC servers and otherwise may be very time consuming. Learn more via the OpenBenchmarking.org test page.

nekRS 22.0 - Input: TurboPipe Periodic
FLOP/s, More Is Better (OpenBenchmarking.org)
  12c: 821462000000  (SE +/- 9551971733.63, N = 3)
  10c: 786258000000  (SE +/- 7825985326.68, N = 3)
   8c: 740247000000  (SE +/- 5892587066.25, N = 3)
   6c: 659554333333  (SE +/- 1934071468.29, N = 3)
1. (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -lmpi_cxx -lmpi

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 96000 - Buffer Size: 1024
Render Ratio, More Is Better (OpenBenchmarking.org)
  12c: 4.345890  (SE +/- 0.023689, N = 3)
  10c: 4.354556  (SE +/- 0.010431, N = 3)
   8c: 4.351402  (SE +/- 0.008144, N = 3)
   6c: 4.364767  (SE +/- 0.002133, N = 3)
1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

Timed Godot Game Engine Compilation

This test times how long it takes to compile the Godot Game Engine. Godot is a popular, open-source, cross-platform 2D/3D game engine and is built using the SCons build system and targeting the X11 platform. Learn more via the OpenBenchmarking.org test page.

Timed Godot Game Engine Compilation 3.2.3 - Time To Compile
Seconds, Fewer Is Better (OpenBenchmarking.org)
  12c: 34.03  (SE +/- 0.40, N = 4)
  10c: 33.62  (SE +/- 0.04, N = 3)
   8c: 33.91  (SE +/- 0.19, N = 3)
   6c: 33.67  (SE +/- 0.11, N = 3)

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11 - Encoder Speed: 2
Seconds, Fewer Is Better (OpenBenchmarking.org)
  12c: 34.85  (SE +/- 0.14, N = 3)
  10c: 34.91  (SE +/- 0.08, N = 3)
   8c: 34.69  (SE +/- 0.10, N = 3)
   6c: 34.87  (SE +/- 0.14, N = 3)
1. (CXX) g++ options: -O3 -fPIC -lm

GPAW

GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.

GPAW 22.1 - Input: Carbon Nanotube
Seconds, Fewer Is Better (OpenBenchmarking.org)
  12c: 23.15  (SE +/- 0.23, N = 5)
  10c: 23.37  (SE +/- 0.13, N = 3)
   8c: 24.60  (SE +/- 0.18, N = 3)
   6c: 26.31  (SE +/- 0.20, N = 3)
1. (CC) gcc options: -shared -fwrapv -O2 -lxc -lblas -lmpi

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. This test runs the CPU-based multi-threaded SVT-AV1 encoder against a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.4 - Encoder Mode: Preset 12 - Input: Bosphorus 4K
Frames Per Second, More Is Better (OpenBenchmarking.org)
  12c: 251.77  (SE +/- 7.35, N = 15)
  10c: 241.37  (SE +/- 7.16, N = 15)
   8c: 227.90  (SE +/- 7.53, N = 15)
   6c: 221.16  (SE +/- 9.18, N = 13)

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP Streamcluster
Seconds, Fewer Is Better (OpenBenchmarking.org)
  12c: 6.001  (SE +/- 0.089, N = 15)
  10c: 6.285  (SE +/- 0.079, N = 15)
   8c: 6.018  (SE +/- 0.078, N = 15)
   6c: 6.409  (SE +/- 0.050, N = 3)
1. (CXX) g++ options: -O2 -lOpenCL

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.

GROMACS 2022.1 - Implementation: MPI CPU - Input: water_GMX50_bare
Ns Per Day, More Is Better (OpenBenchmarking.org)
  12c: 18.71  (SE +/- 0.03, N = 3)
  10c: 18.68  (SE +/- 0.01, N = 3)
   8c: 18.68  (SE +/- 0.01, N = 3)
   6c: 17.94  (SE +/- 0.03, N = 3)
1. (CXX) g++ options: -O3

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0 - Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU
ms, Fewer Is Better (OpenBenchmarking.org)
  12c: 3.95471  (SE +/- 0.02537, N = 3;  MIN: 3.05)
  10c: 4.00938  (SE +/- 0.05885, N = 12; MIN: 2.96)
   8c: 3.99305  (SE +/- 0.08932, N = 12; MIN: 2.67)
   6c: 3.96488  (SE +/- 0.01788, N = 3;  MIN: 2.99)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and offers a selection of the different NPB tests/problems at varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: IS.D
Total Mop/s, More Is Better
  12c: 8491.01 (SE +/- 84.88, N = 3)
  10c: 7124.92 (SE +/- 206.91, N = 12)
  8c:  6675.71 (SE +/- 134.50, N = 15)
  6c:  5690.01 (SE +/- 158.57, N = 12)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
  2. Open MPI 4.1.4
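The NPB IS kernel performs large integer sorting via key ranking. As a rough conceptual sketch, and not the benchmark's actual MPI-parallel Fortran/C code, a counting sort over bounded integer keys looks like:

```python
def counting_sort(keys, max_key):
    """Counting sort: histogram the keys, then expand the histogram in key order."""
    counts = [0] * (max_key + 1)
    for k in keys:
        counts[k] += 1
    out = []
    for value, count in enumerate(counts):
        out.extend([value] * count)
    return out
```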

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

Blender 3.4 - Blend File: Classroom - Compute: CPU-Only
Seconds, Fewer Is Better
  12c: 20.92 (SE +/- 0.00, N = 3)
  10c: 20.76 (SE +/- 0.09, N = 3)
  8c:  20.68 (SE +/- 0.06, N = 3)
  6c:  20.71 (SE +/- 0.04, N = 3)

Timed Apache Compilation

This test times how long it takes to build the Apache HTTPD web server. Learn more via the OpenBenchmarking.org test page.

Timed Apache Compilation 2.4.41 - Time To Compile
Seconds, Fewer Is Better
  12c: 20.46 (SE +/- 0.01, N = 3)
  10c: 20.48 (SE +/- 0.01, N = 3)
  8c:  20.59 (SE +/- 0.01, N = 3)
  6c:  20.72 (SE +/- 0.01, N = 3)

Timed Mesa Compilation

This test profile times how long it takes to compile Mesa with Meson/Ninja. To minimize build dependencies and avoid versioning conflicts, this test is just the core Mesa build without LLVM or the extra Gallium3D/Mesa drivers enabled. Learn more via the OpenBenchmarking.org test page.

Timed Mesa Compilation 21.0 - Time To Compile
Seconds, Fewer Is Better
  12c: 20.12 (SE +/- 0.07, N = 3)
  10c: 20.21 (SE +/- 0.06, N = 3)
  8c:  20.11 (SE +/- 0.07, N = 3)
  6c:  20.16 (SE +/- 0.05, N = 3)

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31 - Threads: 384 - Buffer Length: 256 - Filter Length: 57
samples/s, More Is Better
  12c: 10347000000 (SE +/- 4582575.69, N = 3)
  10c: 10352666667 (SE +/- 4409585.52, N = 3)
  8c:  10349666667 (SE +/- 5783117.19, N = 3)
  6c:  10349000000 (SE +/- 3214550.25, N = 3)
  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP 2021.01.31 - Threads: 256 - Buffer Length: 256 - Filter Length: 57
samples/s, More Is Better
  12c: 10347000000 (SE +/- 4618802.15, N = 3)
  10c: 10340000000 (SE +/- 5196152.42, N = 3)
  8c:  10337666667 (SE +/- 4333333.33, N = 3)
  6c:  10340333333 (SE +/- 3844187.53, N = 3)
  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
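The Liquid-DSP runs above stream 256-sample buffers through a 57-tap FIR filter. As a conceptual pure-Python sketch of what a direct-form FIR filter does (the library itself implements this in optimized, multi-threaded C):

```python
def fir_filter(taps, samples):
    """Direct-form FIR: each output is the dot product of the taps
    with the most recent inputs, newest first."""
    history = [0.0] * len(taps)
    out = []
    for x in samples:
        history = [x] + history[:-1]  # shift the delay line
        out.append(sum(t * h for t, h in zip(taps, history)))
    return out
```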

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray-tracing and part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

Intel Open Image Denoise 1.4.0 - Run: RTLightmap.hdr.4096x4096
Images / Sec, More Is Better
  12c: 1.65 (SE +/- 0.00, N = 3)
  10c: 1.63 (SE +/- 0.00, N = 3)
  8c:  1.64 (SE +/- 0.01, N = 3)
  6c:  1.54 (SE +/- 0.01, N = 3)

miniBUDE

MiniBUDE is a mini application for the core computation of the Bristol University Docking Engine (BUDE). This test profile currently makes use of the OpenMP implementation of miniBUDE for CPU benchmarking. Learn more via the OpenBenchmarking.org test page.

miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM2
Billion Interactions/s, More Is Better
  12c: 345.61 (SE +/- 1.09, N = 3)
  10c: 346.68 (SE +/- 1.26, N = 3)
  8c:  344.64 (SE +/- 2.53, N = 3)
  6c:  346.08 (SE +/- 3.87, N = 3)
  1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM2
GFInst/s, More Is Better
  12c: 8640.31 (SE +/- 27.15, N = 3)
  10c: 8666.98 (SE +/- 31.49, N = 3)
  8c:  8615.97 (SE +/- 63.13, N = 3)
  6c:  8651.92 (SE +/- 96.81, N = 3)
  1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU
ms, Fewer Is Better
  12c: 0.446930 (SE +/- 0.005042, N = 3, MIN: 0.38)
  10c: 0.463454 (SE +/- 0.005241, N = 4, MIN: 0.38)
  8c:  0.465796 (SE +/- 0.006374, N = 3, MIN: 0.38)
  6c:  0.465059 (SE +/- 0.005815, N = 3, MIN: 0.38)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Kvazaar

This is a test of Kvazaar, a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar won the 2016 ACM Open-Source Software Competition and is developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.1 - Video Input: Bosphorus 4K - Video Preset: Very Fast
Frames Per Second, More Is Better
  12c: 73.44 (SE +/- 0.58, N = 10)
  10c: 75.35 (SE +/- 0.74, N = 3)
  8c:  73.04 (SE +/- 1.04, N = 3)
  6c:  71.41 (SE +/- 0.77, N = 3)
  1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Xmrig

Xmrig is an open-source, cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight, and AstroBWT. This test profile is set up to measure Xmrig's CPU mining performance. Learn more via the OpenBenchmarking.org test page.

Xmrig 6.18.1 - Variant: Monero - Hash Count: 1M
H/s, More Is Better
  12c: 104604.6 (SE +/- 328.13, N = 3)
  10c: 102599.6 (SE +/- 152.19, N = 3)
  8c:  101953.5 (SE +/- 383.60, N = 3)
  6c:  100446.2 (SE +/- 214.10, N = 3)
  1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and offers a selection of the different NPB tests/problems at varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: CG.C
Total Mop/s, More Is Better
  12c: 80225.01 (SE +/- 812.04, N = 15)
  10c: 81179.00 (SE +/- 899.80, N = 15)
  8c:  79784.15 (SE +/- 907.72, N = 15)
  6c:  71662.28 (SE +/- 554.69, N = 3)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
  2. Open MPI 4.1.4

Kvazaar

This is a test of Kvazaar, a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar won the 2016 ACM Open-Source Software Competition and is developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.1 - Video Input: Bosphorus 4K - Video Preset: Medium
Frames Per Second, More Is Better
  12c: 62.56 (SE +/- 0.68, N = 3)
  10c: 62.23 (SE +/- 0.11, N = 3)
  8c:  61.81 (SE +/- 0.73, N = 3)
  6c:  61.40 (SE +/- 0.53, N = 3)
  1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Xmrig

Xmrig is an open-source, cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight, and AstroBWT. This test profile is set up to measure Xmrig's CPU mining performance. Learn more via the OpenBenchmarking.org test page.

Xmrig 6.18.1 - Variant: Wownero - Hash Count: 1M
H/s, More Is Better
  12c: 126465.6 (SE +/- 849.90, N = 3)
  10c: 127226.6 (SE +/- 70.55, N = 3)
  8c:  127081.2 (SE +/- 122.05, N = 3)
  6c:  126057.7 (SE +/- 349.73, N = 3)
  1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray-tracing and part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

Intel Open Image Denoise 1.4.0 - Run: RT.hdr_alb_nrm.3840x2160
Images / Sec, More Is Better
  12c: 3.52 (SE +/- 0.02, N = 3)
  10c: 3.44 (SE +/- 0.02, N = 3)
  8c:  3.47 (SE +/- 0.02, N = 3)
  6c:  3.29 (SE +/- 0.02, N = 3)

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

Blender 3.4 - Blend File: BMW27 - Compute: CPU-Only
Seconds, Fewer Is Better
  12c: 8.58 (SE +/- 0.06, N = 3)
  10c: 8.42 (SE +/- 0.04, N = 3)
  8c:  8.34 (SE +/- 0.02, N = 3)
  6c:  8.33 (SE +/- 0.06, N = 3)

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and offers a selection of the different NPB tests/problems at varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: MG.C
Total Mop/s, More Is Better
  12c: 209846.76 (SE +/- 2393.90, N = 3)
  10c: 177097.42 (SE +/- 2631.10, N = 15)
  8c:  153458.78 (SE +/- 2089.98, N = 15)
  6c:  117733.57 (SE +/- 1626.80, N = 15)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
  2. Open MPI 4.1.4

NAS Parallel Benchmarks 3.4 - Test / Class: SP.C
Total Mop/s, More Is Better
  12c: 260471.50 (SE +/- 1589.72, N = 3)
  10c: 239496.01 (SE +/- 726.36, N = 3)
  8c:  208535.23 (SE +/- 1630.30, N = 3)
  6c:  167474.70 (SE +/- 1838.44, N = 3)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
  2. Open MPI 4.1.4

Kvazaar

This is a test of Kvazaar, a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar won the 2016 ACM Open-Source Software Competition and is developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.1 - Video Input: Bosphorus 4K - Video Preset: Ultra Fast
Frames Per Second, More Is Better
  12c: 77.83 (SE +/- 0.66, N = 3)
  10c: 77.30 (SE +/- 1.02, N = 3)
  8c:  76.84 (SE +/- 0.71, N = 3)
  6c:  75.86 (SE +/- 0.63, N = 3)
  1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

ASTC Encoder

ASTC Encoder (astcenc) targets the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile runs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 4.0 - Preset: Exhaustive
MT/s, More Is Better
  12c: 11.73 (SE +/- 0.03, N = 3)
  10c: 11.76 (SE +/- 0.00, N = 3)
  8c:  11.81 (SE +/- 0.01, N = 3)
  6c:  11.82 (SE +/- 0.00, N = 3)
  1. (CXX) g++ options: -O3 -flto -pthread

Timed MPlayer Compilation

This test times how long it takes to build the MPlayer open-source media player program. Learn more via the OpenBenchmarking.org test page.

Timed MPlayer Compilation 1.5 - Time To Compile
Seconds, Fewer Is Better
  12c: 7.777 (SE +/- 0.033, N = 3)
  10c: 7.755 (SE +/- 0.034, N = 3)
  8c:  7.808 (SE +/- 0.023, N = 3)
  6c:  7.773 (SE +/- 0.010, N = 3)

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and offers a selection of the different NPB tests/problems at varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: LU.C
Total Mop/s, More Is Better
  12c: 489164.65 (SE +/- 5489.08, N = 4)
  10c: 489995.20 (SE +/- 2546.14, N = 3)
  8c:  466769.54 (SE +/- 5095.33, N = 5)
  6c:  454360.62 (SE +/- 4680.97, N = 5)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
  2. Open MPI 4.1.4

ASTC Encoder

ASTC Encoder (astcenc) targets the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile runs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 4.0 - Preset: Thorough
MT/s, More Is Better
  12c: 106.57 (SE +/- 0.05, N = 3)
  10c: 106.85 (SE +/- 0.05, N = 3)
  8c:  107.11 (SE +/- 0.04, N = 3)
  6c:  106.51 (SE +/- 0.10, N = 3)
  1. (CXX) g++ options: -O3 -flto -pthread

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: Jython
msec, Fewer Is Better
  12c: 3380 (SE +/- 29.26, N = 4)
  10c: 3329 (SE +/- 18.49, N = 4)
  8c:  3369 (SE +/- 35.24, N = 4)
  6c:  3345 (SE +/- 21.34, N = 4)

Rodinia

Rodinia is a suite focused on accelerating compute-intensive applications with accelerators. The included applications support the CUDA, OpenMP, and OpenCL parallel models. This profile currently utilizes select OpenCL, NVIDIA CUDA, and OpenMP test binaries. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP CFD Solver
Seconds, Fewer Is Better
  12c: 6.050 (SE +/- 0.031, N = 3)
  10c: 6.074 (SE +/- 0.014, N = 3)
  8c:  5.970 (SE +/- 0.016, N = 3)
  6c:  6.152 (SE +/- 0.024, N = 3)
  1. (CXX) g++ options: -O2 -lOpenCL

libavif avifenc

This test uses the AOMedia libavif library to encode a JPEG image to the AV1 Image File Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11 - Encoder Speed: 6, Lossless
Seconds, Fewer Is Better
  12c: 5.287 (SE +/- 0.076, N = 3)
  10c: 5.286 (SE +/- 0.044, N = 3)
  8c:  5.270 (SE +/- 0.034, N = 3)
  6c:  5.330 (SE +/- 0.055, N = 3)
  1. (CXX) g++ options: -O3 -fPIC -lm

libavif avifenc 0.11 - Encoder Speed: 10, Lossless
Seconds, Fewer Is Better
  12c: 4.241 (SE +/- 0.024, N = 3)
  10c: 4.337 (SE +/- 0.055, N = 3)
  8c:  4.252 (SE +/- 0.009, N = 3)
  6c:  4.250 (SE +/- 0.043, N = 3)
  1. (CXX) g++ options: -O3 -fPIC -lm

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 3.13 - Binary: Pathtracer ISPC - Model: Crown
Frames Per Second, More Is Better
  12c: 182.45 (SE +/- 1.01, N = 3, MIN: 128.42 / MAX: 209.42)
  10c: 184.73 (SE +/- 0.47, N = 3, MIN: 137.82 / MAX: 210.21)
  8c:  185.49 (SE +/- 0.36, N = 3, MIN: 134.45 / MAX: 211.64)
  6c:  187.61 (SE +/- 0.33, N = 3, MIN: 146.69 / MAX: 208.25)

Embree 3.13 - Binary: Pathtracer ISPC - Model: Asian Dragon
Frames Per Second, More Is Better
  12c: 213.75 (SE +/- 0.13, N = 3, MIN: 209.16 / MAX: 225.43)
  10c: 214.31 (SE +/- 0.47, N = 3, MIN: 209.11 / MAX: 223.97)
  8c:  217.41 (SE +/- 0.39, N = 3, MIN: 211.73 / MAX: 230.1)
  6c:  221.29 (SE +/- 0.46, N = 3, MIN: 215.19 / MAX: 233.21)

ACES DGEMM

This is a multi-threaded DGEMM benchmark. Learn more via the OpenBenchmarking.org test page.

ACES DGEMM 1.0 - Sustained Floating-Point Rate
GFLOP/s, More Is Better
  12c: 70.41 (SE +/- 0.33, N = 3)
  10c: 70.61 (SE +/- 0.02, N = 3)
  8c:  71.01 (SE +/- 0.05, N = 3)
  6c:  70.90 (SE +/- 0.13, N = 3)
  1. (CC) gcc options: -O3 -march=native -fopenmp
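DGEMM is the double-precision general matrix-matrix multiply C = alpha*A*B + beta*C, and the benchmark measures the sustained GFLOP/s achieved doing it. A naive, single-threaded Python sketch of the kernel for illustration (the actual benchmark runs an optimized, OpenMP-threaded C implementation):

```python
def dgemm(alpha, a, b, beta, c):
    """Naive triple-loop DGEMM: c = alpha * (a @ b) + beta * c,
    with matrices given as dense lists of lists."""
    n, m, p = len(a), len(b), len(b[0])
    for i in range(n):
        for j in range(p):
            acc = 0.0
            for k in range(m):
                acc += a[i][k] * b[k][j]
            c[i][j] = alpha * acc + beta * c[i][j]
    return c
```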

libavif avifenc

This test uses the AOMedia libavif library to encode a JPEG image to the AV1 Image File Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11 - Encoder Speed: 6
Seconds, Fewer Is Better
  12c: 2.459 (SE +/- 0.016, N = 3)
  10c: 2.411 (SE +/- 0.003, N = 3)
  8c:  2.420 (SE +/- 0.017, N = 3)
  6c:  2.435 (SE +/- 0.004, N = 3)
  1. (CXX) g++ options: -O3 -fPIC -lm

135 Results Shown

WRF
High Performance Conjugate Gradient
OpenVKL
Xcompact3d Incompact3d
NWChem
CockroachDB:
  KV, 10% Reads - 512
  KV, 95% Reads - 1024
  KV, 60% Reads - 512
  KV, 50% Reads - 1024
OSPRay
CockroachDB:
  KV, 50% Reads - 512
  KV, 95% Reads - 512
OSPRay
RELION
TensorFlow
LuxCoreRender
ONNX Runtime
LuxCoreRender
oneDNN
OpenRadioss
Apache Cassandra
oneDNN
OSPRay
CockroachDB
oneDNN
Graph500
Timed Linux Kernel Compilation
Timed Gem5 Compilation
OSPRay:
  gravity_spheres_volume/dim_512/scivis/real_time
  gravity_spheres_volume/dim_512/ao/real_time
  gravity_spheres_volume/dim_512/pathtracer/real_time
CockroachDB
Neural Magic DeepSparse:
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
Timed Node.js Compilation
OpenRadioss
OpenVINO:
  Age Gender Recognition Retail 0013 FP16 - CPU:
    ms
    FPS
CockroachDB:
  MoVR - 512
  MoVR - 1024
OpenFOAM
nginx
Timed Linux Kernel Compilation
simdjson
OpenRadioss
Blender
Neural Magic DeepSparse:
  CV Detection,YOLOv5s COCO - Asynchronous Multi-Stream:
    ms/batch
    items/sec
OpenVINO:
  Machine Translation EN To DE FP16 - CPU:
    ms
    FPS
Timed LLVM Compilation
simdjson:
  DistinctUserID
  PartialTweets
OpenVINO:
  Person Detection FP16 - CPU:
    ms
    FPS
  Person Detection FP32 - CPU:
    ms
    FPS
  Face Detection FP16-INT8 - CPU:
    ms
    FPS
  Face Detection FP16 - CPU:
    ms
    FPS
libavif avifenc
NAMD
OpenVINO:
  Person Vehicle Bike Detection FP16 - CPU:
    ms
    FPS
  Weld Porosity Detection FP16-INT8 - CPU:
    ms
    FPS
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU:
    ms
    FPS
Stargate Digital Audio Workstation
OpenVINO:
  Vehicle Detection FP16-INT8 - CPU:
    ms
    FPS
  Vehicle Detection FP16 - CPU:
    ms
    FPS
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream:
    ms/batch
    items/sec
OpenVINO:
  Weld Porosity Detection FP16 - CPU:
    ms
    FPS
Neural Magic DeepSparse:
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
simdjson
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
DaCapo Benchmark
simdjson
Build2
Timed PHP Compilation
Neural Magic DeepSparse:
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream:
    ms/batch
    items/sec
Timed GDB GNU Debugger Compilation
Neural Magic DeepSparse:
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream:
    ms/batch
    items/sec
7-Zip Compression:
  Decompression Rating
  Compression Rating
nekRS
Stargate Digital Audio Workstation
Timed Godot Game Engine Compilation
libavif avifenc
GPAW
SVT-AV1
Rodinia
GROMACS
oneDNN
NAS Parallel Benchmarks
Blender
Timed Apache Compilation
Timed Mesa Compilation
Liquid-DSP:
  384 - 256 - 57
  256 - 256 - 57
Intel Open Image Denoise
miniBUDE:
  OpenMP - BM2:
    Billion Interactions/s
    GFInst/s
oneDNN
Kvazaar
Xmrig
NAS Parallel Benchmarks
Kvazaar
Xmrig
Intel Open Image Denoise
Blender
NAS Parallel Benchmarks:
  MG.C
  SP.C
Kvazaar
ASTC Encoder
Timed MPlayer Compilation
NAS Parallel Benchmarks
ASTC Encoder
DaCapo Benchmark
Rodinia
libavif avifenc:
  6, Lossless
  10, Lossless
Embree:
  Pathtracer ISPC - Crown
  Pathtracer ISPC - Asian Dragon
ACES DGEMM
libavif avifenc