AMD EPYC Genoa Memory Scaling

Benchmarks by Michael Larabel for a future article.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2212240-NE-AMDEPYCGE62
Test Runs

Result Identifier    Date                 Test Duration
12c                  December 21 2022     11 Hours, 55 Minutes
10c                  December 21 2022     12 Hours, 59 Minutes
8c                   December 22 2022     13 Hours, 22 Minutes
6c                   December 23 2022     15 Hours, 14 Minutes



AMD EPYC Genoa Memory Scaling Benchmarks - OpenBenchmarking.org - Phoronix Test Suite

Processor: 2 x AMD EPYC 9654 96-Core @ 2.40GHz (192 Cores / 384 Threads)
Motherboard: AMD Titanite_4G (RTI1002E BIOS)
Chipset: AMD Device 14a4
Memory: 1520GB / 1264GB / 1008GB / 768GB (varies by configuration)
Disk: 800GB INTEL SSDPF21Q800GB
Graphics: ASPEED
Monitor: VGA HDMI
Network: Broadcom NetXtreme BCM5720 PCIe
OS: Ubuntu 22.10
Kernel: 6.1.0-phx (x86_64)
Desktop: GNOME Shell 43.0
Display Server: X Server 1.21.1.4
Vulkan: 1.3.224
Compiler: GCC 12.2.0 + Clang 15.0.2-1
File-System: ext4
Screen Resolution: 1920x1080

System Logs:
- Transparent Huge Pages: madvise
- GCC configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: acpi-cpufreq performance (Boost: Enabled)
- CPU Microcode: 0xa10110d
- OpenJDK Runtime Environment (build 11.0.17+8-post-Ubuntu-1ubuntu2)
- Python 3.10.7
- Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (12c / 10c / 8c / 6c, normalized 100% - 278%), covering: Xcompact3d Incompact3d, High Performance Conjugate Gradient, OpenFOAM, RELION, WRF, Graph500, NAS Parallel Benchmarks, nekRS, TensorFlow, SVT-AV1, GPAW, Neural Magic DeepSparse, OpenVKL, Intel Open Image Denoise, 7-Zip Compression, oneDNN, Rodinia, Apache Cassandra, GROMACS, Timed GDB GNU Debugger Compilation, Timed Gem5 Compilation, Embree, Kvazaar, Xmrig, nginx, Timed Linux Kernel Compilation, Blender, OpenVINO, Timed LLVM Compilation, Timed Node.js Compilation, ONNX Runtime, NWChem, LuxCoreRender, CockroachDB, Timed Apache Compilation, Timed Godot Game Engine Compilation, OSPRay, ACES DGEMM, libavif avifenc, Timed MPlayer Compilation, ASTC Encoder, miniBUDE, Build2, Timed Mesa Compilation, NAMD, Stargate Digital Audio Workstation, Timed PHP Compilation, simdjson, OpenRadioss, Liquid-DSP, DaCapo Benchmark.

[Combined side-by-side result table for the 12c / 10c / 8c / 6c configurations; the per-test results follow below.]

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient benchmark, a newer scientific benchmark from Sandia National Laboratories focused on supercomputer testing with modern real-world workloads, in contrast to HPCC. Learn more via the OpenBenchmarking.org test page.

High Performance Conjugate Gradient 3.1 - GFLOP/s, more is better:
12c: 86.81 (SE +/- 1.12, N = 12; Min 74.55 / Max 88.29)
10c: 48.29 (SE +/- 3.31, N = 9; Min 43.78 / Max 73.98)
8c:  45.00 (SE +/- 0.49, N = 9; Min 41.69 / Max 46.61)
6c:  36.54 (SE +/- 0.99, N = 9; Min 28.66 / Max 37.95)
1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi_cxx -lmpi
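Each result in this file is reported with a standard error of the mean ("SE +/- x, N = y") over N runs. As an illustration of how that figure is computed (the sample values below are hypothetical, not taken from this result file), a stdlib-only Python sketch:

```python
import math

def standard_error(samples):
    """Standard error of the mean: sample stdev (Bessel-corrected) / sqrt(N)."""
    n = len(samples)
    mean = sum(samples) / n
    variance = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return math.sqrt(variance) / math.sqrt(n)

# Hypothetical per-run GFLOP/s values; PTS keeps the real per-run raw values.
runs = [86.2, 87.1, 85.9, 88.0]
print(round(standard_error(runs), 3))
```

The larger the SE relative to the average, the noisier the result; note for example the 10c HPCG run, whose SE is several times that of the other configurations.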

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting among the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: CG.C - Total Mop/s, more is better:
12c: 80225.01 (SE +/- 812.04, N = 15; Min 74881.92 / Max 85148.78)
10c: 81179.00 (SE +/- 899.80, N = 15; Min 74809.56 / Max 85505.40)
8c:  79784.15 (SE +/- 907.72, N = 15; Min 73259.11 / Max 86607.69)
6c:  71662.28 (SE +/- 554.69, N = 3; Min 70729.36 / Max 72648.63)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.4

NAS Parallel Benchmarks 3.4 - Test / Class: IS.D - Total Mop/s, more is better:
12c: 8491.01 (SE +/- 84.88, N = 3; Min 8359.93 / Max 8649.98)
10c: 7124.92 (SE +/- 206.91, N = 12; Min 6490.72 / Max 8437.58)
8c:  6675.71 (SE +/- 134.50, N = 15; Min 5871.65 / Max 7335.61)
6c:  5690.01 (SE +/- 158.57, N = 12; Min 4956.65 / Max 6385.81)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.4

NAS Parallel Benchmarks 3.4 - Test / Class: LU.C - Total Mop/s, more is better:
12c: 489164.65 (SE +/- 5489.08, N = 4; Min 480215.40 / Max 505131.78)
10c: 489995.20 (SE +/- 2546.14, N = 3; Min 485308.15 / Max 494062.74)
8c:  466769.54 (SE +/- 5095.33, N = 5; Min 459069.84 / Max 486631.13)
6c:  454360.62 (SE +/- 4680.97, N = 5; Min 437653.51 / Max 464593.48)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.4

NAS Parallel Benchmarks 3.4 - Test / Class: MG.C - Total Mop/s, more is better:
12c: 209846.76 (SE +/- 2393.90, N = 3; Min 205559.50 / Max 213836.16)
10c: 177097.42 (SE +/- 2631.10, N = 15; Min 159878.34 / Max 191708.82)
8c:  153458.78 (SE +/- 2089.98, N = 15; Min 140375.98 / Max 170657.02)
6c:  117733.57 (SE +/- 1626.80, N = 15; Min 106678.44 / Max 126886.35)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.4
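The MG multigrid kernel is heavily memory-bandwidth-bound, which makes it a useful sanity check on how results track the number of populated memory channels. A small Python sketch using the MG.C averages from this result file (the measured/ideal ratio comparison is our own arithmetic, not part of the result file):

```python
# NPB MG.C averages from this result file (Total Mop/s) per channel config.
mgc = {"12c": 209846.76, "10c": 177097.42, "8c": 153458.78, "6c": 117733.57}

# Measured speedup over the 6-channel run vs. the ideal channel-count ratio.
for cfg, channels in (("12c", 12), ("10c", 10), ("8c", 8), ("6c", 6)):
    measured = mgc[cfg] / mgc["6c"]
    ideal = channels / 6
    print(f"{cfg}: {measured:.2f}x measured vs {ideal:.2f}x ideal")
```

The 12-channel configuration delivers roughly 1.78x the 6-channel throughput against an ideal 2.00x, i.e. MG.C scales with channel count but not perfectly linearly.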

NAS Parallel Benchmarks 3.4 - Test / Class: SP.C - Total Mop/s, more is better:
12c: 260471.50 (SE +/- 1589.72, N = 3; Min 258628.40 / Max 263636.68)
10c: 239496.01 (SE +/- 726.36, N = 3; Min 238190.76 / Max 240700.95)
8c:  208535.23 (SE +/- 1630.30, N = 3; Min 206321.57 / Max 211715.33)
6c:  167474.70 (SE +/- 1838.44, N = 3; Min 163964.08 / Max 170176.73)
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.4

miniBUDE

MiniBUDE is a mini application for the core computation of the Bristol University Docking Engine (BUDE). This test profile currently makes use of the OpenMP implementation of miniBUDE for CPU benchmarking. Learn more via the OpenBenchmarking.org test page.

miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM2 - GFInst/s, more is better:
12c: 8640.31 (SE +/- 27.15, N = 3; Min 8612.21 / Max 8694.59)
10c: 8666.98 (SE +/- 31.49, N = 3; Min 8624.20 / Max 8728.39)
8c:  8615.97 (SE +/- 63.13, N = 3; Min 8489.76 / Max 8682.17)
6c:  8651.92 (SE +/- 96.81, N = 3; Min 8493.33 / Max 8827.40)
1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM2 - Billion Interactions/s, more is better:
12c: 345.61 (SE +/- 1.09, N = 3; Min 344.49 / Max 347.78)
10c: 346.68 (SE +/- 1.26, N = 3; Min 344.97 / Max 349.14)
8c:  344.64 (SE +/- 2.53, N = 3; Min 339.59 / Max 347.29)
6c:  346.08 (SE +/- 3.87, N = 3; Min 339.73 / Max 353.10)
1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP CFD Solver - Seconds, fewer is better:
12c: 6.050 (SE +/- 0.031, N = 3; Min 6.01 / Max 6.11)
10c: 6.074 (SE +/- 0.014, N = 3; Min 6.05 / Max 6.10)
8c:  5.970 (SE +/- 0.016, N = 3; Min 5.95 / Max 6.00)
6c:  6.152 (SE +/- 0.024, N = 3; Min 6.11 / Max 6.18)
1. (CXX) g++ options: -O2 -lOpenCL

Rodinia 3.1 - Test: OpenMP Streamcluster - Seconds, fewer is better:
12c: 6.001 (SE +/- 0.089, N = 15; Min 5.71 / Max 6.53)
10c: 6.285 (SE +/- 0.079, N = 15; Min 5.43 / Max 6.53)
8c:  6.018 (SE +/- 0.078, N = 15; Min 5.65 / Max 6.38)
6c:  6.409 (SE +/- 0.050, N = 3; Min 6.32 / Max 6.49)
1. (CXX) g++ options: -O2 -lOpenCL

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14 - ATPase Simulation - 327,506 Atoms - days/ns, fewer is better:
12c: 0.12783 (SE +/- 0.00009, N = 3)
10c: 0.12759 (SE +/- 0.00007, N = 3)
8c:  0.12768 (SE +/- 0.00046, N = 3)
6c:  0.12820 (SE +/- 0.00009, N = 3)
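NAMD reports its result as days/ns: the number of wall-clock days needed to simulate one nanosecond, hence lower is better. Inverting the metric gives the perhaps more familiar ns/day throughput; a one-liner using the 12c result from this file:

```python
# NAMD 12c result from this file: 0.12783 days/ns (lower is better).
days_per_ns = 0.12783

# Equivalent throughput: nanoseconds of simulation per day of wall time.
ns_per_day = 1.0 / days_per_ns
print(round(ns_per_day, 2))  # ~7.82 ns/day
```

Since the four configurations differ by well under 1% here, NAMD's ATPase workload is effectively insensitive to the memory-channel count on this system.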

nekRS

nekRS is an open-source Navier-Stokes solver based on the spectral element method. nekRS supports both CPU and GPU/accelerator execution, though this test profile is currently configured for CPU execution. nekRS is part of Nek5000 of the Mathematics and Computer Science (MCS) division at Argonne National Laboratory. This nekRS benchmark is primarily relevant to large core count HPC servers and otherwise may be very time consuming. Learn more via the OpenBenchmarking.org test page.

nekRS 22.0 - Input: TurboPipe Periodic - FLOP/s, more is better:
12c: 821462000000 (SE +/- 9551971733.63, N = 3; Min 803114000000 / Max 835244000000)
10c: 786258000000 (SE +/- 7825985326.68, N = 3; Min 771416000000 / Max 797983000000)
8c:  740247000000 (SE +/- 5892587066.25, N = 3; Min 732825000000 / Max 751886000000)
6c:  659554333333 (SE +/- 1934071468.29, N = 3; Min 656459000000 / Max 663111000000)
1. (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -lmpi_cxx -lmpi

NWChem

NWChem is an open-source high performance computational chemistry package. Per NWChem's documentation, "NWChem aims to provide its users with computational chemistry tools that are scalable both in their ability to treat large scientific computational chemistry problems efficiently, and in their use of available parallel computing resources from high-performance parallel supercomputers to conventional workstation clusters." Learn more via the OpenBenchmarking.org test page.

NWChem 7.0.2 - Input: C240 Buckyball - Seconds, fewer is better:
12c: 1537.1
10c: 1531.0
8c:  1519.6
6c:  1517.9
1. (F9X) gfortran options: -lnwctask -lccsd -lmcscf -lselci -lmp2 -lmoints -lstepper -ldriver -loptim -lnwdft -lgradients -lcphf -lesp -lddscf -ldangchang -lguess -lhessian -lvib -lnwcutil -lrimp2 -lproperty -lsolvation -lnwints -lprepar -lnwmd -lnwpw -lofpw -lpaw -lpspw -lband -lnwpwlib -lcafe -lspace -lanalyze -lqhop -lpfft -ldplot -ldrdy -lvscf -lqmmm -lqmd -letrans -ltce -lbq -lmm -lcons -lperfm -ldntmc -lccca -ldimqm -lga -larmci -lpeigs -l64to32 -lopenblas -lpthread -lrt -llapack -lnwcblas -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz -lcomex -m64 -ffast-math -std=legacy -fdefault-integer-8 -finline-functions -O2

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite difference high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.

Xcompact3d Incompact3d 2021-03-11 - Input: X3D-benchmarking input.i3d - Seconds, fewer is better:
12c: 125.53 (SE +/- 0.14, N = 3; Min 125.36 / Max 125.82)
10c: 146.29 (SE +/- 0.11, N = 3; Min 146.16 / Max 146.51)
8c:  270.09 (SE +/- 2.69, N = 9; Min 264.74 / Max 289.63)
6c:  348.88 (SE +/- 4.79, N = 9; Min 339.78 / Max 386.35)
1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenFOAM 10 - Input: drivaerFastback, Medium Mesh Size - Execution Time - Seconds, fewer is better:
12c: 109.54
10c: 117.94
8c:  166.15
6c:  227.90
1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfiniteVolume -lmeshTools -lparallel -llagrangian -lregionModels -lgenericPatchFields -lOpenFOAM -ldl -lm

OpenRadioss

OpenRadioss is an open-source AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss, which was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13 - Model: Bumper Beam - Seconds, fewer is better:
12c: 79.86 (SE +/- 0.79, N = 3; Min 78.62 / Max 81.34)
10c: 79.70 (SE +/- 0.75, N = 3; Min 78.22 / Max 80.67)
8c:  79.20 (SE +/- 0.70, N = 3; Min 78.38 / Max 80.58)
6c:  79.62 (SE +/- 0.71, N = 3; Min 78.41 / Max 80.88)

OpenRadioss 2022.10.13 - Model: Bird Strike on Windshield - Seconds, fewer is better:
12c: 216.88 (SE +/- 0.38, N = 3; Min 216.20 / Max 217.50)
10c: 218.22 (SE +/- 0.54, N = 3; Min 217.33 / Max 219.20)
8c:  219.45 (SE +/- 0.19, N = 3; Min 219.11 / Max 219.76)
6c:  219.10 (SE +/- 0.14, N = 3; Min 218.83 / Max 219.26)

OpenRadioss 2022.10.13 - Model: INIVOL and Fluid Structure Interaction Drop Container - Seconds, fewer is better:
12c: 81.57 (SE +/- 0.14, N = 3; Min 81.30 / Max 81.74)
10c: 81.15 (SE +/- 0.08, N = 3; Min 81.06 / Max 81.31)
8c:  81.09 (SE +/- 0.12, N = 3; Min 80.90 / Max 81.32)
6c:  80.81 (SE +/- 0.08, N = 3; Min 80.72 / Max 80.97)

RELION

RELION - REgularised LIkelihood OptimisatioN - is a stand-alone computer program for Maximum A Posteriori refinement of (multiple) 3D reconstructions or 2D class averages in cryo-electron microscopy (cryo-EM). It is developed in the research group of Sjors Scheres at the MRC Laboratory of Molecular Biology. Learn more via the OpenBenchmarking.org test page.

RELION 3.1.1 - Test: Basic - Device: CPU - Seconds, fewer is better:
12c: 128.10 (SE +/- 1.38, N = 5; Min 126.59 / Max 133.63)
10c: 151.40 (SE +/- 1.86, N = 4; Min 149.34 / Max 156.99)
8c:  221.34 (SE +/- 2.88, N = 3; Min 218.27 / Max 227.08)
6c:  258.50 (SE +/- 2.59, N = 6; Min 253.78 / Max 270.99)
1. (CXX) g++ options: -fopenmp -std=c++0x -O3 -rdynamic -ldl -ltiff -lfftw3f -lfftw3 -lpng -lmpi_cxx -lmpi

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 2.0 - Throughput Test: Kostya - GB/s, more is better:
12c: 4.11 (SE +/- 0.01, N = 3; Min 4.10 / Max 4.12)
10c: 4.11 (SE +/- 0.01, N = 3; Min 4.10 / Max 4.12)
8c:  4.11 (SE +/- 0.00, N = 3; Min 4.11 / Max 4.11)
6c:  4.11 (SE +/- 0.01, N = 3; Min 4.10 / Max 4.12)
1. (CXX) g++ options: -O3

simdjson 2.0 - Throughput Test: TopTweet - GB/s, more is better:
12c: 6.59 (SE +/- 0.01, N = 3; Min 6.58 / Max 6.60)
10c: 6.49 (SE +/- 0.07, N = 6; Min 6.18 / Max 6.62)
8c:  6.57 (SE +/- 0.01, N = 3; Min 6.56 / Max 6.58)
6c:  6.55 (SE +/- 0.00, N = 3; Min 6.55 / Max 6.56)
1. (CXX) g++ options: -O3

simdjson 2.0 - Throughput Test: LargeRandom - GB/s, more is better:
12c: 1.25 (SE +/- 0.00, N = 3)
10c: 1.25 (SE +/- 0.00, N = 3)
8c:  1.25 (SE +/- 0.00, N = 3)
6c:  1.24 (SE +/- 0.00, N = 3)
1. (CXX) g++ options: -O3

simdjson 2.0 - Throughput Test: PartialTweets (GB/s, more is better)
  12c: 5.65 (SE +/- 0.01, N = 3; Min: 5.63 / Avg: 5.65 / Max: 5.67)
  10c: 5.67 (SE +/- 0.02, N = 3; Min: 5.65 / Avg: 5.67 / Max: 5.7)
  8c: 5.66 (SE +/- 0.01, N = 3; Min: 5.65 / Avg: 5.66 / Max: 5.68)
  6c: 5.69 (SE +/- 0.01, N = 3; Min: 5.67 / Avg: 5.69 / Max: 5.71)
  (CXX) g++ options: -O3

simdjson 2.0 - Throughput Test: DistinctUserID (GB/s, more is better)
  12c: 6.86 (SE +/- 0.02, N = 3; Min: 6.82 / Avg: 6.86 / Max: 6.88)
  10c: 6.84 (SE +/- 0.02, N = 3; Min: 6.82 / Avg: 6.84 / Max: 6.87)
  8c: 6.86 (SE +/- 0.01, N = 3; Min: 6.83 / Avg: 6.86 / Max: 6.87)
  6c: 6.83 (SE +/- 0.02, N = 3; Min: 6.8 / Avg: 6.83 / Max: 6.85)
  (CXX) g++ options: -O3

Xmrig

Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

Xmrig 6.18.1 - Variant: Monero - Hash Count: 1M (H/s, more is better)
  12c: 104604.6 (SE +/- 328.13, N = 3; Min: 104188.4 / Avg: 104604.57 / Max: 105252.1)
  10c: 102599.6 (SE +/- 152.19, N = 3; Min: 102417 / Avg: 102599.6 / Max: 102901.8)
  8c: 101953.5 (SE +/- 383.60, N = 3; Min: 101194.1 / Avg: 101953.53 / Max: 102427.5)
  6c: 100446.2 (SE +/- 214.10, N = 3; Min: 100020 / Avg: 100446.23 / Max: 100694.8)
  (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
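Since the article's theme is memory-channel scaling, a convenient way to read these hash rates is as percentage deltas against a baseline configuration. A minimal sketch, using the Monero averages reported above with 6c as the baseline:

```python
def relative_gain(result, baseline):
    """Percent difference of `result` versus `baseline` (positive = faster)."""
    return (result / baseline - 1.0) * 100.0

# Monero 1M average hash rates (H/s) from the table above
monero = {"12c": 104604.6, "10c": 102599.6, "8c": 101953.5, "6c": 100446.2}
for cfg, hashrate in monero.items():
    print(f"{cfg}: {relative_gain(hashrate, monero['6c']):+.1f}% vs 6c")
```

This shows the memory-sensitive RandomX workload gaining only a few percent from 6 to 12 channels here.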

Xmrig 6.18.1 - Variant: Wownero - Hash Count: 1M (H/s, more is better)
  12c: 126465.6 (SE +/- 849.90, N = 3; Min: 125125.1 / Avg: 126465.6 / Max: 128041)
  10c: 127226.6 (SE +/- 70.55, N = 3; Min: 127097.1 / Avg: 127226.57 / Max: 127339.9)
  8c: 127081.2 (SE +/- 122.05, N = 3; Min: 126935.8 / Avg: 127081.2 / Max: 127323.7)
  6c: 126057.7 (SE +/- 349.73, N = 3; Min: 125360.4 / Avg: 126057.67 / Max: 126454.2)
  (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

DaCapo Benchmark

This test runs the DaCapo Benchmarks, which are written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: H2 (msec, fewer is better)
  12c: 4802 (SE +/- 53.17, N = 20; Min: 4382 / Avg: 4802.25 / Max: 5230)
  10c: 4832 (SE +/- 39.79, N = 20; Min: 4366 / Avg: 4831.5 / Max: 5219)
  8c: 4731 (SE +/- 40.50, N = 20; Min: 4201 / Avg: 4731.2 / Max: 4981)
  6c: 4830 (SE +/- 36.16, N = 20; Min: 4486 / Avg: 4829.6 / Max: 5022)

DaCapo Benchmark 9.12-MR1 - Java Test: Jython (msec, fewer is better)
  12c: 3380 (SE +/- 29.26, N = 4; Min: 3303 / Avg: 3380.25 / Max: 3436)
  10c: 3329 (SE +/- 18.49, N = 4; Min: 3294 / Avg: 3328.75 / Max: 3378)
  8c: 3369 (SE +/- 35.24, N = 4; Min: 3307 / Avg: 3369.25 / Max: 3453)
  6c: 3345 (SE +/- 21.34, N = 4; Min: 3297 / Avg: 3344.75 / Max: 3399)

LuxCoreRender

LuxCoreRender is an open-source 3D physically based renderer formerly known as LuxRender. LuxCoreRender supports CPU-based rendering as well as GPU acceleration via OpenCL, NVIDIA CUDA, and NVIDIA OptiX interfaces. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.6 - Scene: Danish Mood - Acceleration: CPU (M samples/sec, more is better)
  12c: 9.69 (SE +/- 0.09, N = 15; Min: 9.09 / Avg: 9.69 / Max: 10.52; MIN: 4 / MAX: 12.39)
  10c: 9.62 (SE +/- 0.17, N = 12; Min: 8.92 / Avg: 9.62 / Max: 11.02; MIN: 3.97 / MAX: 12.9)
  8c: 9.56 (SE +/- 0.11, N = 15; Min: 8.86 / Avg: 9.56 / Max: 10.55; MIN: 3.94 / MAX: 12.41)
  6c: 9.49 (SE +/- 0.14, N = 12; Min: 8.93 / Avg: 9.49 / Max: 10.42; MIN: 3.85 / MAX: 12.15)

LuxCoreRender 2.6 - Scene: Orange Juice - Acceleration: CPU (M samples/sec, more is better)
  12c: 28.82 (SE +/- 0.63, N = 15; Min: 27.35 / Avg: 28.82 / Max: 34.83; MIN: 23.01 / MAX: 45.86)
  10c: 28.19 (SE +/- 0.29, N = 3; Min: 27.62 / Avg: 28.19 / Max: 28.52; MIN: 23.3 / MAX: 45.65)
  8c: 29.04 (SE +/- 0.72, N = 15; Min: 26.84 / Avg: 29.04 / Max: 34.54; MIN: 22.62 / MAX: 45.48)
  6c: 28.90 (SE +/- 0.71, N = 15; Min: 26.76 / Avg: 28.9 / Max: 34.34; MIN: 22.4 / MAX: 44.91)

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 3.13 - Binary: Pathtracer ISPC - Model: Crown (Frames Per Second, more is better)
  12c: 182.45 (SE +/- 1.01, N = 3; Min: 180.92 / Avg: 182.45 / Max: 184.35; MIN: 128.42 / MAX: 209.42)
  10c: 184.73 (SE +/- 0.47, N = 3; Min: 183.8 / Avg: 184.73 / Max: 185.21; MIN: 137.82 / MAX: 210.21)
  8c: 185.49 (SE +/- 0.36, N = 3; Min: 184.77 / Avg: 185.49 / Max: 185.92; MIN: 134.45 / MAX: 211.64)
  6c: 187.61 (SE +/- 0.33, N = 3; Min: 187.13 / Avg: 187.61 / Max: 188.24; MIN: 146.69 / MAX: 208.25)

Embree 3.13 - Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second, more is better)
  12c: 213.75 (SE +/- 0.13, N = 3; Min: 213.57 / Avg: 213.75 / Max: 213.99; MIN: 209.16 / MAX: 225.43)
  10c: 214.31 (SE +/- 0.47, N = 3; Min: 213.39 / Avg: 214.31 / Max: 214.96; MIN: 209.11 / MAX: 223.97)
  8c: 217.41 (SE +/- 0.39, N = 3; Min: 216.78 / Avg: 217.41 / Max: 218.12; MIN: 211.73 / MAX: 230.1)
  6c: 221.29 (SE +/- 0.46, N = 3; Min: 220.38 / Avg: 221.29 / Max: 221.86; MIN: 215.19 / MAX: 233.21)

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.1 - Video Input: Bosphorus 4K - Video Preset: Medium (Frames Per Second, more is better)
  12c: 62.56 (SE +/- 0.68, N = 3; Min: 61.45 / Avg: 62.56 / Max: 63.8)
  10c: 62.23 (SE +/- 0.11, N = 3; Min: 62.03 / Avg: 62.23 / Max: 62.39)
  8c: 61.81 (SE +/- 0.73, N = 3; Min: 60.34 / Avg: 61.81 / Max: 62.56)
  6c: 61.40 (SE +/- 0.53, N = 3; Min: 60.35 / Avg: 61.4 / Max: 62.11)
  (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Kvazaar 2.1 - Video Input: Bosphorus 4K - Video Preset: Very Fast (Frames Per Second, more is better)
  12c: 73.44 (SE +/- 0.58, N = 10; Min: 71.39 / Avg: 73.44 / Max: 76.81)
  10c: 75.35 (SE +/- 0.74, N = 3; Min: 73.87 / Avg: 75.35 / Max: 76.19)
  8c: 73.04 (SE +/- 1.04, N = 3; Min: 70.97 / Avg: 73.04 / Max: 74.32)
  6c: 71.41 (SE +/- 0.77, N = 3; Min: 70.41 / Avg: 71.41 / Max: 72.93)
  (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Kvazaar 2.1 - Video Input: Bosphorus 4K - Video Preset: Ultra Fast (Frames Per Second, more is better)
  12c: 77.83 (SE +/- 0.66, N = 3; Min: 76.52 / Avg: 77.83 / Max: 78.57)
  10c: 77.30 (SE +/- 1.02, N = 3; Min: 75.38 / Avg: 77.3 / Max: 78.86)
  8c: 76.84 (SE +/- 0.71, N = 3; Min: 75.43 / Avg: 76.84 / Max: 77.75)
  6c: 75.86 (SE +/- 0.63, N = 3; Min: 74.62 / Avg: 75.86 / Max: 76.64)
  (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.4 - Encoder Mode: Preset 12 - Input: Bosphorus 4K (Frames Per Second, more is better)
  12c: 251.77 (SE +/- 7.35, N = 15; Min: 183.9 / Avg: 251.77 / Max: 285.59)
  10c: 241.37 (SE +/- 7.16, N = 15; Min: 185.4 / Avg: 241.37 / Max: 284.79)
  8c: 227.90 (SE +/- 7.53, N = 15; Min: 168.37 / Avg: 227.9 / Max: 288.61)
  6c: 221.16 (SE +/- 9.18, N = 13; Min: 154.64 / Avg: 221.16 / Max: 264.64)

ACES DGEMM

This is a multi-threaded DGEMM benchmark. Learn more via the OpenBenchmarking.org test page.

ACES DGEMM 1.0 - Sustained Floating-Point Rate (GFLOP/s, more is better)
  12c: 70.41 (SE +/- 0.33, N = 3; Min: 69.76 / Avg: 70.41 / Max: 70.84)
  10c: 70.61 (SE +/- 0.02, N = 3; Min: 70.57 / Avg: 70.61 / Max: 70.64)
  8c: 71.01 (SE +/- 0.05, N = 3; Min: 70.93 / Avg: 71.01 / Max: 71.12)
  6c: 70.90 (SE +/- 0.13, N = 3; Min: 70.65 / Avg: 70.9 / Max: 71.05)
  (CC) gcc options: -O3 -march=native -fopenmp
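DGEMM results like the ones above are conventionally reported as GFLOP/s, derived from the 2·m·n·k floating-point operations of a matrix multiply divided by wall time. A minimal sketch of that conversion (the matrix size and timing below are hypothetical, not the ACES DGEMM defaults):

```python
def dgemm_gflops(m, n, k, seconds):
    """GFLOP/s for an (m x k) times (k x n) DGEMM: 2*m*n*k flops (multiply + add)."""
    return (2.0 * m * n * k) / seconds / 1e9

# Hypothetical: a 4096^3 double-precision matrix multiply finishing in 1.94 s
print(f"{dgemm_gflops(4096, 4096, 4096, 1.94):.1f} GFLOP/s")
```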

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray-tracing and is part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

Intel Open Image Denoise 1.4.0 - Run: RT.hdr_alb_nrm.3840x2160 (Images / Sec, more is better)
  12c: 3.52 (SE +/- 0.02, N = 3; Min: 3.49 / Avg: 3.52 / Max: 3.54)
  10c: 3.44 (SE +/- 0.02, N = 3; Min: 3.41 / Avg: 3.44 / Max: 3.48)
  8c: 3.47 (SE +/- 0.02, N = 3; Min: 3.44 / Avg: 3.47 / Max: 3.51)
  6c: 3.29 (SE +/- 0.02, N = 3; Min: 3.27 / Avg: 3.29 / Max: 3.32)

Intel Open Image Denoise 1.4.0 - Run: RTLightmap.hdr.4096x4096 (Images / Sec, more is better)
  12c: 1.65 (SE +/- 0.00, N = 3; Min: 1.65 / Avg: 1.65 / Max: 1.65)
  10c: 1.63 (SE +/- 0.00, N = 3; Min: 1.63 / Avg: 1.63 / Max: 1.64)
  8c: 1.64 (SE +/- 0.01, N = 3; Min: 1.63 / Avg: 1.64 / Max: 1.64)
  6c: 1.54 (SE +/- 0.01, N = 3; Min: 1.52 / Avg: 1.54 / Max: 1.54)

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library that offers high-performance volume computation kernels and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 1.3.1 - Benchmark: vklBenchmark ISPC (Items / Sec, more is better)
  12c: 1325 (SE +/- 6.93, N = 3; Min: 1313 / Avg: 1325 / Max: 1337; MIN: 329 / MAX: 4553)
  10c: 1317 (SE +/- 11.03, N = 9; Min: 1269 / Avg: 1317 / Max: 1358; MIN: 327 / MAX: 5660)
  8c: 1325 (SE +/- 8.82, N = 3; Min: 1312 / Avg: 1325.33 / Max: 1342; MIN: 330 / MAX: 5664)
  6c: 1212 (SE +/- 15.59, N = 3; Min: 1185 / Avg: 1211.67 / Max: 1239; MIN: 328 / MAX: 4115)

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10 - Benchmark: particle_volume/ao/real_time (Items Per Second, more is better)
  12c: 43.71 (SE +/- 0.04, N = 3; Min: 43.63 / Avg: 43.71 / Max: 43.75)
  10c: 43.03 (SE +/- 0.04, N = 3; Min: 42.95 / Avg: 43.03 / Max: 43.1)
  8c: 43.97 (SE +/- 0.01, N = 3; Min: 43.94 / Avg: 43.97 / Max: 43.99)
  6c: 43.36 (SE +/- 0.04, N = 3; Min: 43.27 / Avg: 43.36 / Max: 43.4)

OSPRay 2.10 - Benchmark: particle_volume/scivis/real_time (Items Per Second, more is better)
  12c: 42.80 (SE +/- 0.05, N = 3; Min: 42.7 / Avg: 42.8 / Max: 42.89)
  10c: 43.00 (SE +/- 0.01, N = 3; Min: 42.99 / Avg: 43 / Max: 43.01)
  8c: 43.84 (SE +/- 0.03, N = 3; Min: 43.79 / Avg: 43.84 / Max: 43.9)
  6c: 43.24 (SE +/- 0.06, N = 3; Min: 43.12 / Avg: 43.24 / Max: 43.32)

OSPRay 2.10 - Benchmark: particle_volume/pathtracer/real_time (Items Per Second, more is better)
  12c: 229.27 (SE +/- 1.54, N = 3; Min: 226.21 / Avg: 229.27 / Max: 231.09)
  10c: 230.28 (SE +/- 1.94, N = 3; Min: 226.42 / Avg: 230.28 / Max: 232.52)
  8c: 228.58 (SE +/- 1.74, N = 3; Min: 226.7 / Avg: 228.58 / Max: 232.06)
  6c: 230.44 (SE +/- 0.59, N = 3; Min: 229.51 / Avg: 230.44 / Max: 231.53)

OSPRay 2.10 - Benchmark: gravity_spheres_volume/dim_512/ao/real_time (Items Per Second, more is better)
  12c: 43.98 (SE +/- 0.13, N = 3; Min: 43.75 / Avg: 43.98 / Max: 44.21)
  10c: 44.00 (SE +/- 0.04, N = 3; Min: 43.94 / Avg: 44 / Max: 44.08)
  8c: 44.23 (SE +/- 0.10, N = 3; Min: 44.03 / Avg: 44.23 / Max: 44.37)
  6c: 44.27 (SE +/- 0.07, N = 3; Min: 44.13 / Avg: 44.27 / Max: 44.37)

OSPRay 2.10 - Benchmark: gravity_spheres_volume/dim_512/scivis/real_time (Items Per Second, more is better)
  12c: 43.13 (SE +/- 0.15, N = 3; Min: 42.98 / Avg: 43.13 / Max: 43.42)
  10c: 43.33 (SE +/- 0.12, N = 3; Min: 43.18 / Avg: 43.33 / Max: 43.56)
  8c: 43.43 (SE +/- 0.13, N = 3; Min: 43.29 / Avg: 43.43 / Max: 43.7)
  6c: 43.29 (SE +/- 0.15, N = 3; Min: 43.03 / Avg: 43.29 / Max: 43.53)

OSPRay 2.10 - Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time (Items Per Second, more is better)
  12c: 53.77 (SE +/- 0.50, N = 3; Min: 52.77 / Avg: 53.77 / Max: 54.31)
  10c: 54.41 (SE +/- 0.12, N = 3; Min: 54.19 / Avg: 54.41 / Max: 54.62)
  8c: 54.51 (SE +/- 0.08, N = 3; Min: 54.36 / Avg: 54.51 / Max: 54.64)
  6c: 54.61 (SE +/- 0.04, N = 3; Min: 54.55 / Avg: 54.61 / Max: 54.69)

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 22.01 - Test: Compression Rating (MIPS, more is better)
  12c: 923176 (SE +/- 6636.11, N = 3; Min: 910936 / Avg: 923176.33 / Max: 933740)
  10c: 893433 (SE +/- 2580.44, N = 3; Min: 888273 / Avg: 893433.33 / Max: 896078)
  8c: 879430 (SE +/- 3797.71, N = 3; Min: 874258 / Avg: 879429.67 / Max: 886833)
  6c: 824926 (SE +/- 7292.38, N = 3; Min: 810511 / Avg: 824926 / Max: 834055)
  (CXX) g++ options: -lpthread -ldl -O2 -fPIC

7-Zip Compression 22.01 - Test: Decompression Rating (MIPS, more is better)
  12c: 1181435 (SE +/- 3305.67, N = 3; Min: 1174971 / Avg: 1181435 / Max: 1185869)
  10c: 1171627 (SE +/- 5138.86, N = 3; Min: 1161424 / Avg: 1171627.33 / Max: 1177798)
  8c: 1159901 (SE +/- 9235.88, N = 3; Min: 1148093 / Avg: 1159901.33 / Max: 1178107)
  6c: 1177484 (SE +/- 2020.82, N = 3; Min: 1173682 / Avg: 1177484.33 / Max: 1180572)
  (CXX) g++ options: -lpthread -ldl -O2 -fPIC

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 96000 - Buffer Size: 1024 (Render Ratio, more is better)
  12c: 4.345890 (SE +/- 0.023689, N = 3; Min: 4.3 / Avg: 4.35 / Max: 4.37)
  10c: 4.354556 (SE +/- 0.010431, N = 3; Min: 4.34 / Avg: 4.35 / Max: 4.37)
  8c: 4.351402 (SE +/- 0.008144, N = 3; Min: 4.34 / Avg: 4.35 / Max: 4.37)
  6c: 4.364767 (SE +/- 0.002133, N = 3; Min: 4.36 / Avg: 4.36 / Max: 4.37)
  (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

Stargate Digital Audio Workstation 22.11.5 - Sample Rate: 192000 - Buffer Size: 1024 (Render Ratio, more is better)
  12c: 2.829061 (SE +/- 0.001919, N = 3; Min: 2.83 / Avg: 2.83 / Max: 2.83)
  10c: 2.806190 (SE +/- 0.017291, N = 3; Min: 2.77 / Avg: 2.81 / Max: 2.83)
  8c: 2.811555 (SE +/- 0.019484, N = 3; Min: 2.77 / Avg: 2.81 / Max: 2.83)
  6c: 2.824814 (SE +/- 0.004057, N = 3; Min: 2.82 / Avg: 2.82 / Max: 2.83)
  (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11 - Encoder Speed: 0 (Seconds, fewer is better)
  12c: 63.25 (SE +/- 0.18, N = 3; Min: 63.05 / Avg: 63.25 / Max: 63.61)
  10c: 63.25 (SE +/- 0.27, N = 3; Min: 62.97 / Avg: 63.25 / Max: 63.78)
  8c: 62.96 (SE +/- 0.03, N = 3; Min: 62.9 / Avg: 62.96 / Max: 63.01)
  6c: 63.80 (SE +/- 0.47, N = 3; Min: 62.99 / Avg: 63.8 / Max: 64.61)
  (CXX) g++ options: -O3 -fPIC -lm

libavif avifenc 0.11 - Encoder Speed: 2 (Seconds, fewer is better)
  12c: 34.85 (SE +/- 0.14, N = 3; Min: 34.64 / Avg: 34.85 / Max: 35.12)
  10c: 34.91 (SE +/- 0.08, N = 3; Min: 34.8 / Avg: 34.91 / Max: 35.08)
  8c: 34.69 (SE +/- 0.10, N = 3; Min: 34.53 / Avg: 34.69 / Max: 34.88)
  6c: 34.87 (SE +/- 0.14, N = 3; Min: 34.59 / Avg: 34.87 / Max: 35.03)
  (CXX) g++ options: -O3 -fPIC -lm

libavif avifenc 0.11 - Encoder Speed: 6 (Seconds, fewer is better)
  12c: 2.459 (SE +/- 0.016, N = 3; Min: 2.43 / Avg: 2.46 / Max: 2.48)
  10c: 2.411 (SE +/- 0.003, N = 3; Min: 2.41 / Avg: 2.41 / Max: 2.42)
  8c: 2.420 (SE +/- 0.017, N = 3; Min: 2.39 / Avg: 2.42 / Max: 2.45)
  6c: 2.435 (SE +/- 0.004, N = 3; Min: 2.43 / Avg: 2.44 / Max: 2.44)
  (CXX) g++ options: -O3 -fPIC -lm

libavif avifenc 0.11 - Encoder Speed: 6, Lossless (Seconds, fewer is better)
  12c: 5.287 (SE +/- 0.076, N = 3; Min: 5.18 / Avg: 5.29 / Max: 5.43)
  10c: 5.286 (SE +/- 0.044, N = 3; Min: 5.2 / Avg: 5.29 / Max: 5.35)
  8c: 5.270 (SE +/- 0.034, N = 3; Min: 5.22 / Avg: 5.27 / Max: 5.34)
  6c: 5.330 (SE +/- 0.055, N = 3; Min: 5.23 / Avg: 5.33 / Max: 5.42)
  (CXX) g++ options: -O3 -fPIC -lm

libavif avifenc 0.11 - Encoder Speed: 10, Lossless (Seconds, fewer is better)
  12c: 4.241 (SE +/- 0.024, N = 3; Min: 4.2 / Avg: 4.24 / Max: 4.28)
  10c: 4.337 (SE +/- 0.055, N = 3; Min: 4.23 / Avg: 4.34 / Max: 4.41)
  8c: 4.252 (SE +/- 0.009, N = 3; Min: 4.24 / Avg: 4.25 / Max: 4.27)
  6c: 4.250 (SE +/- 0.043, N = 3; Min: 4.18 / Avg: 4.25 / Max: 4.33)
  (CXX) g++ options: -O3 -fPIC -lm

Timed Apache Compilation

This test times how long it takes to build the Apache HTTPD web server. Learn more via the OpenBenchmarking.org test page.

Timed Apache Compilation 2.4.41 - Time To Compile (Seconds, fewer is better)
  12c: 20.46 (SE +/- 0.01, N = 3; Min: 20.44 / Avg: 20.46 / Max: 20.47)
  10c: 20.48 (SE +/- 0.01, N = 3; Min: 20.46 / Avg: 20.48 / Max: 20.5)
  8c: 20.59 (SE +/- 0.01, N = 3; Min: 20.57 / Avg: 20.59 / Max: 20.6)
  6c: 20.72 (SE +/- 0.01, N = 3; Min: 20.71 / Avg: 20.72 / Max: 20.74)

Timed GDB GNU Debugger Compilation

This test times how long it takes to build the GNU Debugger (GDB) in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed GDB GNU Debugger Compilation 10.2 - Time To Compile (Seconds, fewer is better)
  12c: 41.71 (SE +/- 0.17, N = 3; Min: 41.54 / Avg: 41.71 / Max: 42.05)
  10c: 42.41 (SE +/- 0.08, N = 3; Min: 42.3 / Avg: 42.41 / Max: 42.56)
  8c: 42.41 (SE +/- 0.03, N = 3; Min: 42.35 / Avg: 42.41 / Max: 42.47)
  6c: 43.25 (SE +/- 0.12, N = 3; Min: 43 / Avg: 43.24 / Max: 43.39)

Timed Gem5 Compilation

This test times how long it takes to compile Gem5. Gem5 is a simulator for computer system architecture research that is widely used in industry and academia. Learn more via the OpenBenchmarking.org test page.

Timed Gem5 Compilation 21.2 - Time To Compile (Seconds, fewer is better)
  12c: 139.24 (SE +/- 0.16, N = 3; Min: 139.02 / Avg: 139.24 / Max: 139.56)
  10c: 134.37 (SE +/- 0.36, N = 3; Min: 133.76 / Avg: 134.37 / Max: 135)
  8c: 136.79 (SE +/- 0.77, N = 3; Min: 135.81 / Avg: 136.79 / Max: 138.31)
  6c: 134.70 (SE +/- 0.57, N = 3; Min: 133.71 / Avg: 134.69 / Max: 135.68)

Timed Godot Game Engine Compilation

This test times how long it takes to compile the Godot Game Engine. Godot is a popular, open-source, cross-platform 2D/3D game engine and is built using the SCons build system and targeting the X11 platform. Learn more via the OpenBenchmarking.org test page.

Timed Godot Game Engine Compilation 3.2.3 - Time To Compile (Seconds, fewer is better)
  12c: 34.03 (SE +/- 0.40, N = 4; Min: 33.34 / Avg: 34.03 / Max: 34.84)
  10c: 33.62 (SE +/- 0.04, N = 3; Min: 33.56 / Avg: 33.62 / Max: 33.7)
  8c: 33.91 (SE +/- 0.19, N = 3; Min: 33.64 / Avg: 33.91 / Max: 34.27)
  6c: 33.67 (SE +/- 0.11, N = 3; Min: 33.46 / Avg: 33.67 / Max: 33.82)

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested or alternatively an allmodconfig for building all possible kernel modules for the build. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 6.1 - Build: defconfig (Seconds, fewer is better)
  12c: 25.50 (SE +/- 0.19, N = 11; Min: 25.16 / Avg: 25.5 / Max: 27.18)
  10c: 25.41 (SE +/- 0.21, N = 14; Min: 25.03 / Avg: 25.41 / Max: 28)
  8c: 25.53 (SE +/- 0.21, N = 9; Min: 25 / Avg: 25.53 / Max: 26.91)
  6c: 24.75 (SE +/- 0.22, N = 7; Min: 24.42 / Avg: 24.75 / Max: 26.06)

Timed Linux Kernel Compilation 6.1 - Build: allmodconfig (Seconds, fewer is better)
  12c: 147.15 (SE +/- 0.90, N = 3; Min: 145.6 / Avg: 147.15 / Max: 148.7)
  10c: 145.41 (SE +/- 0.72, N = 3; Min: 144.55 / Avg: 145.41 / Max: 146.85)
  8c: 147.38 (SE +/- 1.03, N = 3; Min: 145.46 / Avg: 147.38 / Max: 148.97)
  6c: 145.77 (SE +/- 0.14, N = 3; Min: 145.61 / Avg: 145.77 / Max: 146.04)

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 13.0 - Build System: Ninja (Seconds, fewer is better)
  12c: 75.66 (SE +/- 0.23, N = 3; Min: 75.25 / Avg: 75.66 / Max: 76.05)
  10c: 75.44 (SE +/- 0.21, N = 3; Min: 75.15 / Avg: 75.44 / Max: 75.85)
  8c: 75.73 (SE +/- 0.09, N = 3; Min: 75.64 / Avg: 75.72 / Max: 75.9)
  6c: 76.75 (SE +/- 0.06, N = 3; Min: 76.67 / Avg: 76.75 / Max: 76.87)

Timed Mesa Compilation

This test profile times how long it takes to compile Mesa with Meson/Ninja. To minimize build dependencies and avoid versioning conflicts, this test is just the core Mesa build without LLVM or the extra Gallium3D/Mesa drivers enabled. Learn more via the OpenBenchmarking.org test page.

Timed Mesa Compilation 21.0 - Time To Compile (Seconds, fewer is better)
  12c: 20.12 (SE +/- 0.07, N = 3; Min: 20.01 / Avg: 20.12 / Max: 20.24)
  10c: 20.21 (SE +/- 0.06, N = 3; Min: 20.09 / Avg: 20.21 / Max: 20.28)
  8c: 20.11 (SE +/- 0.07, N = 3; Min: 20.02 / Avg: 20.11 / Max: 20.26)
  6c: 20.16 (SE +/- 0.05, N = 3; Min: 20.07 / Avg: 20.16 / Max: 20.23)

Timed MPlayer Compilation

This test times how long it takes to build the MPlayer open-source media player program. Learn more via the OpenBenchmarking.org test page.

Timed MPlayer Compilation 1.5 - Time To Compile (Seconds, fewer is better)
  12c: 7.777 (SE +/- 0.033, N = 3; Min: 7.74 / Avg: 7.78 / Max: 7.84)
  10c: 7.755 (SE +/- 0.034, N = 3; Min: 7.69 / Avg: 7.76 / Max: 7.79)
  8c: 7.808 (SE +/- 0.023, N = 3; Min: 7.77 / Avg: 7.81 / Max: 7.85)
  6c: 7.773 (SE +/- 0.010, N = 3; Min: 7.75 / Avg: 7.77 / Max: 7.79)

Timed Node.js Compilation

This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built on the Chrome V8 JavaScript engine, while Node.js itself is written in C/C++. Learn more via the OpenBenchmarking.org test page.

Timed Node.js Compilation 18.8 - Time To Compile (Seconds, fewer is better)
  12c: 101.47 (SE +/- 0.26, N = 3; Min: 100.96 / Avg: 101.47 / Max: 101.79)
  10c: 101.94 (SE +/- 0.29, N = 3; Min: 101.39 / Avg: 101.94 / Max: 102.38)
  8c: 101.15 (SE +/- 0.22, N = 3; Min: 100.79 / Avg: 101.15 / Max: 101.55)
  6c: 102.78 (SE +/- 0.06, N = 3; Min: 102.7 / Avg: 102.78 / Max: 102.9)

Timed PHP Compilation

This test times how long it takes to build PHP. Learn more via the OpenBenchmarking.org test page.

Timed PHP Compilation 8.1.9 - Time To Compile (Seconds, fewer is better)
  12c: 44.52 (SE +/- 0.06, N = 3; Min: 44.42 / Avg: 44.52 / Max: 44.6)
  10c: 44.61 (SE +/- 0.08, N = 3; Min: 44.48 / Avg: 44.61 / Max: 44.76)
  8c: 44.58 (SE +/- 0.04, N = 3; Min: 44.53 / Avg: 44.58 / Max: 44.65)
  6c: 44.70 (SE +/- 0.07, N = 3; Min: 44.56 / Avg: 44.7 / Max: 44.8)

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code that offers Cargo-like features. Learn more via the OpenBenchmarking.org test page.

Build2 0.13 - Time To Compile (Seconds, fewer is better)
  12c: 49.92 (SE +/- 0.04, N = 3; Min: 49.86 / Avg: 49.92 / Max: 50)
  10c: 49.80 (SE +/- 0.02, N = 3; Min: 49.77 / Avg: 49.8 / Max: 49.85)
  8c: 49.87 (SE +/- 0.20, N = 3; Min: 49.47 / Avg: 49.87 / Max: 50.12)
  6c: 50.08 (SE +/- 0.28, N = 3; Min: 49.7 / Avg: 50.08 / Max: 50.62)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0 - Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  12c: 3.95471 (SE +/- 0.02537, N = 3; Min: 3.9 / Avg: 3.95 / Max: 3.99; MIN: 3.05)
  10c: 4.00938 (SE +/- 0.05885, N = 12; Min: 3.65 / Avg: 4.01 / Max: 4.36; MIN: 2.96)
  8c: 3.99305 (SE +/- 0.08932, N = 12; Min: 3.36 / Avg: 3.99 / Max: 4.49; MIN: 2.67)
  6c: 3.96488 (SE +/- 0.01788, N = 3; Min: 3.95 / Avg: 3.96 / Max: 4; MIN: 2.99)
  (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  12c: 1968.70 (SE +/- 31.84, N = 15; Min: 1651.96 / Avg: 1968.7 / Max: 2117.81; MIN: 1632.62)
  10c: 2030.72 (SE +/- 14.89, N = 3; Min: 2002.69 / Avg: 2030.72 / Max: 2053.46; MIN: 1981.15)
  8c: 1982.15 (SE +/- 28.30, N = 3; Min: 1926.66 / Avg: 1982.15 / Max: 2019.56; MIN: 1911.33)
  6c: 2072.57 (SE +/- 16.27, N = 10; Min: 1957.12 / Avg: 2072.57 / Max: 2128.97; MIN: 1942.14)
  (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  12c: 2344.29 (SE +/- 21.01, N = 3; Min: 2322.08 / Max: 2386.29; reported MIN: 2288.85)
  10c: 2438.00 (SE +/- 30.76, N = 3; Min: 2376.59 / Max: 2472.01; reported MIN: 2353.97)
  8c:  2375.45 (SE +/- 21.41, N = 3; Min: 2338.1 / Max: 2412.27; reported MIN: 2319.45)
  6c:  2479.62 (SE +/- 25.74, N = 15; Min: 2314.82 / Max: 2645.61; reported MIN: 2293.49)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
  12c: 2275.86 (SE +/- 24.22, N = 3; Min: 2236.84 / Max: 2320.22; reported MIN: 2213.34)
  10c: 2325.71 (SE +/- 25.04, N = 15; Min: 2187.66 / Max: 2507.39; reported MIN: 2171.69)
  8c:  2371.78 (SE +/- 25.14, N = 15; Min: 2251.46 / Max: 2567.52; reported MIN: 2234.23)
  6c:  2471.57 (SE +/- 31.16, N = 3; Min: 2439.87 / Max: 2533.9; reported MIN: 2410.73)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN 3.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  12c: 0.446930 (SE +/- 0.005042, N = 3; Min: 0.44 / Max: 0.45; reported MIN: 0.38)
  10c: 0.463454 (SE +/- 0.005241, N = 4; Min: 0.45 / Max: 0.47; reported MIN: 0.38)
  8c:  0.465796 (SE +/- 0.006374, N = 3; Min: 0.46 / Max: 0.48; reported MIN: 0.38)
  6c:  0.465059 (SE +/- 0.005815, N = 3; Min: 0.46 / Max: 0.48; reported MIN: 0.38)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.
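The samples/s figures below are throughput rates. As a rough, hypothetical sketch (the decomposition into threads, buffer length, and iteration count is my assumption of how such a rate composes, not Liquid-DSP's actual accounting):

```python
# Hypothetical run: each of `threads` workers pushes `iterations`
# buffers of `buffer_len` samples through a filter in `elapsed` seconds.
threads = 256
buffer_len = 256
iterations = 1_000_000
elapsed = 6.34  # seconds of wall time, hypothetical

samples_per_sec = threads * buffer_len * iterations / elapsed
print(f"{samples_per_sec:.4e} samples/s")
```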

Liquid-DSP 2021.01.31 - Threads: 256 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better)
  12c: 10347000000 (SE +/- 4618802.15, N = 3; Min: 10339000000 / Max: 10355000000)
  10c: 10340000000 (SE +/- 5196152.42, N = 3; Min: 10331000000 / Max: 10349000000)
  8c:  10337666667 (SE +/- 4333333.33, N = 3; Min: 10329000000 / Max: 10342000000)
  6c:  10340333333 (SE +/- 3844187.53, N = 3; Min: 10333000000 / Max: 10346000000)
  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP 2021.01.31 - Threads: 384 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better)
  12c: 10347000000 (SE +/- 4582575.69, N = 3; Min: 10338000000 / Max: 10353000000)
  10c: 10352666667 (SE +/- 4409585.52, N = 3; Min: 10346000000 / Max: 10361000000)
  8c:  10349666667 (SE +/- 5783117.19, N = 3; Min: 10340000000 / Max: 10360000000)
  6c:  10349000000 (SE +/- 3214550.25, N = 3; Min: 10343000000 / Max: 10354000000)
  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

CockroachDB

CockroachDB is a cloud-native, distributed SQL database for data-intensive applications. This test profile uses a server-less CockroachDB configuration to test various CockroachDB workloads on the local host with a single node. Learn more via the OpenBenchmarking.org test page.
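The KV workloads below differ in their read/write mix: "KV, 95% Reads" issues each operation as a read with probability 0.95 and a write otherwise. A toy sketch of generating such a mix (the `run_kv_mix` helper is hypothetical, for illustration only):

```python
import random

def run_kv_mix(read_fraction, ops, seed=42):
    """Issue `ops` operations; each is a read with probability `read_fraction`."""
    rng = random.Random(seed)
    reads = sum(1 for _ in range(ops) if rng.random() < read_fraction)
    return reads, ops - reads

reads, writes = run_kv_mix(0.95, 100_000)
print(f"reads: {reads}, writes: {writes}")  # roughly a 95%/5% split
```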

CockroachDB 22.2 - Workload: MoVR - Concurrency: 512 (ops/s, More Is Better)
  12c: 948.5 (SE +/- 3.38, N = 3; Min: 941.7 / Max: 952)
  10c: 949.6 (SE +/- 3.66, N = 3; Min: 942.4 / Max: 954.4)
  8c:  960.3 (SE +/- 9.03, N = 3; Min: 942.7 / Max: 972.5)
  6c:  954.7 (SE +/- 4.87, N = 3; Min: 949.4 / Max: 964.4)

CockroachDB 22.2 - Workload: MoVR - Concurrency: 1024 (ops/s, More Is Better)
  12c: 953.8 (SE +/- 1.42, N = 3; Min: 951 / Max: 955.7)
  10c: 949.5 (SE +/- 0.58, N = 3; Min: 948.5 / Max: 950.5)
  8c:  946.9 (SE +/- 3.18, N = 3; Min: 941 / Max: 951.9)
  6c:  952.7 (SE +/- 1.56, N = 3; Min: 949.6 / Max: 954.7)

CockroachDB 22.2 - Workload: KV, 10% Reads - Concurrency: 512 (ops/s, More Is Better)
  12c: 35970.0 (SE +/- 343.66, N = 15; Min: 32039.5 / Max: 37352.5)
  10c: 35993.1 (SE +/- 270.36, N = 15; Min: 33663.2 / Max: 37472.8)
  8c:  34832.9 (SE +/- 351.71, N = 6; Min: 33272.6 / Max: 35633.8)
  6c:  35742.3 (SE +/- 438.30, N = 15; Min: 31126.4 / Max: 37986.8)

CockroachDB 22.2 - Workload: KV, 50% Reads - Concurrency: 512 (ops/s, More Is Better)
  12c: 47621.9 (SE +/- 464.03, N = 15; Min: 44452.5 / Max: 49438.5)
  10c: 49102.7 (SE +/- 514.54, N = 3; Min: 48076.4 / Max: 49681.4)
  8c:  47596.6 (SE +/- 454.84, N = 15; Min: 44903.2 / Max: 49940.8)
  6c:  47428.0 (SE +/- 32.88, N = 3; Min: 47372.5 / Max: 47486.3)

CockroachDB 22.2 - Workload: KV, 60% Reads - Concurrency: 512 (ops/s, More Is Better)
  12c: 52330.1 (SE +/- 268.61, N = 3; Min: 51917.8 / Max: 52834.5)
  10c: 51748.8 (SE +/- 620.92, N = 15; Min: 47947.6 / Max: 54017.9)
  8c:  52515.2 (SE +/- 411.73, N = 13; Min: 47856.1 / Max: 53608.4)
  6c:  51275.1 (SE +/- 555.56, N = 15; Min: 46679.7 / Max: 53678.6)

CockroachDB 22.2 - Workload: KV, 95% Reads - Concurrency: 512 (ops/s, More Is Better)
  12c: 64467.6 (SE +/- 702.29, N = 3; Min: 63342.9 / Max: 65758.6)
  10c: 60769.7 (SE +/- 1044.13, N = 15; Min: 55737.1 / Max: 66712.5)
  8c:  64111.9 (SE +/- 890.57, N = 3; Min: 62804.4 / Max: 65813.1)
  6c:  62666.5 (SE +/- 813.26, N = 15; Min: 56318.9 / Max: 66050)

CockroachDB 22.2 - Workload: KV, 10% Reads - Concurrency: 1024 (ops/s, More Is Better)
  12c: 36846.9 (SE +/- 155.07, N = 3; Min: 36657.8 / Max: 37154.3)
  10c: 35776.8 (SE +/- 346.25, N = 3; Min: 35429.2 / Max: 36469.3)
  8c:  36685.7 (SE +/- 322.68, N = 3; Min: 36252.5 / Max: 37316.6)
  6c:  36329.6 (SE +/- 206.35, N = 3; Min: 35917 / Max: 36542.8)

CockroachDB 22.2 - Workload: KV, 50% Reads - Concurrency: 1024 (ops/s, More Is Better)
  12c: 47465.5 (SE +/- 366.75, N = 15; Min: 45247.7 / Max: 49624.5)
  10c: 48449.0 (SE +/- 380.16, N = 3; Min: 47752.9 / Max: 49061.9)
  8c:  47498.1 (SE +/- 468.66, N = 15; Min: 44691.2 / Max: 50079.9)
  6c:  47593.9 (SE +/- 391.13, N = 9; Min: 45594.7 / Max: 48880.5)

CockroachDB 22.2 - Workload: KV, 60% Reads - Concurrency: 1024 (ops/s, More Is Better)
  12c: 52573.3 (SE +/- 239.52, N = 3; Min: 52170.8 / Max: 52999.5)
  10c: 51959.5 (SE +/- 400.61, N = 10; Min: 48601.2 / Max: 53084.6)
  8c:  52559.0 (SE +/- 447.89, N = 3; Min: 51799.2 / Max: 53349.8)
  6c:  52626.4 (SE +/- 448.33, N = 3; Min: 51958.8 / Max: 53478.6)

CockroachDB 22.2 - Workload: KV, 95% Reads - Concurrency: 1024 (ops/s, More Is Better)
  12c: 64661.8 (SE +/- 575.30, N = 3; Min: 63521.2 / Max: 65363.3)
  10c: 62029.8 (SE +/- 1142.40, N = 15; Min: 53337.1 / Max: 65677)
  8c:  58195.5 (SE +/- 1317.65, N = 15; Min: 52785.3 / Max: 64306)
  6c:  60137.3 (SE +/- 1310.27, N = 15; Min: 53404.1 / Max: 67136.4)

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.
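The MT/s unit below is megatexels of input processed per second. A quick sketch of the conversion, using a hypothetical image size and encode time (not values from this run):

```python
# Hypothetical: a 3840x2160 image encoded in 0.0778 s of wall time
width, height = 3840, 2160
seconds = 0.0778

# Texels processed per second, expressed in millions (megatexels/s)
megatexels_per_sec = width * height / seconds / 1e6
print(f"{megatexels_per_sec:.2f} MT/s")
```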

ASTC Encoder 4.0 - Preset: Thorough (MT/s, More Is Better)
  12c: 106.57 (SE +/- 0.05, N = 3; Min: 106.47 / Max: 106.66)
  10c: 106.85 (SE +/- 0.05, N = 3; Min: 106.76 / Max: 106.93)
  8c:  107.11 (SE +/- 0.04, N = 3; Min: 107.05 / Max: 107.18)
  6c:  106.51 (SE +/- 0.10, N = 3; Min: 106.37 / Max: 106.7)
  1. (CXX) g++ options: -O3 -flto -pthread

ASTC Encoder 4.0 - Preset: Exhaustive (MT/s, More Is Better)
  12c: 11.73 (SE +/- 0.03, N = 3; Min: 11.7 / Max: 11.78)
  10c: 11.76 (SE +/- 0.00, N = 3; Min: 11.76 / Max: 11.76)
  8c:  11.81 (SE +/- 0.01, N = 3; Min: 11.8 / Max: 11.82)
  6c:  11.82 (SE +/- 0.00, N = 3; Min: 11.82 / Max: 11.82)
  1. (CXX) g++ options: -O3 -flto -pthread

Graph500

This is a benchmark of the reference implementation of Graph500, an HPC benchmark focused on data-intensive loads and commonly run on supercomputers for complex data problems. Graph500 primarily stresses the communication subsystem of the hardware under test. Learn more via the OpenBenchmarking.org test page.
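Graph500 reports TEPS, traversed edges per second; the "sssp median_TEPS" figure is the median of the per-root SSSP rates. A minimal sketch with hypothetical edge counts and timings:

```python
import statistics

# Hypothetical (edges_traversed, seconds) pairs for a few SSSP search roots
runs = [(3.2e9, 6.1), (3.2e9, 5.7), (3.2e9, 8.4)]

# TEPS per root, then the median across roots
teps = [edges / secs for edges, secs in runs]
print(f"median_TEPS: {statistics.median(teps):.3e}")
```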

Graph500 3.0 - Scale: 26 (sssp median_TEPS, More Is Better)
  12c: 565152000
  10c: 574018000
  8c:  531854000
  6c:  392496000
  1. (CC) gcc options: -fcommon -O3 -lpthread -lm -lmpi

GROMACS

This is a benchmark of the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package using the water_GMX50 input data. This test profile allows selecting between CPU- and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.
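GROMACS reports throughput as nanoseconds of simulated time per day of wall time. A quick sketch of the conversion, with a hypothetical timestep, step count, and wall time chosen only for illustration:

```python
# Hypothetical: 2 fs timestep, 50,000 MD steps completed in 462 s of wall time
timestep_fs = 2
steps = 50_000
wall_seconds = 462

simulated_ns = steps * timestep_fs * 1e-6   # femtoseconds -> nanoseconds
ns_per_day = simulated_ns / wall_seconds * 86_400
print(f"{ns_per_day:.2f} ns/day")
```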

GROMACS 2022.1 - Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day, More Is Better)
  12c: 18.71 (SE +/- 0.03, N = 3; Min: 18.67 / Max: 18.75)
  10c: 18.68 (SE +/- 0.01, N = 3; Min: 18.65 / Max: 18.69)
  8c:  18.68 (SE +/- 0.01, N = 3; Min: 18.67 / Max: 18.71)
  6c:  17.94 (SE +/- 0.03, N = 3; Min: 17.89 / Max: 17.97)
  1. (CXX) g++ options: -O3

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.
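The images/sec metric is essentially batch size divided by the average per-step time over warmed-up training steps. A sketch with hypothetical step timings (not measured values):

```python
batch_size = 256
# Hypothetical per-step wall times (seconds) after warm-up
step_times = [2.41, 2.38, 2.44, 2.40]

avg_step = sum(step_times) / len(step_times)
images_per_sec = batch_size / avg_step
print(f"{images_per_sec:.2f} images/sec")
```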

TensorFlow 2.10 - Device: CPU - Batch Size: 256 - Model: ResNet-50 (images/sec, More Is Better)
  12c: 109.13 (SE +/- 0.48, N = 3; Min: 108.47 / Max: 110.07)
  10c: 105.91 (SE +/- 0.36, N = 3; Min: 105.37 / Max: 106.58)
  8c:  105.01 (SE +/- 0.48, N = 3; Min: 104.05 / Max: 105.53)
  6c:  95.67 (SE +/- 0.26, N = 3; Min: 95.16 / Max: 95.99)

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  12c: 84.35 (SE +/- 0.18, N = 3; Min: 84.09 / Max: 84.69)
  10c: 84.48 (SE +/- 0.04, N = 3; Min: 84.41 / Max: 84.56)
  8c:  84.21 (SE +/- 0.04, N = 3; Min: 84.13 / Max: 84.28)
  6c:  82.49 (SE +/- 0.31, N = 3; Min: 81.86 / Max: 82.81)

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  12c: 1133.28 (SE +/- 0.82, N = 3; Min: 1132.02 / Max: 1134.83)
  10c: 1133.18 (SE +/- 0.20, N = 3; Min: 1132.83 / Max: 1133.52)
  8c:  1136.85 (SE +/- 0.88, N = 3; Min: 1135.1 / Max: 1137.86)
  6c:  1148.50 (SE +/- 0.67, N = 3; Min: 1147.56 / Max: 1149.81)

Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  12c: 761.49 (SE +/- 0.72, N = 3; Min: 760.16 / Max: 762.64)
  10c: 742.80 (SE +/- 2.41, N = 3; Min: 738.03 / Max: 745.73)
  8c:  705.71 (SE +/- 2.11, N = 3; Min: 702.25 / Max: 709.54)
  6c:  575.75 (SE +/- 6.13, N = 15; Min: 551.51 / Max: 638.96)

Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  12c: 125.72 (SE +/- 0.11, N = 3; Min: 125.53 / Max: 125.92)
  10c: 128.92 (SE +/- 0.43, N = 3; Min: 128.39 / Max: 129.78)
  8c:  135.62 (SE +/- 0.38, N = 3; Min: 134.92 / Max: 136.25)
  6c:  166.43 (SE +/- 1.66, N = 15; Min: 149.83 / Max: 173.5)

Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  12c: 856.02 (SE +/- 0.57, N = 3; Min: 854.89 / Max: 856.67)
  10c: 844.43 (SE +/- 0.53, N = 3; Min: 843.51 / Max: 845.35)
  8c:  773.07 (SE +/- 1.22, N = 3; Min: 770.91 / Max: 775.13)
  6c:  635.02 (SE +/- 6.69, N = 15; Min: 596.39 / Max: 693.79)

Neural Magic DeepSparse 1.1 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  12c: 111.89 (SE +/- 0.07, N = 3; Min: 111.81 / Max: 112.03)
  10c: 113.41 (SE +/- 0.08, N = 3; Min: 113.26 / Max: 113.54)
  8c:  123.86 (SE +/- 0.19, N = 3; Min: 123.54 / Max: 124.18)
  6c:  150.92 (SE +/- 1.56, N = 15; Min: 137.99 / Max: 160.42)

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  12c: 1964.27 (SE +/- 4.95, N = 3; Min: 1955.09 / Max: 1972.06)
  10c: 1965.56 (SE +/- 1.61, N = 3; Min: 1962.38 / Max: 1967.61)
  8c:  1954.12 (SE +/- 1.56, N = 3; Min: 1951.11 / Max: 1956.31)
  6c:  1930.33 (SE +/- 8.40, N = 3; Min: 1916.16 / Max: 1945.21)

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  12c: 48.77 (SE +/- 0.12, N = 3; Min: 48.6 / Max: 48.99)
  10c: 48.74 (SE +/- 0.04, N = 3; Min: 48.7 / Max: 48.82)
  8c:  49.00 (SE +/- 0.04, N = 3; Min: 48.93 / Max: 49.07)
  6c:  49.63 (SE +/- 0.21, N = 3; Min: 49.26 / Max: 49.98)

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  12c: 1195.91 (SE +/- 4.04, N = 3; Min: 1188.16 / Max: 1201.8)
  10c: 1201.14 (SE +/- 0.69, N = 3; Min: 1199.9 / Max: 1202.3)
  8c:  1201.98 (SE +/- 3.22, N = 3; Min: 1198.18 / Max: 1208.39)
  6c:  1190.53 (SE +/- 1.21, N = 3; Min: 1188.2 / Max: 1192.25)

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  12c: 80.08 (SE +/- 0.27, N = 3; Min: 79.67 / Max: 80.6)
  10c: 79.71 (SE +/- 0.03, N = 3; Min: 79.66 / Max: 79.75)
  8c:  79.69 (SE +/- 0.20, N = 3; Min: 79.31 / Max: 79.98)
  6c:  80.44 (SE +/- 0.07, N = 3; Min: 80.34 / Max: 80.59)

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  12c: 615.45 (SE +/- 1.72, N = 3; Min: 612.22 / Max: 618.08)
  10c: 611.29 (SE +/- 2.48, N = 3; Min: 606.41 / Max: 614.45)
  8c:  614.61 (SE +/- 1.32, N = 3; Min: 611.97 / Max: 615.94)
  6c:  608.53 (SE +/- 2.24, N = 3; Min: 604.28 / Max: 611.88)

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  12c: 155.48 (SE +/- 0.46, N = 3; Min: 154.79 / Max: 156.36)
  10c: 156.54 (SE +/- 0.55, N = 3; Min: 155.92 / Max: 157.63)
  8c:  155.82 (SE +/- 0.27, N = 3; Min: 155.48 / Max: 156.34)
  6c:  157.22 (SE +/- 0.58, N = 3; Min: 156.32 / Max: 158.29)

Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  12c: 84.25 (SE +/- 0.21, N = 3; Min: 83.99 / Max: 84.65)
  10c: 84.27 (SE +/- 0.03, N = 3; Min: 84.2 / Max: 84.31)
  8c:  84.15 (SE +/- 0.16, N = 3; Min: 83.96 / Max: 84.47)
  6c:  82.26 (SE +/- 0.25, N = 3; Min: 81.88 / Max: 82.73)

Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  12c: 1133.48 (SE +/- 1.25, N = 3; Min: 1131.04 / Max: 1135.21)
  10c: 1135.18 (SE +/- 1.00, N = 3; Min: 1133.69 / Max: 1137.08)
  8c:  1137.51 (SE +/- 1.67, N = 3; Min: 1134.18 / Max: 1139.26)
  6c:  1148.33 (SE +/- 1.05, N = 3; Min: 1146.59 / Max: 1150.22)

WRF

WRF, the Weather Research and Forecasting Model, is a "next-generation mesoscale numerical weather prediction system designed for both atmospheric research and operational forecasting applications. It features two dynamical cores, a data assimilation system, and a software architecture supporting parallel computation and system extensibility." Learn more via the OpenBenchmarking.org test page.

WRF 4.2.2 - Input: conus 2.5km (Seconds, Fewer Is Better)
  12c: 4070.19
  10c: 4563.18
  8c:  6551.88
  6c:  7432.66
  1. (F9X) gfortran options: -O2 -ftree-vectorize -funroll-loops -ffree-form -fconvert=big-endian -frecord-marker=4 -fallow-invalid-boz -lesmf_time -lwrfio_nf -lnetcdff -lnetcdf -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

GPAW

GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.

GPAW 22.1 - Input: Carbon Nanotube (Seconds, Fewer Is Better)
  12c: 23.15 (SE +/- 0.23, N = 5; Min: 22.78 / Max: 24.06)
  10c: 23.37 (SE +/- 0.13, N = 3; Min: 23.23 / Max: 23.64)
  8c:  24.60 (SE +/- 0.18, N = 3; Min: 24.37 / Max: 24.96)
  6c:  26.31 (SE +/- 0.20, N = 3; Min: 25.99 / Max: 26.66)
  1. (CC) gcc options: -shared -fwrapv -O2 -lxc -lblas -lmpi

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

Blender 3.4 - Blend File: BMW27 - Compute: CPU-Only (Seconds, Fewer Is Better)
  12c: 8.58 (SE +/- 0.06, N = 3; Min: 8.51 / Max: 8.7)
  10c: 8.42 (SE +/- 0.04, N = 3; Min: 8.36 / Max: 8.5)
  8c:  8.34 (SE +/- 0.02, N = 3; Min: 8.31 / Max: 8.38)
  6c:  8.33 (SE +/- 0.06, N = 3; Min: 8.24 / Max: 8.44)

Blender 3.4 - Blend File: Classroom - Compute: CPU-Only (Seconds, Fewer Is Better)
  12c: 20.92 (SE +/- 0.00, N = 3; Min: 20.92 / Max: 20.93)
  10c: 20.76 (SE +/- 0.09, N = 3; Min: 20.64 / Max: 20.93)
  8c:  20.68 (SE +/- 0.06, N = 3; Min: 20.59 / Max: 20.79)
  6c:  20.71 (SE +/- 0.04, N = 3; Min: 20.64 / Max: 20.77)

Blender 3.4 - Blend File: Barbershop - Compute: CPU-Only (Seconds, Fewer Is Better)
  12c: 81.03 (SE +/- 0.21, N = 3; Min: 80.61 / Max: 81.27)
  10c: 80.37 (SE +/- 0.15, N = 3; Min: 80.08 / Max: 80.61)
  8c:  80.18 (SE +/- 0.24, N = 3; Min: 79.74 / Max: 80.55)
  6c:  79.93 (SE +/- 0.31, N = 3; Min: 79.37 / Max: 80.43)

Apache Cassandra

This is a benchmark of the Apache Cassandra NoSQL database management system making use of cassandra-stress. Learn more via the OpenBenchmarking.org test page.

Apache Cassandra 4.0 - Test: Writes (Op/s, More Is Better)
  12c: 251793 (SE +/- 3742.45, N = 12; Min: 237035 / Max: 285173)
  10c: 243603 (SE +/- 2429.87, N = 3; Min: 239136 / Max: 247494)
  8c:  240854 (SE +/- 1899.17, N = 3; Min: 238071 / Max: 244484)
  6c:  246882 (SE +/- 2957.03, N = 3; Min: 241042 / Max: 250610)

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web server. This Nginx web server benchmark makes use of the wrk program for issuing HTTP requests over a fixed period of time with a configurable number of concurrent clients/connections. HTTPS with a self-signed OpenSSL certificate is used by this test for local benchmarking. Learn more via the OpenBenchmarking.org test page.

nginx 1.23.2 - Connections: 500 (Requests Per Second, More Is Better)
  12c: 201032.06 (SE +/- 291.63, N = 3; Min: 200710.1 / Max: 201614.22)
  10c: 198858.66 (SE +/- 335.64, N = 3; Min: 198263.21 / Max: 199424.8)
  8c:  197081.98 (SE +/- 453.48, N = 3; Min: 196424.19 / Max: 197951.64)
  6c:  196805.30 (SE +/- 113.87, N = 3; Min: 196577.79 / Max: 196928.02)
  1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inferences Per Minute, More Is Better)
  12c: 254 (SE +/- 2.33, N = 7; Min: 247.5 / Max: 267)
  10c: 255 (SE +/- 3.09, N = 3; Min: 250 / Max: 260.5)
  8c:  257 (SE +/- 2.84, N = 5; Min: 252 / Max: 268)
  6c:  253 (SE +/- 2.17, N = 12; Min: 243.5 / Max: 262.5)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Face Detection FP16 - Device: CPU (FPS, More Is Better)
  12c: 101.74 (SE +/- 0.08, N = 3; Min: 101.64 / Max: 101.89)
  10c: 102.01 (SE +/- 0.03, N = 3; Min: 101.98 / Max: 102.06)
  8c:  101.26 (SE +/- 0.04, N = 3; Min: 101.2 / Max: 101.33)
  6c:  101.08 (SE +/- 0.06, N = 3; Min: 100.97 / Max: 101.18)
  1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF

OpenVINO 2022.3 - Model: Face Detection FP16 - Device: CPU (ms, Fewer Is Better)
  12c: 470.98 (SE +/- 0.21, N = 3; MIN: 451.07 / MAX: 556.04)
  10c: 469.43 (SE +/- 0.10, N = 3; MIN: 432.92 / MAX: 555.25)
  8c:  472.84 (SE +/- 0.27, N = 3; MIN: 394.37 / MAX: 553.15)
  6c:  473.69 (SE +/- 0.14, N = 3; MIN: 423.34 / MAX: 579.41)
  1. (CXX) g++ options: -isystem -fsigned-char -ffunction-sections -fdata-sections -msse4.1 -msse4.2 -O3 -fno-strict-overflow -fwrapv -fPIC -fvisibility=hidden -Os -std=c++11 -MD -MT -MF