Xeon Max Linux Distros

2 x Intel Xeon Max 9480 testing with a Supermicro X13DEM v1.10 (1.3 BIOS) and ASPEED on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2310151-NE-2310131NE09
Tests in this result file span the following categories:

AV1 2 Tests
C++ Boost Tests 2 Tests
C/C++ Compiler Tests 6 Tests
CPU Massive 14 Tests
Creator Workloads 12 Tests
Database Test Suite 7 Tests
Encoding 5 Tests
Fortran Tests 2 Tests
Game Development 2 Tests
HPC - High Performance Computing 11 Tests
Java 2 Tests
Common Kernel Benchmarks 2 Tests
Machine Learning 4 Tests
Molecular Dynamics 4 Tests
MPI Benchmarks 3 Tests
Multi-Core 17 Tests
NVIDIA GPU Compute 2 Tests
Intel oneAPI 5 Tests
OpenMPI Tests 5 Tests
Programmer / Developer System Benchmarks 2 Tests
Python 3 Tests
Renderers 3 Tests
Scientific Computing 4 Tests
Software Defined Radio 2 Tests
Server 9 Tests
Server CPU Tests 12 Tests
Single-Threaded 5 Tests
Video Encoding 5 Tests

Result Identifier   Date               Run Test Duration
Ubuntu 23.04        October 12 2023    1 Day, 2 Hours, 41 Minutes
Ubuntu 22.04        October 13 2023    1 Day, 6 Hours, 32 Minutes
Average Run Test Duration: 1 Day, 4 Hours, 37 Minutes
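
The average run time above is consistent with the two per-run durations. A minimal sketch of that check using Python's datetime module (variable names are mine, not part of the result file):

    from datetime import timedelta

    # Per-run test durations as reported above
    ubuntu_2304 = timedelta(days=1, hours=2, minutes=41)
    ubuntu_2204 = timedelta(days=1, hours=6, minutes=32)

    # Mean of the two runs: 1 day, 4:36:30, i.e. roughly 1 day, 4 hours, 37 minutes
    average = (ubuntu_2304 + ubuntu_2204) / 2
    print(average)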


Xeon Max Linux Distros - System Details

Shared hardware:
  Processor: 2 x Intel Xeon Max 9480 @ 3.50GHz (112 Cores / 224 Threads)
  Motherboard: Supermicro X13DEM v1.10 (1.3 BIOS)
  Chipset: Intel Device 1bce
  Memory: 512GB
  Disk: 2 x 7682GB INTEL SSDPF2KX076TZ
  Graphics: ASPEED
  Network: 2 x Broadcom BCM57508 NetXtreme-E 10Gb/25Gb/40Gb/50Gb/100Gb/200Gb

Ubuntu 23.04 configuration:
  OS: Ubuntu 23.04
  Kernel: 6.2.0-34-generic (x86_64)
  Desktop: GNOME Shell 44.3
  Display Server: X Server 1.21.1.7
  Compiler: GCC 12.3.0
  File-System: ext4
  Screen Resolution: 1024x768

Ubuntu 22.04 configuration:
  Monitor: VE228
  OS: Ubuntu 22.04
  Desktop: GNOME Shell 42.9
  Display Server: X Server 1.21.1.4
  Vulkan: 1.3.238
  Compiler: GCC 11.4.0
  Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: madvise

Compiler Details (Ubuntu 23.04): --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-DAPbBt/gcc-12-12.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-DAPbBt/gcc-12-12.3.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Compiler Details (Ubuntu 22.04): --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details: Scaling Governor: intel_cpufreq schedutil - CPU Microcode: 0x2c000271

Java Details:
  Ubuntu 23.04: OpenJDK Runtime Environment (build 17.0.8.1+1-Ubuntu-0ubuntu123.04)
  Ubuntu 22.04: OpenJDK Runtime Environment (build 11.0.20.1+1-post-Ubuntu-0ubuntu122.04)

Python Details:
  Ubuntu 23.04: Python 3.11.4
  Ubuntu 22.04: Python 3.10.12

Security Details: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected

[Overview graph: Ubuntu 23.04 vs. Ubuntu 22.04 comparison generated by the Phoronix Test Suite, plotting the per-test percentage difference between the two installs for every benchmark result, with deltas ranging up to roughly 90%. The individual results behind these deltas are presented below.]
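
The percentages in that overview graph appear to be the ratio of the slower result to the faster one for each test, expressed as a difference. A minimal sketch of that calculation, using the Savina Reactors.IO times reported below (16277.8 ms on Ubuntu 23.04 vs. 28378.6 ms on Ubuntu 22.04):

    # Savina Reactors.IO wall times (ms, lower is better) from the result graphs below
    ubuntu_2304 = 16277.8
    ubuntu_2204 = 28378.6

    # Relative difference of the slower run versus the faster one:
    # 28378.6 / 16277.8 - 1 = 0.743, i.e. the ~74.3% delta shown in the overview
    delta = (max(ubuntu_2304, ubuntu_2204) / min(ubuntu_2304, ubuntu_2204) - 1) * 100
    print(f"{delta:.1f}%")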

[Side-by-side result summary table listing every benchmark result for Ubuntu 23.04 and Ubuntu 22.04 in this comparison; the detailed per-test results follow below.]

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Akka Unbalanced Cobwebbed Tree (ms, fewer is better)
  Ubuntu 23.04: 45287.9 (SE +/- 234.36, N = 3; min 44827.76 / max 45595.28)
  Ubuntu 22.04: 53510.7 (SE +/- 565.20, N = 9; min 51835.19 / max 56207.73)
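
Each result here reports a standard error (SE) over N runs. As a minimal sketch of how such a figure is derived (the per-run samples below are hypothetical apart from the min and max above):

    import math
    import statistics

    # Hypothetical per-run averages (ms) for one system
    runs = [44827.76, 45440.70, 45595.28]

    # Standard error of the mean: sample standard deviation / sqrt(N)
    se = statistics.stdev(runs) / math.sqrt(len(runs))
    print(f"SE +/- {se:.2f}, N = {len(runs)}")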

libxsmm

Libxsmm is an open-source library for specialized dense and sparse matrix operations and deep learning primitives. Libxsmm supports making use of Intel AMX, AVX-512, and other modern CPU instruction set capabilities. Learn more via the OpenBenchmarking.org test page.

libxsmm 2-1.17-3645 - M N K: 128 (GFLOPS/s, more is better)
  Ubuntu 23.04: 1930.8 (SE +/- 44.57, N = 6; min 1817.5 / max 2118.1)
  Ubuntu 22.04: 1928.1 (SE +/- 27.11, N = 3; min 1893.2 / max 1981.5)
  Build note: (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -msse4.2

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: ALS Movie Lens (ms, fewer is better)
  Ubuntu 23.04: 28896.6 (SE +/- 623.64, N = 6; min 26285.81 / max 30938.52)
  Ubuntu 22.04: 21108.0 (SE +/- 354.60, N = 9; min 19609.53 / max 22817.27)

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in NameNode throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6 - Operation: Delete - Threads: 50 - Files: 1000000 (Ops per sec, more is better)
  Ubuntu 23.04: 31912 (SE +/- 452.80, N = 3; min 31074.24 / max 32628.56)
  Ubuntu 22.04: 22475 (SE +/- 2293.11, N = 9; min 9560.41 / max 29893.58)

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Savina Reactors.IO (ms, fewer is better)
  Ubuntu 23.04: 16277.8 (SE +/- 203.55, N = 3; min 15959.38 / max 16656.66)
  Ubuntu 22.04: 28378.6 (SE +/- 544.94, N = 9; min 25746.35 / max 31005.06)

ClickHouse

ClickHouse is an open-source, high-performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ / https://github.com/ClickHouse/ClickBench/tree/main/clickhouse with the 100 million rows web analytics dataset. The reported value is the geometric mean of the query processing results across all of the separate queries performed as an aggregate. Learn more via the OpenBenchmarking.org test page.
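
As a minimal sketch of the aggregation described above (a geometric mean across the individual queries), assuming a hypothetical list of per-query results:

    import math

    # Hypothetical per-query results for one run of the dataset
    per_query = [0.12, 0.85, 2.40, 0.03]

    # Geometric mean: exponential of the mean of the logarithms
    geo_mean = math.exp(sum(math.log(x) for x in per_query) / len(per_query))
    print(f"Geometric mean: {geo_mean:.3f}")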

ClickHouse 22.12.3.5 - 100M Rows Hits Dataset, Third Run (Queries Per Minute, Geo Mean, more is better)
  Ubuntu 23.04: 173.12 (SE +/- 3.29, N = 9; min 158.86 / max 187.62)
  Ubuntu 22.04: 177.66 (SE +/- 4.50, N = 3; min 171.91 / max 186.54)

ClickHouse 22.12.3.5 - 100M Rows Hits Dataset, Second Run (Queries Per Minute, Geo Mean, more is better)
  Ubuntu 23.04: 170.24 (SE +/- 2.63, N = 9; min 158.01 / max 182.45)
  Ubuntu 22.04: 172.55 (SE +/- 3.35, N = 3; min 168.17 / max 179.12)

ClickHouse 22.12.3.5 - 100M Rows Hits Dataset, First Run / Cold Cache (Queries Per Minute, Geo Mean, more is better)
  Ubuntu 23.04: 155.80 (SE +/- 2.03, N = 9; min 144.00 / max 161.90)
  Ubuntu 22.04: 156.54 (SE +/- 2.07, N = 3; min 153.08 / max 160.23)

Apache IoTDB

Apache IoTDB is a time series database, and this benchmark is facilitated using the IoT Benchmark [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400 (Average Latency, fewer is better)
  Ubuntu 23.04: 417.81 (SE +/- 4.63, N = 5; min 404.68 / max 432.05)
  Ubuntu 22.04: 351.83 (SE +/- 4.16, N = 12; min 329.98 / max 376.26)

Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400 (point/sec, more is better)
  Ubuntu 23.04: 43280940 (SE +/- 474661.21, N = 5; min 42246646.35 / max 44707566.51)
  Ubuntu 22.04: 51368423 (SE +/- 506019.03, N = 12; min 48124917.89 / max 53981101.14)

Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 400 (Average Latency, fewer is better)
  Ubuntu 23.04: 624.38 (SE +/- 2.92, N = 3; min 619.68 / max 629.73)
  Ubuntu 22.04: 577.21 (SE +/- 20.90, N = 9; min 539.32 / max 742.44)

Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 400 (point/sec, more is better)
  Ubuntu 23.04: 47389394 (SE +/- 314308.71, N = 3; min 46992424.37 / max 48009994.40)
  Ubuntu 22.04: 51342262 (SE +/- 1382666.67, N = 9; min 40463799.94 / max 54311629.67)

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.12 - Device: CPU - Batch Size: 256 - Model: ResNet-50 (images/sec, more is better)
  Ubuntu 23.04: 44.80 (SE +/- 0.47, N = 3; min 44.06 / max 45.68)
  Ubuntu 22.04: 46.19 (SE +/- 0.31, N = 3; min 45.84 / max 46.80)

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Finagle HTTP Requests (ms, fewer is better)
  Ubuntu 23.04: 19378.1 (SE +/- 232.89, N = 3; min 18920.93 / max 19683.90)
  Ubuntu 22.04: 23731.4 (SE +/- 482.14, N = 9; min 21131.41 / max 25419.48)

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in NameNode throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6 - Operation: Create - Threads: 100 - Files: 1000000 (Ops per sec, more is better)
  Ubuntu 23.04: 3241 (SE +/- 17.50, N = 3; min 3220.33 / max 3276.08)
  Ubuntu 22.04: 6163 (SE +/- 1101.28, N = 9; min 2972.11 / max 11721.96)

PostgreSQL

This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL 16 - Scaling Factor: 100 - Clients: 1000 - Mode: Read Write - Average Latency (ms, fewer is better)
  Ubuntu 23.04: 60.78 (SE +/- 0.56, N = 12; min 56.83 / max 63.03)
  Ubuntu 22.04: 56.06 (SE +/- 0.79, N = 12; min 50.67 / max 58.94)

PostgreSQL 16 - Scaling Factor: 100 - Clients: 1000 - Mode: Read Write (TPS, more is better)
  Ubuntu 23.04: 16469 (SE +/- 157.54, N = 12; min 15864.79 / max 17597.54)
  Ubuntu 22.04: 17878 (SE +/- 262.46, N = 12; min 16967.12 / max 19737.19)

PostgreSQL 16 - Scaling Factor: 100 - Clients: 800 - Mode: Read Write - Average Latency (ms, fewer is better)
  Ubuntu 23.04: 43.24 (SE +/- 0.45, N = 12; min 40.68 / max 45.28)
  Ubuntu 22.04: 38.00 (SE +/- 0.60, N = 12; min 34.61 / max 40.69)

PostgreSQL 16 - Scaling Factor: 100 - Clients: 800 - Mode: Read Write (TPS, more is better)
  Ubuntu 23.04: 18523 (SE +/- 197.10, N = 12; min 17668.07 / max 19666.65)
  Ubuntu 22.04: 21111 (SE +/- 332.96, N = 12; min 19660.06 / max 23113.64)

Build note: (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
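
The average latency and TPS figures above are two views of the same measurement: with a fixed number of clients kept busy, average latency is roughly clients / TPS. A minimal sketch of that cross-check against the 1000-client read-write numbers above:

    # Reported transactions per second for the 1000-client read-write case
    clients = 1000
    results = {"Ubuntu 23.04": 16469, "Ubuntu 22.04": 17878}

    # Expected average latency in ms: clients / TPS * 1000
    # -> roughly 60.7 ms and 55.9 ms, close to the reported 60.78 ms and 56.06 ms
    for name, tps in results.items():
        print(name, round(clients / tps * 1000, 2), "ms")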

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Apache Spark Bayes (ms, fewer is better)
  Ubuntu 23.04: 2712.9 (SE +/- 72.54, N = 15; min 2269.01 / max 3090.16)
  Ubuntu 22.04: 2554.6 (SE +/- 21.10, N = 15; min 2296.48 / max 2637.21)

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.12 - Benchmark: particle_volume/pathtracer/real_time (Items Per Second, more is better)
  Ubuntu 23.04: 83.43 (SE +/- 3.15, N = 9; min 61.10 / max 93.45)
  Ubuntu 22.04: 80.66 (SE +/- 2.80, N = 12; min 63.94 / max 93.37)

oneDNN

This is a test of Intel oneDNN as an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.3 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  Ubuntu 23.04: 14593.50 (SE +/- 277.46, N = 9; min 13365.40 / max 15994.60)
  Ubuntu 22.04: 15662.22 (SE +/- 727.41, N = 10; min 9847.39 / max 18496.10)
  Build note: (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl (the Ubuntu 22.04 run additionally notes -lpthread)

PostgreSQL

This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL 16 - Scaling Factor: 100 - Clients: 1000 - Mode: Read Only - Average Latency (ms, fewer is better)
  Ubuntu 23.04: 1.571 (SE +/- 0.044, N = 9; min 1.37 / max 1.77)
  Ubuntu 22.04: 1.620 (SE +/- 0.049, N = 12; min 1.39 / max 1.85)

PostgreSQL 16 - Scaling Factor: 100 - Clients: 1000 - Mode: Read Only (TPS, more is better)
  Ubuntu 23.04: 640397 (SE +/- 17891.35, N = 9; min 566505.39 / max 728298.24)
  Ubuntu 22.04: 623606 (SE +/- 18940.33, N = 12; min 540048.16 / max 721445.85)

PostgreSQL 16 - Scaling Factor: 100 - Clients: 800 - Mode: Read Only - Average Latency (ms, fewer is better)
  Ubuntu 23.04: 1.262 (SE +/- 0.019, N = 12; min 1.18 / max 1.40)
  Ubuntu 22.04: 1.271 (SE +/- 0.027, N = 9; min 1.11 / max 1.36)

PostgreSQL 16 - Scaling Factor: 100 - Clients: 800 - Mode: Read Only (TPS, more is better)
  Ubuntu 23.04: 635358 (SE +/- 9142.97, N = 12; min 570722.80 / max 676729.95)
  Ubuntu 22.04: 631893 (SE +/- 14209.40, N = 9; min 590347.47 / max 723344.88)

Build note: (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Apache Spark PageRank (ms, fewer is better)
  Ubuntu 23.04: 4961.2 (SE +/- 123.15, N = 12; min 4208.14 / max 5589.00)
  Ubuntu 22.04: 5791.7 (SE +/- 68.33, N = 3; min 5658.13 / max 5883.51)

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient, a newer scientific benchmark from Sandia National Labs focused on supercomputer testing with modern real-world workloads compared to HPCC. Learn more via the OpenBenchmarking.org test page.

High Performance Conjugate Gradient 3.1 - X Y Z: 144 144 144 - RT: 60 (GFLOP/s, more is better)
  Ubuntu 23.04: 77.36 (SE +/- 0.77, N = 6; min 75.46 / max 80.97)
  Ubuntu 22.04: 75.88 (SE +/- 0.58, N = 3; min 74.77 / max 76.75)
  Build note: (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi_cxx -lmpi

VVenC

VVenC is the Fraunhofer Versatile Video Encoder as a fast/efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.9 - Video Input: Bosphorus 4K - Video Preset: Faster (Frames Per Second, more is better)
  Ubuntu 23.04: 3.626 (SE +/- 0.058, N = 12; min 3.38 / max 3.93)
  Ubuntu 22.04: 3.855 (SE +/- 0.047, N = 3; min 3.76 / max 3.91)

VVenC 1.9 - Video Input: Bosphorus 4K - Video Preset: Fast (Frames Per Second, more is better)
  Ubuntu 23.04: 2.528 (SE +/- 0.023, N = 7; min 2.42 / max 2.61)
  Ubuntu 22.04: 2.542 (SE +/- 0.015, N = 3; min 2.52 / max 2.57)

Build note: -flto; (CXX) g++ options: -O3 -flto=auto -fno-fat-lto-objects

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Scala Dotty (ms, fewer is better)
  Ubuntu 23.04: 919.3 (SE +/- 24.95, N = 15; min 793.70 / max 1104.95)
  Ubuntu 22.04: 980.1 (SE +/- 18.18, N = 15; min 863.85 / max 1102.54)

SVT-AV1

SVT-AV1 1.7 - Encoder Mode: Preset 4 - Input: Bosphorus 4K (Frames Per Second, more is better)
  Ubuntu 23.04: 2.380 (SE +/- 0.025, N = 15; min 2.25 / max 2.57)
  Ubuntu 22.04: 2.441 (SE +/- 0.016, N = 13; min 2.34 / max 2.55)
  Build note: (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Memcached

Memcached is a high-performance, distributed memory object caching system. This Memcached test profile makes use of memtier_benchmark for executing this CPU/memory-focused server benchmark. Learn more via the OpenBenchmarking.org test page.

Memcached 1.6.19 - Set To Get Ratio: 1:10 (Ops/sec, more is better)
  Ubuntu 23.04: 599628.31 (SE +/- 10608.70, N = 15; min 539692.94 / max 675122.09)
  Ubuntu 22.04: 623717.41 (SE +/- 7813.87, N = 15; min 582678.32 / max 676209.20)
  Build note: (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.12 - Device: CPU - Batch Size: 16 - Model: ResNet-50 (images/sec, more is better)
  Ubuntu 23.04: 14.49 (SE +/- 0.11, N = 3; min 14.35 / max 14.71)
  Ubuntu 22.04: 14.48 (SE +/- 0.14, N = 12; min 13.77 / max 15.42)

OpenVINO

OpenVINO 2023.1 - Model: Machine Translation EN To DE FP16 - Device: CPU (ms, fewer is better)
  Ubuntu 23.04: 60.10 (SE +/- 1.04, N = 15; min 55.67 / max 70.41)
  Ubuntu 22.04: 60.55 (SE +/- 1.29, N = 12; min 56.05 / max 70.76)

OpenVINO 2023.1 - Model: Machine Translation EN To DE FP16 - Device: CPU (FPS, more is better)
  Ubuntu 23.04: 616.89 (SE +/- 9.85, N = 15; min 524.56 / max 663.13)
  Ubuntu 22.04: 612.56 (SE +/- 12.13, N = 12; min 521.74 / max 658.89)

Build note: (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.12 - Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time (Items Per Second, more is better)
  Ubuntu 23.04: 9.03807 (SE +/- 0.16639, N = 15; min 8.09 / max 10.20)
  Ubuntu 22.04: 9.28812 (SE +/- 0.10483, N = 15; min 8.70 / max 9.86)

OpenVINO

OpenVINO 2023.1 - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (ms, fewer is better)
  Ubuntu 23.04: 0.70 (SE +/- 0.01, N = 15; min 0.66 / max 0.76)
  Ubuntu 22.04: 0.72 (SE +/- 0.01, N = 12; min 0.65 / max 0.76)

OpenVINO 2023.1 - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS, more is better)
  Ubuntu 23.04: 89007.69 (SE +/- 1618.25, N = 15; min 74004.05 / max 99053.79)
  Ubuntu 22.04: 83575.10 (SE +/- 1954.61, N = 12; min 73692.51 / max 98361.97)

Build note: (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

srsRAN Project

srsRAN Project is a complete ORAN-native 5G RAN solution created by Software Radio Systems (SRS). The srsRAN Project radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN Project 23.5 - Test: PUSCH Processor Benchmark, Throughput Total (Mbps, more is better)
  Ubuntu 23.04: 10598.8 (SE +/- 309.18, N = 15; min 7546.9 / max 11739.6)
  Ubuntu 22.04: 10344.3 (SE +/- 160.30, N = 14; min 8985.2 / max 11194.5)
  Build note: (CXX) g++ options: -march=native -mfma -O3 -fno-trapping-math -fno-math-errno -lgtest

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.12 - Benchmark: gravity_spheres_volume/dim_512/scivis/real_time (Items Per Second, more is better)
  Ubuntu 23.04: 10.95 (SE +/- 0.33, N = 12; min 9.43 / max 12.97)
  Ubuntu 22.04: 10.42 (SE +/- 0.17, N = 15; min 9.67 / max 11.78)

Blender

Blender 3.6 - Blend File: Barbershop - Compute: CPU-Only (Seconds, fewer is better)
  Ubuntu 23.04: 261.52 (SE +/- 2.68, N = 3; min 256.69 / max 265.95)
  Ubuntu 22.04: 262.72 (SE +/- 2.00, N = 3; min 260.41 / max 266.71)

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Random Forest (ms, fewer is better)
  Ubuntu 23.04: 1853.4 (SE +/- 13.67, N = 3; min 1830.40 / max 1877.71)
  Ubuntu 22.04: 1861.6 (SE +/- 33.63, N = 12; min 1666.31 / max 2009.48)

Zstd Compression

This test measures the time needed to compress/decompress a sample input file using Zstd compression supplied by the system or otherwise externally of the test profile. Learn more via the OpenBenchmarking.org test page.
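
The MB/s figures below are throughput over the sample input, presumably the uncompressed data size divided by the elapsed compression or decompression time. A minimal sketch of that conversion with hypothetical numbers:

    # Hypothetical sample-input size and timings
    input_mb = 2048.0
    compress_s = 70.5
    decompress_s = 0.72

    print(f"Compression:   {input_mb / compress_s:.1f} MB/s")
    print(f"Decompression: {input_mb / decompress_s:.1f} MB/s")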

Zstd Compression - Compression Level: 19, Long Mode - Decompression Speed (MB/s, more is better)
  Ubuntu 23.04: 2832.9 (SE +/- 1.31, N = 15; min 2821.7 / max 2839.6)
  Ubuntu 22.04: 2891.4 (SE +/- 1.05, N = 12; min 2887.0 / max 2899.6)

Zstd Compression - Compression Level: 19, Long Mode - Compression Speed (MB/s, more is better)
  Ubuntu 23.04: 27.9 (SE +/- 0.63, N = 15; min 22.5 / max 32.3)
  Ubuntu 22.04: 29.8 (SE +/- 0.44, N = 12; min 27.0 / max 31.5)

Note: Ubuntu 23.04 used Zstandard CLI (64-bit) v1.5.4; Ubuntu 22.04 used the zstd command line interface 64-bits v1.4.8.

Appleseed

Appleseed is an open-source production renderer focused on physically-based global illumination rendering engine primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Material Tester (Seconds, fewer is better)
  Ubuntu 23.04: 376.10
  Ubuntu 22.04: 347.92

Memcached

Memcached is a high-performance, distributed memory object caching system. This Memcached test profile makes use of memtier_benchmark for executing this CPU/memory-focused server benchmark. Learn more via the OpenBenchmarking.org test page.

Memcached 1.6.19 - Set To Get Ratio: 1:100 (Ops/sec, more is better)
  Ubuntu 23.04: 501826.54 (SE +/- 8153.99, N = 15; min 447387.72 / max 565690.89)
  Ubuntu 22.04: 500744.65 (SE +/- 3294.51, N = 3; min 497048.26 / max 507316.61)
  Build note: (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

OpenVINO

OpenVINO 2023.1 - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (ms, fewer is better)
  Ubuntu 23.04: 4.24 (SE +/- 0.04, N = 15; min 3.89 / max 4.41)
  Ubuntu 22.04: 4.29 (SE +/- 0.06, N = 3; min 4.17 / max 4.38)

OpenVINO 2023.1 - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS, more is better)
  Ubuntu 23.04: 25416.48 (SE +/- 255.75, N = 15; min 24421.88 / max 27978.56)
  Ubuntu 22.04: 25126.84 (SE +/- 314.49, N = 3; min 24610.35 / max 25695.96)

Build note: (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

Appleseed

Appleseed is an open-source production renderer focused on physically-based global illumination rendering engine primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Emily (Seconds, fewer is better)
  Ubuntu 23.04: 284.67
  Ubuntu 22.04: 286.26

Zstd Compression

This test measures the time needed to compress/decompress a sample input file using Zstd compression supplied by the system or otherwise externally of the test profile. Learn more via the OpenBenchmarking.org test page.

Zstd Compression - Compression Level: 19 - Decompression Speed (MB/s, more is better)
  Ubuntu 23.04: 2683.7 (SE +/- 3.08, N = 12; min 2664.1 / max 2694.2)
  Ubuntu 22.04: 2823.2 (SE +/- 22.65, N = 15; min 2508.6 / max 2854.2)

Zstd Compression - Compression Level: 19 - Compression Speed (MB/s, more is better)
  Ubuntu 23.04: 46.2 (SE +/- 1.72, N = 12; min 36.1 / max 58.1)
  Ubuntu 22.04: 51.4 (SE +/- 1.13, N = 15; min 45.1 / max 62.6)

Note: Ubuntu 23.04 used Zstandard CLI (64-bit) v1.5.4; Ubuntu 22.04 used the zstd command line interface 64-bits v1.4.8.

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

RocksDB 8.0 - Test: Read Random Write Random (Op/s, more is better)
  Ubuntu 23.04: 665185 (SE +/- 6074.54, N = 15; min 629903 / max 709727)
  Ubuntu 22.04: 692997 (SE +/- 3631.47, N = 3; min 685782 / max 697324)
  Build note: (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: python_startup (Milliseconds, fewer is better)
  Ubuntu 23.04: 11.4 (SE +/- 0.10, N = 3; min 11.2 / max 11.5)
  Ubuntu 22.04: 12.1 (SE +/- 0.11, N = 8; min 11.5 / max 12.3)

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: H2 (msec, fewer is better)
  Ubuntu 23.04: 19866 (SE +/- 401.72, N = 20; min 16133 / max 24002)
  Ubuntu 22.04: 22809 (SE +/- 393.58, N = 20; min 17672 / max 25072)

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.12 - Benchmark: gravity_spheres_volume/dim_512/ao/real_time (Items Per Second, more is better)
  Ubuntu 23.04: 11.13 (SE +/- 0.22, N = 15; min 10.03 / max 12.85)
  Ubuntu 22.04: 10.77 (SE +/- 0.13, N = 3; min 10.53 / max 10.96)

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/ and https://github.com/OpenRadioss/ModelExchange/tree/main/Examples. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2023.09.15 - Model: Chrysler Neon 1M (Seconds, fewer is better)
  Ubuntu 23.04: 131.14 (SE +/- 1.48, N = 3; min 128.41 / max 133.50)
  Ubuntu 22.04: 132.86 (SE +/- 0.40, N = 3; min 132.16 / max 133.54)

Numpy Benchmark

This is a test to obtain the general Numpy performance. Learn more via the OpenBenchmarking.org test page.

Numpy Benchmark (Score, more is better)
  Ubuntu 23.04: 472.63 (SE +/- 5.52, N = 4; min 456.93 / max 481.47)
  Ubuntu 22.04: 439.90 (SE +/- 4.01, N = 3; min 432.15 / max 445.56)

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/ and https://github.com/OpenRadioss/ModelExchange/tree/main/Examples. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2023.09.15 - Model: Bird Strike on Windshield (Seconds, fewer is better)
  Ubuntu 23.04: 157.19 (SE +/- 0.61, N = 3; min 156.08 / max 158.18)
  Ubuntu 22.04: 157.25 (SE +/- 1.23, N = 3; min 155.50 / max 159.62)

Zstd Compression

This test measures the time needed to compress/decompress a sample input file using Zstd compression supplied by the system or otherwise externally of the test profile. Learn more via the OpenBenchmarking.org test page.

Zstd Compression - Compression Level: 8 - Decompression Speed (MB/s, more is better)
  Ubuntu 23.04: 3154.9 (SE +/- 46.81, N = 12; min 2805.4 / max 3288.6)
  Ubuntu 22.04: 3228.1 (SE +/- 29.89, N = 12; min 2921.7 / max 3293.0)

Zstd Compression - Compression Level: 8 - Compression Speed (MB/s, more is better)
  Ubuntu 23.04: 1130.8 (SE +/- 53.57, N = 12; min 783.2 / max 1503.2)
  Ubuntu 22.04: 942.1 (SE +/- 20.54, N = 12; min 818.9 / max 1074.0)

Note: Ubuntu 23.04 used Zstandard CLI (64-bit) v1.5.4; Ubuntu 22.04 used the zstd command line interface 64-bits v1.4.8.

miniBUDE

MiniBUDE is a mini-application for the core computation of the Bristol University Docking Engine (BUDE). This test profile currently makes use of the OpenMP implementation of miniBUDE for CPU benchmarking. Learn more via the OpenBenchmarking.org test page.

miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM2 - Billion Interactions/s, more is better: Ubuntu 23.04: 111.07 (SE +/- 0.85, N = 15, min 105.21 / max 115.43); Ubuntu 22.04: 105.57 (SE +/- 1.27, N = 4, min 103.2 / max 108.87)
miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM2 - GFInst/s, more is better: Ubuntu 23.04: 2776.75 (SE +/- 21.23, N = 15, min 2630.16 / max 2885.69); Ubuntu 22.04: 2639.29 (SE +/- 31.78, N = 4, min 2580.08 / max 2721.78)
[(CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm]

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.
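The SET and GET figures below come from redis-benchmark runs with hundreds of parallel connections. As a minimal sketch of the two operations being measured, here is the third-party redis-py client used against a local server on the default port (both assumptions, for illustration only):

    import redis  # third-party: pip install redis

    r = redis.Redis(host="localhost", port=6379)
    r.set("pts:sample-key", "sample-value")   # the operation behind the SET test
    print(r.get("pts:sample-key"))            # the operation behind the GET test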

Redis 7.0.4 - Test: SET - Parallel Connections: 500 - Requests Per Second, more is better: Ubuntu 23.04: 1916420.90 (SE +/- 37507.63, N = 12, min 1637333.62 / max 2136425.25); Ubuntu 22.04: 1882918.18 (SE +/- 28736.18, N = 15, min 1569394.12 / max 2041603) [(CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3]

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/ and https://github.com/OpenRadioss/ModelExchange/tree/main/Examples. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2023.09.15 - Model: INIVOL and Fluid Structure Interaction Drop Container - Seconds, fewer is better: Ubuntu 23.04: 137.43 (SE +/- 0.65, N = 3, min 136.52 / max 138.7); Ubuntu 22.04: 142.47 (SE +/- 0.46, N = 3, min 141.59 / max 143.13)

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.12 - Benchmark: particle_volume/scivis/real_time - Items Per Second, more is better: Ubuntu 23.04: 27.67 (SE +/- 0.08, N = 3, min 27.51 / max 27.75); Ubuntu 22.04: 27.36 (SE +/- 0.18, N = 3, min 27.06 / max 27.67)

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

RocksDB 8.0 - Test: Random Read - Op/s, more is better: Ubuntu 23.04: 380807398 (SE +/- 3155038.10, N = 3, min 375511759 / max 386426741); Ubuntu 22.04: 349636191 (SE +/- 2596603.43, N = 11, min 325825064 / max 357314990) [(CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread]

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient benchmark, a scientific benchmark from Sandia National Laboratories focused on supercomputer testing with modern, real-world workloads compared to HPCC. Learn more via the OpenBenchmarking.org test page.

High Performance Conjugate Gradient 3.1 - X Y Z: 104 104 104 - RT: 60 - GFLOP/s, more is better: Ubuntu 23.04: 102.68 (SE +/- 0.72, N = 3, min 101.37 / max 103.84); Ubuntu 22.04: 98.52 (SE +/- 0.16, N = 3, min 98.3 / max 98.83) [(CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi_cxx -lmpi]

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14 - ATPase Simulation - 327,506 Atoms - days/ns, fewer is better: Ubuntu 23.04: 0.32646 (SE +/- 0.00600, N = 14, min 0.28 / max 0.37); Ubuntu 22.04: 0.32225 (SE +/- 0.00473, N = 15, min 0.29 / max 0.36)

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/ and https://github.com/OpenRadioss/ModelExchange/tree/main/Examples. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2023.09.15 - Model: Rubber O-Ring Seal Installation - Seconds, fewer is better: Ubuntu 23.04: 116.57 (SE +/- 0.91, N = 3, min 114.97 / max 118.11); Ubuntu 22.04: 117.80 (SE +/- 0.52, N = 3, min 116.82 / max 118.57)

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenFOAM 10 - Input: drivaerFastback, Medium Mesh Size - Execution Time - Seconds, fewer is better: Ubuntu 23.04: 202.91; Ubuntu 22.04: 200.77
OpenFOAM 10 - Input: drivaerFastback, Medium Mesh Size - Mesh Time - Seconds, fewer is better: Ubuntu 23.04: 174.83; Ubuntu 22.04: 151.03
[-ldynamicMesh -lsampling -lfiniteVolume -lmeshTools -lparallel -lregionModels; (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lgenericPatchFields -llagrangian -lOpenFOAM -ldl -lm]

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

RocksDB 8.0 - Test: Update Random - Op/s, more is better: Ubuntu 23.04: 102796 (SE +/- 1141.51, N = 4, min 101554 / max 106218); Ubuntu 22.04: 103258 (SE +/- 979.86, N = 6, min 101308 / max 107917) [(CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread]

OpenVINO

OpenVINO 2023.1 - Model: Face Detection FP16-INT8 - Device: CPU - ms, fewer is better: Ubuntu 23.04: 343.39 (SE +/- 2.34, N = 3, min 339.95 / max 347.85; MIN: 244.44 / MAX: 745.09); Ubuntu 22.04: 341.34 (SE +/- 0.28, N = 3, min 340.92 / max 341.88; MIN: 244.79 / MAX: 669.32)
OpenVINO 2023.1 - Model: Face Detection FP16-INT8 - Device: CPU - FPS, more is better: Ubuntu 23.04: 325.27 (SE +/- 2.19, N = 3, min 321.11 / max 328.56); Ubuntu 22.04: 327.26 (SE +/- 0.21, N = 3, min 326.84 / max 327.55)
[(CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl]

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/ and https://github.com/OpenRadioss/ModelExchange/tree/main/Examples. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2023.09.15 - Model: Bumper Beam - Seconds, fewer is better: Ubuntu 23.04: 95.25 (SE +/- 0.47, N = 3, min 94.31 / max 95.84); Ubuntu 22.04: 97.39 (SE +/- 0.44, N = 3, min 96.85 / max 98.27)

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

Redis 7.0.4 - Test: GET - Parallel Connections: 500 - Requests Per Second, more is better: Ubuntu 23.04: 3012006.79 (SE +/- 56263.19, N = 12, min 2516274.75 / max 3296255.25); Ubuntu 22.04: 2807752.58 (SE +/- 60261.21, N = 12, min 2377259.25 / max 3127597) [(CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3]

OpenVINO

OpenVINO 2023.1 - Model: Person Vehicle Bike Detection FP16 - Device: CPU - ms, fewer is better: Ubuntu 23.04: 17.66 (SE +/- 0.03, N = 3, min 17.6 / max 17.7; MIN: 13.56 / MAX: 221.85); Ubuntu 22.04: 17.69 (SE +/- 0.03, N = 3, min 17.66 / max 17.74; MIN: 12.69 / MAX: 219.19)
OpenVINO 2023.1 - Model: Person Vehicle Bike Detection FP16 - Device: CPU - FPS, more is better: Ubuntu 23.04: 6329.88 (SE +/- 11.85, N = 3, min 6313.5 / max 6352.9); Ubuntu 22.04: 6319.42 (SE +/- 9.82, N = 3, min 6299.79 / max 6329.41)
[(CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl]

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.12 - Benchmark: particle_volume/ao/real_time - Items Per Second, more is better: Ubuntu 23.04: 28.44 (SE +/- 0.02, N = 3, min 28.42 / max 28.48); Ubuntu 22.04: 27.83 (SE +/- 0.05, N = 3, min 27.74 / max 27.92)

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web-server. This Nginx web server benchmark test profile makes use of the wrk program for facilitating the HTTP requests over a fixed period of time with a configurable number of concurrent clients/connections. HTTPS with a self-signed OpenSSL certificate is used by this test for local benchmarking. Learn more via the OpenBenchmarking.org test page.
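The workload itself is generated by wrk. Only as a conceptual sketch, here are a handful of concurrent HTTPS requests issued from Python against a self-signed local server; the URL, port, and connection counts are placeholders, not the test profile's settings:

    import ssl
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    ctx = ssl.create_default_context()
    ctx.check_hostname = False          # accept the self-signed certificate
    ctx.verify_mode = ssl.CERT_NONE

    def fetch(url):
        with urllib.request.urlopen(url, context=ctx) as resp:
            return resp.status

    with ThreadPoolExecutor(max_workers=100) as pool:
        statuses = list(pool.map(fetch, ["https://localhost:8443/"] * 1000))
    print(statuses.count(200), "requests returned HTTP 200")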

nginx 1.23.2 - Connections: 500 - Requests Per Second, more is better: Ubuntu 23.04: 124724.58 (SE +/- 1330.04, N = 3, min 122651.9 / max 127204.89); Ubuntu 22.04: 131549.83 (SE +/- 836.61, N = 3, min 129907.33 / max 132647.48) [(CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2]

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL) and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 4.1 - Binary: Pathtracer ISPC - Model: Asian Dragon - Frames Per Second, more is better: Ubuntu 23.04: 37.06 (SE +/- 0.45, N = 12, min 33.43 / max 39.3; MIN: 28.65 / MAX: 48.65); Ubuntu 22.04: 38.74 (SE +/- 0.76, N = 15, min 35.71 / max 46.11; MIN: 30.62 / MAX: 58.26)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.3 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU - ms, fewer is better: Ubuntu 23.04: 1379.26 (SE +/- 8.91, N = 3, min 1361.67 / max 1390.5; MIN: 1265.95); Ubuntu 22.04: 1355.75 (SE +/- 7.30, N = 3, min 1341.51 / max 1365.7; MIN: 1217.23, also -lpthread) [(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl]

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL) and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 4.1 - Binary: Pathtracer ISPC - Model: Crown - Frames Per Second, more is better: Ubuntu 23.04: 36.76 (SE +/- 0.80, N = 12, min 32.77 / max 42.53; MIN: 25.45 / MAX: 57.06); Ubuntu 22.04: 36.74 (SE +/- 0.95, N = 12, min 27.91 / max 40.92; MIN: 22.41 / MAX: 55.39)

OpenVINO

OpenVINO 2023.1 - Model: Road Segmentation ADAS FP16-INT8 - Device: CPU - ms, fewer is better: Ubuntu 23.04: 73.83 (SE +/- 0.09, N = 3, min 73.67 / max 73.98; MIN: 53.98 / MAX: 322.99); Ubuntu 22.04: 74.40 (SE +/- 0.08, N = 3, min 74.29 / max 74.55; MIN: 54.01 / MAX: 402.31)
OpenVINO 2023.1 - Model: Road Segmentation ADAS FP16-INT8 - Device: CPU - FPS, more is better: Ubuntu 23.04: 1514.35 (SE +/- 1.92, N = 3, min 1511.49 / max 1518.01); Ubuntu 22.04: 1502.52 (SE +/- 1.27, N = 3, min 1500.35 / max 1504.76)
[(CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl]

SVT-AV1

SVT-AV1 1.7 - Encoder Mode: Preset 13 - Input: Bosphorus 4K - Frames Per Second, more is better: Ubuntu 23.04: 56.92 (SE +/- 0.98, N = 15, min 49.55 / max 63.23); Ubuntu 22.04: 57.26 (SE +/- 1.51, N = 15, min 43.09 / max 64.77) [(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq]

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 22.01 - Test: Decompression Rating - MIPS, more is better: Ubuntu 23.04: 373819 (SE +/- 8855.12, N = 3, min 359478 / max 389989); Ubuntu 22.04: 379347 (SE +/- 5913.92, N = 3, min 372364 / max 391106)
7-Zip Compression 22.01 - Test: Compression Rating - MIPS, more is better: Ubuntu 23.04: 254910 (SE +/- 1028.75, N = 3, min 253264 / max 256802); Ubuntu 22.04: 256825 (SE +/- 3527.52, N = 3, min 251136 / max 263283)
[(CXX) g++ options: -lpthread -ldl -O2 -fPIC]

uvg266

uvg266 is an open-source VVC/H.266 (Versatile Video Coding) encoder based on Kvazaar and developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

uvg266 0.4.1 - Video Input: Bosphorus 4K - Video Preset: Slow - Frames Per Second, more is better: Ubuntu 23.04: 7.96 (SE +/- 0.01, N = 3, min 7.94 / max 7.97); Ubuntu 22.04: 8.09 (SE +/- 0.02, N = 3, min 8.07 / max 8.12)

SVT-AV1

SVT-AV1 1.7 - Encoder Mode: Preset 8 - Input: Bosphorus 4K - Frames Per Second, more is better: Ubuntu 23.04: 23.41 (SE +/- 0.31, N = 3, min 22.83 / max 23.89); Ubuntu 22.04: 25.40 (SE +/- 0.47, N = 12, min 22.77 / max 28.59) [(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq]

uvg266

uvg266 is an open-source VVC/H.266 (Versatile Video Coding) encoder based on Kvazaar and developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

uvg266 0.4.1 - Video Input: Bosphorus 4K - Video Preset: Ultra Fast - Frames Per Second, more is better: Ubuntu 23.04: 16.32 (SE +/- 0.07, N = 3, min 16.19 / max 16.42); Ubuntu 22.04: 16.69 (SE +/- 0.13, N = 9, min 16.31 / max 17.54)

OpenVINO

OpenVINO 2023.1 - Model: Person Detection FP16 - Device: CPU - ms, fewer is better: Ubuntu 23.04: 79.11 (SE +/- 0.58, N = 3, min 77.97 / max 79.9; MIN: 47.44 / MAX: 636.26); Ubuntu 22.04: 80.31 (SE +/- 0.26, N = 3, min 79.8 / max 80.68; MIN: 49.45 / MAX: 540.74)
OpenVINO 2023.1 - Model: Person Detection FP16 - Device: CPU - FPS, more is better: Ubuntu 23.04: 467.06 (SE +/- 3.50, N = 3, min 462.38 / max 473.9); Ubuntu 22.04: 460.12 (SE +/- 1.53, N = 3, min 458 / max 463.09)
[(CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl]

Intel Open Image Denoise

Intel Open Image Denoise 2.1 - Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Only - Images / Sec, more is better: Ubuntu 23.04: 2.23 (SE +/- 0.03, N = 15, min 2.03 / max 2.4); Ubuntu 22.04: 2.27 (SE +/- 0.03, N = 15, min 2.06 / max 2.44)
Intel Open Image Denoise 2.1 - Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only - Images / Sec, more is better: Ubuntu 23.04: 2.24 (SE +/- 0.03, N = 15, min 2.05 / max 2.43); Ubuntu 22.04: 2.26 (SE +/- 0.02, N = 15, min 2.13 / max 2.39)

uvg266

uvg266 is an open-source VVC/H.266 (Versatile Video Coding) encoder based on Kvazaar and developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

uvg266 0.4.1 - Video Input: Bosphorus 4K - Video Preset: Medium - Frames Per Second, more is better: Ubuntu 23.04: 8.37 (SE +/- 0.04, N = 3, min 8.31 / max 8.44); Ubuntu 22.04: 8.54 (SE +/- 0.02, N = 3, min 8.5 / max 8.58)

OpenVINO

OpenVINO 2023.1 - Model: Vehicle Detection FP16-INT8 - Device: CPU - ms, fewer is better: Ubuntu 23.04: 22.10 (SE +/- 0.07, N = 3, min 21.96 / max 22.19; MIN: 14.51 / MAX: 233.42); Ubuntu 22.04: 21.32 (SE +/- 0.07, N = 3, min 21.18 / max 21.41; MIN: 14.91 / MAX: 228.75)
OpenVINO 2023.1 - Model: Vehicle Detection FP16-INT8 - Device: CPU - FPS, more is better: Ubuntu 23.04: 5056.47 (SE +/- 16.47, N = 3, min 5037.09 / max 5089.23); Ubuntu 22.04: 5242.15 (SE +/- 17.95, N = 3, min 5221.54 / max 5277.91)
OpenVINO 2023.1 - Model: Face Detection Retail FP16-INT8 - Device: CPU - ms, fewer is better: Ubuntu 23.04: 6.91 (SE +/- 0.02, N = 3, min 6.89 / max 6.94; MIN: 5.46 / MAX: 76.1); Ubuntu 22.04: 6.93 (SE +/- 0.03, N = 3, min 6.89 / max 7; MIN: 5.45 / MAX: 77.38)
OpenVINO 2023.1 - Model: Face Detection Retail FP16-INT8 - Device: CPU - FPS, more is better: Ubuntu 23.04: 16127.36 (SE +/- 25.06, N = 3, min 16077.61 / max 16157.49); Ubuntu 22.04: 16072.38 (SE +/- 105.07, N = 3, min 15865.55 / max 16207.92)
OpenVINO 2023.1 - Model: Handwritten English Recognition FP16-INT8 - Device: CPU - ms, fewer is better: Ubuntu 23.04: 46.53 (SE +/- 0.09, N = 3, min 46.35 / max 46.65; MIN: 40.48 / MAX: 136.39); Ubuntu 22.04: 47.27 (SE +/- 0.13, N = 3, min 47.02 / max 47.48; MIN: 40.88 / MAX: 153.12)
OpenVINO 2023.1 - Model: Handwritten English Recognition FP16-INT8 - Device: CPU - FPS, more is better: Ubuntu 23.04: 2404.86 (SE +/- 4.80, N = 3, min 2398.81 / max 2414.33); Ubuntu 22.04: 2367.16 (SE +/- 6.84, N = 3, min 2356.8 / max 2380.08)
[(CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl]

srsRAN Project

srsRAN Project is a complete ORAN-native 5G RAN solution created by Software Radio Systems (SRS). The srsRAN Project radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN Project 23.5 - Test: PUSCH Processor Benchmark, Throughput Thread - Mbps, more is better: Ubuntu 23.04: 190.8 (SE +/- 7.02, N = 12, min 114.1 / max 201.9); Ubuntu 22.04: 178.5 (SE +/- 5.59, N = 14, min 106.2 / max 187.2) [(CXX) g++ options: -march=native -mfma -O3 -fno-trapping-math -fno-math-errno -lgtest]

Appleseed

Appleseed is an open-source production rendering engine focused on physically-based global illumination, primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Disney Material - Seconds, fewer is better: Ubuntu 23.04: 91.16; Ubuntu 22.04: 85.42

Blender

Blender 3.6 - Blend File: Classroom - Compute: CPU-Only - Seconds, fewer is better: Ubuntu 23.04: 53.14 (SE +/- 0.25, N = 3, min 52.71 / max 53.56); Ubuntu 22.04: 53.71 (SE +/- 0.22, N = 3, min 53.28 / max 53.93)

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.
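Each figure below is a per-iteration time for one micro-benchmark from the suite. A minimal timeit sketch in the same spirit; the JSON workload here is illustrative, not pyperformance's exact json_loads benchmark:

    import json
    import timeit

    DOC = json.dumps({"ints": list(range(1000)), "text": "x" * 1000})

    # Time repeated json.loads calls, roughly what a json_loads-style micro-benchmark does.
    per_call = timeit.timeit(lambda: json.loads(DOC), number=10_000) / 10_000
    print(f"{per_call * 1e6:.1f} microseconds per json.loads call")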

PyPerformance 1.0.0 - Benchmark: 2to3 - Milliseconds, fewer is better: Ubuntu 23.04: 224 (SE +/- 1.45, N = 3, min 222 / max 227); Ubuntu 22.04: 280 (SE +/- 2.60, N = 3, min 276 / max 285)
PyPerformance 1.0.0 - Benchmark: raytrace - Milliseconds, fewer is better: Ubuntu 23.04: 209 (SE +/- 0.00, N = 3, min 209 / max 209); Ubuntu 22.04: 333 (SE +/- 0.58, N = 3, min 332 / max 334)

libxsmm

Libxsmm is an open-source library for specialized dense and sparse matrix operations and deep learning primitives. Libxsmm supports making use of Intel AMX, AVX-512, and other modern CPU instruction set capabilities. Learn more via the OpenBenchmarking.org test page.
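The M N K values below are the dimensions of the small matrix multiplications being exercised. For intuition only, a NumPy sketch of a single GEMM of that shape; libxsmm itself dispatches JIT-generated kernels, which this does not reproduce:

    import numpy as np

    M = N = K = 64                                   # matches the "M N K: 64" configuration
    a = np.random.rand(M, K)
    b = np.random.rand(K, N)
    c = a @ b                                        # one small GEMM of the benchmarked shape
    print(c.shape, f"~{2 * M * N * K / 1e6:.3f} MFLOPs per multiply")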

libxsmm 2-1.17-3645 - M N K: 64 - GFLOPS/s, more is better: Ubuntu 23.04: 1815.2 (SE +/- 41.92, N = 12, min 1628.3 / max 2106.3); Ubuntu 22.04: 1870.5 (SE +/- 28.89, N = 15, min 1710.7 / max 2077.4) [(CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -msse4.2]

SVT-AV1

SVT-AV1 1.7 - Encoder Mode: Preset 12 - Input: Bosphorus 4K - Frames Per Second, more is better: Ubuntu 23.04: 54.66 (SE +/- 0.35, N = 3, min 54.1 / max 55.31); Ubuntu 22.04: 55.95 (SE +/- 1.55, N = 15, min 47.22 / max 70.06) [(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq]

libxsmm

Libxsmm is an open-source library for specialized dense and sparse matrix operations and deep learning primitives. Libxsmm supports making use of Intel AMX, AVX-512, and other modern CPU instruction set capabilities. Learn more via the OpenBenchmarking.org test page.

libxsmm 2-1.17-3645 - M N K: 32 - GFLOPS/s, more is better: Ubuntu 23.04: 1196.8 (SE +/- 40.91, N = 12, min 1006.1 / max 1412.2); Ubuntu 22.04: 1328.9 (SE +/- 33.79, N = 15, min 1131.8 / max 1519.6) [(CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -msse4.2]

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/ and https://github.com/OpenRadioss/ModelExchange/tree/main/Examples. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2023.09.15 - Model: Cell Phone Drop Test - Seconds, fewer is better: Ubuntu 23.04: 33.21 (SE +/- 0.05, N = 3, min 33.13 / max 33.29); Ubuntu 22.04: 33.50 (SE +/- 0.21, N = 3, min 33.09 / max 33.8)

miniBUDE

MiniBUDE is a mini-application for the core computation of the Bristol University Docking Engine (BUDE). This test profile currently makes use of the OpenMP implementation of miniBUDE for CPU benchmarking. Learn more via the OpenBenchmarking.org test page.

miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM1 - Billion Interactions/s, more is better: Ubuntu 23.04: 101.15 (SE +/- 1.77, N = 15, min 87.43 / max 111.86); Ubuntu 22.04: 97.36 (SE +/- 2.22, N = 15, min 81.04 / max 108.38)
miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM1 - GFInst/s, more is better: Ubuntu 23.04: 2528.81 (SE +/- 44.32, N = 15, min 2185.77 / max 2796.58); Ubuntu 22.04: 2434.09 (SE +/- 55.61, N = 15, min 2025.93 / max 2709.41)
[(CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm]

uvg266

uvg266 is an open-source VVC/H.266 (Versatile Video Coding) encoder based on Kvazaar and developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

uvg266 0.4.1 - Video Input: Bosphorus 4K - Video Preset: Very Fast - Frames Per Second, more is better: Ubuntu 23.04: 15.30 (SE +/- 0.02, N = 3, min 15.26 / max 15.32); Ubuntu 22.04: 15.69 (SE +/- 0.02, N = 3, min 15.66 / max 15.73)
uvg266 0.4.1 - Video Input: Bosphorus 4K - Video Preset: Super Fast - Frames Per Second, more is better: Ubuntu 23.04: 15.85 (SE +/- 0.02, N = 3, min 15.82 / max 15.89); Ubuntu 22.04: 15.79 (SE +/- 0.09, N = 3, min 15.61 / max 15.9)

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.

Xcompact3d Incompact3d 2021-03-11 - Input: input.i3d 193 Cells Per Direction - Seconds, fewer is better: Ubuntu 23.04: 6.07561455 (SE +/- 0.04653610, N = 15, min 5.82 / max 6.35); Ubuntu 22.04: 6.44438407 (SE +/- 0.04176484, N = 15, min 6.15 / max 6.72) [(F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz]

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: go - Milliseconds, fewer is better: Ubuntu 23.04: 116 (SE +/- 0.88, N = 3, min 115 / max 118); Ubuntu 22.04: 176 (SE +/- 0.33, N = 3, min 175 / max 176)

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 1.0 - Encoder Speed: 10, Lossless - Seconds, fewer is better: Ubuntu 23.04: 6.969 (SE +/- 0.124, N = 15, min 6.29 / max 7.94); Ubuntu 22.04: 7.361 (SE +/- 0.120, N = 15, min 6.62 / max 7.96) [(CXX) g++ options: -O3 -fPIC -lm]

Intel Open Image Denoise

Intel Open Image Denoise 2.1 - Run: RTLightmap.hdr.4096x4096 - Device: CPU-Only - Images / Sec, more is better: Ubuntu 23.04: 1.15 (SE +/- 0.01, N = 3, min 1.13 / max 1.17); Ubuntu 22.04: 1.18 (SE +/- 0.01, N = 5, min 1.15 / max 1.22)

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 1.0 - Encoder Speed: 6, Lossless - Seconds, fewer is better: Ubuntu 23.04: 8.731 (SE +/- 0.156, N = 12, min 7.88 / max 9.7); Ubuntu 22.04: 8.604 (SE +/- 0.120, N = 12, min 7.75 / max 9.19) [(CXX) g++ options: -O3 -fPIC -lm]

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: json_loads - Milliseconds, fewer is better: Ubuntu 23.04: 15.7 (SE +/- 0.00, N = 3, min 15.7 / max 15.7); Ubuntu 22.04: 19.1 (SE +/- 0.03, N = 3, min 19 / max 19.1)

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: Jython - msec, fewer is better: Ubuntu 23.04: 3895 (SE +/- 231.46, N = 16, min 3113 / max 5533); Ubuntu 22.04: 3753 (SE +/- 133.04, N = 20, min 3319 / max 5502)

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2 - Video Input: Bosphorus 4K - Video Preset: Very Fast - Frames Per Second, more is better: Ubuntu 23.04: 18.49 (SE +/- 0.11, N = 3, min 18.28 / max 18.61); Ubuntu 22.04: 19.90 (SE +/- 0.18, N = 3, min 19.63 / max 20.23) [(CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt]

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: django_template - Milliseconds, fewer is better: Ubuntu 23.04: 27.7 (SE +/- 0.03, N = 3, min 27.6 / max 27.7); Ubuntu 22.04: 34.8 (SE +/- 0.03, N = 3, min 34.8 / max 34.9)

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.
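The thread count, buffer length, and filter length below refer to how many samples are pushed through an FIR filter per call. As a rough analogue only, a NumPy sketch of one such buffer being filtered; Liquid-DSP's own SIMD-optimized C kernels are what the scores actually measure:

    import numpy as np

    buffer_len, filter_len = 256, 512                       # mirrors one tested configuration
    taps = np.random.rand(filter_len).astype(np.float32)    # FIR filter coefficients
    samples = np.random.rand(buffer_len).astype(np.float32) # one buffer of input samples
    filtered = np.convolve(samples, taps, mode="same")      # a single FIR pass over the buffer
    print(filtered.shape)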

Liquid-DSP 1.6 - Threads: 224 - Buffer Length: 256 - Filter Length: 512 - samples/s, more is better: Ubuntu 23.04: 1083133333 (SE +/- 3337830.30, N = 3, min 1076500000 / max 1087100000); Ubuntu 22.04: 1051366667 (SE +/- 7056990.23, N = 3, min 1040900000 / max 1064800000)
Liquid-DSP 1.6 - Threads: 128 - Buffer Length: 256 - Filter Length: 512 - samples/s, more is better: Ubuntu 23.04: 946133333 (SE +/- 6672056.99, N = 3, min 937720000 / max 959310000); Ubuntu 22.04: 930420000 (SE +/- 7247277.65, N = 3, min 921720000 / max 944810000)
[(CC) gcc options: -O3 -pthread -lm -lc -lliquid]

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: nbody - Milliseconds, fewer is better: Ubuntu 23.04: 63.2 (SE +/- 0.17, N = 3, min 62.9 / max 63.5); Ubuntu 22.04: 94.4 (SE +/- 0.52, N = 3, min 93.4 / max 95.2)

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 1.6 - Threads: 64 - Buffer Length: 256 - Filter Length: 512 - samples/s, more is better: Ubuntu 23.04: 665030000 (SE +/- 2175224.13, N = 3, min 661070000 / max 668570000); Ubuntu 22.04: 667550000 (SE +/- 5495610.37, N = 3, min 659230000 / max 677930000)
Liquid-DSP 1.6 - Threads: 224 - Buffer Length: 256 - Filter Length: 32 - samples/s, more is better: Ubuntu 23.04: 4237900000 (SE +/- 13214007.72, N = 3, min 4211600000 / max 4253300000); Ubuntu 22.04: 4253766667 (SE +/- 15689947.24, N = 3, min 4233300000 / max 4284600000)
Liquid-DSP 1.6 - Threads: 64 - Buffer Length: 256 - Filter Length: 32 - samples/s, more is better: Ubuntu 23.04: 1989933333 (SE +/- 2382109.24, N = 3, min 1986100000 / max 1994300000); Ubuntu 22.04: 1975600000 (SE +/- 4460194.32, N = 3, min 1967200000 / max 1982400000)
Liquid-DSP 1.6 - Threads: 128 - Buffer Length: 256 - Filter Length: 32 - samples/s, more is better: Ubuntu 23.04: 3106300000 (SE +/- 10835281.88, N = 3, min 3093200000 / max 3127800000); Ubuntu 22.04: 3072100000 (SE +/- 13675647.46, N = 3, min 3045900000 / max 3092000000)
[(CC) gcc options: -O3 -pthread -lm -lc -lliquid]

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: pickle_pure_python - Milliseconds, fewer is better: Ubuntu 23.04: 216 (SE +/- 0.88, N = 3, min 215 / max 218); Ubuntu 22.04: 312 (SE +/- 0.00, N = 3, min 312 / max 312)

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2 - Video Input: Bosphorus 4K - Video Preset: Super Fast - Frames Per Second, more is better: Ubuntu 23.04: 22.21 (SE +/- 0.11, N = 3, min 22 / max 22.35); Ubuntu 22.04: 22.64 (SE +/- 0.15, N = 3, min 22.35 / max 22.83) [(CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt]

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: crypto_pyaes - Milliseconds, fewer is better: Ubuntu 23.04: 56.3 (SE +/- 0.03, N = 3, min 56.3 / max 56.4); Ubuntu 22.04: 81.1 (SE +/- 0.03, N = 3, min 81.1 / max 81.2)
PyPerformance 1.0.0 - Benchmark: regex_compile - Milliseconds, fewer is better: Ubuntu 23.04: 108 (SE +/- 0.33, N = 3, min 107 / max 108); Ubuntu 22.04: 133 (SE +/- 0.00, N = 3, min 133 / max 133)

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.

GROMACS 2023 - Implementation: MPI CPU - Input: water_GMX50_bare - Ns Per Day, more is better: Ubuntu 23.04: 12.16 (SE +/- 0.06, N = 3, min 12.04 / max 12.26); Ubuntu 22.04: 12.16 (SE +/- 0.01, N = 3, min 12.14 / max 12.18) [(CXX) g++ options: -O3]

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: float - Milliseconds, fewer is better: Ubuntu 23.04: 54.5 (SE +/- 0.13, N = 3, min 54.2 / max 54.6); Ubuntu 22.04: 76.9 (SE +/- 0.34, N = 3, min 76.5 / max 77.6)
PyPerformance 1.0.0 - Benchmark: chaos - Milliseconds, fewer is better: Ubuntu 23.04: 51.6 (SE +/- 0.07, N = 3, min 51.5 / max 51.7); Ubuntu 22.04: 74.0 (SE +/- 0.12, N = 3, min 73.8 / max 74.2)

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2 - Video Input: Bosphorus 4K - Video Preset: Ultra Fast - Frames Per Second, more is better: Ubuntu 23.04: 22.95 (SE +/- 0.22, N = 3, min 22.51 / max 23.23); Ubuntu 22.04: 23.91 (SE +/- 0.32, N = 3, min 23.4 / max 24.5) [(CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt]

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenFOAM 10 - Input: drivaerFastback, Small Mesh Size - Execution Time - Seconds, fewer is better: Ubuntu 23.04: 38.60; Ubuntu 22.04: 38.19
OpenFOAM 10 - Input: drivaerFastback, Small Mesh Size - Mesh Time - Seconds, fewer is better: Ubuntu 23.04: 39.42; Ubuntu 22.04: 30.99
[-ldynamicMesh -lsampling -lfiniteVolume -lmeshTools -lparallel -lregionModels; (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lgenericPatchFields -llagrangian -lOpenFOAM -ldl -lm]

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: pathlib - Milliseconds, fewer is better: Ubuntu 23.04: 13.0 (SE +/- 0.00, N = 3, min 13 / max 13); Ubuntu 22.04: 14.2 (SE +/- 0.03, N = 3, min 14.2 / max 14.3)

Blender

Blender 3.6 - Blend File: BMW27 - Compute: CPU-Only - Seconds, fewer is better: Ubuntu 23.04: 21.64 (SE +/- 0.26, N = 3, min 21.12 / max 21.96); Ubuntu 22.04: 21.99 (SE +/- 0.28, N = 3, min 21.63 / max 22.55)

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

PHPBench 0.8.1 - PHP Benchmark Suite - Score, more is better: Ubuntu 23.04: 953032 (SE +/- 1336.12, N = 3, min 950361 / max 954428); Ubuntu 22.04: 966634 (SE +/- 3637.24, N = 3, min 960375 / max 972974)

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. The system/openssl test profile relies on benchmarking the system/OS-supplied openssl binary rather than the pts/openssl test profile that uses a locally-built OpenSSL for benchmarking. Learn more via the OpenBenchmarking.org test page.
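The sign/s and verify/s results below come from the system's openssl speed runs. A sketch of an equivalent sign/verify round trip through the third-party cryptography package follows; the RSA-4096 with SHA-256 choice is an assumption about the workload rather than something taken from the result file:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
    message = b"phoronix"
    signature = key.sign(message, padding.PKCS1v15(), hashes.SHA256())                 # the sign/s side
    key.public_key().verify(signature, message, padding.PKCS1v15(), hashes.SHA256())   # the verify/s side
    print(len(signature), "byte signature verified")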

OpenSSL (verify/s, More Is Better)
  Ubuntu 23.04: 1355014.9 (SE +/- 20330.16, N = 3; Min: 1333676.3 / Avg: 1355014.93 / Max: 1395658.3)
  Ubuntu 22.04: 1349949.8 (SE +/- 3325.21, N = 3; Min: 1346417.5 / Avg: 1349949.77 / Max: 1356595.8)
  1. Ubuntu 23.04: OpenSSL 3.0.8 7 Feb 2023 (Library: OpenSSL 3.0.8 7 Feb 2023)
  2. Ubuntu 22.04: OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)

OpenSSL (sign/s, More Is Better)
  Ubuntu 23.04: 19701.8 (SE +/- 121.28, N = 3; Min: 19575 / Avg: 19701.83 / Max: 19944.3)
  Ubuntu 22.04: 20245.3 (SE +/- 184.59, N = 3; Min: 19894.3 / Avg: 20245.3 / Max: 20519.9)
  1. Ubuntu 23.04: OpenSSL 3.0.8 7 Feb 2023 (Library: OpenSSL 3.0.8 7 Feb 2023)
  2. Ubuntu 22.04: OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)

srsRAN Project

srsRAN Project is a complete ORAN-native 5G RAN solution created by Software Radio Systems (SRS). The srsRAN Project radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN Project 23.5 - Test: Downlink Processor Benchmark (Mbps, More Is Better)
  Ubuntu 23.04: 674.8 (SE +/- 2.92, N = 3; Min: 669.9 / Avg: 674.77 / Max: 680)
  Ubuntu 22.04: 678.6 (SE +/- 5.05, N = 3; Min: 669.2 / Avg: 678.57 / Max: 686.5)
  1. (CXX) g++ options: -march=native -mfma -O3 -fno-trapping-math -fno-math-errno -lgtest

PyBench

This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate of Python's average performance on a given system. This test profile runs PyBench for 20 rounds each time. Learn more via the OpenBenchmarking.org test page.

PyBench 2018-02-16 - Total For Average Test Times (Milliseconds, Fewer Is Better)
  Ubuntu 23.04: 646 (SE +/- 0.33, N = 3; Min: 646 / Avg: 646.33 / Max: 647)
  Ubuntu 22.04: 752 (SE +/- 0.88, N = 3; Min: 751 / Avg: 752.33 / Max: 754)

libavif avifenc

This is a test of the AOMedia libavif library, measuring the encoding of a JPEG image to the AV1 Image File Format (AVIF). Learn more via the OpenBenchmarking.org test page.
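
A comparable standalone encode with the avifenc command-line tool, assuming a JPEG named input.jpg (a placeholder) and the same encoder speed 6 used in the result below, might look like:

  # Encode a JPEG to AVIF at encoder speed 6 (placeholder file names)
  avifenc --speed 6 input.jpg output.avif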

libavif avifenc 1.0 - Encoder Speed: 6 (Seconds, Fewer Is Better)
  Ubuntu 23.04: 5.080 (SE +/- 0.033, N = 3; Min: 5.02 / Avg: 5.08 / Max: 5.12)
  Ubuntu 22.04: 4.889 (SE +/- 0.036, N = 15; Min: 4.52 / Avg: 4.89 / Max: 5.08)
  1. (CXX) g++ options: -O3 -fPIC -lm

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI-based, finite-difference, high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.
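
As an MPI code, a run on this dual-socket system would typically be launched through mpirun. A minimal sketch, assuming the built binary is named xcompact3d, one MPI rank per physical core, and the same input deck referenced below:

  # Launch one rank per physical core with the 129-cells-per-direction input deck
  mpirun -np 112 ./xcompact3d input.i3d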

Xcompact3d Incompact3d 2021-03-11 - Input: input.i3d 129 Cells Per Direction (Seconds, Fewer Is Better)
  Ubuntu 23.04: 2.00571096 (SE +/- 0.00950626, N = 3; Min: 2 / Avg: 2.01 / Max: 2.02)
  Ubuntu 22.04: 1.97589309 (SE +/- 0.01292206, N = 9; Min: 1.91 / Avg: 1.98 / Max: 2.03)
  1. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

CPU Power Consumption Monitor

CPU Power Consumption Monitor - Phoronix Test Suite System Monitoring (Watts)
  Ubuntu 22.04: Min: 83.14 / Avg: 424.58 / Max: 794.37

Meta Performance Per Watts

Meta Performance Per Watts - Performance Per Watts (Performance Per Watts, More Is Better)
  Ubuntu 22.04: 1894.53

153 Results Shown

Renaissance
libxsmm
Renaissance
Apache Hadoop
Renaissance
ClickHouse:
  100M Rows Hits Dataset, Third Run
  100M Rows Hits Dataset, Second Run
  100M Rows Hits Dataset, First Run / Cold Cache
Apache IoTDB:
  800 - 100 - 500 - 400:
    Average Latency
    point/sec
  800 - 100 - 800 - 400:
    Average Latency
    point/sec
TensorFlow
Renaissance
Apache Hadoop
PostgreSQL:
  100 - 1000 - Read Write - Average Latency
  100 - 1000 - Read Write
  100 - 800 - Read Write - Average Latency
  100 - 800 - Read Write
Renaissance
OSPRay
oneDNN
PostgreSQL:
  100 - 1000 - Read Only - Average Latency
  100 - 1000 - Read Only
  100 - 800 - Read Only - Average Latency
  100 - 800 - Read Only
Renaissance
High Performance Conjugate Gradient
VVenC:
  Bosphorus 4K - Faster
  Bosphorus 4K - Fast
Renaissance
SVT-AV1
Memcached
TensorFlow
OpenVINO:
  Machine Translation EN To DE FP16 - CPU:
    ms
    FPS
OSPRay
OpenVINO:
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU:
    ms
    FPS
srsRAN Project
OSPRay
Blender
Renaissance
Zstd Compression:
  19, Long Mode - Decompression Speed
  19, Long Mode - Compression Speed
Appleseed
Memcached
OpenVINO:
  Weld Porosity Detection FP16-INT8 - CPU:
    ms
    FPS
Appleseed
Zstd Compression:
  19 - Decompression Speed
  19 - Compression Speed
RocksDB
PyPerformance
DaCapo Benchmark
OSPRay
OpenRadioss
Numpy Benchmark
OpenRadioss
Zstd Compression:
  8 - Decompression Speed
  8 - Compression Speed
miniBUDE:
  OpenMP - BM2:
    Billion Interactions/s
    GFInst/s
Redis
OpenRadioss
OSPRay
RocksDB
High Performance Conjugate Gradient
NAMD
OpenRadioss
OpenFOAM:
  drivaerFastback, Medium Mesh Size - Execution Time
  drivaerFastback, Medium Mesh Size - Mesh Time
RocksDB
OpenVINO:
  Face Detection FP16-INT8 - CPU:
    ms
    FPS
OpenRadioss
Redis
OpenVINO:
  Person Vehicle Bike Detection FP16 - CPU:
    ms
    FPS
OSPRay
nginx
Embree
oneDNN
Embree
OpenVINO:
  Road Segmentation ADAS FP16-INT8 - CPU:
    ms
    FPS
SVT-AV1
7-Zip Compression:
  Decompression Rating
  Compression Rating
uvg266
SVT-AV1
uvg266
OpenVINO:
  Person Detection FP16 - CPU:
    ms
    FPS
Intel Open Image Denoise:
  RT.hdr_alb_nrm.3840x2160 - CPU-Only
  RT.ldr_alb_nrm.3840x2160 - CPU-Only
uvg266
OpenVINO:
  Vehicle Detection FP16-INT8 - CPU:
    ms
    FPS
  Face Detection Retail FP16-INT8 - CPU:
    ms
    FPS
  Handwritten English Recognition FP16-INT8 - CPU:
    ms
    FPS
srsRAN Project
Appleseed
Blender
PyPerformance:
  2to3
  raytrace
libxsmm
SVT-AV1
libxsmm
OpenRadioss
miniBUDE:
  OpenMP - BM1:
    Billion Interactions/s
    GFInst/s
uvg266:
  Bosphorus 4K - Very Fast
  Bosphorus 4K - Super Fast
Xcompact3d Incompact3d
PyPerformance
libavif avifenc
Intel Open Image Denoise
libavif avifenc
PyPerformance
DaCapo Benchmark
Kvazaar
PyPerformance
Liquid-DSP:
  224 - 256 - 512
  128 - 256 - 512
PyPerformance
Liquid-DSP:
  64 - 256 - 512
  224 - 256 - 32
  64 - 256 - 32
  128 - 256 - 32
PyPerformance
Kvazaar
PyPerformance:
  crypto_pyaes
  regex_compile
GROMACS
PyPerformance:
  float
  chaos
Kvazaar
OpenFOAM:
  drivaerFastback, Small Mesh Size - Execution Time
  drivaerFastback, Small Mesh Size - Mesh Time
PyPerformance
Blender
PHPBench
OpenSSL:
  verify/s
  sign/s
srsRAN Project
PyBench
libavif avifenc
Xcompact3d Incompact3d
CPU Power Consumption Monitor:
  Phoronix Test Suite System Monitoring
Meta Performance Per Watts:
  Performance Per Watts