Xeon Max Linux Distros

2 x Intel Xeon Max 9480 testing with a Supermicro X13DEM v1.10 (1.3 BIOS) and ASPEED on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2310151-NE-2310131NE09

Tests in this result file span the following categories:

AV1 2 Tests
C++ Boost Tests 2 Tests
C/C++ Compiler Tests 6 Tests
CPU Massive 14 Tests
Creator Workloads 12 Tests
Database Test Suite 7 Tests
Encoding 5 Tests
Fortran Tests 2 Tests
Game Development 2 Tests
HPC - High Performance Computing 11 Tests
Java 2 Tests
Common Kernel Benchmarks 2 Tests
Machine Learning 4 Tests
Molecular Dynamics 4 Tests
MPI Benchmarks 3 Tests
Multi-Core 17 Tests
NVIDIA GPU Compute 2 Tests
Intel oneAPI 5 Tests
OpenMPI Tests 5 Tests
Programmer / Developer System Benchmarks 2 Tests
Python 3 Tests
Renderers 3 Tests
Scientific Computing 4 Tests
Software Defined Radio 2 Tests
Server 9 Tests
Server CPU Tests 12 Tests
Single-Threaded 5 Tests
Video Encoding 5 Tests

Result Runs

Ubuntu 23.04: run October 12 2023; test duration: 1 Day, 2 Hours, 41 Minutes
Ubuntu 22.04: run October 13 2023; test duration: 1 Day, 6 Hours, 32 Minutes
Average test duration per run: 1 Day, 4 Hours, 37 Minutes


Xeon Max Linux Distros - System Details

Common to both runs:
  Processor: 2 x Intel Xeon Max 9480 @ 3.50GHz (112 Cores / 224 Threads)
  Motherboard: Supermicro X13DEM v1.10 (1.3 BIOS)
  Chipset: Intel Device 1bce
  Memory: 512GB
  Disk: 2 x 7682GB INTEL SSDPF2KX076TZ
  Graphics: ASPEED
  Network: 2 x Broadcom BCM57508 NetXtreme-E 10Gb/25Gb/40Gb/50Gb/100Gb/200Gb
  Monitor: VE228
  Kernel: 6.2.0-34-generic (x86_64)
  File-System: ext4

Ubuntu 23.04:
  OS: Ubuntu 23.04
  Desktop: GNOME Shell 44.3
  Display Server: X Server 1.21.1.7
  Compiler: GCC 12.3.0
  Screen Resolution: 1024x768

Ubuntu 22.04:
  OS: Ubuntu 22.04
  Desktop: GNOME Shell 42.9
  Display Server: X Server 1.21.1.4
  Vulkan: 1.3.238
  Compiler: GCC 11.4.0
  Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: madvise

Compiler Details:
  Ubuntu 23.04: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-DAPbBt/gcc-12-12.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-DAPbBt/gcc-12-12.3.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
  Ubuntu 22.04: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details: Scaling Governor: intel_cpufreq schedutil - CPU Microcode: 0x2c000271

Java Details:
  Ubuntu 23.04: OpenJDK Runtime Environment (build 17.0.8.1+1-Ubuntu-0ubuntu123.04)
  Ubuntu 22.04: OpenJDK Runtime Environment (build 11.0.20.1+1-post-Ubuntu-0ubuntu122.04)

Python Details:
  Ubuntu 23.04: Python 3.11.4
  Ubuntu 22.04: Python 3.10.12

Security Details: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected

[Comparison chart: Ubuntu 23.04 vs. Ubuntu 22.04, per-test percentage deltas (baseline out to +67.8%). The largest measured differences include Apache Hadoop Create - 100 - 1000000 (90.2%), Renaissance Savina Reactors.IO (74.3%), and the PyPerformance Python benchmarks raytrace (59.3%), go (51.7%), nbody (49.4%), pickle_pure_python (44.4%), crypto_pyaes (44%), chaos (43.4%), and float (41.1%). Per-test results are charted below.]
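The percentage deltas in the comparison chart are relative differences between the two configurations' results for each test. As an illustration (the helper name is mine; the input values are the Apache Hadoop Create - 100 - 1000000 ops/sec results reported later in this file), a minimal Python sketch:

```python
def percent_delta(a: float, b: float) -> float:
    """Relative difference of the larger result over the smaller, in percent."""
    lo, hi = sorted((a, b))
    return (hi / lo - 1.0) * 100.0

# Apache Hadoop Create - 100 - 1000000 (ops/sec): 6163 vs. 3241
delta = percent_delta(6163, 3241)
print(f"{delta:.1f}%")  # 90.2%, matching the chart's largest delta
```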

[Detailed result table: the complete Ubuntu 23.04 and Ubuntu 22.04 results for every test in this comparison (7-Zip Compression, Apache Hadoop, Apache IoTDB, Appleseed, Blender, ClickHouse, DaCapo, Embree, GROMACS, HPCG, Intel Open Image Denoise, Kvazaar, libavif avifenc, libxsmm, Liquid-DSP, Memcached, miniBUDE, NAMD, nginx, Numpy, oneDNN, OpenFOAM, OpenRadioss, OpenSSL, OpenVINO, OSPRay, PHPBench, pgbench, PyBench, PyPerformance, Redis, Renaissance, RocksDB, srsRAN, SVT-AV1, TensorFlow, uvg266, VVenC, Xcompact3d Incompact3d, and Zstd Compression). Individual results are charted below.]

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.
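Each result below is reported as a mean over N runs together with a standard error (SE). A minimal sketch of how such an SE figure is derived (sample standard deviation over the square root of N; the function name and sample values are illustrative):

```python
from math import sqrt
from statistics import mean, stdev

def mean_with_se(samples: list[float]) -> tuple[float, float]:
    """Return (mean, standard error of the mean) for a set of run results."""
    n = len(samples)
    return mean(samples), stdev(samples) / sqrt(n)

# e.g. three hypothetical MIPS results from repeated runs
m, se = mean_with_se([254000.0, 255000.0, 256000.0])
print(f"{m:.0f} (SE +/- {se:.2f}, N = 3)")  # 255000 (SE +/- 577.35, N = 3)
```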

7-Zip Compression 22.01 - Test: Compression Rating (MIPS, More Is Better)
Ubuntu 22.04: 256825 (SE +/- 3527.52, N = 3)
Ubuntu 23.04: 254910 (SE +/- 1028.75, N = 3)
(CXX) g++ options: -lpthread -ldl -O2 -fPIC

7-Zip Compression 22.01 - Test: Decompression Rating (MIPS, More Is Better)
Ubuntu 22.04: 379347 (SE +/- 5913.92, N = 3)
Ubuntu 23.04: 373819 (SE +/- 8855.12, N = 3)
(CXX) g++ options: -lpthread -ldl -O2 -fPIC

Apache Hadoop

This is a benchmark of Apache Hadoop, making use of its built-in name-node throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6 - Operation: Delete - Threads: 50 - Files: 1000000 (Ops per sec, More Is Better)
Ubuntu 22.04: 22475 (SE +/- 2293.11, N = 9)
Ubuntu 23.04: 31912 (SE +/- 452.80, N = 3)

Apache Hadoop 3.3.6 - Operation: Create - Threads: 100 - Files: 1000000 (Ops per sec, More Is Better)
Ubuntu 22.04: 6163 (SE +/- 1101.28, N = 9)
Ubuntu 23.04: 3241 (SE +/- 17.50, N = 3)

Apache IoTDB

Apache IoTDB is a time series database, and this benchmark is facilitated using the IoT Benchmark [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400 (point/sec, More Is Better)
Ubuntu 22.04: 51368423 (SE +/- 506019.03, N = 12)
Ubuntu 23.04: 43280940 (SE +/- 474661.21, N = 5)

Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400 (Average Latency, Fewer Is Better)
Ubuntu 22.04: 351.83 (SE +/- 4.16, N = 12, MAX: 33320.78)
Ubuntu 23.04: 417.81 (SE +/- 4.63, N = 5, MAX: 31785.53)

Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 400 (point/sec, More Is Better)
Ubuntu 22.04: 51342262 (SE +/- 1382666.67, N = 9)
Ubuntu 23.04: 47389394 (SE +/- 314308.71, N = 3)

Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 400 (Average Latency, Fewer Is Better)
Ubuntu 22.04: 577.21 (SE +/- 20.90, N = 9, MAX: 48250.35)
Ubuntu 23.04: 624.38 (SE +/- 2.92, N = 3, MAX: 34185.88)

Appleseed

Appleseed is an open-source production renderer focused on physically-based global illumination, primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Emily (Seconds, Fewer Is Better)
Ubuntu 22.04: 286.26
Ubuntu 23.04: 284.67

Appleseed 2.0 Beta - Scene: Disney Material (Seconds, Fewer Is Better)
Ubuntu 22.04: 85.42
Ubuntu 23.04: 91.16

Appleseed 2.0 Beta - Scene: Material Tester (Seconds, Fewer Is Better)
Ubuntu 22.04: 347.92
Ubuntu 23.04: 376.10

Blender

Blender 3.6 - Blend File: BMW27 - Compute: CPU-Only (Seconds, Fewer Is Better)
Ubuntu 22.04: 21.99 (SE +/- 0.28, N = 3)
Ubuntu 23.04: 21.64 (SE +/- 0.26, N = 3)

Blender 3.6 - Blend File: Classroom - Compute: CPU-Only (Seconds, Fewer Is Better)
Ubuntu 22.04: 53.71 (SE +/- 0.22, N = 3)
Ubuntu 23.04: 53.14 (SE +/- 0.25, N = 3)

Blender 3.6 - Blend File: Barbershop - Compute: CPU-Only (Seconds, Fewer Is Better)
Ubuntu 22.04: 262.72 (SE +/- 2.00, N = 3)
Ubuntu 23.04: 261.52 (SE +/- 2.68, N = 3)

ClickHouse

ClickHouse is an open-source, high performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ / https://github.com/ClickHouse/ClickBench/tree/main/clickhouse with the 100 million rows web analytics dataset. The reported value is the query processing time using the geometric mean of all separate queries performed as an aggregate. Learn more via the OpenBenchmarking.org test page.
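The ClickHouse figure is the geometric mean of the individual query timings, as noted above. A minimal sketch of that aggregation (the function name and per-query values are made up for illustration):

```python
from math import prod

def geo_mean(values: list[float]) -> float:
    """Geometric mean: the n-th root of the product of n values."""
    return prod(values) ** (1.0 / len(values))

# hypothetical per-query queries-per-minute figures
print(round(geo_mean([100.0, 200.0, 400.0]), 2))  # 200.0
```

The geometric mean is preferred here over the arithmetic mean because it keeps one extremely fast or slow query from dominating the aggregate.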

ClickHouse 22.12.3.5 - 100M Rows Hits Dataset, First Run / Cold Cache (Queries Per Minute, Geo Mean, More Is Better)
Ubuntu 22.04: 156.54 (SE +/- 2.07, N = 3, MIN: 9.6 / MAX: 2500)
Ubuntu 23.04: 155.80 (SE +/- 2.03, N = 9, MIN: 10.69 / MAX: 1714.29)

ClickHouse 22.12.3.5 - 100M Rows Hits Dataset, Second Run (Queries Per Minute, Geo Mean, More Is Better)
Ubuntu 22.04: 172.55 (SE +/- 3.35, N = 3, MIN: 24.51 / MAX: 2068.97)
Ubuntu 23.04: 170.24 (SE +/- 2.63, N = 9, MIN: 14.63 / MAX: 1875)

ClickHouse 22.12.3.5 - 100M Rows Hits Dataset, Third Run (Queries Per Minute, Geo Mean, More Is Better)
Ubuntu 22.04: 177.66 (SE +/- 4.50, N = 3, MIN: 31.4 / MAX: 1621.62)
Ubuntu 23.04: 173.12 (SE +/- 3.29, N = 9, MIN: 18.26 / MAX: 1621.62)

CPU Power Consumption Monitor

CPU Power Consumption Monitor (Watts, Phoronix Test Suite System Monitoring)
Ubuntu 22.04: Min: 83.14 / Avg: 424.58 / Max: 794.37

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: H2 (msec, Fewer Is Better)
Ubuntu 22.04: 22809 (SE +/- 393.58, N = 20)
Ubuntu 23.04: 19866 (SE +/- 401.72, N = 20)

DaCapo Benchmark 9.12-MR1 - Java Test: Jython (msec, Fewer Is Better)
Ubuntu 22.04: 3753 (SE +/- 133.04, N = 20)
Ubuntu 23.04: 3895 (SE +/- 231.46, N = 16)

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL) and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 4.1 - Binary: Pathtracer ISPC - Model: Crown (Frames Per Second, More Is Better)
Ubuntu 22.04: 36.74 (SE +/- 0.95, N = 12, MIN: 22.41 / MAX: 55.39)
Ubuntu 23.04: 36.76 (SE +/- 0.80, N = 12, MIN: 25.45 / MAX: 57.06)

Embree 4.1 - Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second, More Is Better)
Ubuntu 22.04: 38.74 (SE +/- 0.76, N = 15, MIN: 30.62 / MAX: 58.26)
Ubuntu 23.04: 37.06 (SE +/- 0.45, N = 12, MIN: 28.65 / MAX: 48.65)

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.

GROMACS 2023 - Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day, More Is Better)
Ubuntu 22.04: 12.16 (SE +/- 0.01, N = 3)
Ubuntu 23.04: 12.16 (SE +/- 0.06, N = 3)
(CXX) g++ options: -O3

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient, a newer scientific benchmark from Sandia National Labs focused on supercomputer testing with modern real-world workloads compared to HPCC. Learn more via the OpenBenchmarking.org test page.

High Performance Conjugate Gradient 3.1 - X Y Z: 104 104 104 - RT: 60 (GFLOP/s, More Is Better)
Ubuntu 22.04: 98.52 (SE +/- 0.16, N = 3)
Ubuntu 23.04: 102.68 (SE +/- 0.72, N = 3)
(CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi_cxx -lmpi

High Performance Conjugate Gradient 3.1 - X Y Z: 144 144 144 - RT: 60 (GFLOP/s, More Is Better)
Ubuntu 22.04: 75.88 (SE +/- 0.58, N = 3)
Ubuntu 23.04: 77.36 (SE +/- 0.77, N = 6)
(CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi_cxx -lmpi

Intel Open Image Denoise

Intel Open Image Denoise 2.1 - Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Only (Images / Sec, More Is Better)
Ubuntu 22.04: 2.27 (SE +/- 0.03, N = 15)
Ubuntu 23.04: 2.23 (SE +/- 0.03, N = 15)

Intel Open Image Denoise 2.1 - Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only (Images / Sec, More Is Better)
Ubuntu 22.04: 2.26 (SE +/- 0.02, N = 15)
Ubuntu 23.04: 2.24 (SE +/- 0.03, N = 15)

Intel Open Image Denoise 2.1 - Run: RTLightmap.hdr.4096x4096 - Device: CPU-Only (Images / Sec, More Is Better)
Ubuntu 22.04: 1.18 (SE +/- 0.01, N = 5)
Ubuntu 23.04: 1.15 (SE +/- 0.01, N = 3)

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.2 - Video Input: Bosphorus 4K - Video Preset: Very Fast (Frames Per Second, More Is Better)
Ubuntu 22.04: 19.90 (SE +/- 0.18, N = 3)
Ubuntu 23.04: 18.49 (SE +/- 0.11, N = 3)
(CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Kvazaar 2.2 - Video Input: Bosphorus 4K - Video Preset: Super Fast (Frames Per Second, More Is Better)
Ubuntu 22.04: 22.64 (SE +/- 0.15, N = 3)
Ubuntu 23.04: 22.21 (SE +/- 0.11, N = 3)
(CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Kvazaar 2.2 - Video Input: Bosphorus 4K - Video Preset: Ultra Fast (Frames Per Second, More Is Better)
Ubuntu 22.04: 23.91 (SE +/- 0.32, N = 3)
Ubuntu 23.04: 22.95 (SE +/- 0.22, N = 3)
(CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 1.0 - Encoder Speed: 6 (Seconds, Fewer Is Better)
Ubuntu 22.04: 4.889 (SE +/- 0.036, N = 15)
Ubuntu 23.04: 5.080 (SE +/- 0.033, N = 3)
(CXX) g++ options: -O3 -fPIC -lm

libavif avifenc 1.0 - Encoder Speed: 6, Lossless (Seconds, Fewer Is Better)
Ubuntu 22.04: 8.604 (SE +/- 0.120, N = 12)
Ubuntu 23.04: 8.731 (SE +/- 0.156, N = 12)
(CXX) g++ options: -O3 -fPIC -lm

libavif avifenc 1.0 - Encoder Speed: 10, Lossless (Seconds, Fewer Is Better)
Ubuntu 22.04: 7.361 (SE +/- 0.120, N = 15)
Ubuntu 23.04: 6.969 (SE +/- 0.124, N = 15)
(CXX) g++ options: -O3 -fPIC -lm

libxsmm

Libxsmm is an open-source library for specialized dense and sparse matrix operations and deep learning primitives. Libxsmm supports making use of Intel AMX, AVX-512, and other modern CPU instruction set capabilities. Learn more via the OpenBenchmarking.org test page.

libxsmm 2-1.17-3645 - M N K: 128 (GFLOPS/s, More Is Better)
Ubuntu 22.04: 1928.1 (SE +/- 27.11, N = 3)
Ubuntu 23.04: 1930.8 (SE +/- 44.57, N = 6)
(CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -msse4.2

libxsmm 2-1.17-3645 - M N K: 32 (GFLOPS/s, More Is Better)
Ubuntu 22.04: 1328.9 (SE +/- 33.79, N = 15)
Ubuntu 23.04: 1196.8 (SE +/- 40.91, N = 12)
(CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -msse4.2

libxsmm 2-1.17-3645 - M N K: 64 (GFLOPS/s, More Is Better)
Ubuntu 22.04: 1870.5 (SE +/- 28.89, N = 15)
Ubuntu 23.04: 1815.2 (SE +/- 41.92, N = 12)
(CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -msse4.2

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 1.6 - Threads: 64 - Buffer Length: 256 - Filter Length: 32 (samples/s, More Is Better)
Ubuntu 22.04: 1975600000 (SE +/- 4460194.32, N = 3)
Ubuntu 23.04: 1989933333 (SE +/- 2382109.24, N = 3)
(CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP 1.6 - Threads: 128 - Buffer Length: 256 - Filter Length: 32 (samples/s, More Is Better)
Ubuntu 22.04: 3072100000 (SE +/- 13675647.46, N = 3)
Ubuntu 23.04: 3106300000 (SE +/- 10835281.88, N = 3)
(CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP 1.6 - Threads: 224 - Buffer Length: 256 - Filter Length: 32 (samples/s, More Is Better)
Ubuntu 22.04: 4253766667 (SE +/- 15689947.24, N = 3)
Ubuntu 23.04: 4237900000 (SE +/- 13214007.72, N = 3)
(CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP 1.6 - Threads: 64 - Buffer Length: 256 - Filter Length: 512 (samples/s, More Is Better)
Ubuntu 22.04: 667550000 (SE +/- 5495610.37, N = 3)
Ubuntu 23.04: 665030000 (SE +/- 2175224.13, N = 3)
(CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP 1.6 - Threads: 128 - Buffer Length: 256 - Filter Length: 512 (samples/s, More Is Better)
Ubuntu 22.04: 930420000 (SE +/- 7247277.65, N = 3)
Ubuntu 23.04: 946133333 (SE +/- 6672056.99, N = 3)
(CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP 1.6 - Threads: 224 - Buffer Length: 256 - Filter Length: 512 (samples/s, More Is Better)
Ubuntu 22.04: 1051366667 (SE +/- 7056990.23, N = 3)
Ubuntu 23.04: 1083133333 (SE +/- 3337830.30, N = 3)
(CC) gcc options: -O3 -pthread -lm -lc -lliquid

Memcached

Memcached is a high performance, distributed memory object caching system. This Memcached test profile makes use of memtier_benchmark for executing this CPU/memory-focused server benchmark. Learn more via the OpenBenchmarking.org test page.

Memcached 1.6.19 - Set To Get Ratio: 1:10 (Ops/sec, More Is Better)
Ubuntu 22.04: 623717.41 (SE +/- 7813.87, N = 15)
Ubuntu 23.04: 599628.31 (SE +/- 10608.70, N = 15)
(CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Memcached 1.6.19 - Set To Get Ratio: 1:100 (Ops/sec, More Is Better)
Ubuntu 22.04: 500744.65 (SE +/- 3294.51, N = 3)
Ubuntu 23.04: 501826.54 (SE +/- 8153.99, N = 15)
(CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Meta Performance Per Watts

Meta Performance Per Watts (Performance Per Watts, More Is Better)
Ubuntu 22.04: 1894.53

miniBUDE

MiniBUDE is a mini application for the core computation of the Bristol University Docking Engine (BUDE). This test profile currently makes use of the OpenMP implementation of miniBUDE for CPU benchmarking. Learn more via the OpenBenchmarking.org test page.

miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM1 (GFInst/s, More Is Better)
Ubuntu 22.04: 2434.09 (SE +/- 55.61, N = 15)
Ubuntu 23.04: 2528.81 (SE +/- 44.32, N = 15)
(CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM1 (Billion Interactions/s, More Is Better)
Ubuntu 22.04: 97.36 (SE +/- 2.22, N = 15)
Ubuntu 23.04: 101.15 (SE +/- 1.77, N = 15)
(CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM2 (GFInst/s, More Is Better)
Ubuntu 22.04: 2639.29 (SE +/- 31.78, N = 4)
Ubuntu 23.04: 2776.75 (SE +/- 21.23, N = 15)
(CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM2 (Billion Interactions/s, More Is Better)
Ubuntu 22.04: 105.57 (SE +/- 1.27, N = 4)
Ubuntu 23.04: 111.07 (SE +/- 0.85, N = 15)
(CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgdays/ns, Fewer Is BetterNAMD 2.14ATPase Simulation - 327,506 AtomsUbuntu 22.04Ubuntu 23.040.07350.1470.22050.2940.3675SE +/- 0.00473, N = 15SE +/- 0.00600, N = 140.322250.32646

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web server. This Nginx web server test profile makes use of the wrk program to issue HTTP requests over a fixed period of time with a configurable number of concurrent clients/connections. HTTPS with a self-signed OpenSSL certificate is used by this test for local benchmarking. Learn more via the OpenBenchmarking.org test page.
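As a rough illustration of what wrk does here — many concurrent clients hammering an HTTP endpoint and reporting requests per second — the following stdlib-only Python sketch spins up a toy server and measures throughput. The handler, client count, and request count are hypothetical; a real run targets nginx over HTTPS with 500 connections.

```python
import http.server
import threading
import time
import urllib.request

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the benchmark loop quiet
        pass

# Port 0 lets the OS pick a free port; real runs would target nginx instead.
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

clients, per_client = 8, 25  # hypothetical; the real test uses 500 connections
results = [0] * clients

def worker(i):
    for _ in range(per_client):
        with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as r:
            r.read()
        results[i] += 1

start = time.perf_counter()
threads = [threading.Thread(target=worker, args=(i,)) for i in range(clients)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start
requests_per_second = sum(results) / elapsed
server.shutdown()
```

The Requests Per Second figures in the graphs below come from the same basic ratio — completed requests divided by wall-clock duration — just at far higher concurrency.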

OpenBenchmarking.orgRequests Per Second, More Is Betternginx 1.23.2Connections: 500Ubuntu 22.04Ubuntu 23.0430K60K90K120K150KSE +/- 836.61, N = 3SE +/- 1330.04, N = 3131549.83124724.581. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

Numpy Benchmark

This is a test to obtain the general Numpy performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgScore, More Is BetterNumpy BenchmarkUbuntu 22.04Ubuntu 23.04100200300400500SE +/- 4.01, N = 3SE +/- 5.52, N = 4439.90472.63

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPUUbuntu 22.04Ubuntu 23.043K6K9K12K15KSE +/- 727.41, N = 10SE +/- 277.46, N = 915662.2214593.50-lpthread - MIN: 9645.27MIN: 8664.051. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.3Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPUUbuntu 22.04Ubuntu 23.0430060090012001500SE +/- 7.30, N = 3SE +/- 8.91, N = 31355.751379.26-lpthread - MIN: 1217.23MIN: 1265.951. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenFOAM 10Input: drivaerFastback, Small Mesh Size - Mesh TimeUbuntu 22.04Ubuntu 23.0491827364530.9939.42-lfiniteVolume -lmeshTools -lparallel -lregionModels-ldynamicMesh -lsampling1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -llagrangian -lgenericPatchFields -lOpenFOAM -ldl -lm

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenFOAM 10Input: drivaerFastback, Small Mesh Size - Execution TimeUbuntu 22.04Ubuntu 23.0491827364538.1938.60-lfiniteVolume -lmeshTools -lparallel -lregionModels-ldynamicMesh -lsampling1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -llagrangian -lgenericPatchFields -lOpenFOAM -ldl -lm

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenFOAM 10Input: drivaerFastback, Medium Mesh Size - Mesh TimeUbuntu 22.04Ubuntu 23.044080120160200151.03174.83-lfiniteVolume -lmeshTools -lparallel -lregionModels-ldynamicMesh -lsampling1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -llagrangian -lgenericPatchFields -lOpenFOAM -ldl -lm

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenFOAM 10Input: drivaerFastback, Medium Mesh Size - Execution TimeUbuntu 22.04Ubuntu 23.044080120160200200.77202.91-lfiniteVolume -lmeshTools -lparallel -lregionModels-ldynamicMesh -lsampling1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -llagrangian -lgenericPatchFields -lOpenFOAM -ldl -lm

OpenRadioss

OpenRadioss is an open-source AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/ and https://github.com/OpenRadioss/ModelExchange/tree/main/Examples. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenRadioss 2023.09.15Model: Bumper BeamUbuntu 22.04Ubuntu 23.0420406080100SE +/- 0.44, N = 3SE +/- 0.47, N = 397.3995.25

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenRadioss 2023.09.15Model: Chrysler Neon 1MUbuntu 22.04Ubuntu 23.04306090120150SE +/- 0.40, N = 3SE +/- 1.48, N = 3132.86131.14

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenRadioss 2023.09.15Model: Cell Phone Drop TestUbuntu 22.04Ubuntu 23.04816243240SE +/- 0.21, N = 3SE +/- 0.05, N = 333.5033.21

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenRadioss 2023.09.15Model: Bird Strike on WindshieldUbuntu 22.04Ubuntu 23.04306090120150SE +/- 1.23, N = 3SE +/- 0.61, N = 3157.25157.19

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenRadioss 2023.09.15Model: Rubber O-Ring Seal InstallationUbuntu 22.04Ubuntu 23.04306090120150SE +/- 0.52, N = 3SE +/- 0.91, N = 3117.80116.57

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenRadioss 2023.09.15Model: INIVOL and Fluid Structure Interaction Drop ContainerUbuntu 22.04Ubuntu 23.04306090120150SE +/- 0.46, N = 3SE +/- 0.65, N = 3142.47137.43

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. The system/openssl test profile relies on benchmarking the system/OS-supplied openssl binary rather than the pts/openssl test profile that uses the locally-built OpenSSL for benchmarking. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgsign/s, More Is BetterOpenSSLUbuntu 22.04Ubuntu 23.044K8K12K16K20KSE +/- 184.59, N = 3SE +/- 121.28, N = 320245.319701.81. Ubuntu 22.04: OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)2. Ubuntu 23.04: OpenSSL 3.0.8 7 Feb 2023 (Library: OpenSSL 3.0.8 7 Feb 2023)

OpenBenchmarking.orgverify/s, More Is BetterOpenSSLUbuntu 22.04Ubuntu 23.04300K600K900K1200K1500KSE +/- 3325.21, N = 3SE +/- 20330.16, N = 31349949.81355014.91. Ubuntu 22.04: OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)2. Ubuntu 23.04: OpenSSL 3.0.8 7 Feb 2023 (Library: OpenSSL 3.0.8 7 Feb 2023)

OpenVINO

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Person Detection FP16 - Device: CPUUbuntu 22.04Ubuntu 23.04100200300400500SE +/- 1.53, N = 3SE +/- 3.50, N = 3460.12467.061. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Person Detection FP16 - Device: CPUUbuntu 22.04Ubuntu 23.0420406080100SE +/- 0.26, N = 3SE +/- 0.58, N = 380.3179.11MIN: 49.45 / MAX: 540.74MIN: 47.44 / MAX: 636.261. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Face Detection FP16-INT8 - Device: CPUUbuntu 22.04Ubuntu 23.0470140210280350SE +/- 0.21, N = 3SE +/- 2.19, N = 3327.26325.271. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Face Detection FP16-INT8 - Device: CPUUbuntu 22.04Ubuntu 23.0470140210280350SE +/- 0.28, N = 3SE +/- 2.34, N = 3341.34343.39MIN: 244.79 / MAX: 669.32MIN: 244.44 / MAX: 745.091. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Vehicle Detection FP16-INT8 - Device: CPUUbuntu 22.04Ubuntu 23.0411002200330044005500SE +/- 17.95, N = 3SE +/- 16.47, N = 35242.155056.471. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Vehicle Detection FP16-INT8 - Device: CPUUbuntu 22.04Ubuntu 23.04510152025SE +/- 0.07, N = 3SE +/- 0.07, N = 321.3222.10MIN: 14.91 / MAX: 228.75MIN: 14.51 / MAX: 233.421. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Face Detection Retail FP16-INT8 - Device: CPUUbuntu 22.04Ubuntu 23.043K6K9K12K15KSE +/- 105.07, N = 3SE +/- 25.06, N = 316072.3816127.361. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Face Detection Retail FP16-INT8 - Device: CPUUbuntu 22.04Ubuntu 23.04246810SE +/- 0.03, N = 3SE +/- 0.02, N = 36.936.91MIN: 5.45 / MAX: 77.38MIN: 5.46 / MAX: 76.11. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Road Segmentation ADAS FP16-INT8 - Device: CPUUbuntu 22.04Ubuntu 23.0430060090012001500SE +/- 1.27, N = 3SE +/- 1.92, N = 31502.521514.351. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Road Segmentation ADAS FP16-INT8 - Device: CPUUbuntu 22.04Ubuntu 23.0420406080100SE +/- 0.08, N = 3SE +/- 0.09, N = 374.4073.83MIN: 54.01 / MAX: 402.31MIN: 53.98 / MAX: 322.991. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Machine Translation EN To DE FP16 - Device: CPUUbuntu 22.04Ubuntu 23.04130260390520650SE +/- 12.13, N = 12SE +/- 9.85, N = 15612.56616.891. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Machine Translation EN To DE FP16 - Device: CPUUbuntu 22.04Ubuntu 23.041428425670SE +/- 1.29, N = 12SE +/- 1.04, N = 1560.5560.10MIN: 38.21 / MAX: 574.75MIN: 36.84 / MAX: 723.031. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Weld Porosity Detection FP16-INT8 - Device: CPUUbuntu 22.04Ubuntu 23.045K10K15K20K25KSE +/- 314.49, N = 3SE +/- 255.75, N = 1525126.8425416.481. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Weld Porosity Detection FP16-INT8 - Device: CPUUbuntu 22.04Ubuntu 23.040.96531.93062.89593.86124.8265SE +/- 0.06, N = 3SE +/- 0.04, N = 154.294.24MIN: 2.66 / MAX: 146.93MIN: 2.6 / MAX: 139.761. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Person Vehicle Bike Detection FP16 - Device: CPUUbuntu 22.04Ubuntu 23.0414002800420056007000SE +/- 9.82, N = 3SE +/- 11.85, N = 36319.426329.881. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Person Vehicle Bike Detection FP16 - Device: CPUUbuntu 22.04Ubuntu 23.0448121620SE +/- 0.03, N = 3SE +/- 0.03, N = 317.6917.66MIN: 12.69 / MAX: 219.19MIN: 13.56 / MAX: 221.851. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Handwritten English Recognition FP16-INT8 - Device: CPUUbuntu 22.04Ubuntu 23.045001000150020002500SE +/- 6.84, N = 3SE +/- 4.80, N = 32367.162404.861. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Handwritten English Recognition FP16-INT8 - Device: CPUUbuntu 22.04Ubuntu 23.041122334455SE +/- 0.13, N = 3SE +/- 0.09, N = 347.2746.53MIN: 40.88 / MAX: 153.12MIN: 40.48 / MAX: 136.391. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPUUbuntu 22.04Ubuntu 23.0420K40K60K80K100KSE +/- 1954.61, N = 12SE +/- 1618.25, N = 1583575.1089007.691. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPUUbuntu 22.04Ubuntu 23.040.1620.3240.4860.6480.81SE +/- 0.01, N = 12SE +/- 0.01, N = 150.720.70MIN: 0.28 / MAX: 72.42MIN: 0.27 / MAX: 65.51. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgItems Per Second, More Is BetterOSPRay 2.12Benchmark: particle_volume/ao/real_timeUbuntu 22.04Ubuntu 23.04714212835SE +/- 0.05, N = 3SE +/- 0.02, N = 327.8328.44

OpenBenchmarking.orgItems Per Second, More Is BetterOSPRay 2.12Benchmark: particle_volume/scivis/real_timeUbuntu 22.04Ubuntu 23.04714212835SE +/- 0.18, N = 3SE +/- 0.08, N = 327.3627.67

OpenBenchmarking.orgItems Per Second, More Is BetterOSPRay 2.12Benchmark: particle_volume/pathtracer/real_timeUbuntu 22.04Ubuntu 23.0420406080100SE +/- 2.80, N = 12SE +/- 3.15, N = 980.6683.43

OpenBenchmarking.orgItems Per Second, More Is BetterOSPRay 2.12Benchmark: gravity_spheres_volume/dim_512/ao/real_timeUbuntu 22.04Ubuntu 23.043691215SE +/- 0.13, N = 3SE +/- 0.22, N = 1510.7711.13

OpenBenchmarking.orgItems Per Second, More Is BetterOSPRay 2.12Benchmark: gravity_spheres_volume/dim_512/scivis/real_timeUbuntu 22.04Ubuntu 23.043691215SE +/- 0.17, N = 15SE +/- 0.33, N = 1210.4210.95

OpenBenchmarking.orgItems Per Second, More Is BetterOSPRay 2.12Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_timeUbuntu 22.04Ubuntu 23.043691215SE +/- 0.10483, N = 15SE +/- 0.16639, N = 159.288129.03807

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgScore, More Is BetterPHPBench 0.8.1PHP Benchmark SuiteUbuntu 22.04Ubuntu 23.04200K400K600K800K1000KSE +/- 3637.24, N = 3SE +/- 1336.12, N = 3966634953032

PostgreSQL

This is a benchmark of PostgreSQL using its integrated pgbench tool to facilitate the database benchmarks. Learn more via the OpenBenchmarking.org test page.
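pgbench's read-only mode repeatedly issues SELECTs and reports transactions per second alongside average latency. A loose stdlib analogue using sqlite3 — single connection, serial loop, nothing like a real 800-client PostgreSQL run — shows how the two metrics are derived and how they relate:

```python
import sqlite3
import time

# In-memory table standing in for pgbench's accounts table (illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (aid INTEGER PRIMARY KEY, abalance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(i, 0) for i in range(1000)])

n_txns = 2000  # hypothetical transaction count for the sketch
start = time.perf_counter()
for i in range(n_txns):
    conn.execute("SELECT abalance FROM accounts WHERE aid = ?",
                 (i % 1000,)).fetchone()
elapsed = time.perf_counter() - start

tps = n_txns / elapsed
avg_latency_ms = (elapsed / n_txns) * 1000
```

With a single serial client, TPS and average latency are exact reciprocals; with hundreds of concurrent clients (as in the results below), latency can rise while aggregate TPS still climbs.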

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL 16Scaling Factor: 100 - Clients: 800 - Mode: Read OnlyUbuntu 22.04Ubuntu 23.04140K280K420K560K700KSE +/- 14209.40, N = 9SE +/- 9142.97, N = 126318936353581. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL 16Scaling Factor: 100 - Clients: 800 - Mode: Read Only - Average LatencyUbuntu 22.04Ubuntu 23.040.2860.5720.8581.1441.43SE +/- 0.027, N = 9SE +/- 0.019, N = 121.2711.2621. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL 16Scaling Factor: 100 - Clients: 1000 - Mode: Read OnlyUbuntu 22.04Ubuntu 23.04140K280K420K560K700KSE +/- 18940.33, N = 12SE +/- 17891.35, N = 96236066403971. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL 16Scaling Factor: 100 - Clients: 1000 - Mode: Read Only - Average LatencyUbuntu 22.04Ubuntu 23.040.36450.7291.09351.4581.8225SE +/- 0.049, N = 12SE +/- 0.044, N = 91.6201.5711. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL 16Scaling Factor: 100 - Clients: 800 - Mode: Read WriteUbuntu 22.04Ubuntu 23.045K10K15K20K25KSE +/- 332.96, N = 12SE +/- 197.10, N = 1221111185231. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL 16Scaling Factor: 100 - Clients: 800 - Mode: Read Write - Average LatencyUbuntu 22.04Ubuntu 23.041020304050SE +/- 0.60, N = 12SE +/- 0.45, N = 1238.0043.241. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL 16Scaling Factor: 100 - Clients: 1000 - Mode: Read WriteUbuntu 22.04Ubuntu 23.044K8K12K16K20KSE +/- 262.46, N = 12SE +/- 157.54, N = 1217878164691. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL 16Scaling Factor: 100 - Clients: 1000 - Mode: Read Write - Average LatencyUbuntu 22.04Ubuntu 23.041428425670SE +/- 0.79, N = 12SE +/- 0.56, N = 1256.0660.781. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PyBench

This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate of Python's average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.
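PyBench's approach — time each micro-test over many rounds, then report the average — can be sketched in a few lines of stdlib Python. The micro-test body here is hypothetical, merely in the spirit of BuiltinFunctionCalls:

```python
import time

def builtin_function_calls():
    # Hypothetical micro-test in the spirit of PyBench's BuiltinFunctionCalls.
    for _ in range(10_000):
        len("phoronix"); abs(-7); min(3, 5)

rounds = 20  # PyBench runs each test for 20 rounds
timings = []
for _ in range(rounds):
    t0 = time.perf_counter()
    builtin_function_calls()
    timings.append((time.perf_counter() - t0) * 1000)  # milliseconds

average_ms = sum(timings) / len(timings)
minimum_ms = min(timings)
```

Summing such per-test averages over all micro-tests yields the "Total For Average Test Times" number graphed below.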

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyBench 2018-02-16Total For Average Test TimesUbuntu 22.04Ubuntu 23.04160320480640800SE +/- 0.88, N = 3SE +/- 0.33, N = 3752646

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: goUbuntu 22.04Ubuntu 23.044080120160200SE +/- 0.33, N = 3SE +/- 0.88, N = 3176116

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: 2to3Ubuntu 22.04Ubuntu 23.0460120180240300SE +/- 2.60, N = 3SE +/- 1.45, N = 3280224

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: chaosUbuntu 22.04Ubuntu 23.041632486480SE +/- 0.12, N = 3SE +/- 0.07, N = 374.051.6

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: floatUbuntu 22.04Ubuntu 23.0420406080100SE +/- 0.34, N = 3SE +/- 0.13, N = 376.954.5

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: nbodyUbuntu 22.04Ubuntu 23.0420406080100SE +/- 0.52, N = 3SE +/- 0.17, N = 394.463.2

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: pathlibUbuntu 22.04Ubuntu 23.0448121620SE +/- 0.03, N = 3SE +/- 0.00, N = 314.213.0

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: raytraceUbuntu 22.04Ubuntu 23.0470140210280350SE +/- 0.58, N = 3SE +/- 0.00, N = 3333209

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: json_loadsUbuntu 22.04Ubuntu 23.04510152025SE +/- 0.03, N = 3SE +/- 0.00, N = 319.115.7

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: crypto_pyaesUbuntu 22.04Ubuntu 23.0420406080100SE +/- 0.03, N = 3SE +/- 0.03, N = 381.156.3

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: regex_compileUbuntu 22.04Ubuntu 23.04306090120150SE +/- 0.00, N = 3SE +/- 0.33, N = 3133108

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: python_startupUbuntu 22.04Ubuntu 23.043691215SE +/- 0.11, N = 8SE +/- 0.10, N = 312.111.4

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: django_templateUbuntu 22.04Ubuntu 23.04816243240SE +/- 0.03, N = 3SE +/- 0.03, N = 334.827.7

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: pickle_pure_pythonUbuntu 22.04Ubuntu 23.0470140210280350SE +/- 0.00, N = 3SE +/- 0.88, N = 3312216

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 7.0.4Test: GET - Parallel Connections: 500Ubuntu 22.04Ubuntu 23.04600K1200K1800K2400K3000KSE +/- 60261.21, N = 12SE +/- 56263.19, N = 122807752.583012006.791. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 7.0.4Test: SET - Parallel Connections: 500Ubuntu 22.04Ubuntu 23.04400K800K1200K1600K2000KSE +/- 28736.18, N = 15SE +/- 37507.63, N = 121882918.181916420.901. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterRenaissance 0.14Test: Scala DottyUbuntu 22.04Ubuntu 23.042004006008001000SE +/- 18.18, N = 15SE +/- 24.95, N = 15980.1919.3MIN: 648.36 / MAX: 3175.49MIN: 615.34 / MAX: 2847.74

OpenBenchmarking.orgms, Fewer Is BetterRenaissance 0.14Test: Random ForestUbuntu 22.04Ubuntu 23.04400800120016002000SE +/- 33.63, N = 12SE +/- 13.67, N = 31861.61853.4MIN: 1313.89 / MAX: 2805.42MIN: 1540.7 / MAX: 2511.91

OpenBenchmarking.orgms, Fewer Is BetterRenaissance 0.14Test: ALS Movie LensUbuntu 22.04Ubuntu 23.046K12K18K24K30KSE +/- 354.60, N = 9SE +/- 623.64, N = 621108.028896.6MIN: 17287.27 / MAX: 27400.12MIN: 22999.73 / MAX: 39599.89

OpenBenchmarking.orgms, Fewer Is BetterRenaissance 0.14Test: Apache Spark BayesUbuntu 22.04Ubuntu 23.046001200180024003000SE +/- 21.10, N = 15SE +/- 72.54, N = 152554.62712.9MIN: 1105.84 / MAX: 6590.98MIN: 630.89 / MAX: 6260.44

OpenBenchmarking.orgms, Fewer Is BetterRenaissance 0.14Test: Savina Reactors.IOUbuntu 22.04Ubuntu 23.046K12K18K24K30KSE +/- 544.94, N = 9SE +/- 203.55, N = 328378.616277.8MIN: 24488.11 / MAX: 54448.55MIN: 15959.38 / MAX: 31275.85

OpenBenchmarking.orgms, Fewer Is BetterRenaissance 0.14Test: Apache Spark PageRankUbuntu 22.04Ubuntu 23.0412002400360048006000SE +/- 68.33, N = 3SE +/- 123.15, N = 125791.74961.2MIN: 3887.37 / MAX: 7904.31MIN: 2972.29 / MAX: 7999.32

OpenBenchmarking.orgms, Fewer Is BetterRenaissance 0.14Test: Finagle HTTP RequestsUbuntu 22.04Ubuntu 23.045K10K15K20K25KSE +/- 482.14, N = 9SE +/- 232.89, N = 323731.419378.1MIN: 18921.39 / MAX: 25419.48MIN: 16059.97 / MAX: 19838.17

OpenBenchmarking.orgms, Fewer Is BetterRenaissance 0.14Test: Akka Unbalanced Cobwebbed TreeUbuntu 22.04Ubuntu 23.0411K22K33K44K55KSE +/- 565.20, N = 9SE +/- 234.36, N = 353510.745287.9MIN: 36510.64 / MAX: 56207.73MIN: 33697.71 / MAX: 48659.35

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgOp/s, More Is BetterRocksDB 8.0Test: Random ReadUbuntu 22.04Ubuntu 23.0480M160M240M320M400MSE +/- 2596603.43, N = 11SE +/- 3155038.10, N = 33496361913808073981. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

OpenBenchmarking.orgOp/s, More Is BetterRocksDB 8.0Test: Update RandomUbuntu 22.04Ubuntu 23.0420K40K60K80K100KSE +/- 979.86, N = 6SE +/- 1141.51, N = 41032581027961. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

OpenBenchmarking.orgOp/s, More Is BetterRocksDB 8.0Test: Read Random Write RandomUbuntu 22.04Ubuntu 23.04150K300K450K600K750KSE +/- 3631.47, N = 3SE +/- 6074.54, N = 156929976651851. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

srsRAN Project

srsRAN Project is a complete ORAN-native 5G RAN solution created by Software Radio Systems (SRS). The srsRAN Project radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMbps, More Is BettersrsRAN Project 23.5Test: Downlink Processor BenchmarkUbuntu 22.04Ubuntu 23.04150300450600750SE +/- 5.05, N = 3SE +/- 2.92, N = 3678.6674.81. (CXX) g++ options: -march=native -mfma -O3 -fno-trapping-math -fno-math-errno -lgtest

OpenBenchmarking.orgMbps, More Is BettersrsRAN Project 23.5Test: PUSCH Processor Benchmark, Throughput TotalUbuntu 22.04Ubuntu 23.042K4K6K8K10KSE +/- 160.30, N = 14SE +/- 309.18, N = 1510344.310598.81. (CXX) g++ options: -march=native -mfma -O3 -fno-trapping-math -fno-math-errno -lgtest

OpenBenchmarking.orgMbps, More Is BettersrsRAN Project 23.5Test: PUSCH Processor Benchmark, Throughput ThreadUbuntu 22.04Ubuntu 23.044080120160200SE +/- 5.59, N = 14SE +/- 7.02, N = 12178.5190.81. (CXX) g++ options: -march=native -mfma -O3 -fno-trapping-math -fno-math-errno -lgtest

SVT-AV1

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.7Encoder Mode: Preset 4 - Input: Bosphorus 4KUbuntu 22.04Ubuntu 23.040.54921.09841.64762.19682.746SE +/- 0.016, N = 13SE +/- 0.025, N = 152.4412.3801. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.7Encoder Mode: Preset 8 - Input: Bosphorus 4KUbuntu 22.04Ubuntu 23.04612182430SE +/- 0.47, N = 12SE +/- 0.31, N = 325.4023.411. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.7Encoder Mode: Preset 12 - Input: Bosphorus 4KUbuntu 22.04Ubuntu 23.041326395265SE +/- 1.55, N = 15SE +/- 0.35, N = 355.9554.661. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.7Encoder Mode: Preset 13 - Input: Bosphorus 4KUbuntu 22.04Ubuntu 23.041326395265SE +/- 1.51, N = 15SE +/- 0.98, N = 1557.2656.921. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 16 - Model: ResNet-50Ubuntu 22.04Ubuntu 23.0448121620SE +/- 0.14, N = 12SE +/- 0.11, N = 314.4814.49

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 256 - Model: ResNet-50Ubuntu 22.04Ubuntu 23.041020304050SE +/- 0.31, N = 3SE +/- 0.47, N = 346.1944.80

uvg266

uvg266 is an open-source VVC/H.266 (Versatile Video Coding) encoder based on Kvazaar, developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is Betteruvg266 0.4.1Video Input: Bosphorus 4K - Video Preset: SlowUbuntu 22.04Ubuntu 23.04246810SE +/- 0.02, N = 3SE +/- 0.01, N = 38.097.96

OpenBenchmarking.orgFrames Per Second, More Is Betteruvg266 0.4.1Video Input: Bosphorus 4K - Video Preset: MediumUbuntu 22.04Ubuntu 23.04246810SE +/- 0.02, N = 3SE +/- 0.04, N = 38.548.37

OpenBenchmarking.orgFrames Per Second, More Is Betteruvg266 0.4.1Video Input: Bosphorus 4K - Video Preset: Very FastUbuntu 22.04Ubuntu 23.0448121620SE +/- 0.02, N = 3SE +/- 0.02, N = 315.6915.30

OpenBenchmarking.orgFrames Per Second, More Is Betteruvg266 0.4.1Video Input: Bosphorus 4K - Video Preset: Super FastUbuntu 22.04Ubuntu 23.0448121620SE +/- 0.09, N = 3SE +/- 0.02, N = 315.7915.85

OpenBenchmarking.orgFrames Per Second, More Is Betteruvg266 0.4.1Video Input: Bosphorus 4K - Video Preset: Ultra FastUbuntu 22.04Ubuntu 23.0448121620SE +/- 0.13, N = 9SE +/- 0.07, N = 316.6916.32

VVenC

VVenC is the Fraunhofer Versatile Video Encoder as a fast/efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterVVenC 1.9Video Input: Bosphorus 4K - Video Preset: FastUbuntu 22.04Ubuntu 23.040.5721.1441.7162.2882.86SE +/- 0.015, N = 3SE +/- 0.023, N = 72.5422.528-flto1. (CXX) g++ options: -O3 -fno-fat-lto-objects -flto=auto

OpenBenchmarking.orgFrames Per Second, More Is BetterVVenC 1.9Video Input: Bosphorus 4K - Video Preset: FasterUbuntu 22.04Ubuntu 23.040.86741.73482.60223.46964.337SE +/- 0.047, N = 3SE +/- 0.058, N = 123.8553.626-flto1. (CXX) g++ options: -O3 -fno-fat-lto-objects -flto=auto

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite difference high-performance code for solving the incompressible Navier-Stokes equation and as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.
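The kind of update Incompact3d performs — finite-difference stepping of transported fields — can be illustrated with a minimal explicit 1D diffusion step. This is a pure-Python toy with made-up parameters, not anything from the actual Fortran/MPI code:

```python
# Explicit finite-difference step for 1D diffusion du/dt = nu * d2u/dx2,
# with periodic boundaries -- a toy analogue of one scalar-transport update.
nu, dx, dt = 0.1, 1.0, 0.5   # explicit scheme needs dt <= dx**2 / (2 * nu)
n = 64
u = [0.0] * n
u[n // 2] = 1.0              # initial spike of scalar concentration

def step(u):
    # Second-order central difference for the Laplacian at each grid point.
    return [u[i] + nu * dt / dx**2 * (u[(i + 1) % n] - 2 * u[i] + u[(i - 1) % n])
            for i in range(n)]

for _ in range(100):
    u = step(u)

total = sum(u)  # diffusion with periodic boundaries conserves the integral
```

Production codes like Incompact3d use higher-order compact schemes in 3D and decompose the grid across MPI ranks, which is why the Cells Per Direction inputs below scale so steeply in cost.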

OpenBenchmarking.orgSeconds, Fewer Is BetterXcompact3d Incompact3d 2021-03-11Input: input.i3d 129 Cells Per DirectionUbuntu 22.04Ubuntu 23.040.45130.90261.35391.80522.2565SE +/- 0.01292206, N = 9SE +/- 0.00950626, N = 31.975893092.005710961. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

OpenBenchmarking.orgSeconds, Fewer Is BetterXcompact3d Incompact3d 2021-03-11Input: input.i3d 193 Cells Per DirectionUbuntu 22.04Ubuntu 23.04246810SE +/- 0.04176484, N = 15SE +/- 0.04653610, N = 156.444384076.075614551. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

Zstd Compression

This test measures the time needed to compress/decompress a sample input file using Zstd compression supplied by the system or otherwise external to the test profile. Learn more via the OpenBenchmarking.org test page.
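The metric here is simply bytes processed per second at a given compression level. A hedged sketch using zlib — a stdlib stand-in, since zstd bindings are not part of the standard library — shows how such MB/s figures are derived; the sample data and levels are invented for the example:

```python
import time
import zlib

data = b"phoronix " * 200_000          # ~1.8 MB of compressible sample input

for level in (1, 6, 9):                # zlib levels, loosely analogous to zstd's
    t0 = time.perf_counter()
    packed = zlib.compress(data, level)
    c_time = time.perf_counter() - t0

    t0 = time.perf_counter()
    restored = zlib.decompress(packed)
    d_time = time.perf_counter() - t0

    mb = len(data) / 1e6
    print(f"level {level}: {mb / c_time:.0f} MB/s compress, "
          f"{mb / d_time:.0f} MB/s decompress, "
          f"ratio {len(data) / len(packed):.1f}")
```

As in the zstd results below, higher levels trade compression speed for ratio, while decompression speed stays comparatively flat.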

OpenBenchmarking.orgMB/s, More Is BetterZstd CompressionCompression Level: 8 - Compression SpeedUbuntu 22.04Ubuntu 23.042004006008001000SE +/- 20.54, N = 12SE +/- 53.57, N = 12942.11130.81. Ubuntu 22.04: *** zstd command line interface 64-bits v1.4.8, by Yann Collet ***2. Ubuntu 23.04: *** Zstandard CLI (64-bit) v1.5.4, by Yann Collet ***

OpenBenchmarking.orgMB/s, More Is BetterZstd CompressionCompression Level: 8 - Decompression SpeedUbuntu 22.04Ubuntu 23.047001400210028003500SE +/- 29.89, N = 12SE +/- 46.81, N = 123228.13154.91. Ubuntu 22.04: *** zstd command line interface 64-bits v1.4.8, by Yann Collet ***2. Ubuntu 23.04: *** Zstandard CLI (64-bit) v1.5.4, by Yann Collet ***

Zstd Compression - Compression Level: 19 - Compression Speed (MB/s, More Is Better):
  Ubuntu 22.04: 51.4 (SE +/- 1.13, N = 15)
  Ubuntu 23.04: 46.2 (SE +/- 1.72, N = 12)
1. Ubuntu 22.04: *** zstd command line interface 64-bits v1.4.8, by Yann Collet ***
2. Ubuntu 23.04: *** Zstandard CLI (64-bit) v1.5.4, by Yann Collet ***

Zstd Compression - Compression Level: 19 - Decompression Speed (MB/s, More Is Better):
  Ubuntu 22.04: 2823.2 (SE +/- 22.65, N = 15)
  Ubuntu 23.04: 2683.7 (SE +/- 3.08, N = 12)
1. Ubuntu 22.04: *** zstd command line interface 64-bits v1.4.8, by Yann Collet ***
2. Ubuntu 23.04: *** Zstandard CLI (64-bit) v1.5.4, by Yann Collet ***

Zstd Compression - Compression Level: 19, Long Mode - Compression Speed (MB/s, More Is Better):
  Ubuntu 22.04: 29.8 (SE +/- 0.44, N = 12)
  Ubuntu 23.04: 27.9 (SE +/- 0.63, N = 15)
1. Ubuntu 22.04: *** zstd command line interface 64-bits v1.4.8, by Yann Collet ***
2. Ubuntu 23.04: *** Zstandard CLI (64-bit) v1.5.4, by Yann Collet ***

Zstd Compression - Compression Level: 19, Long Mode - Decompression Speed (MB/s, More Is Better):
  Ubuntu 22.04: 2891.4 (SE +/- 1.05, N = 12)
  Ubuntu 23.04: 2832.9 (SE +/- 1.31, N = 15)
1. Ubuntu 22.04: *** zstd command line interface 64-bits v1.4.8, by Yann Collet ***
2. Ubuntu 23.04: *** Zstandard CLI (64-bit) v1.5.4, by Yann Collet ***
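The "SE +/- x, N = y" annotations throughout these results are standard errors of the mean over N benchmark runs: the sample standard deviation divided by the square root of N. A minimal sketch with Python's standard library (the run times shown are made-up illustrative values):

```python
import math
import statistics

# Standard error of the mean: sample stdev / sqrt(N).
# This matches the "SE +/- x, N = y" annotations in the results above.
def standard_error(samples):
    return statistics.stdev(samples) / math.sqrt(len(samples))

runs = [1.97, 1.99, 1.96, 2.01, 1.98]  # hypothetical run times in seconds
print(f"mean = {statistics.mean(runs):.4f}, "
      f"SE +/- {standard_error(runs):.4f}, N = {len(runs)}")
```

A small SE relative to the gap between two results (as in the Xcompact3d 193-cell case) suggests the difference is real rather than run-to-run noise.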

153 Results Shown

7-Zip Compression:
  Compression Rating
  Decompression Rating
Apache Hadoop:
  Delete - 50 - 1000000
  Create - 100 - 1000000
Apache IoTDB:
  800 - 100 - 500 - 400:
    point/sec
    Average Latency
  800 - 100 - 800 - 400:
    point/sec
    Average Latency
Appleseed:
  Emily
  Disney Material
  Material Tester
Blender:
  BMW27 - CPU-Only
  Classroom - CPU-Only
  Barbershop - CPU-Only
ClickHouse:
  100M Rows Hits Dataset, First Run / Cold Cache
  100M Rows Hits Dataset, Second Run
  100M Rows Hits Dataset, Third Run
CPU Power Consumption Monitor
DaCapo Benchmark:
  H2
  Jython
Embree:
  Pathtracer ISPC - Crown
  Pathtracer ISPC - Asian Dragon
GROMACS
High Performance Conjugate Gradient:
  104 104 104 - 60
  144 144 144 - 60
Intel Open Image Denoise:
  RT.hdr_alb_nrm.3840x2160 - CPU-Only
  RT.ldr_alb_nrm.3840x2160 - CPU-Only
  RTLightmap.hdr.4096x4096 - CPU-Only
Kvazaar:
  Bosphorus 4K - Very Fast
  Bosphorus 4K - Super Fast
  Bosphorus 4K - Ultra Fast
libavif avifenc:
  6
  6, Lossless
  10, Lossless
libxsmm:
  128
  32
  64
Liquid-DSP:
  64 - 256 - 32
  128 - 256 - 32
  224 - 256 - 32
  64 - 256 - 512
  128 - 256 - 512
  224 - 256 - 512
Memcached:
  1:10
  1:100
Meta Performance Per Watts
miniBUDE:
  OpenMP - BM1:
    GFInst/s
    Billion Interactions/s
  OpenMP - BM2:
    GFInst/s
    Billion Interactions/s
NAMD
nginx
Numpy Benchmark
oneDNN:
  Recurrent Neural Network Training - bf16bf16bf16 - CPU
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU
OpenFOAM:
  drivaerFastback, Small Mesh Size - Mesh Time
  drivaerFastback, Small Mesh Size - Execution Time
  drivaerFastback, Medium Mesh Size - Mesh Time
  drivaerFastback, Medium Mesh Size - Execution Time
OpenRadioss:
  Bumper Beam
  Chrysler Neon 1M
  Cell Phone Drop Test
  Bird Strike on Windshield
  Rubber O-Ring Seal Installation
  INIVOL and Fluid Structure Interaction Drop Container
OpenSSL:
 
 
OpenVINO:
  Person Detection FP16 - CPU:
    FPS
    ms
  Face Detection FP16-INT8 - CPU:
    FPS
    ms
  Vehicle Detection FP16-INT8 - CPU:
    FPS
    ms
  Face Detection Retail FP16-INT8 - CPU:
    FPS
    ms
  Road Segmentation ADAS FP16-INT8 - CPU:
    FPS
    ms
  Machine Translation EN To DE FP16 - CPU:
    FPS
    ms
  Weld Porosity Detection FP16-INT8 - CPU:
    FPS
    ms
  Person Vehicle Bike Detection FP16 - CPU:
    FPS
    ms
  Handwritten English Recognition FP16-INT8 - CPU:
    FPS
    ms
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU:
    FPS
    ms
OSPRay:
  particle_volume/ao/real_time
  particle_volume/scivis/real_time
  particle_volume/pathtracer/real_time
  gravity_spheres_volume/dim_512/ao/real_time
  gravity_spheres_volume/dim_512/scivis/real_time
  gravity_spheres_volume/dim_512/pathtracer/real_time
PHPBench
PostgreSQL:
  100 - 800 - Read Only
  100 - 800 - Read Only - Average Latency
  100 - 1000 - Read Only
  100 - 1000 - Read Only - Average Latency
  100 - 800 - Read Write
  100 - 800 - Read Write - Average Latency
  100 - 1000 - Read Write
  100 - 1000 - Read Write - Average Latency
PyBench
PyPerformance:
  go
  2to3
  chaos
  float
  nbody
  pathlib
  raytrace
  json_loads
  crypto_pyaes
  regex_compile
  python_startup
  django_template
  pickle_pure_python
Redis:
  GET - 500
  SET - 500
Renaissance:
  Scala Dotty
  Rand Forest
  ALS Movie Lens
  Apache Spark Bayes
  Savina Reactors.IO
  Apache Spark PageRank
  Finagle HTTP Requests
  Akka Unbalanced Cobwebbed Tree
RocksDB:
  Rand Read
  Update Rand
  Read Rand Write Rand
srsRAN Project:
  Downlink Processor Benchmark
  PUSCH Processor Benchmark, Throughput Total
  PUSCH Processor Benchmark, Throughput Thread
SVT-AV1:
  Preset 4 - Bosphorus 4K
  Preset 8 - Bosphorus 4K
  Preset 12 - Bosphorus 4K
  Preset 13 - Bosphorus 4K
TensorFlow:
  CPU - 16 - ResNet-50
  CPU - 256 - ResNet-50
uvg266:
  Bosphorus 4K - Slow
  Bosphorus 4K - Medium
  Bosphorus 4K - Very Fast
  Bosphorus 4K - Super Fast
  Bosphorus 4K - Ultra Fast
VVenC:
  Bosphorus 4K - Fast
  Bosphorus 4K - Faster
Xcompact3d Incompact3d:
  input.i3d 129 Cells Per Direction
  input.i3d 193 Cells Per Direction
Zstd Compression:
  8 - Compression Speed
  8 - Decompression Speed
  19 - Compression Speed
  19 - Decompression Speed
  19, Long Mode - Compression Speed
  19, Long Mode - Decompression Speed