Xeon Max Linux Distros

2 x Intel Xeon Max 9480 testing with a Supermicro X13DEM v1.10 (1.3 BIOS) and ASPEED graphics, comparing Ubuntu 23.04 against Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2310151-NE-2310131NE09

Test categories represented in this result file:

AV1: 2 Tests
C++ Boost Tests: 2 Tests
C/C++ Compiler Tests: 6 Tests
CPU Massive: 14 Tests
Creator Workloads: 12 Tests
Database Test Suite: 7 Tests
Encoding: 5 Tests
Fortran Tests: 2 Tests
Game Development: 2 Tests
HPC - High Performance Computing: 11 Tests
Java: 2 Tests
Common Kernel Benchmarks: 2 Tests
Machine Learning: 4 Tests
Molecular Dynamics: 4 Tests
MPI Benchmarks: 3 Tests
Multi-Core: 17 Tests
NVIDIA GPU Compute: 2 Tests
Intel oneAPI: 5 Tests
OpenMPI Tests: 5 Tests
Programmer / Developer System Benchmarks: 2 Tests
Python: 3 Tests
Renderers: 3 Tests
Scientific Computing: 4 Tests
Software Defined Radio: 2 Tests
Server: 9 Tests
Server CPU Tests: 12 Tests
Single-Threaded: 5 Tests
Video Encoding: 5 Tests

Result runs:
Ubuntu 23.04: run October 12 2023, test duration 1 Day, 2 Hours, 41 Minutes
Ubuntu 22.04: run October 13 2023, test duration 1 Day, 6 Hours, 32 Minutes
Average test duration: 1 Day, 4 Hours, 37 Minutes


Xeon Max Linux Distros: System Details

Shared hardware (both runs):
Processor: 2 x Intel Xeon Max 9480 @ 3.50GHz (112 Cores / 224 Threads)
Motherboard: Supermicro X13DEM v1.10 (1.3 BIOS)
Chipset: Intel Device 1bce
Memory: 512GB
Disk: 2 x 7682GB INTEL SSDPF2KX076TZ
Graphics: ASPEED
Network: 2 x Broadcom BCM57508 NetXtreme-E 10Gb/25Gb/40Gb/50Gb/100Gb/200Gb
Monitor: VE228
File-System: ext4

Per-OS software:
Ubuntu 23.04: Kernel 6.2.0-34-generic (x86_64), GNOME Shell 44.3, X Server 1.21.1.7, GCC 12.3.0, 1024x768 resolution
Ubuntu 22.04: GNOME Shell 42.9, X Server 1.21.1.4, Vulkan 1.3.238, GCC 11.4.0, 1920x1080 resolution

Kernel Details: Transparent Huge Pages: madvise

Compiler Details:
Ubuntu 23.04: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-DAPbBt/gcc-12-12.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-DAPbBt/gcc-12-12.3.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Ubuntu 22.04: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details: Scaling Governor: intel_cpufreq schedutil - CPU Microcode: 0x2c000271

Java Details:
Ubuntu 23.04: OpenJDK Runtime Environment (build 17.0.8.1+1-Ubuntu-0ubuntu123.04)
Ubuntu 22.04: OpenJDK Runtime Environment (build 11.0.20.1+1-post-Ubuntu-0ubuntu122.04)

Python Details:
Ubuntu 23.04: Python 3.11.4
Ubuntu 22.04: Python 3.10.12

Security Details (both runs): gather_data_sampling: Not affected; itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Not affected; spec_rstack_overflow: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence; srbds: Not affected; tsx_async_abort: Not affected

[Graphic: Ubuntu 23.04 vs. Ubuntu 22.04 comparison of per-test percentage differences. The largest swings include: Apache Hadoop Create - 100 - 1000000 at 90.2%, Renaissance Savina Reactors.IO 74.3%, PyPerformance raytrace 59.3%, go 51.7%, nbody 49.4%, pickle_pure_python 44.4%, crypto_pyaes 44%, chaos 43.4%, Apache Hadoop Delete - 50 - 1000000 42%, float 41.1%, and Renaissance ALS Movie Lens 36.9%; the remaining tests span roughly 25.6% (django_template) down to about 2%.]
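The percentages in the comparison graphic are relative differences between the two distributions' mean results. As a minimal sketch (using the Renaissance Savina Reactors.IO means reported later in this file as the worked example):

```python
def percent_delta(slower: float, faster: float) -> float:
    """Relative difference of the slower result versus the faster one, in percent."""
    return (slower / faster - 1.0) * 100.0

# Renaissance Savina Reactors.IO means from this result file (ms, fewer is better)
ubuntu_2304 = 16277.8
ubuntu_2204 = 28378.6

delta = percent_delta(ubuntu_2204, ubuntu_2304)
print(round(delta, 1))  # 74.3, matching the 74.3% shown for Savina Reactors.IO
```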

[Table: condensed side-by-side results for every test in this comparison, covering Renaissance, libxsmm, Apache Hadoop, ClickHouse, Apache IoTDB, TensorFlow, PostgreSQL, OSPRay, oneDNN, HPCG, VVenC, and many others; individual results are broken out below.]

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14, Test: Akka Unbalanced Cobwebbed Tree (ms, fewer is better):
- Ubuntu 23.04: 45287.9 (SE +/- 234.36, N = 3; MIN: 33697.71 / MAX: 48659.35; run Min/Avg/Max: 44827.76 / 45287.91 / 45595.28)
- Ubuntu 22.04: 53510.7 (SE +/- 565.20, N = 9; MIN: 36510.64 / MAX: 56207.73; run Min/Avg/Max: 51835.19 / 53510.74 / 56207.73)
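The SE +/- figures throughout this file are standard errors of the mean across the N runs. For the N = 3 Ubuntu 23.04 result above, the middle sample can be recovered from the reported run Min/Avg/Max (3 * Avg - Min - Max), and the SE of 234.36 then follows from the sample standard deviation divided by sqrt(N). A sketch:

```python
import statistics
from math import sqrt

# Ubuntu 23.04 Akka Unbalanced Cobwebbed Tree run stats reported above (ms)
run_min, run_avg, run_max = 44827.76, 45287.91, 45595.28
n = 3

# With N = 3, the middle sample is fully determined by Min, Avg, and Max
middle = n * run_avg - run_min - run_max
samples = [run_min, middle, run_max]

# Standard error of the mean = sample standard deviation / sqrt(N)
se = statistics.stdev(samples) / sqrt(n)
print(round(se, 2))  # 234.36, matching the reported SE +/-
```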

libxsmm

Libxsmm is an open-source library for specialized dense and sparse matrix operations and deep learning primitives. Libxsmm supports making use of Intel AMX, AVX-512, and other modern CPU instruction set capabilities. Learn more via the OpenBenchmarking.org test page.
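As a rough sanity check on the GFLOPS/s figures below: a dense M x N x K matrix multiply performs 2*M*N*K floating-point operations, so at the roughly 1930 GFLOPS/s reported here a single 128x128x128 GEMM (about 4.19 million FLOPs) completes in on the order of 2 microseconds. A sketch, with only the rate taken from this result file and the rest plain arithmetic:

```python
def gemm_flops(m: int, n: int, k: int) -> int:
    """Floating-point operations in a dense matrix multiply:
    one multiply and one add per inner-product term."""
    return 2 * m * n * k

flops = gemm_flops(128, 128, 128)
rate_gflops = 1930.8  # Ubuntu 23.04 result reported below

# Time for one GEMM at that sustained rate, in microseconds
time_us = flops / (rate_gflops * 1e9) * 1e6
print(flops, round(time_us, 2))  # 4194304 FLOPs, about 2.17 us
```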

libxsmm 2-1.17-3645, M N K: 128 (GFLOPS/s, more is better):
- Ubuntu 23.04: 1930.8 (SE +/- 44.57, N = 6; run Min/Avg/Max: 1817.5 / 1930.77 / 2118.1)
- Ubuntu 22.04: 1928.1 (SE +/- 27.11, N = 3; run Min/Avg/Max: 1893.2 / 1928.13 / 1981.5)
Compiler flags: (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -msse4.2

Renaissance


Renaissance 0.14, Test: ALS Movie Lens (ms, fewer is better):
- Ubuntu 23.04: 28896.6 (SE +/- 623.64, N = 6; MIN: 22999.73 / MAX: 39599.89; run Min/Avg/Max: 26285.81 / 28896.56 / 30938.52)
- Ubuntu 22.04: 21108.0 (SE +/- 354.60, N = 9; MIN: 17287.27 / MAX: 27400.12; run Min/Avg/Max: 19609.53 / 21107.99 / 22817.27)

Apache Hadoop

Apache Hadoop 3.3.6, Operation: Delete - Threads: 50 - Files: 1000000 (Ops per sec, more is better):
- Ubuntu 23.04: 31912 (SE +/- 452.80, N = 3; run Min/Avg/Max: 31074.24 / 31912.21 / 32628.56)
- Ubuntu 22.04: 22475 (SE +/- 2293.11, N = 9; run Min/Avg/Max: 9560.41 / 22475.21 / 29893.58)

Renaissance


Renaissance 0.14, Test: Savina Reactors.IO (ms, fewer is better):
- Ubuntu 23.04: 16277.8 (SE +/- 203.55, N = 3; MIN: 15959.38 / MAX: 31275.85; run Min/Avg/Max: 15959.38 / 16277.75 / 16656.66)
- Ubuntu 22.04: 28378.6 (SE +/- 544.94, N = 9; MIN: 24488.11 / MAX: 54448.55; run Min/Avg/Max: 25746.35 / 28378.65 / 31005.06)

ClickHouse

ClickHouse is an open-source, high performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ / https://github.com/ClickHouse/ClickBench/tree/main/clickhouse with the 100 million rows web analytics dataset. The reported value is the query processing time using the geometric mean of all separate queries performed as an aggregate. Learn more via the OpenBenchmarking.org test page.
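The aggregate figure reported below is the geometric mean of the individual query rates. A minimal sketch of that aggregation (the three query rates here are hypothetical, for illustration only):

```python
from math import prod

def geometric_mean(values: list[float]) -> float:
    """Geometric mean: the n-th root of the product of n values."""
    return prod(values) ** (1.0 / len(values))

# Hypothetical per-query rates (queries per minute), not from this result file
rates = [120.0, 180.0, 160.0]
gm = geometric_mean(rates)
print(round(gm, 2))  # about 151.19
```

Unlike the arithmetic mean, the geometric mean keeps a single very fast or very slow query from dominating the aggregate, which is why it is the conventional choice for combining heterogeneous query times.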

ClickHouse 22.12.3.5, 100M Rows Hits Dataset, Third Run (Queries Per Minute, Geo Mean, more is better):
- Ubuntu 23.04: 173.12 (SE +/- 3.29, N = 9; MIN: 18.26 / MAX: 1621.62; run Min/Avg/Max: 158.86 / 173.12 / 187.62)
- Ubuntu 22.04: 177.66 (SE +/- 4.50, N = 3; MIN: 31.4 / MAX: 1621.62; run Min/Avg/Max: 171.91 / 177.66 / 186.54)

ClickHouse 22.12.3.5, 100M Rows Hits Dataset, Second Run (Queries Per Minute, Geo Mean, more is better):
- Ubuntu 23.04: 170.24 (SE +/- 2.63, N = 9; MIN: 14.63 / MAX: 1875; run Min/Avg/Max: 158.01 / 170.24 / 182.45)
- Ubuntu 22.04: 172.55 (SE +/- 3.35, N = 3; MIN: 24.51 / MAX: 2068.97; run Min/Avg/Max: 168.17 / 172.55 / 179.12)

ClickHouse 22.12.3.5, 100M Rows Hits Dataset, First Run / Cold Cache (Queries Per Minute, Geo Mean, more is better):
- Ubuntu 23.04: 155.80 (SE +/- 2.03, N = 9; MIN: 10.69 / MAX: 1714.29; run Min/Avg/Max: 144 / 155.8 / 161.9)
- Ubuntu 22.04: 156.54 (SE +/- 2.07, N = 3; MIN: 9.6 / MAX: 2500; run Min/Avg/Max: 153.08 / 156.54 / 160.23)

Apache IoTDB

Apache IoTDB 1.2, Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400 (Average Latency, fewer is better):
- Ubuntu 23.04: 417.81 (SE +/- 4.63, N = 5; MAX: 31785.53; run Min/Avg/Max: 404.68 / 417.81 / 432.05)
- Ubuntu 22.04: 351.83 (SE +/- 4.16, N = 12; MAX: 33320.78; run Min/Avg/Max: 329.98 / 351.83 / 376.26)

Apache IoTDB 1.2, Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400 (point/sec, more is better):
- Ubuntu 23.04: 43280940 (SE +/- 474661.21, N = 5; run Min/Avg/Max: 42246646.35 / 43280940.25 / 44707566.51)
- Ubuntu 22.04: 51368423 (SE +/- 506019.03, N = 12; run Min/Avg/Max: 48124917.89 / 51368422.65 / 53981101.14)

Apache IoTDB 1.2, Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 400 (Average Latency, fewer is better):
- Ubuntu 23.04: 624.38 (SE +/- 2.92, N = 3; MAX: 34185.88; run Min/Avg/Max: 619.68 / 624.38 / 629.73)
- Ubuntu 22.04: 577.21 (SE +/- 20.90, N = 9; MAX: 48250.35; run Min/Avg/Max: 539.32 / 577.21 / 742.44)

Apache IoTDB 1.2, Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 400 (point/sec, more is better):
- Ubuntu 23.04: 47389394 (SE +/- 314308.71, N = 3; run Min/Avg/Max: 46992424.37 / 47389394.49 / 48009994.4)
- Ubuntu 22.04: 51342262 (SE +/- 1382666.67, N = 9; run Min/Avg/Max: 40463799.94 / 51342261.73 / 54311629.67)

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.12, Device: CPU - Batch Size: 256 - Model: ResNet-50 (images/sec, more is better):
- Ubuntu 23.04: 44.80 (SE +/- 0.47, N = 3; run Min/Avg/Max: 44.06 / 44.8 / 45.68)
- Ubuntu 22.04: 46.19 (SE +/- 0.31, N = 3; run Min/Avg/Max: 45.84 / 46.19 / 46.8)

Renaissance


Renaissance 0.14, Test: Finagle HTTP Requests (ms, fewer is better):
- Ubuntu 23.04: 19378.1 (SE +/- 232.89, N = 3; MIN: 16059.97 / MAX: 19838.17; run Min/Avg/Max: 18920.93 / 19378.1 / 19683.9)
- Ubuntu 22.04: 23731.4 (SE +/- 482.14, N = 9; MIN: 18921.39 / MAX: 25419.48; run Min/Avg/Max: 21131.41 / 23731.44 / 25419.48)

Apache Hadoop

Apache Hadoop 3.3.6, Operation: Create - Threads: 100 - Files: 1000000 (Ops per sec, more is better):
- Ubuntu 23.04: 3241 (SE +/- 17.50, N = 3; run Min/Avg/Max: 3220.33 / 3241.34 / 3276.08)
- Ubuntu 22.04: 6163 (SE +/- 1101.28, N = 9; run Min/Avg/Max: 2972.11 / 6163.21 / 11721.96)

PostgreSQL

PostgreSQL 16, Scaling Factor: 100 - Clients: 1000 - Mode: Read Write - Average Latency (ms, fewer is better):
- Ubuntu 23.04: 60.78 (SE +/- 0.56, N = 12; run Min/Avg/Max: 56.83 / 60.78 / 63.03)
- Ubuntu 22.04: 56.06 (SE +/- 0.79, N = 12; run Min/Avg/Max: 50.67 / 56.06 / 58.94)
Compiler flags: (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL 16, Scaling Factor: 100 - Clients: 1000 - Mode: Read Write (TPS, more is better):
- Ubuntu 23.04: 16469 (SE +/- 157.54, N = 12; run Min/Avg/Max: 15864.79 / 16468.81 / 17597.54)
- Ubuntu 22.04: 17878 (SE +/- 262.46, N = 12; run Min/Avg/Max: 16967.12 / 17878.39 / 19737.19)
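pgbench's reported average latency is roughly the client count divided by throughput, since each of the 1000 clients issues transactions back to back. A quick consistency check against the 1000-client read-write numbers above (approximate, not an exact identity, since per-run variation and connection overhead add noise):

```python
# pgbench average latency (ms) is approximately clients / TPS * 1000
clients = 1000

# Values reported above for the 100 / 1000 / Read Write configuration
tps = {"Ubuntu 23.04": 16469.0, "Ubuntu 22.04": 17878.0}
reported_ms = {"Ubuntu 23.04": 60.78, "Ubuntu 22.04": 56.06}

for distro, t in tps.items():
    est_ms = clients / t * 1000.0
    print(distro, round(est_ms, 2), "estimated vs", reported_ms[distro], "reported")
```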

PostgreSQL 16, Scaling Factor: 100 - Clients: 800 - Mode: Read Write - Average Latency (ms, fewer is better):
- Ubuntu 23.04: 43.24 (SE +/- 0.45, N = 12; run Min/Avg/Max: 40.68 / 43.24 / 45.28)
- Ubuntu 22.04: 38.00 (SE +/- 0.60, N = 12; run Min/Avg/Max: 34.61 / 38 / 40.69)

PostgreSQL 16, Scaling Factor: 100 - Clients: 800 - Mode: Read Write (TPS, more is better):
- Ubuntu 23.04: 18523 (SE +/- 197.10, N = 12; run Min/Avg/Max: 17668.07 / 18522.95 / 19666.65)
- Ubuntu 22.04: 21111 (SE +/- 332.96, N = 12; run Min/Avg/Max: 19660.06 / 21111.28 / 23113.64)

Renaissance


Renaissance 0.14, Test: Apache Spark Bayes (ms, fewer is better):
- Ubuntu 23.04: 2712.9 (SE +/- 72.54, N = 15; MIN: 630.89 / MAX: 6260.44; run Min/Avg/Max: 2269.01 / 2712.89 / 3090.16)
- Ubuntu 22.04: 2554.6 (SE +/- 21.10, N = 15; MIN: 1105.84 / MAX: 6590.98; run Min/Avg/Max: 2296.48 / 2554.57 / 2637.21)

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.12, Benchmark: particle_volume/pathtracer/real_time (Items Per Second, more is better):
- Ubuntu 23.04: 83.43 (SE +/- 3.15, N = 9; run Min/Avg/Max: 61.1 / 83.43 / 93.45)
- Ubuntu 22.04: 80.66 (SE +/- 2.80, N = 12; run Min/Avg/Max: 63.94 / 80.66 / 93.37)

oneDNN

oneDNN 3.3, Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better):
- Ubuntu 23.04: 14593.50 (SE +/- 277.46, N = 9; MIN: 8664.05; run Min/Avg/Max: 13365.4 / 14593.51 / 15994.6)
- Ubuntu 22.04: 15662.22 (SE +/- 727.41, N = 10; MIN: 9645.27; run Min/Avg/Max: 9847.39 / 15662.22 / 18496.1)
Compiler flags: (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl (Ubuntu 22.04 additionally: -lpthread)

PostgreSQL

PostgreSQL 16, Scaling Factor: 100 - Clients: 1000 - Mode: Read Only - Average Latency (ms, fewer is better):
- Ubuntu 23.04: 1.571 (SE +/- 0.044, N = 9; run Min/Avg/Max: 1.37 / 1.57 / 1.77)
- Ubuntu 22.04: 1.620 (SE +/- 0.049, N = 12; run Min/Avg/Max: 1.39 / 1.62 / 1.85)

PostgreSQL 16, Scaling Factor: 100 - Clients: 1000 - Mode: Read Only (TPS, more is better):
- Ubuntu 23.04: 640397 (SE +/- 17891.35, N = 9; run Min/Avg/Max: 566505.39 / 640397.11 / 728298.24)
- Ubuntu 22.04: 623606 (SE +/- 18940.33, N = 12; run Min/Avg/Max: 540048.16 / 623606.24 / 721445.85)

PostgreSQL 16, Scaling Factor: 100 - Clients: 800 - Mode: Read Only - Average Latency (ms, fewer is better):
- Ubuntu 23.04: 1.262 (SE +/- 0.019, N = 12; run Min/Avg/Max: 1.18 / 1.26 / 1.4)
- Ubuntu 22.04: 1.271 (SE +/- 0.027, N = 9; run Min/Avg/Max: 1.11 / 1.27 / 1.36)

PostgreSQL 16, Scaling Factor: 100 - Clients: 800 - Mode: Read Only (TPS, more is better):
- Ubuntu 23.04: 635358 (SE +/- 9142.97, N = 12; run Min/Avg/Max: 570722.8 / 635357.81 / 676729.95)
- Ubuntu 22.04: 631893 (SE +/- 14209.40, N = 9; run Min/Avg/Max: 590347.47 / 631893.28 / 723344.88)

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Apache Spark PageRank (ms, Fewer Is Better)
Ubuntu 23.04: 4961.2 (SE +/- 123.15, N = 12; Min: 4208.14 / Avg: 4961.25 / Max: 5589; MIN: 2972.29 / MAX: 7999.32)
Ubuntu 22.04: 5791.7 (SE +/- 68.33, N = 3; Min: 5658.13 / Avg: 5791.69 / Max: 5883.51; MIN: 3887.37 / MAX: 7904.31)

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient, a scientific benchmark from Sandia National Labs focused on supercomputer testing with modern real-world workloads compared to HPCC. Learn more via the OpenBenchmarking.org test page.
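HPCG's core kernel is the (preconditioned) conjugate gradient iteration for sparse symmetric positive-definite systems. As a rough illustration of what the benchmark exercises, here is a minimal unpreconditioned CG solver in pure Python; this is a textbook sketch for intuition, not the benchmark's actual code, which works on large sparse matrices with MPI and OpenMP:

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for a symmetric positive-definite dense A (list of lists)."""
    n = len(b)
    x = [0.0] * n
    r = list(b)          # residual r = b - A x, with x = 0 initially
    p = list(r)          # initial search direction
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new ** 0.5 < tol:
            break
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x

# Example: 4x + y = 1, x + 3y = 2 has the exact solution x = 1/11, y = 7/11.
x = conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```

The dominant costs in each iteration, the sparse matrix-vector product and the dot products, are exactly the memory-bandwidth-bound operations HPCG's GFLOP/s score reflects.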

High Performance Conjugate Gradient 3.1 - X Y Z: 144 144 144 - RT: 60 (GFLOP/s, More Is Better)
Ubuntu 23.04: 77.36 (SE +/- 0.77, N = 6; Min: 75.46 / Avg: 77.36 / Max: 80.97)
Ubuntu 22.04: 75.88 (SE +/- 0.58, N = 3; Min: 74.77 / Avg: 75.88 / Max: 76.75)
1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi_cxx -lmpi

VVenC

VVenC 1.9 - Video Input: Bosphorus 4K - Video Preset: Faster (Frames Per Second, More Is Better)
Ubuntu 23.04: 3.626 (SE +/- 0.058, N = 12; Min: 3.38 / Avg: 3.63 / Max: 3.93)
Ubuntu 22.04: 3.855 (SE +/- 0.047, N = 3; Min: 3.76 / Avg: 3.86 / Max: 3.91)
-flto
1. (CXX) g++ options: -O3 -flto=auto -fno-fat-lto-objects

VVenC 1.9 - Video Input: Bosphorus 4K - Video Preset: Fast (Frames Per Second, More Is Better)
Ubuntu 23.04: 2.528 (SE +/- 0.023, N = 7; Min: 2.42 / Avg: 2.53 / Max: 2.61)
Ubuntu 22.04: 2.542 (SE +/- 0.015, N = 3; Min: 2.52 / Avg: 2.54 / Max: 2.57)
-flto
1. (CXX) g++ options: -O3 -flto=auto -fno-fat-lto-objects

Renaissance


Renaissance 0.14 - Test: Scala Dotty (ms, Fewer Is Better)
Ubuntu 23.04: 919.3 (SE +/- 24.95, N = 15; Min: 793.7 / Avg: 919.34 / Max: 1104.95; MIN: 615.34 / MAX: 2847.74)
Ubuntu 22.04: 980.1 (SE +/- 18.18, N = 15; Min: 863.85 / Avg: 980.06 / Max: 1102.54; MIN: 648.36 / MAX: 3175.49)

SVT-AV1

SVT-AV1 1.7 - Encoder Mode: Preset 4 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
Ubuntu 23.04: 2.380 (SE +/- 0.025, N = 15; Min: 2.25 / Avg: 2.38 / Max: 2.57)
Ubuntu 22.04: 2.441 (SE +/- 0.016, N = 13; Min: 2.34 / Avg: 2.44 / Max: 2.55)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Memcached

Memcached is a high performance, distributed memory object caching system. This Memcached test profile makes use of memtier_benchmark for executing this CPU/memory-focused server benchmark. Learn more via the OpenBenchmarking.org test page.
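The "Set To Get Ratio" in the results below controls the mix of write (SET) and read (GET) commands memtier_benchmark issues. To make the 1:10 ratio concrete, here is a toy workload generator with a plain dict standing in for the cache; `run_mixed_workload` is a hypothetical helper for illustration, not part of memtier_benchmark:

```python
import random

def run_mixed_workload(ops, set_weight=1, get_weight=10, keyspace=1000, seed=42):
    """Issue a weighted SET/GET mix against a dict standing in for memcached."""
    rng = random.Random(seed)
    store = {}
    sets = gets = hits = 0
    p_set = set_weight / (set_weight + get_weight)
    for _ in range(ops):
        key = "key:%d" % rng.randrange(keyspace)
        if rng.random() < p_set:
            store[key] = b"x" * 64   # SET: write a small value
            sets += 1
        else:
            gets += 1                # GET: read, counting cache hits
            if key in store:
                hits += 1
    return sets, gets, hits
```

With a 1:10 ratio, roughly one operation in eleven is a SET, so the Ops/sec figures below are dominated by read-side throughput.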

Memcached 1.6.19 - Set To Get Ratio: 1:10 (Ops/sec, More Is Better)
Ubuntu 23.04: 599628.31 (SE +/- 10608.70, N = 15; Min: 539692.94 / Avg: 599628.31 / Max: 675122.09)
Ubuntu 22.04: 623717.41 (SE +/- 7813.87, N = 15; Min: 582678.32 / Avg: 623717.41 / Max: 676209.2)
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.12 - Device: CPU - Batch Size: 16 - Model: ResNet-50 (images/sec, More Is Better)
Ubuntu 23.04: 14.49 (SE +/- 0.11, N = 3; Min: 14.35 / Avg: 14.49 / Max: 14.71)
Ubuntu 22.04: 14.48 (SE +/- 0.14, N = 12; Min: 13.77 / Avg: 14.48 / Max: 15.42)

OpenVINO

OpenVINO 2023.1 - Model: Machine Translation EN To DE FP16 - Device: CPU (ms, Fewer Is Better)
Ubuntu 23.04: 60.10 (SE +/- 1.04, N = 15; Min: 55.67 / Avg: 60.1 / Max: 70.41; MIN: 36.84 / MAX: 723.03)
Ubuntu 22.04: 60.55 (SE +/- 1.29, N = 12; Min: 56.05 / Avg: 60.55 / Max: 70.76; MIN: 38.21 / MAX: 574.75)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO 2023.1 - Model: Machine Translation EN To DE FP16 - Device: CPU (FPS, More Is Better)
Ubuntu 23.04: 616.89 (SE +/- 9.85, N = 15; Min: 524.56 / Avg: 616.89 / Max: 663.13)
Ubuntu 22.04: 612.56 (SE +/- 12.13, N = 12; Min: 521.74 / Avg: 612.56 / Max: 658.89)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.12 - Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time (Items Per Second, More Is Better)
Ubuntu 23.04: 9.03807 (SE +/- 0.16639, N = 15; Min: 8.09 / Avg: 9.04 / Max: 10.2)
Ubuntu 22.04: 9.28812 (SE +/- 0.10483, N = 15; Min: 8.7 / Avg: 9.29 / Max: 9.86)

OpenVINO

OpenVINO 2023.1 - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (ms, Fewer Is Better)
Ubuntu 23.04: 0.70 (SE +/- 0.01, N = 15; Min: 0.66 / Avg: 0.7 / Max: 0.76; MIN: 0.27 / MAX: 65.5)
Ubuntu 22.04: 0.72 (SE +/- 0.01, N = 12; Min: 0.65 / Avg: 0.72 / Max: 0.76; MIN: 0.28 / MAX: 72.42)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO 2023.1 - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS, More Is Better)
Ubuntu 23.04: 89007.69 (SE +/- 1618.25, N = 15; Min: 74004.05 / Avg: 89007.69 / Max: 99053.79)
Ubuntu 22.04: 83575.10 (SE +/- 1954.61, N = 12; Min: 73692.51 / Avg: 83575.1 / Max: 98361.97)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

srsRAN Project

srsRAN Project is a complete ORAN-native 5G RAN solution created by Software Radio Systems (SRS). The srsRAN Project radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN Project 23.5 - Test: PUSCH Processor Benchmark, Throughput Total (Mbps, More Is Better)
Ubuntu 23.04: 10598.8 (SE +/- 309.18, N = 15; Min: 7546.9 / Avg: 10598.81 / Max: 11739.6)
Ubuntu 22.04: 10344.3 (SE +/- 160.30, N = 14; Min: 8985.2 / Avg: 10344.32 / Max: 11194.5)
1. (CXX) g++ options: -march=native -mfma -O3 -fno-trapping-math -fno-math-errno -lgtest

OSPRay


OSPRay 2.12 - Benchmark: gravity_spheres_volume/dim_512/scivis/real_time (Items Per Second, More Is Better)
Ubuntu 23.04: 10.95 (SE +/- 0.33, N = 12; Min: 9.43 / Avg: 10.95 / Max: 12.97)
Ubuntu 22.04: 10.42 (SE +/- 0.17, N = 15; Min: 9.67 / Avg: 10.42 / Max: 11.78)

Blender

Blender 3.6 - Blend File: Barbershop - Compute: CPU-Only (Seconds, Fewer Is Better)
Ubuntu 23.04: 261.52 (SE +/- 2.68, N = 3; Min: 256.69 / Avg: 261.52 / Max: 265.95)
Ubuntu 22.04: 262.72 (SE +/- 2.00, N = 3; Min: 260.41 / Avg: 262.72 / Max: 266.71)

Renaissance


Renaissance 0.14 - Test: Random Forest (ms, Fewer Is Better)
Ubuntu 23.04: 1853.4 (SE +/- 13.67, N = 3; Min: 1830.4 / Avg: 1853.42 / Max: 1877.71; MIN: 1540.7 / MAX: 2511.91)
Ubuntu 22.04: 1861.6 (SE +/- 33.63, N = 12; Min: 1666.31 / Avg: 1861.58 / Max: 2009.48; MIN: 1313.89 / MAX: 2805.42)

Zstd Compression

This test measures the time needed to compress/decompress a sample input file using Zstd compression, supplied by the system or otherwise built externally to the test profile. Learn more via the OpenBenchmarking.org test page.
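The MB/s figures below come from timing a compression pass and a decompression pass over a fixed input and dividing the input size by the elapsed time. The same methodology can be sketched in a few lines; the snippet below uses Python's stdlib zlib purely as a stand-in codec, since zstd itself is not in the standard library, and `measure_codec` is a hypothetical helper, not part of the test profile:

```python
import time
import zlib

def measure_codec(data, level=6):
    """Time one compress and one decompress pass; return (MB/s, MB/s, ratio)."""
    t0 = time.perf_counter()
    packed = zlib.compress(data, level)
    compress_s = time.perf_counter() - t0

    t0 = time.perf_counter()
    unpacked = zlib.decompress(packed)
    decompress_s = time.perf_counter() - t0

    assert unpacked == data                      # round-trip must be lossless
    mb = len(data) / 1e6
    return mb / compress_s, mb / decompress_s, len(data) / len(packed)
```

This also shows why the compression and decompression speeds diverge so sharply at level 19 below: high levels spend far more time searching for matches while decompression cost stays nearly flat.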

Zstd Compression - Compression Level: 19, Long Mode - Decompression Speed (MB/s, More Is Better)
Ubuntu 23.04: 2832.9 (SE +/- 1.31, N = 15; Min: 2821.7 / Avg: 2832.89 / Max: 2839.6)
Ubuntu 22.04: 2891.4 (SE +/- 1.05, N = 12; Min: 2887 / Avg: 2891.38 / Max: 2899.6)
1. Ubuntu 23.04: *** Zstandard CLI (64-bit) v1.5.4, by Yann Collet ***
2. Ubuntu 22.04: *** zstd command line interface 64-bits v1.4.8, by Yann Collet ***

Zstd Compression - Compression Level: 19, Long Mode - Compression Speed (MB/s, More Is Better)
Ubuntu 23.04: 27.9 (SE +/- 0.63, N = 15; Min: 22.5 / Avg: 27.93 / Max: 32.3)
Ubuntu 22.04: 29.8 (SE +/- 0.44, N = 12; Min: 27 / Avg: 29.78 / Max: 31.5)
1. Ubuntu 23.04: *** Zstandard CLI (64-bit) v1.5.4, by Yann Collet ***
2. Ubuntu 22.04: *** zstd command line interface 64-bits v1.4.8, by Yann Collet ***

Appleseed

Appleseed is an open-source production renderer focused on physically-based global illumination, primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Material Tester (Seconds, Fewer Is Better)
Ubuntu 23.04: 376.10
Ubuntu 22.04: 347.92

Memcached


Memcached 1.6.19 - Set To Get Ratio: 1:100 (Ops/sec, More Is Better)
Ubuntu 23.04: 501826.54 (SE +/- 8153.99, N = 15; Min: 447387.72 / Avg: 501826.54 / Max: 565690.89)
Ubuntu 22.04: 500744.65 (SE +/- 3294.51, N = 3; Min: 497048.26 / Avg: 500744.65 / Max: 507316.61)
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

OpenVINO

OpenVINO 2023.1 - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better)
Ubuntu 23.04: 4.24 (SE +/- 0.04, N = 15; Min: 3.89 / Avg: 4.24 / Max: 4.41; MIN: 2.6 / MAX: 139.76)
Ubuntu 22.04: 4.29 (SE +/- 0.06, N = 3; Min: 4.17 / Avg: 4.29 / Max: 4.38; MIN: 2.66 / MAX: 146.93)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO 2023.1 - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS, More Is Better)
Ubuntu 23.04: 25416.48 (SE +/- 255.75, N = 15; Min: 24421.88 / Avg: 25416.48 / Max: 27978.56)
Ubuntu 22.04: 25126.84 (SE +/- 314.49, N = 3; Min: 24610.35 / Avg: 25126.84 / Max: 25695.96)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

Appleseed


Appleseed 2.0 Beta - Scene: Emily (Seconds, Fewer Is Better)
Ubuntu 23.04: 284.67
Ubuntu 22.04: 286.26

Zstd Compression


Zstd Compression - Compression Level: 19 - Decompression Speed (MB/s, More Is Better)
Ubuntu 23.04: 2683.7 (SE +/- 3.08, N = 12; Min: 2664.1 / Avg: 2683.68 / Max: 2694.2)
Ubuntu 22.04: 2823.2 (SE +/- 22.65, N = 15; Min: 2508.6 / Avg: 2823.17 / Max: 2854.2)
1. Ubuntu 23.04: *** Zstandard CLI (64-bit) v1.5.4, by Yann Collet ***
2. Ubuntu 22.04: *** zstd command line interface 64-bits v1.4.8, by Yann Collet ***

Zstd Compression - Compression Level: 19 - Compression Speed (MB/s, More Is Better)
Ubuntu 23.04: 46.2 (SE +/- 1.72, N = 12; Min: 36.1 / Avg: 46.2 / Max: 58.1)
Ubuntu 22.04: 51.4 (SE +/- 1.13, N = 15; Min: 45.1 / Avg: 51.37 / Max: 62.6)
1. Ubuntu 23.04: *** Zstandard CLI (64-bit) v1.5.4, by Yann Collet ***
2. Ubuntu 22.04: *** zstd command line interface 64-bits v1.4.8, by Yann Collet ***

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.
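The "Read Random Write Random" workload below interleaves random point reads with random point writes over a large keyspace, stressing the store's ability to serve both at once. A toy sketch of the access pattern, with a plain dict standing in for RocksDB (`read_random_write_random` is a hypothetical helper for illustration, not db_bench code):

```python
import random

def read_random_write_random(ops, read_fraction=0.5, keyspace=100_000, seed=1):
    """Interleave random point reads and writes against a dict stand-in."""
    rng = random.Random(seed)
    store = {}
    reads = writes = 0
    for _ in range(ops):
        key = rng.randrange(keyspace)
        if rng.random() < read_fraction:
            store.get(key)            # point read; may miss on an unwritten key
            reads += 1
        else:
            store[key] = b"value"     # point write
            writes += 1
    return reads, writes
```

In the real benchmark the random keys defeat caching and force both the memtable write path and the SST read path, which is why this test is sensitive to kernel and allocator differences between distro releases.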

RocksDB 8.0 - Test: Read Random Write Random (Op/s, More Is Better)
Ubuntu 23.04: 665185 (SE +/- 6074.54, N = 15; Min: 629903 / Avg: 665185.13 / Max: 709727)
Ubuntu 22.04: 692997 (SE +/- 3631.47, N = 3; Min: 685782 / Avg: 692997.33 / Max: 697324)
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: python_startup (Milliseconds, Fewer Is Better)
Ubuntu 23.04: 11.4 (SE +/- 0.10, N = 3; Min: 11.2 / Avg: 11.4 / Max: 11.5)
Ubuntu 22.04: 12.1 (SE +/- 0.11, N = 8; Min: 11.5 / Avg: 12.06 / Max: 12.3)

DaCapo Benchmark

This test runs the DaCapo Benchmarks, written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: H2 (msec, Fewer Is Better)
Ubuntu 23.04: 19866 (SE +/- 401.72, N = 20; Min: 16133 / Avg: 19865.8 / Max: 24002)
Ubuntu 22.04: 22809 (SE +/- 393.58, N = 20; Min: 17672 / Avg: 22809.4 / Max: 25072)

OSPRay


OSPRay 2.12 - Benchmark: gravity_spheres_volume/dim_512/ao/real_time (Items Per Second, More Is Better)
Ubuntu 23.04: 11.13 (SE +/- 0.22, N = 15; Min: 10.03 / Avg: 11.13 / Max: 12.85)
Ubuntu 22.04: 10.77 (SE +/- 0.13, N = 3; Min: 10.53 / Avg: 10.77 / Max: 10.96)

OpenRadioss

OpenRadioss 2023.09.15 - Model: Chrysler Neon 1M (Seconds, Fewer Is Better)
Ubuntu 23.04: 131.14 (SE +/- 1.48, N = 3; Min: 128.41 / Avg: 131.14 / Max: 133.5)
Ubuntu 22.04: 132.86 (SE +/- 0.40, N = 3; Min: 132.16 / Avg: 132.86 / Max: 133.54)

Numpy Benchmark

This is a test of general NumPy performance. Learn more via the OpenBenchmarking.org test page.

Numpy Benchmark (Score, More Is Better)
Ubuntu 23.04: 472.63 (SE +/- 5.52, N = 4; Min: 456.93 / Avg: 472.63 / Max: 481.47)
Ubuntu 22.04: 439.90 (SE +/- 4.01, N = 3; Min: 432.15 / Avg: 439.9 / Max: 445.56)

OpenRadioss

OpenRadioss 2023.09.15 - Model: Bird Strike on Windshield (Seconds, Fewer Is Better)
Ubuntu 23.04: 157.19 (SE +/- 0.61, N = 3; Min: 156.08 / Avg: 157.19 / Max: 158.18)
Ubuntu 22.04: 157.25 (SE +/- 1.23, N = 3; Min: 155.5 / Avg: 157.25 / Max: 159.62)

Zstd Compression


Zstd Compression - Compression Level: 8 - Decompression Speed (MB/s, More Is Better)
Ubuntu 23.04: 3154.9 (SE +/- 46.81, N = 12; Min: 2805.4 / Avg: 3154.94 / Max: 3288.6)
Ubuntu 22.04: 3228.1 (SE +/- 29.89, N = 12; Min: 2921.7 / Avg: 3228.13 / Max: 3293)
1. Ubuntu 23.04: *** Zstandard CLI (64-bit) v1.5.4, by Yann Collet ***
2. Ubuntu 22.04: *** zstd command line interface 64-bits v1.4.8, by Yann Collet ***

Zstd Compression - Compression Level: 8 - Compression Speed (MB/s, More Is Better)
Ubuntu 23.04: 1130.8 (SE +/- 53.57, N = 12; Min: 783.2 / Avg: 1130.84 / Max: 1503.2)
Ubuntu 22.04: 942.1 (SE +/- 20.54, N = 12; Min: 818.9 / Avg: 942.13 / Max: 1074)
1. Ubuntu 23.04: *** Zstandard CLI (64-bit) v1.5.4, by Yann Collet ***
2. Ubuntu 22.04: *** zstd command line interface 64-bits v1.4.8, by Yann Collet ***

miniBUDE

MiniBUDE is a mini application for the core computation of the Bristol University Docking Engine (BUDE). This test profile currently makes use of the OpenMP implementation of miniBUDE for CPU benchmarking. Learn more via the OpenBenchmarking.org test page.

miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM2 (Billion Interactions/s, More Is Better)
Ubuntu 23.04: 111.07 (SE +/- 0.85, N = 15; Min: 105.21 / Avg: 111.07 / Max: 115.43)
Ubuntu 22.04: 105.57 (SE +/- 1.27, N = 4; Min: 103.2 / Avg: 105.57 / Max: 108.87)
1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM2 (GFInst/s, More Is Better)
Ubuntu 23.04: 2776.75 (SE +/- 21.23, N = 15; Min: 2630.16 / Avg: 2776.75 / Max: 2885.69)
Ubuntu 22.04: 2639.29 (SE +/- 31.78, N = 4; Min: 2580.08 / Avg: 2639.29 / Max: 2721.78)
1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

Redis 7.0.4 - Test: SET - Parallel Connections: 500 (Requests Per Second, More Is Better)
Ubuntu 23.04: 1916420.90 (SE +/- 37507.63, N = 12; Min: 1637333.62 / Avg: 1916420.9 / Max: 2136425.25)
Ubuntu 22.04: 1882918.18 (SE +/- 28736.18, N = 15; Min: 1569394.12 / Avg: 1882918.18 / Max: 2041603)
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

OpenRadioss

OpenRadioss 2023.09.15 - Model: INIVOL and Fluid Structure Interaction Drop Container (Seconds, Fewer Is Better)