New Tests

2 x Intel Xeon Platinum 8380 testing with an Intel M50CYP2SB2U (SE5C6200.86B.0022.D08.2103221623 BIOS) and ASPEED graphics on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2209031-NE-2209025NE82
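
For reference, reproducing this comparison locally follows the same command pattern; the step below is a minimal sketch assuming the Phoronix Test Suite is already installed, using the result identifier quoted above:

  # Download this result file and run the same test selection locally,
  # appending your own system as an additional result
  phoronix-test-suite benchmark 2209031-NE-2209025NE82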

This result file spans tests within the following categories:

- AV1 (2 tests)
- Timed Code Compilation (3 tests)
- C/C++ Compiler Tests (15 tests)
- Compression Tests (2 tests)
- CPU Massive (23 tests)
- Creator Workloads (15 tests)
- Database Test Suite (8 tests)
- Encoding (6 tests)
- Fortran Tests (2 tests)
- Game Development (2 tests)
- Go Language Tests (3 tests)
- HPC - High Performance Computing (10 tests)
- Imaging (3 tests)
- Java (2 tests)
- Common Kernel Benchmarks (3 tests)
- Machine Learning (6 tests)
- Molecular Dynamics (3 tests)
- MPI Benchmarks (3 tests)
- Multi-Core (23 tests)
- Node.js + NPM Tests (2 tests)
- NVIDIA GPU Compute (2 tests)
- Intel oneAPI (4 tests)
- OpenMPI Tests (3 tests)
- Programmer / Developer System Benchmarks (6 tests)
- Python Tests (4 tests)
- Raytracing (2 tests)
- Renderers (4 tests)
- Scientific Computing (3 tests)
- Server (13 tests)
- Server CPU Tests (15 tests)
- Single-Threaded (3 tests)
- Video Encoding (6 tests)

Result Identifier      Date Run             Test Duration
CentOS Stream 9        August 31 2022       23 Hours, 41 Minutes
Clear Linux 36990      September 01 2022    19 Hours, 13 Minutes
Ubuntu 20.04.1 LTS     September 02 2022    21 Hours, 41 Minutes



System Details

Processor: 2 x Intel Xeon Platinum 8380 @ 3.40GHz (80 Cores / 160 Threads)
Motherboard: Intel M50CYP2SB2U (SE5C6200.86B.0022.D08.2103221623 BIOS)
Chipset: Intel Device 0998
Memory: 512GB
Disk: 7682GB INTEL SSDPF2KX076TZ
Graphics: ASPEED
Monitor: VE228
Network: 2 x Intel X710 for 10GBASE-T + 2 x Intel E810-C for QSFP

- CentOS Stream 9: OS: CentOS Stream 9, Kernel: 5.14.0-148.el9.x86_64 (x86_64), Desktop: GNOME Shell 40.10, Display Server: X Server, Compiler: GCC 11.3.1 20220421, File-System: xfs, Screen Resolution: 1920x1080
- Clear Linux 36990: OS: Clear Linux OS 36990, Kernel: 5.19.6-1185.native (x86_64), Desktop: GNOME Shell 42.4, Display Server: X Server 1.21.1.3, Compiler: GCC 12.2.1 20220831 releases/gcc-12.2.0-35-g63997f2223 + Clang 14.0.6 + LLVM 14.0.6, File-System: ext4
- Ubuntu 20.04.1 LTS: OS: Ubuntu 22.04, Kernel: 5.15.0-47-generic (x86_64), Desktop: GNOME Shell 42.2, Vulkan: 1.2.204, Compiler: GCC 11.2.0

Kernel Details
- CentOS Stream 9: Transparent Huge Pages: always
- Clear Linux 36990: Transparent Huge Pages: always
- Ubuntu 20.04.1 LTS: Transparent Huge Pages: madvise

Compiler Details
- CentOS Stream 9: --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-host-bind-now --enable-host-pie --enable-initfini-array --enable-languages=c,c++,fortran,lto --enable-link-serialization=1 --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-arch_64=x86-64-v2 --with-build-config=bootstrap-lto --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver --without-isl
- Clear Linux 36990: --build=x86_64-generic-linux --disable-libmpx --disable-libunwind-exceptions --disable-multiarch --disable-vtable-verify --disable-werror --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-clocale=gnu --enable-default-pie --enable-gnu-indirect-function --enable-gnu-indirect-function --enable-host-shared --enable-languages=c,c++,fortran,go,jit --enable-ld=default --enable-libstdcxx-pch --enable-linux-futex --enable-lto --enable-multilib --enable-plugin --enable-shared --enable-threads=posix --exec-prefix=/usr --includedir=/usr/include --target=x86_64-generic-linux --with-arch=x86-64-v3 --with-gcc-major-version-only --with-glibc-version=2.35 --with-gnu-ld --with-isl --with-pic --with-ppl=yes --with-tune=skylake-avx512 --with-zstd
- Ubuntu 20.04.1 LTS: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Disk Details
- CentOS Stream 9: NONE / attr2,inode64,logbsize=32k,logbufs=8,noquota,relatime,rw,seclabel / Block Size: 4096
- Clear Linux 36990: MQ-DEADLINE / relatime,rw,stripe=256 / Block Size: 4096
- Ubuntu 20.04.1 LTS: NONE / errors=remount-ro,relatime,rw / Block Size: 4096

Processor Details
- CentOS Stream 9: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0xd000363
- Clear Linux 36990: Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0xd000375
- Ubuntu 20.04.1 LTS: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0xd000363

Java Details
- CentOS Stream 9: OpenJDK Runtime Environment (Red_Hat-11.0.16.0.8-2.el9) (build 11.0.16+8-LTS)
- Clear Linux 36990: OpenJDK Runtime Environment (build 18.0.1-internal+0-adhoc.mockbuild.corretto-18-18.0.1.10.1)
- Ubuntu 20.04.1 LTS: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)

Python Details
- CentOS Stream 9: Python 3.9.13
- Clear Linux 36990: Python 3.10.6
- Ubuntu 20.04.1 LTS: Python 3.10.4

Security Details
- CentOS Stream 9: SELinux + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
- Clear Linux 36990: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected
- Ubuntu 20.04.1 LTS: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Environment Details
- Clear Linux 36990: FFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z -Wl,now -Wl,-z -Wl,relro -malign-data=abi -fno-semantic-interposition -ftree-vectorize -ftree-loop-vectorize -Wl,--enable-new-dtags" CXXFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -Wformat -Wformat-security -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z -Wl,now -Wl,-z -Wl,relro -fno-semantic-interposition -ffat-lto-objects -fno-trapping-math -Wl,-sort-common -Wl,--enable-new-dtags -mtune=skylake -mrelax-cmpxchg-loop -fvisibility-inlines-hidden -Wl,--enable-new-dtags" FCFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z -Wl,now -Wl,-z -Wl,relro -malign-data=abi -fno-semantic-interposition -ftree-vectorize -ftree-loop-vectorize -Wl,-sort-common -Wl,--enable-new-dtags" CFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -Wformat -Wformat-security -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z -Wl,now -Wl,-z -Wl,relro -fno-semantic-interposition -ffat-lto-objects -fno-trapping-math -Wl,-sort-common -Wl,--enable-new-dtags -mtune=skylake -mrelax-cmpxchg-loop" THEANO_FLAGS="floatX=float32,openmp=true,gcc.cxxflags="-ftree-vectorize -mavx""

Logarithmic Result Overview (Phoronix Test Suite): CentOS Stream 9 vs. Clear Linux 36990 vs. Ubuntu 20.04.1 LTS, spanning C-Blosc, Natron, Renaissance, x264, DaCapo Benchmark, PostgreSQL pgbench, VP9 libvpx Encoding, Timed LLVM Compilation, Node.js Express HTTP Load Test, SVT-AV1, GraphicsMagick, Apache Spark, ClickHouse, SVT-HEVC, Zstd Compression, TensorFlow Lite, libavif avifenc, SVT-VP9, memtier_benchmark, Node.js V8 Web Tooling Benchmark, oneDNN, Redis, Stress-NG, 7-Zip Compression, Unpacking The Linux Kernel, LAMMPS Molecular Dynamics Simulator, TNN, Apache HTTP Server, WebP Image Encode, OSPRay, ASTC Encoder, ONNX Runtime, nginx, Stockfish, Mobile Neural Network, GROMACS, Blender, simdjson, High Performance Conjugate Gradient, OpenSSL, NAMD, and OSPRay Studio.

[Condensed result table covering all tests for CentOS Stream 9, Clear Linux 36990, and Ubuntu 20.04.1 LTS; the individual per-test results are listed below.]

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.
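
As a rough illustration of what this profile exercises, a standalone pgbench run at the same scaling factor and client count might look like the sketch below; the database name, job count, and duration are illustrative assumptions rather than the exact test-profile settings:

  # Initialize a pgbench database at scaling factor 100
  pgbench -i -s 100 pgbench_db
  # Read-only (select-only) run with 500 concurrent clients; reports TPS and average latency
  pgbench -S -c 500 -j 64 -T 60 pgbench_db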

PostgreSQL pgbench 14.0, Scaling Factor: 100 - Clients: 500 - Mode: Read Only - Average Latency (ms, fewer is better):
- CentOS Stream 9: 0.270 (SE +/- 0.005, N = 12; Min: 0.25 / Avg: 0.27 / Max: 0.29) [-O2]
- Clear Linux 36990: 0.275 (SE +/- 0.007, N = 12; Min: 0.25 / Avg: 0.27 / Max: 0.3) [-O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop]
- Ubuntu 20.04.1 LTS: 0.285 (SE +/- 0.004, N = 12; Min: 0.27 / Avg: 0.28 / Max: 0.31) [-O2]
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -lpgcommon -lpgport -lpq -lm

PostgreSQL pgbench 14.0, Scaling Factor: 100 - Clients: 500 - Mode: Read Only (TPS, more is better):
- CentOS Stream 9: 1855656 (SE +/- 30425.21, N = 12; Min: 1702189.9 / Avg: 1855656.11 / Max: 1970416.52) [-O2]
- Clear Linux 36990: 1831665 (SE +/- 42947.24, N = 12; Min: 1643669.33 / Avg: 1831665.25 / Max: 2003918.94) [-O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop]
- Ubuntu 20.04.1 LTS: 1760440 (SE +/- 24026.09, N = 12; Min: 1634369.08 / Avg: 1760440.5 / Max: 1852918.69) [-O2]
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -lpgcommon -lpgport -lpq -lm

TNN

TNN is an open-source deep learning reasoning framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3, Target: CPU - Model: DenseNet (ms, fewer is better):
- CentOS Stream 9: 3955.05 (SE +/- 27.70, N = 3; Min: 3909.45 / Avg: 3955.05 / Max: 4005.08; MIN: 3833.99 / MAX: 5510.15)
- Clear Linux 36990: 3620.86 (SE +/- 1.46, N = 3; Min: 3618.21 / Avg: 3620.86 / Max: 3623.25; MIN: 3599.42 / MAX: 3730.48) [-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop]
- Ubuntu 20.04.1 LTS: 4704.04 (SE +/- 40.05, N = 9; Min: 4587.41 / Avg: 4704.04 / Max: 4901.58; MIN: 3855.1 / MAX: 6393.88)
1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.
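
For context, Renaissance benchmarks are normally launched from the suite's JAR with a benchmark name as the argument; the sketch below is illustrative, with the JAR file name, repetition count, and benchmark identifier assumed from the upstream Renaissance releases rather than taken from this result file:

  # Run the Savina Reactors.IO workload ("reactors") for 10 repetitions
  java -jar renaissance-gpl-0.14.0.jar --repetitions 10 reactors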

Renaissance 0.14, Test: Savina Reactors.IO (ms, fewer is better):
- CentOS Stream 9: 21219.4 (SE +/- 296.93, N = 3; Min: 20627.9 / Avg: 21219.38 / Max: 21561.04; MIN: 20627.9 / MAX: 32602.9)
- Clear Linux 36990: 8256.6 (SE +/- 56.26, N = 15; Min: 7799.33 / Avg: 8256.64 / Max: 8639; MIN: 7799.33 / MAX: 12715.24)
- Ubuntu 20.04.1 LTS: 21545.7 (SE +/- 209.69, N = 6; Min: 20986.56 / Avg: 21545.65 / Max: 22331.64; MIN: 20986.56 / MAX: 35348.39)

Stockfish

This is a test of Stockfish, an advanced open-source C++11 chess benchmark that can scale up to 512 CPU threads. Learn more via the OpenBenchmarking.org test page.
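
For reference, Stockfish ships a built-in bench command that this kind of test exercises; the sketch below is illustrative, with hash size, thread count, and search depth chosen arbitrarily, and the positional-argument order following upstream Stockfish's bench usage as understood here rather than this test profile's exact invocation:

  # bench <hash MB> <threads> <depth>: reports total nodes searched and nodes per second
  ./stockfish bench 16384 160 26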

Stockfish 15, Total Time (Nodes Per Second, more is better):
- CentOS Stream 9: 179473129 (SE +/- 2364357.21, N = 15; Min: 164540890 / Avg: 179473129.13 / Max: 195156974)
- Clear Linux 36990: 186079628 (SE +/- 3123886.35, N = 15; Min: 166800438 / Avg: 186079627.93 / Max: 209129501) [-pipe -fexceptions -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop]
- Ubuntu 20.04.1 LTS: 174792587 (SE +/- 1938385.82, N = 15; Min: 159673139 / Avg: 174792586.8 / Max: 188968845)
1. (CXX) g++ options: -lgcov -m64 -lpthread -fno-exceptions -std=c++17 -fno-peel-loops -fno-tracer -pedantic -O3 -msse -msse3 -mpopcnt -mavx2 -mavx512f -mavx512bw -mavx512vnni -mavx512dq -mavx512vl -msse4.1 -mssse3 -msse2 -mbmi2 -flto -flto=jobserver

Apache Spark

This is a benchmark of Apache Spark with its PySpark interface. Apache Spark is an open-source unified analytics engine for large-scale data processing and dealing with big data. This test profile benchmarks Apache Spark in a single-system configuration using spark-submit. The test makes use of DIYBigData's pyspark-benchmark (https://github.com/DIYBigData/pyspark-benchmark/) for generating test data and running various Apache Spark operations. Learn more via the OpenBenchmarking.org test page.
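
A hand-run equivalent of this workflow would generate a data set and then submit the benchmark jobs through spark-submit, roughly as sketched below; the script names, data path, and master URL are illustrative placeholders rather than the exact files used by the pyspark-benchmark project or this test profile:

  # Generate roughly 40,000,000 rows across 500 partitions, then run a benchmark job against the data
  spark-submit --master 'local[*]' generate_data.py /tmp/spark-bench-data --rows 40000000 --partitions 500
  spark-submit --master 'local[*]' benchmark_job.py /tmp/spark-bench-data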

Apache Spark 3.3, Row Count: 40000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe (Seconds, fewer is better):
- CentOS Stream 9: 2.79 (SE +/- 0.09, N = 3; Min: 2.62 / Avg: 2.79 / Max: 2.92)
- Clear Linux 36990: 2.04 (SE +/- 0.04, N = 15; Min: 1.86 / Avg: 2.04 / Max: 2.3)
- Ubuntu 20.04.1 LTS: 2.97 (SE +/- 0.19, N = 3; Min: 2.75 / Avg: 2.97 / Max: 3.36)

Apache Spark 3.3, Row Count: 40000000 - Partitions: 500 - Calculate Pi Benchmark (Seconds, fewer is better):
- CentOS Stream 9: 36.21 (SE +/- 0.12, N = 3; Min: 35.99 / Avg: 36.21 / Max: 36.41)
- Clear Linux 36990: 33.05 (SE +/- 0.06, N = 15; Min: 32.72 / Avg: 33.05 / Max: 33.71)
- Ubuntu 20.04.1 LTS: 31.65 (SE +/- 0.02, N = 3; Min: 31.64 / Avg: 31.65 / Max: 31.69)

Apache Spark 3.3, Row Count: 40000000 - Partitions: 500 - SHA-512 Benchmark Time (Seconds, fewer is better):
- CentOS Stream 9: 88.83 (SE +/- 0.53, N = 3; Min: 87.98 / Avg: 88.83 / Max: 89.81)
- Clear Linux 36990: 20.21 (SE +/- 0.19, N = 15; Min: 18.6 / Avg: 20.21 / Max: 21.72)
- Ubuntu 20.04.1 LTS: 90.14 (SE +/- 0.76, N = 3; Min: 88.78 / Avg: 90.14 / Max: 91.39)

Apache Spark 3.3, Row Count: 40000000 - Partitions: 500 - Broadcast Inner Join Test Time (Seconds, fewer is better):
- Clear Linux 36990: 15.17 (SE +/- 0.83, N = 2)

Apache Spark 3.3, Row Count: 40000000 - Partitions: 500 - Inner Join Test Time (Seconds, fewer is better):
- Clear Linux 36990: 15.59 (SE +/- 0.23, N = 2)

Apache Spark 3.3, Row Count: 40000000 - Partitions: 500 - Repartition Test Time (Seconds, fewer is better):
- Clear Linux 36990: 13.07 (SE +/- 0.21, N = 2)

Apache Spark 3.3, Row Count: 40000000 - Partitions: 500 - Group By Test Time (Seconds, fewer is better):
- Clear Linux 36990: 34.45 (SE +/- 5.10, N = 2)

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 14.0, Scaling Factor: 100 - Clients: 250 - Mode: Read Only - Average Latency (ms, fewer is better):
- CentOS Stream 9: 0.150 (SE +/- 0.001, N = 3; Min: 0.15 / Avg: 0.15 / Max: 0.15) [-O2]
- Clear Linux 36990: 0.132 (SE +/- 0.004, N = 12; Min: 0.12 / Avg: 0.13 / Max: 0.15) [-O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop]
- Ubuntu 20.04.1 LTS: 0.157 (SE +/- 0.002, N = 12; Min: 0.14 / Avg: 0.16 / Max: 0.16) [-O2]
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -lpgcommon -lpgport -lpq -lm

PostgreSQL pgbench 14.0, Scaling Factor: 100 - Clients: 250 - Mode: Read Only (TPS, more is better):
- CentOS Stream 9: 1669388 (SE +/- 9583.19, N = 3; Min: 1655270.14 / Avg: 1669387.76 / Max: 1687672.99) [-O2]
- Clear Linux 36990: 1913115 (SE +/- 57458.70, N = 12; Min: 1661379.78 / Avg: 1913115.4 / Max: 2124955.9) [-O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop]
- Ubuntu 20.04.1 LTS: 1593950 (SE +/- 17628.71, N = 12; Min: 1530613.95 / Avg: 1593950.05 / Max: 1778797.51) [-O2]
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -lpgcommon -lpgport -lpq -lm

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11, Model: GPT-2 - Device: CPU - Executor: Standard (Inferences Per Minute, more is better):
- CentOS Stream 9: 11045 (SE +/- 388.59, N = 12; Min: 9069 / Avg: 11045.42 / Max: 12021.5) [-flto]
- Clear Linux 36990: 10211 (SE +/- 346.22, N = 9; Min: 9378.5 / Avg: 10210.94 / Max: 12014.5) [-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -flto=auto]
- Ubuntu 20.04.1 LTS: 10856 (SE +/- 93.38, N = 8; Min: 10315.5 / Avg: 10855.81 / Max: 11047.5) [-flto]
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.11, Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inferences Per Minute, more is better):
- CentOS Stream 9: 1881 (SE +/- 16.82, N = 12; Min: 1791.5 / Avg: 1880.54 / Max: 1942) [-flto]
- Clear Linux 36990: 2077 (SE +/- 24.49, N = 12; Min: 1978.5 / Avg: 2076.63 / Max: 2237.5) [-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -flto=auto]
- Ubuntu 20.04.1 LTS: 1899 (SE +/- 20.93, N = 5; Min: 1817.5 / Avg: 1898.9 / Max: 1928.5) [-flto]
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -fno-fat-lto-objects -ldl -lrt

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6, Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better):
- CentOS Stream 9: 447.62 (SE +/- 7.22, N = 15; Min: 393.11 / Avg: 447.62 / Max: 486.98; MIN: 376.51)
- Clear Linux 36990: 487.12 (SE +/- 4.67, N = 15; Min: 454.04 / Avg: 487.12 / Max: 533.05; MIN: 431.07) [-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop]
- Ubuntu 20.04.1 LTS: 497.81 (SE +/- 8.91, N = 15; Min: 402.9 / Avg: 497.81 / Max: 536.79; MIN: 385.19)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11, Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inferences Per Minute, more is better):
- CentOS Stream 9: 443 (SE +/- 1.17, N = 3; Min: 441.5 / Avg: 443.33 / Max: 445.5) [-flto]
- Clear Linux 36990: 521 (SE +/- 17.65, N = 12; Min: 436.5 / Avg: 521.33 / Max: 571) [-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -flto=auto]
- Ubuntu 20.04.1 LTS: 483 (SE +/- 18.60, N = 12; Min: 437 / Avg: 482.92 / Max: 574) [-flto]
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -fno-fat-lto-objects -ldl -lrt

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14, Test: Finagle HTTP Requests (ms, fewer is better):
- CentOS Stream 9: 8693.9 (SE +/- 154.17, N = 12; Min: 8107.68 / Avg: 8693.92 / Max: 9648.75; MIN: 6648.05 / MAX: 15659.82)
- Clear Linux 36990: 5950.5 (SE +/- 56.01, N = 3; Min: 5840.72 / Avg: 5950.5 / Max: 6024.72; MIN: 5353.91 / MAX: 6140.39)
- Ubuntu 20.04.1 LTS: 8478.8 (SE +/- 197.62, N = 13; Min: 7693.23 / Avg: 8478.8 / Max: 9573.87; MIN: 6706.47 / MAX: 17648.26)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6, Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better):
- CentOS Stream 9: 697.28 (SE +/- 6.94, N = 12; Min: 627.38 / Avg: 697.28 / Max: 719.91; MIN: 605.85)
- Clear Linux 36990: 728.04 (SE +/- 6.52, N = 15; Min: 682.09 / Avg: 728.04 / Max: 791.16; MIN: 651.84) [-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop]
- Ubuntu 20.04.1 LTS: 770.26 (SE +/- 10.13, N = 15; Min: 691.92 / Avg: 770.26 / Max: 843.47; MIN: 662.44)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10, Benchmark: particle_volume/scivis/real_time (Items Per Second, more is better):
- CentOS Stream 9: 24.30 (SE +/- 0.29, N = 3; Min: 23.8 / Avg: 24.3 / Max: 24.8)
- Clear Linux 36990: 24.72 (SE +/- 0.05, N = 3; Min: 24.63 / Avg: 24.72 / Max: 24.77)
- Ubuntu 20.04.1 LTS: 24.73 (SE +/- 0.00, N = 3; Min: 24.72 / Avg: 24.73 / Max: 24.73)

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14, Test: ALS Movie Lens (ms, fewer is better):
- CentOS Stream 9: 17123.9 (SE +/- 73.46, N = 3; Min: 16999.98 / Avg: 17123.88 / Max: 17254.2; MIN: 16240.16 / MAX: 19195.87)
- Clear Linux 36990: 8225.4 (SE +/- 28.18, N = 3; Min: 8194.03 / Avg: 8225.41 / Max: 8281.65; MIN: 8194.03 / MAX: 9046.37)
- Ubuntu 20.04.1 LTS: 18654.1 (SE +/- 88.50, N = 3; Min: 18482.22 / Avg: 18654.15 / Max: 18776.54; MIN: 18482.22 / MAX: 21069.29)

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
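
For context, TensorFlow Lite's standard benchmark_model tool measures average inference time in much the same way; the sketch below is illustrative, with the model file, thread count, and run count assumed rather than taken from this test profile:

  # Report average inference latency for a .tflite model on the CPU
  ./benchmark_model --graph=inception_v4.tflite --num_threads=160 --num_runs=50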

TensorFlow Lite 2022-05-18, Model: Inception V4 (Microseconds, fewer is better):
- CentOS Stream 9: 73896.5 (SE +/- 21727.22, N = 15; Min: 35453.9 / Avg: 73896.46 / Max: 362681)
- Clear Linux 36990: 42370.3 (SE +/- 2536.95, N = 15; Min: 35507 / Avg: 42370.31 / Max: 62938.2)
- Ubuntu 20.04.1 LTS: 36132.7 (SE +/- 264.98, N = 15; Min: 35035.8 / Avg: 36132.68 / Max: 38853.5)

memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic generation plus benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.
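
A comparable standalone invocation against a local Redis server might look like the sketch below; the server address, thread count, and test duration are illustrative assumptions, while the 50 clients and 5:1 set-to-get ratio mirror the configuration reported here:

  # Generate Redis traffic with a 5:1 SET:GET ratio across 50 clients on a single thread
  memtier_benchmark -s 127.0.0.1 -p 6379 --protocol=redis --threads=1 --clients=50 --ratio=5:1 --test-time=60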

memtier_benchmark 1.4, Protocol: Redis - Clients: 50 - Set To Get Ratio: 5:1 (Ops/sec, more is better):
- CentOS Stream 9: 1339297.91 (SE +/- 63962.41, N = 12; Min: 966119.24 / Avg: 1339297.91 / Max: 1668140.37)
- Clear Linux 36990: 1760873.07 (SE +/- 66661.35, N = 12; Min: 1097948.3 / Avg: 1760873.07 / Max: 1949244.22)
- Ubuntu 20.04.1 LTS: 1457157.14 (SE +/- 53478.91, N = 15; Min: 1046747.75 / Avg: 1457157.14 / Max: 1723232.26)
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10, Benchmark: particle_volume/pathtracer/real_time (Items Per Second, more is better):
- CentOS Stream 9: 100.67 (SE +/- 0.72, N = 3; Min: 99.77 / Avg: 100.67 / Max: 102.09)
- Clear Linux 36990: 161.93 (SE +/- 0.68, N = 3; Min: 160.97 / Avg: 161.93 / Max: 163.24)
- Ubuntu 20.04.1 LTS: 98.66 (SE +/- 0.28, N = 3; Min: 98.12 / Avg: 98.66 / Max: 99.07)

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.
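
OpenVINO's bundled benchmark_app is the usual way to reproduce this style of throughput/latency measurement by hand; the sketch below is illustrative, with the model file name and run duration assumed rather than taken from this test profile:

  # Benchmark an IR model on the CPU for 60 seconds; reports throughput (FPS) and latency (ms)
  benchmark_app -m age-gender-recognition-retail-0013.xml -d CPU -t 60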

OpenVINO 2022.2.dev, Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (ms, fewer is better):
- CentOS Stream 9: 1.50 (SE +/- 0.05, N = 15; Min: 1.19 / Avg: 1.5 / Max: 1.85; MIN: 0.34 / MAX: 29.48)
- Ubuntu 20.04.1 LTS: 0.99 (SE +/- 0.07, N = 12; Min: 0.73 / Avg: 0.99 / Max: 1.36; MIN: 0.21 / MAX: 76.7)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.2.dev, Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS, more is better):
- CentOS Stream 9: 42731.93 (SE +/- 1567.95, N = 15; Min: 32765.75 / Avg: 42731.93 / Max: 52857.39)
- Ubuntu 20.04.1 LTS: 66238.83 (SE +/- 5640.72, N = 12; Min: 43827.36 / Avg: 66238.83 / Max: 88837.64)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU threaded version for processor benchmarking and not any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1, Model: inception-v3 (ms, fewer is better):
- CentOS Stream 9: 20.09 (SE +/- 0.19, N = 15; Min: 18.66 / Avg: 20.09 / Max: 21.34; MIN: 17.31 / MAX: 37.29)
- Clear Linux 36990: 20.45 (SE +/- 0.03, N = 3; Min: 20.38 / Avg: 20.45 / Max: 20.49; MIN: 19.73 / MAX: 33.83) [-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop]
- Ubuntu 20.04.1 LTS: 20.54 (SE +/- 0.12, N = 3; Min: 20.32 / Avg: 20.54 / Max: 20.71; MIN: 19.73 / MAX: 36.82)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1, Model: mobilenet-v1-1.0 (ms, fewer is better):
- CentOS Stream 9: 2.090 (SE +/- 0.047, N = 15; Min: 1.78 / Avg: 2.09 / Max: 2.24; MIN: 1.76 / MAX: 3.93)
- Clear Linux 36990: 2.179 (SE +/- 0.016, N = 3; Min: 2.15 / Avg: 2.18 / Max: 2.21; MIN: 2.08 / MAX: 2.41) [-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop]
- Ubuntu 20.04.1 LTS: 2.178 (SE +/- 0.016, N = 3; Min: 2.15 / Avg: 2.18 / Max: 2.2; MIN: 2.08 / MAX: 2.58)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1, Model: MobileNetV2_224 (ms, fewer is better):
- CentOS Stream 9: 2.663 (SE +/- 0.014, N = 15; Min: 2.6 / Avg: 2.66 / Max: 2.75; MIN: 2.48 / MAX: 5.57)
- Clear Linux 36990: 2.912 (SE +/- 0.013, N = 3; Min: 2.89 / Avg: 2.91 / Max: 2.93; MIN: 2.72 / MAX: 5.76) [-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop]
- Ubuntu 20.04.1 LTS: 3.077 (SE +/- 0.011, N = 3; Min: 3.06 / Avg: 3.08 / Max: 3.09; MIN: 3.01 / MAX: 3.32)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1, Model: SqueezeNetV1.0 (ms, fewer is better):
- CentOS Stream 9: 3.956 (SE +/- 0.075, N = 15; Min: 3.67 / Avg: 3.96 / Max: 4.46; MIN: 3.51 / MAX: 9.33)
- Clear Linux 36990: 4.244 (SE +/- 0.058, N = 3; Min: 4.17 / Avg: 4.24 / Max: 4.36; MIN: 3.95 / MAX: 8.29) [-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop]
- Ubuntu 20.04.1 LTS: 4.258 (SE +/- 0.029, N = 3; Min: 4.21 / Avg: 4.26 / Max: 4.31; MIN: 4.15 / MAX: 9.78)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1, Model: resnet-v2-50 (ms, fewer is better):
- CentOS Stream 9: 8.663 (SE +/- 0.088, N = 15; Min: 7.94 / Avg: 8.66 / Max: 9.23; MIN: 7.71 / MAX: 20.48)
- Clear Linux 36990: 8.563 (SE +/- 0.071, N = 3; Min: 8.42 / Avg: 8.56 / Max: 8.64; MIN: 8.16 / MAX: 9.67) [-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop]
- Ubuntu 20.04.1 LTS: 8.602 (SE +/- 0.036, N = 3; Min: 8.53 / Avg: 8.6 / Max: 8.64; MIN: 7.85 / MAX: 30.88)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1, Model: squeezenetv1.1 (ms, fewer is better):
- CentOS Stream 9: 2.356 (SE +/- 0.050, N = 15; Min: 2.07 / Avg: 2.36 / Max: 2.67; MIN: 2.03 / MAX: 5.76)
- Clear Linux 36990: 2.529 (SE +/- 0.014, N = 3; Min: 2.51 / Avg: 2.53 / Max: 2.56; MIN: 2.48 / MAX: 2.83) [-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop]
- Ubuntu 20.04.1 LTS: 2.590 (SE +/- 0.012, N = 3; Min: 2.57 / Avg: 2.59 / Max: 2.61; MIN: 2.54 / MAX: 5.99)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1, Model: mobilenetV3 (ms, fewer is better):
- CentOS Stream 9: 1.753 (SE +/- 0.020, N = 15; Min: 1.63 / Avg: 1.75 / Max: 1.86; MIN: 1.61 / MAX: 4.19)
- Clear Linux 36990: 1.862 (SE +/- 0.017, N = 3; Min: 1.83 / Avg: 1.86 / Max: 1.89; MIN: 1.81 / MAX: 2.51) [-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop]
- Ubuntu 20.04.1 LTS: 1.864 (SE +/- 0.012, N = 3; Min: 1.85 / Avg: 1.86 / Max: 1.89; MIN: 1.81 / MAX: 2.46)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1, Model: nasnet (ms, fewer is better):
- CentOS Stream 9: 12.10 (SE +/- 0.23, N = 15; Min: 10.69 / Avg: 12.1 / Max: 13.11; MIN: 10.54 / MAX: 23.03)
- Clear Linux 36990: 12.81 (SE +/- 0.09, N = 3; Min: 12.65 / Avg: 12.81 / Max: 12.97; MIN: 12.47 / MAX: 16.42) [-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop]
- Ubuntu 20.04.1 LTS: 12.92 (SE +/- 0.16, N = 3; Min: 12.63 / Avg: 12.92 / Max: 13.2; MIN: 10.67 / MAX: 25.28)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18, Model: NASNet Mobile (Microseconds, fewer is better):
- CentOS Stream 9: 68713.0 (SE +/- 3728.57, N = 12; Min: 62607.9 / Avg: 68712.96 / Max: 109269)
- Clear Linux 36990: 68566.7 (SE +/- 935.16, N = 15; Min: 64020.3 / Avg: 68566.65 / Max: 79083.9)
- Ubuntu 20.04.1 LTS: 80034.0 (SE +/- 3387.75, N = 12; Min: 65377.7 / Avg: 80034.01 / Max: 104357)

TensorFlow Lite 2022-05-18, Model: SqueezeNet (Microseconds, fewer is better):
- CentOS Stream 9: 16614.4 (SE +/- 5506.54, N = 12; Min: 5327.65 / Avg: 16614.41 / Max: 68861)
- Clear Linux 36990: 6244.19 (SE +/- 370.01, N = 12; Min: 5465.21 / Avg: 6244.19 / Max: 10090.6)
- Ubuntu 20.04.1 LTS: 5617.26 (SE +/- 58.03, N = 15; Min: 5285.06 / Avg: 5617.26 / Max: 6097.49)

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported, as is HIP for AMD Radeon GPUs. Learn more via the OpenBenchmarking.org test page.
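
For reference, an equivalent CPU-only Cycles render can be driven from Blender's command line roughly as sketched below; the .blend file name and frame number are illustrative placeholders rather than the exact files shipped with this test profile:

  # Render frame 1 of a scene in background mode, forcing Cycles onto the CPU
  blender -b barbershop_interior.blend -f 1 -- --cycles-device CPU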

Blender 3.2, Blend File: Barbershop - Compute: CPU-Only (Seconds, fewer is better):
- CentOS Stream 9: 257.15 (SE +/- 0.55, N = 3; Min: 256.22 / Avg: 257.15 / Max: 258.13)
- Clear Linux 36990: 253.27 (SE +/- 0.37, N = 3; Min: 252.82 / Avg: 253.27 / Max: 254.01)
- Ubuntu 20.04.1 LTS: 262.65 (SE +/- 0.19, N = 3; Min: 262.27 / Avg: 262.65 / Max: 262.9)

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11, Model: yolov4 - Device: CPU - Executor: Standard (Inferences Per Minute, more is better):
- CentOS Stream 9: 694 (SE +/- 1.17, N = 3; Min: 692 / Avg: 694.17 / Max: 696) [-flto]
- Clear Linux 36990: 636 (SE +/- 12.00, N = 12; Min: 604.5 / Avg: 635.67 / Max: 699) [-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -flto=auto]
- Ubuntu 20.04.1 LTS: 671 (SE +/- 7.60, N = 4; Min: 661 / Avg: 670.75 / Max: 693) [-flto]
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -fno-fat-lto-objects -ldl -lrt

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10, Benchmark: particle_volume/ao/real_time (Items Per Second, more is better):
- CentOS Stream 9: 24.35 (SE +/- 0.07, N = 3; Min: 24.26 / Avg: 24.35 / Max: 24.49)
- Clear Linux 36990: 24.59 (SE +/- 0.14, N = 3; Min: 24.32 / Avg: 24.59 / Max: 24.78)
- Ubuntu 20.04.1 LTS: 24.85 (SE +/- 0.06, N = 3; Min: 24.78 / Avg: 24.85 / Max: 24.98)

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.
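
By way of illustration, a hand-run LAMMPS benchmark of this kind is typically launched under MPI with an input deck, as sketched below; the binary name, rank count, and input file are assumptions and not the exact invocation used by this test profile:

  # Run the classic Lennard-Jones benchmark input across 80 MPI ranks
  mpirun -np 80 lmp -in in.lj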

LAMMPS Molecular Dynamics Simulator 23Jun2022, Model: 20k Atoms (ns/day, more is better):
- CentOS Stream 9: 35.12 (SE +/- 0.05, N = 3; Min: 35.04 / Avg: 35.12 / Max: 35.21)
- Clear Linux 36990: 35.05 (SE +/- 0.06, N = 3; Min: 34.94 / Avg: 35.05 / Max: 35.13) [-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop]
- Ubuntu 20.04.1 LTS: 34.82 (SE +/- 0.04, N = 3; Min: 34.73 / Avg: 34.82 / Max: 34.87)
1. (CXX) g++ options: -O3 -lm -ldl

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: super-resolution-10 - Device: CPU - Executor: Standard (Inferences Per Minute, More Is Better)
  CentOS Stream 9:    12260 (SE +/- 43.63, N = 3; Min 12190.5 / Max 12340.5)
  Clear Linux 36990:  10151 (SE +/- 690.46, N = 12; Min 6882 / Max 11950.5)
  Ubuntu 20.04.1 LTS: 11619 (SE +/- 135.56, N = 3; Min 11357.5 / Max 11812)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -fno-fat-lto-objects -ldl -lrt

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.
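
As a rough sketch of what an average-inference-time measurement looks like with the TensorFlow Lite Python interpreter (the .tflite file name is a placeholder; this is not the benchmark binary the test profile uses):

    # Time repeated invocations of a TensorFlow Lite model on the CPU.
    import time
    import numpy as np
    import tensorflow as tf

    interpreter = tf.lite.Interpreter(model_path="mobilenet_v1_1.0_224.tflite")
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    dummy = np.zeros(inp["shape"], dtype=inp["dtype"])

    runs = 50
    start = time.perf_counter()
    for _ in range(runs):
        interpreter.set_tensor(inp["index"], dummy)
        interpreter.invoke()
        _ = interpreter.get_tensor(out["index"])
    elapsed = time.perf_counter() - start
    print(f"average inference time: {elapsed / runs * 1e6:.1f} microseconds")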

TensorFlow Lite 2022-05-18 - Model: Inception ResNet V2 (Microseconds, Fewer Is Better)
  CentOS Stream 9:    47297.8 (SE +/- 268.62, N = 3; Min 46787.4 / Max 47698.2)
  Clear Linux 36990:  49331.9 (SE +/- 583.51, N = 15; Min 47180.5 / Max 54041.2)
  Ubuntu 20.04.1 LTS: 54111.0 (SE +/- 1518.21, N = 15; Min 47734.3 / Max 69012.2)

Apache HTTP Server

This is a test of the Apache HTTPD web server. This benchmark test profile makes use of the Golang "Bombardier" program to facilitate HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.
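
The sketch below is not Bombardier itself, just a small Python illustration of the same idea: hold a fixed number of concurrent clients against the server for a fixed period and report requests per second (the URL, client count, and duration are placeholders).

    # Fixed-duration, fixed-concurrency HTTP load loop (illustrative only).
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://localhost:80/"   # assumed local web server
    CLIENTS = 100                  # the test profile uses 1000 concurrent clients
    DURATION = 10.0                # seconds

    def client(deadline):
        count = 0
        while time.perf_counter() < deadline:
            with urllib.request.urlopen(URL) as resp:
                resp.read()
            count += 1
        return count

    deadline = time.perf_counter() + DURATION
    with ThreadPoolExecutor(max_workers=CLIENTS) as pool:
        totals = list(pool.map(client, [deadline] * CLIENTS))
    print(f"{sum(totals) / DURATION:.0f} requests/second")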

Apache HTTP Server 2.4.48 - Concurrent Requests: 1000 (Requests Per Second, More Is Better)
  CentOS Stream 9:    131349.60 (SE +/- 1558.40, N = 15; Min 113593.7 / Max 135605.05)
  Clear Linux 36990:  118161.83 (SE +/- 760.56, N = 3; Min 116952.09 / Max 119565.29)
  Ubuntu 20.04.1 LTS: 137078.35 (SE +/- 1575.81, N = 3; Min 134278.43 / Max 139731.27)
  1. (CC) gcc options: -shared -fPIC

Renaissance

Renaissance is a suite of benchmarks designed to test the JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: In-Memory Database Shootout (ms, Fewer Is Better)
  CentOS Stream 9:    17787.2 (SE +/- 197.42, N = 3; Min 17444.33 / Max 18128.2; run MIN 17444.33 / MAX 21383.13)
  Ubuntu 20.04.1 LTS: 18477.3 (SE +/- 82.13, N = 3; Min 18342.05 / Max 18625.65; run MIN 18342.05 / MAX 21649.71)

Test: In-Memory Database Shootout

Clear Linux 36990: The test run did not produce a result.

C-Blosc

C-Blosc (c-blosc2) is a simple, compressed, fast, and persistent data store library for C. Learn more via the OpenBenchmarking.org test page.
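
This test exercises the blosclz codec with a byte-shuffle pre-filter (a bitshuffle variant appears later in this result file); a minimal Python illustration of those filters through the python-blosc bindings, rather than the test profile's own C harness, might look like this:

    # Compare shuffle vs. bitshuffle pre-filters with the blosclz codec.
    import numpy as np
    import blosc

    data = np.arange(10_000_000, dtype=np.int64).tobytes()

    for name, flt in (("shuffle", blosc.SHUFFLE), ("bitshuffle", blosc.BITSHUFFLE)):
        packed = blosc.compress(data, typesize=8, clevel=9,
                                shuffle=flt, cname="blosclz")
        assert blosc.decompress(packed) == data
        print(f"blosclz {name}: {len(data) / len(packed):.1f}x compression ratio")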

C-Blosc 2.3 - Test: blosclz shuffle (MB/s, More Is Better)
  CentOS Stream 9:    4916.7 (SE +/- 23.45, N = 3; Min 4886.1 / Max 4962.8)
  Clear Linux 36990:  17905.8 (SE +/- 153.72, N = 15; Min 17321.5 / Max 19245.5)
  Ubuntu 20.04.1 LTS: 4279.5 (SE +/- 18.38, N = 3; Min 4249.1 / Max 4312.6)
  1. (CC) gcc options: -std=gnu99 -O3 -lrt -lm

memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic generation and benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.
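
To make the workload shape concrete, here is a toy 1:10 SET-to-GET mix against a local Redis instance using redis-py; memtier_benchmark generates this kind of traffic at far higher concurrency, so this is only an illustration.

    # Toy 1:10 SET:GET traffic mix against Redis (illustrative; not memtier_benchmark).
    import random
    import redis

    r = redis.Redis(host="localhost", port=6379)
    ops = 10_000
    for i in range(ops):
        key = f"memtier-{random.randrange(1000)}"
        if i % 11 == 0:            # roughly one SET for every ten GETs
            r.set(key, "x" * 32)
        else:
            r.get(key)
    print("issued", ops, "operations")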

memtier_benchmark 1.4 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:10 (Ops/sec, More Is Better)
  CentOS Stream 9:    1398073.70 (SE +/- 67672.39, N = 12; Min 1126743.33 / Max 1952770.48)
  Clear Linux 36990:  1994991.48 (SE +/- 83568.62, N = 13; Min 1190376.76 / Max 2288695.61)
  Ubuntu 20.04.1 LTS: 1755483.28 (SE +/- 17388.30, N = 3; Min 1732597.08 / Max 1789602.93)
  1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18 - Model: Mobilenet Float (Microseconds, Fewer Is Better)
  CentOS Stream 9:    4240.41 (SE +/- 499.70, N = 12; Min 3338.31 / Max 9146.21)
  Clear Linux 36990:  3701.27 (SE +/- 75.96, N = 15; Min 3362.9 / Max 4201.56)
  Ubuntu 20.04.1 LTS: 3340.60 (SE +/- 20.34, N = 3; Min 3303.43 / Max 3373.51)

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.
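
A rough sketch of the OpenVINO Python inference flow behind such throughput (FPS) figures is shown below; the IR file name is a placeholder, and the API calls reflect my understanding of the 2022-era openvino.runtime module rather than the benchmarking tool the test profile drives.

    # Load an IR model, compile it for CPU, and time repeated inferences.
    import time
    import numpy as np
    from openvino.runtime import Core

    core = Core()
    model = core.read_model("person-vehicle-bike-detection.xml")  # placeholder IR file
    compiled = core.compile_model(model, "CPU")
    request = compiled.create_infer_request()

    shape = list(compiled.input(0).shape)
    dummy = np.zeros(shape, dtype=np.float32)

    runs = 200
    start = time.perf_counter()
    for _ in range(runs):
        request.infer({0: dummy})
    print(f"{runs / (time.perf_counter() - start):.1f} FPS")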

OpenVINO 2022.2.dev - Model: Person Vehicle Bike Detection FP16 - Device: CPU (ms, Fewer Is Better)
  CentOS Stream 9:    13.60 (SE +/- 0.30, N = 15; Min 10.69 / Max 14.26; run MIN 8.57 / MAX 68.28)
  Ubuntu 20.04.1 LTS: 18.31 (SE +/- 0.01, N = 3; Min 18.29 / Max 18.33; run MIN 10.08 / MAX 179.25)

OpenVINO 2022.2.dev - Model: Person Vehicle Bike Detection FP16 - Device: CPU (FPS, More Is Better)
  CentOS Stream 9:    1478.64 (SE +/- 39.85, N = 15; Min 1398 / Max 1868.26)
  Ubuntu 20.04.1 LTS: 2178.70 (SE +/- 1.52, N = 3; Min 2176.32 / Max 2181.53)

  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38 - Operation: HWB Color Space (Iterations Per Minute, More Is Better)
  CentOS Stream 9:    1138 (SE +/- 28.49, N = 12; Min 938 / Max 1302)
  Clear Linux 36990:  1737 (SE +/- 4.16, N = 3; Min 1729 / Max 1743)
  Ubuntu 20.04.1 LTS: 636 (SE +/- 6.23, N = 15; Min 608 / Max 685)
  1. (CC) gcc options: -fopenmp -ljpeg -lXext -lX11 -lz -lm -lpthread

C-Blosc

C-Blosc (c-blosc2) is a simple, compressed, fast, and persistent data store library for C. Learn more via the OpenBenchmarking.org test page.

C-Blosc 2.3 - Test: blosclz bitshuffle (MB/s, More Is Better)
  CentOS Stream 9:    3704.1 (SE +/- 6.11, N = 3; Min 3692.6 / Max 3713.4)
  Clear Linux 36990:  12603.1 (SE +/- 20.65, N = 3; Min 12564.5 / Max 12635.1)
  Ubuntu 20.04.1 LTS: 3193.0 (SE +/- 7.59, N = 3; Min 3181.3 / Max 3207.2)
  1. (CC) gcc options: -std=gnu99 -O3 -lrt -lm

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient benchmark, a scientific benchmark from Sandia National Laboratories focused on supercomputer testing with more modern, real-world workloads than HPCC. Learn more via the OpenBenchmarking.org test page.
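
The kernel HPCG stresses is the conjugate gradient iteration (sparse matrix-vector products plus dot products); a dense NumPy version of that iteration, for illustration only, looks like this:

    # Conjugate gradient solver for a symmetric positive definite system (dense,
    # for clarity; HPCG itself uses a sparse 27-point stencil with multigrid preconditioning).
    import numpy as np

    def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
        x = np.zeros_like(b)
        r = b - A @ x            # residual
        p = r.copy()             # search direction
        rs_old = r @ r
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rs_old / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs_old) * p
            rs_old = rs_new
        return x

    n = 200
    M = np.random.rand(n, n)
    A = M @ M.T + n * np.eye(n)   # small SPD test system
    b = np.random.rand(n)
    x = conjugate_gradient(A, b)
    print("residual norm:", np.linalg.norm(A @ x - b))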

High Performance Conjugate Gradient 3.1 (GFLOP/s, More Is Better)
  CentOS Stream 9:    40.28 (SE +/- 0.08, N = 3; Min 40.14 / Max 40.39)
  Clear Linux 36990:  40.86 (SE +/- 0.05, N = 3; Min 40.8 / Max 40.96)
  Ubuntu 20.04.1 LTS: 40.20 (SE +/- 0.05, N = 3; Min 40.11 / Max 40.29)
  1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 13.0 - Build System: Ninja (Seconds, Fewer Is Better)
  CentOS Stream 9:    135.02 (SE +/- 0.23, N = 3; Min 134.59 / Max 135.39)
  Clear Linux 36990:  267.39 (SE +/- 0.26, N = 3; Min 266.87 / Max 267.76)
  Ubuntu 20.04.1 LTS: 131.70 (SE +/- 0.42, N = 3; Min 131.04 / Max 132.48)

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.10.0 - Speed: Speed 0 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  CentOS Stream 9:    3.04 (SE +/- 0.02, N = 3; Min 3.02 / Max 3.07)
  Clear Linux 36990:  6.15 (SE +/- 0.02, N = 3; Min 6.13 / Max 6.2)
  Ubuntu 20.04.1 LTS: 2.95 (SE +/- 0.03, N = 3; Min 2.89 / Max 2.98)
  1. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -std=gnu++11

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, Fewer Is Better)
  CentOS Stream 9:    48319 (SE +/- 81.93, N = 3; Min 48156 / Max 48415)
  Clear Linux 36990:  48506 (SE +/- 111.95, N = 3; Min 48361 / Max 48726)
  Ubuntu 20.04.1 LTS: 49953 (SE +/- 27.95, N = 3; Min 49901 / Max 49997)
  1. (CXX) g++ options: -O3 -ldl

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18 - Model: Mobilenet Quant (Microseconds, Fewer Is Better)
  CentOS Stream 9:    9540.86 (SE +/- 100.45, N = 3; Min 9372.16 / Max 9719.7)
  Clear Linux 36990:  8576.95 (SE +/- 81.76, N = 6; Min 8277.81 / Max 8873.84)
  Ubuntu 20.04.1 LTS: 10747.97 (SE +/- 337.08, N = 15; Min 9333.34 / Max 12938.7)

ClickHouse

ClickHouse is an open-source, high performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ with the 100 million rows web analytics dataset. The reported value is the query processing time using the geometric mean of all queries performed. Learn more via the OpenBenchmarking.org test page.
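
Since the reported value is a geometric mean of per-query performance expressed as queries per minute, the arithmetic behind such a figure can be sketched as follows (the per-query times here are made up for illustration):

    # Derive a "queries per minute, geometric mean" style figure from query times.
    import math

    query_seconds = [0.12, 0.35, 0.08, 1.40, 0.27]   # hypothetical per-query runtimes
    geo_mean = math.exp(sum(math.log(t) for t in query_seconds) / len(query_seconds))
    print(f"geometric mean: {geo_mean:.3f} s  ->  {60 / geo_mean:.1f} queries/minute")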

ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, Third Run (Queries Per Minute, Geo Mean, More Is Better)
  CentOS Stream 9:    243.95 (SE +/- 1.95, N = 15; Min 233.04 / Max 257.32; run MIN 42.11 / MAX 6000)
  Clear Linux 36990:  400.32 (SE +/- 5.35, N = 12; Min 374.42 / Max 434.04; run MIN 54.25 / MAX 20000)
  Ubuntu 20.04.1 LTS: 225.18 (SE +/- 2.99, N = 3; Min 219.64 / Max 229.89; run MIN 37.43 / MAX 5454.55)

ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, Second Run (Queries Per Minute, Geo Mean, More Is Better)
  CentOS Stream 9:    244.38 (SE +/- 1.48, N = 15; Min 236.34 / Max 255.26; run MIN 44.09 / MAX 5454.55)
  Clear Linux 36990:  400.44 (SE +/- 5.40, N = 12; Min 375.33 / Max 433.41; run MIN 53.29 / MAX 20000)
  Ubuntu 20.04.1 LTS: 223.34 (SE +/- 5.12, N = 3; Min 216.18 / Max 233.25; run MIN 43.48 / MAX 5454.55)

ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, First Run / Cold Cache (Queries Per Minute, Geo Mean, More Is Better)
  CentOS Stream 9:    231.48 (SE +/- 2.21, N = 15; Min 212.68 / Max 244.77; run MIN 41.47 / MAX 5454.55)
  Clear Linux 36990:  386.79 (SE +/- 5.14, N = 12; Min 363.65 / Max 416.95; run MIN 51.15 / MAX 20000)
  Ubuntu 20.04.1 LTS: 214.26 (SE +/- 2.32, N = 3; Min 211.04 / Max 218.77; run MIN 36.76 / MAX 2727.27)

  1. ClickHouse server version 22.5.4.19 (official build).

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: Atomic (Bogo Ops/s, More Is Better)
  CentOS Stream 9:    187775.77 (SE +/- 3961.98, N = 15; Min 164376.47 / Max 204483.19)
  Clear Linux 36990:  145035.86 (SE +/- 3304.16, N = 15; Min 127421.65 / Max 159671.65)
  Ubuntu 20.04.1 LTS: 183431.66 (SE +/- 3618.17, N = 15; Min 159582.8 / Max 201202.47)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16 - Device: CPU (ms, Fewer Is Better)
  CentOS Stream 9:    18.67 (SE +/- 0.29, N = 12; Min 18.27 / Max 21.83; run MIN 11.54 / MAX 79.43)
  Ubuntu 20.04.1 LTS: 38.04 (SE +/- 0.02, N = 3; Min 38.01 / Max 38.09; run MIN 17.97 / MAX 246.5)

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16 - Device: CPU (FPS, More Is Better)
  CentOS Stream 9:    1071.70 (SE +/- 14.44, N = 12; Min 914.28 / Max 1092.92)
  Ubuntu 20.04.1 LTS: 1049.56 (SE +/- 0.62, N = 3; Min 1048.38 / Max 1050.47)

  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: Futex (Bogo Ops/s, More Is Better)
  CentOS Stream 9:    1088788.92 (SE +/- 73263.26, N = 15; Min 696500.15 / Max 1490893.83)
  Clear Linux 36990:  1140633.72 (SE +/- 60890.67, N = 15; Min 846766.74 / Max 1364896.25)
  Ubuntu 20.04.1 LTS: 942679.72 (SE +/- 59760.04, N = 15; Min 619867.04 / Max 1247331.68)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38 - Operation: Resizing (Iterations Per Minute, More Is Better)
  CentOS Stream 9:    2748 (SE +/- 27.10, N = 3; Min 2717 / Max 2802)
  Clear Linux 36990:  2851 (SE +/- 35.23, N = 4; Min 2753 / Max 2913)
  Ubuntu 20.04.1 LTS: 417 (SE +/- 9.26, N = 15; Min 351 / Max 466)
  1. (CC) gcc options: -fopenmp -ljpeg -lXext -lX11 -lz -lm -lpthread

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10 - Benchmark: gravity_spheres_volume/dim_512/scivis/real_time (Items Per Second, More Is Better)
  CentOS Stream 9:    22.02 (SE +/- 0.15, N = 3; Min 21.75 / Max 22.26)
  Clear Linux 36990:  22.22 (SE +/- 0.07, N = 3; Min 22.1 / Max 22.34)
  Ubuntu 20.04.1 LTS: 20.90 (SE +/- 0.02, N = 3; Min 20.86 / Max 20.93)

OSPRay 2.10 - Benchmark: gravity_spheres_volume/dim_512/ao/real_time (Items Per Second, More Is Better)
  CentOS Stream 9:    22.42 (SE +/- 0.06, N = 3; Min 22.3 / Max 22.5)
  Clear Linux 36990:  22.52 (SE +/- 0.05, N = 3; Min 22.47 / Max 22.62)
  Ubuntu 20.04.1 LTS: 20.99 (SE +/- 0.02, N = 3; Min 20.95 / Max 21.02)

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.
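
pgbench is a closed-loop benchmark, so the average latency and TPS results below are tied together by the client count (latency is approximately clients divided by TPS); a quick check against the 250-client CentOS Stream 9 figures:

    # Closed-loop sanity check: average latency ~= clients / TPS.
    clients = 250
    tps = 20745                      # CentOS Stream 9 read-write result below
    latency_ms = clients / tps * 1000
    print(f"{latency_ms:.2f} ms")    # ~12.05 ms, matching the reported average latency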

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 250 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
  CentOS Stream 9:    12.051 (SE +/- 0.012, N = 3; Min 12.03 / Max 12.07)
  Clear Linux 36990:  2.861 (SE +/- 0.005, N = 3; Min 2.85 / Max 2.87)
  Ubuntu 20.04.1 LTS: 7.321 (SE +/- 0.021, N = 3; Min 7.28 / Max 7.35)

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 250 - Mode: Read Write (TPS, More Is Better)
  CentOS Stream 9:    20745 (SE +/- 21.44, N = 3; Min 20713.98 / Max 20786.09)
  Clear Linux 36990:  87357 (SE +/- 149.93, N = 3; Min 87081.55 / Max 87597.32)
  Ubuntu 20.04.1 LTS: 34151 (SE +/- 96.97, N = 3; Min 34011.12 / Max 34337.26)

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 500 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
  CentOS Stream 9:    26.724 (SE +/- 0.047, N = 3; Min 26.66 / Max 26.82)
  Clear Linux 36990:  5.828 (SE +/- 0.008, N = 3; Min 5.81 / Max 5.84)
  Ubuntu 20.04.1 LTS: 16.881 (SE +/- 0.005, N = 3; Min 16.88 / Max 16.89)

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 500 - Mode: Read Write (TPS, More Is Better)
  CentOS Stream 9:    18710 (SE +/- 32.56, N = 3; Min 18646.54 / Max 18754.52)
  Clear Linux 36990:  85792 (SE +/- 119.89, N = 3; Min 85631.08 / Max 86026.08)
  Ubuntu 20.04.1 LTS: 29620 (SE +/- 8.12, N = 3; Min 29603.65 / Max 29630.45)

  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -lpgcommon -lpgport -lpq -lm

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38 - Operation: Rotate (Iterations Per Minute, More Is Better)
  CentOS Stream 9:    1030 (SE +/- 7.69, N = 15; Min 948 / Max 1062)
  Clear Linux 36990:  1029 (SE +/- 10.17, N = 3; Min 1009 / Max 1043)
  Ubuntu 20.04.1 LTS: 759 (SE +/- 4.18, N = 3; Min 754 / Max 767)
  1. (CC) gcc options: -fopenmp -ljpeg -lXext -lX11 -lz -lm -lpthread

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 2 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, Fewer Is Better)
  CentOS Stream 9:    40852 (SE +/- 38.89, N = 3; Min 40775 / Max 40900)
  Clear Linux 36990:  40932 (SE +/- 84.10, N = 3; Min 40775 / Max 41063)
  Ubuntu 20.04.1 LTS: 42476 (SE +/- 93.93, N = 3; Min 42309 / Max 42634)

OSPRay Studio 0.11 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, Fewer Is Better)
  CentOS Stream 9:    40580 (SE +/- 74.23, N = 3; Min 40445 / Max 40701)
  Clear Linux 36990:  40503 (SE +/- 180.17, N = 3; Min 40214 / Max 40834)
  Ubuntu 20.04.1 LTS: 41973 (SE +/- 56.40, N = 3; Min 41872 / Max 42067)

  1. (CXX) g++ options: -O3 -ldl

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10 - Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time (Items Per Second, More Is Better)
  CentOS Stream 9:    25.59 (SE +/- 0.04, N = 3; Min 25.52 / Max 25.63)
  Clear Linux 36990:  25.65 (SE +/- 0.03, N = 3; Min 25.58 / Max 25.69)
  Ubuntu 20.04.1 LTS: 25.29 (SE +/- 0.03, N = 3; Min 25.23 / Max 25.34)

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel (Inferences Per Minute, More Is Better)
  CentOS Stream 9:    236 (SE +/- 0.17, N = 3; Min 235.5 / Max 236)
  Clear Linux 36990:  241 (SE +/- 0.17, N = 3; Min 240.5 / Max 241)
  Ubuntu 20.04.1 LTS: 233 (SE +/- 0.33, N = 3; Min 232.5 / Max 233.5)

ONNX Runtime 1.11 - Model: GPT-2 - Device: CPU - Executor: Parallel (Inferences Per Minute, More Is Better)
  CentOS Stream 9:    5269 (SE +/- 32.87, N = 3; Min 5205.5 / Max 5316)
  Clear Linux 36990:  5259 (SE +/- 26.07, N = 3; Min 5222 / Max 5309)
  Ubuntu 20.04.1 LTS: 5305 (SE +/- 17.61, N = 3; Min 5274.5 / Max 5335.5)

ONNX Runtime 1.11 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel (Inferences Per Minute, More Is Better)
  CentOS Stream 9:    1693 (SE +/- 3.09, N = 3; Min 1687 / Max 1696.5)
  Clear Linux 36990:  1978 (SE +/- 3.21, N = 3; Min 1972 / Max 1983)
  Ubuntu 20.04.1 LTS: 1682 (SE +/- 12.00, N = 3; Min 1658 / Max 1694)

ONNX Runtime 1.11 - Model: bertsquad-12 - Device: CPU - Executor: Parallel (Inferences Per Minute, More Is Better)
  CentOS Stream 9:    799 (SE +/- 2.02, N = 3; Min 796 / Max 802.5)
  Clear Linux 36990:  828 (SE +/- 0.29, N = 3; Min 827 / Max 828)
  Ubuntu 20.04.1 LTS: 834 (SE +/- 1.92, N = 3; Min 830 / Max 836)

ONNX Runtime 1.11 - Model: yolov4 - Device: CPU - Executor: Parallel (Inferences Per Minute, More Is Better)
  CentOS Stream 9:    630 (SE +/- 1.04, N = 3; Min 628 / Max 631.5)
  Clear Linux 36990:  640 (SE +/- 0.29, N = 3; Min 639.5 / Max 640.5)
  Ubuntu 20.04.1 LTS: 644 (SE +/- 2.08, N = 3; Min 640.5 / Max 647.5)

ONNX Runtime 1.11 - Model: bertsquad-12 - Device: CPU - Executor: Standard (Inferences Per Minute, More Is Better)
  CentOS Stream 9:    1093 (SE +/- 0.50, N = 3; Min 1091.5 / Max 1093)
  Clear Linux 36990:  992 (SE +/- 5.77, N = 3; Min 983.5 / Max 1003)
  Ubuntu 20.04.1 LTS: 977 (SE +/- 10.38, N = 3; Min 962.5 / Max 997)

  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -fno-fat-lto-objects -ldl -lrt

Natron

Natron is an open-source, cross-platform compositing software for visual effects (VFX) and motion graphics. Learn more via the OpenBenchmarking.org test page.

Natron 2.4.3 - Input: Spaceship (FPS, More Is Better)
  CentOS Stream 9:    1.9 (SE +/- 0.01, N = 15; Min 1.8 / Max 2)
  Clear Linux 36990:  5.0 (SE +/- 0.03, N = 3; Min 5 / Max 5.1)
  Ubuntu 20.04.1 LTS: 1.6 (SE +/- 0.00, N = 3; Min 1.6 / Max 1.6)

Zstd Compression

This test measures the time needed to compress/decompress a sample input file using Zstd compression supplied by the system or otherwise provided externally to the test profile. Learn more via the OpenBenchmarking.org test page.
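
The compression level is the main knob in these results; a minimal illustration of the level trade-off through the Python zstandard bindings (the test itself drives the zstd command-line tool on its own sample file) is:

    # Compression ratio vs. level with the zstandard bindings (illustrative input data).
    import os
    import zstandard as zstd

    data = os.urandom(1_000_000) + b"A" * 9_000_000   # partly compressible sample

    for level in (3, 8, 19):
        blob = zstd.ZstdCompressor(level=level).compress(data)
        assert zstd.ZstdDecompressor().decompress(blob) == data
        print(f"level {level}: {len(data) / len(blob):.2f}x ratio")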

Zstd Compression - Compression Level: 8 - Decompression Speed (MB/s, More Is Better)
  CentOS Stream 9:    3017.5 (SE +/- 6.19, N = 12; Min 2973.2 / Max 3043.8)
  Clear Linux 36990:  3007.5 (SE +/- 4.36, N = 15; Min 2978 / Max 3033.9)
  Ubuntu 20.04.1 LTS: 2788.9 (SE +/- 2.78, N = 3; Min 2784.6 / Max 2794.1)

Zstd Compression - Compression Level: 8 - Compression Speed (MB/s, More Is Better)
  CentOS Stream 9:    1244.0 (SE +/- 18.11, N = 12; Min 1141.3 / Max 1347.8)
  Clear Linux 36990:  1621.3 (SE +/- 15.83, N = 15; Min 1521.9 / Max 1702.3)
  Ubuntu 20.04.1 LTS: 1638.3 (SE +/- 17.15, N = 3; Min 1613.4 / Max 1671.2)

  1. CentOS Stream 9: *** zstd command line interface 64-bits v1.5.1, by Yann Collet ***
  2. Clear Linux 36990: *** zstd command line interface 64-bits v1.5.2, by Yann Collet ***
  3. Ubuntu 20.04.1 LTS: *** zstd command line interface 64-bits v1.4.8, by Yann Collet ***

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: super-resolution-10 - Device: CPU - Executor: Parallel (Inferences Per Minute, More Is Better)
  CentOS Stream 9:    3259 (SE +/- 4.91, N = 3; Min 3250.5 / Max 3267.5)
  Clear Linux 36990:  8024 (SE +/- 8.23, N = 3; Min 8009.5 / Max 8038)
  Ubuntu 20.04.1 LTS: 3133 (SE +/- 10.50, N = 3; Min 3116 / Max 3152)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -fno-fat-lto-objects -ldl -lrt

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38 - Operation: Noise-Gaussian (Iterations Per Minute, More Is Better)
  CentOS Stream 9:    738 (SE +/- 0.88, N = 3; Min 736 / Max 739)
  Clear Linux 36990:  994 (SE +/- 3.93, N = 3; Min 986 / Max 999)
  Ubuntu 20.04.1 LTS: 513 (SE +/- 7.66, N = 12; Min 451 / Max 557)
  1. (CC) gcc options: -fopenmp -ljpeg -lXext -lX11 -lz -lm -lpthread

Renaissance

Renaissance is a suite of benchmarks designed to test the JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Apache Spark Bayes (ms, Fewer Is Better)
  CentOS Stream 9:    1075.3 (SE +/- 11.23, N = 3; Min 1052.97 / Max 1088.85; run MIN 628.33 / MAX 1551.11)
  Clear Linux 36990:  479.0 (SE +/- 1.10, N = 3; Min 477.72 / Max 481.21; run MIN 329.34 / MAX 719.74)
  Ubuntu 20.04.1 LTS: 1132.3 (SE +/- 14.06, N = 15; Min 1046.53 / Max 1266.11; run MIN 559.12 / MAX 2166.29)

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, Fewer Is Better)
  CentOS Stream 9:    20152 (SE +/- 58.89, N = 3; Min 20050 / Max 20254)
  Clear Linux 36990:  20180 (SE +/- 13.09, N = 3; Min 20163 / Max 20206)
  Ubuntu 20.04.1 LTS: 20947 (SE +/- 70.51, N = 3; Min 20806 / Max 21019)
  1. (CXX) g++ options: -O3 -ldl

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

Redis 7.0.4 - Test: SET - Parallel Connections: 500 (Requests Per Second, More Is Better)
  CentOS Stream 9:    1931278.62 (SE +/- 47157.16, N = 12; Min 1439413.25 / Max 2022608.12)
  Clear Linux 36990:  2083152.81 (SE +/- 26407.67, N = 15; Min 1853257.88 / Max 2209886.75)
  Ubuntu 20.04.1 LTS: 1835435.10 (SE +/- 20191.86, N = 5; Min 1781927.88 / Max 1880935)
  1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, Fewer Is Better)
  CentOS Stream 9:    23967 (SE +/- 79.25, N = 3; Min 23809 / Max 24057)
  Clear Linux 36990:  24102 (SE +/- 16.17, N = 3; Min 24070 / Max 24119)
  Ubuntu 20.04.1 LTS: 24929 (SE +/- 39.89, N = 3; Min 24870 / Max 25005)
  1. (CXX) g++ options: -O3 -ldl

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: Socket Activity (Bogo Ops/s, More Is Better)
  CentOS Stream 9:    2460.37 (SE +/- 900.65, N = 15; Min 6 / Max 8954.24)
  Clear Linux 36990:  36452.69 (SE +/- 324.72, N = 3; Min 35834.48 / Max 36934.11)
  Ubuntu 20.04.1 LTS: 45595.60 (SE +/- 570.10, N = 15; Min 41828.77 / Max 48296.21)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Zstd Compression

This test measures the time needed to compress/decompress a sample input file using Zstd compression supplied by the system or otherwise provided externally to the test profile. Learn more via the OpenBenchmarking.org test page.

Zstd Compression - Compression Level: 19 - Decompression Speed (MB/s, More Is Better)
  CentOS Stream 9:    2571.3 (SE +/- 6.30, N = 3; Min 2559.2 / Max 2580.4)
  Clear Linux 36990:  2522.5 (SE +/- 2.24, N = 3; Min 2518 / Max 2525)
  Ubuntu 20.04.1 LTS: 2498.1 (SE +/- 5.28, N = 15; Min 2441.3 / Max 2511.4)

Zstd Compression - Compression Level: 19 - Compression Speed (MB/s, More Is Better)
  CentOS Stream 9:    86.6 (SE +/- 0.52, N = 3; Min 85.9 / Max 87.6)
  Clear Linux 36990:  91.5 (SE +/- 0.48, N = 3; Min 91 / Max 92.5)
  Ubuntu 20.04.1 LTS: 74.0 (SE +/- 0.65, N = 15; Min 68.8 / Max 77.3)

  1. CentOS Stream 9: *** zstd command line interface 64-bits v1.5.1, by Yann Collet ***
  2. Clear Linux 36990: *** zstd command line interface 64-bits v1.5.2, by Yann Collet ***
  3. Ubuntu 20.04.1 LTS: *** zstd command line interface 64-bits v1.4.8, by Yann Collet ***

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 4 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  CentOS Stream 9:    1.327 (SE +/- 0.001, N = 3; Min 1.33 / Max 1.33)
  Clear Linux 36990:  2.351 (SE +/- 0.003, N = 3; Min 2.35 / Max 2.36)
  Ubuntu 20.04.1 LTS: 1.292 (SE +/- 0.008, N = 3; Min 1.28 / Max 1.31)
  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 2 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, Fewer Is Better)
  CentOS Stream 9:    20261 (SE +/- 49.21, N = 3; Min 20164 / Max 20325)
  Clear Linux 36990:  20404 (SE +/- 50.10, N = 3; Min 20325 / Max 20497)
  Ubuntu 20.04.1 LTS: 21165 (SE +/- 40.18, N = 3; Min 21098 / Max 21237)
  1. (CXX) g++ options: -O3 -ldl

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.
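
For context on what the Tags/Points Per Series parameters shape, the snippet below writes one tagged point with the InfluxDB 1.x Python client; the measurement, tag, and field names are placeholders, and the actual load is generated by InfluxDB Inch, not this client.

    # Write a single tagged point to InfluxDB 1.x (illustrative names only).
    from influxdb import InfluxDBClient

    client = InfluxDBClient(host="localhost", port=8086, database="benchmark")
    client.create_database("benchmark")
    client.write_points([{
        "measurement": "m0",
        "tags": {"tag0": "value0", "tag1": "value42"},   # tag cardinality drives series count
        "fields": {"v0": 1.0},
    }])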

InfluxDB 1.8.2 - Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, More Is Better)
  CentOS Stream 9:    666008.7 (SE +/- 2481.53, N = 3; Min 661392.3 / Max 669895.1)
  Ubuntu 20.04.1 LTS: 668433.3 (SE +/- 4260.37, N = 3; Min 662079 / Max 676526.7)

Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000

Clear Linux 36990: The test quit with a non-zero exit status.

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web server. This benchmark test profile makes use of the Golang "Bombardier" program to facilitate HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.

nginx 1.21.1 - Concurrent Requests: 1000 (Requests Per Second, More Is Better)
  CentOS Stream 9:    200945.49 (SE +/- 1519.57, N = 3; Min 197912.72 / Max 202632.18)
  Clear Linux 36990:  215488.10 (SE +/- 255.31, N = 3; Min 215069.04 / Max 215950.31)
  Ubuntu 20.04.1 LTS: 210852.37 (SE +/- 2419.36, N = 4; Min 203685.74 / Max 214308.21)
  1. (CC) gcc options: -lcrypt -lz -O3 -march=native

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested or alternatively an allmodconfig for building all possible kernel modules for the build. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.18 - Build: defconfig (Seconds, Fewer Is Better)
  CentOS Stream 9:    29.67 (SE +/- 0.39, N = 13; Min 28.98 / Max 34.27)
  Ubuntu 20.04.1 LTS: 27.81 (SE +/- 0.24, N = 15; Min 27.27 / Max 30.9)

Build: defconfig

Clear Linux 36990: The test quit with a non-zero exit status. E: linux-5.18/tools/objtool/include/objtool/elf.h:10:10: fatal error: gelf.h: No such file or directory

Zstd Compression

This test measures the time needed to compress/decompress a sample input file using Zstd compression supplied by the system or otherwise provided externally to the test profile. Learn more via the OpenBenchmarking.org test page.

Zstd Compression - Compression Level: 3 - Decompression Speed (MB/s, More Is Better)
  CentOS Stream 9:    3022.9 (SE +/- 0.65, N = 2; Min 3022.2 / Max 3023.5)
  Clear Linux 36990:  2985.2 (SE +/- 1.35, N = 3; Min 2983.3 / Max 2987.8)
  Ubuntu 20.04.1 LTS: 2755.9 (SE +/- 3.69, N = 3; Min 2751.5 / Max 2763.2)

Zstd Compression - Compression Level: 3 - Compression Speed (MB/s, More Is Better)
  CentOS Stream 9:    7026.1 (SE +/- 78.16, N = 3; Min 6873.2 / Max 7130.6)
  Clear Linux 36990:  6807.8 (SE +/- 4.17, N = 3; Min 6803.2 / Max 6816.1)
  Ubuntu 20.04.1 LTS: 5669.9 (SE +/- 86.44, N = 15; Min 4937.4 / Max 6074.1)

  1. CentOS Stream 9: *** zstd command line interface 64-bits v1.5.1, by Yann Collet ***
  2. Clear Linux 36990: *** zstd command line interface 64-bits v1.5.2, by Yann Collet ***
  3. Ubuntu 20.04.1 LTS: *** zstd command line interface 64-bits v1.4.8, by Yann Collet ***

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported, as is HIP for AMD Radeon GPUs. Learn more via the OpenBenchmarking.org test page.

Blender 3.2 - Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds, Fewer Is Better)
  CentOS Stream 9:    82.89 (SE +/- 0.02, N = 3; Min 82.85 / Max 82.91)
  Clear Linux 36990:  81.97 (SE +/- 0.10, N = 3; Min 81.79 / Max 82.13)
  Ubuntu 20.04.1 LTS: 84.44 (SE +/- 0.17, N = 3; Min 84.2 / Max 84.77)

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10 - Encoder Speed: 0 (Seconds, Fewer Is Better)
  CentOS Stream 9:    84.32 (SE +/- 0.66, N = 3; Min 83 / Max 85.05)
  Clear Linux 36990:  78.59 (SE +/- 0.45, N = 3; Min 77.94 / Max 79.45)
  Ubuntu 20.04.1 LTS: 87.17 (SE +/- 0.30, N = 3; Min 86.81 / Max 87.76)
  1. (CXX) g++ options: -O3 -fPIC -lm

Zstd Compression

This test measures the time needed to compress/decompress a sample input file using Zstd compression as supplied by the system or otherwise provided externally to the test profile. Learn more via the OpenBenchmarking.org test page.

Zstd Compression - Compression Level: 3, Long Mode - Compression Speed (MB/s, More Is Better)
    CentOS Stream 9:     281.0  (SE +/- 4.03, N = 3; Min: 273 / Avg: 281.03 / Max: 285.7)
    Clear Linux 36990:   969.1  (SE +/- 7.62, N = 15; Min: 919.4 / Avg: 969.07 / Max: 1029.1)
    Ubuntu 20.04.1 LTS:  102.4  (SE +/- 0.10, N = 3; Min: 102.3 / Avg: 102.4 / Max: 102.6)
    Notes: zstd command line interface 64-bits, by Yann Collet - v1.5.1 (CentOS Stream 9), v1.5.2 (Clear Linux 36990), v1.4.8 (Ubuntu 20.04.1 LTS)

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.
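
For orientation, a single-connection, pipelined GET loop with the redis-py client looks roughly like the sketch below; the test profile itself uses a dedicated load generator with hundreds of parallel connections, so this illustrates the operation being measured rather than reproducing the numbers.

    # Rough, single-connection illustration of GET throughput against a local
    # Redis server using the redis-py client (pip install redis). Key name and
    # request count are arbitrary choices for this sketch.
    import time
    import redis

    r = redis.Redis(host="localhost", port=6379)
    r.set("bench:key", "x" * 64)

    n = 100_000
    pipe = r.pipeline(transaction=False)    # batch requests to cut round trips
    t0 = time.perf_counter()
    for _ in range(n):
        pipe.get("bench:key")
    pipe.execute()
    elapsed = time.perf_counter() - t0
    print(f"{n / elapsed:,.0f} GET requests/sec (pipelined, 1 connection)")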

Redis 7.0.4 - Test: GET - Parallel Connections: 500 (Requests Per Second, More Is Better)
    CentOS Stream 9:     2018201.09  (SE +/- 89203.76, N = 15; Min: 1472476.75 / Avg: 2018201.09 / Max: 2357865.5)
    Clear Linux 36990:   2765192.67  (SE +/- 22655.99, N = 3; Min: 2724096.25 / Avg: 2765192.67 / Max: 2802269.5)
    Ubuntu 20.04.1 LTS:  2174436.56  (SE +/- 26054.27, N = 4; Min: 2110365.75 / Avg: 2174436.56 / Max: 2236155.5)
    Notes: Clear Linux 36990 build flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
    CentOS Stream 9:     5.40603  (SE +/- 0.32475, N = 15; MIN: 3.28; run Min: 3.69 / Avg: 5.41 / Max: 8.91)
    Clear Linux 36990:   4.64086  (SE +/- 0.07618, N = 15; MIN: 3.55; run Min: 4.23 / Avg: 4.64 / Max: 5.24)
    Ubuntu 20.04.1 LTS:  5.80053  (SE +/- 0.71740, N = 15; MIN: 3.08; run Min: 3.36 / Avg: 5.8 / Max: 14.79)
    Notes: Clear Linux 36990 build flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

Node.js V8 Web Tooling Benchmark

Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, such as Babel, TypeScript, and Babylon. This test profile can test the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.

Node.js V8 Web Tooling Benchmark (runs/s, More Is Better)
    CentOS Stream 9:     10.55  (SE +/- 0.06, N = 3; Min: 10.44 / Avg: 10.55 / Max: 10.66)
    Clear Linux 36990:   13.56  (SE +/- 0.03, N = 3; Min: 13.51 / Avg: 13.56 / Max: 13.59)
    Ubuntu 20.04.1 LTS:  10.66  (SE +/- 0.08, N = 3; Min: 10.55 / Avg: 10.66 / Max: 10.81)

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

Redis 7.0.4 - Test: SET - Parallel Connections: 1000 (Requests Per Second, More Is Better)
    CentOS Stream 9:     1847194.12  (SE +/- 55692.46, N = 12; Min: 1444277.75 / Avg: 1847194.12 / Max: 2020238.25)
    Clear Linux 36990:   2078925.13  (SE +/- 21642.05, N = 5; Min: 1993126.88 / Avg: 2078925.13 / Max: 2110900.25)
    Ubuntu 20.04.1 LTS:  1851066.21  (SE +/- 13612.37, N = 3; Min: 1829513.62 / Avg: 1851066.21 / Max: 1876247.5)
    Notes: Clear Linux 36990 build flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: MobileNet v2 (ms, Fewer Is Better)
    CentOS Stream 9:     378.88  (SE +/- 4.68, N = 4; MIN: 371.88 / MAX: 634.44; run Min: 373.95 / Avg: 378.88 / Max: 392.9)
    Clear Linux 36990:   348.82  (SE +/- 0.42, N = 3; MIN: 346.87 / MAX: 355.44; run Min: 348.16 / Avg: 348.81 / Max: 349.61)
    Ubuntu 20.04.1 LTS:  463.98  (SE +/- 9.70, N = 15; MIN: 369.59 / MAX: 652.1; run Min: 386.51 / Avg: 463.98 / Max: 514.43)
    Notes: Clear Linux 36990 build flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
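
The System V message passing result corresponds to stress-ng's msg stressor; a rough equivalent invocation, with worker count and duration chosen arbitrarily here for illustration, is sketched below.

    # Minimal sketch: run the stress-ng System V message passing (msg) stressor
    # for a fixed window and print its bogo-ops metrics. The exact invocation
    # used by the test profile may differ.
    import subprocess

    subprocess.run(
        ["stress-ng",
         "--msg", "0",            # 0 = one msg stressor per online CPU
         "--timeout", "30s",
         "--metrics-brief"],      # print bogo ops/s summary at the end
        check=True,
    )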

Stress-NG 0.14 - Test: System V Message Passing (Bogo Ops/s, More Is Better)
    CentOS Stream 9:     7093379.73  (SE +/- 85352.98, N = 4; Min: 6837619.62 / Avg: 7093379.73 / Max: 7189613.49)
    Clear Linux 36990:   8684030.90  (SE +/- 1671.17, N = 3; Min: 8680900.29 / Avg: 8684030.9 / Max: 8686610.06)
    Ubuntu 20.04.1 LTS:  4420124.12  (SE +/- 182521.68, N = 15; Min: 2906124.63 / Avg: 4420124.12 / Max: 5110222.34)
    Notes: Clear Linux 36990 build flags: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -latomic. Ubuntu 20.04.1 LTS build flags: -lapparmor -latomic. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.
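
The built-in benchmarking support referred to here is OpenVINO's benchmark_app; a minimal way to drive it against an IR model on the CPU device is sketched below, with a hypothetical model path standing in for the FP16 face-detection network.

    # Minimal sketch: drive OpenVINO's built-in benchmark_app against an IR
    # model on the CPU device, which is essentially what the test profile
    # automates. The model path is hypothetical (e.g. an FP16 face-detection IR
    # from the Open Model Zoo); benchmark_app ships with the openvino-dev
    # Python package.
    import subprocess

    subprocess.run(
        ["benchmark_app",
         "-m", "face-detection-fp16.xml",   # hypothetical IR model file
         "-d", "CPU"],                      # run on the CPU plugin
        check=True,
    )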

OpenVINO 2022.2.dev - Model: Face Detection FP16 - Device: CPU (ms, Fewer Is Better)
    CentOS Stream 9:     819.63  (SE +/- 0.67, N = 3; MIN: 519.3 / MAX: 967.18; run Min: 818.48 / Avg: 819.63 / Max: 820.81)
    Ubuntu 20.04.1 LTS:  1684.39  (SE +/- 4.60, N = 3; MIN: 1122.36 / MAX: 2167.69; run Min: 1675.85 / Avg: 1684.39 / Max: 1691.62)
    Notes: (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.2.dev - Model: Face Detection FP16 - Device: CPU (FPS, More Is Better)
    CentOS Stream 9:     24.29  (SE +/- 0.02, N = 3; Min: 24.25 / Avg: 24.29 / Max: 24.32)
    Ubuntu 20.04.1 LTS:  23.52  (SE +/- 0.07, N = 3; Min: 23.41 / Avg: 23.52 / Max: 23.65)
    Notes: (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl