New Tests

2 x Intel Xeon Platinum 8380 testing with an Intel M50CYP2SB2U motherboard (SE5C6200.86B.0022.D08.2103221623 BIOS) and ASPEED graphics on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2209031-NE-2209025NE82
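The comparison command above can be wrapped in a small guard so it degrades gracefully when the Phoronix Test Suite is not installed. A minimal sketch; the result file ID is the one quoted on this page, and the actual run is interactive and can take many hours:

```python
import shutil
import subprocess

RESULT_ID = "2209031-NE-2209025NE82"  # result file ID quoted on this page

def comparison_command(result_id: str) -> list[str]:
    """Build the PTS invocation that benchmarks the local system
    against a published OpenBenchmarking.org result file."""
    return ["phoronix-test-suite", "benchmark", result_id]

cmd = comparison_command(RESULT_ID)
if shutil.which(cmd[0]):
    # Downloads the result file and runs the same tests locally (interactive).
    subprocess.run(cmd, check=True)
else:
    print("phoronix-test-suite not installed; run:", " ".join(cmd))
```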


Run Management

  Result Identifier    Date               Test Run Duration
  CentOS Stream 9      August 31 2022     23 Hours, 41 Minutes
  Clear Linux 36990    September 01 2022  19 Hours, 13 Minutes
  Ubuntu 20.04.1 LTS   September 02 2022  21 Hours, 41 Minutes


System Details

Shared hardware (all three runs):
- Processor: 2 x Intel Xeon Platinum 8380 @ 3.40GHz (80 Cores / 160 Threads)
- Motherboard: Intel M50CYP2SB2U (SE5C6200.86B.0022.D08.2103221623 BIOS)
- Chipset: Intel Device 0998
- Memory: 512GB
- Disk: 7682GB INTEL SSDPF2KX076TZ
- Graphics: ASPEED
- Monitor: VE228
- Network: 2 x Intel X710 for 10GBASE-T + 2 x Intel E810-C for QSFP

Per-run software:
- CentOS Stream 9: OS: CentOS Stream 9; Kernel: 5.14.0-148.el9.x86_64 (x86_64); Desktop: GNOME Shell 40.10; Display Server: X Server; Compiler: GCC 11.3.1 20220421; File-System: xfs; Screen Resolution: 1920x1080
- Clear Linux 36990: OS: Clear Linux OS 36990; Kernel: 5.19.6-1185.native (x86_64); Desktop: GNOME Shell 42.4; Display Server: X Server 1.21.1.3; Compiler: GCC 12.2.1 20220831 releases/gcc-12.2.0-35-g63997f2223 + Clang 14.0.6 + LLVM 14.0.6; File-System: ext4
- Ubuntu 20.04.1 LTS: OS: Ubuntu 22.04; Kernel: 5.15.0-47-generic (x86_64); Desktop: GNOME Shell 42.2; Vulkan: 1.2.204; Compiler: GCC 11.2.0

Kernel Details
- CentOS Stream 9: Transparent Huge Pages: always
- Clear Linux 36990: Transparent Huge Pages: always
- Ubuntu 20.04.1 LTS: Transparent Huge Pages: madvise

Compiler Details
- CentOS Stream 9: --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-host-bind-now --enable-host-pie --enable-initfini-array --enable-languages=c,c++,fortran,lto --enable-link-serialization=1 --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-arch_64=x86-64-v2 --with-build-config=bootstrap-lto --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver --without-isl
- Clear Linux 36990: --build=x86_64-generic-linux --disable-libmpx --disable-libunwind-exceptions --disable-multiarch --disable-vtable-verify --disable-werror --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-clocale=gnu --enable-default-pie --enable-gnu-indirect-function --enable-gnu-indirect-function --enable-host-shared --enable-languages=c,c++,fortran,go,jit --enable-ld=default --enable-libstdcxx-pch --enable-linux-futex --enable-lto --enable-multilib --enable-plugin --enable-shared --enable-threads=posix --exec-prefix=/usr --includedir=/usr/include --target=x86_64-generic-linux --with-arch=x86-64-v3 --with-gcc-major-version-only --with-glibc-version=2.35 --with-gnu-ld --with-isl --with-pic --with-ppl=yes --with-tune=skylake-avx512 --with-zstd
- Ubuntu 20.04.1 LTS: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Disk Details
- CentOS Stream 9: NONE / attr2,inode64,logbsize=32k,logbufs=8,noquota,relatime,rw,seclabel / Block Size: 4096
- Clear Linux 36990: MQ-DEADLINE / relatime,rw,stripe=256 / Block Size: 4096
- Ubuntu 20.04.1 LTS: NONE / errors=remount-ro,relatime,rw / Block Size: 4096

Processor Details
- CentOS Stream 9: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0xd000363
- Clear Linux 36990: Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0xd000375
- Ubuntu 20.04.1 LTS: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0xd000363

Java Details
- CentOS Stream 9: OpenJDK Runtime Environment (Red_Hat-11.0.16.0.8-2.el9) (build 11.0.16+8-LTS)
- Clear Linux 36990: OpenJDK Runtime Environment (build 18.0.1-internal+0-adhoc.mockbuild.corretto-18-18.0.1.10.1)
- Ubuntu 20.04.1 LTS: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)

Python Details
- CentOS Stream 9: Python 3.9.13
- Clear Linux 36990: Python 3.10.6
- Ubuntu 20.04.1 LTS: Python 3.10.4

Security Details
- CentOS Stream 9: SELinux + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
- Clear Linux 36990: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected
- Ubuntu 20.04.1 LTS: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Environment Details
- Clear Linux 36990: FFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z -Wl,now -Wl,-z -Wl,relro -malign-data=abi -fno-semantic-interposition -ftree-vectorize -ftree-loop-vectorize -Wl,--enable-new-dtags" CXXFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -Wformat -Wformat-security -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z -Wl,now -Wl,-z -Wl,relro -fno-semantic-interposition -ffat-lto-objects -fno-trapping-math -Wl,-sort-common -Wl,--enable-new-dtags -mtune=skylake -mrelax-cmpxchg-loop -fvisibility-inlines-hidden -Wl,--enable-new-dtags" FCFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z -Wl,now -Wl,-z -Wl,relro -malign-data=abi -fno-semantic-interposition -ftree-vectorize -ftree-loop-vectorize -Wl,-sort-common -Wl,--enable-new-dtags" CFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -Wformat -Wformat-security -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z -Wl,now -Wl,-z -Wl,relro -fno-semantic-interposition -ffat-lto-objects -fno-trapping-math -Wl,-sort-common -Wl,--enable-new-dtags -mtune=skylake -mrelax-cmpxchg-loop" THEANO_FLAGS="floatX=float32,openmp=true,gcc.cxxflags="-ftree-vectorize -mavx""

Result Overview (logarithmic scale): CentOS Stream 9 vs. Clear Linux 36990 vs. Ubuntu 20.04.1 LTS, via the Phoronix Test Suite. Benchmarks covered: C-Blosc, Natron, Renaissance, x264, DaCapo Benchmark, PostgreSQL pgbench, VP9 libvpx Encoding, Timed LLVM Compilation, Node.js Express HTTP Load Test, SVT-AV1, GraphicsMagick, Apache Spark, ClickHouse, SVT-HEVC, Zstd Compression, TensorFlow Lite, libavif avifenc, SVT-VP9, memtier_benchmark, Node.js V8 Web Tooling Benchmark, oneDNN, Redis, Stress-NG, 7-Zip Compression, Unpacking The Linux Kernel, LAMMPS Molecular Dynamics Simulator, TNN, Apache HTTP Server, WebP Image Encode, OSPRay, ASTC Encoder, ONNX Runtime, nginx, Stockfish, Mobile Neural Network, GROMACS, Blender, simdjson, High Performance Conjugate Gradient, OpenSSL, NAMD, OSPRay Studio.

[Condensed summary table: per-test values for CentOS Stream 9, Clear Linux 36990, and Ubuntu 20.04.1 LTS. The flattened export is not recoverable here; the detailed per-test results follow below.]
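Overall scores across a result file like this are typically aggregated as geometric means, which keep one test's outsized ratio from dominating the composite. A minimal sketch; the sample values are illustrative, not taken from this result file:

```python
from statistics import geometric_mean

# Illustrative relative scores (baseline system = 1.0) for one system
# across three hypothetical tests.
relative_scores = [1.05, 0.92, 2.00]

# Geometric mean: the n-th root of the product of the values.
overall = geometric_mean(relative_scores)
print(round(overall, 3))
```

An arithmetic mean of the same values would be pulled up to 1.32 by the single 2.00x result; the geometric mean stays near 1.25.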

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench to drive the database workloads. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 500 - Mode: Read Only - Average Latency (ms, fewer is better)
  Ubuntu 20.04.1 LTS: 0.285 (SE +/- 0.004, N = 12; Min 0.27 / Avg 0.28 / Max 0.31) [-O2]
  Clear Linux 36990:  0.275 (SE +/- 0.007, N = 12; Min 0.25 / Avg 0.27 / Max 0.30) [-O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop]
  CentOS Stream 9:    0.270 (SE +/- 0.005, N = 12; Min 0.25 / Avg 0.27 / Max 0.29) [-O2]
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -lpgcommon -lpgport -lpq -lm

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 500 - Mode: Read Only (TPS, more is better)
  Ubuntu 20.04.1 LTS: 1760440 (SE +/- 24026.09, N = 12; Min 1634369.08 / Avg 1760440.5 / Max 1852918.69) [-O2]
  Clear Linux 36990:  1831665 (SE +/- 42947.24, N = 12; Min 1643669.33 / Avg 1831665.25 / Max 2003918.94) [-O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop]
  CentOS Stream 9:    1855656 (SE +/- 30425.21, N = 12; Min 1702189.9 / Avg 1855656.11 / Max 1970416.52) [-O2]
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -lpgcommon -lpgport -lpq -lm
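As a sanity check on the two pgbench graphs above, the reported average latency is (to rounding) the client count divided by throughput. A quick calculation with the averages reported above:

```python
# pgbench read-only run: 500 clients; average TPS per system from the results above.
CLIENTS = 500
tps = {
    "Ubuntu 20.04.1 LTS": 1760440,
    "Clear Linux 36990": 1831665,
    "CentOS Stream 9": 1855656,
}

for system, throughput in tps.items():
    # clients / TPS gives seconds per transaction per client; x1000 for ms.
    latency_ms = CLIENTS / throughput * 1000.0
    print(f"{system}: {latency_ms:.3f} ms")
```

The computed values come out within a few thousandths of a millisecond of the reported 0.285 / 0.275 / 0.270 ms averages.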

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: DenseNet (ms, fewer is better)
  Ubuntu 20.04.1 LTS: 4704.04 (SE +/- 40.05, N = 9; MIN: 3855.1 / MAX: 6393.88; Min 4587.41 / Avg 4704.04 / Max 4901.58)
  CentOS Stream 9:    3955.05 (SE +/- 27.70, N = 3; MIN: 3833.99 / MAX: 5510.15; Min 3909.45 / Avg 3955.05 / Max 4005.08)
  Clear Linux 36990:  3620.86 (SE +/- 1.46, N = 3; MIN: 3599.42 / MAX: 3730.48; Min 3618.21 / Avg 3620.86 / Max 3623.25) [-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop]
  1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

Renaissance

Renaissance is a suite of benchmarks designed to exercise the Java JVM with workloads ranging from Apache Spark to a Twitter-like service to Scala features and more. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Savina Reactors.IO (ms, fewer is better)
  Ubuntu 20.04.1 LTS: 21545.7 (SE +/- 209.69, N = 6; MIN: 20986.56 / MAX: 35348.39; Min 20986.56 / Avg 21545.65 / Max 22331.64)
  CentOS Stream 9:    21219.4 (SE +/- 296.93, N = 3; MIN: 20627.9 / MAX: 32602.9; Min 20627.9 / Avg 21219.38 / Max 21561.04)
  Clear Linux 36990:  8256.6 (SE +/- 56.26, N = 15; MIN: 7799.33 / MAX: 12715.24; Min 7799.33 / Avg 8256.64 / Max 8639)

Stockfish

This is a test of Stockfish, an advanced open-source C++11 chess engine whose benchmark can scale up to 512 CPU threads. Learn more via the OpenBenchmarking.org test page.

Stockfish 15 - Total Time (Nodes Per Second, more is better)
  Ubuntu 20.04.1 LTS: 174792587 (SE +/- 1938385.82, N = 15; Min 159673139 / Avg 174792586.8 / Max 188968845)
  CentOS Stream 9:    179473129 (SE +/- 2364357.21, N = 15; Min 164540890 / Avg 179473129.13 / Max 195156974)
  Clear Linux 36990:  186079628 (SE +/- 3123886.35, N = 15; Min 166800438 / Avg 186079627.93 / Max 209129501) [-pipe -fexceptions -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop]
  1. (CXX) g++ options: -lgcov -m64 -lpthread -fno-exceptions -std=c++17 -fno-peel-loops -fno-tracer -pedantic -O3 -msse -msse3 -mpopcnt -mavx2 -mavx512f -mavx512bw -mavx512vnni -mavx512dq -mavx512vl -msse4.1 -mssse3 -msse2 -mbmi2 -flto -flto=jobserver
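From the Stockfish averages above, the relative differences fall out directly; a quick calculation with the slowest run as baseline:

```python
# Stockfish 15 nodes-per-second averages from the results above.
results = {
    "Ubuntu 20.04.1 LTS": 174792587,
    "CentOS Stream 9": 179473129,
    "Clear Linux 36990": 186079628,
}

baseline = min(results.values())  # slowest run is the 1.000x reference
for system, nps in results.items():
    print(f"{system}: {nps / baseline:.3f}x")
```

Clear Linux comes out roughly 6.5% ahead of the Ubuntu run, and CentOS Stream about 2.7% ahead.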

Apache Spark

This is a benchmark of Apache Spark with its PySpark interface. Apache Spark is an open-source unified analytics engine for large-scale data processing and dealing with big data. This test profile benchmarks Apache Spark in a single-system configuration using spark-submit. The test makes use of DIYBigData's pyspark-benchmark (https://github.com/DIYBigData/pyspark-benchmark/) for generating test data and various Apache Spark operations. Learn more via the OpenBenchmarking.org test page.

Apache Spark 3.3 - Row Count: 40000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe (Seconds, fewer is better)
  Ubuntu 20.04.1 LTS: 2.97 (SE +/- 0.19, N = 3; Min 2.75 / Avg 2.97 / Max 3.36)
  CentOS Stream 9:    2.79 (SE +/- 0.09, N = 3; Min 2.62 / Avg 2.79 / Max 2.92)
  Clear Linux 36990:  2.04 (SE +/- 0.04, N = 15; Min 1.86 / Avg 2.04 / Max 2.3)
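Assuming the pyspark-benchmark Pi tests follow the standard Spark Pi approach of counting random points that land inside the unit quarter-circle, a minimal single-process sketch of the idea looks like this (the real test distributes the sampling across 500 Spark partitions):

```python
import random

def estimate_pi(samples: int, seed: int = 42) -> float:
    """Estimate pi by sampling points in the unit square and
    counting those inside the quarter-circle of radius 1."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # Quarter-circle area / square area = pi/4, so scale the hit rate by 4.
    return 4.0 * inside / samples

print(estimate_pi(1_000_000))  # close to 3.14
```

The benchmark's timing then reflects how quickly the cluster (here, one machine) can churn through the sampling and aggregation, not the accuracy of the estimate.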

Apache Spark 3.3 - Row Count: 40000000 - Partitions: 500 - Calculate Pi Benchmark (Seconds, fewer is better)
  CentOS Stream 9:    36.21 (SE +/- 0.12, N = 3; Min 35.99 / Avg 36.21 / Max 36.41)
  Clear Linux 36990:  33.05 (SE +/- 0.06, N = 15; Min 32.72 / Avg 33.05 / Max 33.71)
  Ubuntu 20.04.1 LTS: 31.71 (SE +/- 0.05, N = 9; Min 31.45 / Avg 31.71 / Max 31.89)

Apache Spark 3.3 - Row Count: 40000000 - Partitions: 500 - SHA-512 Benchmark Time (Seconds, fewer is better)
  Ubuntu 20.04.1 LTS: 90.89 (SE +/- 1.02, N = 4; Min 88 / Avg 90.89 / Max 92.38)
  CentOS Stream 9:    88.83 (SE +/- 0.53, N = 3; Min 87.98 / Avg 88.83 / Max 89.81)
  Clear Linux 36990:  20.21 (SE +/- 0.19, N = 15; Min 18.6 / Avg 20.21 / Max 21.72)

Apache Spark 3.3 - Row Count: 40000000 - Partitions: 500 - Broadcast Inner Join Test Time (Seconds, fewer is better)
  Clear Linux 36990: 15.17 (SE +/- 0.83, N = 2)

Apache Spark 3.3 - Row Count: 40000000 - Partitions: 500 - Inner Join Test Time (Seconds, fewer is better)
  Clear Linux 36990: 15.59 (SE +/- 0.23, N = 2)

Apache Spark 3.3 - Row Count: 40000000 - Partitions: 500 - Repartition Test Time (Seconds, fewer is better)
  Clear Linux 36990: 13.07 (SE +/- 0.21, N = 2)

Apache Spark 3.3 - Row Count: 40000000 - Partitions: 500 - Group By Test Time (Seconds, fewer is better)
  Clear Linux 36990: 34.45 (SE +/- 5.10, N = 2)

PostgreSQL pgbench


PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 250 - Mode: Read Only - Average Latency (ms, fewer is better)
  Ubuntu 20.04.1 LTS: 0.157 (SE +/- 0.002, N = 12; Min 0.14 / Avg 0.16 / Max 0.16) [-O2]
  CentOS Stream 9:    0.150 (SE +/- 0.001, N = 3; Min 0.15 / Avg 0.15 / Max 0.15) [-O2]
  Clear Linux 36990:  0.132 (SE +/- 0.004, N = 12; Min 0.12 / Avg 0.13 / Max 0.15) [-O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop]
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -lpgcommon -lpgport -lpq -lm

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 250 - Mode: Read Only (TPS, more is better)
  Ubuntu 20.04.1 LTS: 1593950 (SE +/- 17628.71, N = 12; Min 1530613.95 / Avg 1593950.05 / Max 1778797.51) [-O2]
  CentOS Stream 9:    1669388 (SE +/- 9583.19, N = 3; Min 1655270.14 / Avg 1669387.76 / Max 1687672.99) [-O2]
  Clear Linux 36990:  1913115 (SE +/- 57458.70, N = 12; Min 1661379.78 / Avg 1913115.4 / Max 2124955.9) [-O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop]
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -lpgcommon -lpgport -lpq -lm

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: GPT-2 - Device: CPU - Executor: Standard (Inferences Per Minute, more is better)
  Clear Linux 36990:  10211 (SE +/- 346.22, N = 9; Min 9378.5 / Avg 10210.94 / Max 12014.5) [-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -flto=auto]
  Ubuntu 20.04.1 LTS: 10856 (SE +/- 93.38, N = 8; Min 10315.5 / Avg 10855.81 / Max 11047.5) [-flto]
  CentOS Stream 9:    11045 (SE +/- 388.59, N = 12; Min 9069 / Avg 11045.42 / Max 12021.5) [-flto]
  1. (CXX) g++ options: -O3 -ffunction-sections -fdata-sections -march=native -mtune=native -fno-fat-lto-objects -ldl -lrt

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: ArcFace ResNet-100 - Device: CPU - Executor: StandardCentOS Stream 9Ubuntu 20.04.1 LTSClear Linux 36990400800120016002000SE +/- 16.82, N = 12SE +/- 20.93, N = 5SE +/- 24.49, N = 12188118992077-flto-flto-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -flto=auto1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -fno-fat-lto-objects -ldl -lrt
OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: ArcFace ResNet-100 - Device: CPU - Executor: StandardCentOS Stream 9Ubuntu 20.04.1 LTSClear Linux 36990400800120016002000Min: 1791.5 / Avg: 1880.54 / Max: 1942Min: 1817.5 / Avg: 1898.9 / Max: 1928.5Min: 1978.5 / Avg: 2076.63 / Max: 2237.51. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -fno-fat-lto-objects -ldl -lrt
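A note on reading these tables: the Phoronix Test Suite reports a standard error of the mean (SE) alongside each average. Assuming the conventional definition (sample standard deviation divided by the square root of N — an assumption, as this result file does not document the formula), the summary figures can be reproduced from raw per-run results like so; the sample numbers are made up for illustration:

```python
import math
import statistics

def summarize(runs):
    """Return (avg, standard error, min, max) for a list of per-run results.
    SE is computed as sample stdev / sqrt(N) -- the conventional definition,
    assumed (not documented here) to match what PTS reports."""
    n = len(runs)
    avg = statistics.mean(runs)
    se = statistics.stdev(runs) / math.sqrt(n)
    return avg, se, min(runs), max(runs)

# Illustrative (made-up) per-run inferences-per-minute figures:
runs = [10315.5, 10700.0, 10900.0, 11047.5]
avg, se, lo, hi = summarize(runs)
print(f"Avg: {avg:.2f}, SE +/- {se:.2f}, N = {len(runs)}, Min: {lo} / Max: {hi}")
```

This also explains why a run with a large SE relative to its mean (for example the Inception V4 result further down) should be read with caution.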

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported by benchdnn. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org — ms, Fewer Is Better
oneDNN 2.6 — Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU
Ubuntu 20.04.1 LTS: 497.81 (SE +/- 8.91, N = 15; Min/Avg/Max: 402.9 / 497.81 / 536.79; MIN: 385.19)
Clear Linux 36990:  487.12 (SE +/- 4.67, N = 15; Min/Avg/Max: 454.04 / 487.12 / 533.05; MIN: 431.07)
CentOS Stream 9:    447.62 (SE +/- 7.22, N = 15; Min/Avg/Max: 393.11 / 447.62 / 486.98; MIN: 376.51)
Per-OS flags: Clear Linux adds -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop.
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

ONNX Runtime


OpenBenchmarking.org — Inferences Per Minute, More Is Better
ONNX Runtime 1.11 — Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
CentOS Stream 9:    443 (SE +/- 1.17, N = 3; Min/Avg/Max: 441.5 / 443.33 / 445.5)
Ubuntu 20.04.1 LTS: 483 (SE +/- 18.60, N = 12; Min/Avg/Max: 437 / 482.92 / 574)
Clear Linux 36990:  521 (SE +/- 17.65, N = 12; Min/Avg/Max: 436.5 / 521.33 / 571)
Per-OS flags: CentOS and Ubuntu add -flto; Clear Linux adds -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -flto=auto.
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -fno-fat-lto-objects -ldl -lrt

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other JVM features. Learn more via the OpenBenchmarking.org test page.
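Outside of PTS, individual Renaissance workloads can be invoked directly from the harness JAR. The JAR filename and repetition count below are illustrative assumptions; the benchmark names (finagle-http, als, db-shootout) correspond to the three Renaissance tests appearing in this comparison:

```shell
# Run a single Renaissance workload with 10 measured repetitions (-r).
java -jar renaissance-gpl-0.14.0.jar -r 10 finagle-http
java -jar renaissance-gpl-0.14.0.jar -r 10 als
java -jar renaissance-gpl-0.14.0.jar -r 10 db-shootout
```

The harness prints per-repetition times in ms, which is the unit reported in the graphs below.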

OpenBenchmarking.org — ms, Fewer Is Better
Renaissance 0.14 — Test: Finagle HTTP Requests
CentOS Stream 9:    8693.9 (SE +/- 154.17, N = 12; Min/Avg/Max: 8107.68 / 8693.92 / 9648.75; MIN: 6648.05 / MAX: 15659.82)
Ubuntu 20.04.1 LTS: 8478.8 (SE +/- 197.62, N = 13; Min/Avg/Max: 7693.23 / 8478.8 / 9573.87; MIN: 6706.47 / MAX: 17648.26)
Clear Linux 36990:  5950.5 (SE +/- 56.01, N = 3; Min/Avg/Max: 5840.72 / 5950.5 / 6024.72; MIN: 5353.91 / MAX: 6140.39)

oneDNN


OpenBenchmarking.org — ms, Fewer Is Better
oneDNN 2.6 — Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU
Ubuntu 20.04.1 LTS: 770.26 (SE +/- 10.13, N = 15; Min/Avg/Max: 691.92 / 770.26 / 843.47; MIN: 662.44)
Clear Linux 36990:  728.04 (SE +/- 6.52, N = 15; Min/Avg/Max: 682.09 / 728.04 / 791.16; MIN: 651.84)
CentOS Stream 9:    697.28 (SE +/- 6.94, N = 12; Min/Avg/Max: 627.38 / 697.28 / 719.91; MIN: 605.85)
Per-OS flags: Clear Linux adds -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop.
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org — Items Per Second, More Is Better
OSPRay 2.10 — Benchmark: particle_volume/scivis/real_time
CentOS Stream 9:    24.30 (SE +/- 0.29, N = 3; Min/Avg/Max: 23.8 / 24.3 / 24.8)
Clear Linux 36990:  24.72 (SE +/- 0.05, N = 3; Min/Avg/Max: 24.63 / 24.72 / 24.77)
Ubuntu 20.04.1 LTS: 24.73 (SE +/- 0.00, N = 3; Min/Avg/Max: 24.72 / 24.73 / 24.73)

Renaissance


OpenBenchmarking.org — ms, Fewer Is Better
Renaissance 0.14 — Test: ALS Movie Lens
Ubuntu 20.04.1 LTS: 18654.1 (SE +/- 88.50, N = 3; Min/Avg/Max: 18482.22 / 18654.15 / 18776.54; MIN: 18482.22 / MAX: 21069.29)
CentOS Stream 9:    17123.9 (SE +/- 73.46, N = 3; Min/Avg/Max: 16999.98 / 17123.88 / 17254.2; MIN: 16240.16 / MAX: 19195.87)
Clear Linux 36990:  8225.4 (SE +/- 28.18, N = 3; Min/Avg/Max: 8194.03 / 8225.41 / 8281.65; MIN: 8194.03 / MAX: 9046.37)

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation, focused on TensorFlow machine learning for mobile, IoT, edge, and similar use cases. The current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org — Microseconds, Fewer Is Better
TensorFlow Lite 2022-05-18 — Model: Inception V4
CentOS Stream 9:    73896.5 (SE +/- 21727.22, N = 15; Min/Avg/Max: 35453.9 / 73896.46 / 362681)
Clear Linux 36990:  42370.3 (SE +/- 2536.95, N = 15; Min/Avg/Max: 35507 / 42370.31 / 62938.2)
Ubuntu 20.04.1 LTS: 36132.7 (SE +/- 264.98, N = 15; Min/Avg/Max: 35035.8 / 36132.68 / 38853.5)

memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic generation and benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.
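A run matching the configurations below can be sketched with memtier_benchmark's standard options; the server address, thread count, and duration are illustrative assumptions:

```shell
# 50 clients against a local Redis, with a 5:1 set-to-get ratio
# (swap --ratio=1:10 for the read-heavy configuration further down).
memtier_benchmark --server=127.0.0.1 --port=6379 --protocol=redis \
    --clients=50 --threads=4 --ratio=5:1 --test-time=60
```

The tool reports aggregate Ops/sec, the unit shown in these graphs.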

OpenBenchmarking.org — Ops/sec, More Is Better
memtier_benchmark 1.4 — Protocol: Redis - Clients: 50 - Set To Get Ratio: 5:1
CentOS Stream 9:    1339297.91 (SE +/- 63962.41, N = 12; Min/Avg/Max: 966119.24 / 1339297.91 / 1668140.37)
Ubuntu 20.04.1 LTS: 1457157.14 (SE +/- 53478.91, N = 15; Min/Avg/Max: 1046747.75 / 1457157.14 / 1723232.26)
Clear Linux 36990:  1760873.07 (SE +/- 66661.35, N = 12; Min/Avg/Max: 1097948.3 / 1760873.07 / 1949244.22)
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

OSPRay


OpenBenchmarking.org — Items Per Second, More Is Better
OSPRay 2.10 — Benchmark: particle_volume/pathtracer/real_time
Ubuntu 20.04.1 LTS: 98.66 (SE +/- 0.28, N = 3; Min/Avg/Max: 98.12 / 98.66 / 99.07)
CentOS Stream 9:    100.67 (SE +/- 0.72, N = 3; Min/Avg/Max: 99.77 / 100.67 / 102.09)
Clear Linux 36990:  161.93 (SE +/- 0.68, N = 3; Min/Avg/Max: 160.97 / 161.93 / 163.24)

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support and analyzing the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.
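The throughput and latency pairs below come from OpenVINO's bundled benchmark_app tool, which can be invoked directly along these lines; the model path and duration are illustrative assumptions:

```shell
# Asynchronous CPU inference for 60 seconds on an IR-format model;
# benchmark_app reports both throughput (FPS) and latency (ms).
benchmark_app -m age-gender-recognition-retail-0013.xml -d CPU -api async -t 60
```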

OpenBenchmarking.org — ms, Fewer Is Better
OpenVINO 2022.2.dev — Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU
CentOS Stream 9:    1.50 (SE +/- 0.05, N = 15; Min/Avg/Max: 1.19 / 1.5 / 1.85; MIN: 0.34 / MAX: 29.48)
Ubuntu 20.04.1 LTS: 0.99 (SE +/- 0.07, N = 12; Min/Avg/Max: 0.73 / 0.99 / 1.36; MIN: 0.21 / MAX: 76.7)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.org — FPS, More Is Better
OpenVINO 2022.2.dev — Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU
CentOS Stream 9:    42731.93 (SE +/- 1567.95, N = 15; Min/Avg/Max: 32765.75 / 42731.93 / 52857.39)
Ubuntu 20.04.1 LTS: 66238.83 (SE +/- 5640.72, N = 12; Min/Avg/Max: 43827.36 / 66238.83 / 88837.64)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking rather than any GPU-accelerated variant. MNN can make use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

All Mobile Neural Network 2.1 results below are via OpenBenchmarking.org, in ms, Fewer Is Better. Per-OS flags: Clear Linux additionally builds with -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop.
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Model: inception-v3
Ubuntu 20.04.1 LTS: 20.54 (SE +/- 0.12, N = 3; Min/Avg/Max: 20.32 / 20.54 / 20.71; MIN: 19.73 / MAX: 36.82)
Clear Linux 36990:  20.45 (SE +/- 0.03, N = 3; Min/Avg/Max: 20.38 / 20.45 / 20.49; MIN: 19.73 / MAX: 33.83)
CentOS Stream 9:    20.09 (SE +/- 0.19, N = 15; Min/Avg/Max: 18.66 / 20.09 / 21.34; MIN: 17.31 / MAX: 37.29)

Model: mobilenet-v1-1.0
Clear Linux 36990:  2.179 (SE +/- 0.016, N = 3; Min/Avg/Max: 2.15 / 2.18 / 2.21; MIN: 2.08 / MAX: 2.41)
Ubuntu 20.04.1 LTS: 2.178 (SE +/- 0.016, N = 3; Min/Avg/Max: 2.15 / 2.18 / 2.2; MIN: 2.08 / MAX: 2.58)
CentOS Stream 9:    2.090 (SE +/- 0.047, N = 15; Min/Avg/Max: 1.78 / 2.09 / 2.24; MIN: 1.76 / MAX: 3.93)

Model: MobileNetV2_224
Ubuntu 20.04.1 LTS: 3.077 (SE +/- 0.011, N = 3; Min/Avg/Max: 3.06 / 3.08 / 3.09; MIN: 3.01 / MAX: 3.32)
Clear Linux 36990:  2.912 (SE +/- 0.013, N = 3; Min/Avg/Max: 2.89 / 2.91 / 2.93; MIN: 2.72 / MAX: 5.76)
CentOS Stream 9:    2.663 (SE +/- 0.014, N = 15; Min/Avg/Max: 2.6 / 2.66 / 2.75; MIN: 2.48 / MAX: 5.57)

Model: SqueezeNetV1.0
Ubuntu 20.04.1 LTS: 4.258 (SE +/- 0.029, N = 3; Min/Avg/Max: 4.21 / 4.26 / 4.31; MIN: 4.15 / MAX: 9.78)
Clear Linux 36990:  4.244 (SE +/- 0.058, N = 3; Min/Avg/Max: 4.17 / 4.24 / 4.36; MIN: 3.95 / MAX: 8.29)
CentOS Stream 9:    3.956 (SE +/- 0.075, N = 15; Min/Avg/Max: 3.67 / 3.96 / 4.46; MIN: 3.51 / MAX: 9.33)

Model: resnet-v2-50
CentOS Stream 9:    8.663 (SE +/- 0.088, N = 15; Min/Avg/Max: 7.94 / 8.66 / 9.23; MIN: 7.71 / MAX: 20.48)
Ubuntu 20.04.1 LTS: 8.602 (SE +/- 0.036, N = 3; Min/Avg/Max: 8.53 / 8.6 / 8.64; MIN: 7.85 / MAX: 30.88)
Clear Linux 36990:  8.563 (SE +/- 0.071, N = 3; Min/Avg/Max: 8.42 / 8.56 / 8.64; MIN: 8.16 / MAX: 9.67)

Model: squeezenetv1.1
Ubuntu 20.04.1 LTS: 2.590 (SE +/- 0.012, N = 3; Min/Avg/Max: 2.57 / 2.59 / 2.61; MIN: 2.54 / MAX: 5.99)
Clear Linux 36990:  2.529 (SE +/- 0.014, N = 3; Min/Avg/Max: 2.51 / 2.53 / 2.56; MIN: 2.48 / MAX: 2.83)
CentOS Stream 9:    2.356 (SE +/- 0.050, N = 15; Min/Avg/Max: 2.07 / 2.36 / 2.67; MIN: 2.03 / MAX: 5.76)

Model: mobilenetV3
Ubuntu 20.04.1 LTS: 1.864 (SE +/- 0.012, N = 3; Min/Avg/Max: 1.85 / 1.86 / 1.89; MIN: 1.81 / MAX: 2.46)
Clear Linux 36990:  1.862 (SE +/- 0.017, N = 3; Min/Avg/Max: 1.83 / 1.86 / 1.89; MIN: 1.81 / MAX: 2.51)
CentOS Stream 9:    1.753 (SE +/- 0.020, N = 15; Min/Avg/Max: 1.63 / 1.75 / 1.86; MIN: 1.61 / MAX: 4.19)

Model: nasnet
Ubuntu 20.04.1 LTS: 12.92 (SE +/- 0.16, N = 3; Min/Avg/Max: 12.63 / 12.92 / 13.2; MIN: 10.67 / MAX: 25.28)
Clear Linux 36990:  12.81 (SE +/- 0.09, N = 3; Min/Avg/Max: 12.65 / 12.81 / 12.97; MIN: 12.47 / MAX: 16.42)
CentOS Stream 9:    12.10 (SE +/- 0.23, N = 15; Min/Avg/Max: 10.69 / 12.1 / 13.11; MIN: 10.54 / MAX: 23.03)

TensorFlow Lite


OpenBenchmarking.org — Microseconds, Fewer Is Better
TensorFlow Lite 2022-05-18 — Model: NASNet Mobile
Ubuntu 20.04.1 LTS: 80034.0 (SE +/- 3387.75, N = 12; Min/Avg/Max: 65377.7 / 80034.01 / 104357)
CentOS Stream 9:    68713.0 (SE +/- 3728.57, N = 12; Min/Avg/Max: 62607.9 / 68712.96 / 109269)
Clear Linux 36990:  68566.7 (SE +/- 935.16, N = 15; Min/Avg/Max: 64020.3 / 68566.65 / 79083.9)

OpenBenchmarking.org — Microseconds, Fewer Is Better
TensorFlow Lite 2022-05-18 — Model: SqueezeNet
CentOS Stream 9:    16614.4 (SE +/- 5506.54, N = 12; Min/Avg/Max: 5327.65 / 16614.41 / 68861)
Clear Linux 36990:  6244.19 (SE +/- 370.01, N = 12; Min/Avg/Max: 5465.21 / 6244.19 / 10090.6)
Ubuntu 20.04.1 LTS: 5617.26 (SE +/- 58.03, N = 15; Min/Avg/Max: 5285.06 / 5617.26 / 6097.49)

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported, as is HIP for AMD Radeon GPUs. Learn more via the OpenBenchmarking.org test page.
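A CPU-only render of this kind can be sketched with Blender's headless mode; the .blend filename below is an illustrative assumption standing in for the Barbershop sample file:

```shell
# -b: run without the UI, -E CYCLES: use the Cycles engine,
# -f 1: render frame 1; arguments after -- go to Cycles itself.
blender -b barbershop_interior.blend -E CYCLES -f 1 -- --cycles-device CPU
```

The wall-clock time of the render is the Seconds figure reported below.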

OpenBenchmarking.org — Seconds, Fewer Is Better
Blender 3.2 — Blend File: Barbershop - Compute: CPU-Only
Ubuntu 20.04.1 LTS: 262.65 (SE +/- 0.19, N = 3; Min/Avg/Max: 262.27 / 262.65 / 262.9)
CentOS Stream 9:    257.15 (SE +/- 0.55, N = 3; Min/Avg/Max: 256.22 / 257.15 / 258.13)
Clear Linux 36990:  253.27 (SE +/- 0.37, N = 3; Min/Avg/Max: 252.82 / 253.27 / 254.01)

ONNX Runtime


OpenBenchmarking.org — Inferences Per Minute, More Is Better
ONNX Runtime 1.11 — Model: yolov4 - Device: CPU - Executor: Standard
Clear Linux 36990:  636 (SE +/- 12.00, N = 12; Min/Avg/Max: 604.5 / 635.67 / 699)
Ubuntu 20.04.1 LTS: 671 (SE +/- 7.60, N = 4; Min/Avg/Max: 661 / 670.75 / 693)
CentOS Stream 9:    694 (SE +/- 1.17, N = 3; Min/Avg/Max: 692 / 694.17 / 696)
Per-OS flags: Clear Linux adds -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -flto=auto; Ubuntu and CentOS add -flto.
1. (CXX) g++ options: -O3 -ffunction-sections -fdata-sections -march=native -mtune=native -fno-fat-lto-objects -ldl -lrt

OSPRay


OpenBenchmarking.org — Items Per Second, More Is Better
OSPRay 2.10 — Benchmark: particle_volume/ao/real_time
CentOS Stream 9:    24.35 (SE +/- 0.07, N = 3; Min/Avg/Max: 24.26 / 24.35 / 24.49)
Clear Linux 36990:  24.59 (SE +/- 0.14, N = 3; Min/Avg/Max: 24.32 / 24.59 / 24.78)
Ubuntu 20.04.1 LTS: 24.85 (SE +/- 0.06, N = 3; Min/Avg/Max: 24.78 / 24.85 / 24.98)

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.
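For orientation, a minimal LAMMPS input script looks like the following. This is the classic Lennard-Jones melt example, not the exact 20k-atom model this test profile uses; it is included only to illustrate the shape of a LAMMPS run:

```text
# in.lj -- minimal Lennard-Jones melt (illustrative, not the PTS 20k-atom input)
units           lj
atom_style      atomic
lattice         fcc 0.8442
region          box block 0 10 0 10 0 10
create_box      1 box
create_atoms    1 box
mass            1 1.0
velocity        all create 1.44 87287
pair_style      lj/cut 2.5
pair_coeff      1 1 1.0 1.0 2.5
fix             1 all nve
run             100
```

It would be run as `lmp -in in.lj`; LAMMPS reports performance in ns/day, the unit in the graph below.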

OpenBenchmarking.org — ns/day, More Is Better
LAMMPS Molecular Dynamics Simulator 23Jun2022 — Model: 20k Atoms
Ubuntu 20.04.1 LTS: 34.82 (SE +/- 0.04, N = 3; Min/Avg/Max: 34.73 / 34.82 / 34.87)
Clear Linux 36990:  35.05 (SE +/- 0.06, N = 3; Min/Avg/Max: 34.94 / 35.05 / 35.13)
CentOS Stream 9:    35.12 (SE +/- 0.05, N = 3; Min/Avg/Max: 35.04 / 35.12 / 35.21)
Per-OS flags: Clear Linux adds -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop.
1. (CXX) g++ options: -O3 -lm -ldl

ONNX Runtime


OpenBenchmarking.org — Inferences Per Minute, More Is Better
ONNX Runtime 1.11 — Model: super-resolution-10 - Device: CPU - Executor: Standard
Clear Linux 36990:  10151 (SE +/- 690.46, N = 12; Min/Avg/Max: 6882 / 10151.38 / 11950.5)
Ubuntu 20.04.1 LTS: 11619 (SE +/- 135.56, N = 3; Min/Avg/Max: 11357.5 / 11618.83 / 11812)
CentOS Stream 9:    12260 (SE +/- 43.63, N = 3; Min/Avg/Max: 12190.5 / 12260.17 / 12340.5)
Per-OS flags: Clear Linux adds -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -flto=auto; Ubuntu and CentOS add -flto.
1. (CXX) g++ options: -O3 -ffunction-sections -fdata-sections -march=native -mtune=native -fno-fat-lto-objects -ldl -lrt

TensorFlow Lite


OpenBenchmarking.org — Microseconds, Fewer Is Better
TensorFlow Lite 2022-05-18 — Model: Inception ResNet V2
Ubuntu 20.04.1 LTS: 54111.0 (SE +/- 1518.21, N = 15; Min/Avg/Max: 47734.3 / 54110.95 / 69012.2)
Clear Linux 36990:  49331.9 (SE +/- 583.51, N = 15; Min/Avg/Max: 47180.5 / 49331.85 / 54041.2)
CentOS Stream 9:    47297.8 (SE +/- 268.62, N = 3; Min/Avg/Max: 46787.4 / 47297.83 / 47698.2)

Apache HTTP Server

This is a test of the Apache HTTPD web server. This test profile uses the Golang "Bombardier" program to generate HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.
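A comparable load can be sketched with bombardier directly; the target URL, port, and duration below are illustrative assumptions:

```shell
# -c: number of concurrent connections, -d: test duration.
bombardier -c 1000 -d 30s http://127.0.0.1:8088/test.html
```

bombardier reports requests per second, which is the unit in the graph below.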

OpenBenchmarking.org — Requests Per Second, More Is Better
Apache HTTP Server 2.4.48 — Concurrent Requests: 1000
Clear Linux 36990:  118161.83 (SE +/- 760.56, N = 3; Min/Avg/Max: 116952.09 / 118161.83 / 119565.29)
CentOS Stream 9:    131349.60 (SE +/- 1558.40, N = 15; Min/Avg/Max: 113593.7 / 131349.6 / 135605.05)
Ubuntu 20.04.1 LTS: 137078.35 (SE +/- 1575.81, N = 3; Min/Avg/Max: 134278.43 / 137078.35 / 139731.27)
Per-OS flags: Clear Linux builds with -O3 -m64 -mtune=skylake -mrelax-cmpxchg-loop; CentOS and Ubuntu build with -O2.
1. (CC) gcc options: -shared -fPIC

Renaissance


OpenBenchmarking.org — ms, Fewer Is Better
Renaissance 0.14 — Test: In-Memory Database Shootout
Ubuntu 20.04.1 LTS: 18477.3 (SE +/- 82.13, N = 3; Min/Avg/Max: 18342.05 / 18477.32 / 18625.65; MIN: 18342.05 / MAX: 21649.71)
CentOS Stream 9:    17787.2 (SE +/- 197.42, N = 3; Min/Avg/Max: 17444.33 / 17787.24 / 18128.2; MIN: 17444.33 / MAX: 21383.13)

Test: In-Memory Database Shootout

Clear Linux 36990: The test run did not produce a result.

C-Blosc

C-Blosc (c-blosc2) is a simple, compressed, fast, and persistent data store library for C. Learn more via the OpenBenchmarking.org test page.
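The "blosclz shuffle" configuration measured below corresponds to compressing with the SHUFFLE filter enabled. A minimal sketch of that usage, assuming the classic blosc1-style API (which c-blosc2 retains in its compatibility layer); the buffer size is illustrative:

```c
#include <stdio.h>
#include <string.h>
#include <blosc.h>  /* classic blosc1-style API */

int main(void) {
    enum { N = 100000 };
    static float src[N], back[N];
    static char packed[N * sizeof(float) + BLOSC_MAX_OVERHEAD];
    for (int i = 0; i < N; i++) src[i] = (float)i;

    blosc_init();
    /* clevel 5, byte SHUFFLE filter, typesize = sizeof(float);
       swap BLOSC_BITSHUFFLE for the bitshuffle configuration below. */
    int csize = blosc_compress(5, BLOSC_SHUFFLE, sizeof(float),
                               sizeof(src), src, packed, sizeof(packed));
    int dsize = blosc_decompress(packed, back, sizeof(back));
    printf("compressed %zu -> %d bytes (decompressed %d), round-trip ok: %d\n",
           sizeof(src), csize, dsize, memcmp(src, back, sizeof(src)) == 0);
    blosc_destroy();
    return 0;
}
```

The benchmark's MB/s figure is essentially the throughput of calls like `blosc_compress` above over larger buffers.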

OpenBenchmarking.org — MB/s, More Is Better
C-Blosc 2.3 — Test: blosclz shuffle
Ubuntu 20.04.1 LTS: 4279.5 (SE +/- 18.38, N = 3; Min/Avg/Max: 4249.1 / 4279.47 / 4312.6)
CentOS Stream 9:    4916.7 (SE +/- 23.45, N = 3; Min/Avg/Max: 4886.1 / 4916.73 / 4962.8)
Clear Linux 36990:  17905.8 (SE +/- 153.72, N = 15; Min/Avg/Max: 17321.5 / 17905.79 / 19245.5)
1. (CC) gcc options: -std=gnu99 -O3 -lrt -lm

memtier_benchmark


OpenBenchmarking.org — Ops/sec, More Is Better
memtier_benchmark 1.4 — Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:10
CentOS Stream 9:    1398073.70 (SE +/- 67672.39, N = 12; Min/Avg/Max: 1126743.33 / 1398073.7 / 1952770.48)
Ubuntu 20.04.1 LTS: 1755483.28 (SE +/- 17388.30, N = 3; Min/Avg/Max: 1732597.08 / 1755483.28 / 1789602.93)
Clear Linux 36990:  1994991.48 (SE +/- 83568.62, N = 13; Min/Avg/Max: 1190376.76 / 1994991.48 / 2288695.61)
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

TensorFlow Lite


OpenBenchmarking.org — Microseconds, Fewer Is Better
TensorFlow Lite 2022-05-18 — Model: Mobilenet Float
CentOS Stream 9:    4240.41 (SE +/- 499.70, N = 12; Min/Avg/Max: 3338.31 / 4240.41 / 9146.21)
Clear Linux 36990:  3701.27 (SE +/- 75.96, N = 15; Min/Avg/Max: 3362.9 / 3701.27 / 4201.56)
Ubuntu 20.04.1 LTS: 3340.60 (SE +/- 20.34, N = 3; Min/Avg/Max: 3303.43 / 3340.6 / 3373.51)

OpenVINO


OpenBenchmarking.org — ms, Fewer Is Better
OpenVINO 2022.2.dev — Model: Person Vehicle Bike Detection FP16 - Device: CPU
Ubuntu 20.04.1 LTS: 18.31 (SE +/- 0.01, N = 3; Min/Avg/Max: 18.29 / 18.31 / 18.33; MIN: 10.08 / MAX: 179.25)
CentOS Stream 9:    13.60 (SE +/- 0.30, N = 15; Min/Avg/Max: 10.69 / 13.6 / 14.26; MIN: 8.57 / MAX: 68.28)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenBenchmarking.org — FPS, More Is Better
OpenVINO 2022.2.dev — Model: Person Vehicle Bike Detection FP16 - Device: CPU
CentOS Stream 9:    1478.64 (SE +/- 39.85, N = 15; Min/Avg/Max: 1398 / 1478.64 / 1868.26)
Ubuntu 20.04.1 LTS: 2178.70 (SE +/- 1.52, N = 3; Min/Avg/Max: 2176.32 / 2178.7 / 2181.53)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38 - Operation: HWB Color Space (Iterations Per Minute, More Is Better)
  Ubuntu 20.04.1 LTS: 636  (SE +/- 6.23, N = 15; Min 608 / Max 685)
  CentOS Stream 9:    1138 (SE +/- 28.49, N = 12; Min 938 / Max 1302)
  Clear Linux 36990:  1737 (SE +/- 4.16, N = 3; Min 1729 / Max 1743)
  Build flags: Ubuntu: -O2 -ljbig -ltiff -lfreetype -lSM -lICE -llzma; CentOS: -O2 -lbz2; Clear Linux: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -ltiff -lfreetype -lSM -lICE -lbz2 -lxml2
  1. (CC) gcc options: -fopenmp -ljpeg -lXext -lX11 -lz -lm -lpthread

C-Blosc

C-Blosc (c-blosc2) is a simple, compressed, fast, and persistent data store library for C. Learn more via the OpenBenchmarking.org test page.

C-Blosc 2.3 - Test: blosclz bitshuffle (MB/s, More Is Better)
  Ubuntu 20.04.1 LTS: 3193.0  (SE +/- 7.59, N = 3; Min 3181.3 / Max 3207.2)
  CentOS Stream 9:    3704.1  (SE +/- 6.11, N = 3; Min 3692.6 / Max 3713.4)
  Clear Linux 36990:  12603.1 (SE +/- 20.65, N = 3; Min 12564.5 / Max 12635.1)
  1. (CC) gcc options: -std=gnu99 -O3 -lrt -lm

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient benchmark, a scientific benchmark from Sandia National Laboratories aimed at supercomputer testing with modern, real-world workloads, in contrast to HPCC. Learn more via the OpenBenchmarking.org test page.
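HPCG's core workload is a preconditioned conjugate gradient solve on a sparse system. As a rough illustration only (a generic, unpreconditioned CG in NumPy on a small dense stand-in matrix, not the HPCG reference kernel), the basic iteration looks like this:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Plain (unpreconditioned) CG for a symmetric positive-definite A."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Small SPD example: M^T M plus a diagonal shift is positive definite.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M.T @ M + 50 * np.eye(50)
b = rng.standard_normal(50)
x = conjugate_gradient(A, b)
print(np.allclose(A @ x, b))  # -> True
```

HPCG itself additionally applies a multigrid preconditioner and runs on a distributed sparse problem, which is why it stresses memory bandwidth and interconnects rather than dense FLOPS.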

High Performance Conjugate Gradient 3.1 (GFLOP/s, More Is Better)
  Ubuntu 20.04.1 LTS: 40.20 (SE +/- 0.05, N = 3; Min 40.11 / Max 40.29)
  CentOS Stream 9:    40.28 (SE +/- 0.08, N = 3; Min 40.14 / Max 40.39)
  Clear Linux 36990:  40.86 (SE +/- 0.05, N = 3; Min 40.8 / Max 40.96)
  Build flags: Ubuntu and CentOS additionally: -lmpi_cxx
  1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 13.0 - Build System: Ninja (Seconds, Fewer Is Better)
  Clear Linux 36990:  267.39 (SE +/- 0.26, N = 3; Min 266.87 / Max 267.76)
  CentOS Stream 9:    135.02 (SE +/- 0.23, N = 3; Min 134.59 / Max 135.39)
  Ubuntu 20.04.1 LTS: 131.70 (SE +/- 0.42, N = 3; Min 131.04 / Max 132.48)

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.10.0 - Speed: Speed 0 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
  Ubuntu 20.04.1 LTS: 2.95 (SE +/- 0.03, N = 3; Min 2.89 / Max 2.98)
  CentOS Stream 9:    3.04 (SE +/- 0.02, N = 3; Min 3.02 / Max 3.07)
  Clear Linux 36990:  6.15 (SE +/- 0.02, N = 3; Min 6.13 / Max 6.2)
  Build flags: Ubuntu and CentOS: -U_FORTIFY_SOURCE; Clear Linux: -pipe -fexceptions -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -std=gnu++11

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, Fewer Is Better)
  Ubuntu 20.04.1 LTS: 49953 (SE +/- 27.95, N = 3; Min 49901 / Max 49997)
  Clear Linux 36990:  48506 (SE +/- 111.95, N = 3; Min 48361 / Max 48726)
  CentOS Stream 9:    48319 (SE +/- 81.93, N = 3; Min 48156 / Max 48415)
  Build flags: Ubuntu: -lm; Clear Linux: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -lm
  1. (CXX) g++ options: -O3 -ldl

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation, focused on TensorFlow machine learning for mobile, IoT, edge, and similar use cases. The current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18 - Model: Mobilenet Quant (Microseconds, Fewer Is Better)
  Ubuntu 20.04.1 LTS: 10747.97 (SE +/- 337.08, N = 15; Min 9333.34 / Max 12938.7)
  CentOS Stream 9:    9540.86  (SE +/- 100.45, N = 3; Min 9372.16 / Max 9719.7)
  Clear Linux 36990:  8576.95  (SE +/- 81.76, N = 6; Min 8277.81 / Max 8873.84)

ClickHouse

ClickHouse is an open-source, high-performance OLAP data management system. This test profile follows ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ with the 100-million-row web analytics dataset. The reported value is queries per minute, derived from the geometric mean of all query processing times. Learn more via the OpenBenchmarking.org test page.
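The geometric-mean aggregation described above can be sketched as follows; the per-query times here are hypothetical, not taken from this run:

```python
import math

def queries_per_minute_geomean(query_times_s):
    """Geometric mean of per-query times (seconds), expressed as queries/minute."""
    log_sum = sum(math.log(t) for t in query_times_s)
    geo_mean_time = math.exp(log_sum / len(query_times_s))
    return 60.0 / geo_mean_time

# Hypothetical per-query processing times in seconds:
print(round(queries_per_minute_geomean([0.12, 0.35, 1.8, 0.07]), 2))
```

Using a geometric mean keeps one very slow (or very fast) query from dominating the aggregate the way an arithmetic mean would.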

ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, Third Run (Queries Per Minute, Geo Mean, More Is Better)
  Ubuntu 20.04.1 LTS: 225.18 (SE +/- 2.99, N = 3; run Min 219.64 / Max 229.89; per-query MIN 37.43 / MAX 5454.55)
  CentOS Stream 9:    243.95 (SE +/- 1.95, N = 15; run Min 233.04 / Max 257.32; per-query MIN 42.11 / MAX 6000)
  Clear Linux 36990:  400.32 (SE +/- 5.35, N = 12; run Min 374.42 / Max 434.04; per-query MIN 54.25 / MAX 20000)
  1. ClickHouse server version 22.5.4.19 (official build).

ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, Second Run (Queries Per Minute, Geo Mean, More Is Better)
  Ubuntu 20.04.1 LTS: 223.34 (SE +/- 5.12, N = 3; run Min 216.18 / Max 233.25; per-query MIN 43.48 / MAX 5454.55)
  CentOS Stream 9:    244.38 (SE +/- 1.48, N = 15; run Min 236.34 / Max 255.26; per-query MIN 44.09 / MAX 5454.55)
  Clear Linux 36990:  400.44 (SE +/- 5.40, N = 12; run Min 375.33 / Max 433.41; per-query MIN 53.29 / MAX 20000)
  1. ClickHouse server version 22.5.4.19 (official build).

ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, First Run / Cold Cache (Queries Per Minute, Geo Mean, More Is Better)
  Ubuntu 20.04.1 LTS: 214.26 (SE +/- 2.32, N = 3; run Min 211.04 / Max 218.77; per-query MIN 36.76 / MAX 2727.27)
  CentOS Stream 9:    231.48 (SE +/- 2.21, N = 15; run Min 212.68 / Max 244.77; per-query MIN 41.47 / MAX 5454.55)
  Clear Linux 36990:  386.79 (SE +/- 5.14, N = 12; run Min 363.65 / Max 416.95; per-query MIN 51.15 / MAX 20000)
  1. ClickHouse server version 22.5.4.19 (official build).

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: Atomic (Bogo Ops/s, More Is Better)
  Clear Linux 36990:  145035.86 (SE +/- 3304.16, N = 15; Min 127421.65 / Max 159671.65)
  Ubuntu 20.04.1 LTS: 183431.66 (SE +/- 3618.17, N = 15; Min 159582.8 / Max 201202.47)
  CentOS Stream 9:    187775.77 (SE +/- 3961.98, N = 15; Min 164376.47 / Max 204483.19)
  Build flags: Clear Linux: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -latomic; Ubuntu: -lapparmor -latomic
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

OpenVINO

This is a test of the Intel OpenVINO toolkit for neural network inference, using its built-in benchmarking support and analyzing the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16 - Device: CPU (ms, Fewer Is Better)
  Ubuntu 20.04.1 LTS: 38.04 (SE +/- 0.02, N = 3; run Min 38.01 / Max 38.09; per-request MIN 17.97 / MAX 246.5)
  CentOS Stream 9:    18.67 (SE +/- 0.29, N = 12; run Min 18.27 / Max 21.83; per-request MIN 11.54 / MAX 79.43)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16 - Device: CPU (FPS, More Is Better)
  Ubuntu 20.04.1 LTS: 1049.56 (SE +/- 0.62, N = 3; Min 1048.38 / Max 1050.47)
  CentOS Stream 9:    1071.70 (SE +/- 14.44, N = 12; Min 914.28 / Max 1092.92)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: Futex (Bogo Ops/s, More Is Better)
  Ubuntu 20.04.1 LTS: 942679.72  (SE +/- 59760.04, N = 15; Min 619867.04 / Max 1247331.68)
  CentOS Stream 9:    1088788.92 (SE +/- 73263.26, N = 15; Min 696500.15 / Max 1490893.83)
  Clear Linux 36990:  1140633.72 (SE +/- 60890.67, N = 15; Min 846766.74 / Max 1364896.25)
  Build flags: Ubuntu: -lapparmor -latomic; Clear Linux: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -latomic
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38 - Operation: Resizing (Iterations Per Minute, More Is Better)
  Ubuntu 20.04.1 LTS: 417  (SE +/- 9.26, N = 15; Min 351 / Max 466)
  CentOS Stream 9:    2748 (SE +/- 27.10, N = 3; Min 2717 / Max 2802)
  Clear Linux 36990:  2851 (SE +/- 35.23, N = 4; Min 2753 / Max 2913)
  Build flags: Ubuntu: -O2 -ljbig -ltiff -lfreetype -lSM -lICE -llzma; CentOS: -O2 -lbz2; Clear Linux: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -ltiff -lfreetype -lSM -lICE -lbz2 -lxml2
  1. (CC) gcc options: -fopenmp -ljpeg -lXext -lX11 -lz -lm -lpthread

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10 - Benchmark: gravity_spheres_volume/dim_512/scivis/real_time (Items Per Second, More Is Better)
  Ubuntu 20.04.1 LTS: 20.90 (SE +/- 0.02, N = 3; Min 20.86 / Max 20.93)
  CentOS Stream 9:    22.02 (SE +/- 0.15, N = 3; Min 21.75 / Max 22.26)
  Clear Linux 36990:  22.22 (SE +/- 0.07, N = 3; Min 22.1 / Max 22.34)

OSPRay 2.10 - Benchmark: gravity_spheres_volume/dim_512/ao/real_time (Items Per Second, More Is Better)
  Ubuntu 20.04.1 LTS: 20.99 (SE +/- 0.02, N = 3; Min 20.95 / Max 21.02)
  CentOS Stream 9:    22.42 (SE +/- 0.06, N = 3; Min 22.3 / Max 22.5)
  Clear Linux 36990:  22.52 (SE +/- 0.05, N = 3; Min 22.47 / Max 22.62)

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench to drive the database workload. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 250 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
  CentOS Stream 9:    12.051 (SE +/- 0.012, N = 3; Min 12.03 / Max 12.07)
  Ubuntu 20.04.1 LTS: 7.321  (SE +/- 0.021, N = 3; Min 7.28 / Max 7.35)
  Clear Linux 36990:  2.862  (SE +/- 0.005, N = 3; Min 2.85 / Max 2.87)
  Build flags: CentOS and Ubuntu: -O2; Clear Linux: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -lpgcommon -lpgport -lpq -lm

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 250 - Mode: Read Write (TPS, More Is Better)
  CentOS Stream 9:    20745 (SE +/- 21.44, N = 3; Min 20713.98 / Max 20786.09)
  Ubuntu 20.04.1 LTS: 34151 (SE +/- 96.97, N = 3; Min 34011.12 / Max 34337.26)
  Clear Linux 36990:  87357 (SE +/- 149.93, N = 3; Min 87081.55 / Max 87597.32)
  Build flags: CentOS and Ubuntu: -O2; Clear Linux: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -lpgcommon -lpgport -lpq -lm

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 500 - Mode: Read Write - Average Latency (ms, Fewer Is Better)
  CentOS Stream 9:    26.724 (SE +/- 0.047, N = 3; Min 26.66 / Max 26.82)
  Ubuntu 20.04.1 LTS: 16.881 (SE +/- 0.005, N = 3; Min 16.88 / Max 16.89)
  Clear Linux 36990:  5.828  (SE +/- 0.008, N = 3; Min 5.81 / Max 5.84)
  Build flags: CentOS and Ubuntu: -O2; Clear Linux: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -lpgcommon -lpgport -lpq -lm

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 500 - Mode: Read Write (TPS, More Is Better)
  CentOS Stream 9:    18710 (SE +/- 32.56, N = 3; Min 18646.54 / Max 18754.52)
  Ubuntu 20.04.1 LTS: 29620 (SE +/- 8.12, N = 3; Min 29603.65 / Max 29630.45)
  Clear Linux 36990:  85792 (SE +/- 119.89, N = 3; Min 85631.08 / Max 86026.08)
  Build flags: CentOS and Ubuntu: -O2; Clear Linux: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -lpgcommon -lpgport -lpq -lm
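A quick sanity check on the pgbench figures above: pgbench drives a closed loop, with each client issuing its next transaction as soon as the previous one completes, so the average latency should come out near clients / TPS. Using the reported TPS values:

```python
def expected_latency_ms(clients, tps):
    """Closed-loop approximation: average latency ~= clients / TPS, in ms."""
    return clients / tps * 1000.0

# Reported TPS values from the 500-client read/write runs above:
for name, tps in [("CentOS Stream 9", 18710),
                  ("Ubuntu 20.04.1 LTS", 29620),
                  ("Clear Linux 36990", 85792)]:
    print(name, round(expected_latency_ms(500, tps), 2))
# -> approximately 26.72, 16.88, and 5.83 ms, matching the reported
#    average latencies of 26.724, 16.881, and 5.828 ms.
```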

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38 - Operation: Rotate (Iterations Per Minute, More Is Better)
  Ubuntu 20.04.1 LTS: 759  (SE +/- 4.18, N = 3; Min 754 / Max 767)
  Clear Linux 36990:  1029 (SE +/- 10.17, N = 3; Min 1009 / Max 1043)
  CentOS Stream 9:    1030 (SE +/- 7.69, N = 15; Min 948 / Max 1062)
  Build flags: Ubuntu: -O2 -ljbig -ltiff -lfreetype -lSM -lICE -llzma; Clear Linux: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -ltiff -lfreetype -lSM -lICE -lbz2 -lxml2; CentOS: -O2 -lbz2
  1. (CC) gcc options: -fopenmp -ljpeg -lXext -lX11 -lz -lm -lpthread

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 2 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, Fewer Is Better)
  Ubuntu 20.04.1 LTS: 42476 (SE +/- 93.93, N = 3; Min 42309 / Max 42634)
  Clear Linux 36990:  40932 (SE +/- 84.10, N = 3; Min 40775 / Max 41063)
  CentOS Stream 9:    40852 (SE +/- 38.89, N = 3; Min 40775 / Max 40900)
  Build flags: Ubuntu: -lm; Clear Linux: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -lm
  1. (CXX) g++ options: -O3 -ldl

OSPRay Studio 0.11 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, Fewer Is Better)
  Ubuntu 20.04.1 LTS: 41973 (SE +/- 56.40, N = 3; Min 41872 / Max 42067)
  CentOS Stream 9:    40580 (SE +/- 74.23, N = 3; Min 40445 / Max 40701)
  Clear Linux 36990:  40503 (SE +/- 180.17, N = 3; Min 40214 / Max 40834)
  Build flags: Ubuntu: -lm; Clear Linux: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -lm
  1. (CXX) g++ options: -O3 -ldl

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10 - Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time (Items Per Second, More Is Better)
  Ubuntu 20.04.1 LTS: 25.29 (SE +/- 0.03, N = 3; Min 25.23 / Max 25.34)
  CentOS Stream 9:    25.59 (SE +/- 0.04, N = 3; Min 25.52 / Max 25.63)
  Clear Linux 36990:  25.65 (SE +/- 0.03, N = 3; Min 25.58 / Max 25.69)

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel (Inferences Per Minute, More Is Better)
  Ubuntu 20.04.1 LTS: 233 (SE +/- 0.33, N = 3; Min 232.5 / Max 233.5)
  CentOS Stream 9:    236 (SE +/- 0.17, N = 3; Min 235.5 / Max 236)
  Clear Linux 36990:  241 (SE +/- 0.17, N = 3; Min 240.5 / Max 241)
  Build flags: Ubuntu and CentOS: -flto; Clear Linux: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -flto=auto
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.11 - Model: GPT-2 - Device: CPU - Executor: Parallel (Inferences Per Minute, More Is Better)
  Clear Linux 36990:  5259 (SE +/- 26.07, N = 3; Min 5222 / Max 5309)
  CentOS Stream 9:    5269 (SE +/- 32.87, N = 3; Min 5205.5 / Max 5316)
  Ubuntu 20.04.1 LTS: 5305 (SE +/- 17.61, N = 3; Min 5274.5 / Max 5335.5)
  Build flags: Clear Linux: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -flto=auto; CentOS and Ubuntu: -flto
  1. (CXX) g++ options: -O3 -ffunction-sections -fdata-sections -march=native -mtune=native -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.11 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel (Inferences Per Minute, More Is Better)
  Ubuntu 20.04.1 LTS: 1682 (SE +/- 12.00, N = 3; Min 1658 / Max 1694)
  CentOS Stream 9:    1693 (SE +/- 3.09, N = 3; Min 1687 / Max 1696.5)
  Clear Linux 36990:  1978 (SE +/- 3.21, N = 3; Min 1972 / Max 1983)
  Build flags: Ubuntu and CentOS: -flto; Clear Linux: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -flto=auto
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.11 - Model: bertsquad-12 - Device: CPU - Executor: Parallel (Inferences Per Minute, More Is Better)
  CentOS Stream 9:    799 (SE +/- 2.02, N = 3; Min 796 / Max 802.5)
  Clear Linux 36990:  828 (SE +/- 0.29, N = 3; Min 827 / Max 828)
  Ubuntu 20.04.1 LTS: 834 (SE +/- 1.92, N = 3; Min 830 / Max 836)
  Build flags: CentOS: -flto; Clear Linux: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -flto=auto; Ubuntu: -flto
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.11 - Model: yolov4 - Device: CPU - Executor: Parallel (Inferences Per Minute, More Is Better)
  CentOS Stream 9:    630 (SE +/- 1.04, N = 3; Min 628 / Max 631.5)
  Clear Linux 36990:  640 (SE +/- 0.29, N = 3; Min 639.5 / Max 640.5)
  Ubuntu 20.04.1 LTS: 644 (SE +/- 2.08, N = 3; Min 640.5 / Max 647.5)
  Build flags: CentOS: -flto; Clear Linux: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -flto=auto; Ubuntu: -flto
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.11 - Model: bertsquad-12 - Device: CPU - Executor: Standard (Inferences Per Minute, More Is Better)
  Ubuntu 20.04.1 LTS: 977  (SE +/- 10.38, N = 3; Min 962.5 / Max 997)
  Clear Linux 36990:  992  (SE +/- 5.77, N = 3; Min 983.5 / Max 1003)
  CentOS Stream 9:    1093 (SE +/- 0.50, N = 3; Min 1091.5 / Max 1093)
  Build flags: Ubuntu: -flto; Clear Linux: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -flto=auto; CentOS: -flto
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -fno-fat-lto-objects -ldl -lrt

Natron

Natron is an open-source, cross-platform compositing software for visual effects (VFX) and motion graphics. Learn more via the OpenBenchmarking.org test page.

Natron 2.4.3 - Input: Spaceship (FPS, More Is Better)
  Ubuntu 20.04.1 LTS: 1.6 (SE +/- 0.00, N = 3; Min 1.6 / Max 1.6)
  CentOS Stream 9:    1.9 (SE +/- 0.01, N = 15; Min 1.8 / Max 2)
  Clear Linux 36990:  5.0 (SE +/- 0.03, N = 3; Min 5 / Max 5.1)

Zstd Compression

This test measures the time needed to compress/decompress a sample input file using Zstd compression supplied by the system or otherwise externally of the test profile. Learn more via the OpenBenchmarking.org test page.

Zstd Compression - Compression Level: 8 - Decompression Speed (MB/s, More Is Better)
  Ubuntu 20.04.1 LTS: 2788.9 (SE +/- 2.78, N = 3; Min 2784.6 / Max 2794.1)
  Clear Linux 36990:  3007.5 (SE +/- 4.36, N = 15; Min 2978 / Max 3033.9)
  CentOS Stream 9:    3017.5 (SE +/- 6.19, N = 12; Min 2973.2 / Max 3043.8)
  zstd versions (64-bit command line interface, by Yann Collet): Ubuntu 20.04.1 LTS: v1.4.8; Clear Linux 36990: v1.5.2; CentOS Stream 9: v1.5.1

Zstd Compression - Compression Level: 8 - Compression Speed (MB/s, More Is Better)
  CentOS Stream 9:    1244.0 (SE +/- 18.11, N = 12; Min 1141.3 / Max 1347.8)
  Clear Linux 36990:  1621.3 (SE +/- 15.83, N = 15; Min 1521.9 / Max 1702.3)
  Ubuntu 20.04.1 LTS: 1638.3 (SE +/- 17.15, N = 3; Min 1613.4 / Max 1671.2)
  zstd versions (64-bit command line interface, by Yann Collet): CentOS Stream 9: v1.5.1; Clear Linux 36990: v1.5.2; Ubuntu 20.04.1 LTS: v1.4.8

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: super-resolution-10 - Device: CPU - Executor: Parallel (Inferences Per Minute, More Is Better)
  Ubuntu 20.04.1 LTS: 3133 (SE +/- 10.50, N = 3; Min 3116 / Max 3152)
  CentOS Stream 9:    3259 (SE +/- 4.91, N = 3; Min 3250.5 / Max 3267.5)
  Clear Linux 36990:  8024 (SE +/- 8.23, N = 3; Min 8009.5 / Max 8038)
  Build flags: Ubuntu and CentOS: -flto; Clear Linux: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -flto=auto
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -fno-fat-lto-objects -ldl -lrt

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38 - Operation: Noise-Gaussian (Iterations Per Minute; more is better)
    Ubuntu 20.04.1 LTS: 513 (SE +/- 7.66, N = 12; Min 451 / Max 557) [-O2 -ljbig -ltiff -lfreetype -lSM -lICE -llzma]
    CentOS Stream 9:    738 (SE +/- 0.88, N = 3; Min 736 / Max 739) [-O2 -lbz2]
    Clear Linux 36990:  994 (SE +/- 3.93, N = 3; Min 986 / Max 999) [-O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -ltiff -lfreetype -lSM -lICE -lbz2 -lxml2]
    Common (CC) gcc options: -fopenmp -ljpeg -lXext -lX11 -lz -lm -lpthread

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Apache Spark Bayes (ms; fewer is better)
    Ubuntu 20.04.1 LTS: 1132.3 (SE +/- 14.06, N = 15; run averages Min 1046.53 / Max 1266.11; reported MIN 559.12 / MAX 2166.29)
    CentOS Stream 9:    1075.3 (SE +/- 11.23, N = 3; run averages Min 1052.97 / Max 1088.85; reported MIN 628.33 / MAX 1551.11)
    Clear Linux 36990:  479.0 (SE +/- 1.10, N = 3; run averages Min 477.72 / Max 481.21; reported MIN 329.34 / MAX 719.74)

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer (ms; fewer is better)
    Ubuntu 20.04.1 LTS: 20947 (SE +/- 70.51, N = 3; Min 20806 / Max 21019) [-lm]
    Clear Linux 36990:  20180 (SE +/- 13.09, N = 3; Min 20163 / Max 20206) [-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -lm]
    CentOS Stream 9:    20152 (SE +/- 58.89, N = 3; Min 20050 / Max 20254)
    Common (CXX) g++ options: -O3 -ldl

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.
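Each request counted in these Redis results is a command framed in the RESP protocol: an array of bulk strings, per the Redis serialization protocol specification. A minimal sketch of that wire encoding (the key and value here are arbitrary examples, not what the benchmark client actually sends):

```python
def encode_resp_command(*args: str) -> bytes:
    """Encode a Redis command as a RESP array of bulk strings."""
    parts = [f"*{len(args)}\r\n".encode()]
    for arg in args:
        data = arg.encode()
        # Bulk string: $<byte length>\r\n<payload>\r\n
        parts.append(b"$%d\r\n%s\r\n" % (len(data), data))
    return b"".join(parts)

# A single SET request as it would appear on the wire:
payload = encode_resp_command("SET", "key:1", "value")
# b'*3\r\n$3\r\nSET\r\n$5\r\nkey:1\r\n$5\r\nvalue\r\n'
```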

Redis 7.0.4 - Test: SET - Parallel Connections: 500 (Requests Per Second; more is better)
    Ubuntu 20.04.1 LTS: 1835435.10 (SE +/- 20191.86, N = 5; Min 1781927.88 / Max 1880935)
    CentOS Stream 9:    1931278.62 (SE +/- 47157.16, N = 12; Min 1439413.25 / Max 2022608.12)
    Clear Linux 36990:  2083152.81 (SE +/- 26407.67, N = 15; Min 1853257.88 / Max 2209886.75) [-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop]
    Common (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

OSPRay Studio


OSPRay Studio 0.11 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer (ms; fewer is better)
    Ubuntu 20.04.1 LTS: 24929 (SE +/- 39.89, N = 3; Min 24870 / Max 25005) [-lm]
    Clear Linux 36990:  24102 (SE +/- 16.17, N = 3; Min 24070 / Max 24119) [-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -lm]
    CentOS Stream 9:    23967 (SE +/- 79.25, N = 3; Min 23809 / Max 24057)
    Common (CXX) g++ options: -O3 -ldl

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: Socket Activity (Bogo Ops/s; more is better)
    CentOS Stream 9:    2460.37 (SE +/- 900.65, N = 15; Min 6 / Max 8954.24)
    Clear Linux 36990:  36452.69 (SE +/- 324.72, N = 3; Min 35834.48 / Max 36934.11) [-O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -latomic]
    Ubuntu 20.04.1 LTS: 45595.60 (SE +/- 570.10, N = 15; Min 41828.77 / Max 48296.21) [-lapparmor -latomic]
    Common (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Zstd Compression

This test measures the time needed to compress/decompress a sample input file using the Zstd compression tool supplied by the system, i.e. external to the test profile itself. Learn more via the OpenBenchmarking.org test page.
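The MB/s figures in these results are simply bytes processed divided by elapsed time. A minimal sketch of that methodology, using Python's zlib as a stand-in codec since the actual test drives the system's zstd command-line tool on a sample file:

```python
import time
import zlib  # stand-in codec; the real test invokes the system zstd CLI

def measure_mb_per_sec(payload: bytes, level: int = 3):
    """Return (compression MB/s, decompression MB/s) for one pass."""
    t0 = time.perf_counter()
    compressed = zlib.compress(payload, level)
    t1 = time.perf_counter()
    restored = zlib.decompress(compressed)
    t2 = time.perf_counter()
    assert restored == payload  # sanity-check the round trip
    mb = len(payload) / 1e6
    return mb / (t1 - t0), mb / (t2 - t1)

sample = b"the quick brown fox jumps over the lazy dog " * 100_000
compress_speed, decompress_speed = measure_mb_per_sec(sample)
```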

Zstd Compression - Compression Level: 19 - Decompression Speed (MB/s; more is better)
    Ubuntu 20.04.1 LTS: 2498.1 (SE +/- 5.28, N = 15; Min 2441.3 / Max 2511.4)
    Clear Linux 36990:  2522.5 (SE +/- 2.24, N = 3; Min 2518 / Max 2525)
    CentOS Stream 9:    2571.3 (SE +/- 6.30, N = 3; Min 2559.2 / Max 2580.4)
    zstd CLI versions: v1.4.8 (Ubuntu 20.04.1 LTS), v1.5.2 (Clear Linux 36990), v1.5.1 (CentOS Stream 9)

Zstd Compression - Compression Level: 19 - Compression Speed (MB/s; more is better)
    Ubuntu 20.04.1 LTS: 74.0 (SE +/- 0.65, N = 15; Min 68.8 / Max 77.3)
    CentOS Stream 9:    86.6 (SE +/- 0.52, N = 3; Min 85.9 / Max 87.6)
    Clear Linux 36990:  91.5 (SE +/- 0.48, N = 3; Min 91 / Max 92.5)
    zstd CLI versions: v1.4.8 (Ubuntu 20.04.1 LTS), v1.5.1 (CentOS Stream 9), v1.5.2 (Clear Linux 36990)

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 4 - Input: Bosphorus 4K (Frames Per Second; more is better)
    Ubuntu 20.04.1 LTS: 1.292 (SE +/- 0.008, N = 3; Min 1.28 / Max 1.31)
    CentOS Stream 9:    1.327 (SE +/- 0.001, N = 3; Min 1.33 / Max 1.33)
    Clear Linux 36990:  2.351 (SE +/- 0.003, N = 3; Min 2.35 / Max 2.36) [-O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop]
    Common (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

OSPRay Studio


OSPRay Studio 0.11 - Camera: 2 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer (ms; fewer is better)
    Ubuntu 20.04.1 LTS: 21165 (SE +/- 40.18, N = 3; Min 21098 / Max 21237) [-lm]
    Clear Linux 36990:  20404 (SE +/- 50.10, N = 3; Min 20325 / Max 20497) [-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -lm]
    CentOS Stream 9:    20261 (SE +/- 49.21, N = 3; Min 20164 / Max 20325)
    Common (CXX) g++ options: -O3 -ldl

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.

InfluxDB 1.8.2 - Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec; more is better)
    CentOS Stream 9:    666008.7 (SE +/- 2481.53, N = 3; Min 661392.3 / Max 669895.1)
    Ubuntu 20.04.1 LTS: 668433.3 (SE +/- 4260.37, N = 3; Min 662079 / Max 676526.7)

Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000

Clear Linux 36990: The test quit with a non-zero exit status.

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web server. This Nginx benchmark test profile makes use of the Golang "Bombardier" program to issue HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.
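Bombardier itself is a Go program; its concurrent-client pattern can be sketched in Python against a throwaway local server. The worker count and request count below are arbitrary, and unlike Bombardier, which runs for a fixed period, this sketch issues a fixed number of requests:

```python
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class OkHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

# Throwaway local server standing in for nginx; port 0 = pick a free port.
server = ThreadingHTTPServer(("127.0.0.1", 0), OkHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"

def fetch(_):
    with urllib.request.urlopen(url) as resp:
        return resp.status

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=50) as pool:  # 50 concurrent clients
    statuses = list(pool.map(fetch, range(500)))
elapsed = time.perf_counter() - start
requests_per_second = len(statuses) / elapsed
server.shutdown()
```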

nginx 1.21.1 - Concurrent Requests: 1000 (Requests Per Second; more is better)
    CentOS Stream 9:    200945.49 (SE +/- 1519.57, N = 3; Min 197912.72 / Max 202632.18)
    Ubuntu 20.04.1 LTS: 210852.37 (SE +/- 2419.36, N = 4; Min 203685.74 / Max 214308.21)
    Clear Linux 36990:  215488.10 (SE +/- 255.31, N = 3; Min 215069.04 / Max 215950.31) [-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop]
    Common (CC) gcc options: -lcrypt -lz -O3 -march=native

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested, or alternatively an allmodconfig build of all possible kernel modules. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.18 - Build: defconfig (Seconds; fewer is better)
    CentOS Stream 9:    29.67 (SE +/- 0.39, N = 13; Min 28.98 / Max 34.27)
    Ubuntu 20.04.1 LTS: 27.81 (SE +/- 0.24, N = 15; Min 27.27 / Max 30.9)

Build: defconfig

Clear Linux 36990: The test quit with a non-zero exit status. E: linux-5.18/tools/objtool/include/objtool/elf.h:10:10: fatal error: gelf.h: No such file or directory
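From the averages above (29.67 s on CentOS Stream 9 versus 27.81 s on Ubuntu 20.04.1 LTS), the relative difference works out to roughly 6%:

```python
centos_s, ubuntu_s = 29.67, 27.81  # average defconfig build times above

# Percentage by which Ubuntu's build time undercuts CentOS's.
speedup_pct = (centos_s - ubuntu_s) / centos_s * 100  # ~6.3
```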

Zstd Compression


Zstd Compression - Compression Level: 3 - Decompression Speed (MB/s; more is better)
    Ubuntu 20.04.1 LTS: 2755.9 (SE +/- 3.69, N = 3; Min 2751.5 / Max 2763.2)
    Clear Linux 36990:  2985.2 (SE +/- 1.35, N = 3; Min 2983.3 / Max 2987.8)
    CentOS Stream 9:    3022.9 (SE +/- 0.65, N = 2; Min 3022.2 / Max 3023.5)
    zstd CLI versions: v1.4.8 (Ubuntu 20.04.1 LTS), v1.5.2 (Clear Linux 36990), v1.5.1 (CentOS Stream 9)

Zstd Compression - Compression Level: 3 - Compression Speed (MB/s; more is better)
    Ubuntu 20.04.1 LTS: 5669.9 (SE +/- 86.44, N = 15; Min 4937.4 / Max 6074.1)
    Clear Linux 36990:  6807.8 (SE +/- 4.17, N = 3; Min 6803.2 / Max 6816.1)
    CentOS Stream 9:    7026.1 (SE +/- 78.16, N = 3; Min 6873.2 / Max 7130.6)
    zstd CLI versions: v1.4.8 (Ubuntu 20.04.1 LTS), v1.5.2 (Clear Linux 36990), v1.5.1 (CentOS Stream 9)

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles benchmark with various sample files. GPU compute via NVIDIA OptiX and NVIDIA CUDA is currently supported, as is HIP for AMD Radeon GPUs. Learn more via the OpenBenchmarking.org test page.

Blender 3.2 - Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds; fewer is better)
    Ubuntu 20.04.1 LTS: 84.44 (SE +/- 0.17, N = 3; Min 84.2 / Max 84.77)
    CentOS Stream 9:    82.89 (SE +/- 0.02, N = 3; Min 82.85 / Max 82.91)
    Clear Linux 36990:  81.97 (SE +/- 0.10, N = 3; Min 81.79 / Max 82.13)

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10 - Encoder Speed: 0 (Seconds; fewer is better)
    Ubuntu 20.04.1 LTS: 87.17 (SE +/- 0.30, N = 3; Min 86.81 / Max 87.76)
    CentOS Stream 9:    84.32 (SE +/- 0.66, N = 3; Min 83 / Max 85.05)
    Clear Linux 36990:  78.59 (SE +/- 0.45, N = 3; Min 77.94 / Max 79.45) [-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop]
    Common (CXX) g++ options: -O3 -fPIC -lm

Zstd Compression


Zstd Compression - Compression Level: 3, Long Mode - Compression Speed (MB/s; more is better)
    Ubuntu 20.04.1 LTS: 102.4 (SE +/- 0.10, N = 3; Min 102.3 / Max 102.6)
    CentOS Stream 9:    281.0 (SE +/- 4.03, N = 3; Min 273 / Max 285.7)
    Clear Linux 36990:  965.3 (SE +/- 11.51, N = 4; Min 935.4 / Max 984.8)
    zstd CLI versions: v1.4.8 (Ubuntu 20.04.1 LTS), v1.5.1 (CentOS Stream 9), v1.5.2 (Clear Linux 36990)

Redis


Redis 7.0.4 - Test: GET - Parallel Connections: 500 (Requests Per Second; more is better)
    CentOS Stream 9:    2018201.09 (SE +/- 89203.76, N = 15; Min 1472476.75 / Max 2357865.5)
    Ubuntu 20.04.1 LTS: 2174436.56 (SE +/- 26054.27, N = 4; Min 2110365.75 / Max 2236155.5)
    Clear Linux 36990:  2765192.67 (SE +/- 22655.99, N = 3; Min 2724096.25 / Max 2802269.5) [-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop]
    Common (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU (ms; fewer is better)
    Ubuntu 20.04.1 LTS: 5.80053 (SE +/- 0.71740, N = 15; Min 3.36 / Max 14.79; reported MIN 3.08)
    CentOS Stream 9:    5.40603 (SE +/- 0.32475, N = 15; Min 3.69 / Max 8.91; reported MIN 3.28)
    Clear Linux 36990:  4.64086 (SE +/- 0.07618, N = 15; Min 4.23 / Max 5.24; reported MIN 3.55) [-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop]
    Common (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

Node.js V8 Web Tooling Benchmark

Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, such as Babel, TypeScript, and Babylon. This test profile measures the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.

Node.js V8 Web Tooling Benchmark (runs/s; more is better)
    CentOS Stream 9:    10.55 (SE +/- 0.06, N = 3; Min 10.44 / Max 10.66)
    Ubuntu 20.04.1 LTS: 10.66 (SE +/- 0.08, N = 3; Min 10.55 / Max 10.81)
    Clear Linux 36990:  13.56 (SE +/- 0.03, N = 3; Min 13.51 / Max 13.59)

Redis


Redis 7.0.4 - Test: SET - Parallel Connections: 1000 (Requests Per Second; more is better)
    CentOS Stream 9:    1847194.12 (SE +/- 55692.46, N = 12; Min 1444277.75 / Max 2020238.25)
    Ubuntu 20.04.1 LTS: 1851066.21 (SE +/- 13612.37, N = 3; Min 1829513.62 / Max 1876247.5)
    Clear Linux 36990:  2078925.13 (SE +/- 21642.05, N = 5; Min 1993126.88 / Max 2110900.25) [-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop]
    Common (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: MobileNet v2 (ms; fewer is better)
    Ubuntu 20.04.1 LTS: 463.98 (SE +/- 9.70, N = 15; Min 386.51 / Max 514.43; reported MIN 369.59 / MAX 652.1)
    CentOS Stream 9:    378.88 (SE +/- 4.68, N = 4; Min 373.95 / Max 392.9; reported MIN 371.88 / MAX 634.44)
    Clear Linux 36990:  348.82 (SE +/- 0.42, N = 3; Min 348.16 / Max 349.61; reported MIN 346.87 / MAX 355.44) [-pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop]
    Common (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

Stress-NG


Stress-NG 0.14 - Test: System V Message Passing (Bogo Ops/s; more is better)
    Ubuntu 20.04.1 LTS: 4420124.12 (SE +/- 182521.68, N = 15; Min 2906124.63 / Max 5110222.34) [-lapparmor -latomic]
    CentOS Stream 9:    7093379.73 (SE +/- 85352.98, N = 4; Min 6837619.62 / Max 7189613.49)
    Clear Linux 36990:  8684030.90 (SE +/- 1671.17, N = 3; Min 8680900.29 / Max 8686610.06) [-O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -latomic]
    Common (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread
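Stress-NG's System V test exercises the kernel's msgsnd()/msgrcv() IPC; Python's standard library has no SysV message-queue wrapper, so the sketch below uses a thread-and-queue analog purely to illustrate how an ops-per-second rate like the Bogo Ops/s figure is derived:

```python
import queue
import threading
import time

def message_rate(n_messages: int = 100_000) -> float:
    """Send/receive n_messages through a queue; return messages per second.

    Illustrative analog only: the actual test uses System V msgsnd()/msgrcv().
    """
    q = queue.Queue(maxsize=1024)

    def producer():
        for i in range(n_messages):
            q.put(i)

    worker = threading.Thread(target=producer)
    start = time.perf_counter()
    worker.start()
    received = sum(1 for _ in range(n_messages) if q.get() is not None)
    worker.join()
    elapsed = time.perf_counter() - start
    assert received == n_messages
    return n_messages / elapsed

rate = message_rate()
```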

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Face Detection FP16 - Device: CPU (ms; fewer is better)
    Ubuntu 20.04.1 LTS: 1684.39 (SE +/- 4.60, N = 3; Min 1675.85 / Max 1691.62; reported MIN 1122.36 / MAX 2167.69)
    CentOS Stream 9:    819.63 (SE +/- 0.67, N = 3; Min 818.48 / Max 820.81; reported MIN 519.3 / MAX 967.18)
    Common (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.2.dev - Model: Face Detection FP16 - Device: CPU (FPS; more is better)
    Ubuntu 20.04.1 LTS: 23.52 (SE +/- 0.07, N = 3)
    CentOS Stream 9:    24.29 (SE +/- 0.02, N = 3)
    Common (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl