New Tests

2 x Intel Xeon Platinum 8380 tested with an Intel M50CYP2SB2U (SE5C6200.86B.0022.D08.2103221623 BIOS) and ASPEED graphics on Ubuntu 22.04, via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2209031-NE-2209025NE82

Tests in this result file fall within the following categories:

- AV1: 2 tests
- Timed Code Compilation: 3 tests
- C/C++ Compiler Tests: 15 tests
- Compression Tests: 2 tests
- CPU Massive: 23 tests
- Creator Workloads: 15 tests
- Database Test Suite: 8 tests
- Encoding: 6 tests
- Fortran Tests: 2 tests
- Game Development: 2 tests
- Go Language Tests: 3 tests
- HPC - High Performance Computing: 10 tests
- Imaging: 3 tests
- Java: 2 tests
- Common Kernel Benchmarks: 3 tests
- Machine Learning: 6 tests
- Molecular Dynamics: 3 tests
- MPI Benchmarks: 3 tests
- Multi-Core: 23 tests
- Node.js + NPM Tests: 2 tests
- NVIDIA GPU Compute: 2 tests
- Intel oneAPI: 4 tests
- OpenMPI Tests: 3 tests
- Programmer / Developer System Benchmarks: 6 tests
- Python Tests: 4 tests
- Raytracing: 2 tests
- Renderers: 4 tests
- Scientific Computing: 3 tests
- Server: 13 tests
- Server CPU Tests: 15 tests
- Single-Threaded: 3 tests
- Video Encoding: 6 tests

Test Runs

- CentOS Stream 9: run August 31 2022; test duration: 23 hours, 41 minutes
- Clear Linux 36990: run September 01 2022; test duration: 19 hours, 13 minutes
- Ubuntu 20.04.1 LTS: run September 02 2022; test duration: 21 hours, 41 minutes


System Details (OpenBenchmarking.org)

Hardware (common to all three runs): Processor: 2 x Intel Xeon Platinum 8380 @ 3.40GHz (80 Cores / 160 Threads); Motherboard: Intel M50CYP2SB2U (SE5C6200.86B.0022.D08.2103221623 BIOS); Chipset: Intel Device 0998; Memory: 512GB; Disk: 7682GB INTEL SSDPF2KX076TZ; Graphics: ASPEED; Monitor: VE228; Network: 2 x Intel X710 for 10GBASE-T + 2 x Intel E810-C for QSFP; Screen Resolution: 1920x1080.

Software per run:
- CentOS Stream 9: OS: CentOS Stream 9; Kernel: 5.14.0-148.el9.x86_64 (x86_64); Desktop: GNOME Shell 40.10; Display Server: X Server; Compiler: GCC 11.3.1 20220421; File-System: xfs
- Clear Linux 36990: OS: Clear Linux OS 36990; Kernel: 5.19.6-1185.native (x86_64); Desktop: GNOME Shell 42.4; Display Server: X Server 1.21.1.3; Compiler: GCC 12.2.1 20220831 releases/gcc-12.2.0-35-g63997f2223 + Clang 14.0.6 + LLVM 14.0.6; File-System: ext4
- Ubuntu 20.04.1 LTS: OS: Ubuntu 22.04; Kernel: 5.15.0-47-generic (x86_64); Desktop: GNOME Shell 42.2; Compiler: GCC 11.2.0; Vulkan: 1.2.204

Kernel Details
- CentOS Stream 9: Transparent Huge Pages: always
- Clear Linux 36990: Transparent Huge Pages: always
- Ubuntu 20.04.1 LTS: Transparent Huge Pages: madvise

Compiler Details
- CentOS Stream 9: --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-host-bind-now --enable-host-pie --enable-initfini-array --enable-languages=c,c++,fortran,lto --enable-link-serialization=1 --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-arch_64=x86-64-v2 --with-build-config=bootstrap-lto --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver --without-isl
- Clear Linux 36990: --build=x86_64-generic-linux --disable-libmpx --disable-libunwind-exceptions --disable-multiarch --disable-vtable-verify --disable-werror --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-clocale=gnu --enable-default-pie --enable-gnu-indirect-function --enable-host-shared --enable-languages=c,c++,fortran,go,jit --enable-ld=default --enable-libstdcxx-pch --enable-linux-futex --enable-lto --enable-multilib --enable-plugin --enable-shared --enable-threads=posix --exec-prefix=/usr --includedir=/usr/include --target=x86_64-generic-linux --with-arch=x86-64-v3 --with-gcc-major-version-only --with-glibc-version=2.35 --with-gnu-ld --with-isl --with-pic --with-ppl=yes --with-tune=skylake-avx512 --with-zstd
- Ubuntu 20.04.1 LTS: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Disk Details
- CentOS Stream 9: NONE / attr2,inode64,logbsize=32k,logbufs=8,noquota,relatime,rw,seclabel / Block Size: 4096
- Clear Linux 36990: MQ-DEADLINE / relatime,rw,stripe=256 / Block Size: 4096
- Ubuntu 20.04.1 LTS: NONE / errors=remount-ro,relatime,rw / Block Size: 4096

Processor Details
- CentOS Stream 9: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0xd000363
- Clear Linux 36990: Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0xd000375
- Ubuntu 20.04.1 LTS: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0xd000363

Java Details
- CentOS Stream 9: OpenJDK Runtime Environment (Red_Hat-11.0.16.0.8-2.el9) (build 11.0.16+8-LTS)
- Clear Linux 36990: OpenJDK Runtime Environment (build 18.0.1-internal+0-adhoc.mockbuild.corretto-18-18.0.1.10.1)
- Ubuntu 20.04.1 LTS: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)

Python Details
- CentOS Stream 9: Python 3.9.13
- Clear Linux 36990: Python 3.10.6
- Ubuntu 20.04.1 LTS: Python 3.10.4

Security Details
- CentOS Stream 9: SELinux + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
- Clear Linux 36990: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected
- Ubuntu 20.04.1 LTS: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Environment Details
- Clear Linux 36990: FFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z -Wl,now -Wl,-z -Wl,relro -malign-data=abi -fno-semantic-interposition -ftree-vectorize -ftree-loop-vectorize -Wl,--enable-new-dtags" CXXFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -Wformat -Wformat-security -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z -Wl,now -Wl,-z -Wl,relro -fno-semantic-interposition -ffat-lto-objects -fno-trapping-math -Wl,-sort-common -Wl,--enable-new-dtags -mtune=skylake -mrelax-cmpxchg-loop -fvisibility-inlines-hidden -Wl,--enable-new-dtags" FCFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z -Wl,now -Wl,-z -Wl,relro -malign-data=abi -fno-semantic-interposition -ftree-vectorize -ftree-loop-vectorize -Wl,-sort-common -Wl,--enable-new-dtags" CFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -Wformat -Wformat-security -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z -Wl,now -Wl,-z -Wl,relro -fno-semantic-interposition -ffat-lto-objects -fno-trapping-math -Wl,-sort-common -Wl,--enable-new-dtags -mtune=skylake -mrelax-cmpxchg-loop" THEANO_FLAGS="floatX=float32,openmp=true,gcc.cxxflags="-ftree-vectorize -mavx""

Logarithmic Result Overview (Phoronix Test Suite): CentOS Stream 9 vs. Clear Linux 36990 vs. Ubuntu 20.04.1 LTS, covering C-Blosc, Natron, Renaissance, x264, DaCapo Benchmark, PostgreSQL pgbench, VP9 libvpx Encoding, Timed LLVM Compilation, Node.js Express HTTP Load Test, SVT-AV1, GraphicsMagick, Apache Spark, ClickHouse, SVT-HEVC, Zstd Compression, TensorFlow Lite, libavif avifenc, SVT-VP9, memtier_benchmark, Node.js V8 Web Tooling Benchmark, oneDNN, Redis, Stress-NG, 7-Zip Compression, Unpacking The Linux Kernel, LAMMPS Molecular Dynamics Simulator, TNN, Apache HTTP Server, WebP Image Encode, OSPRay, ASTC Encoder, ONNX Runtime, nginx, Stockfish, Mobile Neural Network, GROMACS, Blender, simdjson, High Performance Conjugate Gradient, OpenSSL, NAMD, and OSPRay Studio.
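Overviews that combine many heterogeneous tests are plotted on a logarithmic axis because the results span several orders of magnitude, and overall scores across tests are conventionally summarized with a geometric mean, which is scale-invariant across units. A minimal sketch of that computation (the numbers are illustrative, not values from this result file):

```python
import math

def geometric_mean(values):
    """n-th root of the product of the values, computed via logs for numerical stability."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Illustrative normalized scores (1.0 = baseline system). Doubling one test's
# score while halving another's leaves the geometric mean unchanged, which is
# why it is preferred over the arithmetic mean for cross-test summaries.
print(round(geometric_mean([2.0, 0.5]), 6))
print(round(geometric_mean([2.0, 8.0]), 6))
```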

Detailed side-by-side results table: flattened beyond reliable recovery in this export. The full table, covering every test listed above for all three runs, is available in the OpenBenchmarking.org result file 2209031-NE-2209025NE82. Individual per-test results follow below.

Zstd Compression

This test measures the time needed to compress/decompress a sample input file using Zstd compression, supplied by the system or otherwise external to the test profile. Learn more via the OpenBenchmarking.org test page.
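The compression-speed figures that follow are input throughput (megabytes of input processed per second). A minimal sketch of how such a number is measured, using Python's zlib as a stand-in codec since zstd bindings are not assumed to be installed (the payload and level are illustrative):

```python
import time
import zlib

def compression_speed_mb_s(data, level=3):
    """Compress `data` once and report input throughput in MB/s."""
    start = time.perf_counter()
    zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    return (len(data) / 1e6) / elapsed

payload = b"phoronix " * 1_000_000  # ~9 MB of highly compressible input
print(round(compression_speed_mb_s(payload), 1), "MB/s")
```

Real runs, like those below, repeat the measurement N times and report the mean with its standard error.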

Zstd Compression, Compression Level: 3, Long Mode - Compression Speed (MB/s, more is better; OpenBenchmarking.org):
- CentOS Stream 9: 281.0 (SE +/- 4.03, N = 3; Min: 273 / Avg: 281.03 / Max: 285.7)
- Clear Linux 36990: 969.1 (SE +/- 7.62, N = 15; Min: 919.4 / Avg: 969.07 / Max: 1029.1)
- Ubuntu 20.04.1 LTS: 102.4 (SE +/- 0.10, N = 3; Min: 102.3 / Avg: 102.4 / Max: 102.6)
zstd binaries (zstd command line interface 64-bits, by Yann Collet): CentOS Stream 9: v1.5.1; Clear Linux 36990: v1.5.2; Ubuntu 20.04.1 LTS: v1.4.8.
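The SE +/- values attached to each average are standard errors of the mean over the N recorded runs. A minimal sketch of that computation (the sample values here are illustrative, not the benchmark's raw samples):

```python
import math
import statistics

def standard_error(samples):
    """Standard error of the mean: sample standard deviation / sqrt(N)."""
    return statistics.stdev(samples) / math.sqrt(len(samples))

runs = [273.0, 281.0, 285.7]  # illustrative MB/s readings from three runs
print("mean:", round(statistics.mean(runs), 2))
print("SE +/-", round(standard_error(runs), 2), "N =", len(runs))
```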

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14, Test: NUMA (Bogo Ops/s, more is better; OpenBenchmarking.org):
- CentOS Stream 9: 10.37 (SE +/- 0.02, N = 3; Min: 10.34 / Avg: 10.37 / Max: 10.41)
- Clear Linux 36990: 5.21 (SE +/- 0.01, N = 3; Min: 5.19 / Avg: 5.21 / Max: 5.22)
- Ubuntu 20.04.1 LTS: 42.05 (SE +/- 0.12, N = 3; Min: 41.88 / Avg: 42.05 / Max: 42.28)
Build notes (per-system attribution unclear in this export): -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -latomic; -lapparmor -latomic. 1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Zstd Compression

This test measures the time needed to compress/decompress a sample input file using Zstd compression, supplied by the system or otherwise external to the test profile. Learn more via the OpenBenchmarking.org test page.

Zstd Compression, Compression Level: 8, Long Mode - Compression Speed (MB/s, more is better; OpenBenchmarking.org):
- CentOS Stream 9: 307.5 (SE +/- 0.78, N = 3; Min: 306.2 / Avg: 307.53 / Max: 308.9)
- Clear Linux 36990: 995.9 (SE +/- 3.84, N = 3; Min: 990.4 / Avg: 995.93 / Max: 1003.3)
- Ubuntu 20.04.1 LTS: 127.9 (SE +/- 0.15, N = 3; Min: 127.7 / Avg: 127.9 / Max: 128.2)
zstd binaries (zstd command line interface 64-bits, by Yann Collet): CentOS Stream 9: v1.5.1; Clear Linux 36990: v1.5.2; Ubuntu 20.04.1 LTS: v1.4.8.

PostgreSQL pgbench

This is a benchmark of PostgreSQL, using pgbench to drive the database workloads. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 14.0, Scaling Factor: 100 - Clients: 500 - Mode: Read Write - Average Latency (ms, fewer is better; OpenBenchmarking.org):
- CentOS Stream 9: 26.724 (SE +/- 0.047, N = 3; Min: 26.66 / Avg: 26.72 / Max: 26.82)
- Clear Linux 36990: 5.828 (SE +/- 0.008, N = 3; Min: 5.81 / Avg: 5.83 / Max: 5.84)
- Ubuntu 20.04.1 LTS: 16.881 (SE +/- 0.005, N = 3; Min: 16.88 / Avg: 16.88 / Max: 16.89)
Per-system compiler flags (in run order): CentOS Stream 9: -O2; Clear Linux 36990: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop; Ubuntu 20.04.1 LTS: -O2. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -lpgcommon -lpgport -lpq -lm

PostgreSQL pgbench 14.0, Scaling Factor: 100 - Clients: 500 - Mode: Read Write (TPS, more is better; OpenBenchmarking.org):
- CentOS Stream 9: 18710 (SE +/- 32.56, N = 3; Min: 18646.54 / Avg: 18709.95 / Max: 18754.52)
- Clear Linux 36990: 85792 (SE +/- 119.89, N = 3; Min: 85631.08 / Avg: 85791.56 / Max: 86026.08)
- Ubuntu 20.04.1 LTS: 29620 (SE +/- 8.12, N = 3; Min: 29603.65 / Avg: 29619.52 / Max: 29630.45)
Per-system compiler flags (in run order): CentOS Stream 9: -O2; Clear Linux 36990: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop; Ubuntu 20.04.1 LTS: -O2. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -lpgcommon -lpgport -lpq -lm
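With pgbench's closed-loop client model, average latency and TPS are near-reciprocal views of the same throughput: TPS is approximately clients / average latency. A quick sanity check against the 500-client read-write figures above (an illustration, not part of the benchmark itself):

```python
def tps_from_latency(clients, avg_latency_ms):
    """Approximate transactions/sec when `clients` connections each spend
    avg_latency_ms per transaction (closed-loop model)."""
    return clients / (avg_latency_ms / 1000.0)

# Average latencies reported in this result file for 500 clients, read-write mode.
for name, latency_ms in [("CentOS Stream 9", 26.724),
                         ("Clear Linux 36990", 5.828),
                         ("Ubuntu 20.04.1 LTS", 16.881)]:
    print(name, "~", round(tps_from_latency(500, latency_ms)), "TPS")
```

The estimates land within a transaction or two of the reported 18710 / 85792 / 29620 TPS.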

Apache Spark

This is a benchmark of Apache Spark with its PySpark interface. Apache Spark is an open-source unified analytics engine for large-scale data processing and dealing with big data. This test profile benchmarks Apache Spark in a single-system configuration using spark-submit. The test makes use of DIYBigData's pyspark-benchmark (https://github.com/DIYBigData/pyspark-benchmark/) for generating test data and running various Apache Spark operations. Learn more via the OpenBenchmarking.org test page.

Apache Spark 3.3, Row Count: 40000000 - Partitions: 500 - SHA-512 Benchmark Time (Seconds, fewer is better; OpenBenchmarking.org):
- CentOS Stream 9: 88.83 (SE +/- 0.53, N = 3; Min: 87.98 / Avg: 88.83 / Max: 89.81)
- Clear Linux 36990: 20.21 (SE +/- 0.19, N = 15; Min: 18.6 / Avg: 20.21 / Max: 21.72)
- Ubuntu 20.04.1 LTS: 90.14 (SE +/- 0.76, N = 3; Min: 88.78 / Avg: 90.14 / Max: 91.39)
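The SHA-512 benchmark time above is dominated by hashing each generated row. The core per-record operation can be sketched with Python's hashlib (the row count and contents here are illustrative, far smaller than the benchmark's 40,000,000 rows, and this omits Spark's distributed execution):

```python
import hashlib
import time

def hash_rows(rows):
    """SHA-512 each row of bytes, the per-record work of the SHA-512 benchmark."""
    return [hashlib.sha512(row).hexdigest() for row in rows]

rows = [f"row-{i}".encode() for i in range(100_000)]
start = time.perf_counter()
digests = hash_rows(rows)
print(len(digests), "rows hashed in", round(time.perf_counter() - start, 3), "s")
```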

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1, Java Test: H2 (msec, fewer is better; OpenBenchmarking.org):
- CentOS Stream 9: 9847 (SE +/- 54.95, N = 4; Min: 9761 / Avg: 9847.25 / Max: 10006)
- Clear Linux 36990: 2453 (SE +/- 29.40, N = 4; Min: 2416 / Avg: 2453.25 / Max: 2541)
- Ubuntu 20.04.1 LTS: 10681 (SE +/- 97.08, N = 4; Min: 10438 / Avg: 10680.75 / Max: 10910)

PostgreSQL pgbench

This is a benchmark of PostgreSQL, using pgbench to drive the database workloads. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 14.0, Scaling Factor: 100 - Clients: 250 - Mode: Read Write (TPS, more is better; OpenBenchmarking.org):
- CentOS Stream 9: 20745 (SE +/- 21.44, N = 3; Min: 20713.98 / Avg: 20744.88 / Max: 20786.09)
- Clear Linux 36990: 87357 (SE +/- 149.93, N = 3; Min: 87081.55 / Avg: 87357.07 / Max: 87597.32)
- Ubuntu 20.04.1 LTS: 34151 (SE +/- 96.97, N = 3; Min: 34011.12 / Avg: 34150.96 / Max: 34337.26)
Per-system compiler flags (in run order): CentOS Stream 9: -O2; Clear Linux 36990: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop; Ubuntu 20.04.1 LTS: -O2. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -lpgcommon -lpgport -lpq -lm

PostgreSQL pgbench 14.0, Scaling Factor: 100 - Clients: 250 - Mode: Read Write - Average Latency (ms, fewer is better; OpenBenchmarking.org):
- CentOS Stream 9: 12.051 (SE +/- 0.012, N = 3; Min: 12.03 / Avg: 12.05 / Max: 12.07)
- Clear Linux 36990: 2.862 (SE +/- 0.005, N = 3; Min: 2.85 / Avg: 2.86 / Max: 2.87)
- Ubuntu 20.04.1 LTS: 7.321 (SE +/- 0.021, N = 3; Min: 7.28 / Avg: 7.32 / Max: 7.35)
Per-system compiler flags (in run order): CentOS Stream 9: -O2; Clear Linux 36990: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop; Ubuntu 20.04.1 LTS: -O2. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -lpgcommon -lpgport -lpq -lm

C-Blosc

C-Blosc (c-blosc2) is a simple, compressed, fast and persistent data store library for C. Learn more via the OpenBenchmarking.org test page.

C-Blosc 2.3, Test: blosclz shuffle (MB/s, more is better; OpenBenchmarking.org):
- CentOS Stream 9: 4916.7 (SE +/- 23.45, N = 3; Min: 4886.1 / Avg: 4916.73 / Max: 4962.8)
- Clear Linux 36990: 17905.8 (SE +/- 153.72, N = 15; Min: 17321.5 / Avg: 17905.79 / Max: 19245.5)
- Ubuntu 20.04.1 LTS: 4279.5 (SE +/- 18.38, N = 3; Min: 4249.1 / Avg: 4279.47 / Max: 4312.6)
1. (CC) gcc options: -std=gnu99 -O3 -lrt -lm

C-Blosc 2.3, Test: blosclz bitshuffle (MB/s, more is better; OpenBenchmarking.org):
- CentOS Stream 9: 3704.1 (SE +/- 6.11, N = 3; Min: 3692.6 / Avg: 3704.13 / Max: 3713.4)
- Clear Linux 36990: 12603.1 (SE +/- 20.65, N = 3; Min: 12564.5 / Avg: 12603.1 / Max: 12635.1)
- Ubuntu 20.04.1 LTS: 3193.0 (SE +/- 7.59, N = 3; Min: 3181.3 / Avg: 3192.97 / Max: 3207.2)
1. (CC) gcc options: -std=gnu99 -O3 -lrt -lm

Dragonflydb

Dragonfly is an open-source database server positioned as a "modern Redis replacement," aiming to be the fastest in-memory store while remaining compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, memtier_benchmark is used: a NoSQL Redis/Memcache traffic-generation and benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

Dragonflydb 0.6, Clients: 50 - Set To Get Ratio: 1:5 (Ops/sec, more is better; OpenBenchmarking.org):
- Clear Linux 36990: 4236085.83 (SE +/- 12346.26, N = 3; Min: 4216739.02 / Avg: 4236085.83 / Max: 4259047.04)
- Ubuntu 20.04.1 LTS: 1315146.42 (SE +/- 4729.77, N = 3; Min: 1307480.46 / Avg: 1315146.42 / Max: 1323779.08)
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

OpenBenchmarking.orgOps/sec, More Is BetterDragonflydb 0.6Clients: 50 - Set To Get Ratio: 1:1Clear Linux 36990Ubuntu 20.04.1 LTS900K1800K2700K3600K4500KSE +/- 10042.07, N = 3SE +/- 595.66, N = 34003419.311243856.631. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.orgOps/sec, More Is BetterDragonflydb 0.6Clients: 50 - Set To Get Ratio: 1:1Clear Linux 36990Ubuntu 20.04.1 LTS700K1400K2100K2800K3500KMin: 3988181.35 / Avg: 4003419.31 / Max: 4022368.9Min: 1242807.38 / Avg: 1243856.63 / Max: 1244869.851. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

OpenBenchmarking.orgOps/sec, More Is BetterDragonflydb 0.6Clients: 50 - Set To Get Ratio: 5:1Clear Linux 36990Ubuntu 20.04.1 LTS800K1600K2400K3200K4000KSE +/- 16274.57, N = 3SE +/- 3107.17, N = 33826220.781209586.381. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.orgOps/sec, More Is BetterDragonflydb 0.6Clients: 50 - Set To Get Ratio: 5:1Clear Linux 36990Ubuntu 20.04.1 LTS700K1400K2100K2800K3500KMin: 3808157.35 / Avg: 3826220.78 / Max: 3858701.78Min: 1205631.97 / Avg: 1209586.38 / Max: 1215715.151. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Natron

Natron is an open-source, cross-platform compositing software for visual effects (VFX) and motion graphics. Learn more via the OpenBenchmarking.org test page.

Natron 2.4.3 - Input: Spaceship (FPS, more is better)
  CentOS Stream 9:    1.9 (SE +/- 0.01, N = 15; Min 1.8 / Max 2.0)
  Clear Linux 36990:  5.0 (SE +/- 0.03, N = 3; Min 5.0 / Max 5.1)
  Ubuntu 20.04.1 LTS: 1.6 (SE +/- 0.00, N = 3; Min 1.6 / Max 1.6)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: Context Switching (Bogo Ops/s, more is better)
  CentOS Stream 9:     6233126.45 (SE +/- 78706.86, N = 3; Min 6075749.87 / Max 6314776.19)
  Clear Linux 36990:  14924848.44 (SE +/- 96811.45, N = 15; Min 14406871.62 / Max 15897148.7)
  Ubuntu 20.04.1 LTS:  5371747.81 (SE +/- 37051.92, N = 3; Min 5300098.06 / Max 5423951.6)
  Notes (per-system build flags, as reported): -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -latomic; -lapparmor -latomic
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Renaissance

Renaissance is a suite of benchmarks designed to exercise the Java JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Savina Reactors.IO (ms, fewer is better)
  CentOS Stream 9:    21219.4 (SE +/- 296.93, N = 3; run Min 20627.9 / Max 21561.04; overall MIN 20627.9 / MAX 32602.9)
  Clear Linux 36990:   8256.6 (SE +/- 56.26, N = 15; run Min 7799.33 / Max 8639; overall MIN 7799.33 / MAX 12715.24)
  Ubuntu 20.04.1 LTS: 21545.7 (SE +/- 209.69, N = 6; run Min 20986.56 / Max 22331.64; overall MIN 20986.56 / MAX 35348.39)

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: super-resolution-10 - Device: CPU - Executor: Parallel (Inferences Per Minute, more is better)
  CentOS Stream 9:    3259 (SE +/- 4.91, N = 3; Min 3250.5 / Max 3267.5)
  Clear Linux 36990:  8024 (SE +/- 8.23, N = 3; Min 8009.5 / Max 8038)
  Ubuntu 20.04.1 LTS: 3133 (SE +/- 10.50, N = 3; Min 3116 / Max 3152)
  Notes (per-system build flags, as reported): -flto; -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -flto=auto; -flto
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -fno-fat-lto-objects -ldl -lrt

x264

This is a multi-threaded test of the x264 video encoder run on the CPU with a choice of 1080p or 4K video input. Learn more via the OpenBenchmarking.org test page.

x264 2022-02-22 - Video Input: Bosphorus 4K (Frames Per Second, more is better)
  CentOS Stream 9:    34.57 (SE +/- 0.49, N = 15; Min 31.96 / Max 38.56)
  Clear Linux 36990:  82.43 (SE +/- 1.11, N = 3; Min 81.02 / Max 84.61)
  Ubuntu 20.04.1 LTS: 32.41 (SE +/- 0.35, N = 3; Min 31.98 / Max 33.1)
  Notes (as reported): -lavformat -lavcodec -lavutil -lswscale
  1. (CC) gcc options: -ldl -m64 -lm -lpthread -O3 -flto
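Cross-distribution throughput results such as the FPS figures above are easiest to read as speedup ratios. A trivial sketch, using the x264 Bosphorus 4K averages reported above:

```python
def speedup(baseline: float, contender: float) -> float:
    """Ratio of contender throughput to baseline throughput."""
    return contender / baseline

# Clear Linux 36990 vs. Ubuntu 20.04.1 LTS, x264 Bosphorus 4K averages
ratio = speedup(32.41, 82.43)  # roughly 2.5x
```

Note that ratios like this only make sense for "more is better" metrics; for "fewer is better" timings the arguments should be swapped.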

Renaissance

Renaissance 0.14 - Test: Apache Spark Bayes (ms, fewer is better)
  CentOS Stream 9:    1075.3 (SE +/- 11.23, N = 3; run Min 1052.97 / Max 1088.85; overall MIN 628.33 / MAX 1551.11)
  Clear Linux 36990:   479.0 (SE +/- 1.10, N = 3; run Min 477.72 / Max 481.21; overall MIN 329.34 / MAX 719.74)
  Ubuntu 20.04.1 LTS: 1132.3 (SE +/- 14.06, N = 15; run Min 1046.53 / Max 1266.11; overall MIN 559.12 / MAX 2166.29)

Renaissance 0.14 - Test: Random Forest (ms, fewer is better)
  CentOS Stream 9:    1455.2 (SE +/- 6.85, N = 3; run Min 1446.76 / Max 1468.73; overall MIN 1315.52 / MAX 1806.24)
  Clear Linux 36990:   668.3 (SE +/- 6.83, N = 3; run Min 659.48 / Max 681.76; overall MIN 620.85 / MAX 782.18)
  Ubuntu 20.04.1 LTS: 1531.5 (SE +/- 17.25, N = 3; run Min 1505.36 / Max 1564.05; overall MIN 1322.46 / MAX 1932.51)

Renaissance 0.14 - Test: ALS Movie Lens (ms, fewer is better)
  CentOS Stream 9:    17123.9 (SE +/- 73.46, N = 3; run Min 16999.98 / Max 17254.2; overall MIN 16240.16 / MAX 19195.87)
  Clear Linux 36990:   8225.4 (SE +/- 28.18, N = 3; run Min 8194.03 / Max 8281.65; overall MIN 8194.03 / MAX 9046.37)
  Ubuntu 20.04.1 LTS: 18654.1 (SE +/- 88.50, N = 3; run Min 18482.22 / Max 18776.54; overall MIN 18482.22 / MAX 21069.29)

Stress-NG

Stress-NG 0.14 - Test: Semaphores (Bogo Ops/s, more is better)
  CentOS Stream 9:     7186364.51 (SE +/- 27158.37, N = 3; Min 7157718.18 / Max 7240653.56)
  Clear Linux 36990:  15903941.88 (SE +/- 5152.27, N = 3; Min 15897756.79 / Max 15914172.09)
  Ubuntu 20.04.1 LTS:  7249112.24 (SE +/- 17371.47, N = 3; Min 7217747.64 / Max 7277736.75)
  Notes (per-system build flags, as reported): -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -latomic; -lapparmor -latomic
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 12 - Input: Bosphorus 4K (Frames Per Second, more is better)
  CentOS Stream 9:     92.83 (SE +/- 0.83, N = 3; Min 91.74 / Max 94.47)
  Clear Linux 36990:  195.93 (SE +/- 1.35, N = 3; Min 193.59 / Max 198.25)
  Ubuntu 20.04.1 LTS:  92.73 (SE +/- 0.91, N = 3; Min 90.92 / Max 93.79)
  Notes (build flags, as reported): -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network inference, using its built-in benchmarking support and analyzing the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Person Detection FP16 - Device: CPU (ms, fewer is better)
  CentOS Stream 9:    1424.57 (SE +/- 0.41, N = 3; run Min 1423.89 / Max 1425.32; overall MIN 1046.08 / MAX 1657.29)
  Ubuntu 20.04.1 LTS: 2990.50 (SE +/- 2.57, N = 3; run Min 2987.52 / Max 2995.62; overall MIN 1585.88 / MAX 4320.22)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.2.dev - Model: Person Detection FP32 - Device: CPU (ms, fewer is better)
  CentOS Stream 9:    1451.62 (SE +/- 1.03, N = 3; run Min 1450.34 / Max 1453.66; overall MIN 1039.96 / MAX 1708.95)
  Ubuntu 20.04.1 LTS: 3027.38 (SE +/- 8.96, N = 3; run Min 3015.8 / Max 3045; overall MIN 1576.6 / MAX 4000.36)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.10.0 - Speed: Speed 0 - Input: Bosphorus 4K (Frames Per Second, more is better)
  CentOS Stream 9:    3.04 (SE +/- 0.02, N = 3; Min 3.02 / Max 3.07)
  Clear Linux 36990:  6.15 (SE +/- 0.02, N = 3; Min 6.13 / Max 6.2)
  Ubuntu 20.04.1 LTS: 2.95 (SE +/- 0.03, N = 3; Min 2.89 / Max 2.98)
  Notes (per-system build flags, as reported): -U_FORTIFY_SOURCE; -pipe -fexceptions -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop; -U_FORTIFY_SOURCE
  1. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -std=gnu++11

OpenVINO

OpenVINO 2022.2.dev - Model: Face Detection FP16 - Device: CPU (ms, fewer is better)
  CentOS Stream 9:     819.63 (SE +/- 0.67, N = 3; run Min 818.48 / Max 820.81; overall MIN 519.3 / MAX 967.18)
  Ubuntu 20.04.1 LTS: 1684.39 (SE +/- 4.60, N = 3; run Min 1675.85 / Max 1691.62; overall MIN 1122.36 / MAX 2167.69)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16 - Device: CPU (ms, fewer is better)
  CentOS Stream 9:    18.67 (SE +/- 0.29, N = 12; run Min 18.27 / Max 21.83; overall MIN 11.54 / MAX 79.43)
  Ubuntu 20.04.1 LTS: 38.04 (SE +/- 0.02, N = 3; run Min 38.01 / Max 38.09; overall MIN 17.97 / MAX 246.5)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 13.0 - Build System: Ninja (Seconds, fewer is better)
  CentOS Stream 9:    135.02 (SE +/- 0.23, N = 3; Min 134.59 / Max 135.39)
  Clear Linux 36990:  267.39 (SE +/- 0.26, N = 3; Min 266.87 / Max 267.76)
  Ubuntu 20.04.1 LTS: 131.70 (SE +/- 0.42, N = 3; Min 131.04 / Max 132.48)

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 10 - Input: Bosphorus 4K (Frames Per Second, more is better)
  CentOS Stream 9:     65.62 (SE +/- 0.26, N = 3; Min 65.2 / Max 66.09)
  Clear Linux 36990:  132.87 (SE +/- 0.37, N = 3; Min 132.14 / Max 133.38)
  Ubuntu 20.04.1 LTS:  65.84 (SE +/- 0.44, N = 3; Min 65.31 / Max 66.71)
  Notes (build flags, as reported): -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

OpenVINO

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16-INT8 - Device: CPU (ms, fewer is better)
  CentOS Stream 9:    4.52 (SE +/- 0.00, N = 3; run Min 4.52 / Max 4.52; overall MIN 4.11 / MAX 44.74)
  Ubuntu 20.04.1 LTS: 8.89 (SE +/- 0.01, N = 3; run Min 8.87 / Max 8.91; overall MIN 5.1 / MAX 89.93)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.2.dev - Model: Face Detection FP16-INT8 - Device: CPU (ms, fewer is better)
  CentOS Stream 9:    239.86 (SE +/- 0.22, N = 3; run Min 239.42 / Max 240.14; overall MIN 178.86 / MAX 348.97)
  Ubuntu 20.04.1 LTS: 469.36 (SE +/- 0.72, N = 3; run Min 468.6 / Max 470.81; overall MIN 244.15 / MAX 801.61)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38 - Operation: Noise-Gaussian (Iterations Per Minute, more is better)
  CentOS Stream 9:    738 (SE +/- 0.88, N = 3; Min 736 / Max 739)
  Clear Linux 36990:  994 (SE +/- 3.93, N = 3; Min 986 / Max 999)
  Ubuntu 20.04.1 LTS: 513 (SE +/- 7.66, N = 12; Min 451 / Max 557)
  Notes (per-system build flags, as reported): -O2 -lbz2; -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -ltiff -lfreetype -lSM -lICE -lbz2 -lxml2; -O2 -ljbig -ltiff -lfreetype -lSM -lICE -llzma
  1. (CC) gcc options: -fopenmp -ljpeg -lXext -lX11 -lz -lm -lpthread

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, more is better)
  CentOS Stream 9:    38.68 (SE +/- 0.30, N = 3; Min 38.2 / Max 39.23)
  Clear Linux 36990:  73.19 (SE +/- 0.40, N = 3; Min 72.78 / Max 73.98)
  Ubuntu 20.04.1 LTS: 38.33 (SE +/- 0.09, N = 3; Min 38.15 / Max 38.43)
  Notes (build flags, as reported): -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

OpenVINO

OpenVINO 2022.2.dev - Model: Machine Translation EN To DE FP16 - Device: CPU (ms, fewer is better)
  CentOS Stream 9:     85.47 (SE +/- 0.18, N = 3; run Min 85.1 / Max 85.68; overall MIN 76.11 / MAX 195.12)
  Ubuntu 20.04.1 LTS: 160.42 (SE +/- 1.44, N = 3; run Min 157.58 / Max 162.24; overall MIN 57.35 / MAX 1709)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10 - Encoder Speed: 6 (Seconds, fewer is better)
  CentOS Stream 9:    6.056 (SE +/- 0.037, N = 3; Min 6.01 / Max 6.13)
  Clear Linux 36990:  3.463 (SE +/- 0.026, N = 3; Min 3.44 / Max 3.52)
  Ubuntu 20.04.1 LTS: 6.406 (SE +/- 0.082, N = 15; Min 5.92 / Max 6.74)
  Notes (build flags, as reported): -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -O3 -fPIC -lm

SVT-AV1

SVT-AV1 1.2 - Encoder Mode: Preset 4 - Input: Bosphorus 4K (Frames Per Second, more is better)
  CentOS Stream 9:    1.327 (SE +/- 0.001, N = 3; Min 1.33 / Max 1.33)
  Clear Linux 36990:  2.351 (SE +/- 0.003, N = 3; Min 2.35 / Max 2.36)
  Ubuntu 20.04.1 LTS: 1.292 (SE +/- 0.008, N = 3; Min 1.28 / Max 1.31)
  Notes (build flags, as reported): -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

ClickHouse

ClickHouse is an open-source, high-performance OLAP data management system. This test profile follows ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ with the 100 million row web analytics dataset. The reported value is the geometric mean of the query processing rates across all queries performed. Learn more via the OpenBenchmarking.org test page.
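Because the reported ClickHouse value is a geometric mean, one exceptionally fast or slow query influences the figure far less than it would an arithmetic mean. A minimal sketch of that aggregation (the per-query rates are illustrative, not from this result file):

```python
import math

def geometric_mean(values):
    """n-th root of the product, computed via logarithms for numerical stability."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

per_query_rates = [120.0, 480.0, 240.0]  # hypothetical queries-per-minute figures
overall = geometric_mean(per_query_rates)  # 240.0
```

This is why the per-query MIN/MAX figures below can span several orders of magnitude while the headline numbers stay in the low hundreds.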

ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, First Run / Cold Cache (Queries Per Minute, Geo Mean, more is better)
  CentOS Stream 9:    231.48 (SE +/- 2.21, N = 15; run Min 212.68 / Max 244.77; per-query MIN 41.47 / MAX 5454.55)
  Clear Linux 36990:  386.79 (SE +/- 5.14, N = 12; run Min 363.65 / Max 416.95; per-query MIN 51.15 / MAX 20000)
  Ubuntu 20.04.1 LTS: 214.26 (SE +/- 2.32, N = 3; run Min 211.04 / Max 218.77; per-query MIN 36.76 / MAX 2727.27)
  1. ClickHouse server version 22.5.4.19 (official build).

ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, Second Run (Queries Per Minute, Geo Mean, more is better)
  CentOS Stream 9:    244.38 (SE +/- 1.48, N = 15; run Min 236.34 / Max 255.26; per-query MIN 44.09 / MAX 5454.55)
  Clear Linux 36990:  400.44 (SE +/- 5.40, N = 12; run Min 375.33 / Max 433.41; per-query MIN 53.29 / MAX 20000)
  Ubuntu 20.04.1 LTS: 223.34 (SE +/- 5.12, N = 3; run Min 216.18 / Max 233.25; per-query MIN 43.48 / MAX 5454.55)
  1. ClickHouse server version 22.5.4.19 (official build).

libavif avifenc

libavif avifenc 0.10 - Encoder Speed: 6, Lossless (Seconds, fewer is better)
  CentOS Stream 9:     9.260 (SE +/- 0.070, N = 15; Min 8.96 / Max 9.79)
  Clear Linux 36990:   6.174 (SE +/- 0.021, N = 3; Min 6.14 / Max 6.21)
  Ubuntu 20.04.1 LTS: 11.036 (SE +/- 0.108, N = 15; Min 10 / Max 11.57)
  Notes (build flags, as reported): -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -O3 -fPIC -lm

ClickHouse

ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, Third Run (Queries Per Minute, Geo Mean, more is better)
  CentOS Stream 9:    243.95 (SE +/- 1.95, N = 15; run Min 233.04 / Max 257.32; per-query MIN 42.11 / MAX 6000)
  Clear Linux 36990:  400.32 (SE +/- 5.35, N = 12; run Min 374.42 / Max 434.04; per-query MIN 54.25 / MAX 20000)
  Ubuntu 20.04.1 LTS: 225.18 (SE +/- 2.99, N = 3; run Min 219.64 / Max 229.89; per-query MIN 37.43 / MAX 5454.55)
  1. ClickHouse server version 22.5.4.19 (official build).

libavif avifenc

libavif avifenc 0.10 - Encoder Speed: 10, Lossless (Seconds, fewer is better)
  CentOS Stream 9:    6.605 (SE +/- 0.073, N = 15; Min 6.11 / Max 6.91)
  Clear Linux 36990:  4.690 (SE +/- 0.012, N = 3; Min 4.67 / Max 4.71)
  Ubuntu 20.04.1 LTS: 8.101 (SE +/- 0.096, N = 15; Min 7.4 / Max 8.82)
  Notes (build flags, as reported): -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -O3 -fPIC -lm

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 10 - Input: Bosphorus 4K (Frames Per Second, more is better)
  CentOS Stream 9:    113.23 (SE +/- 1.63, N = 3; Min 110.27 / Max 115.9)
  Clear Linux 36990:  192.06 (SE +/- 2.32, N = 4; Min 187.09 / Max 198.22)
  Ubuntu 20.04.1 LTS: 111.55 (SE +/- 0.82, N = 15; Min 107.14 / Max 117.05)
  Notes (build flags, as reported): -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

SVT-HEVC 1.5.0 - Tuning: 7 - Input: Bosphorus 4K (Frames Per Second, more is better)
  CentOS Stream 9:     86.67 (SE +/- 1.08, N = 4; Min 83.67 / Max 88.78)
  Clear Linux 36990:  147.63 (SE +/- 0.82, N = 3; Min 146.16 / Max 148.99)
  Ubuntu 20.04.1 LTS:  87.77 (SE +/- 1.10, N = 3; Min 85.7 / Max 89.43)
  Notes (build flags, as reported): -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10 - Benchmark: particle_volume/pathtracer/real_time (Items Per Second, more is better)
  CentOS Stream 9:    100.67 (SE +/- 0.72, N = 3; Min 99.77 / Max 102.09)
  Clear Linux 36990:  161.93 (SE +/- 0.68, N = 3; Min 160.97 / Max 163.24)
  Ubuntu 20.04.1 LTS:  98.66 (SE +/- 0.28, N = 3; Min 98.12 / Max 99.07)

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

Redis 7.0.4 - Test: GET - Parallel Connections: 50 (Requests Per Second, more is better)
  CentOS Stream 9:    2284227.20 (SE +/- 2019.92, N = 3; Min 2280646.5 / Max 2287637.5)
  Clear Linux 36990:  3512867.33 (SE +/- 3266.10, N = 3; Min 3506374.25 / Max 3516732)
  Ubuntu 20.04.1 LTS: 2588694.17 (SE +/- 34045.62, N = 3; Min 2534642.75 / Max 2651582.75)
  Notes (build flags, as reported): -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: Visual Quality Optimized - Input: Bosphorus 4K (Frames Per Second, more is better)
  CentOS Stream 9:     99.27 (SE +/- 1.23, N = 3; Min 96.81 / Max 100.63)
  Clear Linux 36990:  148.91 (SE +/- 1.00, N = 3; Min 147.87 / Max 150.9)
  Ubuntu 20.04.1 LTS: 101.72 (SE +/- 0.31, N = 3; Min 101.16 / Max 102.22)
  Notes (build flags, as reported): -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

SVT-VP9 0.3 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 4K (Frames Per Second, more is better)
  CentOS Stream 9:    115.50 (SE +/- 1.37, N = 4; Min 113.24 / Max 119.42)
  Clear Linux 36990:  163.87 (SE +/- 0.92, N = 3; Min 162.6 / Max 165.65)
  Ubuntu 20.04.1 LTS: 115.32 (SE +/- 0.65, N = 3; Min 114.56 / Max 116.6)
  Notes (build flags, as reported): -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

SVT-VP9 0.3 - Tuning: VMAF Optimized - Input: Bosphorus 4K (Frames Per Second, more is better)
  CentOS Stream 9:    112.93 (SE +/- 0.07, N = 3; Min 112.85 / Max 113.07)
  Clear Linux 36990:  153.77 (SE +/- 2.02, N = 3; Min 150 / Max 156.91)
  Ubuntu 20.04.1 LTS: 111.61 (SE +/- 0.35, N = 3; Min 111 / Max 112.21)
  Notes (build flags, as reported): -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 22.01 - Test: Compression Rating
MIPS, More Is Better
  CentOS Stream 9:    467866 (SE +/- 5624.44, N = 3; Min: 456744 / Max: 474886)
  Clear Linux 36990:  473026 (SE +/- 507.78, N = 3; Min: 472060 / Max: 473780)
  Ubuntu 20.04.1 LTS: 345725 (SE +/- 3782.18, N = 4; Min: 335003 / Max: 352618)
1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38 - Operation: Rotate
Iterations Per Minute, More Is Better
  CentOS Stream 9:    1030 (SE +/- 7.69, N = 15; Min: 948 / Max: 1062)
  Clear Linux 36990:  1029 (SE +/- 10.17, N = 3; Min: 1009 / Max: 1043)
  Ubuntu 20.04.1 LTS: 759 (SE +/- 4.18, N = 3; Min: 754 / Max: 767)
Build notes: CentOS Stream 9: -O2 -lbz2; Clear Linux 36990: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -ltiff -lfreetype -lSM -lICE -lbz2 -lxml2; Ubuntu 20.04.1 LTS: -O2 -ljbig -ltiff -lfreetype -lSM -lICE -llzma
1. (CC) gcc options: -fopenmp -ljpeg -lXext -lX11 -lz -lm -lpthread

GraphicsMagick 1.3.38 - Operation: Sharpen
Iterations Per Minute, More Is Better
  CentOS Stream 9:    641 (SE +/- 1.76, N = 3; Min: 638 / Max: 644)
  Clear Linux 36990:  869 (SE +/- 2.31, N = 3; Min: 865 / Max: 873)
  Ubuntu 20.04.1 LTS: 641 (SE +/- 0.67, N = 3; Min: 640 / Max: 642)
Build notes: CentOS Stream 9: -O2 -lbz2; Clear Linux 36990: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -ltiff -lfreetype -lSM -lICE -lbz2 -lxml2; Ubuntu 20.04.1 LTS: -O2 -ljbig -ltiff -lfreetype -lSM -lICE -llzma
1. (CC) gcc options: -fopenmp -ljpeg -lXext -lX11 -lz -lm -lpthread

GraphicsMagick 1.3.38 - Operation: Swirl
Iterations Per Minute, More Is Better
  CentOS Stream 9:    2340 (SE +/- 10.48, N = 3; Min: 2327 / Max: 2361)
  Clear Linux 36990:  2558 (SE +/- 14.50, N = 3; Min: 2543 / Max: 2587)
  Ubuntu 20.04.1 LTS: 1916 (SE +/- 5.67, N = 3; Min: 1905 / Max: 1922)
Build notes: CentOS Stream 9: -O2 -lbz2; Clear Linux 36990: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -ltiff -lfreetype -lSM -lICE -lbz2 -lxml2; Ubuntu 20.04.1 LTS: -O2 -ljbig -ltiff -lfreetype -lSM -lICE -llzma
1. (CC) gcc options: -fopenmp -ljpeg -lXext -lX11 -lz -lm -lpthread

Zstd Compression

This test measures the time needed to compress/decompress a sample input file using a Zstd binary supplied by the system or otherwise external to the test profile. Learn more via the OpenBenchmarking.org test page.

Zstd Compression - Compression Level: 8 - Compression Speed
MB/s, More Is Better
  CentOS Stream 9:    1244.0 (SE +/- 18.11, N = 12; Min: 1141.3 / Max: 1347.8)
  Clear Linux 36990:  1621.3 (SE +/- 15.83, N = 15; Min: 1521.9 / Max: 1702.3)
  Ubuntu 20.04.1 LTS: 1638.3 (SE +/- 17.15, N = 3; Min: 1613.4 / Max: 1671.2)
zstd versions: CentOS Stream 9 v1.5.1; Clear Linux 36990 v1.5.2; Ubuntu 20.04.1 LTS v1.4.8 (zstd command line interface 64-bits, by Yann Collet)
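Comparing systems in a result file like this usually comes down to relative throughput differences. A small sketch using the level-8 compression-speed averages quoted above:

```python
def percent_faster(baseline, contender):
    """Relative throughput advantage of contender over baseline, in percent."""
    return (contender / baseline - 1.0) * 100.0

# Average MB/s figures quoted above for level 8 compression speed.
centos, clear, ubuntu = 1244.0, 1621.3, 1638.3
print(f"Clear Linux vs CentOS: +{percent_faster(centos, clear):.1f}%")
print(f"Ubuntu vs CentOS:      +{percent_faster(centos, ubuntu):.1f}%")
```

Both contenders land roughly 30% ahead of CentOS Stream 9 on this particular workload, despite Ubuntu carrying the oldest zstd binary of the three.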

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: DenseNet
ms, Fewer Is Better
  CentOS Stream 9:    3955.05 (SE +/- 27.70, N = 3; Min: 3909.45 / Max: 4005.08) [MIN: 3833.99 / MAX: 5510.15]
  Clear Linux 36990:  3620.86 (SE +/- 1.46, N = 3; Min: 3618.21 / Max: 3623.25) [MIN: 3599.42 / MAX: 3730.48]
  Ubuntu 20.04.1 LTS: 4704.04 (SE +/- 40.05, N = 9; Min: 4587.41 / Max: 4901.58) [MIN: 3855.1 / MAX: 6393.88]
Build notes: Clear Linux 36990: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

Node.js V8 Web Tooling Benchmark

Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, such as Babel, TypeScript, and Babylon. This test profile can test the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.

Node.js V8 Web Tooling Benchmark
runs/s, More Is Better
  CentOS Stream 9:    10.55 (SE +/- 0.06, N = 3; Min: 10.44 / Max: 10.66)
  Clear Linux 36990:  13.56 (SE +/- 0.03, N = 3; Min: 13.51 / Max: 13.59)
  Ubuntu 20.04.1 LTS: 10.66 (SE +/- 0.08, N = 3; Min: 10.55 / Max: 10.81)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: IO_uring
Bogo Ops/s, More Is Better
  Clear Linux 36990:  10692920.63 (SE +/- 6299.40, N = 3; Min: 10681307.85 / Max: 10702958.44)
  Ubuntu 20.04.1 LTS: 8404895.30 (SE +/- 72129.89, N = 3; Min: 8261379.45 / Max: 8489324.69)
Build notes: Clear Linux 36990: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio; Ubuntu 20.04.1 LTS: -lapparmor
1. (CC) gcc options: -O2 -std=gnu99 -lm -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Zstd Compression

Zstd Compression - Compression Level: 19, Long Mode - Compression Speed
MB/s, More Is Better
  CentOS Stream 9:    43.4 (SE +/- 0.45, N = 5; Min: 42 / Max: 44.4)
  Clear Linux 36990:  48.8 (SE +/- 0.37, N = 3; Min: 48.1 / Max: 49.3)
  Ubuntu 20.04.1 LTS: 38.5 (SE +/- 0.15, N = 3; Min: 38.2 / Max: 38.7)
zstd versions: CentOS Stream 9 v1.5.1; Clear Linux 36990 v1.5.2; Ubuntu 20.04.1 LTS v1.4.8 (zstd command line interface 64-bits, by Yann Collet)

Zstd Compression - Compression Level: 3 - Compression Speed
MB/s, More Is Better
  CentOS Stream 9:    7026.1 (SE +/- 78.16, N = 3; Min: 6873.2 / Max: 7130.6)
  Clear Linux 36990:  6807.8 (SE +/- 4.17, N = 3; Min: 6803.2 / Max: 6816.1)
  Ubuntu 20.04.1 LTS: 5669.9 (SE +/- 86.44, N = 15; Min: 4937.4 / Max: 6074.1)
zstd versions: CentOS Stream 9 v1.5.1; Clear Linux 36990 v1.5.2; Ubuntu 20.04.1 LTS v1.4.8 (zstd command line interface 64-bits, by Yann Collet)

Zstd Compression - Compression Level: 19 - Compression Speed
MB/s, More Is Better
  CentOS Stream 9:    86.6 (SE +/- 0.52, N = 3; Min: 85.9 / Max: 87.6)
  Clear Linux 36990:  91.5 (SE +/- 0.48, N = 3; Min: 91 / Max: 92.5)
  Ubuntu 20.04.1 LTS: 74.0 (SE +/- 0.65, N = 15; Min: 68.8 / Max: 77.3)
zstd versions: CentOS Stream 9 v1.5.1; Clear Linux 36990 v1.5.2; Ubuntu 20.04.1 LTS v1.4.8 (zstd command line interface 64-bits, by Yann Collet)
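Composite scores on OpenBenchmarking.org result pages are typically geometric means of per-test ratios, which keeps a single outlier test from dominating the summary. A sketch of that calculation, using the Clear Linux vs CentOS average MB/s figures from three of the Zstd compression-speed results above as inputs:

```python
import math

def geometric_mean(ratios):
    """Geometric mean: nth root of the product, computed in log space for stability."""
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

# Clear Linux / CentOS ratios: level 19 long mode, level 3, level 19.
ratios = [48.8 / 43.4, 6807.8 / 7026.1, 91.5 / 86.6]
print(f"Geometric mean speedup: {geometric_mean(ratios):.3f}x")
```

Note how the level-3 result, where CentOS actually wins, pulls the composite back toward parity instead of being drowned out by the larger wins elsewhere.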

Stress-NG

Stress-NG 0.14 - Test: Memory Copying
Bogo Ops/s, More Is Better
  CentOS Stream 9:    12812.45 (SE +/- 5.23, N = 3; Min: 12802 / Max: 12817.98)
  Clear Linux 36990:  11244.94 (SE +/- 141.11, N = 3; Min: 11047.19 / Max: 11518.19)
  Ubuntu 20.04.1 LTS: 13427.14 (SE +/- 22.54, N = 3; Min: 13382.36 / Max: 13453.95)
Build notes: Clear Linux 36990: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -latomic; Ubuntu 20.04.1 LTS: -lapparmor -latomic
1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless
Encode Time - Seconds, Fewer Is Better
  CentOS Stream 9:    21.12 (SE +/- 0.17, N = 3; Min: 20.82 / Max: 21.39)
  Clear Linux 36990:  18.19 (SE +/- 0.08, N = 3; Min: 18.06 / Max: 18.35)
  Ubuntu 20.04.1 LTS: 21.60 (SE +/- 0.01, N = 3; Min: 21.59 / Max: 21.61)
Build notes: CentOS Stream 9: -O2; Clear Linux 36990: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -ltiff; Ubuntu 20.04.1 LTS: -O2 -ltiff
1. (CC) gcc options: -fvisibility=hidden -lm -lpng16 -ljpeg

Stress-NG

Stress-NG 0.14 - Test: Glibc C String Functions
Bogo Ops/s, More Is Better
  CentOS Stream 9:    9473078.17 (SE +/- 103735.47, N = 4; Min: 9162429.01 / Max: 9591018.69)
  Clear Linux 36990:  9658490.04 (SE +/- 83362.05, N = 8; Min: 9392447.16 / Max: 10016352.52)
  Ubuntu 20.04.1 LTS: 8173725.03 (SE +/- 26324.24, N = 3; Min: 8121159.15 / Max: 8202560.9)
Build notes: Clear Linux 36990: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -latomic; Ubuntu 20.04.1 LTS: -lapparmor -latomic
1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

Redis 7.0.4 - Test: GET - Parallel Connections: 1000
Requests Per Second, More Is Better
  CentOS Stream 9:    2406986.65 (SE +/- 26860.41, N = 5; Min: 2307812.75 / Max: 2470034)
  Clear Linux 36990:  2722264.50 (SE +/- 23267.85, N = 3; Min: 2675991.25 / Max: 2749675.25)
  Ubuntu 20.04.1 LTS: 2309354.80 (SE +/- 31288.67, N = 3; Min: 2272045 / Max: 2371517.5)
Build notes: Clear Linux 36990: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inference and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
Inferences Per Minute, More Is Better
  CentOS Stream 9:    1693 (SE +/- 3.09, N = 3; Min: 1687 / Max: 1696.5)
  Clear Linux 36990:  1978 (SE +/- 3.21, N = 3; Min: 1972 / Max: 1983)
  Ubuntu 20.04.1 LTS: 1682 (SE +/- 12.00, N = 3; Min: 1658 / Max: 1694)
Build notes: CentOS Stream 9: -flto; Clear Linux 36990: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -flto=auto; Ubuntu 20.04.1 LTS: -flto
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -fno-fat-lto-objects -ldl -lrt

Unpacking The Linux Kernel

This test measures how long it takes to extract the .tar.xz Linux kernel source tree package. Learn more via the OpenBenchmarking.org test page.

Unpacking The Linux Kernel 5.19 - linux-5.19.tar.xz
Seconds, Fewer Is Better
  CentOS Stream 9:    9.194 (SE +/- 0.116, N = 17; Min: 8.77 / Max: 10.78)
  Clear Linux 36990:  7.853 (SE +/- 0.002, N = 4; Min: 7.85 / Max: 7.86)
  Ubuntu 20.04.1 LTS: 8.888 (SE +/- 0.066, N = 20; Min: 8.7 / Max: 9.6)
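The unpack test above is essentially timing a .tar.xz extraction. A minimal, self-contained sketch of the same measurement, using a tiny synthetic archive as a stand-in for the real linux-5.19.tar.xz:

```python
import io
import tarfile
import tempfile
import time

def timed_extract(archive_bytes, dest):
    """Extract a .tar.xz held in memory and return the elapsed seconds."""
    start = time.perf_counter()
    with tarfile.open(fileobj=io.BytesIO(archive_bytes), mode="r:xz") as tar:
        tar.extractall(dest)
    return time.perf_counter() - start

# Build a tiny synthetic .tar.xz (the real test uses the ~130 MB kernel tarball).
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:xz") as tar:
    data = b"x" * 4096
    info = tarfile.TarInfo(name="kernel/stub.c")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))

with tempfile.TemporaryDirectory() as dest:
    elapsed = timed_extract(buf.getvalue(), dest)
    print(f"Extracted in {elapsed:.4f} s")
```

The real test profile shells out to tar rather than using Python, so this only illustrates what is being timed, not how the suite implements it.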

Stress-NG

Stress-NG 0.14 - Test: Malloc
Bogo Ops/s, More Is Better
  CentOS Stream 9:    306750258.84 (SE +/- 452266.97, N = 3; Min: 306128368.17 / Max: 307630040.92)
  Clear Linux 36990:  342573199.69 (SE +/- 1194169.90, N = 3; Min: 340555237.75 / Max: 344688524.84)
  Ubuntu 20.04.1 LTS: 292765838.99 (SE +/- 870345.53, N = 3; Min: 291033434.72 / Max: 293778961.48)
Build notes: Clear Linux 36990: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -latomic; Ubuntu 20.04.1 LTS: -lapparmor -latomic
1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Timed GDB GNU Debugger Compilation

This test times how long it takes to build the GNU Debugger (GDB) in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed GDB GNU Debugger Compilation 10.2 - Time To Compile
Seconds, Fewer Is Better
  CentOS Stream 9:    95.42 (SE +/- 0.27, N = 3; Min: 94.99 / Max: 95.92)
  Ubuntu 20.04.1 LTS: 81.63 (SE +/- 0.16, N = 3; Min: 81.35 / Max: 81.92)

Time To Compile

Clear Linux 36990: The test quit with a non-zero exit status. E: configure: error: C compiler cannot create executables

Stress-NG

Stress-NG 0.14 - Test: MMAP
Bogo Ops/s, More Is Better
  CentOS Stream 9:    3747.58 (SE +/- 34.11, N = 3; Min: 3679.42 / Max: 3784.23)
  Clear Linux 36990:  3336.53 (SE +/- 1.76, N = 3; Min: 3333.95 / Max: 3339.89)
  Ubuntu 20.04.1 LTS: 3888.56 (SE +/- 34.70, N = 3; Min: 3819.16 / Max: 3923.69)
Build notes: Clear Linux 36990: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -latomic; Ubuntu 20.04.1 LTS: -lapparmor -latomic
1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Apache HTTP Server

This is a test of the Apache HTTPD web server. This Apache HTTPD web server benchmark test profile makes use of the Golang "Bombardier" program for issuing HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.

Apache HTTP Server 2.4.48 - Concurrent Requests: 1000
Requests Per Second, More Is Better
  CentOS Stream 9:    131349.60 (SE +/- 1558.40, N = 15; Min: 113593.7 / Max: 135605.05)
  Clear Linux 36990:  118161.83 (SE +/- 760.56, N = 3; Min: 116952.09 / Max: 119565.29)
  Ubuntu 20.04.1 LTS: 137078.35 (SE +/- 1575.81, N = 3; Min: 134278.43 / Max: 139731.27)
Build notes: CentOS Stream 9: -O2; Clear Linux 36990: -O3 -m64 -mtune=skylake -mrelax-cmpxchg-loop; Ubuntu 20.04.1 LTS: -O2
1. (CC) gcc options: -shared -fPIC

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10 - Encoder Speed: 2
Seconds, Fewer Is Better
  CentOS Stream 9:    48.71 (SE +/- 0.53, N = 3; Min: 48.05 / Max: 49.77)
  Clear Linux 36990:  42.63 (SE +/- 0.10, N = 3; Min: 42.46 / Max: 42.81)
  Ubuntu 20.04.1 LTS: 49.35 (SE +/- 0.33, N = 3; Min: 48.92 / Max: 50.01)
Build notes: Clear Linux 36990: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -O3 -fPIC -lm

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking rather than any GPU-accelerated test. MNN can make use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 - Model: MobileNetV2_224
ms, Fewer Is Better
  CentOS Stream 9:    2.663 (SE +/- 0.014, N = 15; Min: 2.60 / Max: 2.75) [MIN: 2.48 / MAX: 5.57]
  Clear Linux 36990:  2.912 (SE +/- 0.013, N = 3; Min: 2.89 / Max: 2.93) [MIN: 2.72 / MAX: 5.76]
  Ubuntu 20.04.1 LTS: 3.077 (SE +/- 0.011, N = 3; Min: 3.06 / Max: 3.09) [MIN: 3.01 / MAX: 3.32]
Build notes: Clear Linux 36990: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Stress-NG

Stress-NG 0.14 - Test: Matrix Math
Bogo Ops/s, More Is Better
  CentOS Stream 9:    286293.40 (SE +/- 512.51, N = 3; Min: 285369.07 / Max: 287139.23)
  Clear Linux 36990:  328235.14 (SE +/- 272.45, N = 3; Min: 327954.2 / Max: 328779.95)
  Ubuntu 20.04.1 LTS: 307592.99 (SE +/- 224.33, N = 3; Min: 307240.65 / Max: 308009.71)
Build notes: Clear Linux 36990: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -latomic; Ubuntu 20.04.1 LTS: -lapparmor -latomic
1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Apache Spark

This is a benchmark of Apache Spark with its PySpark interface. Apache Spark is an open-source unified analytics engine for large-scale data processing and dealing with big data. This test profile benchmarks Apache Spark in a single-system configuration using spark-submit. The test makes use of DIYBigData's pyspark-benchmark (https://github.com/DIYBigData/pyspark-benchmark/) for generating the test data and driving various Apache Spark operations. Learn more via the OpenBenchmarking.org test page.

Apache Spark 3.3 - Row Count: 40000000 - Partitions: 500 - Calculate Pi Benchmark
Seconds, Fewer Is Better
  CentOS Stream 9:    36.21 (SE +/- 0.12, N = 3; Min: 35.99 / Max: 36.41)
  Clear Linux 36990:  33.05 (SE +/- 0.06, N = 15; Min: 32.72 / Max: 33.71)
  Ubuntu 20.04.1 LTS: 31.65 (SE +/- 0.02, N = 3; Min: 31.64 / Max: 31.69)

Stress-NG

Stress-NG 0.14 - Test: Crypto
Bogo Ops/s, More Is Better
  CentOS Stream 9:    83808.91 (SE +/- 289.31, N = 3; Min: 83236.24 / Max: 84166.97)
  Clear Linux 36990:  95806.71 (SE +/- 22.86, N = 3; Min: 95769.44 / Max: 95848.27)
  Ubuntu 20.04.1 LTS: 86462.28 (SE +/- 298.10, N = 3; Min: 85866.38 / Max: 86776.74)
Build notes: Clear Linux 36990: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -latomic; Ubuntu 20.04.1 LTS: -lapparmor -latomic
1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

ASTC Encoder

ASTC Encoder (astcenc) targets the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 4.0 - Preset: Medium
MT/s, More Is Better
  CentOS Stream 9:    316.37 (SE +/- 2.47, N = 15; Min: 294.95 / Max: 327.35)
  Clear Linux 36990:  359.90 (SE +/- 1.16, N = 3; Min: 358.7 / Max: 362.23)
  Ubuntu 20.04.1 LTS: 315.10 (SE +/- 2.77, N = 15; Min: 288.77 / Max: 326.91)
Build notes: Clear Linux 36990: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -O3 -flto -pthread

Redis

Redis 7.0.4 - Test: SET - Parallel Connections: 50
Requests Per Second, More Is Better
  CentOS Stream 9:    2189377.08 (SE +/- 29696.58, N = 3; Min: 2136607.75 / Max: 2239367)
  Clear Linux 36990:  2346688.30 (SE +/- 9831.09, N = 3; Min: 2327138.5 / Max: 2358281)
  Ubuntu 20.04.1 LTS: 2076700.08 (SE +/- 13461.07, N = 3; Min: 2062828.25 / Max: 2103618)
Build notes: Clear Linux 36990: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

ONNX Runtime

ONNX Runtime 1.11 - Model: bertsquad-12 - Device: CPU - Executor: Standard
Inferences Per Minute, More Is Better
  CentOS Stream 9:    1093 (SE +/- 0.50, N = 3; Min: 1091.5 / Max: 1093)
  Clear Linux 36990:  992 (SE +/- 5.77, N = 3; Min: 983.5 / Max: 1003)
  Ubuntu 20.04.1 LTS: 977 (SE +/- 10.38, N = 3; Min: 962.5 / Max: 997)
Build notes: CentOS Stream 9: -flto; Clear Linux 36990: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -flto=auto; Ubuntu 20.04.1 LTS: -flto
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -fno-fat-lto-objects -ldl -lrt

WebP Image Encode

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless, Highest Compression
Encode Time - Seconds, Fewer Is Better
  CentOS Stream 9:    41.21 (SE +/- 0.22, N = 3; Min: 40.76 / Max: 41.45)
  Clear Linux 36990:  38.30 (SE +/- 0.04, N = 3; Min: 38.25 / Max: 38.39)
  Ubuntu 20.04.1 LTS: 42.61 (SE +/- 0.05, N = 3; Min: 42.52 / Max: 42.7)
Build notes: CentOS Stream 9: -O2; Clear Linux 36990: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -ltiff; Ubuntu 20.04.1 LTS: -O2 -ltiff
1. (CC) gcc options: -fvisibility=hidden -lm -lpng16 -ljpeg

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network workloads, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU
ms, Fewer Is Better
  CentOS Stream 9:    1.36 (SE +/- 0.00, N = 3; Min: 1.35 / Max: 1.36) [MIN: 0.99 / MAX: 13.44]
  Ubuntu 20.04.1 LTS: 1.51 (SE +/- 0.00, N = 3; Min: 1.51 / Max: 1.52) [MIN: 0.55 / MAX: 57.82]
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

libavif avifenc

libavif avifenc 0.10 - Encoder Speed: 0
Seconds, Fewer Is Better
  CentOS Stream 9:    84.32 (SE +/- 0.66, N = 3; Min: 83 / Max: 85.05)
  Clear Linux 36990:  78.59 (SE +/- 0.45, N = 3; Min: 77.94 / Max: 79.45)
  Ubuntu 20.04.1 LTS: 87.17 (SE +/- 0.30, N = 3; Min: 86.81 / Max: 87.76)
Build notes: Clear Linux 36990: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -O3 -fPIC -lm

ASTC Encoder

ASTC Encoder 4.0 - Preset: Thorough
MT/s, More Is Better
  CentOS Stream 9:    46.38 (SE +/- 0.05, N = 3; Min: 46.29 / Max: 46.44)
  Clear Linux 36990:  50.47 (SE +/- 0.06, N = 3; Min: 50.38 / Max: 50.59)
  Ubuntu 20.04.1 LTS: 45.63 (SE +/- 0.05, N = 3; Min: 45.56 / Max: 45.73)
Build notes: Clear Linux 36990: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -O3 -flto -pthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
  CentOS Stream 9:    697.28 (SE +/- 6.94, N = 12; Min: 627.38 / Max: 719.91; MIN: 605.85)
  Clear Linux 36990:  728.04 (SE +/- 6.52, N = 15; Min: 682.09 / Max: 791.16; MIN: 651.84)
  Ubuntu 20.04.1 LTS: 770.26 (SE +/- 10.13, N = 15; Min: 691.92 / Max: 843.47; MIN: 662.44)
  Per-configuration flags recorded: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inferences Per Minute, More Is Better)
  CentOS Stream 9:    1881 (SE +/- 16.82, N = 12; Min: 1791.5 / Max: 1942)
  Clear Linux 36990:  2077 (SE +/- 24.49, N = 12; Min: 1978.5 / Max: 2237.5)
  Ubuntu 20.04.1 LTS: 1899 (SE +/- 20.93, N = 5; Min: 1817.5 / Max: 1928.5)
  Per-configuration flags recorded: -flto; -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -flto=auto; -flto
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -fno-fat-lto-objects -ldl -lrt
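The "Inferences Per Minute" figure is just the inverse of mean per-inference wall time, scaled to a minute. A minimal sketch of that conversion with a stand-in workload (the sleep below is a placeholder; a real ONNX Runtime measurement would time `session.run(...)` on a loaded model instead):

```python
import time

def inferences_per_minute(run_once, runs: int = 5) -> float:
    """Time `runs` invocations of `run_once` and scale to a per-minute rate."""
    start = time.perf_counter()
    for _ in range(runs):
        run_once()
    elapsed = time.perf_counter() - start
    return runs / elapsed * 60.0

# Placeholder "inference" taking roughly 10 ms each.
rate = inferences_per_minute(lambda: time.sleep(0.01))
print(f"{rate:.0f} inferences/min")
```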

Zstd Compression

This test measures the time needed to compress/decompress a sample input file using Zstd compression as supplied by the system (i.e., external to the test profile). Learn more via the OpenBenchmarking.org test page.

Zstd Compression - Compression Level: 3 - Decompression Speed (MB/s, More Is Better)
  CentOS Stream 9:    3022.9 (SE +/- 0.65, N = 2; Min: 3022.2 / Max: 3023.5)
  Clear Linux 36990:  2985.2 (SE +/- 1.35, N = 3; Min: 2983.3 / Max: 2987.8)
  Ubuntu 20.04.1 LTS: 2755.9 (SE +/- 3.69, N = 3; Min: 2751.5 / Max: 2763.2)
  1. CentOS Stream 9: zstd v1.5.1; 2. Clear Linux 36990: zstd v1.5.2; 3. Ubuntu 20.04.1 LTS: zstd v1.4.8 (64-bit command-line interface, by Yann Collet)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: SENDFILE (Bogo Ops/s, More Is Better)
  CentOS Stream 9:    1271967.05 (SE +/- 2669.03, N = 3; Min: 1268777.74 / Max: 1277268.78)
  Clear Linux 36990:  1161464.61 (SE +/- 713.22, N = 3; Min: 1160038.17 / Max: 1162181.06)
  Ubuntu 20.04.1 LTS: 1177667.63 (SE +/- 3383.13, N = 3; Min: 1171012.06 / Max: 1182051.01)
  Per-configuration flags recorded: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -latomic; -lapparmor -latomic
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread
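Stress-NG's "Bogo Ops/s" is a count of completed stressor iterations divided by run time; it is only comparable within the same stressor. A rough illustration of the idea with a trivial in-Python operation (not stress-ng's actual SENDFILE stressor, which copies file data in-kernel via sendfile(2)):

```python
import time

def bogo_ops_per_second(op, duration: float = 0.2) -> float:
    """Run `op` repeatedly for about `duration` seconds and report the rate."""
    start = time.perf_counter()
    deadline = start + duration
    ops = 0
    while time.perf_counter() < deadline:
        op()
        ops += 1
    return ops / (time.perf_counter() - start)

# Trivial stand-in operation for illustration purposes only.
rate = bogo_ops_per_second(lambda: sum(range(100)))
print(f"{rate:.0f} bogo ops/s")
```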

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Highest Compression (Encode Time - Seconds, Fewer Is Better)
  CentOS Stream 9:    8.802 (SE +/- 0.061, N = 15; Min: 8.44 / Max: 9.01)
  Clear Linux 36990:  8.046 (SE +/- 0.002, N = 3; Min: 8.04 / Max: 8.05)
  Ubuntu 20.04.1 LTS: 8.675 (SE +/- 0.112, N = 3; Min: 8.55 / Max: 8.9)
  Per-configuration flags recorded: -O2; -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -ltiff; -O2 -ltiff
  1. (CC) gcc options: -fvisibility=hidden -lm -lpng16 -ljpeg

ASTC Encoder


ASTC Encoder 4.0 - Preset: Exhaustive (MT/s, More Is Better)
  CentOS Stream 9:    4.5054 (SE +/- 0.0017, N = 3; Min: 4.5 / Max: 4.51)
  Clear Linux 36990:  4.8916 (SE +/- 0.0032, N = 3; Min: 4.89 / Max: 4.9)
  Ubuntu 20.04.1 LTS: 4.4840 (SE +/- 0.0038, N = 3; Min: 4.48 / Max: 4.49)
  Per-configuration flags recorded: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -O3 -flto -pthread

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.38 - Operation: Enhanced (Iterations Per Minute, More Is Better)
  CentOS Stream 9:    1153 (SE +/- 1.76, N = 3; Min: 1150 / Max: 1156)
  Clear Linux 36990:  1192 (SE +/- 2.08, N = 3; Min: 1189 / Max: 1196)
  Ubuntu 20.04.1 LTS: 1094 (SE +/- 2.60, N = 3; Min: 1089 / Max: 1098)
  Per-configuration flags recorded: -O2 -lbz2; -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -ltiff -lfreetype -lSM -lICE -lbz2 -lxml2; -O2 -ljbig -ltiff -lfreetype -lSM -lICE -llzma
  1. (CC) gcc options: -fopenmp -ljpeg -lXext -lX11 -lz -lm -lpthread

Zstd Compression


Zstd Compression - Compression Level: 3, Long Mode - Decompression Speed (MB/s, More Is Better)
  CentOS Stream 9:    3208.0 (SE +/- 13.60, N = 3; Min: 3187.6 / Max: 3233.8)
  Ubuntu 20.04.1 LTS: 2945.9 (SE +/- 6.04, N = 3; Min: 2933.9 / Max: 2952.8)
  1. CentOS Stream 9: zstd v1.5.1; 2. Ubuntu 20.04.1 LTS: zstd v1.4.8 (64-bit command-line interface, by Yann Collet)

Stress-NG


Stress-NG 0.14 - Test: MEMFD (Bogo Ops/s, More Is Better)
  CentOS Stream 9:    4098.84 (SE +/- 35.16, N = 3; Min: 4028.56 / Max: 4135.99)
  Clear Linux 36990:  3783.49 (SE +/- 5.40, N = 3; Min: 3772.7 / Max: 3788.97)
  Ubuntu 20.04.1 LTS: 3881.85 (SE +/- 34.35, N = 3; Min: 3813.16 / Max: 3916.74)
  Per-configuration flags recorded: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -latomic; -lapparmor -latomic
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Zstd Compression


Zstd Compression - Compression Level: 8 - Decompression Speed (MB/s, More Is Better)
  CentOS Stream 9:    3017.5 (SE +/- 6.19, N = 12; Min: 2973.2 / Max: 3043.8)
  Clear Linux 36990:  3007.5 (SE +/- 4.36, N = 15; Min: 2978 / Max: 3033.9)
  Ubuntu 20.04.1 LTS: 2788.9 (SE +/- 2.78, N = 3; Min: 2784.6 / Max: 2794.1)
  1. CentOS Stream 9: zstd v1.5.1; 2. Clear Linux 36990: zstd v1.5.2; 3. Ubuntu 20.04.1 LTS: zstd v1.4.8 (64-bit command-line interface, by Yann Collet)

Zstd Compression - Compression Level: 8, Long Mode - Decompression Speed (MB/s, More Is Better)
  CentOS Stream 9:    3201.0 (SE +/- 10.41, N = 3; Min: 3188.2 / Max: 3221.6)
  Clear Linux 36990:  3163.1
  Ubuntu 20.04.1 LTS: 2962.4 (SE +/- 16.67, N = 3; Min: 2941.6 / Max: 2995.4)
  1. CentOS Stream 9: zstd v1.5.1; 2. Clear Linux 36990: zstd v1.5.2; 3. Ubuntu 20.04.1 LTS: zstd v1.4.8 (64-bit command-line interface, by Yann Collet)

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS, More Is Better)
  CentOS Stream 9:    9657.99 (SE +/- 8.29, N = 3; Min: 9649.24 / Max: 9674.56)
  Ubuntu 20.04.1 LTS: 8939.74 (SE +/- 2.19, N = 3; Min: 8936.81 / Max: 8944.02)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better)
  CentOS Stream 9:    8.27 (SE +/- 0.01, N = 3; Min: 8.26 / Max: 8.28; MIN: 7.23 / MAX: 27.1)
  Ubuntu 20.04.1 LTS: 8.93 (SE +/- 0.00, N = 3; Min: 8.92 / Max: 8.93; MIN: 4.95 / MAX: 81.08)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10 - Benchmark: gravity_spheres_volume/dim_512/ao/real_time (Items Per Second, More Is Better)
  CentOS Stream 9:    22.42 (SE +/- 0.06, N = 3; Min: 22.3 / Max: 22.5)
  Clear Linux 36990:  22.52 (SE +/- 0.05, N = 3; Min: 22.47 / Max: 22.62)
  Ubuntu 20.04.1 LTS: 20.99 (SE +/- 0.02, N = 3; Min: 20.95 / Max: 21.02)

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web server. This Nginx benchmark test profile makes use of the Golang "Bombardier" program to issue HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.

nginx 1.21.1 - Concurrent Requests: 1000 (Requests Per Second, More Is Better)
  CentOS Stream 9:    200945.49 (SE +/- 1519.57, N = 3; Min: 197912.72 / Max: 202632.18)
  Clear Linux 36990:  215488.10 (SE +/- 255.31, N = 3; Min: 215069.04 / Max: 215950.31)
  Ubuntu 20.04.1 LTS: 210852.37 (SE +/- 2419.36, N = 4; Min: 203685.74 / Max: 214308.21)
  Per-configuration flags recorded: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CC) gcc options: -lcrypt -lz -O3 -march=native

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested, or alternatively an allmodconfig that builds all possible kernel modules. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.18 - Build: defconfig (Seconds, Fewer Is Better)
  CentOS Stream 9:    29.67 (SE +/- 0.39, N = 13; Min: 28.98 / Max: 34.27)
  Ubuntu 20.04.1 LTS: 27.81 (SE +/- 0.24, N = 15; Min: 27.27 / Max: 30.9)

Build: defconfig

Clear Linux 36990: The test quit with a non-zero exit status. E: linux-5.18/tools/objtool/include/objtool/elf.h:10:10: fatal error: gelf.h: No such file or directory

OpenVINO


OpenVINO 2022.2.dev - Model: Machine Translation EN To DE FP16 - Device: CPU (FPS, More Is Better)
  CentOS Stream 9:    233.33 (SE +/- 0.51, N = 3; Min: 232.75 / Max: 234.34)
  Ubuntu 20.04.1 LTS: 248.70 (SE +/- 2.26, N = 3; Min: 245.84 / Max: 253.15)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OSPRay


OSPRay 2.10 - Benchmark: gravity_spheres_volume/dim_512/scivis/real_time (Items Per Second, More Is Better)
  CentOS Stream 9:    22.02 (SE +/- 0.15, N = 3; Min: 21.75 / Max: 22.26)
  Clear Linux 36990:  22.22 (SE +/- 0.07, N = 3; Min: 22.1 / Max: 22.34)
  Ubuntu 20.04.1 LTS: 20.90 (SE +/- 0.02, N = 3; Min: 20.86 / Max: 20.93)

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking rather than any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 - Model: mobilenetV3 (ms, Fewer Is Better)
  CentOS Stream 9:    1.753 (SE +/- 0.020, N = 15; Min: 1.63 / Max: 1.86; MIN: 1.61 / MAX: 4.19)
  Clear Linux 36990:  1.862 (SE +/- 0.017, N = 3; Min: 1.83 / Max: 1.89; MIN: 1.81 / MAX: 2.51)
  Ubuntu 20.04.1 LTS: 1.864 (SE +/- 0.012, N = 3; Min: 1.85 / Max: 1.89; MIN: 1.81 / MAX: 2.46)
  Per-configuration flags recorded: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

OpenVINO


OpenVINO 2022.2.dev - Model: Person Detection FP16 - Device: CPU (FPS, More Is Better)
  CentOS Stream 9:    13.92 (SE +/- 0.00, N = 3; Min: 13.92 / Max: 13.93)
  Ubuntu 20.04.1 LTS: 13.14 (SE +/- 0.01, N = 3; Min: 13.13 / Max: 13.15)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

GROMACS

This is a test of the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package with the water_GMX50 data. This test profile allows selecting between CPU- and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.

GROMACS 2022.1 - Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day, More Is Better)
  CentOS Stream 9:    8.996 (SE +/- 0.002, N = 3; Min: 8.99 / Max: 9)
  Clear Linux 36990:  9.525 (SE +/- 0.033, N = 3; Min: 9.47 / Max: 9.58)
  Ubuntu 20.04.1 LTS: 9.115 (SE +/- 0.045, N = 3; Min: 9.06 / Max: 9.2)
  Per-configuration flags recorded: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -O3

Stress-NG


Stress-NG 0.14 - Test: Forking (Bogo Ops/s, More Is Better)
  CentOS Stream 9:    63484.45 (SE +/- 123.25, N = 3; Min: 63238.25 / Max: 63618.02)
  Clear Linux 36990:  61843.01 (SE +/- 173.91, N = 3; Min: 61495.27 / Max: 62023.04)
  Ubuntu 20.04.1 LTS: 65455.72 (SE +/- 292.14, N = 3; Min: 64873.05 / Max: 65784.68)
  Per-configuration flags recorded: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -latomic; -lapparmor -latomic
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

simdjson

This is a benchmark of simdjson, a high-performance JSON parser. simdjson aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 2.0 - Throughput Test: PartialTweets (GB/s, More Is Better)
  CentOS Stream 9:    4.85 (SE +/- 0.01, N = 3; Min: 4.84 / Max: 4.86)
  Clear Linux 36990:  5.02 (SE +/- 0.01, N = 3; Min: 5.01 / Max: 5.03)
  Ubuntu 20.04.1 LTS: 4.75 (SE +/- 0.00, N = 3; Min: 4.75 / Max: 4.76)
  Per-configuration flags recorded: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -O3
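The GB/s metric for a JSON parser is input bytes parsed per second of wall time. A sketch of how such a number is computed, using the stdlib json module as a stand-in (simdjson itself is a C++ library; this only illustrates the measurement, and stdlib json will be far slower than simdjson's multi-GB/s figures):

```python
import json
import time

def parse_gbps(doc: bytes, repeats: int = 20) -> float:
    """Parse `doc` repeatedly and report GB of input consumed per second."""
    start = time.perf_counter()
    for _ in range(repeats):
        json.loads(doc)
    elapsed = time.perf_counter() - start
    return len(doc) * repeats / elapsed / 1e9

# Synthetic tweet-like records standing in for the PartialTweets corpus.
doc = json.dumps([{"id": i, "text": "tweet"} for i in range(10_000)]).encode()
print(f"{parse_gbps(doc):.3f} GB/s")
```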

OpenVINO


OpenVINO 2022.2.dev - Model: Person Detection FP32 - Device: CPU (FPS, More Is Better)
  CentOS Stream 9:    13.67 (SE +/- 0.02, N = 3; Min: 13.63 / Max: 13.69)
  Ubuntu 20.04.1 LTS: 12.98 (SE +/- 0.05, N = 3; Min: 12.88 / Max: 13.04)
  1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: SqueezeNet v2 (ms, Fewer Is Better)
  CentOS Stream 9:    75.88 (SE +/- 0.78, N = 3; Min: 74.81 / Max: 77.39; MIN: 74.63 / MAX: 111.7)
  Clear Linux 36990:  72.13 (SE +/- 0.57, N = 10; Min: 71.19 / Max: 77.08; MIN: 71.03 / MAX: 78.39)
  Ubuntu 20.04.1 LTS: 74.71 (SE +/- 0.04, N = 3; Min: 74.65 / Max: 74.78; MIN: 74.27 / MAX: 75.19)
  Per-configuration flags recorded: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported, along with HIP for AMD Radeon GPUs. Learn more via the OpenBenchmarking.org test page.

Blender 3.2 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, Fewer Is Better)
  CentOS Stream 9:    33.37 (SE +/- 0.07, N = 3; Min: 33.24 / Max: 33.47)
  Clear Linux 36990:  32.43 (SE +/- 0.02, N = 3; Min: 32.39 / Max: 32.46)
  Ubuntu 20.04.1 LTS: 34.07 (SE +/- 0.11, N = 3; Min: 33.87 / Max: 34.23)

Stress-NG


Stress-NG 0.14 - Test: Vector Math (Bogo Ops/s, More Is Better)
  CentOS Stream 9:    322923.09 (SE +/- 944.66, N = 3; Min: 321061.83 / Max: 324134.71)
  Clear Linux 36990:  309299.86 (SE +/- 79.17, N = 3; Min: 309155.65 / Max: 309428.58)
  Ubuntu 20.04.1 LTS: 324720.78 (SE +/- 950.34, N = 3; Min: 322854.35 / Max: 325965.06)
  Per-configuration flags recorded: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -latomic; -lapparmor -latomic
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

ASTC Encoder


ASTC Encoder 4.0 - Preset: Fast (MT/s, More Is Better)
  CentOS Stream 9:    799.11 (SE +/- 3.69, N = 3; Min: 792 / Max: 804.37)
  Clear Linux 36990:  837.15 (SE +/- 1.80, N = 3; Min: 834.02 / Max: 840.25)
  Ubuntu 20.04.1 LTS: 809.17 (SE +/- 2.29, N = 3; Min: 804.8 / Max: 812.54)
  Per-configuration flags recorded: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -O3 -flto -pthread

Stress-NG


Stress-NG 0.14 - Test: Glibc Qsort Data Sorting (Bogo Ops/s, More Is Better)
  CentOS Stream 9:    934.26 (SE +/- 2.69, N = 3; Min: 930.36 / Max: 939.43)
  Clear Linux 36990:  893.84 (SE +/- 1.09, N = 3; Min: 892.33 / Max: 895.96)
  Ubuntu 20.04.1 LTS: 918.76 (SE +/- 2.76, N = 3; Min: 913.26 / Max: 921.83)
  Per-configuration flags recorded: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -latomic; -lapparmor -latomic
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 2 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, Fewer Is Better)
  CentOS Stream 9:    20261 (SE +/- 49.21, N = 3; Min: 20164 / Max: 20325)
  Clear Linux 36990:  20404 (SE +/- 50.10, N = 3; Min: 20325 / Max: 20497)
  Ubuntu 20.04.1 LTS: 21165 (SE +/- 40.18, N = 3; Min: 21098 / Max: 21237)
  Per-configuration flags recorded: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -lm; -lm
  1. (CXX) g++ options: -O3 -ldl

ONNX Runtime


ONNX Runtime 1.11 - Model: bertsquad-12 - Device: CPU - Executor: Parallel (Inferences Per Minute, More Is Better)
  CentOS Stream 9:    799 (SE +/- 2.02, N = 3; Min: 796 / Max: 802.5)
  Clear Linux 36990:  828 (SE +/- 0.29, N = 3; Min: 827 / Max: 828)
  Ubuntu 20.04.1 LTS: 834 (SE +/- 1.92, N = 3; Min: 830 / Max: 836)
  Per-configuration flags recorded: -flto; -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -flto=auto; -flto
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -fno-fat-lto-objects -ldl -lrt

simdjson


simdjson 2.0 - Throughput Test: LargeRandom (GB/s, More Is Better)
  CentOS Stream 9:    0.96 (SE +/- 0.00, N = 3; Min: 0.96 / Max: 0.96)
  Clear Linux 36990:  0.98 (SE +/- 0.00, N = 3; Min: 0.98 / Max: 0.98)
  Ubuntu 20.04.1 LTS: 0.94 (SE +/- 0.00, N = 3; Min: 0.94 / Max: 0.94)
  Per-configuration flags recorded: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
  1. (CXX) g++ options: -O3

Blender


Blender 3.2 - Blend File: BMW27 - Compute: CPU-Only
Seconds, Fewer Is Better:
  CentOS Stream 9:    25.04 (SE +/- 0.03, N = 3; Min: 24.98 / Avg: 25.04 / Max: 25.07)
  Clear Linux 36990:  24.25 (SE +/- 0.06, N = 3; Min: 24.17 / Avg: 24.25 / Max: 24.36)
  Ubuntu 20.04.1 LTS: 25.23 (SE +/- 0.11, N = 3; Min: 25.05 / Avg: 25.23 / Max: 25.42)

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer
ms, Fewer Is Better:
  CentOS Stream 9:    23967 (SE +/- 79.25, N = 3; Min: 23809 / Avg: 23967 / Max: 24057)
  Clear Linux 36990:  24102 (SE +/- 16.17, N = 3; Min: 24070 / Avg: 24102.33 / Max: 24119)
  Ubuntu 20.04.1 LTS: 24929 (SE +/- 39.89, N = 3; Min: 24870 / Avg: 24929 / Max: 25005)
Clear Linux 36990 flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop; -lm on two of the systems
1. (CXX) g++ options: -O3 -ldl

OSPRay Studio 0.11 - Camera: 2 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer
ms, Fewer Is Better:
  CentOS Stream 9:    40852 (SE +/- 38.89, N = 3; Min: 40775 / Avg: 40852 / Max: 40900)
  Clear Linux 36990:  40932 (SE +/- 84.10, N = 3; Min: 40775 / Avg: 40931.67 / Max: 41063)
  Ubuntu 20.04.1 LTS: 42476 (SE +/- 93.93, N = 3; Min: 42309 / Avg: 42476 / Max: 42634)
Clear Linux 36990 flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop; -lm on two of the systems
1. (CXX) g++ options: -O3 -ldl

OSPRay Studio 0.11 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer
ms, Fewer Is Better:
  CentOS Stream 9:    20152 (SE +/- 58.89, N = 3; Min: 20050 / Avg: 20152.33 / Max: 20254)
  Clear Linux 36990:  20180 (SE +/- 13.09, N = 3; Min: 20163 / Avg: 20180.33 / Max: 20206)
  Ubuntu 20.04.1 LTS: 20947 (SE +/- 70.51, N = 3; Min: 20806 / Avg: 20947 / Max: 21019)
Clear Linux 36990 flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop; -lm on two of the systems
1. (CXX) g++ options: -O3 -ldl

simdjson


simdjson 2.0 - Throughput Test: Kostya
GB/s, More Is Better:
  CentOS Stream 9:    2.91 (SE +/- 0.00, N = 3; Min: 2.91 / Avg: 2.91 / Max: 2.92)
  Clear Linux 36990:  2.94 (SE +/- 0.00, N = 3; Min: 2.94 / Avg: 2.94 / Max: 2.95)
  Ubuntu 20.04.1 LTS: 2.83 (SE +/- 0.01, N = 3; Min: 2.82 / Avg: 2.83 / Max: 2.84)
Clear Linux 36990 flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -O3

Renaissance

Renaissance is a suite of benchmarks designed to stress the Java JVM with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: In-Memory Database Shootout
ms, Fewer Is Better:
  CentOS Stream 9:    17787.2 (SE +/- 197.42, N = 3; Min: 17444.33 / Avg: 17787.24 / Max: 18128.2; run MIN: 17444.33 / MAX: 21383.13)
  Ubuntu 20.04.1 LTS: 18477.3 (SE +/- 82.13, N = 3; Min: 18342.05 / Avg: 18477.32 / Max: 18625.65; run MIN: 18342.05 / MAX: 21649.71)

Test: In-Memory Database Shootout

Clear Linux 36990: The test run did not produce a result.

Zstd Compression

This test measures the time needed to compress/decompress a sample input file using Zstd compression, supplied by the system or otherwise built externally to the test profile. Learn more via the OpenBenchmarking.org test page.

Zstd Compression - Compression Level: 19, Long Mode - Decompression Speed
MB/s, More Is Better:
  CentOS Stream 9:    2635.7 (SE +/- 4.30, N = 5; Min: 2619.1 / Avg: 2635.68 / Max: 2643.6)
  Clear Linux 36990:  2613.2 (SE +/- 3.36, N = 3; Min: 2609.5 / Avg: 2613.2 / Max: 2619.9)
  Ubuntu 20.04.1 LTS: 2540.7 (SE +/- 3.78, N = 3; Min: 2533.7 / Avg: 2540.67 / Max: 2546.7)
1. CentOS Stream 9: zstd command line interface 64-bits v1.5.1, by Yann Collet
2. Clear Linux 36990: zstd command line interface 64-bits v1.5.2, by Yann Collet
3. Ubuntu 20.04.1 LTS: zstd command line interface 64-bits v1.4.8, by Yann Collet
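The compression-level parameter trades speed for ratio, and level 19 sits near Zstd's slow, high-ratio end. Zstd is not in the Python stdlib, so as an illustrative stand-in this sketch shows the same level/ratio tradeoff with zlib (levels here are zlib's 1-9, not Zstd's 1-22):

```python
import zlib

# Highly repetitive payload so the ratio differences are visible.
payload = b"the quick brown fox jumps over the lazy dog " * 2000

for level in (1, 6, 9):
    compressed = zlib.compress(payload, level)
    # Round-trip check: decompression must recover the input exactly.
    assert zlib.decompress(compressed) == payload
    ratio = len(payload) / len(compressed)
    print(f"level {level}: {len(compressed)} bytes, ratio {ratio:.1f}x")
```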

Blender


Blender 3.2 - Blend File: Barbershop - Compute: CPU-Only
Seconds, Fewer Is Better:
  CentOS Stream 9:    257.15 (SE +/- 0.55, N = 3; Min: 256.22 / Avg: 257.15 / Max: 258.13)
  Clear Linux 36990:  253.27 (SE +/- 0.37, N = 3; Min: 252.82 / Avg: 253.27 / Max: 254.01)
  Ubuntu 20.04.1 LTS: 262.65 (SE +/- 0.19, N = 3; Min: 262.27 / Avg: 262.65 / Max: 262.9)

OSPRay Studio


OSPRay Studio 0.11 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer
ms, Fewer Is Better:
  CentOS Stream 9:    40580 (SE +/- 74.23, N = 3; Min: 40445 / Avg: 40580 / Max: 40701)
  Clear Linux 36990:  40503 (SE +/- 180.17, N = 3; Min: 40214 / Avg: 40503.33 / Max: 40834)
  Ubuntu 20.04.1 LTS: 41973 (SE +/- 56.40, N = 3; Min: 41872 / Avg: 41973 / Max: 42067)
Clear Linux 36990 flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop; -lm on two of the systems
1. (CXX) g++ options: -O3 -ldl

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: CPU Stress
Bogo Ops/s, More Is Better:
  CentOS Stream 9:    135517.46 (SE +/- 758.69, N = 3; Min: 134525.36 / Avg: 135517.46 / Max: 137007.81)
  Clear Linux 36990:  140290.21 (SE +/- 387.90, N = 3; Min: 139889.06 / Avg: 140290.21 / Max: 141065.87)
  Ubuntu 20.04.1 LTS: 135774.41 (SE +/- 488.03, N = 3; Min: 134810.84 / Avg: 135774.41 / Max: 136391.02)
Per-system flags - Clear Linux 36990: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -latomic; Ubuntu 20.04.1 LTS: -lapparmor -latomic
1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread
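A "bogo op" is one completed iteration of the stress workload, so Bogo Ops/s is just iterations divided by wall time. A minimal sketch of that counting idea (hypothetical floating-point workload, single-threaded, unlike stress-ng's per-CPU workers):

```python
import math
import time

def cpu_stress_bogo_ops(duration: float = 0.5) -> float:
    """Run a small FP workload repeatedly for `duration` seconds and
    return completed iterations per second (bogo ops/s)."""
    ops = 0
    deadline = time.perf_counter() + duration
    while time.perf_counter() < deadline:
        # One "bogo op": some throwaway floating-point math.
        sum(math.sqrt(i) for i in range(1000))
        ops += 1
    return ops / duration

print(f"{cpu_stress_bogo_ops():.0f} bogo ops/s")
```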

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
Inferences Per Minute, More Is Better:
  CentOS Stream 9:    236 (SE +/- 0.17, N = 3; Min: 235.5 / Avg: 235.67 / Max: 236)
  Clear Linux 36990:  241 (SE +/- 0.17, N = 3; Min: 240.5 / Avg: 240.67 / Max: 241)
  Ubuntu 20.04.1 LTS: 233 (SE +/- 0.33, N = 3; Min: 232.5 / Avg: 233.17 / Max: 233.5)
Per-system flags - CentOS Stream 9: -flto; Clear Linux 36990: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -flto=auto; Ubuntu 20.04.1 LTS: -flto
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -fno-fat-lto-objects -ldl -lrt
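Inferences Per Minute converts directly to average per-inference latency. For instance, 241 inferences/minute on fcn-resnet101-11 works out to roughly 249 ms per inference:

```python
def latency_ms(inferences_per_minute: float) -> float:
    """Average milliseconds per inference, given inferences/minute."""
    return 60_000.0 / inferences_per_minute

for ipm in (236, 241, 233):  # the three results above
    print(f"{ipm} inf/min -> {latency_ms(ipm):.1f} ms/inference")
```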

OSPRay Studio


OSPRay Studio 0.11 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer
ms, Fewer Is Better:
  CentOS Stream 9:    48319 (SE +/- 81.93, N = 3; Min: 48156 / Avg: 48319 / Max: 48415)
  Clear Linux 36990:  48506 (SE +/- 111.95, N = 3; Min: 48361 / Avg: 48505.67 / Max: 48726)
  Ubuntu 20.04.1 LTS: 49953 (SE +/- 27.95, N = 3; Min: 49901 / Avg: 49952.67 / Max: 49997)
Clear Linux 36990 flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop; -lm on two of the systems
1. (CXX) g++ options: -O3 -ldl

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 22.01 - Test: Decompression Rating
MIPS, More Is Better:
  CentOS Stream 9:    371131 (SE +/- 2273.35, N = 3; Min: 366618 / Avg: 371131 / Max: 373866)
  Clear Linux 36990:  364209 (SE +/- 953.58, N = 3; Min: 362506 / Avg: 364209 / Max: 365804)
  Ubuntu 20.04.1 LTS: 359058 (SE +/- 1904.28, N = 4; Min: 354209 / Avg: 359058.25 / Max: 362679)
1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Face Detection FP16 - Device: CPU
FPS, More Is Better:
  CentOS Stream 9:    24.29 (SE +/- 0.02, N = 3; Min: 24.25 / Avg: 24.29 / Max: 24.32)
  Ubuntu 20.04.1 LTS: 23.52 (SE +/- 0.07, N = 3; Min: 23.41 / Avg: 23.52 / Max: 23.65)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Blender


Blender 3.2 - Blend File: Pabellon Barcelona - Compute: CPU-Only
Seconds, Fewer Is Better:
  CentOS Stream 9:    82.89 (SE +/- 0.02, N = 3; Min: 82.85 / Avg: 82.89 / Max: 82.91)
  Clear Linux 36990:  81.97 (SE +/- 0.10, N = 3; Min: 81.79 / Avg: 81.97 / Max: 82.13)
  Ubuntu 20.04.1 LTS: 84.44 (SE +/- 0.17, N = 3; Min: 84.2 / Avg: 84.44 / Max: 84.77)

Zstd Compression


Zstd Compression - Compression Level: 19 - Decompression Speed
MB/s, More Is Better:
  CentOS Stream 9:    2571.3 (SE +/- 6.30, N = 3; Min: 2559.2 / Avg: 2571.3 / Max: 2580.4)
  Clear Linux 36990:  2522.5 (SE +/- 2.24, N = 3; Min: 2518 / Avg: 2522.47 / Max: 2525)
  Ubuntu 20.04.1 LTS: 2498.1 (SE +/- 5.28, N = 15; Min: 2441.3 / Avg: 2498.06 / Max: 2511.4)
1. CentOS Stream 9: zstd command line interface 64-bits v1.5.1, by Yann Collet
2. Clear Linux 36990: zstd command line interface 64-bits v1.5.2, by Yann Collet
3. Ubuntu 20.04.1 LTS: zstd command line interface 64-bits v1.4.8, by Yann Collet

Blender


Blender 3.2 - Blend File: Classroom - Compute: CPU-Only
Seconds, Fewer Is Better:
  CentOS Stream 9:    64.82 (SE +/- 0.04, N = 3; Min: 64.78 / Avg: 64.82 / Max: 64.91)
  Clear Linux 36990:  64.23 (SE +/- 0.20, N = 3; Min: 63.9 / Avg: 64.23 / Max: 64.58)
  Ubuntu 20.04.1 LTS: 65.82 (SE +/- 0.08, N = 3; Min: 65.72 / Avg: 65.82 / Max: 65.98)

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: SqueezeNet v1.1
ms, Fewer Is Better:
  CentOS Stream 9:    366.49 (SE +/- 0.03, N = 3; Min: 366.45 / Avg: 366.49 / Max: 366.54; run MIN: 366.26 / MAX: 366.87)
  Clear Linux 36990:  358.15 (SE +/- 0.09, N = 3; Min: 357.98 / Avg: 358.15 / Max: 358.29; run MIN: 357.28 / MAX: 366.82)
  Ubuntu 20.04.1 LTS: 365.92 (SE +/- 0.25, N = 3; Min: 365.66 / Avg: 365.92 / Max: 366.42; run MIN: 365.53 / MAX: 380.36)
Clear Linux 36990 flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU
ms, Fewer Is Better:
  CentOS Stream 9:    3.81155 (SE +/- 0.01173, N = 3; Min: 3.79 / Avg: 3.81 / Max: 3.83; run MIN: 3.53)
  Clear Linux 36990:  3.80080 (SE +/- 0.00397, N = 3; Min: 3.8 / Avg: 3.8 / Max: 3.81; run MIN: 3.42)
  Ubuntu 20.04.1 LTS: 3.88793 (SE +/- 0.00704, N = 3; Min: 3.88 / Avg: 3.89 / Max: 3.9; run MIN: 3.63)
Clear Linux 36990 flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking, not any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 - Model: inception-v3
ms, Fewer Is Better:
  CentOS Stream 9:    20.09 (SE +/- 0.19, N = 15; Min: 18.66 / Avg: 20.09 / Max: 21.34; run MIN: 17.31 / MAX: 37.29)
  Clear Linux 36990:  20.45 (SE +/- 0.03, N = 3; Min: 20.38 / Avg: 20.45 / Max: 20.49; run MIN: 19.73 / MAX: 33.83)
  Ubuntu 20.04.1 LTS: 20.54 (SE +/- 0.12, N = 3; Min: 20.32 / Avg: 20.54 / Max: 20.71; run MIN: 19.73 / MAX: 36.82)
Clear Linux 36990 flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

ONNX Runtime


ONNX Runtime 1.11 - Model: yolov4 - Device: CPU - Executor: Parallel
Inferences Per Minute, More Is Better:
  CentOS Stream 9:    630 (SE +/- 1.04, N = 3; Min: 628 / Avg: 630 / Max: 631.5)
  Clear Linux 36990:  640 (SE +/- 0.29, N = 3; Min: 639.5 / Avg: 640 / Max: 640.5)
  Ubuntu 20.04.1 LTS: 644 (SE +/- 2.08, N = 3; Min: 640.5 / Avg: 643.5 / Max: 647.5)
Per-system flags - CentOS Stream 9: -flto; Clear Linux 36990: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -flto=auto; Ubuntu 20.04.1 LTS: -flto
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -fno-fat-lto-objects -ldl -lrt

simdjson


simdjson 2.0 - Throughput Test: TopTweet
GB/s, More Is Better:
  CentOS Stream 9:    5.62 (SE +/- 0.01, N = 3; Min: 5.61 / Avg: 5.62 / Max: 5.64)
  Clear Linux 36990:  5.74 (SE +/- 0.00, N = 3; Min: 5.73 / Avg: 5.74 / Max: 5.74)
  Ubuntu 20.04.1 LTS: 5.69 (SE +/- 0.01, N = 3; Min: 5.68 / Avg: 5.69 / Max: 5.7)
Clear Linux 36990 flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -O3

OpenVINO


OpenVINO 2022.2.dev - Model: Vehicle Detection FP16 - Device: CPU
FPS, More Is Better:
  CentOS Stream 9:    1071.70 (SE +/- 14.44, N = 12; Min: 914.28 / Avg: 1071.7 / Max: 1092.92)
  Ubuntu 20.04.1 LTS: 1049.56 (SE +/- 0.62, N = 3; Min: 1048.38 / Avg: 1049.56 / Max: 1050.47)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.2.dev - Model: Face Detection FP16-INT8 - Device: CPU
FPS, More Is Better:
  CentOS Stream 9:    83.26 (SE +/- 0.07, N = 3; Min: 83.15 / Avg: 83.26 / Max: 83.4)
  Ubuntu 20.04.1 LTS: 84.99 (SE +/- 0.13, N = 3; Min: 84.74 / Avg: 84.99 / Max: 85.14)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10 - Benchmark: particle_volume/ao/real_time
Items Per Second, More Is Better:
  CentOS Stream 9:    24.35 (SE +/- 0.07, N = 3; Min: 24.26 / Avg: 24.35 / Max: 24.49)
  Clear Linux 36990:  24.59 (SE +/- 0.14, N = 3; Min: 24.32 / Avg: 24.59 / Max: 24.78)
  Ubuntu 20.04.1 LTS: 24.85 (SE +/- 0.06, N = 3; Min: 24.78 / Avg: 24.85 / Max: 24.98)

OSPRay 2.10 - Benchmark: particle_volume/scivis/real_time
Items Per Second, More Is Better:
  CentOS Stream 9:    24.30 (SE +/- 0.29, N = 3; Min: 23.8 / Avg: 24.3 / Max: 24.8)
  Clear Linux 36990:  24.72 (SE +/- 0.05, N = 3; Min: 24.63 / Avg: 24.72 / Max: 24.77)
  Ubuntu 20.04.1 LTS: 24.73 (SE +/- 0.00, N = 3; Min: 24.72 / Avg: 24.73 / Max: 24.73)

Stress-NG


Stress-NG 0.14 - Test: CPU Cache
Bogo Ops/s, More Is Better:
  CentOS Stream 9:    16.26 (SE +/- 0.13, N = 10; Min: 15.95 / Avg: 16.26 / Max: 16.81)
  Clear Linux 36990:  16.40 (SE +/- 0.18, N = 5; Min: 15.96 / Avg: 16.4 / Max: 16.76)
  Ubuntu 20.04.1 LTS: 16.13 (SE +/- 0.19, N = 3; Min: 15.94 / Avg: 16.13 / Max: 16.5)
Per-system flags - Clear Linux 36990: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -latomic; Ubuntu 20.04.1 LTS: -lapparmor -latomic
1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

OpenVINO


OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16 - Device: CPU
ms, Fewer Is Better:
  CentOS Stream 9:    32.00 (SE +/- 0.01, N = 3; Min: 31.97 / Avg: 32 / Max: 32.01; run MIN: 21.78 / MAX: 67.35)
  Ubuntu 20.04.1 LTS: 32.53 (SE +/- 0.01, N = 3; Min: 32.51 / Avg: 32.53 / Max: 32.55; run MIN: 14.97 / MAX: 162.72)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16-INT8 - Device: CPU
FPS, More Is Better:
  CentOS Stream 9:    4414.94 (SE +/- 1.33, N = 3; Min: 4412.48 / Avg: 4414.94 / Max: 4417.06)
  Ubuntu 20.04.1 LTS: 4487.45 (SE +/- 5.16, N = 3; Min: 4479.25 / Avg: 4487.45 / Max: 4496.97)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient, a newer scientific benchmark from Sandia National Labs focused on supercomputer testing with modern real-world workloads, as compared to HPCC. Learn more via the OpenBenchmarking.org test page.

High Performance Conjugate Gradient 3.1
GFLOP/s, More Is Better:
  CentOS Stream 9:    40.28 (SE +/- 0.08, N = 3; Min: 40.14 / Avg: 40.28 / Max: 40.39)
  Clear Linux 36990:  40.86 (SE +/- 0.05, N = 3; Min: 40.8 / Avg: 40.86 / Max: 40.96)
  Ubuntu 20.04.1 LTS: 40.20 (SE +/- 0.05, N = 3; Min: 40.11 / Avg: 40.2 / Max: 40.29)
-lmpi_cxx linked on two of the three systems
1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi
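HPCG's core kernel is the conjugate gradient iteration for sparse symmetric positive-definite systems. As an illustration only, here is unpreconditioned CG on a tiny dense SPD system in pure Python (HPCG itself runs a preconditioned CG on a large sparse 3D problem over MPI):

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Solve A x = b for symmetric positive-definite A (lists of lists)."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual b - A x (x starts at zero)
    p = r[:]
    rs = sum(v * v for v in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

# Tiny SPD system: a 1D Laplacian-like matrix.
A = [[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
b = [1.0, 2.0, 3.0]
x = conjugate_gradient(A, b)
residual = [b[i] - sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
print(max(abs(v) for v in residual))  # near zero when converged
```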

oneDNN


oneDNN 2.6 - Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU
ms, Fewer Is Better:
  CentOS Stream 9:    2.15938 (SE +/- 0.01538, N = 3; Min: 2.14 / Avg: 2.16 / Max: 2.19; run MIN: 2.04)
  Clear Linux 36990:  2.12637 (SE +/- 0.00145, N = 3; Min: 2.12 / Avg: 2.13 / Max: 2.13; run MIN: 2.03)
  Ubuntu 20.04.1 LTS: 2.13563 (SE +/- 0.01772, N = 3; Min: 2.12 / Avg: 2.14 / Max: 2.17; run MIN: 2.03)
Clear Linux 36990 flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

simdjson


simdjson 2.0 - Throughput Test: DistinctUserID
GB/s, More Is Better:
  CentOS Stream 9:    5.77 (SE +/- 0.01, N = 3; Min: 5.76 / Avg: 5.77 / Max: 5.79)
  Clear Linux 36990:  5.71 (SE +/- 0.00, N = 3; Min: 5.71 / Avg: 5.71 / Max: 5.72)
  Ubuntu 20.04.1 LTS: 5.79 (SE +/- 0.02, N = 3; Min: 5.76 / Avg: 5.79 / Max: 5.82)
Clear Linux 36990 flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -O3

OSPRay


OSPRay 2.10 - Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time
Items Per Second, More Is Better:
  CentOS Stream 9:    25.59 (SE +/- 0.04, N = 3; Min: 25.52 / Avg: 25.59 / Max: 25.63)
  Clear Linux 36990:  25.65 (SE +/- 0.03, N = 3; Min: 25.58 / Avg: 25.65 / Max: 25.69)
  Ubuntu 20.04.1 LTS: 25.29 (SE +/- 0.03, N = 3; Min: 25.23 / Avg: 25.29 / Max: 25.34)

Mobile Neural Network


Mobile Neural Network 2.1 - Model: resnet-v2-50
ms, Fewer Is Better:
  CentOS Stream 9:    8.663 (SE +/- 0.088, N = 15; Min: 7.94 / Avg: 8.66 / Max: 9.23; run MIN: 7.71 / MAX: 20.48)
  Clear Linux 36990:  8.563 (SE +/- 0.071, N = 3; Min: 8.42 / Avg: 8.56 / Max: 8.64; run MIN: 8.16 / MAX: 9.67)
  Ubuntu 20.04.1 LTS: 8.602 (SE +/- 0.036, N = 3; Min: 8.53 / Avg: 8.6 / Max: 8.64; run MIN: 7.85 / MAX: 30.88)
Clear Linux 36990 flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network models, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16 - Device: CPU (FPS, More Is Better)
CentOS Stream 9: 2478.96 (SE +/- 1.20, N = 3; Min: 2477.25 / Max: 2481.28)
Ubuntu 20.04.1 LTS: 2453.45 (SE +/- 0.70, N = 3; Min: 2452.04 / Max: 2454.2)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. The system/openssl test profile relies on the system/OS-supplied openssl binary, whereas the pts/openssl test profile benchmarks a locally-built OpenSSL. Learn more via the OpenBenchmarking.org test page.

OpenSSL (sign/s, More Is Better)
CentOS Stream 9: 16866.1 (SE +/- 205.54, N = 4; Min: 16252.7 / Max: 17111.2)
Clear Linux 36990: 17031.1 (SE +/- 134.27, N = 3; Min: 16763 / Max: 17178.6)
Ubuntu 20.04.1 LTS: 16884.7 (SE +/- 199.65, N = 4; Min: 16293.2 / Max: 17143.2)
1. CentOS Stream 9: OpenSSL 3.0.1 14 Dec 2021 (Library: OpenSSL 3.0.1 14 Dec 2021)
2. Clear Linux 36990: OpenSSL 1.1.1q 5 Jul 2022
3. Ubuntu 20.04.1 LTS: OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
CentOS Stream 9: 3.68404 (SE +/- 0.03477, N = 14; Min: 3.64 / Max: 4.13; MIN: 3.54)
Clear Linux 36990: 3.64846 (SE +/- 0.00647, N = 3; Min: 3.64 / Max: 3.66; MIN: 3.53)
Ubuntu 20.04.1 LTS: 3.66112 (SE +/- 0.03918, N = 14; Min: 3.61 / Max: 4.17; MIN: 3.52)
Flag notes: Clear Linux 36990: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: GPT-2 - Device: CPU - Executor: Parallel (Inferences Per Minute, More Is Better)
CentOS Stream 9: 5269 (SE +/- 32.87, N = 3; Min: 5205.5 / Max: 5316)
Clear Linux 36990: 5259 (SE +/- 26.07, N = 3; Min: 5222 / Max: 5309)
Ubuntu 20.04.1 LTS: 5305 (SE +/- 17.61, N = 3; Min: 5274.5 / Max: 5335.5)
Flag notes: CentOS Stream 9: -flto; Clear Linux 36990: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -flto=auto; Ubuntu 20.04.1 LTS: -flto
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -fno-fat-lto-objects -ldl -lrt

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 23Jun2022 - Model: 20k Atoms (ns/day, More Is Better)
CentOS Stream 9: 35.12 (SE +/- 0.05, N = 3; Min: 35.04 / Max: 35.21)
Clear Linux 36990: 35.05 (SE +/- 0.06, N = 3; Min: 34.94 / Max: 35.13)
Ubuntu 20.04.1 LTS: 34.82 (SE +/- 0.04, N = 3; Min: 34.73 / Max: 34.87)
Flag notes: Clear Linux 36990: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -O3 -lm -ldl

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. The system/openssl test profile relies on the system/OS-supplied openssl binary, whereas the pts/openssl test profile benchmarks a locally-built OpenSSL. Learn more via the OpenBenchmarking.org test page.

OpenSSL (verify/s, More Is Better)
CentOS Stream 9: 1112427.2 (SE +/- 4686.71, N = 4; Min: 1098843.1 / Max: 1119271.3)
Clear Linux 36990: 1119831.8 (SE +/- 706.24, N = 3; Min: 1118951.2 / Max: 1121228.5)
Ubuntu 20.04.1 LTS: 1115888.0 (SE +/- 5029.58, N = 4; Min: 1100809.5 / Max: 1121438.1)
1. CentOS Stream 9: OpenSSL 3.0.1 14 Dec 2021 (Library: OpenSSL 3.0.1 14 Dec 2021)
2. Clear Linux 36990: OpenSSL 1.1.1q 5 Jul 2022
3. Ubuntu 20.04.1 LTS: OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network models, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (FPS, More Is Better)
CentOS Stream 9: 47224.77 (SE +/- 99.47, N = 3; Min: 47088.13 / Max: 47418.3)
Ubuntu 20.04.1 LTS: 47005.85 (SE +/- 29.86, N = 3; Min: 46955.9 / Max: 47059.17)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

KeyDB

A benchmark of KeyDB as a multi-threaded fork of the Redis server. The KeyDB benchmark is conducted using memtier-benchmark. Learn more via the OpenBenchmarking.org test page.

KeyDB 6.2.0 (Ops/sec, More Is Better)
Clear Linux 36990: 485587.54 (SE +/- 1855.84, N = 3; Min: 481904.84 / Max: 487829.79)
Ubuntu 20.04.1 LTS: 487637.29 (SE +/- 3785.28, N = 3; Min: 480846.63 / Max: 493930.95)
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

CentOS Stream 9: The test run did not produce a result.

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.

InfluxDB 1.8.2 - Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, More Is Better)
CentOS Stream 9: 666008.7 (SE +/- 2481.53, N = 3; Min: 661392.3 / Max: 669895.1)
Ubuntu 20.04.1 LTS: 668433.3 (SE +/- 4260.37, N = 3; Min: 662079 / Max: 676526.7)

Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000

Clear Linux 36990: The test quit with a non-zero exit status.

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: x86_64 RdRand (Bogo Ops/s, More Is Better)
CentOS Stream 9: 667284.36 (SE +/- 2562.02, N = 3; Min: 662164.34 / Max: 670020.11)
Clear Linux 36990: 669368.18 (SE +/- 30.50, N = 3; Min: 669330.88 / Max: 669428.64)
Ubuntu 20.04.1 LTS: 666992.32 (SE +/- 2178.91, N = 3; Min: 662636.28 / Max: 669278.33)
Flag notes: Clear Linux 36990: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -latomic; Ubuntu 20.04.1 LTS: -lapparmor -latomic
1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14 - ATPase Simulation - 327,506 Atoms (days/ns, Fewer Is Better)
CentOS Stream 9: 0.28138 (SE +/- 0.00094, N = 3)
Clear Linux 36990: 0.28053 (SE +/- 0.00064, N = 3)
Ubuntu 20.04.1 LTS: 0.28101 (SE +/- 0.00087, N = 3)
(Per-run Min/Avg/Max were all 0.28 at the reported precision.)
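NAMD reports days/ns (wall-clock days needed per simulated nanosecond), which is the reciprocal of the ns/day figure LAMMPS reports earlier in this section. A quick conversion sketch:

```python
def days_per_ns_to_ns_per_day(days_per_ns):
    # days/ns and ns/day are reciprocals: a lower days/ns result
    # means more simulated nanoseconds per day of wall-clock time.
    return 1.0 / days_per_ns

# CentOS Stream 9 result from the table above.
print(round(days_per_ns_to_ns_per_day(0.28138), 2))  # 3.55 ns/day
```

So all three operating systems here simulate roughly 3.55-3.56 ns of the ATPase system per day.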

Apache Spark

This is a benchmark of Apache Spark with its PySpark interface. Apache Spark is an open-source unified analytics engine for large-scale data processing and dealing with big data. This test profile benchmarks Apache Spark in a single-system configuration using spark-submit. The test makes use of DIYBigData's pyspark-benchmark (https://github.com/DIYBigData/pyspark-benchmark/) for generating test data and running various Apache Spark operations. Learn more via the OpenBenchmarking.org test page.

Apache Spark 3.3 - Row Count: 40000000 - Partitions: 500 - Broadcast Inner Join Test Time (Seconds, Fewer Is Better)
Clear Linux 36990: 15.17 (SE +/- 0.83, N = 2)

Apache Spark 3.3 - Row Count: 40000000 - Partitions: 500 - Inner Join Test Time (Seconds, Fewer Is Better)
Clear Linux 36990: 15.59 (SE +/- 0.23, N = 2)

Apache Spark 3.3 - Row Count: 40000000 - Partitions: 500 - Repartition Test Time (Seconds, Fewer Is Better)
Clear Linux 36990: 13.07 (SE +/- 0.21, N = 2)

Apache Spark 3.3 - Row Count: 40000000 - Partitions: 500 - Group By Test Time (Seconds, Fewer Is Better)
Clear Linux 36990: 34.45 (SE +/- 5.10, N = 2)

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: super-resolution-10 - Device: CPU - Executor: Standard (Inferences Per Minute, More Is Better)
CentOS Stream 9: 12260 (SE +/- 43.63, N = 3; Min: 12190.5 / Max: 12340.5)
Clear Linux 36990: 10151 (SE +/- 690.46, N = 12; Min: 6882 / Max: 11950.5)
Ubuntu 20.04.1 LTS: 11619 (SE +/- 135.56, N = 3; Min: 11357.5 / Max: 11812)
Flag notes: CentOS Stream 9: -flto; Clear Linux 36990: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -flto=auto; Ubuntu 20.04.1 LTS: -flto
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.11 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inferences Per Minute, More Is Better)
CentOS Stream 9: 443 (SE +/- 1.17, N = 3; Min: 441.5 / Max: 445.5)
Clear Linux 36990: 521 (SE +/- 17.65, N = 12; Min: 436.5 / Max: 571)
Ubuntu 20.04.1 LTS: 483 (SE +/- 18.60, N = 12; Min: 437 / Max: 574)
Flag notes: CentOS Stream 9: -flto; Clear Linux 36990: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -flto=auto; Ubuntu 20.04.1 LTS: -flto
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.11 - Model: yolov4 - Device: CPU - Executor: Standard (Inferences Per Minute, More Is Better)
CentOS Stream 9: 694 (SE +/- 1.17, N = 3; Min: 692 / Max: 696)
Clear Linux 36990: 636 (SE +/- 12.00, N = 12; Min: 604.5 / Max: 699)
Ubuntu 20.04.1 LTS: 671 (SE +/- 7.60, N = 4; Min: 661 / Max: 693)
Flag notes: CentOS Stream 9: -flto; Clear Linux 36990: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -flto=auto; Ubuntu 20.04.1 LTS: -flto
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.11 - Model: GPT-2 - Device: CPU - Executor: Standard (Inferences Per Minute, More Is Better)
CentOS Stream 9: 11045 (SE +/- 388.59, N = 12; Min: 9069 / Max: 12021.5)
Clear Linux 36990: 10211 (SE +/- 346.22, N = 9; Min: 9378.5 / Max: 12014.5)
Ubuntu 20.04.1 LTS: 10856 (SE +/- 93.38, N = 8; Min: 10315.5 / Max: 11047.5)
Flag notes: CentOS Stream 9: -flto; Clear Linux 36990: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -flto=auto; Ubuntu 20.04.1 LTS: -flto
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -fno-fat-lto-objects -ldl -lrt

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network models, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (ms, Fewer Is Better)
CentOS Stream 9: 1.50 (SE +/- 0.05, N = 15; Min: 1.19 / Max: 1.85; MIN: 0.34 / MAX: 29.48)
Ubuntu 20.04.1 LTS: 0.99 (SE +/- 0.07, N = 12; Min: 0.73 / Max: 1.36; MIN: 0.21 / MAX: 76.7)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS, More Is Better)
CentOS Stream 9: 42731.93 (SE +/- 1567.95, N = 15; Min: 32765.75 / Max: 52857.39)
Ubuntu 20.04.1 LTS: 66238.83 (SE +/- 5640.72, N = 12; Min: 43827.36 / Max: 88837.64)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.2.dev - Model: Person Vehicle Bike Detection FP16 - Device: CPU (ms, Fewer Is Better)
CentOS Stream 9: 13.60 (SE +/- 0.30, N = 15; Min: 10.69 / Max: 14.26; MIN: 8.57 / MAX: 68.28)
Ubuntu 20.04.1 LTS: 18.31 (SE +/- 0.01, N = 3; Min: 18.29 / Max: 18.33; MIN: 10.08 / MAX: 179.25)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

OpenVINO 2022.2.dev - Model: Person Vehicle Bike Detection FP16 - Device: CPU (FPS, More Is Better)
CentOS Stream 9: 1478.64 (SE +/- 39.85, N = 15; Min: 1398 / Max: 1868.26)
Ubuntu 20.04.1 LTS: 2178.70 (SE +/- 1.52, N = 3; Min: 2176.32 / Max: 2181.53)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: MobileNet v2 (ms, Fewer Is Better)
CentOS Stream 9: 378.88 (SE +/- 4.68, N = 4; Min: 373.95 / Max: 392.9; MIN: 371.88 / MAX: 634.44)
Clear Linux 36990: 348.82 (SE +/- 0.42, N = 3; Min: 348.16 / Max: 349.61; MIN: 346.87 / MAX: 355.44)
Ubuntu 20.04.1 LTS: 463.98 (SE +/- 9.70, N = 15; Min: 386.51 / Max: 514.43; MIN: 369.59 / MAX: 652.1)
Flag notes: Clear Linux 36990: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking rather than any GPU-accelerated path. MNN can make use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 - Model: mobilenet-v1-1.0 (ms, Fewer Is Better)
CentOS Stream 9: 2.090 (SE +/- 0.047, N = 15; Min: 1.78 / Max: 2.24; MIN: 1.76 / MAX: 3.93)
Clear Linux 36990: 2.179 (SE +/- 0.016, N = 3; Min: 2.15 / Max: 2.21; MIN: 2.08 / MAX: 2.41)
Ubuntu 20.04.1 LTS: 2.178 (SE +/- 0.016, N = 3; Min: 2.15 / Max: 2.2; MIN: 2.08 / MAX: 2.58)
Flag notes: Clear Linux 36990: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1 - Model: SqueezeNetV1.0 (ms, Fewer Is Better)
CentOS Stream 9: 3.956 (SE +/- 0.075, N = 15; Min: 3.67 / Max: 4.46; MIN: 3.51 / MAX: 9.33)
Clear Linux 36990: 4.244 (SE +/- 0.058, N = 3; Min: 4.17 / Max: 4.36; MIN: 3.95 / MAX: 8.29)
Ubuntu 20.04.1 LTS: 4.258 (SE +/- 0.029, N = 3; Min: 4.21 / Max: 4.31; MIN: 4.15 / MAX: 9.78)
Flag notes: Clear Linux 36990: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1 - Model: squeezenetv1.1 (ms, Fewer Is Better)
CentOS Stream 9: 2.356 (SE +/- 0.050, N = 15; Min: 2.07 / Max: 2.67; MIN: 2.03 / MAX: 5.76)
Clear Linux 36990: 2.529 (SE +/- 0.014, N = 3; Min: 2.51 / Max: 2.56; MIN: 2.48 / MAX: 2.83)
Ubuntu 20.04.1 LTS: 2.590 (SE +/- 0.012, N = 3; Min: 2.57 / Max: 2.61; MIN: 2.54 / MAX: 5.99)
Flag notes: Clear Linux 36990: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1 - Model: nasnet (ms, Fewer Is Better)
CentOS Stream 9: 12.10 (SE +/- 0.23, N = 15; Min: 10.69 / Max: 13.11; MIN: 10.54 / MAX: 23.03)
Clear Linux 36990: 12.81 (SE +/- 0.09, N = 3; Min: 12.65 / Max: 12.97; MIN: 12.47 / MAX: 16.42)
Ubuntu 20.04.1 LTS: 12.92 (SE +/- 0.16, N = 3; Min: 12.63 / Max: 13.2; MIN: 10.67 / MAX: 25.28)
Flag notes: Clear Linux 36990: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.14 - Test: System V Message Passing (Bogo Ops/s, More Is Better)
CentOS Stream 9: 7093379.73 (SE +/- 85352.98, N = 4; Min: 6837619.62 / Max: 7189613.49)
Clear Linux 36990: 8684030.90 (SE +/- 1671.17, N = 3; Min: 8680900.29 / Max: 8686610.06)
Ubuntu 20.04.1 LTS: 4420124.12 (SE +/- 182521.68, N = 15; Min: 2906124.63 / Max: 5110222.34)
Flag notes: Clear Linux 36990: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -latomic; Ubuntu 20.04.1 LTS: -lapparmor -latomic
1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Stress-NG 0.14 - Test: Socket Activity (Bogo Ops/s, More Is Better)
CentOS Stream 9: 2460.37 (SE +/- 900.65, N = 15; Min: 6 / Max: 8954.24)
Clear Linux 36990: 36452.69 (SE +/- 324.72, N = 3; Min: 35834.48 / Max: 36934.11)
Ubuntu 20.04.1 LTS: 45595.60 (SE +/- 570.10, N = 15; Min: 41828.77 / Max: 48296.21)
Flag notes: Clear Linux 36990: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -latomic; Ubuntu 20.04.1 LTS: -lapparmor -latomic
1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Test: IO_uring

CentOS Stream 9: The test run did not produce a result.

Stress-NG 0.14 - Test: Atomic (Bogo Ops/s, More Is Better)
CentOS Stream 9: 187775.77 (SE +/- 3961.98, N = 15; Min: 164376.47 / Max: 204483.19)
Clear Linux 36990: 145035.86 (SE +/- 3304.16, N = 15; Min: 127421.65 / Max: 159671.65)
Ubuntu 20.04.1 LTS: 183431.66 (SE +/- 3618.17, N = 15; Min: 159582.8 / Max: 201202.47)
Flag notes: Clear Linux 36990: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -latomic; Ubuntu 20.04.1 LTS: -lapparmor -latomic
1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

Stress-NG 0.14 - Test: Futex (Bogo Ops/s, More Is Better)
CentOS Stream 9: 1088788.92 (SE +/- 73263.26, N = 15; Min: 696500.15 / Max: 1490893.83)
Clear Linux 36990: 1140633.72 (SE +/- 60890.67, N = 15; Min: 846766.74 / Max: 1364896.25)
Ubuntu 20.04.1 LTS: 942679.72 (SE +/- 59760.04, N = 15; Min: 619867.04 / Max: 1247331.68)
Flag notes: Clear Linux 36990: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -laio -latomic; Ubuntu 20.04.1 LTS: -lapparmor -latomic
1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread

memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcached traffic-generation and benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:10

CentOS Stream 9: The test run did not produce a result. E: error: failed to prepare thread 112 for test.

Clear Linux 36990: The test run did not produce a result. E: error: failed to prepare thread 56 for test.

Ubuntu 20.04.1 LTS: The test run did not produce a result. E: error: failed to prepare thread 56 for test.

Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:10

CentOS Stream 9: The test run did not produce a result.

Clear Linux 36990: The test run did not produce a result.

Ubuntu 20.04.1 LTS: The test run did not produce a result.

Protocol: Redis - Clients: 500 - Set To Get Ratio: 5:1

CentOS Stream 9: The test run did not produce a result. E: error: failed to prepare thread 112 for test.

Clear Linux 36990: The test run did not produce a result. E: error: failed to prepare thread 56 for test.

Ubuntu 20.04.1 LTS: The test run did not produce a result. E: error: failed to prepare thread 56 for test.

memtier_benchmark 1.4 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:10 (Ops/sec, More Is Better)
CentOS Stream 9: 1398073.70 (SE +/- 67672.39, N = 12; Min: 1126743.33 / Max: 1952770.48)
Clear Linux 36990: 1994991.48 (SE +/- 83568.62, N = 13; Min: 1190376.76 / Max: 2288695.61)
Ubuntu 20.04.1 LTS: 1755483.28 (SE +/- 17388.30, N = 3; Min: 1732597.08 / Max: 1789602.93)
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Protocol: Redis - Clients: 100 - Set To Get Ratio: 5:1

CentOS Stream 9: The test run did not produce a result.

Clear Linux 36990: The test run did not produce a result.

Ubuntu 20.04.1 LTS: The test run did not produce a result.

memtier_benchmark 1.4 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 5:1 (Ops/sec, More Is Better)
CentOS Stream 9: 1339297.91 (SE +/- 63962.41, N = 12; Min: 966119.24 / Max: 1668140.37)
Clear Linux 36990: 1760873.07 (SE +/- 66661.35, N = 12; Min: 1097948.3 / Max: 1949244.22)
Ubuntu 20.04.1 LTS: 1457157.14 (SE +/- 53478.91, N = 15; Min: 1046747.75 / Max: 1723232.26)
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 500 - Mode: Read Only - Average Latency (ms, Fewer Is Better)
CentOS Stream 9: 0.270 (SE +/- 0.005, N = 12; Min: 0.25 / Max: 0.29)
Clear Linux 36990: 0.275 (SE +/- 0.007, N = 12; Min: 0.25 / Max: 0.3)
Ubuntu 20.04.1 LTS: 0.285 (SE +/- 0.004, N = 12; Min: 0.27 / Max: 0.31)
Flag notes: CentOS Stream 9: -O2; Clear Linux 36990: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop; Ubuntu 20.04.1 LTS: -O2
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -lpgcommon -lpgport -lpq -lm

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 500 - Mode: Read Only (TPS, more is better)
CentOS Stream 9:    Avg: 1855656 (SE +/- 30425.21, N = 12; Min: 1702189.9 / Max: 1970416.52)
Clear Linux 36990:  Avg: 1831665 (SE +/- 42947.24, N = 12; Min: 1643669.33 / Max: 2003918.94)
Ubuntu 20.04.1 LTS: Avg: 1760440 (SE +/- 24026.09, N = 12; Min: 1634369.08 / Max: 1852918.69)
Per-OS flags - CentOS Stream 9: -O2; Clear Linux 36990: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop; Ubuntu 20.04.1 LTS: -O2
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -lpgcommon -lpgport -lpq -lm

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 250 - Mode: Read Only - Average Latency (ms, fewer is better)
CentOS Stream 9:    Avg: 0.150 (SE +/- 0.001, N = 3; Min: 0.15 / Max: 0.15)
Clear Linux 36990:  Avg: 0.132 (SE +/- 0.004, N = 12; Min: 0.12 / Max: 0.15)
Ubuntu 20.04.1 LTS: Avg: 0.157 (SE +/- 0.002, N = 12; Min: 0.14 / Max: 0.16)
Per-OS flags - CentOS Stream 9: -O2; Clear Linux 36990: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop; Ubuntu 20.04.1 LTS: -O2
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -lpgcommon -lpgport -lpq -lm

PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 250 - Mode: Read Only (TPS, more is better)
CentOS Stream 9:    Avg: 1669388 (SE +/- 9583.19, N = 3; Min: 1655270.14 / Max: 1687672.99)
Clear Linux 36990:  Avg: 1913115 (SE +/- 57458.70, N = 12; Min: 1661379.78 / Max: 2124955.9)
Ubuntu 20.04.1 LTS: Avg: 1593950 (SE +/- 17628.71, N = 12; Min: 1530613.95 / Max: 1778797.51)
Per-OS flags - CentOS Stream 9: -O2; Clear Linux 36990: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop; Ubuntu 20.04.1 LTS: -O2
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -lpgcommon -lpgport -lpq -lm

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation, focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2022-05-18 - Model: Inception ResNet V2 (Microseconds, fewer is better)
CentOS Stream 9:    Avg: 47297.8 (SE +/- 268.62, N = 3; Min: 46787.4 / Max: 47698.2)
Clear Linux 36990:  Avg: 49331.9 (SE +/- 583.51, N = 15; Min: 47180.5 / Max: 54041.2)
Ubuntu 20.04.1 LTS: Avg: 54111.0 (SE +/- 1518.21, N = 15; Min: 47734.3 / Max: 69012.2)

TensorFlow Lite 2022-05-18 - Model: Mobilenet Quant (Microseconds, fewer is better)
CentOS Stream 9:    Avg: 9540.86 (SE +/- 100.45, N = 3; Min: 9372.16 / Max: 9719.7)
Clear Linux 36990:  Avg: 8576.95 (SE +/- 81.76, N = 6; Min: 8277.81 / Max: 8873.84)
Ubuntu 20.04.1 LTS: Avg: 10747.97 (SE +/- 337.08, N = 15; Min: 9333.34 / Max: 12938.7)

TensorFlow Lite 2022-05-18 - Model: Mobilenet Float (Microseconds, fewer is better)
CentOS Stream 9:    Avg: 4240.41 (SE +/- 499.70, N = 12; Min: 3338.31 / Max: 9146.21)
Clear Linux 36990:  Avg: 3701.27 (SE +/- 75.96, N = 15; Min: 3362.9 / Max: 4201.56)
Ubuntu 20.04.1 LTS: Avg: 3340.60 (SE +/- 20.34, N = 3; Min: 3303.43 / Max: 3373.51)

TensorFlow Lite 2022-05-18 - Model: NASNet Mobile (Microseconds, fewer is better)
CentOS Stream 9:    Avg: 68713.0 (SE +/- 3728.57, N = 12; Min: 62607.9 / Max: 109269)
Clear Linux 36990:  Avg: 68566.7 (SE +/- 935.16, N = 15; Min: 64020.3 / Max: 79083.9)
Ubuntu 20.04.1 LTS: Avg: 80034.0 (SE +/- 3387.75, N = 12; Min: 65377.7 / Max: 104357)

TensorFlow Lite 2022-05-18 - Model: Inception V4 (Microseconds, fewer is better)
CentOS Stream 9:    Avg: 73896.5 (SE +/- 21727.22, N = 15; Min: 35453.9 / Max: 362681)
Clear Linux 36990:  Avg: 42370.3 (SE +/- 2536.95, N = 15; Min: 35507 / Max: 62938.2)
Ubuntu 20.04.1 LTS: Avg: 36132.7 (SE +/- 264.98, N = 15; Min: 35035.8 / Max: 38853.5)

TensorFlow Lite 2022-05-18 - Model: SqueezeNet (Microseconds, fewer is better)
CentOS Stream 9:    Avg: 16614.4 (SE +/- 5506.54, N = 12; Min: 5327.65 / Max: 68861)
Clear Linux 36990:  Avg: 6244.19 (SE +/- 370.01, N = 12; Min: 5465.21 / Max: 10090.6)
Ubuntu 20.04.1 LTS: Avg: 5617.26 (SE +/- 58.03, N = 15; Min: 5285.06 / Max: 6097.49)

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.
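The SET/GET configurations measured below could be approximated with Redis's bundled redis-benchmark utility; this sketch assumes a redis-server 7.x instance already listening on localhost, and the request count and thread count are illustrative choices, not values from this result file:

```shell
# Approximate the tested configurations: SET at 500 and 1000 parallel
# connections, GET at 500. --threads enables multi-threaded load generation.
redis-benchmark -t set -c 500  -n 1000000 --threads 64
redis-benchmark -t set -c 1000 -n 1000000 --threads 64
redis-benchmark -t get -c 500  -n 1000000 --threads 64
```

Each run prints a requests-per-second figure comparable to the results shown below.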

Redis 7.0.4 - Test: SET - Parallel Connections: 1000 (Requests Per Second, more is better)
CentOS Stream 9:    Avg: 1847194.12 (SE +/- 55692.46, N = 12; Min: 1444277.75 / Max: 2020238.25)
Clear Linux 36990:  Avg: 2078925.13 (SE +/- 21642.05, N = 5; Min: 1993126.88 / Max: 2110900.25)
Ubuntu 20.04.1 LTS: Avg: 1851066.21 (SE +/- 13612.37, N = 3; Min: 1829513.62 / Max: 1876247.5)
Clear Linux 36990 extra compiler flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Redis 7.0.4 - Test: SET - Parallel Connections: 500 (Requests Per Second, more is better)
CentOS Stream 9:    Avg: 1931278.62 (SE +/- 47157.16, N = 12; Min: 1439413.25 / Max: 2022608.12)
Clear Linux 36990:  Avg: 2083152.81 (SE +/- 26407.67, N = 15; Min: 1853257.88 / Max: 2209886.75)
Ubuntu 20.04.1 LTS: Avg: 1835435.10 (SE +/- 20191.86, N = 5; Min: 1781927.88 / Max: 1880935)
Clear Linux 36990 extra compiler flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Redis 7.0.4 - Test: GET - Parallel Connections: 500 (Requests Per Second, more is better)
CentOS Stream 9:    Avg: 2018201.09 (SE +/- 89203.76, N = 15; Min: 1472476.75 / Max: 2357865.5)
Clear Linux 36990:  Avg: 2765192.67 (SE +/- 22655.99, N = 3; Min: 2724096.25 / Max: 2802269.5)
Ubuntu 20.04.1 LTS: Avg: 2174436.56 (SE +/- 26054.27, N = 4; Min: 2110365.75 / Max: 2236155.5)
Clear Linux 36990 extra compiler flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Dragonflydb

Dragonfly is an open-source database server positioned as a "modern Redis replacement" that aims to be the fastest memory store while remaining compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, memtier_benchmark is used: a NoSQL Redis/Memcached traffic-generation and benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.
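A memtier_benchmark invocation mirroring the Clients / Set:Get configurations listed below would look roughly like this; the server address, thread count, and test duration are illustrative assumptions, and whether "Clients" in the test profile maps to total connections or connections per thread is not stated in this result file:

```shell
# Drive a Redis-protocol server (Dragonfly or Redis) with a 1:10 SET:GET mix.
# --clients is per thread, so total connections = --clients x --threads.
memtier_benchmark --protocol=redis --server=127.0.0.1 --port=6379 \
    --clients=50 --threads=4 --ratio=1:10 --test-time=60
```

Varying --ratio (e.g. 5:1, 1:5, 1:1) and the client count reproduces the matrix of configurations attempted in this section.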

Clients: 200 - Set To Get Ratio: 5:1

CentOS Stream 9: The test run did not produce a result.

Clear Linux 36990: The test run did not produce a result.

Ubuntu 20.04.1 LTS: The test run did not produce a result.

Clients: 200 - Set To Get Ratio: 1:5

CentOS Stream 9: The test run did not produce a result.

Clear Linux 36990: The test run did not produce a result.

Ubuntu 20.04.1 LTS: The test run did not produce a result.

Clients: 200 - Set To Get Ratio: 1:1

CentOS Stream 9: The test run did not produce a result.

Clear Linux 36990: The test run did not produce a result.

Ubuntu 20.04.1 LTS: The test run did not produce a result.

Clients: 50 - Set To Get Ratio: 5:1

CentOS Stream 9: The test run did not produce a result.

Clients: 50 - Set To Get Ratio: 1:5

CentOS Stream 9: The test run did not produce a result.

Clients: 50 - Set To Get Ratio: 1:1

CentOS Stream 9: The test run did not produce a result.

Apache Spark

This is a benchmark of Apache Spark with its PySpark interface. Apache Spark is an open-source unified analytics engine for large-scale data processing and dealing with big data. This test profile benchmarks Apache Spark in a single-system configuration using spark-submit. The test makes use of DIYBigData's pyspark-benchmark (https://github.com/DIYBigData/pyspark-benchmark/) for generating test data and exercising various Apache Spark operations. Learn more via the OpenBenchmarking.org test page.
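The "Calculate Pi" workload below is a Monte Carlo estimate of Pi, a standard Spark demo in which random points are scattered over the unit square and the hit rate inside the quarter circle is counted. This plain-Python sketch shows the underlying computation without PySpark (the real benchmark distributes the sampling across a DataFrame):

```python
import random

def estimate_pi(samples: int, seed: int = 42) -> float:
    """Monte Carlo estimate of Pi: the fraction of uniform random points in
    the unit square that land inside the quarter circle approaches pi/4."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / samples

print(estimate_pi(1_000_000))
```

Spark parallelizes exactly this kind of embarrassingly parallel sampling, which is why the workload is dominated by scheduling overhead rather than raw arithmetic at small partition counts.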

Apache Spark 3.3 - Row Count: 40000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe (Seconds, fewer is better)
CentOS Stream 9:    Avg: 2.79 (SE +/- 0.09, N = 3; Min: 2.62 / Max: 2.92)
Clear Linux 36990:  Avg: 2.04 (SE +/- 0.04, N = 15; Min: 1.86 / Max: 2.3)
Ubuntu 20.04.1 LTS: Avg: 2.97 (SE +/- 0.19, N = 3; Min: 2.75 / Max: 3.36)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
CentOS Stream 9:    Avg: 37.96 (SE +/- 5.68, N = 15; Min: 4.73 / Max: 66.09; reported MIN: 3.48)
Clear Linux 36990:  Avg: 12.27 (SE +/- 0.41, N = 12; Min: 10.57 / Max: 14.88; reported MIN: 9.86)
Ubuntu 20.04.1 LTS: Avg: 41.89 (SE +/- 5.39, N = 12; Min: 15.2 / Max: 64.36; reported MIN: 10.95)
Clear Linux 36990 extra compiler flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

oneDNN 2.6 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
CentOS Stream 9:    Avg: 447.62 (SE +/- 7.22, N = 15; Min: 393.11 / Max: 486.98; reported MIN: 376.51)
Clear Linux 36990:  Avg: 487.12 (SE +/- 4.67, N = 15; Min: 454.04 / Max: 533.05; reported MIN: 431.07)
Ubuntu 20.04.1 LTS: Avg: 497.81 (SE +/- 8.91, N = 15; Min: 402.9 / Max: 536.79; reported MIN: 385.19)
Clear Linux 36990 extra compiler flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

oneDNN 2.6 - Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
CentOS Stream 9:    Avg: 2.38563 (SE +/- 0.07640, N = 15; Min: 1.87 / Max: 3.19; reported MIN: 1.7)
Clear Linux 36990:  Avg: 2.30251 (SE +/- 0.02882, N = 15; Min: 2.1 / Max: 2.49; reported MIN: 1.76)
Ubuntu 20.04.1 LTS: Avg: 2.44691 (SE +/- 0.05270, N = 15; Min: 1.94 / Max: 2.84; reported MIN: 1.76)
Clear Linux 36990 extra compiler flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

oneDNN 2.6 - Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
CentOS Stream 9:    Avg: 5.40603 (SE +/- 0.32475, N = 15; Min: 3.69 / Max: 8.91; reported MIN: 3.28)
Clear Linux 36990:  Avg: 4.64086 (SE +/- 0.07618, N = 15; Min: 4.23 / Max: 5.24; reported MIN: 3.55)
Ubuntu 20.04.1 LTS: Avg: 5.80053 (SE +/- 0.71740, N = 15; Min: 3.36 / Max: 14.79; reported MIN: 3.08)
Clear Linux 36990 extra compiler flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

Stockfish

This is a test of Stockfish, an advanced open-source chess engine written in C++17 (as the build flags below confirm) that can scale up to 512 CPU threads. Learn more via the OpenBenchmarking.org test page.
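Stockfish ships a built-in bench command that searches a fixed set of positions and prints a nodes-per-second figure like the one reported below. The positional arguments here (transposition table size in MB, threads, search depth) follow Stockfish's bench convention, but the specific values are illustrative assumptions, not the ones used by this test profile:

```shell
# Run Stockfish's built-in benchmark; it prints "Nodes/second" at the end.
stockfish bench 16 80 26   # hash MB, thread count, search depth
```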

Stockfish 15 - Total Time (Nodes Per Second, more is better)
CentOS Stream 9:    Avg: 179473129 (SE +/- 2364357.21, N = 15; Min: 164540890 / Max: 195156974)
Clear Linux 36990:  Avg: 186079628 (SE +/- 3123886.35, N = 15; Min: 166800438 / Max: 209129501)
Ubuntu 20.04.1 LTS: Avg: 174792587 (SE +/- 1938385.82, N = 15; Min: 159673139 / Max: 188968845)
Clear Linux 36990 extra compiler flags: -pipe -fexceptions -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -lgcov -m64 -lpthread -fno-exceptions -std=c++17 -fno-peel-loops -fno-tracer -pedantic -O3 -msse -msse3 -mpopcnt -mavx2 -mavx512f -mavx512bw -mavx512vnni -mavx512dq -mavx512vl -msse4.1 -mssse3 -msse2 -mbmi2 -flto -flto=jobserver

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.
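GraphicsMagick's own benchmark driver can time the same operations standalone; this sketch assumes a file named sample.jpg stands in for the 6000x4000 source image, and the 60-second duration is an illustrative choice:

```shell
# gm benchmark repeats a convert pipeline for a fixed duration and reports
# iterations per minute, matching the units used in the results below.
gm benchmark -duration 60 convert sample.jpg -colorspace HWB null:
gm benchmark -duration 60 convert sample.jpg -resize 50% null:
```

The null: output sink discards the result so only the imaging operation itself is timed.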

GraphicsMagick 1.3.38 - Operation: HWB Color Space (Iterations Per Minute, more is better)
CentOS Stream 9:    Avg: 1138 (SE +/- 28.49, N = 12; Min: 938 / Max: 1302)
Clear Linux 36990:  Avg: 1737 (SE +/- 4.16, N = 3; Min: 1729 / Max: 1743)
Ubuntu 20.04.1 LTS: Avg: 636 (SE +/- 6.23, N = 15; Min: 608 / Max: 685)
Per-OS flags - CentOS Stream 9: -O2 -lbz2; Clear Linux 36990: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -ltiff -lfreetype -lSM -lICE -lbz2 -lxml2; Ubuntu 20.04.1 LTS: -O2 -ljbig -ltiff -lfreetype -lSM -lICE -llzma
1. (CC) gcc options: -fopenmp -ljpeg -lXext -lX11 -lz -lm -lpthread

GraphicsMagick 1.3.38 - Operation: Resizing (Iterations Per Minute, more is better)
CentOS Stream 9:    Avg: 2748 (SE +/- 27.10, N = 3; Min: 2717 / Max: 2802)
Clear Linux 36990:  Avg: 2851 (SE +/- 35.23, N = 4; Min: 2753 / Max: 2913)
Ubuntu 20.04.1 LTS: Avg: 417 (SE +/- 9.26, N = 15; Min: 351 / Max: 466)
Per-OS flags - CentOS Stream 9: -O2 -lbz2; Clear Linux 36990: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -ltiff -lfreetype -lSM -lICE -lbz2 -lxml2; Ubuntu 20.04.1 LTS: -O2 -ljbig -ltiff -lfreetype -lSM -lICE -llzma
1. (CC) gcc options: -fopenmp -ljpeg -lXext -lX11 -lz -lm -lpthread

Node.js Express HTTP Load Test

A Node.js Express server paired with a Node-based loadtest client for HTTP benchmarking. Learn more via the OpenBenchmarking.org test page.

Node.js Express HTTP Load Test (Requests Per Second, more is better)
CentOS Stream 9:    Avg: 4910 (SE +/- 73.75, N = 15; Min: 4291 / Max: 5367)
Clear Linux 36990:  Avg: 9808 (SE +/- 47.88, N = 3; Min: 9730 / Max: 9895)
Ubuntu 20.04.1 LTS: Avg: 6210 (SE +/- 252.99, N = 12; Min: 5063 / Max: 7326)
1. Nodejs

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, spanning workloads from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 - Test: Finagle HTTP Requests (ms, fewer is better)
CentOS Stream 9:    Avg: 8693.9 (SE +/- 154.17, N = 12; Min: 8107.68 / Max: 9648.75; MIN: 6648.05 / MAX: 15659.82)
Clear Linux 36990:  Avg: 5950.5 (SE +/- 56.01, N = 3; Min: 5840.72 / Max: 6024.72; MIN: 5353.91 / MAX: 6140.39)
Ubuntu 20.04.1 LTS: Avg: 8478.8 (SE +/- 197.62, N = 13; Min: 7693.23 / Max: 9573.87; MIN: 6706.47 / MAX: 17648.26)

DaCapo Benchmark

This test runs the DaCapo Benchmarks, written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: Jython (msec, fewer is better)
CentOS Stream 9:    Avg: 5600 (SE +/- 189.31, N = 16; Min: 4259 / Max: 6538)
Clear Linux 36990:  Avg: 3683 (SE +/- 7.13, N = 4; Min: 3662 / Max: 3692)
Ubuntu 20.04.1 LTS: Avg: 4819 (SE +/- 138.80, N = 20; Min: 4293 / Max: 6445)

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
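The two encode settings tested below map to straightforward cwebp invocations; sample.jpg here stands in for the 6000x4000 source image used by the test profile:

```shell
# "Default" settings encode at cwebp's default quality of 75;
# the second run forces lossy quality 100, matching the two results below.
cwebp sample.jpg -o default.webp
cwebp -q 100 sample.jpg -o quality100.webp
```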

WebP Image Encode 1.1 - Encode Settings: Quality 100 (Encode Time - Seconds, fewer is better)
CentOS Stream 9:    Avg: 3.044 (SE +/- 0.065, N = 15; Min: 2.8 / Max: 3.4)
Clear Linux 36990:  Avg: 2.656 (SE +/- 0.002, N = 3; Min: 2.65 / Max: 2.66)
Ubuntu 20.04.1 LTS: Avg: 2.931 (SE +/- 0.027, N = 13; Min: 2.9 / Max: 3.26)
Per-OS flags - CentOS Stream 9: -O2; Clear Linux 36990: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -ltiff; Ubuntu 20.04.1 LTS: -O2 -ltiff
1. (CC) gcc options: -fvisibility=hidden -lm -lpng16 -ljpeg

WebP Image Encode 1.1 - Encode Settings: Default (Encode Time - Seconds, fewer is better)
CentOS Stream 9:    Avg: 2.163 (SE +/- 0.069, N = 15; Min: 1.83 / Max: 2.4)
Clear Linux 36990:  Avg: 1.662 (SE +/- 0.003, N = 3; Min: 1.66 / Max: 1.67)
Ubuntu 20.04.1 LTS: Avg: 1.975 (SE +/- 0.039, N = 15; Min: 1.9 / Max: 2.28)
Per-OS flags - CentOS Stream 9: -O2; Clear Linux 36990: -O3 -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop -ltiff; Ubuntu 20.04.1 LTS: -O2 -ltiff
1. (CC) gcc options: -fvisibility=hidden -lm -lpng16 -ljpeg

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 23Jun2022 - Model: Rhodopsin Protein (ns/day, more is better)
CentOS Stream 9:    Avg: 30.87 (SE +/- 0.06, N = 3; Min: 30.81 / Max: 30.98)
Clear Linux 36990:  Avg: 36.00 (SE +/- 0.08, N = 3; Min: 35.87 / Max: 36.14)
Ubuntu 20.04.1 LTS: Avg: 26.75 (SE +/- 0.49, N = 15; Min: 23.5 / Max: 29.81)
Clear Linux 36990 extra compiler flags: -pipe -fexceptions -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -mrelax-cmpxchg-loop
1. (CXX) g++ options: -O3 -lm -ldl

210 Results Shown

Zstd Compression
Stress-NG
Zstd Compression
PostgreSQL pgbench:
  100 - 500 - Read Write - Average Latency
  100 - 500 - Read Write
Apache Spark
DaCapo Benchmark
PostgreSQL pgbench:
  100 - 250 - Read Write
  100 - 250 - Read Write - Average Latency
C-Blosc:
  blosclz shuffle
  blosclz bitshuffle
Dragonflydb:
  50 - 1:5
  50 - 1:1
  50 - 5:1
Natron
Stress-NG
Renaissance
ONNX Runtime
x264
Renaissance:
  Apache Spark Bayes
  Rand Forest
  ALS Movie Lens
Stress-NG
SVT-AV1
OpenVINO:
  Person Detection FP16 - CPU
  Person Detection FP32 - CPU
VP9 libvpx Encoding
OpenVINO:
  Face Detection FP16 - CPU
  Vehicle Detection FP16 - CPU
Timed LLVM Compilation
SVT-AV1
OpenVINO:
  Vehicle Detection FP16-INT8 - CPU
  Face Detection FP16-INT8 - CPU
GraphicsMagick
SVT-AV1
OpenVINO
libavif avifenc
SVT-AV1
ClickHouse:
  100M Rows Web Analytics Dataset, First Run / Cold Cache
  100M Rows Web Analytics Dataset, Second Run
libavif avifenc
ClickHouse
libavif avifenc
SVT-HEVC:
  10 - Bosphorus 4K
  7 - Bosphorus 4K
OSPRay
Redis
SVT-VP9:
  Visual Quality Optimized - Bosphorus 4K
  PSNR/SSIM Optimized - Bosphorus 4K
  VMAF Optimized - Bosphorus 4K
7-Zip Compression
GraphicsMagick:
  Rotate
  Sharpen
  Swirl
Zstd Compression
TNN
Node.js V8 Web Tooling Benchmark
Stress-NG
Zstd Compression:
  19, Long Mode - Compression Speed
  3 - Compression Speed
  19 - Compression Speed
Stress-NG
WebP Image Encode
Stress-NG
Redis
ONNX Runtime
Unpacking The Linux Kernel
Stress-NG
Timed GDB GNU Debugger Compilation
Stress-NG
Apache HTTP Server
libavif avifenc
Mobile Neural Network
Stress-NG
Apache Spark
Stress-NG
ASTC Encoder
Redis
ONNX Runtime
WebP Image Encode
OpenVINO
libavif avifenc
ASTC Encoder
oneDNN
ONNX Runtime
Zstd Compression
Stress-NG
WebP Image Encode
ASTC Encoder
GraphicsMagick
Zstd Compression
Stress-NG
Zstd Compression:
  8 - Decompression Speed
  8, Long Mode - Decompression Speed
OpenVINO:
  Weld Porosity Detection FP16-INT8 - CPU:
    FPS
    ms
OSPRay
nginx
Timed Linux Kernel Compilation
OpenVINO
OSPRay
Mobile Neural Network
OpenVINO
GROMACS
Stress-NG
simdjson
OpenVINO
TNN
Blender
Stress-NG
ASTC Encoder
Stress-NG
OSPRay Studio
ONNX Runtime
simdjson
Blender
OSPRay Studio:
  3 - 4K - 16 - Path Tracer
  2 - 4K - 32 - Path Tracer
  1 - 4K - 16 - Path Tracer
simdjson
Renaissance
Zstd Compression
Blender
OSPRay Studio
Stress-NG
ONNX Runtime
OSPRay Studio
7-Zip Compression
OpenVINO
Blender
Zstd Compression
Blender
TNN
oneDNN
Mobile Neural Network
ONNX Runtime
simdjson
OpenVINO:
  Vehicle Detection FP16 - CPU
  Face Detection FP16-INT8 - CPU
OSPRay:
  particle_volume/ao/real_time
  particle_volume/scivis/real_time
Stress-NG
OpenVINO:
  Weld Porosity Detection FP16 - CPU
  Vehicle Detection FP16-INT8 - CPU
High Performance Conjugate Gradient
oneDNN
simdjson
OSPRay
Mobile Neural Network
OpenVINO
OpenSSL
oneDNN
ONNX Runtime
LAMMPS Molecular Dynamics Simulator
OpenSSL
OpenVINO
KeyDB
InfluxDB
Stress-NG
NAMD
Apache Spark:
  40000000 - 500 - Broadcast Inner Join Test Time
  40000000 - 500 - Inner Join Test Time
  40000000 - 500 - Repartition Test Time
  40000000 - 500 - Group By Test Time
ONNX Runtime:
  super-resolution-10 - CPU - Standard
  fcn-resnet101-11 - CPU - Standard
  yolov4 - CPU - Standard
  GPT-2 - CPU - Standard
OpenVINO:
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU:
    ms
    FPS
  Person Vehicle Bike Detection FP16 - CPU:
    ms
    FPS
TNN
Mobile Neural Network:
  mobilenet-v1-1.0
  SqueezeNetV1.0
  squeezenetv1.1
  nasnet
Stress-NG:
  System V Message Passing
  Socket Activity
  Atomic
  Futex
memtier_benchmark:
  Redis - 50 - 1:10
  Redis - 50 - 5:1
PostgreSQL pgbench:
  100 - 500 - Read Only - Average Latency
  100 - 500 - Read Only
  100 - 250 - Read Only - Average Latency
  100 - 250 - Read Only
TensorFlow Lite:
  Inception ResNet V2
  Mobilenet Quant
  Mobilenet Float
  NASNet Mobile
  Inception V4
  SqueezeNet
Redis:
  SET - 1000
  SET - 500
  GET - 500
Apache Spark
oneDNN:
  Matrix Multiply Batch Shapes Transformer - bf16bf16bf16 - CPU
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU
  IP Shapes 3D - bf16bf16bf16 - CPU
  IP Shapes 1D - bf16bf16bf16 - CPU
Stockfish
GraphicsMagick:
  HWB Color Space
  Resizing
Node.js Express HTTP Load Test
Renaissance
DaCapo Benchmark
WebP Image Encode:
  Quality 100
  Default
LAMMPS Molecular Dynamics Simulator