New Tests

2 x Intel Xeon Platinum 8380 testing with an Intel M50CYP2SB2U (SE5C6200.86B.0022.D08.2103221623 BIOS) and ASPEED on CentOS Stream 9 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2209017-NE-NEWTESTS349
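For reference, the same suite can also be exercised piecemeal with the standard Phoronix Test Suite subcommands; a minimal sketch (pts/hpcg is just one example profile from this run):

    # fetch this result file and benchmark the local system against it
    phoronix-test-suite benchmark 2209017-NE-NEWTESTS349
    # or install and run an individual test profile from the suite
    phoronix-test-suite install pts/hpcg
    phoronix-test-suite run pts/hpcg
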
Result Identifier: CentOS Stream 9
Date: August 31
Test Run Duration: 1 Day, 46 Minutes


New Tests - OpenBenchmarking.org - Phoronix Test Suite 10.8.4

Processor: 2 x Intel Xeon Platinum 8380 @ 3.40GHz (80 Cores / 160 Threads)
Motherboard: Intel M50CYP2SB2U (SE5C6200.86B.0022.D08.2103221623 BIOS)
Chipset: Intel Device 0998
Memory: 512GB
Disk: 7682GB INTEL SSDPF2KX076TZ
Graphics: ASPEED
Monitor: VE228
Network: 2 x Intel X710 for 10GBASE-T + 2 x Intel E810-C for QSFP
OS: CentOS Stream 9
Kernel: 5.14.0-148.el9.x86_64 (x86_64)
Desktop: GNOME Shell 40.10
Display Server: X Server
Compiler: GCC 11.3.1 20220421
File-System: xfs
Screen Resolution: 1920x1080

System Logs
- Transparent Huge Pages: always
- Compiler configuration: --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-host-bind-now --enable-host-pie --enable-initfini-array --enable-languages=c,c++,fortran,lto --enable-link-serialization=1 --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-arch_64=x86-64-v2 --with-build-config=bootstrap-lto --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver --without-isl
- Disk mount options: NONE / attr2,inode64,logbsize=32k,logbufs=8,noquota,relatime,rw,seclabel / Block Size: 4096
- Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0xd000363
- Java: OpenJDK Runtime Environment (Red_Hat-11.0.16.0.8-2.el9) (build 11.0.16+8-LTS)
- Python 3.9.13
- Security: SELinux + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

[Condensed results overview: the original page shows a single summary table listing every test in this run (unpack-linux, blosc, hpcg, namd, lammps, webp, simdjson, dacapobench, renaissance, compress-zstd, node-express-loadtest, graphics-magick, svt-av1, svt-hevc, svt-vp9, vpxenc, x264, ospray, compress-7zip, stockfish, avifenc, build-gdb, build-linux-kernel, build-llvm, onednn, ospray-studio, webp2, node-web-tooling, openssl, clickhouse, spark, redis, astcenc, gromacs, tensorflow-lite, pgbench, memtier-benchmark, stress-ng, mnn, tnn, blender, openvino, nginx, onnx, apache, pyhpc, influxdb, natron) alongside the CentOS Stream 9 result values. The individual results follow below.]

Unpacking The Linux Kernel

Unpacking The Linux Kernel 5.19 (linux-5.19.tar.xz) - Seconds, Fewer Is Better: 9.194 (SE +/- 0.116, N = 17)

C-Blosc

C-Blosc 2.3 (Test: blosclz shuffle) - MB/s, More Is Better: 4916.7 (SE +/- 23.45, N = 3)
C-Blosc 2.3 (Test: blosclz bitshuffle) - MB/s, More Is Better: 3704.1 (SE +/- 6.11, N = 3)

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient benchmark, a scientific benchmark from Sandia National Laboratories focused on supercomputer testing with modern, real-world workloads compared to HPCC. Learn more via the OpenBenchmarking.org test page.
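For orientation, the HPCG reference implementation is normally launched as an MPI job along these lines (the xhpcg binary name, rank count, and the hpcg.dat problem dimensions are assumptions here, not details taken from this result file):

    # hpcg.dat in the working directory sets the local problem size and run time
    mpirun -np 160 ./xhpcg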

High Performance Conjugate Gradient 3.1 - GFLOP/s, More Is Better: 40.28 (SE +/- 0.08, N = 3)

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14 (ATPase Simulation - 327,506 Atoms) - days/ns, Fewer Is Better: 0.28138 (SE +/- 0.00094, N = 3)

LAMMPS Molecular Dynamics Simulator

LAMMPS Molecular Dynamics Simulator 23Jun2022 (Model: 20k Atoms) - ns/day, More Is Better: 35.12 (SE +/- 0.05, N = 3)
LAMMPS Molecular Dynamics Simulator 23Jun2022 (Model: Rhodopsin Protein) - ns/day, More Is Better: 30.87 (SE +/- 0.06, N = 3)

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
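As a rough sketch, the tested encode settings map onto cwebp flags approximately as follows (filenames are placeholders; the exact flag combinations used by the test profile are not shown in this result file):

    cwebp sample.jpg -o out_default.webp                      # Default
    cwebp -q 100 sample.jpg -o out_q100.webp                  # Quality 100
    cwebp -lossless sample.jpg -o out_lossless.webp           # Quality 100, Lossless
    cwebp -q 100 -m 6 sample.jpg -o out_q100_m6.webp          # Quality 100, Highest Compression
    cwebp -lossless -q 100 -m 6 sample.jpg -o out_ll_m6.webp  # Quality 100, Lossless, Highest Compression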

WebP Image Encode 1.1 (Encode Settings: Default) - Encode Time in Seconds, Fewer Is Better: 2.163 (SE +/- 0.069, N = 15)
WebP Image Encode 1.1 (Encode Settings: Quality 100) - Encode Time in Seconds, Fewer Is Better: 3.044 (SE +/- 0.065, N = 15)
WebP Image Encode 1.1 (Encode Settings: Quality 100, Lossless) - Encode Time in Seconds, Fewer Is Better: 21.12 (SE +/- 0.17, N = 3)
WebP Image Encode 1.1 (Encode Settings: Quality 100, Highest Compression) - Encode Time in Seconds, Fewer Is Better: 8.802 (SE +/- 0.061, N = 15)
WebP Image Encode 1.1 (Encode Settings: Quality 100, Lossless, Highest Compression) - Encode Time in Seconds, Fewer Is Better: 41.21 (SE +/- 0.22, N = 3)

simdjson

This is a benchmark of simdjson, a high-performance JSON parser. simdjson aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 2.0 (Throughput Test: Kostya) - GB/s, More Is Better: 2.91 (SE +/- 0.00, N = 3)
simdjson 2.0 (Throughput Test: TopTweet) - GB/s, More Is Better: 5.62 (SE +/- 0.01, N = 3)
simdjson 2.0 (Throughput Test: LargeRandom) - GB/s, More Is Better: 0.96 (SE +/- 0.00, N = 3)
simdjson 2.0 (Throughput Test: PartialTweets) - GB/s, More Is Better: 4.85 (SE +/- 0.01, N = 3)
simdjson 2.0 (Throughput Test: DistinctUserID) - GB/s, More Is Better: 5.77 (SE +/- 0.01, N = 3)

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.
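The individual workloads can also be launched directly from the DaCapo jar; a sketch, assuming the 9.12-MR1 ("bach") release jar and a generous heap (both assumptions, not settings confirmed by this result file):

    java -Xmx8g -jar dacapo-9.12-MR1-bach.jar h2
    java -Xmx8g -jar dacapo-9.12-MR1-bach.jar jython
    java -Xmx8g -jar dacapo-9.12-MR1-bach.jar tradebeans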

DaCapo Benchmark 9.12-MR1 (Java Test: H2) - msec, Fewer Is Better: 9847 (SE +/- 54.95, N = 4)
DaCapo Benchmark 9.12-MR1 (Java Test: Jython) - msec, Fewer Is Better: 5600 (SE +/- 189.31, N = 16)

Java Test: Eclipse

CentOS Stream 9: The test quit with a non-zero exit status.

Java Test: Tradesoap

CentOS Stream 9: The test run did not produce a result.

DaCapo Benchmark 9.12-MR1 (Java Test: Tradebeans) - msec, Fewer Is Better: 16070 (SE +/- 116.64, N = 4)

Renaissance

Renaissance is a suite of benchmarks designed to test the Java JVM, with workloads ranging from Apache Spark to a Twitter-like service to Scala and other features. Learn more via the OpenBenchmarking.org test page.

Renaissance 0.14 (Test: Random Forest) - ms, Fewer Is Better: 1455.2 (SE +/- 6.85, N = 3; MIN: 1315.52 / MAX: 1806.24)
Renaissance 0.14 (Test: ALS Movie Lens) - ms, Fewer Is Better: 17123.9 (SE +/- 73.46, N = 3; MIN: 16240.16 / MAX: 19195.87)
Renaissance 0.14 (Test: Apache Spark Bayes) - ms, Fewer Is Better: 1075.3 (SE +/- 11.23, N = 3; MIN: 628.33 / MAX: 1551.11)
Renaissance 0.14 (Test: Savina Reactors.IO) - ms, Fewer Is Better: 21219.4 (SE +/- 296.93, N = 3; MIN: 20627.9 / MAX: 32602.9)
Renaissance 0.14 (Test: Finagle HTTP Requests) - ms, Fewer Is Better: 8693.9 (SE +/- 154.17, N = 12; MIN: 6648.05 / MAX: 15659.82)
Renaissance 0.14 (Test: In-Memory Database Shootout) - ms, Fewer Is Better: 17787.2 (SE +/- 197.42, N = 3; MIN: 17444.33 / MAX: 21383.13)

Zstd Compression

This test measures the time needed to compress/decompress a sample input file using Zstd compression supplied by the system or otherwise externally of the test profile. Learn more via the OpenBenchmarking.org test page.
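Comparable numbers can be gathered outside the test profile with zstd's built-in benchmark mode; a sketch (the input file is a placeholder, -T0 uses all hardware threads, and --long enables the long-distance matching used by the "Long Mode" results):

    zstd -b3 -T0 input.tar           # level 3
    zstd -b8 -T0 input.tar           # level 8
    zstd -b19 -T0 input.tar          # level 19
    zstd -b3 -T0 --long input.tar    # level 3, long mode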

Zstd Compression (Compression Level: 3 - Compression Speed) - MB/s, More Is Better: 7026.1 (SE +/- 78.16, N = 3)
Zstd Compression (Compression Level: 3 - Decompression Speed) - MB/s, More Is Better: 3022.9 (SE +/- 0.65, N = 2)
Zstd Compression (Compression Level: 8 - Compression Speed) - MB/s, More Is Better: 1244.0 (SE +/- 18.11, N = 12)
Zstd Compression (Compression Level: 8 - Decompression Speed) - MB/s, More Is Better: 3017.5 (SE +/- 6.19, N = 12)
Zstd Compression (Compression Level: 19 - Compression Speed) - MB/s, More Is Better: 86.6 (SE +/- 0.52, N = 3)
Zstd Compression (Compression Level: 19 - Decompression Speed) - MB/s, More Is Better: 2571.3 (SE +/- 6.30, N = 3)
Zstd Compression (Compression Level: 3, Long Mode - Compression Speed) - MB/s, More Is Better: 281.0 (SE +/- 4.03, N = 3)
Zstd Compression (Compression Level: 3, Long Mode - Decompression Speed) - MB/s, More Is Better: 3208.0 (SE +/- 13.60, N = 3)
Zstd Compression (Compression Level: 8, Long Mode - Compression Speed) - MB/s, More Is Better: 307.5 (SE +/- 0.78, N = 3)
Zstd Compression (Compression Level: 8, Long Mode - Decompression Speed) - MB/s, More Is Better: 3201.0 (SE +/- 10.41, N = 3)
Zstd Compression (Compression Level: 19, Long Mode - Compression Speed) - MB/s, More Is Better: 43.4 (SE +/- 0.45, N = 5)
Zstd Compression (Compression Level: 19, Long Mode - Decompression Speed) - MB/s, More Is Better: 2635.7 (SE +/- 4.30, N = 5)
All of the above with the zstd command line interface 64-bits v1.5.1, by Yann Collet.

Node.js Express HTTP Load Test

A Node.js Express server with a Node-based loadtest client for facilitating HTTP benchmarking. Learn more via the OpenBenchmarking.org test page.

Node.js Express HTTP Load Test - Requests Per Second, More Is Better: 4910 (SE +/- 73.75, N = 15)

GraphicsMagick

GraphicsMagick 1.3.38 (Operation: Swirl) - Iterations Per Minute, More Is Better: 2340 (SE +/- 10.48, N = 3)
GraphicsMagick 1.3.38 (Operation: Rotate) - Iterations Per Minute, More Is Better: 1030 (SE +/- 7.69, N = 15)
GraphicsMagick 1.3.38 (Operation: Sharpen) - Iterations Per Minute, More Is Better: 641 (SE +/- 1.76, N = 3)
GraphicsMagick 1.3.38 (Operation: Enhanced) - Iterations Per Minute, More Is Better: 1153 (SE +/- 1.76, N = 3)
GraphicsMagick 1.3.38 (Operation: Resizing) - Iterations Per Minute, More Is Better: 2748 (SE +/- 27.10, N = 3)
GraphicsMagick 1.3.38 (Operation: Noise-Gaussian) - Iterations Per Minute, More Is Better: 738 (SE +/- 0.88, N = 3)
GraphicsMagick 1.3.38 (Operation: HWB Color Space) - Iterations Per Minute, More Is Better: 1138 (SE +/- 28.49, N = 12)

SVT-AV1

SVT-AV1 1.2 (Encoder Mode: Preset 4 - Input: Bosphorus 4K) - Frames Per Second, More Is Better: 1.327 (SE +/- 0.001, N = 3)
SVT-AV1 1.2 (Encoder Mode: Preset 8 - Input: Bosphorus 4K) - Frames Per Second, More Is Better: 38.68 (SE +/- 0.30, N = 3)
SVT-AV1 1.2 (Encoder Mode: Preset 10 - Input: Bosphorus 4K) - Frames Per Second, More Is Better: 65.62 (SE +/- 0.26, N = 3)
SVT-AV1 1.2 (Encoder Mode: Preset 12 - Input: Bosphorus 4K) - Frames Per Second, More Is Better: 92.83 (SE +/- 0.83, N = 3)

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 (Tuning: 7 - Input: Bosphorus 4K) - Frames Per Second, More Is Better: 86.67 (SE +/- 1.08, N = 4)
SVT-HEVC 1.5.0 (Tuning: 10 - Input: Bosphorus 4K) - Frames Per Second, More Is Better: 113.23 (SE +/- 1.63, N = 3)

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 (Tuning: VMAF Optimized - Input: Bosphorus 4K) - Frames Per Second, More Is Better: 112.93 (SE +/- 0.07, N = 3)
SVT-VP9 0.3 (Tuning: PSNR/SSIM Optimized - Input: Bosphorus 4K) - Frames Per Second, More Is Better: 115.50 (SE +/- 1.37, N = 4)
SVT-VP9 0.3 (Tuning: Visual Quality Optimized - Input: Bosphorus 4K) - Frames Per Second, More Is Better: 99.27 (SE +/- 1.23, N = 3)

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.
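For reference, a "Speed 0" VP9 encode along the lines of this test can be invoked roughly as follows (input/output filenames are placeholders, not the test profile's exact command):

    vpxenc --codec=vp9 --good --cpu-used=0 -o bosphorus_vp9.webm Bosphorus_3840x2160.y4m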

VP9 libvpx Encoding 1.10.0 (Speed: Speed 0 - Input: Bosphorus 4K) - Frames Per Second, More Is Better: 3.04 (SE +/- 0.02, N = 3)

x264

This is a multi-threaded test of the x264 video encoder run on the CPU with a choice of 1080p or 4K video input. Learn more via the OpenBenchmarking.org test page.

x264 2022-02-22 (Video Input: Bosphorus 4K) - Frames Per Second, More Is Better: 34.57 (SE +/- 0.49, N = 15)

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10 (Benchmark: particle_volume/ao/real_time) - Items Per Second, More Is Better: 24.35 (SE +/- 0.07, N = 3)
OSPRay 2.10 (Benchmark: particle_volume/scivis/real_time) - Items Per Second, More Is Better: 24.30 (SE +/- 0.29, N = 3)
OSPRay 2.10 (Benchmark: particle_volume/pathtracer/real_time) - Items Per Second, More Is Better: 100.67 (SE +/- 0.72, N = 3)
OSPRay 2.10 (Benchmark: gravity_spheres_volume/dim_512/ao/real_time) - Items Per Second, More Is Better: 22.42 (SE +/- 0.06, N = 3)
OSPRay 2.10 (Benchmark: gravity_spheres_volume/dim_512/scivis/real_time) - Items Per Second, More Is Better: 22.02 (SE +/- 0.15, N = 3)
OSPRay 2.10 (Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time) - Items Per Second, More Is Better: 25.59 (SE +/- 0.04, N = 3)

7-Zip Compression

7-Zip Compression 22.01 (Test: Compression Rating) - MIPS, More Is Better: 467866 (SE +/- 5624.44, N = 3)
7-Zip Compression 22.01 (Test: Decompression Rating) - MIPS, More Is Better: 371131 (SE +/- 2273.35, N = 3)

Stockfish

This is a test of Stockfish, an advanced open-source C++11 chess engine benchmark that can scale up to 512 CPU threads. Learn more via the OpenBenchmarking.org test page.
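Stockfish also ships a built-in bench command that reports a nodes-per-second figure comparable to the result below; a sketch (the hash size, thread count, and search depth shown are illustrative values, not the test profile's settings):

    stockfish bench 16384 160 26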

Stockfish 15 (Total Time) - Nodes Per Second, More Is Better: 179473129 (SE +/- 2364357.21, N = 15)

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.
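The tested encoder speeds correspond to avifenc's -s flag; a sketch (filenames are placeholders, and the lossless flag spelling should be checked against the installed avifenc version):

    avifenc -s 0 sample.jpg out_s0.avif
    avifenc -s 6 sample.jpg out_s6.avif
    avifenc -s 10 --lossless sample.jpg out_s10_lossless.avif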

libavif avifenc 0.10 (Encoder Speed: 0) - Seconds, Fewer Is Better: 84.32 (SE +/- 0.66, N = 3)
libavif avifenc 0.10 (Encoder Speed: 2) - Seconds, Fewer Is Better: 48.71 (SE +/- 0.53, N = 3)
libavif avifenc 0.10 (Encoder Speed: 6) - Seconds, Fewer Is Better: 6.056 (SE +/- 0.037, N = 3)
libavif avifenc 0.10 (Encoder Speed: 6, Lossless) - Seconds, Fewer Is Better: 9.260 (SE +/- 0.070, N = 15)
libavif avifenc 0.10 (Encoder Speed: 10, Lossless) - Seconds, Fewer Is Better: 6.605 (SE +/- 0.073, N = 15)

Timed GDB GNU Debugger Compilation

This test times how long it takes to build the GNU Debugger (GDB) in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed GDB GNU Debugger Compilation 10.2 (Time To Compile) - Seconds, Fewer Is Better: 95.42 (SE +/- 0.27, N = 3)

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested or alternatively an allmodconfig for building all possible kernel modules for the build. Learn more via the OpenBenchmarking.org test page.
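A defconfig build of this kind boils down to roughly the following, assuming the kernel source tree is already unpacked and the toolchain is installed:

    make defconfig
    time make -j$(nproc)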

Timed Linux Kernel Compilation 5.18 (Build: defconfig) - Seconds, Fewer Is Better: 29.67 (SE +/- 0.39, N = 13)

Build: allmodconfig

CentOS Stream 9: The test quit with a non-zero exit status.

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 13.0 (Build System: Ninja) - Seconds, Fewer Is Better: 135.02 (SE +/- 0.23, N = 3)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 (Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU) - ms, Fewer Is Better: 5.40603 (SE +/- 0.32475, N = 15; MIN: 3.28)
oneDNN 2.6 (Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU) - ms, Fewer Is Better: 2.38563 (SE +/- 0.07640, N = 15; MIN: 1.7)
oneDNN 2.6 (Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU) - ms, Fewer Is Better: 2.15938 (SE +/- 0.01538, N = 3; MIN: 2.04)
oneDNN 2.6 (Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU) - ms, Fewer Is Better: 3.81155 (SE +/- 0.01173, N = 3; MIN: 3.53)
oneDNN 2.6 (Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU) - ms, Fewer Is Better: 3.68404 (SE +/- 0.03477, N = 14; MIN: 3.54)
oneDNN 2.6 (Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU) - ms, Fewer Is Better: 697.28 (SE +/- 6.94, N = 12; MIN: 605.85)
oneDNN 2.6 (Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU) - ms, Fewer Is Better: 447.62 (SE +/- 7.22, N = 15; MIN: 376.51)
oneDNN 2.6 (Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU) - ms, Fewer Is Better: 37.96 (SE +/- 5.68, N = 15; MIN: 3.48)

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay Studio 0.11 (Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer) - ms, Fewer Is Better: 20152 (SE +/- 58.89, N = 3)
OSPRay Studio 0.11 (Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer) - ms, Fewer Is Better: 40580 (SE +/- 74.23, N = 3)
OSPRay Studio 0.11 (Camera: 2 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer) - ms, Fewer Is Better: 20261 (SE +/- 49.21, N = 3)
OSPRay Studio 0.11 (Camera: 2 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer) - ms, Fewer Is Better: 40852 (SE +/- 38.89, N = 3)
OSPRay Studio 0.11 (Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer) - ms, Fewer Is Better: 23967 (SE +/- 79.25, N = 3)
OSPRay Studio 0.11 (Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer) - ms, Fewer Is Better: 48319 (SE +/- 81.93, N = 3)

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as the eventual successor to WebP. WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20220422 (Encode Settings: Default) - Seconds, Fewer Is Better: 2.667 (SE +/- 0.033, N = 15)
WebP2 Image Encode 20220422 (Encode Settings: Quality 75, Compression Effort 7) - Seconds, Fewer Is Better: 111.92 (SE +/- 0.09, N = 3)
WebP2 Image Encode 20220422 (Encode Settings: Quality 95, Compression Effort 7) - Seconds, Fewer Is Better: 233.75 (SE +/- 0.09, N = 3)

Node.js V8 Web Tooling Benchmark

Node.js V8 Web Tooling Benchmark - runs/s, More Is Better: 10.55 (SE +/- 0.06, N = 3)

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. The system/openssl test profile relies on benchmarking the system/OS-supplied openssl binary rather than the pts/openssl test profile that uses a locally-built OpenSSL for benchmarking. Learn more via the OpenBenchmarking.org test page.
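The sign/s and verify/s figures below come from OpenSSL's built-in speed benchmark; a comparable standalone run looks roughly like this (rsa4096 is shown only as an example algorithm, since the exact algorithm is not named in this result file):

    openssl speed -multi $(nproc) rsa4096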

OpenSSL - sign/s, More Is Better: 16866.1 (SE +/- 205.54, N = 4) - OpenSSL 3.0.1 14 Dec 2021 (Library: OpenSSL 3.0.1 14 Dec 2021)
OpenSSL - verify/s, More Is Better: 1112427.2 (SE +/- 4686.71, N = 4) - OpenSSL 3.0.1 14 Dec 2021 (Library: OpenSSL 3.0.1 14 Dec 2021)

ClickHouse

ClickHouse 22.5.4.19 (100M Rows Web Analytics Dataset, First Run / Cold Cache) - Queries Per Minute, Geo Mean, More Is Better: 231.48 (SE +/- 2.21, N = 15; MIN: 41.47 / MAX: 5454.55)
ClickHouse 22.5.4.19 (100M Rows Web Analytics Dataset, Second Run) - Queries Per Minute, Geo Mean, More Is Better: 244.38 (SE +/- 1.48, N = 15; MIN: 44.09 / MAX: 5454.55)
ClickHouse 22.5.4.19 (100M Rows Web Analytics Dataset, Third Run) - Queries Per Minute, Geo Mean, More Is Better: 243.95 (SE +/- 1.95, N = 15; MIN: 42.11 / MAX: 6000)

Apache Spark

Apache Spark 3.3 (Row Count: 40000000 - Partitions: 500 - SHA-512 Benchmark Time) - Seconds, Fewer Is Better: 88.83 (SE +/- 0.53, N = 3)
Apache Spark 3.3 (Row Count: 40000000 - Partitions: 500 - Calculate Pi Benchmark) - Seconds, Fewer Is Better: 36.21 (SE +/- 0.12, N = 3)
Apache Spark 3.3 (Row Count: 40000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe) - Seconds, Fewer Is Better: 2.79 (SE +/- 0.09, N = 3)

Dragonflydb

Clients: 50 - Set To Get Ratio: 1:1

CentOS Stream 9: The test run did not produce a result.

Clients: 50 - Set To Get Ratio: 1:5

CentOS Stream 9: The test run did not produce a result.

Clients: 50 - Set To Get Ratio: 5:1

CentOS Stream 9: The test run did not produce a result.

Clients: 200 - Set To Get Ratio: 1:1

CentOS Stream 9: The test run did not produce a result.

Clients: 200 - Set To Get Ratio: 1:5

CentOS Stream 9: The test run did not produce a result.

Clients: 200 - Set To Get Ratio: 5:1

CentOS Stream 9: The test run did not produce a result.

Redis

Redis 7.0.4 (Test: GET - Parallel Connections: 50) - Requests Per Second, More Is Better: 2284227.2 (SE +/- 2019.92, N = 3)
Redis 7.0.4 (Test: SET - Parallel Connections: 50) - Requests Per Second, More Is Better: 2189377.08 (SE +/- 29696.58, N = 3)
Redis 7.0.4 (Test: GET - Parallel Connections: 500) - Requests Per Second, More Is Better: 2018201.09 (SE +/- 89203.76, N = 15)
Redis 7.0.4 (Test: SET - Parallel Connections: 500) - Requests Per Second, More Is Better: 1931278.62 (SE +/- 47157.16, N = 12)
Redis 7.0.4 (Test: GET - Parallel Connections: 1000) - Requests Per Second, More Is Better: 2406986.65 (SE +/- 26860.41, N = 5)
Redis 7.0.4 (Test: SET - Parallel Connections: 1000) - Requests Per Second, More Is Better: 1847194.12 (SE +/- 55692.46, N = 12)

ASTC Encoder

ASTC Encoder 4.0 (Preset: Fast) - MT/s, More Is Better: 799.11 (SE +/- 3.69, N = 3)
ASTC Encoder 4.0 (Preset: Medium) - MT/s, More Is Better: 316.37 (SE +/- 2.47, N = 15)
ASTC Encoder 4.0 (Preset: Thorough) - MT/s, More Is Better: 46.38 (SE +/- 0.05, N = 3)
ASTC Encoder 4.0 (Preset: Exhaustive) - MT/s, More Is Better: 4.5054 (SE +/- 0.0017, N = 3)

KeyDB

A benchmark of KeyDB as a multi-threaded fork of the Redis server. The KeyDB benchmark is conducted using memtier-benchmark. Learn more via the OpenBenchmarking.org test page.

CentOS Stream 9: The test run did not produce a result.
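For context, a memtier_benchmark invocation matching the client counts and set:get ratios used elsewhere in this result file looks roughly like this (host, port, thread count, and duration are placeholders):

    memtier_benchmark -s 127.0.0.1 -p 6379 --protocol=redis --clients=50 --threads=4 --ratio=5:1 --test-time=60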

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.
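Outside the test harness, a CPU run of the water_GMX50 system is launched roughly as follows (the .tpr filename, rank/thread split, and step count are assumptions, not the test profile's exact settings):

    gmx mdrun -ntmpi 8 -ntomp 20 -nsteps 10000 -s water_GMX50_bare.tpr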

GROMACS 2022.1 (Implementation: MPI CPU - Input: water_GMX50_bare) - Ns Per Day, More Is Better: 8.996 (SE +/- 0.002, N = 3)

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.
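One commonly used way to reproduce this kind of measurement is TensorFlow Lite's benchmark_model tool, though this is not necessarily the binary the test profile itself drives; a sketch with placeholder model path and thread count:

    benchmark_model --graph=squeezenet.tflite --num_threads=80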

TensorFlow Lite 2022-05-18 (Model: SqueezeNet) - Microseconds, Fewer Is Better: 16614.41 (SE +/- 5506.54, N = 12)
TensorFlow Lite 2022-05-18 (Model: Inception V4) - Microseconds, Fewer Is Better: 73896.5 (SE +/- 21727.22, N = 15)
TensorFlow Lite 2022-05-18 (Model: NASNet Mobile) - Microseconds, Fewer Is Better: 68713.0 (SE +/- 3728.57, N = 12)
TensorFlow Lite 2022-05-18 (Model: Mobilenet Float) - Microseconds, Fewer Is Better: 4240.41 (SE +/- 499.70, N = 12)
TensorFlow Lite 2022-05-18 (Model: Mobilenet Quant) - Microseconds, Fewer Is Better: 9540.86 (SE +/- 100.45, N = 3)
TensorFlow Lite 2022-05-18 (Model: Inception ResNet V2) - Microseconds, Fewer Is Better: 47297.8 (SE +/- 268.62, N = 3)

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.
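The tested configurations translate to pgbench roughly as follows (database name and run duration are placeholders; -S selects the read-only query mix):

    pgbench -i -s 100 pgbench_db                   # initialize at scaling factor 100
    pgbench -c 250 -j 250 -T 60 -S pgbench_db      # 250 clients, read only
    pgbench -c 500 -j 500 -T 60 pgbench_db         # 500 clients, read/write (default TPC-B-like mix)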

PostgreSQL pgbench 14.0 (Scaling Factor: 100 - Clients: 250 - Mode: Read Only) - TPS, More Is Better: 1669388 (SE +/- 9583.19, N = 3)
PostgreSQL pgbench 14.0 (Scaling Factor: 100 - Clients: 250 - Mode: Read Only - Average Latency) - ms, Fewer Is Better: 0.150 (SE +/- 0.001, N = 3)
PostgreSQL pgbench 14.0 (Scaling Factor: 100 - Clients: 500 - Mode: Read Only) - TPS, More Is Better: 1855656 (SE +/- 30425.21, N = 12)
PostgreSQL pgbench 14.0 (Scaling Factor: 100 - Clients: 500 - Mode: Read Only - Average Latency) - ms, Fewer Is Better: 0.270 (SE +/- 0.005, N = 12)
PostgreSQL pgbench 14.0 (Scaling Factor: 100 - Clients: 250 - Mode: Read Write) - TPS, More Is Better: 20745 (SE +/- 21.44, N = 3)
PostgreSQL pgbench 14.0 (Scaling Factor: 100 - Clients: 250 - Mode: Read Write - Average Latency) - ms, Fewer Is Better: 12.05 (SE +/- 0.01, N = 3)
PostgreSQL pgbench 14.0 (Scaling Factor: 100 - Clients: 500 - Mode: Read Write) - TPS, More Is Better: 18710 (SE +/- 32.56, N = 3)
PostgreSQL pgbench 14.0 (Scaling Factor: 100 - Clients: 500 - Mode: Read Write - Average Latency) - ms, Fewer Is Better: 26.72 (SE +/- 0.05, N = 3)

memtier_benchmark

memtier_benchmark 1.4 (Protocol: Redis - Clients: 50 - Set To Get Ratio: 5:1) - Ops/sec, More Is Better: 1339297.91 (SE +/- 63962.41, N = 12)

Protocol: Redis - Clients: 100 - Set To Get Ratio: 5:1

CentOS Stream 9: The test run did not produce a result.

memtier_benchmark 1.4 (Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:10) - Ops/sec, More Is Better: 1398073.70 (SE +/- 67672.39, N = 12)

Protocol: Redis - Clients: 500 - Set To Get Ratio: 5:1

CentOS Stream 9: The test run did not produce a result. E: error: failed to prepare thread 112 for test.

Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:10

CentOS Stream 9: The test run did not produce a result.

Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:10

CentOS Stream 9: The test run did not produce a result. E: error: failed to prepare thread 112 for test.

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
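Individual stressors can be reproduced directly with stress-ng; for example (60-second runs, with "0" meaning one worker per CPU; the stressor names correspond to the tests below):

    stress-ng --cpu 0 --metrics-brief --timeout 60s
    stress-ng --matrix 0 --metrics-brief --timeout 60s
    stress-ng --futex 0 --metrics-brief --timeout 60s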

Stress-NG 0.14 (Test: MMAP) - Bogo Ops/s, More Is Better: 3747.58 (SE +/- 34.11, N = 3)
Stress-NG 0.14 (Test: NUMA) - Bogo Ops/s, More Is Better: 10.37 (SE +/- 0.02, N = 3)
Stress-NG 0.14 (Test: Futex) - Bogo Ops/s, More Is Better: 1088788.92 (SE +/- 73263.26, N = 15)
Stress-NG 0.14 (Test: MEMFD) - Bogo Ops/s, More Is Better: 4098.84 (SE +/- 35.16, N = 3)
Stress-NG 0.14 (Test: Atomic) - Bogo Ops/s, More Is Better: 187775.77 (SE +/- 3961.98, N = 15)
Stress-NG 0.14 (Test: Crypto) - Bogo Ops/s, More Is Better: 83808.91 (SE +/- 289.31, N = 3)
Stress-NG 0.14 (Test: Malloc) - Bogo Ops/s, More Is Better: 306750258.84 (SE +/- 452266.97, N = 3)
Stress-NG 0.14 (Test: Forking) - Bogo Ops/s, More Is Better: 63484.45 (SE +/- 123.25, N = 3)

Test: IO_uring

CentOS Stream 9: The test run did not produce a result.

Stress-NG 0.14 (Test: SENDFILE) - Bogo Ops/s, More Is Better: 1271967.05 (SE +/- 2669.03, N = 3)
Stress-NG 0.14 (Test: CPU Cache) - Bogo Ops/s, More Is Better: 16.26 (SE +/- 0.13, N = 10)
Stress-NG 0.14 (Test: CPU Stress) - Bogo Ops/s, More Is Better: 135517.46 (SE +/- 758.69, N = 3)
Stress-NG 0.14 (Test: Semaphores) - Bogo Ops/s, More Is Better: 7186364.51 (SE +/- 27158.37, N = 3)
Stress-NG 0.14 (Test: Matrix Math) - Bogo Ops/s, More Is Better: 286293.40 (SE +/- 512.51, N = 3)
Stress-NG 0.14 (Test: Vector Math) - Bogo Ops/s, More Is Better: 322923.09 (SE +/- 944.66, N = 3)
Stress-NG 0.14 (Test: x86_64 RdRand) - Bogo Ops/s, More Is Better: 667284.36 (SE +/- 2562.02, N = 3)
Stress-NG 0.14 (Test: Memory Copying) - Bogo Ops/s, More Is Better: 12812.45 (SE +/- 5.23, N = 3)
Stress-NG 0.14 (Test: Socket Activity) - Bogo Ops/s, More Is Better: 2460.37 (SE +/- 900.65, N = 15)
Stress-NG 0.14 (Test: Context Switching) - Bogo Ops/s, More Is Better: 6233126.45 (SE +/- 78706.86, N = 3)
Stress-NG 0.14 (Test: Glibc C String Functions) - Bogo Ops/s, More Is Better: 9473078.17 (SE +/- 103735.47, N = 4)
Stress-NG 0.14 (Test: Glibc Qsort Data Sorting) - Bogo Ops/s, More Is Better: 934.26 (SE +/- 2.69, N = 3)
Stress-NG 0.14 (Test: System V Message Passing) - Bogo Ops/s, More Is Better: 7093379.73 (SE +/- 85352.98, N = 4)

Mobile Neural Network

Mobile Neural Network 2.1 (Model: nasnet) - ms, Fewer Is Better: 12.10 (SE +/- 0.23, N = 15; MIN: 10.54 / MAX: 23.03)
Mobile Neural Network 2.1 (Model: mobilenetV3) - ms, Fewer Is Better: 1.753 (SE +/- 0.020, N = 15; MIN: 1.61 / MAX: 4.19)
Mobile Neural Network 2.1 (Model: squeezenetv1.1) - ms, Fewer Is Better: 2.356 (SE +/- 0.050, N = 15; MIN: 2.03 / MAX: 5.76)
Mobile Neural Network 2.1 (Model: resnet-v2-50) - ms, Fewer Is Better: 8.663 (SE +/- 0.088, N = 15; MIN: 7.71 / MAX: 20.48)
Mobile Neural Network 2.1 (Model: SqueezeNetV1.0) - ms, Fewer Is Better: 3.956 (SE +/- 0.075, N = 15; MIN: 3.51 / MAX: 9.33)
Mobile Neural Network 2.1 (Model: MobileNetV2_224) - ms, Fewer Is Better: 2.663 (SE +/- 0.014, N = 15; MIN: 2.48 / MAX: 5.57)
Mobile Neural Network 2.1 (Model: mobilenet-v1-1.0) - ms, Fewer Is Better: 2.090 (SE +/- 0.047, N = 15; MIN: 1.76 / MAX: 3.93)
Mobile Neural Network 2.1 (Model: inception-v3) - ms, Fewer Is Better: 20.09 (SE +/- 0.19, N = 15; MIN: 17.31 / MAX: 37.29)

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 (Target: CPU - Model: DenseNet) - ms, Fewer Is Better: 3955.05 (SE +/- 27.70, N = 3; MIN: 3833.99 / MAX: 5510.15)
TNN 0.3 (Target: CPU - Model: MobileNet v2) - ms, Fewer Is Better: 378.88 (SE +/- 4.68, N = 4; MIN: 371.88 / MAX: 634.44)
TNN 0.3 (Target: CPU - Model: SqueezeNet v2) - ms, Fewer Is Better: 75.88 (SE +/- 0.78, N = 3; MIN: 74.63 / MAX: 111.7)
TNN 0.3 (Target: CPU - Model: SqueezeNet v1.1) - ms, Fewer Is Better: 366.49 (SE +/- 0.03, N = 3; MIN: 366.26 / MAX: 366.87)

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported, along with HIP for AMD Radeon GPUs. Learn more via the OpenBenchmarking.org test page.
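Each scene is rendered headlessly for a single frame, which can be reproduced roughly as follows (the .blend filenames refer to the standard Blender demo files and are assumptions here):

    blender -b bmw27.blend -f 1
    blender -b classroom.blend -f 1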

Blender 3.2 (Blend File: BMW27 - Compute: CPU-Only) - Seconds, Fewer Is Better: 25.04 (SE +/- 0.03, N = 3)
Blender 3.2 (Blend File: Classroom - Compute: CPU-Only) - Seconds, Fewer Is Better: 64.82 (SE +/- 0.04, N = 3)
Blender 3.2 (Blend File: Fishy Cat - Compute: CPU-Only) - Seconds, Fewer Is Better: 33.37 (SE +/- 0.07, N = 3)
Blender 3.2 (Blend File: Barbershop - Compute: CPU-Only) - Seconds, Fewer Is Better: 257.15 (SE +/- 0.55, N = 3)
Blender 3.2 (Blend File: Pabellon Barcelona - Compute: CPU-Only) - Seconds, Fewer Is Better: 82.89 (SE +/- 0.02, N = 3)

OpenVINO

OpenVINO 2022.2.dev - Model: Face Detection FP16 - Device: CPU - FPS, More Is Better - CentOS Stream 9: 24.29 (SE +/- 0.02, N = 3)

OpenVINO 2022.2.dev - Model: Face Detection FP16 - Device: CPU - ms, Fewer Is Better - CentOS Stream 9: 819.63 (SE +/- 0.67, N = 3, MIN: 519.3 / MAX: 967.18)

OpenVINO 2022.2.dev - Model: Person Detection FP16 - Device: CPU - FPS, More Is Better - CentOS Stream 9: 13.92 (SE +/- 0.00, N = 3)

OpenVINO 2022.2.dev - Model: Person Detection FP16 - Device: CPU - ms, Fewer Is Better - CentOS Stream 9: 1424.57 (SE +/- 0.41, N = 3, MIN: 1046.08 / MAX: 1657.29)

OpenVINO 2022.2.dev - Model: Person Detection FP32 - Device: CPU - FPS, More Is Better - CentOS Stream 9: 13.67 (SE +/- 0.02, N = 3)

OpenVINO 2022.2.dev - Model: Person Detection FP32 - Device: CPU - ms, Fewer Is Better - CentOS Stream 9: 1451.62 (SE +/- 1.03, N = 3, MIN: 1039.96 / MAX: 1708.95)

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16 - Device: CPU - FPS, More Is Better - CentOS Stream 9: 1071.70 (SE +/- 14.44, N = 12)

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16 - Device: CPU - ms, Fewer Is Better - CentOS Stream 9: 18.67 (SE +/- 0.29, N = 12, MIN: 11.54 / MAX: 79.43)

OpenVINO 2022.2.dev - Model: Face Detection FP16-INT8 - Device: CPU - FPS, More Is Better - CentOS Stream 9: 83.26 (SE +/- 0.07, N = 3)

OpenVINO 2022.2.dev - Model: Face Detection FP16-INT8 - Device: CPU - ms, Fewer Is Better - CentOS Stream 9: 239.86 (SE +/- 0.22, N = 3, MIN: 178.86 / MAX: 348.97)

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16-INT8 - Device: CPU - FPS, More Is Better - CentOS Stream 9: 4414.94 (SE +/- 1.33, N = 3)

OpenVINO 2022.2.dev - Model: Vehicle Detection FP16-INT8 - Device: CPU - ms, Fewer Is Better - CentOS Stream 9: 4.52 (SE +/- 0.00, N = 3, MIN: 4.11 / MAX: 44.74)

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16 - Device: CPU - FPS, More Is Better - CentOS Stream 9: 2478.96 (SE +/- 1.20, N = 3)

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16 - Device: CPU - ms, Fewer Is Better - CentOS Stream 9: 32.00 (SE +/- 0.01, N = 3, MIN: 21.78 / MAX: 67.35)

OpenVINO 2022.2.dev - Model: Machine Translation EN To DE FP16 - Device: CPU - FPS, More Is Better - CentOS Stream 9: 233.33 (SE +/- 0.51, N = 3)

OpenVINO 2022.2.dev - Model: Machine Translation EN To DE FP16 - Device: CPU - ms, Fewer Is Better - CentOS Stream 9: 85.47 (SE +/- 0.18, N = 3, MIN: 76.11 / MAX: 195.12)

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU - FPS, More Is Better - CentOS Stream 9: 9657.99 (SE +/- 8.29, N = 3)

OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU - ms, Fewer Is Better - CentOS Stream 9: 8.27 (SE +/- 0.01, N = 3, MIN: 7.23 / MAX: 27.1)

OpenVINO 2022.2.dev - Model: Person Vehicle Bike Detection FP16 - Device: CPU - FPS, More Is Better - CentOS Stream 9: 1478.64 (SE +/- 39.85, N = 15)

OpenVINO 2022.2.dev - Model: Person Vehicle Bike Detection FP16 - Device: CPU - ms, Fewer Is Better - CentOS Stream 9: 13.60 (SE +/- 0.30, N = 15, MIN: 8.57 / MAX: 68.28)

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU - FPS, More Is Better - CentOS Stream 9: 47224.77 (SE +/- 99.47, N = 3)

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU - ms, Fewer Is Better - CentOS Stream 9: 1.36 (SE +/- 0.00, N = 3, MIN: 0.99 / MAX: 13.44)

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU - FPS, More Is Better - CentOS Stream 9: 42731.93 (SE +/- 1567.95, N = 15)

OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU - ms, Fewer Is Better - CentOS Stream 9: 1.50 (SE +/- 0.05, N = 15, MIN: 0.34 / MAX: 29.48)

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web server. This test profile makes use of the Golang "Bombardier" program for issuing HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.
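For context, the sketch below (Python, standard library only) shows the general shape of such a fixed-duration, fixed-concurrency load test. It illustrates the request pattern only; it is not the Bombardier harness the test profile actually uses, and the URL, client count, and duration are placeholder assumptions.

# Minimal sketch of the load pattern: N concurrent clients hammering one URL
# for a fixed duration, then reporting requests per second. Illustrative only.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/"   # assumption: an nginx instance listening here
CLIENTS = 100                    # the result below used 1000 concurrent requests
DURATION_S = 10

def worker(deadline: float) -> int:
    done = 0
    while time.monotonic() < deadline:
        with urllib.request.urlopen(URL) as resp:
            resp.read()
        done += 1
    return done

if __name__ == "__main__":
    deadline = time.monotonic() + DURATION_S
    with ThreadPoolExecutor(max_workers=CLIENTS) as pool:
        totals = list(pool.map(worker, [deadline] * CLIENTS))
    print(f"{sum(totals) / DURATION_S:.1f} requests/sec")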

nginx 1.21.1 - Concurrent Requests: 1000 - Requests Per Second, More Is Better - CentOS Stream 9: 200945.49 (SE +/- 1519.57, N = 3)

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.
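The "Executor: Parallel" and "Executor: Standard" variants below differ in how ONNX Runtime schedules the model graph on the CPU. As a rough, hedged illustration, this Python sketch loads an arbitrary ONNX model on the CPU provider and toggles between the parallel and sequential execution modes; the model path and input handling are placeholders, and this is not the exact harness used by the test profile.

# Sketch: same model, CPU provider, parallel vs sequential execution mode.
# "model.onnx" stands in for any ONNX Model Zoo model (e.g. yolov4, bertsquad).
import numpy as np
import onnxruntime as ort

def make_session(model_path: str, parallel: bool) -> ort.InferenceSession:
    opts = ort.SessionOptions()
    opts.execution_mode = (ort.ExecutionMode.ORT_PARALLEL if parallel
                           else ort.ExecutionMode.ORT_SEQUENTIAL)
    return ort.InferenceSession(model_path, sess_options=opts,
                                providers=["CPUExecutionProvider"])

sess = make_session("model.onnx", parallel=True)
inp = sess.get_inputs()[0]
# Feed random data of the model's declared input shape (dynamic dims -> 1).
# Assumes a single float32 input; integer-input models (e.g. GPT-2 token ids)
# would need different feed data.
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
data = np.random.rand(*shape).astype(np.float32)
outputs = sess.run(None, {inp.name: data})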

ONNX Runtime 1.11 - Model: GPT-2 - Device: CPU - Executor: Parallel - Inferences Per Minute, More Is Better - CentOS Stream 9: 5269 (SE +/- 32.87, N = 3)

ONNX Runtime 1.11 - Model: GPT-2 - Device: CPU - Executor: Standard - Inferences Per Minute, More Is Better - CentOS Stream 9: 11045 (SE +/- 388.59, N = 12)

ONNX Runtime 1.11 - Model: yolov4 - Device: CPU - Executor: Parallel - Inferences Per Minute, More Is Better - CentOS Stream 9: 630 (SE +/- 1.04, N = 3)

ONNX Runtime 1.11 - Model: yolov4 - Device: CPU - Executor: Standard - Inferences Per Minute, More Is Better - CentOS Stream 9: 694 (SE +/- 1.17, N = 3)

ONNX Runtime 1.11 - Model: bertsquad-12 - Device: CPU - Executor: Parallel - Inferences Per Minute, More Is Better - CentOS Stream 9: 799 (SE +/- 2.02, N = 3)

ONNX Runtime 1.11 - Model: bertsquad-12 - Device: CPU - Executor: Standard - Inferences Per Minute, More Is Better - CentOS Stream 9: 1093 (SE +/- 0.50, N = 3)

ONNX Runtime 1.11 - Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel - Inferences Per Minute, More Is Better - CentOS Stream 9: 236 (SE +/- 0.17, N = 3)

ONNX Runtime 1.11 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard - Inferences Per Minute, More Is Better - CentOS Stream 9: 443 (SE +/- 1.17, N = 3)

ONNX Runtime 1.11 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel - Inferences Per Minute, More Is Better - CentOS Stream 9: 1693 (SE +/- 3.09, N = 3)

ONNX Runtime 1.11 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard - Inferences Per Minute, More Is Better - CentOS Stream 9: 1881 (SE +/- 16.82, N = 12)

ONNX Runtime 1.11 - Model: super-resolution-10 - Device: CPU - Executor: Parallel - Inferences Per Minute, More Is Better - CentOS Stream 9: 3259 (SE +/- 4.91, N = 3)

ONNX Runtime 1.11 - Model: super-resolution-10 - Device: CPU - Executor: Standard - Inferences Per Minute, More Is Better - CentOS Stream 9: 12260 (SE +/- 43.63, N = 3)

Apache HTTP Server

This is a test of the Apache HTTPD web server. This test profile makes use of the Golang "Bombardier" program for issuing HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.

Apache HTTP Server 2.4.48 - Concurrent Requests: 1000 - Requests Per Second, More Is Better - CentOS Stream 9: 131349.60 (SE +/- 1558.40, N = 15)

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.
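To make the backend comparison concrete, here is a toy sketch of the idea: the same array kernel written once against NumPy and once for Numba's JIT, timed on an array of the same size as the 4194304-element project used below. The kernel itself is a made-up stand-in and not the actual Equation of State or Isoneutral Mixing code from PyHPC-Benchmarks.

# Toy illustration: one numerical kernel, two backends (NumPy vectorized vs
# Numba-compiled loop). The formula is illustrative, not the benchmark's.
import time
import numpy as np
from numba import njit

def kernel_numpy(t, s):
    return 999.8 + 0.07 * t - 0.005 * t**2 + 0.8 * (s - 35.0)

@njit(cache=True)
def kernel_numba(t, s):
    out = np.empty_like(t)
    for i in range(t.size):
        out[i] = 999.8 + 0.07 * t[i] - 0.005 * t[i]**2 + 0.8 * (s[i] - 35.0)
    return out

n = 4_194_304  # same element count as the "Project Size" below
t = np.random.rand(n) * 30.0
s = np.random.rand(n) * 5.0 + 33.0

kernel_numba(t, s)  # first call includes JIT compilation; exclude from timing
for name, fn in [("numpy", kernel_numpy), ("numba", kernel_numba)]:
    start = time.perf_counter()
    fn(t, s)
    print(name, time.perf_counter() - start)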

PyHPC Benchmarks 3.0 - Device: CPU - Backend: JAX - Project Size: 4194304 - Benchmark: Equation of State - Seconds, Fewer Is Better - CentOS Stream 9: 0.031 (SE +/- 0.000, N = 3)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: JAX - Project Size: 4194304 - Benchmark: Isoneutral Mixing - Seconds, Fewer Is Better - CentOS Stream 9: 0.864 (SE +/- 0.004, N = 3)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numba - Project Size: 4194304 - Benchmark: Equation of State - Seconds, Fewer Is Better - CentOS Stream 9: 0.264 (SE +/- 0.002, N = 3)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numba - Project Size: 4194304 - Benchmark: Isoneutral Mixing - Seconds, Fewer Is Better - CentOS Stream 9: 1.375 (SE +/- 0.001, N = 3)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Equation of State - Seconds, Fewer Is Better - CentOS Stream 9: 1.936 (SE +/- 0.001, N = 3)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Isoneutral Mixing - Seconds, Fewer Is Better - CentOS Stream 9: 2.878 (SE +/- 0.033, N = 3)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Aesara - Project Size: 4194304 - Benchmark: Equation of State - Seconds, Fewer Is Better - CentOS Stream 9: 0.303 (SE +/- 0.001, N = 3)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Aesara - Project Size: 4194304 - Benchmark: Isoneutral Mixing - Seconds, Fewer Is Better - CentOS Stream 9: 2.078 (SE +/- 0.024, N = 3)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: PyTorch - Project Size: 4194304 - Benchmark: Equation of State - Seconds, Fewer Is Better - CentOS Stream 9: 0.109 (SE +/- 0.001, N = 3)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: PyTorch - Project Size: 4194304 - Benchmark: Isoneutral Mixing - Seconds, Fewer Is Better - CentOS Stream 9: 2.060 (SE +/- 0.004, N = 3)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: TensorFlow - Project Size: 4194304 - Benchmark: Equation of State - Seconds, Fewer Is Better - CentOS Stream 9: 0.222 (SE +/- 0.003, N = 4)

Device: CPU - Backend: TensorFlow - Project Size: 4194304 - Benchmark: Isoneutral Mixing

CentOS Stream 9: The test run did not produce a result.

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.
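As a rough sketch of the kind of write workload Inch generates, the Python snippet below batches line-protocol points spread across tag-defined series and POSTs them to the InfluxDB 1.x HTTP write endpoint. The host, database name, tag cardinalities, and batch size are illustrative assumptions, not the exact Inch configuration used for this run.

# Sketch: batched line-protocol writes to InfluxDB 1.x over HTTP.
import time
import urllib.request

INFLUX_URL = "http://localhost:8086/write?db=stress&precision=ns"  # assumption
BATCH_SIZE = 10000

def write_batch(lines: list[str]) -> None:
    body = "\n".join(lines).encode()
    req = urllib.request.Request(INFLUX_URL, data=body, method="POST")
    urllib.request.urlopen(req).read()

batch, now = [], time.time_ns()
for i in range(BATCH_SIZE):
    # two tags give each series its identity; one integer field carries the value
    batch.append(f"m0,tag0=value-{i % 2},tag1=value-{i % 5000} v=1i {now + i}")
write_batch(batch)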

InfluxDB 1.8.2 - Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 - val/sec, More Is Better - CentOS Stream 9: 666008.7 (SE +/- 2481.53, N = 3)

Natron

Natron 2.4.3 - Input: Spaceship - FPS, More Is Better - CentOS Stream 9: 1.9 (SE +/- 0.01, N = 15)