New Tests: 2 x Intel Xeon Platinum 8380 tested with an Intel M50CYP2SB2U motherboard (SE5C6200.86B.0022.D08.2103221623 BIOS) and ASPEED graphics on CentOS Stream 9 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2209017-NE-NEWTESTS349.
New Tests - CentOS Stream 9 system configuration:
- Processor: 2 x Intel Xeon Platinum 8380 @ 3.40GHz (80 Cores / 160 Threads)
- Motherboard: Intel M50CYP2SB2U (SE5C6200.86B.0022.D08.2103221623 BIOS)
- Chipset: Intel Device 0998
- Memory: 512GB
- Disk: 7682GB INTEL SSDPF2KX076TZ
- Graphics: ASPEED
- Monitor: VE228
- Network: 2 x Intel X710 for 10GBASE-T + 2 x Intel E810-C for QSFP
- OS: CentOS Stream 9
- Kernel: 5.14.0-148.el9.x86_64 (x86_64)
- Desktop: GNOME Shell 40.10
- Display Server: X Server
- Compiler: GCC 11.3.1 20220421
- File-System: xfs
- Screen Resolution: 1920x1080

OpenBenchmarking.org notes:
- Transparent Huge Pages: always
- Compiler configuration: --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-host-bind-now --enable-host-pie --enable-initfini-array --enable-languages=c,c++,fortran,lto --enable-link-serialization=1 --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-arch_64=x86-64-v2 --with-build-config=bootstrap-lto --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver --without-isl
- Disk details: NONE / attr2,inode64,logbsize=32k,logbufs=8,noquota,relatime,rw,seclabel / Block Size: 4096
- Processor details: Scaling Governor: intel_pstate powersave (EPP: balance_performance); CPU Microcode: 0xd000363
- Software: OpenJDK Runtime Environment (Red_Hat-11.0.16.0.8-2.el9) (build 11.0.16+8-LTS); Python 3.9.13; SELinux
- Security: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Mitigation of Clear buffers, SMT vulnerable; retbleed: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced IBRS, IBPB: conditional, RSB filling; srbds: Not affected; tsx_async_abort: Not affected
New Tests - condensed summary. Test identifiers, in run order:
- unpack-linux: linux-5.19.tar.xz
- blosc: blosclz shuffle; blosclz bitshuffle
- hpcg
- namd: ATPase Simulation - 327,506 Atoms
- lammps: 20k Atoms; Rhodopsin Protein
- webp: Default; Quality 100; Quality 100, Lossless; Quality 100, Highest Compression; Quality 100, Lossless, Highest Compression
- simdjson: Kostya; TopTweet; LargeRand; PartialTweets; DistinctUserID
- dacapobench: H2; Jython; Tradebeans
- renaissance: Rand Forest; ALS Movie Lens; Apache Spark Bayes; Savina Reactors.IO; Finagle HTTP Requests; In-Memory Database Shootout
- compress-zstd: 3 - Compression Speed; 3 - Decompression Speed; 8 - Compression Speed; 8 - Decompression Speed; 19 - Compression Speed; 19 - Decompression Speed; 3, Long Mode - Compression Speed; 3, Long Mode - Decompression Speed; 8, Long Mode - Compression Speed; 8, Long Mode - Decompression Speed; 19, Long Mode - Compression Speed; 19, Long Mode - Decompression Speed
- node-express-loadtest
- graphics-magick: Swirl; Rotate; Sharpen; Enhanced; Resizing; Noise-Gaussian; HWB Color Space
- svt-av1: Preset 4 - Bosphorus 4K; Preset 8 - Bosphorus 4K; Preset 10 - Bosphorus 4K; Preset 12 - Bosphorus 4K
- svt-hevc: 7 - Bosphorus 4K; 10 - Bosphorus 4K
- svt-vp9: VMAF Optimized - Bosphorus 4K; PSNR/SSIM Optimized - Bosphorus 4K; Visual Quality Optimized - Bosphorus 4K
- vpxenc: Speed 0 - Bosphorus 4K
- x264: Bosphorus 4K
- ospray: particle_volume/ao/real_time; particle_volume/scivis/real_time; particle_volume/pathtracer/real_time; gravity_spheres_volume/dim_512/ao/real_time; gravity_spheres_volume/dim_512/scivis/real_time; gravity_spheres_volume/dim_512/pathtracer/real_time
- compress-7zip: Compression Rating; Decompression Rating
- stockfish: Total Time
- avifenc: 0; 2; 6; 6, Lossless; 10, Lossless
- build-gdb: Time To Compile
- build-linux-kernel: defconfig
- build-llvm: Ninja
- onednn: IP Shapes 1D - bf16bf16bf16 - CPU; IP Shapes 3D - bf16bf16bf16 - CPU; Convolution Batch Shapes Auto - bf16bf16bf16 - CPU; Deconvolution Batch shapes_1d - bf16bf16bf16 - CPU; Deconvolution Batch shapes_3d - bf16bf16bf16 - CPU; Recurrent Neural Network Training - bf16bf16bf16 - CPU; Recurrent Neural Network Inference - bf16bf16bf16 - CPU; Matrix Multiply Batch Shapes Transformer - bf16bf16bf16 - CPU
- ospray-studio: 1 - 4K - 16 - Path Tracer; 1 - 4K - 32 - Path Tracer; 2 - 4K - 16 - Path Tracer; 2 - 4K - 32 - Path Tracer; 3 - 4K - 16 - Path Tracer; 3 - 4K - 32 - Path Tracer
- webp2: Default; Quality 75, Compression Effort 7; Quality 95, Compression Effort 7
- node-web-tooling
- openssl; openssl
- clickhouse: 100M Rows Web Analytics Dataset, First Run / Cold Cache; 100M Rows Web Analytics Dataset, Second Run; 100M Rows Web Analytics Dataset, Third Run
- spark: 40000000 - 500 - SHA-512 Benchmark Time; 40000000 - 500 - Calculate Pi Benchmark; 40000000 - 500 - Calculate Pi Benchmark Using Dataframe
- redis: GET - 50; SET - 50; GET - 500; SET - 500; GET - 1000; SET - 1000
- astcenc: Fast; Medium; Thorough; Exhaustive
- gromacs: MPI CPU - water_GMX50_bare
- tensorflow-lite: SqueezeNet; Inception V4; NASNet Mobile; Mobilenet Float; Mobilenet Quant; Inception ResNet V2
- pgbench: 100 - 250 - Read Only; 100 - 250 - Read Only - Average Latency; 100 - 500 - Read Only; 100 - 500 - Read Only - Average Latency; 100 - 250 - Read Write; 100 - 250 - Read Write - Average Latency; 100 - 500 - Read Write; 100 - 500 - Read Write - Average Latency
- memtier-benchmark: Redis - 50 - 5:1; Redis - 50 - 1:10
- stress-ng: MMAP; NUMA; Futex; MEMFD; Atomic; Crypto; Malloc; Forking; SENDFILE; CPU Cache; CPU Stress; Semaphores; Matrix Math; Vector Math; x86_64 RdRand; Memory Copying; Socket Activity; Context Switching; Glibc C String Functions; Glibc Qsort Data Sorting; System V Message Passing
- mnn: nasnet; mobilenetV3; squeezenetv1.1; resnet-v2-50; SqueezeNetV1.0; MobileNetV2_224; mobilenet-v1-1.0; inception-v3
- tnn: CPU - DenseNet; CPU - MobileNet v2; CPU - SqueezeNet v2; CPU - SqueezeNet v1.1
- blender: BMW27 - CPU-Only; Classroom - CPU-Only; Fishy Cat - CPU-Only; Barbershop - CPU-Only; Pabellon Barcelona - CPU-Only
- openvino (each entry listed twice): Face Detection FP16 - CPU; Person Detection FP16 - CPU; Person Detection FP32 - CPU; Vehicle Detection FP16 - CPU; Face Detection FP16-INT8 - CPU; Vehicle Detection FP16-INT8 - CPU; Weld Porosity Detection FP16 - CPU; Machine Translation EN To DE FP16 - CPU; Weld Porosity Detection FP16-INT8 - CPU; Person Vehicle Bike Detection FP16 - CPU; Age Gender Recognition Retail 0013 FP16 - CPU; Age Gender Recognition Retail 0013 FP16-INT8 - CPU
- nginx: 1000
- onnx: GPT-2 - CPU - Parallel; GPT-2 - CPU - Standard; yolov4 - CPU - Parallel; yolov4 - CPU - Standard; bertsquad-12 - CPU - Parallel; bertsquad-12 - CPU - Standard; fcn-resnet101-11 - CPU - Parallel; fcn-resnet101-11 - CPU - Standard; ArcFace ResNet-100 - CPU - Parallel; ArcFace ResNet-100 - CPU - Standard; super-resolution-10 - CPU - Parallel; super-resolution-10 - CPU - Standard
- apache: 1000
- pyhpc: CPU - JAX - 4194304 - Equation of State; CPU - JAX - 4194304 - Isoneutral Mixing; CPU - Numba - 4194304 - Equation of State; CPU - Numba - 4194304 - Isoneutral Mixing; CPU - Numpy - 4194304 - Equation of State; CPU - Numpy - 4194304 - Isoneutral Mixing; CPU - Aesara - 4194304 - Equation of State; CPU - Aesara - 4194304 - Isoneutral Mixing; CPU - PyTorch - 4194304 - Equation of State; CPU - PyTorch - 4194304 - Isoneutral Mixing; CPU - TensorFlow - 4194304 - Equation of State
- influxdb: 4 - 10000 - 2,5000,1 - 10000
- natron: Spaceship

Raw result values for CentOS Stream 9, in the same order:
9.194 4916.7 3704.1 40.2812 0.28138 35.123 30.870 2.163 3.044 21.115 8.802 41.208 2.91 5.62 0.96 4.85 5.77 9847 5600 16070 1455.2 17123.9 1075.3 21219.4 8693.9 17787.2 7026.1 3022.9 1244.0 3017.5 86.6 2571.3 281.0 3208.0 307.5 3201.0 43.4 2635.7 4910 2340 1030 641 1153 2748 738 1138 1.327 38.675 65.620 92.832 86.67 113.23 112.93 115.50 99.27 3.04 34.57 24.3527 24.2951 100.6696 22.4174 22.0215 25.5852 467866 371131 179473129 84.316 48.710 6.056 9.260 6.605 95.415 29.670 135.019 5.40603 2.38563 2.15938 3.81155 3.68404 697.282 447.616 37.95778 20152 40580 20261 40852 23967 48319 2.667 111.920 233.750 10.55 16866.1 1112427.2 231.48 244.38 243.95 88.83 36.21 2.79 2284227.2 2189377.08 2018201.09 1931278.62 2406986.65 1847194.12 799.1120 316.3743 46.3848 4.5054 8.996 16614.41 73896.5 68713.0 4240.41 9540.86 47297.8 1669388 0.150 1855656 0.270 20745 12.051 18710 26.724 1339297.91 1398073.70 3747.58 10.37 1088788.92 4098.84 187775.77 83808.91 306750258.84 63484.45 1271967.05 16.26 135517.46 7186364.51 286293.40 322923.09 667284.36 12812.45 2460.37 6233126.45 9473078.17 934.26 7093379.73 12.097 1.753 2.356 8.663 3.956 2.663 2.090 20.091 3955.046 378.879 75.880 366.493 25.04 64.82 33.37 257.15 82.89 24.29 819.63 13.92 1424.57 13.67 1451.62 1071.70 18.67 83.26 239.86 4414.94 4.52 2478.96 32.00 233.33 85.47 9657.99 8.27 1478.64 13.60 47224.77 1.36 42731.93 1.50 200945.49 5269 11045 630 694 799 1093 236 443 1693 1881 3259 12260 131349.60 0.031 0.864 0.264 1.375 1.936 2.878 0.303 2.078 0.109 2.060 0.222 666008.7 1.9
Unpacking The Linux Kernel 5.19 - linux-5.19.tar.xz (Seconds, Fewer Is Better) - CentOS Stream 9: 9.194 (SE +/- 0.116, N = 17)
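Every result in this report is a sample mean accompanied by its standard error (SE) over N runs. As a minimal sketch of how such a figure is computed (this is an illustration, not the Phoronix Test Suite's own code, and the run times below are hypothetical):

```python
import math
import statistics

def standard_error(samples):
    """Standard error of the mean: sample standard deviation / sqrt(N)."""
    return statistics.stdev(samples) / math.sqrt(len(samples))

# Hypothetical per-run times (seconds) for an unpacking benchmark
runs = [9.1, 9.3, 9.2, 9.15, 9.25]
mean = statistics.mean(runs)
se = standard_error(runs)
print(f"{mean:.3f} (SE +/- {se:.3f}, N = {len(runs)})")
```

A small SE relative to the mean (as in most results here) indicates the runs were consistent and the reported average is stable.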
C-Blosc 2.3 - Test: blosclz shuffle (MB/s, More Is Better) - CentOS Stream 9: 4916.7 (SE +/- 23.45, N = 3)
C-Blosc 2.3 - Test: blosclz bitshuffle (MB/s, More Is Better) - CentOS Stream 9: 3704.1 (SE +/- 6.11, N = 3)
1. (CC) gcc options: -std=gnu99 -O3 -lrt -lm
High Performance Conjugate Gradient 3.1 (GFLOP/s, More Is Better) - CentOS Stream 9: 40.28 (SE +/- 0.08, N = 3). 1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi_cxx -lmpi
NAMD 2.14 - ATPase Simulation - 327,506 Atoms (days/ns, Fewer Is Better) - CentOS Stream 9: 0.28138 (SE +/- 0.00094, N = 3)
LAMMPS Molecular Dynamics Simulator 23Jun2022 - Model: 20k Atoms (ns/day, More Is Better) - CentOS Stream 9: 35.12 (SE +/- 0.05, N = 3)
LAMMPS Molecular Dynamics Simulator 23Jun2022 - Model: Rhodopsin Protein (ns/day, More Is Better) - CentOS Stream 9: 30.87 (SE +/- 0.06, N = 3)
1. (CXX) g++ options: -O3 -lm -ldl
WebP Image Encode 1.1 - Encode Settings: Default (Encode Time - Seconds, Fewer Is Better) - CentOS Stream 9: 2.163 (SE +/- 0.069, N = 15)
WebP Image Encode 1.1 - Encode Settings: Quality 100 (Encode Time - Seconds, Fewer Is Better) - CentOS Stream 9: 3.044 (SE +/- 0.065, N = 15)
WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless (Encode Time - Seconds, Fewer Is Better) - CentOS Stream 9: 21.12 (SE +/- 0.17, N = 3)
WebP Image Encode 1.1 - Encode Settings: Quality 100, Highest Compression (Encode Time - Seconds, Fewer Is Better) - CentOS Stream 9: 8.802 (SE +/- 0.061, N = 15)
WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless, Highest Compression (Encode Time - Seconds, Fewer Is Better) - CentOS Stream 9: 41.21 (SE +/- 0.22, N = 3)
1. (CC) gcc options: -fvisibility=hidden -O2 -lm -lpng16 -ljpeg
simdjson 2.0 - Throughput Test: Kostya (GB/s, More Is Better) - CentOS Stream 9: 2.91 (SE +/- 0.00, N = 3)
simdjson 2.0 - Throughput Test: TopTweet (GB/s, More Is Better) - CentOS Stream 9: 5.62 (SE +/- 0.01, N = 3)
simdjson 2.0 - Throughput Test: LargeRandom (GB/s, More Is Better) - CentOS Stream 9: 0.96 (SE +/- 0.00, N = 3)
simdjson 2.0 - Throughput Test: PartialTweets (GB/s, More Is Better) - CentOS Stream 9: 4.85 (SE +/- 0.01, N = 3)
simdjson 2.0 - Throughput Test: DistinctUserID (GB/s, More Is Better) - CentOS Stream 9: 5.77 (SE +/- 0.01, N = 3)
1. (CXX) g++ options: -O3
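The simdjson figures are parsing throughput: input bytes parsed per second, expressed in GB/s. A rough sketch of how such a number is derived, using the standard-library json module rather than simdjson itself (the simdjson Python bindings may not be installed; the document used here is a hypothetical synthetic payload):

```python
import json
import time

# Hypothetical payload: a synthetic JSON array large enough to time reliably
doc = json.dumps([{"id": i, "name": f"user{i}"} for i in range(10000)]).encode()

start = time.perf_counter()
json.loads(doc)                     # parse the whole document once
elapsed = time.perf_counter() - start

gb_per_s = len(doc) / elapsed / 1e9  # bytes parsed per second, in GB/s
print(f"Parsed {len(doc)} bytes in {elapsed:.4f}s -> {gb_per_s:.3f} GB/s")
```

A stdlib parser will land far below simdjson's multi-GB/s numbers; the point is only how the metric itself is constructed.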
DaCapo Benchmark 9.12-MR1 - Java Test: H2 (msec, Fewer Is Better) - CentOS Stream 9: 9847 (SE +/- 54.95, N = 4)
DaCapo Benchmark 9.12-MR1 - Java Test: Jython (msec, Fewer Is Better) - CentOS Stream 9: 5600 (SE +/- 189.31, N = 16)
DaCapo Benchmark 9.12-MR1 - Java Test: Tradebeans (msec, Fewer Is Better) - CentOS Stream 9: 16070 (SE +/- 116.64, N = 4)
Renaissance 0.14 - Test: Random Forest (ms, Fewer Is Better) - CentOS Stream 9: 1455.2 (SE +/- 6.85, N = 3), MIN: 1315.52 / MAX: 1806.24
Renaissance 0.14 - Test: ALS Movie Lens (ms, Fewer Is Better) - CentOS Stream 9: 17123.9 (SE +/- 73.46, N = 3), MIN: 16240.16 / MAX: 19195.87
Renaissance 0.14 - Test: Apache Spark Bayes (ms, Fewer Is Better) - CentOS Stream 9: 1075.3 (SE +/- 11.23, N = 3), MIN: 628.33 / MAX: 1551.11
Renaissance 0.14 - Test: Savina Reactors.IO (ms, Fewer Is Better) - CentOS Stream 9: 21219.4 (SE +/- 296.93, N = 3), MIN: 20627.9 / MAX: 32602.9
Renaissance 0.14 - Test: Finagle HTTP Requests (ms, Fewer Is Better) - CentOS Stream 9: 8693.9 (SE +/- 154.17, N = 12), MIN: 6648.05 / MAX: 15659.82
Renaissance 0.14 - Test: In-Memory Database Shootout (ms, Fewer Is Better) - CentOS Stream 9: 17787.2 (SE +/- 197.42, N = 3), MIN: 17444.33 / MAX: 21383.13
Zstd Compression - Level: 3 - Compression Speed (MB/s, More Is Better) - CentOS Stream 9: 7026.1 (SE +/- 78.16, N = 3)
Zstd Compression - Level: 3 - Decompression Speed (MB/s, More Is Better) - CentOS Stream 9: 3022.9 (SE +/- 0.65, N = 2)
Zstd Compression - Level: 8 - Compression Speed (MB/s, More Is Better) - CentOS Stream 9: 1244.0 (SE +/- 18.11, N = 12)
Zstd Compression - Level: 8 - Decompression Speed (MB/s, More Is Better) - CentOS Stream 9: 3017.5 (SE +/- 6.19, N = 12)
Zstd Compression - Level: 19 - Compression Speed (MB/s, More Is Better) - CentOS Stream 9: 86.6 (SE +/- 0.52, N = 3)
Zstd Compression - Level: 19 - Decompression Speed (MB/s, More Is Better) - CentOS Stream 9: 2571.3 (SE +/- 6.30, N = 3)
Zstd Compression - Level: 3, Long Mode - Compression Speed (MB/s, More Is Better) - CentOS Stream 9: 281.0 (SE +/- 4.03, N = 3)
Zstd Compression - Level: 3, Long Mode - Decompression Speed (MB/s, More Is Better) - CentOS Stream 9: 3208.0 (SE +/- 13.60, N = 3)
Zstd Compression - Level: 8, Long Mode - Compression Speed (MB/s, More Is Better) - CentOS Stream 9: 307.5 (SE +/- 0.78, N = 3)
Zstd Compression - Level: 8, Long Mode - Decompression Speed (MB/s, More Is Better) - CentOS Stream 9: 3201.0 (SE +/- 10.41, N = 3)
Zstd Compression - Level: 19, Long Mode - Compression Speed (MB/s, More Is Better) - CentOS Stream 9: 43.4 (SE +/- 0.45, N = 5)
Zstd Compression - Level: 19, Long Mode - Decompression Speed (MB/s, More Is Better) - CentOS Stream 9: 2635.7 (SE +/- 4.30, N = 5)
1. *** zstd command line interface 64-bits v1.5.1, by Yann Collet ***
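The pattern in the Zstd results - compression speed dropping sharply from level 3 to level 19 while decompression speed stays roughly flat - is the usual level/ratio tradeoff. Zstd bindings may not be available everywhere, so as an analogy (not the zstd benchmark itself) the standard-library zlib module shows the same idea on a hypothetical repetitive payload:

```python
import zlib

# Hypothetical repetitive payload so compression has something to work with
data = b"the quick brown fox jumps over the lazy dog " * 5000

fast = zlib.compress(data, level=1)  # fastest level, typically larger output
best = zlib.compress(data, level=9)  # slowest level, typically smallest output

print(len(data), len(fast), len(best))
assert zlib.decompress(best) == data  # decompression always recovers the input
```

Higher levels spend more time searching for matches to shrink the output; decoding cost is largely independent of the level chosen at encode time, which is why the decompression rows above barely move.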
Node.js Express HTTP Load Test (Requests Per Second, More Is Better) - CentOS Stream 9: 4910 (SE +/- 73.75, N = 15). 1. Nodejs
GraphicsMagick 1.3.38 - Operation: Swirl (Iterations Per Minute, More Is Better) - CentOS Stream 9: 2340 (SE +/- 10.48, N = 3)
GraphicsMagick 1.3.38 - Operation: Rotate (Iterations Per Minute, More Is Better) - CentOS Stream 9: 1030 (SE +/- 7.69, N = 15)
GraphicsMagick 1.3.38 - Operation: Sharpen (Iterations Per Minute, More Is Better) - CentOS Stream 9: 641 (SE +/- 1.76, N = 3)
GraphicsMagick 1.3.38 - Operation: Enhanced (Iterations Per Minute, More Is Better) - CentOS Stream 9: 1153 (SE +/- 1.76, N = 3)
GraphicsMagick 1.3.38 - Operation: Resizing (Iterations Per Minute, More Is Better) - CentOS Stream 9: 2748 (SE +/- 27.10, N = 3)
GraphicsMagick 1.3.38 - Operation: Noise-Gaussian (Iterations Per Minute, More Is Better) - CentOS Stream 9: 738 (SE +/- 0.88, N = 3)
GraphicsMagick 1.3.38 - Operation: HWB Color Space (Iterations Per Minute, More Is Better) - CentOS Stream 9: 1138 (SE +/- 28.49, N = 12)
1. (CC) gcc options: -fopenmp -O2 -ljpeg -lXext -lX11 -lbz2 -lz -lm -lpthread
SVT-AV1 1.2 - Encoder Mode: Preset 4 - Input: Bosphorus 4K (Frames Per Second, More Is Better) - CentOS Stream 9: 1.327 (SE +/- 0.001, N = 3)
SVT-AV1 1.2 - Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, More Is Better) - CentOS Stream 9: 38.68 (SE +/- 0.30, N = 3)
SVT-AV1 1.2 - Encoder Mode: Preset 10 - Input: Bosphorus 4K (Frames Per Second, More Is Better) - CentOS Stream 9: 65.62 (SE +/- 0.26, N = 3)
SVT-AV1 1.2 - Encoder Mode: Preset 12 - Input: Bosphorus 4K (Frames Per Second, More Is Better) - CentOS Stream 9: 92.83 (SE +/- 0.83, N = 3)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
SVT-HEVC 1.5.0 - Tuning: 7 - Input: Bosphorus 4K (Frames Per Second, More Is Better) - CentOS Stream 9: 86.67 (SE +/- 1.08, N = 4)
SVT-HEVC 1.5.0 - Tuning: 10 - Input: Bosphorus 4K (Frames Per Second, More Is Better) - CentOS Stream 9: 113.23 (SE +/- 1.63, N = 3)
1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt
SVT-VP9 0.3 - Tuning: VMAF Optimized - Input: Bosphorus 4K (Frames Per Second, More Is Better) - CentOS Stream 9: 112.93 (SE +/- 0.07, N = 3)
SVT-VP9 0.3 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 4K (Frames Per Second, More Is Better) - CentOS Stream 9: 115.50 (SE +/- 1.37, N = 4)
SVT-VP9 0.3 - Tuning: Visual Quality Optimized - Input: Bosphorus 4K (Frames Per Second, More Is Better) - CentOS Stream 9: 99.27 (SE +/- 1.23, N = 3)
1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
VP9 libvpx Encoding 1.10.0 - Speed: Speed 0 - Input: Bosphorus 4K (Frames Per Second, More Is Better) - CentOS Stream 9: 3.04 (SE +/- 0.02, N = 3). 1. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11
x264 2022-02-22 - Video Input: Bosphorus 4K (Frames Per Second, More Is Better) - CentOS Stream 9: 34.57 (SE +/- 0.49, N = 15). 1. (CC) gcc options: -ldl -m64 -lm -lpthread -O3 -flto
OSPRay 2.10 - Benchmark: particle_volume/ao/real_time (Items Per Second, More Is Better) - CentOS Stream 9: 24.35 (SE +/- 0.07, N = 3)
OSPRay 2.10 - Benchmark: particle_volume/scivis/real_time (Items Per Second, More Is Better) - CentOS Stream 9: 24.30 (SE +/- 0.29, N = 3)
OSPRay 2.10 - Benchmark: particle_volume/pathtracer/real_time (Items Per Second, More Is Better) - CentOS Stream 9: 100.67 (SE +/- 0.72, N = 3)
OSPRay 2.10 - Benchmark: gravity_spheres_volume/dim_512/ao/real_time (Items Per Second, More Is Better) - CentOS Stream 9: 22.42 (SE +/- 0.06, N = 3)
OSPRay 2.10 - Benchmark: gravity_spheres_volume/dim_512/scivis/real_time (Items Per Second, More Is Better) - CentOS Stream 9: 22.02 (SE +/- 0.15, N = 3)
OSPRay 2.10 - Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time (Items Per Second, More Is Better) - CentOS Stream 9: 25.59 (SE +/- 0.04, N = 3)
7-Zip Compression 22.01 - Test: Compression Rating (MIPS, More Is Better) - CentOS Stream 9: 467866 (SE +/- 5624.44, N = 3)
7-Zip Compression 22.01 - Test: Decompression Rating (MIPS, More Is Better) - CentOS Stream 9: 371131 (SE +/- 2273.35, N = 3)
1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC
Stockfish 15 - Total Time (Nodes Per Second, More Is Better) - CentOS Stream 9: 179473129 (SE +/- 2364357.21, N = 15). 1. (CXX) g++ options: -lgcov -m64 -lpthread -fno-exceptions -std=c++17 -fno-peel-loops -fno-tracer -pedantic -O3 -msse -msse3 -mpopcnt -mavx2 -mavx512f -mavx512bw -mavx512vnni -mavx512dq -mavx512vl -msse4.1 -mssse3 -msse2 -mbmi2 -flto -flto=jobserver
libavif avifenc 0.10 - Encoder Speed: 0 (Seconds, Fewer Is Better) - CentOS Stream 9: 84.32 (SE +/- 0.66, N = 3)
libavif avifenc 0.10 - Encoder Speed: 2 (Seconds, Fewer Is Better) - CentOS Stream 9: 48.71 (SE +/- 0.53, N = 3)
libavif avifenc 0.10 - Encoder Speed: 6 (Seconds, Fewer Is Better) - CentOS Stream 9: 6.056 (SE +/- 0.037, N = 3)
libavif avifenc 0.10 - Encoder Speed: 6, Lossless (Seconds, Fewer Is Better) - CentOS Stream 9: 9.260 (SE +/- 0.070, N = 15)
libavif avifenc 0.10 - Encoder Speed: 10, Lossless (Seconds, Fewer Is Better) - CentOS Stream 9: 6.605 (SE +/- 0.073, N = 15)
1. (CXX) g++ options: -O3 -fPIC -lm
Timed GDB GNU Debugger Compilation 10.2 - Time To Compile (Seconds, Fewer Is Better) - CentOS Stream 9: 95.42 (SE +/- 0.27, N = 3)
Timed Linux Kernel Compilation 5.18 - Build: defconfig (Seconds, Fewer Is Better) - CentOS Stream 9: 29.67 (SE +/- 0.39, N = 13)
Timed LLVM Compilation 13.0 - Build System: Ninja (Seconds, Fewer Is Better) - CentOS Stream 9: 135.02 (SE +/- 0.23, N = 3)
oneDNN 2.6 - Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better) - CentOS Stream 9: 5.40603 (SE +/- 0.32475, N = 15), MIN: 3.28
oneDNN 2.6 - Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better) - CentOS Stream 9: 2.38563 (SE +/- 0.07640, N = 15), MIN: 1.7
oneDNN 2.6 - Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better) - CentOS Stream 9: 2.15938 (SE +/- 0.01538, N = 3), MIN: 2.04
oneDNN 2.6 - Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better) - CentOS Stream 9: 3.81155 (SE +/- 0.01173, N = 3), MIN: 3.53
oneDNN 2.6 - Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better) - CentOS Stream 9: 3.68404 (SE +/- 0.03477, N = 14), MIN: 3.54
oneDNN 2.6 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better) - CentOS Stream 9: 697.28 (SE +/- 6.94, N = 12), MIN: 605.85
oneDNN 2.6 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better) - CentOS Stream 9: 447.62 (SE +/- 7.22, N = 15), MIN: 376.51
oneDNN 2.6 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better) - CentOS Stream 9: 37.96 (SE +/- 5.68, N = 15), MIN: 3.48
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread
OSPRay Studio 0.11 - Resolution: 4K - Renderer: Path Tracer (ms, Fewer Is Better)
  Camera: 1 - Samples Per Pixel: 16: 20152 (SE +/- 58.89, N = 3)
  Camera: 1 - Samples Per Pixel: 32: 40580 (SE +/- 74.23, N = 3)
  Camera: 2 - Samples Per Pixel: 16: 20261 (SE +/- 49.21, N = 3)
  Camera: 2 - Samples Per Pixel: 32: 40852 (SE +/- 38.89, N = 3)
  Camera: 3 - Samples Per Pixel: 16: 23967 (SE +/- 79.25, N = 3)
  Camera: 3 - Samples Per Pixel: 32: 48319 (SE +/- 81.93, N = 3)
  1. (CXX) g++ options: -O3 -ldl
WebP2 Image Encode 20220422 (Seconds, Fewer Is Better)
  Encode Settings: Default: 2.667 (SE +/- 0.033, N = 15)
  Encode Settings: Quality 75, Compression Effort 7: 111.92 (SE +/- 0.09, N = 3)
  Encode Settings: Quality 95, Compression Effort 7: 233.75 (SE +/- 0.09, N = 3)
  1. (CXX) g++ options: -fno-rtti -O3
Node.js V8 Web Tooling Benchmark (runs/s, More Is Better)
  Result: 10.55 (SE +/- 0.06, N = 3)
OpenSSL (More Is Better)
  sign/s: 16866.1 (SE +/- 205.54, N = 4)
  verify/s: 1112427.2 (SE +/- 4686.71, N = 4)
  1. OpenSSL 3.0.1 14 Dec 2021 (Library: OpenSSL 3.0.1 14 Dec 2021)
ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset (Queries Per Minute, Geo Mean, More Is Better)
  First Run / Cold Cache: 231.48 (SE +/- 2.21, N = 15, MIN: 41.47 / MAX: 5454.55)
  Second Run: 244.38 (SE +/- 1.48, N = 15, MIN: 44.09 / MAX: 5454.55)
  Third Run: 243.95 (SE +/- 1.95, N = 15, MIN: 42.11 / MAX: 6000)
  1. ClickHouse server version 22.5.4.19 (official build).
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 500 (Seconds, Fewer Is Better)
  SHA-512 Benchmark Time: 88.83 (SE +/- 0.53, N = 3)
  Calculate Pi Benchmark: 36.21 (SE +/- 0.12, N = 3)
  Calculate Pi Benchmark Using Dataframe: 2.79 (SE +/- 0.09, N = 3)
Redis 7.0.4 (Requests Per Second, More Is Better)
  Test: GET - Parallel Connections: 50: 2284227.2 (SE +/- 2019.92, N = 3)
  Test: SET - Parallel Connections: 50: 2189377.08 (SE +/- 29696.58, N = 3)
  Test: GET - Parallel Connections: 500: 2018201.09 (SE +/- 89203.76, N = 15)
  Test: SET - Parallel Connections: 500: 1931278.62 (SE +/- 47157.16, N = 12)
  Test: GET - Parallel Connections: 1000: 2406986.65 (SE +/- 26860.41, N = 5)
  Test: SET - Parallel Connections: 1000: 1847194.12 (SE +/- 55692.46, N = 12)
  1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
ASTC Encoder 4.0 (MT/s, More Is Better)
  Preset: Fast: 799.11 (SE +/- 3.69, N = 3)
  Preset: Medium: 316.37 (SE +/- 2.47, N = 15)
  Preset: Thorough: 46.38 (SE +/- 0.05, N = 3)
  Preset: Exhaustive: 4.5054 (SE +/- 0.0017, N = 3)
  1. (CXX) g++ options: -O3 -flto -pthread
GROMACS 2022.1 (Ns Per Day, More Is Better)
  Implementation: MPI CPU - Input: water_GMX50_bare: 8.996 (SE +/- 0.002, N = 3)
  1. (CXX) g++ options: -O3
TensorFlow Lite 2022-05-18 (Microseconds, Fewer Is Better)
  Model: SqueezeNet: 16614.41 (SE +/- 5506.54, N = 12)
  Model: Inception V4: 73896.5 (SE +/- 21727.22, N = 15)
  Model: NASNet Mobile: 68713.0 (SE +/- 3728.57, N = 12)
  Model: Mobilenet Float: 4240.41 (SE +/- 499.70, N = 12)
  Model: Mobilenet Quant: 9540.86 (SE +/- 100.45, N = 3)
  Model: Inception ResNet V2: 47297.8 (SE +/- 268.62, N = 3)
PostgreSQL pgbench 14.0 - Scaling Factor: 100 (TPS, More Is Better; Average Latency in ms, Fewer Is Better)
  Clients: 250 - Mode: Read Only: 1669388 TPS (SE +/- 9583.19, N = 3); Average Latency: 0.150 ms (SE +/- 0.001, N = 3)
  Clients: 500 - Mode: Read Only: 1855656 TPS (SE +/- 30425.21, N = 12); Average Latency: 0.270 ms (SE +/- 0.005, N = 12)
  Clients: 250 - Mode: Read Write: 20745 TPS (SE +/- 21.44, N = 3); Average Latency: 12.05 ms (SE +/- 0.01, N = 3)
  Clients: 500 - Mode: Read Write: 18710 TPS (SE +/- 32.56, N = 3); Average Latency: 26.72 ms (SE +/- 0.05, N = 3)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
memtier_benchmark 1.4 - Protocol: Redis - Clients: 50 (Ops/sec, More Is Better)
  Set To Get Ratio: 5:1: 1339297.91 (SE +/- 63962.41, N = 12)
  Set To Get Ratio: 1:10: 1398073.70 (SE +/- 67672.39, N = 12)
  1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
Stress-NG 0.14 (Bogo Ops/s, More Is Better)
  Test: MMAP: 3747.58 (SE +/- 34.11, N = 3)
  Test: NUMA: 10.37 (SE +/- 0.02, N = 3)
  Test: Futex: 1088788.92 (SE +/- 73263.26, N = 15)
  Test: MEMFD: 4098.84 (SE +/- 35.16, N = 3)
  Test: Atomic: 187775.77 (SE +/- 3961.98, N = 15)
  Test: Crypto: 83808.91 (SE +/- 289.31, N = 3)
  Test: Malloc: 306750258.84 (SE +/- 452266.97, N = 3)
  Test: Forking: 63484.45 (SE +/- 123.25, N = 3)
  Test: SENDFILE: 1271967.05 (SE +/- 2669.03, N = 3)
  Test: CPU Cache: 16.26 (SE +/- 0.13, N = 10)
  Test: CPU Stress: 135517.46 (SE +/- 758.69, N = 3)
  Test: Semaphores: 7186364.51 (SE +/- 27158.37, N = 3)
  Test: Matrix Math: 286293.40 (SE +/- 512.51, N = 3)
  Test: Vector Math: 322923.09 (SE +/- 944.66, N = 3)
  Test: x86_64 RdRand: 667284.36 (SE +/- 2562.02, N = 3)
  Test: Memory Copying: 12812.45 (SE +/- 5.23, N = 3)
  Test: Socket Activity: 2460.37 (SE +/- 900.65, N = 15)
  Test: Context Switching: 6233126.45 (SE +/- 78706.86, N = 3)
  Test: Glibc C String Functions: 9473078.17 (SE +/- 103735.47, N = 4)
  Test: Glibc Qsort Data Sorting: 934.26 (SE +/- 2.69, N = 3)
  Test: System V Message Passing: 7093379.73 (SE +/- 85352.98, N = 4)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread
Mobile Neural Network 2.1 (ms, Fewer Is Better)
  Model: nasnet: 12.10 (SE +/- 0.23, N = 15, MIN: 10.54 / MAX: 23.03)
  Model: mobilenetV3: 1.753 (SE +/- 0.020, N = 15, MIN: 1.61 / MAX: 4.19)
  Model: squeezenetv1.1: 2.356 (SE +/- 0.050, N = 15, MIN: 2.03 / MAX: 5.76)
  Model: resnet-v2-50: 8.663 (SE +/- 0.088, N = 15, MIN: 7.71 / MAX: 20.48)
  Model: SqueezeNetV1.0: 3.956 (SE +/- 0.075, N = 15, MIN: 3.51 / MAX: 9.33)
  Model: MobileNetV2_224: 2.663 (SE +/- 0.014, N = 15, MIN: 2.48 / MAX: 5.57)
  Model: mobilenet-v1-1.0: 2.090 (SE +/- 0.047, N = 15, MIN: 1.76 / MAX: 3.93)
  Model: inception-v3: 20.09 (SE +/- 0.19, N = 15, MIN: 17.31 / MAX: 37.29)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
TNN 0.3 - Target: CPU (ms, Fewer Is Better)
  Model: DenseNet: 3955.05 (SE +/- 27.70, N = 3, MIN: 3833.99 / MAX: 5510.15)
  Model: MobileNet v2: 378.88 (SE +/- 4.68, N = 4, MIN: 371.88 / MAX: 634.44)
  Model: SqueezeNet v2: 75.88 (SE +/- 0.78, N = 3, MIN: 74.63 / MAX: 111.7)
  Model: SqueezeNet v1.1: 366.49 (SE +/- 0.03, N = 3, MIN: 366.26 / MAX: 366.87)
  1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl
Blender 3.2 - Compute: CPU-Only (Seconds, Fewer Is Better)
  Blend File: BMW27: 25.04 (SE +/- 0.03, N = 3)
  Blend File: Classroom: 64.82 (SE +/- 0.04, N = 3)
  Blend File: Fishy Cat: 33.37 (SE +/- 0.07, N = 3)
  Blend File: Barbershop: 257.15 (SE +/- 0.55, N = 3)
  Blend File: Pabellon Barcelona: 82.89 (SE +/- 0.02, N = 3)
OpenVINO Model: Face Detection FP16 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Face Detection FP16 - Device: CPU CentOS Stream 9 6 12 18 24 30 SE +/- 0.02, N = 3 24.29 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenVINO Model: Face Detection FP16 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Face Detection FP16 - Device: CPU CentOS Stream 9 200 400 600 800 1000 SE +/- 0.67, N = 3 819.63 MIN: 519.3 / MAX: 967.18 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenVINO Model: Person Detection FP16 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Person Detection FP16 - Device: CPU CentOS Stream 9 4 8 12 16 20 SE +/- 0.00, N = 3 13.92 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenVINO Model: Person Detection FP16 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Person Detection FP16 - Device: CPU CentOS Stream 9 300 600 900 1200 1500 SE +/- 0.41, N = 3 1424.57 MIN: 1046.08 / MAX: 1657.29 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenVINO Model: Person Detection FP32 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Person Detection FP32 - Device: CPU CentOS Stream 9 4 8 12 16 20 SE +/- 0.02, N = 3 13.67 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenVINO Model: Person Detection FP32 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Person Detection FP32 - Device: CPU CentOS Stream 9 300 600 900 1200 1500 SE +/- 1.03, N = 3 1451.62 MIN: 1039.96 / MAX: 1708.95 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenVINO Model: Vehicle Detection FP16 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Vehicle Detection FP16 - Device: CPU CentOS Stream 9 200 400 600 800 1000 SE +/- 14.44, N = 12 1071.70 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenVINO Model: Vehicle Detection FP16 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Vehicle Detection FP16 - Device: CPU CentOS Stream 9 5 10 15 20 25 SE +/- 0.29, N = 12 18.67 MIN: 11.54 / MAX: 79.43 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenVINO Model: Face Detection FP16-INT8 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Face Detection FP16-INT8 - Device: CPU CentOS Stream 9 20 40 60 80 100 SE +/- 0.07, N = 3 83.26 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenVINO Model: Face Detection FP16-INT8 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Face Detection FP16-INT8 - Device: CPU CentOS Stream 9 50 100 150 200 250 SE +/- 0.22, N = 3 239.86 MIN: 178.86 / MAX: 348.97 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenVINO Model: Vehicle Detection FP16-INT8 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Vehicle Detection FP16-INT8 - Device: CPU CentOS Stream 9 900 1800 2700 3600 4500 SE +/- 1.33, N = 3 4414.94 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenVINO Model: Vehicle Detection FP16-INT8 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Vehicle Detection FP16-INT8 - Device: CPU CentOS Stream 9 1.017 2.034 3.051 4.068 5.085 SE +/- 0.00, N = 3 4.52 MIN: 4.11 / MAX: 44.74 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenVINO Model: Weld Porosity Detection FP16 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Weld Porosity Detection FP16 - Device: CPU CentOS Stream 9 500 1000 1500 2000 2500 SE +/- 1.20, N = 3 2478.96 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenVINO Model: Weld Porosity Detection FP16 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Weld Porosity Detection FP16 - Device: CPU CentOS Stream 9 7 14 21 28 35 SE +/- 0.01, N = 3 32.00 MIN: 21.78 / MAX: 67.35 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenVINO 2022.2.dev - Device: CPU (CentOS Stream 9)

Model                                        | FPS (More Is Better)              | ms (Fewer Is Better)
Machine Translation EN To DE FP16            | 233.33 (SE +/- 0.51, N = 3)       | 85.47 (SE +/- 0.18, N = 3; MIN: 76.11 / MAX: 195.12)
Weld Porosity Detection FP16-INT8            | 9657.99 (SE +/- 8.29, N = 3)      | 8.27 (SE +/- 0.01, N = 3; MIN: 7.23 / MAX: 27.1)
Person Vehicle Bike Detection FP16           | 1478.64 (SE +/- 39.85, N = 15)    | 13.60 (SE +/- 0.30, N = 15; MIN: 8.57 / MAX: 68.28)
Age Gender Recognition Retail 0013 FP16      | 47224.77 (SE +/- 99.47, N = 3)    | 1.36 (SE +/- 0.00, N = 3; MIN: 0.99 / MAX: 13.44)
Age Gender Recognition Retail 0013 FP16-INT8 | 42731.93 (SE +/- 1567.95, N = 15) | 1.50 (SE +/- 0.05, N = 15; MIN: 0.34 / MAX: 29.48)

1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
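Every result in this report is given as a mean with a standard error over N runs ("SE +/- x, N = y"). As a minimal sketch of how such a figure is derived, assuming the standard error of the mean is computed over the per-run results in the usual way (the run values below are hypothetical, not from this result file):

```python
import math

def mean_and_se(samples):
    """Return (mean, standard error of the mean) for a list of run results."""
    n = len(samples)
    mean = sum(samples) / n
    # Sample variance with Bessel's correction (n - 1 denominator),
    # then SE = sqrt(variance / n).
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return mean, math.sqrt(var / n)

# Hypothetical FPS values from three benchmark runs.
runs = [232.5, 233.4, 234.1]
m, se = mean_and_se(runs)
print(f"{m:.2f} (SE +/- {se:.2f}, N = {len(runs)})")
```

A small SE relative to the mean (as in most rows above) indicates the runs were consistent; rows with a large SE, such as the Age Gender Recognition FP16-INT8 result, reflect more run-to-run variance.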
nginx 1.21.1 - Concurrent Requests: 1000 (CentOS Stream 9): 200945.49 Requests Per Second (More Is Better; SE +/- 1519.57, N = 3). 1. (CC) gcc options: -lcrypt -lz -O3 -march=native
ONNX Runtime 1.11 - Device: CPU - Inferences Per Minute, More Is Better (CentOS Stream 9)

Model               | Executor: Parallel         | Executor: Standard
GPT-2               | 5269 (SE +/- 32.87, N = 3) | 11045 (SE +/- 388.59, N = 12)
yolov4              | 630 (SE +/- 1.04, N = 3)   | 694 (SE +/- 1.17, N = 3)
bertsquad-12        | 799 (SE +/- 2.02, N = 3)   | 1093 (SE +/- 0.50, N = 3)
fcn-resnet101-11    | 236 (SE +/- 0.17, N = 3)   | 443 (SE +/- 1.17, N = 3)
ArcFace ResNet-100  | 1693 (SE +/- 3.09, N = 3)  | 1881 (SE +/- 16.82, N = 12)
super-resolution-10 | 3259 (SE +/- 4.91, N = 3)  | 12260 (SE +/- 43.63, N = 3)

1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
Apache HTTP Server 2.4.48 - Concurrent Requests: 1000 (CentOS Stream 9): 131349.60 Requests Per Second (More Is Better; SE +/- 1558.40, N = 15). 1. (CC) gcc options: -shared -fPIC -O2
PyHPC Benchmarks 3.0 - Device: CPU - Project Size: 4194304 - Seconds, Fewer Is Better (CentOS Stream 9)

Backend    | Equation of State           | Isoneutral Mixing
JAX        | 0.031 (SE +/- 0.000, N = 3) | 0.864 (SE +/- 0.004, N = 3)
Numba      | 0.264 (SE +/- 0.002, N = 3) | 1.375 (SE +/- 0.001, N = 3)
Numpy      | 1.936 (SE +/- 0.001, N = 3) | 2.878 (SE +/- 0.033, N = 3)
Aesara     | 0.303 (SE +/- 0.001, N = 3) | 2.078 (SE +/- 0.024, N = 3)
PyTorch    | 0.109 (SE +/- 0.001, N = 3) | 2.060 (SE +/- 0.004, N = 3)
TensorFlow | 0.222 (SE +/- 0.003, N = 4) | -
InfluxDB 1.8.2 - Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (CentOS Stream 9): 666008.7 val/sec (More Is Better; SE +/- 2481.53, N = 3)
Natron 2.4.3 - Input: Spaceship (CentOS Stream 9): 1.9 FPS (More Is Better; SE +/- 0.01, N = 15)
Phoronix Test Suite v10.8.4