New Tests: 2 x Intel Xeon Platinum 8380 testing with an Intel M50CYP2SB2U (SE5C6200.86B.0022.D08.2103221623 BIOS) and ASPEED graphics on CentOS Stream 9 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2209017-NE-NEWTESTS349&grr.
System configuration (CentOS Stream 9):
Processor: 2 x Intel Xeon Platinum 8380 @ 3.40GHz (80 Cores / 160 Threads)
Motherboard: Intel M50CYP2SB2U (SE5C6200.86B.0022.D08.2103221623 BIOS)
Chipset: Intel Device 0998
Memory: 512GB
Disk: 7682GB INTEL SSDPF2KX076TZ
Graphics: ASPEED
Monitor: VE228
Network: 2 x Intel X710 for 10GBASE-T + 2 x Intel E810-C for QSFP
OS: CentOS Stream 9
Kernel: 5.14.0-148.el9.x86_64 (x86_64)
Desktop: GNOME Shell 40.10
Display Server: X Server
Compiler: GCC 11.3.1 20220421
File-System: xfs
Screen Resolution: 1920x1080

OpenBenchmarking.org notes:
- Transparent Huge Pages: always
- Compiler configuration: --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-host-bind-now --enable-host-pie --enable-initfini-array --enable-languages=c,c++,fortran,lto --enable-link-serialization=1 --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-arch_64=x86-64-v2 --with-build-config=bootstrap-lto --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver --without-isl
- Disk details: NONE / attr2,inode64,logbsize=32k,logbufs=8,noquota,relatime,rw,seclabel / Block Size: 4096
- Scaling Governor: intel_pstate powersave (EPP: balance_performance); CPU Microcode: 0xd000363
- OpenJDK Runtime Environment (Red_Hat-11.0.16.0.8-2.el9) (build 11.0.16+8-LTS); Python 3.9.13
- Security: SELinux + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
New Tests: test index (summary values for CentOS Stream 9 are listed after the index, in the same order; detailed per-test results follow below):
mnn: inception-v3; mobilenet-v1-1.0; MobileNetV2_224; SqueezeNetV1.0; resnet-v2-50; squeezenetv1.1; mobilenetV3; nasnet
pgbench: 100 - 500 - Read Only - Average Latency; 100 - 500 - Read Only
onnx: GPT-2 - CPU - Standard; ArcFace ResNet-100 - CPU - Standard
renaissance: Finagle HTTP Requests
apache: 1000
stockfish: Total Time
renaissance: ALS Movie Lens
onednn: Recurrent Neural Network Inference - bf16bf16bf16 - CPU
ospray: particle_volume/scivis/real_time
openvino: Person Vehicle Bike Detection FP16 - CPU; Person Vehicle Bike Detection FP16 - CPU
renaissance: In-Memory Database Shootout
openvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPU; Age Gender Recognition Retail 0013 FP16-INT8 - CPU
tensorflow-lite: Inception V4
onednn: Recurrent Neural Network Training - bf16bf16bf16 - CPU
renaissance: Savina Reactors.IO
graphics-magick: Rotate
ospray: particle_volume/pathtracer/real_time
spark: 40000000 - 500 - Calculate Pi Benchmark Using Dataframe; 40000000 - 500 - Calculate Pi Benchmark; 40000000 - 500 - SHA-512 Benchmark Time
tnn: CPU - DenseNet
memtier-benchmark: Redis - 50 - 5:1; Redis - 50 - 1:10
natron: Spaceship
clickhouse: 100M Rows Web Analytics Dataset, Third Run; 100M Rows Web Analytics Dataset, Second Run; 100M Rows Web Analytics Dataset, First Run / Cold Cache
blender: Barbershop - CPU-Only
ospray: particle_volume/ao/real_time
lammps: 20k Atoms
openvino: Vehicle Detection FP16 - CPU; Vehicle Detection FP16 - CPU
blosc: blosclz bitshuffle
tensorflow-lite: NASNet Mobile; SqueezeNet; Mobilenet Float
graphics-magick: HWB Color Space
webp2: Quality 95, Compression Effort 7
vpxenc: Speed 0 - Bosphorus 4K
hpcg:
blosc: blosclz shuffle
redis: GET - 500
ospray-studio: 3 - 4K - 32 - Path Tracer
stress-ng: Atomic
influxdb: 4 - 10000 - 2,5000,1 - 10000
stress-ng: Futex; Socket Activity
ospray-studio: 1 - 4K - 16 - Path Tracer
compress-zstd: 8 - Decompression Speed; 8 - Compression Speed
pyhpc: CPU - Aesara - 4194304 - Isoneutral Mixing
pgbench: 100 - 500 - Read Write - Average Latency; 100 - 500 - Read Write
ospray: gravity_spheres_volume/dim_512/scivis/real_time
pgbench: 100 - 250 - Read Write - Average Latency; 100 - 250 - Read Write
ospray: gravity_spheres_volume/dim_512/ao/real_time
redis: SET - 1000
pgbench: 100 - 250 - Read Only - Average Latency; 100 - 250 - Read Only
ospray-studio: 2 - 4K - 32 - Path Tracer; 1 - 4K - 32 - Path Tracer
build-llvm: Ninja
ospray: gravity_spheres_volume/dim_512/pathtracer/real_time
redis: SET - 500
pyhpc: CPU - Numpy - 4194304 - Isoneutral Mixing
build-linux-kernel: defconfig
svt-av1: Preset 4 - Bosphorus 4K
ospray-studio: 2 - 4K - 16 - Path Tracer
onnx: fcn-resnet101-11 - CPU - Parallel; ArcFace ResNet-100 - CPU - Parallel; GPT-2 - CPU - Parallel; bertsquad-12 - CPU - Parallel; yolov4 - CPU - Parallel; bertsquad-12 - CPU - Standard; fcn-resnet101-11 - CPU - Standard; yolov4 - CPU - Standard; super-resolution-10 - CPU - Parallel; super-resolution-10 - CPU - Standard
webp2: Quality 75, Compression Effort 7
ospray-studio: 3 - 4K - 16 - Path Tracer
pyhpc: CPU - PyTorch - 4194304 - Isoneutral Mixing
node-express-loadtest:
stress-ng: CPU Cache
build-gdb: Time To Compile
nginx: 1000
x264: Bosphorus 4K
pyhpc: CPU - JAX - 4194304 - Isoneutral Mixing
avifenc: 0
blender: Pabellon Barcelona - CPU-Only
pyhpc: CPU - Numba - 4194304 - Isoneutral Mixing
node-web-tooling:
renaissance: Rand Forest
onednn: IP Shapes 1D - bf16bf16bf16 - CPU
compress-zstd: 19, Long Mode - Decompression Speed; 19, Long Mode - Compression Speed
simdjson: PartialTweets; DistinctUserID; TopTweet
openvino: Person Detection FP16 - CPU; Person Detection FP16 - CPU; Face Detection FP16 - CPU; Face Detection FP16 - CPU; Person Detection FP32 - CPU; Person Detection FP32 - CPU
pyhpc: CPU - Numpy - 4194304 - Equation of State
openvino: Face Detection FP16-INT8 - CPU; Face Detection FP16-INT8 - CPU
blender: Classroom - CPU-Only
openvino: Machine Translation EN To DE FP16 - CPU; Machine Translation EN To DE FP16 - CPU; Weld Porosity Detection FP16 - CPU; Weld Porosity Detection FP16 - CPU; Age Gender Recognition Retail 0013 FP16 - CPU; Age Gender Recognition Retail 0013 FP16 - CPU; Weld Porosity Detection FP16-INT8 - CPU; Weld Porosity Detection FP16-INT8 - CPU; Vehicle Detection FP16-INT8 - CPU; Vehicle Detection FP16-INT8 - CPU
onednn: Matrix Multiply Batch Shapes Transformer - bf16bf16bf16 - CPU
tensorflow-lite: Inception ResNet V2; Mobilenet Quant
graphics-magick: Resizing; Enhanced; Sharpen; Noise-Gaussian; Swirl
simdjson: Kostya; LargeRand
unpack-linux: linux-5.19.tar.xz
renaissance: Apache Spark Bayes
avifenc: 2
redis: GET - 1000
avifenc: 6, Lossless
onednn: IP Shapes 3D - bf16bf16bf16 - CPU
webp: Quality 100, Highest Compression
compress-zstd: 19 - Decompression Speed; 19 - Compression Speed
compress-7zip: Decompression Rating; Compression Rating
dacapobench: Jython
webp: Quality 100, Lossless, Highest Compression
stress-ng: System V Message Passing; Glibc C String Functions
compress-zstd: 8, Long Mode - Decompression Speed; 8, Long Mode - Compression Speed
tnn: CPU - MobileNet v2
compress-zstd: 3 - Decompression Speed; 3 - Compression Speed; 3, Long Mode - Decompression Speed; 3, Long Mode - Compression Speed
dacapobench: Tradebeans
blender: Fishy Cat - CPU-Only
pyhpc: CPU - PyTorch - 4194304 - Equation of State
gromacs: MPI CPU - water_GMX50_bare
avifenc: 10, Lossless
namd: ATPase Simulation - 327,506 Atoms
astcenc: Medium
stress-ng: NUMA; x86_64 RdRand; Forking; Malloc; Memory Copying; MMAP; Context Switching; Semaphores; CPU Stress; SENDFILE; MEMFD; Glibc Qsort Data Sorting; Crypto; Matrix Math; Vector Math
redis: SET - 50
pyhpc: CPU - TensorFlow - 4194304 - Equation of State
redis: GET - 50
openssl:; openssl:
blender: BMW27 - CPU-Only
tnn: CPU - SqueezeNet v1.1
pyhpc: CPU - JAX - 4194304 - Equation of State
webp: Quality 100, Lossless
onednn: Deconvolution Batch shapes_1d - bf16bf16bf16 - CPU
dacapobench: H2
svt-av1: Preset 8 - Bosphorus 4K
pyhpc: CPU - Aesara - 4194304 - Equation of State
svt-vp9: PSNR/SSIM Optimized - Bosphorus 4K
webp: Quality 100
svt-vp9: Visual Quality Optimized - Bosphorus 4K
astcenc: Exhaustive
svt-vp9: VMAF Optimized - Bosphorus 4K
webp2: Default
onednn: Deconvolution Batch shapes_3d - bf16bf16bf16 - CPU
svt-hevc: 7 - Bosphorus 4K
webp: Default
svt-av1: Preset 10 - Bosphorus 4K
pyhpc: CPU - Numba - 4194304 - Equation of State
astcenc: Thorough
svt-av1: Preset 12 - Bosphorus 4K
svt-hevc: 10 - Bosphorus 4K
astcenc: Fast
onednn: Convolution Batch Shapes Auto - bf16bf16bf16 - CPU
avifenc: 6
tnn: CPU - SqueezeNet v2
lammps: Rhodopsin Protein
pyhpc: CPU - TensorFlow - 4194304 - Isoneutral Mixing

CentOS Stream 9 summary values (in index order): 20.091 2.090 2.663 3.956 8.663 2.356 1.753 12.097 0.270 1855656 11045 1881 8693.9 131349.60 179473129 17123.9 447.616 24.2951 13.60 1478.64 17787.2 1.50 42731.93 73896.5 697.282 21219.4 1030 100.6696 2.79 36.21 88.83 3955.046 1339297.91 1398073.70 1.9 243.95 244.38 231.48 257.15 24.3527 35.123 18.67 1071.70 3704.1 68713.0 16614.41 4240.41 1138 233.750 3.04 40.2812 4916.7 2018201.09 48319 187775.77 666008.7 1088788.92 2460.37 20152 3017.5 1244.0 2.078 26.724 18710 22.0215 12.051 20745 22.4174 1847194.12 0.150 1669388 40852 40580 135.019 25.5852 1931278.62 2.878 29.670 1.327 20261 236 1693 5269 799 630 1093 443 694 3259 12260 111.920 23967 2.060 4910 16.26 95.415 200945.49 34.57 0.864 84.316 82.89 1.375 10.55 1455.2 5.40603 2635.7 43.4 4.85 5.77 5.62 1424.57 13.92 819.63 24.29 1451.62 13.67 1.936 239.86 83.26 64.82 85.47 233.33 32.00 2478.96 1.36 47224.77 8.27 9657.99 4.52 4414.94 37.95778 47297.8 9540.86 2748 1153 641 738 2340 2.91 0.96 9.194 1075.3 48.710 2406986.65 9.260 2.38563 8.802 2571.3 86.6 371131 467866 5600 41.208 7093379.73 9473078.17 3201.0 307.5 378.879 3022.9 7026.1 3208.0 281.0 16070 33.37 0.109 8.996 6.605 0.28138 316.3743 10.37 667284.36 63484.45 306750258.84 12812.45 3747.58 6233126.45 7186364.51 135517.46 1271967.05 4098.84 934.26 83808.91 286293.40 322923.09 2189377.08 0.222 2284227.2 1112427.2 16866.1 25.04 366.493 0.031 21.115 3.81155 9847 38.675 0.303 115.50 3.044 99.27 4.5054 112.93 2.667 3.68404 86.67 2.163 65.620 0.264 46.3848 92.832 113.23 799.1120 2.15938 6.056 75.880 30.870
Mobile Neural Network 2.1 (ms, fewer is better), CentOS Stream 9:
Model: inception-v3: 20.09 (SE +/- 0.19, N = 15; MIN: 17.31 / MAX: 37.29)
Model: mobilenet-v1-1.0: 2.090 (SE +/- 0.047, N = 15; MIN: 1.76 / MAX: 3.93)
Model: MobileNetV2_224: 2.663 (SE +/- 0.014, N = 15; MIN: 2.48 / MAX: 5.57)
Model: SqueezeNetV1.0: 3.956 (SE +/- 0.075, N = 15; MIN: 3.51 / MAX: 9.33)
Model: resnet-v2-50: 8.663 (SE +/- 0.088, N = 15; MIN: 7.71 / MAX: 20.48)
Model: squeezenetv1.1: 2.356 (SE +/- 0.050, N = 15; MIN: 2.03 / MAX: 5.76)
Model: mobilenetV3: 1.753 (SE +/- 0.020, N = 15; MIN: 1.61 / MAX: 4.19)
Model: nasnet: 12.10 (SE +/- 0.23, N = 15; MIN: 10.54 / MAX: 23.03)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
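Every result in this file is reported as a mean over repeated runs with a "SE +/- x, N = y" annotation. As a quick reference for how such a standard error is derived, here is a minimal sketch; the per-run timings below are made up for illustration, while the Phoronix Test Suite computes this from its own recorded runs:

```python
import math
import statistics

# Hypothetical per-run timings in ms (stand-ins for the N runs a test records).
runs = [20.3, 19.8, 20.1, 20.6, 19.9]

mean = statistics.mean(runs)
# Standard error of the mean: sample standard deviation divided by sqrt(N).
se = statistics.stdev(runs) / math.sqrt(len(runs))

print(f"{mean:.2f} ms, SE +/- {se:.3f}, N = {len(runs)}")
```

A small SE relative to the mean (as in most entries here) indicates the run-to-run spread was modest; entries with a large SE, such as some TensorFlow Lite results below, varied noticeably between runs.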
PostgreSQL pgbench 14.0, Scaling Factor: 100 - Clients: 500 - Mode: Read Only, CentOS Stream 9:
Average Latency: 0.270 ms, fewer is better (SE +/- 0.005, N = 12)
Throughput: 1855656 TPS, more is better (SE +/- 30425.21, N = 12)
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
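The average latency and TPS figures above are two views of the same run: with a fixed pool of concurrent clients, mean latency is roughly the client count divided by throughput (Little's law). A small sanity-check sketch using the numbers reported above:

```python
# Reported above: 500 concurrent clients, ~1855656 TPS, 0.270 ms average latency.
clients = 500
tps = 1855656

# Each in-flight transaction occupies one client slot for its full duration,
# so mean latency ~= clients / throughput.
latency_ms = clients / tps * 1000
print(f"~{latency_ms:.3f} ms")
```

The same relationship holds for the other pgbench pairs in this file (e.g. 250 clients / 1669388 TPS gives roughly the reported 0.150 ms).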
ONNX Runtime 1.11 (Inferences Per Minute, more is better), CentOS Stream 9:
Model: GPT-2 - Device: CPU - Executor: Standard: 11045 (SE +/- 388.59, N = 12)
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard: 1881 (SE +/- 16.82, N = 12)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

Renaissance 0.14, Test: Finagle HTTP Requests (ms, fewer is better)
CentOS Stream 9: 8693.9 (SE +/- 154.17, N = 12; MIN: 6648.05 / MAX: 15659.82)

Apache HTTP Server 2.4.48, Concurrent Requests: 1000 (Requests Per Second, more is better)
CentOS Stream 9: 131349.60 (SE +/- 1558.40, N = 15)
1. (CC) gcc options: -shared -fPIC -O2

Stockfish 15, Total Time (Nodes Per Second, more is better)
CentOS Stream 9: 179473129 (SE +/- 2364357.21, N = 15)
1. (CXX) g++ options: -lgcov -m64 -lpthread -fno-exceptions -std=c++17 -fno-peel-loops -fno-tracer -pedantic -O3 -msse -msse3 -mpopcnt -mavx2 -mavx512f -mavx512bw -mavx512vnni -mavx512dq -mavx512vl -msse4.1 -mssse3 -msse2 -mbmi2 -flto -flto=jobserver

Renaissance 0.14, Test: ALS Movie Lens (ms, fewer is better)
CentOS Stream 9: 17123.9 (SE +/- 73.46, N = 3; MIN: 16240.16 / MAX: 19195.87)

oneDNN 2.6, Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
CentOS Stream 9: 447.62 (SE +/- 7.22, N = 15; MIN: 376.51)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

OSPRay 2.10, Benchmark: particle_volume/scivis/real_time (Items Per Second, more is better)
CentOS Stream 9: 24.30 (SE +/- 0.29, N = 3)

OpenVINO 2022.2.dev, Model: Person Vehicle Bike Detection FP16 - Device: CPU, CentOS Stream 9:
13.60 ms, fewer is better (SE +/- 0.30, N = 15; MIN: 8.57 / MAX: 68.28)
1478.64 FPS, more is better (SE +/- 39.85, N = 15)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

Renaissance 0.14, Test: In-Memory Database Shootout (ms, fewer is better)
CentOS Stream 9: 17787.2 (SE +/- 197.42, N = 3; MIN: 17444.33 / MAX: 21383.13)

OpenVINO 2022.2.dev, Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU, CentOS Stream 9:
1.50 ms, fewer is better (SE +/- 0.05, N = 15; MIN: 0.34 / MAX: 29.48)
42731.93 FPS, more is better (SE +/- 1567.95, N = 15)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl

TensorFlow Lite 2022-05-18, Model: Inception V4 (Microseconds, fewer is better)
CentOS Stream 9: 73896.5 (SE +/- 21727.22, N = 15)

oneDNN 2.6, Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
CentOS Stream 9: 697.28 (SE +/- 6.94, N = 12; MIN: 605.85)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread

Renaissance 0.14, Test: Savina Reactors.IO (ms, fewer is better)
CentOS Stream 9: 21219.4 (SE +/- 296.93, N = 3; MIN: 20627.9 / MAX: 32602.9)

GraphicsMagick 1.3.38, Operation: Rotate (Iterations Per Minute, more is better)
CentOS Stream 9: 1030 (SE +/- 7.69, N = 15)
1. (CC) gcc options: -fopenmp -O2 -ljpeg -lXext -lX11 -lbz2 -lz -lm -lpthread

OSPRay 2.10, Benchmark: particle_volume/pathtracer/real_time (Items Per Second, more is better)
CentOS Stream 9: 100.67 (SE +/- 0.72, N = 3)

Apache Spark 3.3, Row Count: 40000000 - Partitions: 500 (Seconds, fewer is better), CentOS Stream 9:
Calculate Pi Benchmark Using Dataframe: 2.79 (SE +/- 0.09, N = 3)
Calculate Pi Benchmark: 36.21 (SE +/- 0.12, N = 3)
SHA-512 Benchmark Time: 88.83 (SE +/- 0.53, N = 3)

TNN 0.3, Target: CPU - Model: DenseNet (ms, fewer is better)
CentOS Stream 9: 3955.05 (SE +/- 27.70, N = 3; MIN: 3833.99 / MAX: 5510.15)
1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

memtier_benchmark 1.4, Protocol: Redis - Clients: 50 (Ops/sec, more is better), CentOS Stream 9:
Set To Get Ratio: 5:1: 1339297.91 (SE +/- 63962.41, N = 12)
Set To Get Ratio: 1:10: 1398073.70 (SE +/- 67672.39, N = 12)
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Natron 2.4.3, Input: Spaceship (FPS, more is better)
CentOS Stream 9: 1.9 (SE +/- 0.01, N = 15)

ClickHouse 22.5.4.19, 100M Rows Web Analytics Dataset (Queries Per Minute, Geo Mean, more is better), CentOS Stream 9:
Third Run: 243.95 (SE +/- 1.95, N = 15; MIN: 42.11 / MAX: 6000)
Second Run: 244.38 (SE +/- 1.48, N = 15; MIN: 44.09 / MAX: 5454.55)
First Run / Cold Cache: 231.48 (SE +/- 2.21, N = 15; MIN: 41.47 / MAX: 5454.55)
1. ClickHouse server version 22.5.4.19 (official build).
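The ClickHouse figures above are a geometric mean over the individual queries of the dataset, which keeps one extremely fast or slow query (note the wide MIN/MAX spread) from dominating the aggregate the way an arithmetic mean would. A minimal sketch of geometric-mean aggregation; the per-query rates below are made up for illustration:

```python
import math

# Hypothetical per-query rates (queries per minute); the real aggregate is
# taken over the full query set of the web-analytics dataset.
rates = [42.0, 180.0, 950.0, 5400.0]

# Geometric mean: nth root of the product, computed via logs for stability.
geo_mean = math.exp(sum(math.log(r) for r in rates) / len(rates))
print(f"{geo_mean:.2f}")
```

With these sample rates the arithmetic mean would be pulled above 1600 by the single fastest query, while the geometric mean stays near the middle of the distribution.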
Blender Blend File: Barbershop - Compute: CPU-Only OpenBenchmarking.org Seconds, Fewer Is Better Blender 3.2 Blend File: Barbershop - Compute: CPU-Only CentOS Stream 9 60 120 180 240 300 SE +/- 0.55, N = 3 257.15
OSPRay Benchmark: particle_volume/ao/real_time OpenBenchmarking.org Items Per Second, More Is Better OSPRay 2.10 Benchmark: particle_volume/ao/real_time CentOS Stream 9 6 12 18 24 30 SE +/- 0.07, N = 3 24.35
LAMMPS Molecular Dynamics Simulator Model: 20k Atoms OpenBenchmarking.org ns/day, More Is Better LAMMPS Molecular Dynamics Simulator 23Jun2022 Model: 20k Atoms CentOS Stream 9 8 16 24 32 40 SE +/- 0.05, N = 3 35.12 1. (CXX) g++ options: -O3 -lm -ldl
OpenVINO Model: Vehicle Detection FP16 - Device: CPU OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Vehicle Detection FP16 - Device: CPU CentOS Stream 9 5 10 15 20 25 SE +/- 0.29, N = 12 18.67 MIN: 11.54 / MAX: 79.43 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenVINO Model: Vehicle Detection FP16 - Device: CPU OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Vehicle Detection FP16 - Device: CPU CentOS Stream 9 200 400 600 800 1000 SE +/- 14.44, N = 12 1071.70 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
C-Blosc Test: blosclz bitshuffle OpenBenchmarking.org MB/s, More Is Better C-Blosc 2.3 Test: blosclz bitshuffle CentOS Stream 9 800 1600 2400 3200 4000 SE +/- 6.11, N = 3 3704.1 1. (CC) gcc options: -std=gnu99 -O3 -lrt -lm
TensorFlow Lite Model: NASNet Mobile OpenBenchmarking.org Microseconds, Fewer Is Better TensorFlow Lite 2022-05-18 Model: NASNet Mobile CentOS Stream 9 15K 30K 45K 60K 75K SE +/- 3728.57, N = 12 68713.0
TensorFlow Lite Model: SqueezeNet OpenBenchmarking.org Microseconds, Fewer Is Better TensorFlow Lite 2022-05-18 Model: SqueezeNet CentOS Stream 9 4K 8K 12K 16K 20K SE +/- 5506.54, N = 12 16614.41
TensorFlow Lite Model: Mobilenet Float OpenBenchmarking.org Microseconds, Fewer Is Better TensorFlow Lite 2022-05-18 Model: Mobilenet Float CentOS Stream 9 900 1800 2700 3600 4500 SE +/- 499.70, N = 12 4240.41
GraphicsMagick Operation: HWB Color Space OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.38 Operation: HWB Color Space CentOS Stream 9 200 400 600 800 1000 SE +/- 28.49, N = 12 1138 1. (CC) gcc options: -fopenmp -O2 -ljpeg -lXext -lX11 -lbz2 -lz -lm -lpthread
WebP2 Image Encode Encode Settings: Quality 95, Compression Effort 7 OpenBenchmarking.org Seconds, Fewer Is Better WebP2 Image Encode 20220422 Encode Settings: Quality 95, Compression Effort 7 CentOS Stream 9 50 100 150 200 250 SE +/- 0.09, N = 3 233.75 1. (CXX) g++ options: -fno-rtti -O3
VP9 libvpx Encoding Speed: Speed 0 - Input: Bosphorus 4K OpenBenchmarking.org Frames Per Second, More Is Better VP9 libvpx Encoding 1.10.0 Speed: Speed 0 - Input: Bosphorus 4K CentOS Stream 9 0.684 1.368 2.052 2.736 3.42 SE +/- 0.02, N = 3 3.04 1. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11
High Performance Conjugate Gradient OpenBenchmarking.org GFLOP/s, More Is Better High Performance Conjugate Gradient 3.1 CentOS Stream 9 9 18 27 36 45 SE +/- 0.08, N = 3 40.28 1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi_cxx -lmpi
C-Blosc Test: blosclz shuffle OpenBenchmarking.org MB/s, More Is Better C-Blosc 2.3 Test: blosclz shuffle CentOS Stream 9 1100 2200 3300 4400 5500 SE +/- 23.45, N = 3 4916.7 1. (CC) gcc options: -std=gnu99 -O3 -lrt -lm
Redis Test: GET - Parallel Connections: 500 OpenBenchmarking.org Requests Per Second, More Is Better Redis 7.0.4 Test: GET - Parallel Connections: 500 CentOS Stream 9 400K 800K 1200K 1600K 2000K SE +/- 89203.76, N = 15 2018201.09 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OSPRay Studio Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer CentOS Stream 9 10K 20K 30K 40K 50K SE +/- 81.93, N = 3 48319 1. (CXX) g++ options: -O3 -ldl
Stress-NG Test: Atomic OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14 Test: Atomic CentOS Stream 9 40K 80K 120K 160K 200K SE +/- 3961.98, N = 15 187775.77 1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread
InfluxDB Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 OpenBenchmarking.org val/sec, More Is Better InfluxDB 1.8.2 Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 CentOS Stream 9 140K 280K 420K 560K 700K SE +/- 2481.53, N = 3 666008.7
Stress-NG Test: Futex OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14 Test: Futex CentOS Stream 9 200K 400K 600K 800K 1000K SE +/- 73263.26, N = 15 1088788.92 1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread
Stress-NG Test: Socket Activity OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14 Test: Socket Activity CentOS Stream 9 500 1000 1500 2000 2500 SE +/- 900.65, N = 15 2460.37 1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread
OSPRay Studio Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer CentOS Stream 9 4K 8K 12K 16K 20K SE +/- 58.89, N = 3 20152 1. (CXX) g++ options: -O3 -ldl
Zstd Compression Compression Level: 8 - Decompression Speed OpenBenchmarking.org MB/s, More Is Better Zstd Compression Compression Level: 8 - Decompression Speed CentOS Stream 9 600 1200 1800 2400 3000 SE +/- 6.19, N = 12 3017.5 1. *** zstd command line interface 64-bits v1.5.1, by Yann Collet ***
Zstd Compression Compression Level: 8 - Compression Speed OpenBenchmarking.org MB/s, More Is Better Zstd Compression Compression Level: 8 - Compression Speed CentOS Stream 9 300 600 900 1200 1500 SE +/- 18.11, N = 12 1244.0 1. *** zstd command line interface 64-bits v1.5.1, by Yann Collet ***
PyHPC Benchmarks 3.0 - Device: CPU - Backend: Aesara - Project Size: 4194304 - Benchmark: Isoneutral Mixing: 2.078 Seconds (SE +/- 0.024, N = 3)
PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 500 - Mode: Read Write - Average Latency: 26.72 ms (SE +/- 0.05, N = 3) [gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm]
PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 500 - Mode: Read Write: 18710 TPS (SE +/- 32.56, N = 3)
OSPRay 2.10 - Benchmark: gravity_spheres_volume/dim_512/scivis/real_time: 22.02 Items Per Second (SE +/- 0.15, N = 3)
PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 250 - Mode: Read Write - Average Latency: 12.05 ms (SE +/- 0.01, N = 3)
PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 250 - Mode: Read Write: 20745 TPS (SE +/- 21.44, N = 3)
OSPRay 2.10 - Benchmark: gravity_spheres_volume/dim_512/ao/real_time: 22.42 Items Per Second (SE +/- 0.06, N = 3)
Redis 7.0.4 - Test: SET - Parallel Connections: 1000: 1847194.12 Requests Per Second (SE +/- 55692.46, N = 12) [g++ options: -MM -MT -g3 -fvisibility=hidden -O3]
PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 250 - Mode: Read Only - Average Latency: 0.150 ms (SE +/- 0.001, N = 3)
PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 250 - Mode: Read Only: 1669388 TPS (SE +/- 9583.19, N = 3)
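The pgbench average-latency figures are consistent with the throughput figures via Little's law (average latency = clients / throughput). A quick sanity check against the read-write results reported above:

```python
def avg_latency_ms(clients, tps):
    """Average latency in ms implied by Little's law: latency = clients / throughput."""
    return clients / tps * 1000.0

# Read-write results from above: 500 clients @ 18710 TPS, 250 clients @ 20745 TPS
print(round(avg_latency_ms(500, 18710), 2))  # 26.72, matching the reported latency
print(round(avg_latency_ms(250, 20745), 2))  # 12.05, matching the reported latency
```

This is expected: pgbench derives its average-latency figure from total elapsed time, client count, and transactions completed, so the two metrics are two views of the same measurement.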
OSPRay Studio 0.11 - Camera: 2 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer: 40852 ms (SE +/- 38.89, N = 3) [g++ options: -O3 -ldl]
OSPRay Studio 0.11 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer: 40580 ms (SE +/- 74.23, N = 3)
Timed LLVM Compilation 13.0 - Build System: Ninja: 135.02 Seconds (SE +/- 0.23, N = 3)
OSPRay 2.10 - Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time: 25.59 Items Per Second (SE +/- 0.04, N = 3)
Redis 7.0.4 - Test: SET - Parallel Connections: 500: 1931278.62 Requests Per Second (SE +/- 47157.16, N = 12)
PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Isoneutral Mixing: 2.878 Seconds (SE +/- 0.033, N = 3)
Timed Linux Kernel Compilation 5.18 - Build: defconfig: 29.67 Seconds (SE +/- 0.39, N = 13)
SVT-AV1 1.2 - Encoder Mode: Preset 4 - Input: Bosphorus 4K: 1.327 Frames Per Second (SE +/- 0.001, N = 3) [g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq]
OSPRay Studio 0.11 - Camera: 2 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer: 20261 ms (SE +/- 49.21, N = 3)
ONNX Runtime 1.11 - Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel: 236 Inferences Per Minute (SE +/- 0.17, N = 3) [g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt]
ONNX Runtime 1.11 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel: 1693 Inferences Per Minute (SE +/- 3.09, N = 3)
ONNX Runtime 1.11 - Model: GPT-2 - Device: CPU - Executor: Parallel: 5269 Inferences Per Minute (SE +/- 32.87, N = 3)
ONNX Runtime 1.11 - Model: bertsquad-12 - Device: CPU - Executor: Parallel: 799 Inferences Per Minute (SE +/- 2.02, N = 3)
ONNX Runtime 1.11 - Model: yolov4 - Device: CPU - Executor: Parallel: 630 Inferences Per Minute (SE +/- 1.04, N = 3)
ONNX Runtime 1.11 - Model: bertsquad-12 - Device: CPU - Executor: Standard: 1093 Inferences Per Minute (SE +/- 0.50, N = 3)
ONNX Runtime 1.11 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard: 443 Inferences Per Minute (SE +/- 1.17, N = 3)
ONNX Runtime 1.11 - Model: yolov4 - Device: CPU - Executor: Standard: 694 Inferences Per Minute (SE +/- 1.17, N = 3)
ONNX Runtime 1.11 - Model: super-resolution-10 - Device: CPU - Executor: Parallel: 3259 Inferences Per Minute (SE +/- 4.91, N = 3)
ONNX Runtime 1.11 - Model: super-resolution-10 - Device: CPU - Executor: Standard: 12260 Inferences Per Minute (SE +/- 43.63, N = 3)
WebP2 Image Encode 20220422 - Encode Settings: Quality 75, Compression Effort 7: 111.92 Seconds (SE +/- 0.09, N = 3) [g++ options: -fno-rtti -O3]
OSPRay Studio 0.11 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer: 23967 ms (SE +/- 79.25, N = 3)
PyHPC Benchmarks 3.0 - Device: CPU - Backend: PyTorch - Project Size: 4194304 - Benchmark: Isoneutral Mixing: 2.060 Seconds (SE +/- 0.004, N = 3)
Node.js Express HTTP Load Test: 4910 Requests Per Second (SE +/- 73.75, N = 15)
Stress-NG 0.14 - Test: CPU Cache: 16.26 Bogo Ops/s (SE +/- 0.13, N = 10) [gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread]
Timed GDB GNU Debugger Compilation 10.2 - Time To Compile: 95.42 Seconds (SE +/- 0.27, N = 3)
nginx 1.21.1 - Concurrent Requests: 1000: 200945.49 Requests Per Second (SE +/- 1519.57, N = 3) [gcc options: -lcrypt -lz -O3 -march=native]
x264 2022-02-22 - Video Input: Bosphorus 4K: 34.57 Frames Per Second (SE +/- 0.49, N = 15) [gcc options: -ldl -m64 -lm -lpthread -O3 -flto]
PyHPC Benchmarks 3.0 - Device: CPU - Backend: JAX - Project Size: 4194304 - Benchmark: Isoneutral Mixing: 0.864 Seconds (SE +/- 0.004, N = 3)
libavif avifenc 0.10 - Encoder Speed: 0: 84.32 Seconds (SE +/- 0.66, N = 3) [g++ options: -O3 -fPIC -lm]
Blender 3.2 - Blend File: Pabellon Barcelona - Compute: CPU-Only: 82.89 Seconds (SE +/- 0.02, N = 3)
PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numba - Project Size: 4194304 - Benchmark: Isoneutral Mixing: 1.375 Seconds (SE +/- 0.001, N = 3)
Node.js V8 Web Tooling Benchmark: 10.55 runs/s (SE +/- 0.06, N = 3)
Renaissance 0.14 - Test: Random Forest: 1455.2 ms (SE +/- 6.85, N = 3; MIN: 1315.52 / MAX: 1806.24)
oneDNN 2.6 - Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU: 5.40603 ms (SE +/- 0.32475, N = 15; MIN: 3.28) [g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread]
Zstd Compression 1.5.1 - Compression Level: 19, Long Mode - Decompression Speed: 2635.7 MB/s (SE +/- 4.30, N = 5)
Zstd Compression 1.5.1 - Compression Level: 19, Long Mode - Compression Speed: 43.4 MB/s (SE +/- 0.45, N = 5)
simdjson 2.0 - Throughput Test: PartialTweets: 4.85 GB/s (SE +/- 0.01, N = 3) [g++ options: -O3]
simdjson 2.0 - Throughput Test: DistinctUserID: 5.77 GB/s (SE +/- 0.01, N = 3)
simdjson 2.0 - Throughput Test: TopTweet: 5.62 GB/s (SE +/- 0.01, N = 3)
OpenVINO 2022.2.dev - Model: Person Detection FP16 - Device: CPU: 1424.57 ms (SE +/- 0.41, N = 3; MIN: 1046.08 / MAX: 1657.29) [g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl]
OpenVINO 2022.2.dev - Model: Person Detection FP16 - Device: CPU: 13.92 FPS (SE +/- 0.00, N = 3)
OpenVINO 2022.2.dev - Model: Face Detection FP16 - Device: CPU: 819.63 ms (SE +/- 0.67, N = 3; MIN: 519.3 / MAX: 967.18)
OpenVINO 2022.2.dev - Model: Face Detection FP16 - Device: CPU: 24.29 FPS (SE +/- 0.02, N = 3)
OpenVINO 2022.2.dev - Model: Person Detection FP32 - Device: CPU: 1451.62 ms (SE +/- 1.03, N = 3; MIN: 1039.96 / MAX: 1708.95)
OpenVINO 2022.2.dev - Model: Person Detection FP32 - Device: CPU: 13.67 FPS (SE +/- 0.02, N = 3)
PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Equation of State: 1.936 Seconds (SE +/- 0.001, N = 3)
OpenVINO 2022.2.dev - Model: Face Detection FP16-INT8 - Device: CPU: 239.86 ms (SE +/- 0.22, N = 3; MIN: 178.86 / MAX: 348.97)
OpenVINO 2022.2.dev - Model: Face Detection FP16-INT8 - Device: CPU: 83.26 FPS (SE +/- 0.07, N = 3)
Blender 3.2 - Blend File: Classroom - Compute: CPU-Only: 64.82 Seconds (SE +/- 0.04, N = 3)
OpenVINO 2022.2.dev - Model: Machine Translation EN To DE FP16 - Device: CPU: 85.47 ms (SE +/- 0.18, N = 3; MIN: 76.11 / MAX: 195.12)
OpenVINO 2022.2.dev - Model: Machine Translation EN To DE FP16 - Device: CPU: 233.33 FPS (SE +/- 0.51, N = 3)
OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16 - Device: CPU: 32.00 ms (SE +/- 0.01, N = 3; MIN: 21.78 / MAX: 67.35)
OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16 - Device: CPU: 2478.96 FPS (SE +/- 1.20, N = 3)
OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU: 1.36 ms (SE +/- 0.00, N = 3; MIN: 0.99 / MAX: 13.44)
OpenVINO 2022.2.dev - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU: 47224.77 FPS (SE +/- 99.47, N = 3)
OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU: 8.27 ms (SE +/- 0.01, N = 3; MIN: 7.23 / MAX: 27.1)
OpenVINO 2022.2.dev - Model: Weld Porosity Detection FP16-INT8 - Device: CPU: 9657.99 FPS (SE +/- 8.29, N = 3)
OpenVINO 2022.2.dev - Model: Vehicle Detection FP16-INT8 - Device: CPU: 4.52 ms (SE +/- 0.00, N = 3; MIN: 4.11 / MAX: 44.74)
OpenVINO 2022.2.dev - Model: Vehicle Detection FP16-INT8 - Device: CPU: 4414.94 FPS (SE +/- 1.33, N = 3)
oneDNN 2.6 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU: 37.96 ms (SE +/- 5.68, N = 15; MIN: 3.48)
TensorFlow Lite 2022-05-18 - Model: Inception ResNet V2: 47297.8 Microseconds (SE +/- 268.62, N = 3)
TensorFlow Lite 2022-05-18 - Model: Mobilenet Quant: 9540.86 Microseconds (SE +/- 100.45, N = 3)
GraphicsMagick 1.3.38 - Operation: Resizing: 2748 Iterations Per Minute (SE +/- 27.10, N = 3) [gcc options: -fopenmp -O2 -ljpeg -lXext -lX11 -lbz2 -lz -lm -lpthread]
GraphicsMagick 1.3.38 - Operation: Enhanced: 1153 Iterations Per Minute (SE +/- 1.76, N = 3)
GraphicsMagick 1.3.38 - Operation: Sharpen: 641 Iterations Per Minute (SE +/- 1.76, N = 3)
GraphicsMagick 1.3.38 - Operation: Noise-Gaussian: 738 Iterations Per Minute (SE +/- 0.88, N = 3)
GraphicsMagick 1.3.38 - Operation: Swirl: 2340 Iterations Per Minute (SE +/- 10.48, N = 3)
simdjson 2.0 - Throughput Test: Kostya: 2.91 GB/s (SE +/- 0.00, N = 3)
simdjson 2.0 - Throughput Test: LargeRandom: 0.96 GB/s (SE +/- 0.00, N = 3)
Unpacking The Linux Kernel 5.19 - linux-5.19.tar.xz: 9.194 Seconds (SE +/- 0.116, N = 17)
Renaissance 0.14 - Test: Apache Spark Bayes: 1075.3 ms (SE +/- 11.23, N = 3; MIN: 628.33 / MAX: 1551.11)
libavif avifenc 0.10 - Encoder Speed: 2: 48.71 Seconds (SE +/- 0.53, N = 3)
Redis 7.0.4 - Test: GET - Parallel Connections: 1000: 2406986.65 Requests Per Second (SE +/- 26860.41, N = 5)
libavif avifenc 0.10 - Encoder Speed: 6, Lossless: 9.260 Seconds (SE +/- 0.070, N = 15)
oneDNN 2.6 - Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU: 2.38563 ms (SE +/- 0.07640, N = 15; MIN: 1.7)
WebP Image Encode 1.1 - Encode Settings: Quality 100, Highest Compression: 8.802 Seconds (SE +/- 0.061, N = 15) [gcc options: -fvisibility=hidden -O2 -lm -lpng16 -ljpeg]
Zstd Compression 1.5.1 - Compression Level: 19 - Decompression Speed: 2571.3 MB/s (SE +/- 6.30, N = 3)
Zstd Compression 1.5.1 - Compression Level: 19 - Compression Speed: 86.6 MB/s (SE +/- 0.52, N = 3)
7-Zip Compression 22.01 - Test: Decompression Rating: 371131 MIPS (SE +/- 2273.35, N = 3) [g++ options: -lpthread -ldl -O2 -fPIC]
7-Zip Compression 22.01 - Test: Compression Rating: 467866 MIPS (SE +/- 5624.44, N = 3)
DaCapo Benchmark 9.12-MR1 - Java Test: Jython: 5600 msec (SE +/- 189.31, N = 16)
WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless, Highest Compression: 41.21 Seconds (SE +/- 0.22, N = 3)
Stress-NG 0.14 - Test: System V Message Passing: 7093379.73 Bogo Ops/s (SE +/- 85352.98, N = 4) [gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread]
Stress-NG 0.14 - Test: Glibc C String Functions: 9473078.17 Bogo Ops/s (SE +/- 103735.47, N = 4)
Zstd Compression 1.5.1 - Compression Level: 8, Long Mode - Decompression Speed: 3201.0 MB/s (SE +/- 10.41, N = 3)
Zstd Compression 1.5.1 - Compression Level: 8, Long Mode - Compression Speed: 307.5 MB/s (SE +/- 0.78, N = 3)
TNN 0.3 - Target: CPU - Model: MobileNet v2: 378.88 ms (SE +/- 4.68, N = 4; MIN: 371.88 / MAX: 634.44) [g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl]
Zstd Compression 1.5.1 - Compression Level: 3 - Decompression Speed: 3022.9 MB/s (SE +/- 0.65, N = 2)
Zstd Compression 1.5.1 - Compression Level: 3 - Compression Speed: 7026.1 MB/s (SE +/- 78.16, N = 3)
Zstd Compression 1.5.1 - Compression Level: 3, Long Mode - Decompression Speed: 3208.0 MB/s (SE +/- 13.60, N = 3)
Zstd Compression 1.5.1 - Compression Level: 3, Long Mode - Compression Speed: 281.0 MB/s (SE +/- 4.03, N = 3)
DaCapo Benchmark 9.12-MR1 - Java Test: Tradebeans: 16070 msec (SE +/- 116.64, N = 4)
Blender 3.2 - Blend File: Fishy Cat - Compute: CPU-Only: 33.37 Seconds (SE +/- 0.07, N = 3)
PyHPC Benchmarks 3.0 - Device: CPU - Backend: PyTorch - Project Size: 4194304 - Benchmark: Equation of State: 0.109 Seconds (SE +/- 0.001, N = 3)
GROMACS 2022.1 - Implementation: MPI CPU - Input: water_GMX50_bare: 8.996 Ns Per Day (SE +/- 0.002, N = 3) [g++ options: -O3]
libavif avifenc 0.10 - Encoder Speed: 10, Lossless: 6.605 Seconds (SE +/- 0.073, N = 15)
NAMD 2.14 - ATPase Simulation - 327,506 Atoms: 0.28138 days/ns (SE +/- 0.00094, N = 3; fewer is better)
ASTC Encoder 4.0 - Preset: Medium: 316.37 MT/s (SE +/- 2.47, N = 15) [g++ options: -O3 -flto -pthread]
Stress-NG 0.14 - Test: NUMA: 10.37 Bogo Ops/s (SE +/- 0.02, N = 3)
Stress-NG 0.14 - Test: x86_64 RdRand: 667284.36 Bogo Ops/s (SE +/- 2562.02, N = 3)
Stress-NG 0.14 - Test: Forking: 63484.45 Bogo Ops/s (SE +/- 123.25, N = 3)
Stress-NG 0.14 - Test: Malloc: 306750258.84 Bogo Ops/s (SE +/- 452266.97, N = 3)
Stress-NG 0.14 - Test: Memory Copying: 12812.45 Bogo Ops/s (SE +/- 5.23, N = 3)
Stress-NG 0.14 - Test: MMAP: 3747.58 Bogo Ops/s (SE +/- 34.11, N = 3)
Stress-NG 0.14 - Test: Context Switching: 6233126.45 Bogo Ops/s (SE +/- 78706.86, N = 3)
Stress-NG 0.14 - Test: Semaphores: 7186364.51 Bogo Ops/s (SE +/- 27158.37, N = 3)
Stress-NG 0.14 - Test: CPU Stress: 135517.46 Bogo Ops/s (SE +/- 758.69, N = 3)
Stress-NG 0.14 - Test: SENDFILE: 1271967.05 Bogo Ops/s (SE +/- 2669.03, N = 3)
Stress-NG 0.14 - Test: MEMFD: 4098.84 Bogo Ops/s (SE +/- 35.16, N = 3)
Stress-NG 0.14 - Test: Glibc Qsort Data Sorting: 934.26 Bogo Ops/s (SE +/- 2.69, N = 3)
Stress-NG 0.14 - Test: Crypto: 83808.91 Bogo Ops/s (SE +/- 289.31, N = 3)
Stress-NG 0.14 - Test: Matrix Math: 286293.40 Bogo Ops/s (SE +/- 512.51, N = 3)
Stress-NG 0.14 - Test: Vector Math: 322923.09 Bogo Ops/s (SE +/- 944.66, N = 3)
Redis Test: SET - Parallel Connections: 50 OpenBenchmarking.org Requests Per Second, More Is Better Redis 7.0.4 Test: SET - Parallel Connections: 50 CentOS Stream 9 500K 1000K 1500K 2000K 2500K SE +/- 29696.58, N = 3 2189377.08 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
PyHPC Benchmarks Device: CPU - Backend: TensorFlow - Project Size: 4194304 - Benchmark: Equation of State OpenBenchmarking.org Seconds, Fewer Is Better PyHPC Benchmarks 3.0 Device: CPU - Backend: TensorFlow - Project Size: 4194304 - Benchmark: Equation of State CentOS Stream 9 0.05 0.1 0.15 0.2 0.25 SE +/- 0.003, N = 4 0.222
Redis Test: GET - Parallel Connections: 50 OpenBenchmarking.org Requests Per Second, More Is Better Redis 7.0.4 Test: GET - Parallel Connections: 50 CentOS Stream 9 500K 1000K 1500K 2000K 2500K SE +/- 2019.92, N = 3 2284227.2 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenSSL OpenBenchmarking.org verify/s, More Is Better OpenSSL CentOS Stream 9 200K 400K 600K 800K 1000K SE +/- 4686.71, N = 4 1112427.2 1. OpenSSL 3.0.1 14 Dec 2021 (Library: OpenSSL 3.0.1 14 Dec 2021)
OpenSSL OpenBenchmarking.org sign/s, More Is Better OpenSSL CentOS Stream 9 4K 8K 12K 16K 20K SE +/- 205.54, N = 4 16866.1 1. OpenSSL 3.0.1 14 Dec 2021 (Library: OpenSSL 3.0.1 14 Dec 2021)
Blender Blend File: BMW27 - Compute: CPU-Only OpenBenchmarking.org Seconds, Fewer Is Better Blender 3.2 Blend File: BMW27 - Compute: CPU-Only CentOS Stream 9 6 12 18 24 30 SE +/- 0.03, N = 3 25.04
TNN Target: CPU - Model: SqueezeNet v1.1 OpenBenchmarking.org ms, Fewer Is Better TNN 0.3 Target: CPU - Model: SqueezeNet v1.1 CentOS Stream 9 80 160 240 320 400 SE +/- 0.03, N = 3 366.49 MIN: 366.26 / MAX: 366.87 1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl
PyHPC Benchmarks Device: CPU - Backend: JAX - Project Size: 4194304 - Benchmark: Equation of State OpenBenchmarking.org Seconds, Fewer Is Better PyHPC Benchmarks 3.0 Device: CPU - Backend: JAX - Project Size: 4194304 - Benchmark: Equation of State CentOS Stream 9 0.007 0.014 0.021 0.028 0.035 SE +/- 0.000, N = 3 0.031
WebP Image Encode Encode Settings: Quality 100, Lossless OpenBenchmarking.org Encode Time - Seconds, Fewer Is Better WebP Image Encode 1.1 Encode Settings: Quality 100, Lossless CentOS Stream 9 5 10 15 20 25 SE +/- 0.17, N = 3 21.12 1. (CC) gcc options: -fvisibility=hidden -O2 -lm -lpng16 -ljpeg
oneDNN Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.6 Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU CentOS Stream 9 0.8576 1.7152 2.5728 3.4304 4.288 SE +/- 0.01173, N = 3 3.81155 MIN: 3.53 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread
DaCapo Benchmark Java Test: H2 OpenBenchmarking.org msec, Fewer Is Better DaCapo Benchmark 9.12-MR1 Java Test: H2 CentOS Stream 9 2K 4K 6K 8K 10K SE +/- 54.95, N = 4 9847
SVT-AV1 Encoder Mode: Preset 8 - Input: Bosphorus 4K OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.2 Encoder Mode: Preset 8 - Input: Bosphorus 4K CentOS Stream 9 9 18 27 36 45 SE +/- 0.30, N = 3 38.68 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
PyHPC Benchmarks Device: CPU - Backend: Aesara - Project Size: 4194304 - Benchmark: Equation of State OpenBenchmarking.org Seconds, Fewer Is Better PyHPC Benchmarks 3.0 Device: CPU - Backend: Aesara - Project Size: 4194304 - Benchmark: Equation of State CentOS Stream 9 0.0682 0.1364 0.2046 0.2728 0.341 SE +/- 0.001, N = 3 0.303
SVT-VP9 Tuning: PSNR/SSIM Optimized - Input: Bosphorus 4K OpenBenchmarking.org Frames Per Second, More Is Better SVT-VP9 0.3 Tuning: PSNR/SSIM Optimized - Input: Bosphorus 4K CentOS Stream 9 30 60 90 120 150 SE +/- 1.37, N = 4 115.50 1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
WebP Image Encode Encode Settings: Quality 100 OpenBenchmarking.org Encode Time - Seconds, Fewer Is Better WebP Image Encode 1.1 Encode Settings: Quality 100 CentOS Stream 9 0.6849 1.3698 2.0547 2.7396 3.4245 SE +/- 0.065, N = 15 3.044 1. (CC) gcc options: -fvisibility=hidden -O2 -lm -lpng16 -ljpeg
SVT-VP9 Tuning: Visual Quality Optimized - Input: Bosphorus 4K OpenBenchmarking.org Frames Per Second, More Is Better SVT-VP9 0.3 Tuning: Visual Quality Optimized - Input: Bosphorus 4K CentOS Stream 9 20 40 60 80 100 SE +/- 1.23, N = 3 99.27 1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
ASTC Encoder Preset: Exhaustive OpenBenchmarking.org MT/s, More Is Better ASTC Encoder 4.0 Preset: Exhaustive CentOS Stream 9 1.0137 2.0274 3.0411 4.0548 5.0685 SE +/- 0.0017, N = 3 4.5054 1. (CXX) g++ options: -O3 -flto -pthread
SVT-VP9 Tuning: VMAF Optimized - Input: Bosphorus 4K OpenBenchmarking.org Frames Per Second, More Is Better SVT-VP9 0.3 Tuning: VMAF Optimized - Input: Bosphorus 4K CentOS Stream 9 30 60 90 120 150 SE +/- 0.07, N = 3 112.93 1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
WebP2 Image Encode Encode Settings: Default OpenBenchmarking.org Seconds, Fewer Is Better WebP2 Image Encode 20220422 Encode Settings: Default CentOS Stream 9 0.6001 1.2002 1.8003 2.4004 3.0005 SE +/- 0.033, N = 15 2.667 1. (CXX) g++ options: -fno-rtti -O3
oneDNN Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.6 Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU CentOS Stream 9 0.8289 1.6578 2.4867 3.3156 4.1445 SE +/- 0.03477, N = 14 3.68404 MIN: 3.54 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread
SVT-HEVC Tuning: 7 - Input: Bosphorus 4K OpenBenchmarking.org Frames Per Second, More Is Better SVT-HEVC 1.5.0 Tuning: 7 - Input: Bosphorus 4K CentOS Stream 9 20 40 60 80 100 SE +/- 1.08, N = 4 86.67 1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt
WebP Image Encode Encode Settings: Default OpenBenchmarking.org Encode Time - Seconds, Fewer Is Better WebP Image Encode 1.1 Encode Settings: Default CentOS Stream 9 0.4867 0.9734 1.4601 1.9468 2.4335 SE +/- 0.069, N = 15 2.163 1. (CC) gcc options: -fvisibility=hidden -O2 -lm -lpng16 -ljpeg
SVT-AV1 Encoder Mode: Preset 10 - Input: Bosphorus 4K OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.2 Encoder Mode: Preset 10 - Input: Bosphorus 4K CentOS Stream 9 15 30 45 60 75 SE +/- 0.26, N = 3 65.62 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
PyHPC Benchmarks Device: CPU - Backend: Numba - Project Size: 4194304 - Benchmark: Equation of State OpenBenchmarking.org Seconds, Fewer Is Better PyHPC Benchmarks 3.0 Device: CPU - Backend: Numba - Project Size: 4194304 - Benchmark: Equation of State CentOS Stream 9 0.0594 0.1188 0.1782 0.2376 0.297 SE +/- 0.002, N = 3 0.264
ASTC Encoder Preset: Thorough OpenBenchmarking.org MT/s, More Is Better ASTC Encoder 4.0 Preset: Thorough CentOS Stream 9 11 22 33 44 55 SE +/- 0.05, N = 3 46.38 1. (CXX) g++ options: -O3 -flto -pthread
SVT-AV1 Encoder Mode: Preset 12 - Input: Bosphorus 4K OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.2 Encoder Mode: Preset 12 - Input: Bosphorus 4K CentOS Stream 9 20 40 60 80 100 SE +/- 0.83, N = 3 92.83 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
SVT-HEVC Tuning: 10 - Input: Bosphorus 4K OpenBenchmarking.org Frames Per Second, More Is Better SVT-HEVC 1.5.0 Tuning: 10 - Input: Bosphorus 4K CentOS Stream 9 30 60 90 120 150 SE +/- 1.63, N = 3 113.23 1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt
ASTC Encoder Preset: Fast OpenBenchmarking.org MT/s, More Is Better ASTC Encoder 4.0 Preset: Fast CentOS Stream 9 200 400 600 800 1000 SE +/- 3.69, N = 3 799.11 1. (CXX) g++ options: -O3 -flto -pthread
oneDNN Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.6 Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU CentOS Stream 9 0.4859 0.9718 1.4577 1.9436 2.4295 SE +/- 0.01538, N = 3 2.15938 MIN: 2.04 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread
libavif avifenc Encoder Speed: 6 OpenBenchmarking.org Seconds, Fewer Is Better libavif avifenc 0.10 Encoder Speed: 6 CentOS Stream 9 2 4 6 8 10 SE +/- 0.037, N = 3 6.056 1. (CXX) g++ options: -O3 -fPIC -lm
TNN Target: CPU - Model: SqueezeNet v2 OpenBenchmarking.org ms, Fewer Is Better TNN 0.3 Target: CPU - Model: SqueezeNet v2 CentOS Stream 9 20 40 60 80 100 SE +/- 0.78, N = 3 75.88 MIN: 74.63 / MAX: 111.7 1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl
LAMMPS Molecular Dynamics Simulator Model: Rhodopsin Protein OpenBenchmarking.org ns/day, More Is Better LAMMPS Molecular Dynamics Simulator 23Jun2022 Model: Rhodopsin Protein CentOS Stream 9 7 14 21 28 35 SE +/- 0.06, N = 3 30.87 1. (CXX) g++ options: -O3 -lm -ldl
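Each result above is reported as an average over N runs with a standard error (SE). Assuming the SE shown is the standard error of the mean (sample standard deviation divided by the square root of N), it can be reproduced from raw per-run values like this; the run times below are illustrative only and are not taken from this result file:

```python
import math

def mean_and_se(samples):
    """Return (mean, standard error of the mean) for a list of run values."""
    n = len(samples)
    mean = sum(samples) / n
    # Sample variance with Bessel's correction (n - 1 denominator).
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return mean, math.sqrt(var) / math.sqrt(n)

# Hypothetical run times for a 3-run benchmark (not from this result file).
runs = [25.01, 25.04, 25.07]
mean, se = mean_and_se(runs)
print(f"{mean:.2f} (SE +/- {se:.3f}, N = {len(runs)})")
# prints: 25.04 (SE +/- 0.017, N = 3)
```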
Phoronix Test Suite v10.8.4