2 x Intel Xeon Platinum 8380 testing with an Intel M50CYP2SB2U (SE5C6200.86B.0022.D08.2103221623 BIOS) and ASPEED graphics on CentOS Stream 9 via the Phoronix Test Suite.
CentOS Stream 9 Processor: 2 x Intel Xeon Platinum 8380 @ 3.40GHz (80 Cores / 160 Threads), Motherboard: Intel M50CYP2SB2U (SE5C6200.86B.0022.D08.2103221623 BIOS), Chipset: Intel Device 0998, Memory: 512GB, Disk: 7682GB INTEL SSDPF2KX076TZ, Graphics: ASPEED, Monitor: VE228, Network: 2 x Intel X710 for 10GBASE-T + 2 x Intel E810-C for QSFP
OS: CentOS Stream 9, Kernel: 5.14.0-148.el9.x86_64 (x86_64), Desktop: GNOME Shell 40.10, Display Server: X Server, Compiler: GCC 11.3.1 20220421, File-System: xfs, Screen Resolution: 1920x1080
Kernel Notes: Transparent Huge Pages: always
Compiler Notes: --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-host-bind-now --enable-host-pie --enable-initfini-array --enable-languages=c,c++,fortran,lto --enable-link-serialization=1 --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-arch_64=x86-64-v2 --with-build-config=bootstrap-lto --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver --without-isl
Disk Notes: NONE / attr2,inode64,logbsize=32k,logbufs=8,noquota,relatime,rw,seclabel / Block Size: 4096
Processor Notes: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0xd000363
Java Notes: OpenJDK Runtime Environment (Red_Hat-11.0.16.0.8-2.el9) (build 11.0.16+8-LTS)
Python Notes: Python 3.9.13
Security Notes: SELinux + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
OpenBenchmarking.org MB/s, More Is Better - C-Blosc 2.3 - Test: blosclz bitshuffle - CentOS Stream 9: 3704.1 (SE +/- 6.11, N = 3). 1. (CC) gcc options: -std=gnu99 -O3 -lrt -lm
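Each result in this report is the mean across N runs together with the standard error (SE) of that mean. As a reference for how those figures are derived, here is a minimal sketch; the three run values below are hypothetical illustrations, not the actual samples behind this result:

```python
import math
import statistics

def standard_error(samples):
    """Standard error of the mean: sample stddev / sqrt(n)."""
    return statistics.stdev(samples) / math.sqrt(len(samples))

# Hypothetical per-run throughput samples (MB/s) for N = 3 runs
runs = [3698.2, 3703.9, 3710.2]
mean = statistics.fmean(runs)
se = standard_error(runs)
print(f"{mean:.1f} MB/s, SE +/- {se:.2f}, N = {len(runs)}")
```

A small SE relative to the mean (as in most results below) indicates the runs were consistent.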
NAMD NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.
NAMD 2.14 - ATPase Simulation - 327,506 Atoms - CentOS Stream 9: 0.28138 days/ns (fewer is better; SE +/- 0.00094, N = 3)
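NAMD reports days/ns, i.e. how many days of wall-clock time one simulated nanosecond takes, which is why fewer is better here. The reciprocal gives the more familiar ns/day throughput figure:

```python
days_per_ns = 0.28138           # NAMD result above (lower is better)
ns_per_day = 1.0 / days_per_ns  # equivalent throughput (higher is better)
print(f"{ns_per_day:.3f} ns/day")  # roughly 3.554 ns/day
```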
WebP Image Encode 1.1 - Encode Settings: Quality 100 - CentOS Stream 9: 3.044 seconds (fewer is better; SE +/- 0.065, N = 15)
WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless - CentOS Stream 9: 21.12 seconds (fewer is better; SE +/- 0.17, N = 3)
WebP Image Encode 1.1 - Encode Settings: Quality 100, Highest Compression - CentOS Stream 9: 8.802 seconds (fewer is better; SE +/- 0.061, N = 15)
WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless, Highest Compression - CentOS Stream 9: 41.21 seconds (fewer is better; SE +/- 0.22, N = 3)
1. (CC) gcc options: -fvisibility=hidden -O2 -lm -lpng16 -ljpeg
simdjson This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
simdjson 2.0 - Throughput Test: Kostya - CentOS Stream 9: 2.91 GB/s (more is better; SE +/- 0.00, N = 3)
simdjson 2.0 - Throughput Test: TopTweet - CentOS Stream 9: 5.62 GB/s (more is better; SE +/- 0.01, N = 3)
simdjson 2.0 - Throughput Test: LargeRandom - CentOS Stream 9: 0.96 GB/s (more is better; SE +/- 0.00, N = 3)
simdjson 2.0 - Throughput Test: PartialTweets - CentOS Stream 9: 4.85 GB/s (more is better; SE +/- 0.01, N = 3)
simdjson 2.0 - Throughput Test: DistinctUserID - CentOS Stream 9: 5.77 GB/s (more is better; SE +/- 0.01, N = 3)
1. (CXX) g++ options: -O3
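The GB/s figures above are parse throughput: bytes of JSON parsed divided by elapsed wall time. A minimal sketch of that measurement using the Python standard-library `json` module as a stand-in parser (this is not simdjson and will be far slower; it only illustrates how the metric is computed):

```python
import json
import time

# Build a synthetic JSON document and encode it to bytes
doc = json.dumps([{"id": i, "name": f"user{i}"} for i in range(10000)])
payload = doc.encode()

start = time.perf_counter()
parsed = json.loads(payload)
elapsed = time.perf_counter() - start

gb_per_s = len(payload) / elapsed / 1e9  # bytes parsed per second, in GB/s
print(f"parsed {len(payload)} bytes at {gb_per_s:.3f} GB/s")
```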
Java Test: Eclipse
CentOS Stream 9: The test quit with a non-zero exit status.
Java Test: Tradesoap
CentOS Stream 9: The test run did not produce a result.
Zstd Compression This test measures the time needed to compress/decompress a sample input file using Zstd compression supplied by the system or otherwise external to the test profile. Learn more via the OpenBenchmarking.org test page.
Zstd Compression - Compression Level: 3 - Compression Speed - CentOS Stream 9: 7026.1 MB/s (more is better; SE +/- 78.16, N = 3)
Zstd Compression - Compression Level: 3 - Decompression Speed - CentOS Stream 9: 3022.9 MB/s (more is better; SE +/- 0.65, N = 2)
Zstd Compression - Compression Level: 8 - Compression Speed - CentOS Stream 9: 1244.0 MB/s (more is better; SE +/- 18.11, N = 12)
Zstd Compression - Compression Level: 8 - Decompression Speed - CentOS Stream 9: 3017.5 MB/s (more is better; SE +/- 6.19, N = 12)
Zstd Compression - Compression Level: 19 - Compression Speed - CentOS Stream 9: 86.6 MB/s (more is better; SE +/- 0.52, N = 3)
Zstd Compression - Compression Level: 19 - Decompression Speed - CentOS Stream 9: 2571.3 MB/s (more is better; SE +/- 6.30, N = 3)
Zstd Compression - Compression Level: 3, Long Mode - Compression Speed - CentOS Stream 9: 281.0 MB/s (more is better; SE +/- 4.03, N = 3)
Zstd Compression - Compression Level: 3, Long Mode - Decompression Speed - CentOS Stream 9: 3208.0 MB/s (more is better; SE +/- 13.60, N = 3)
Zstd Compression - Compression Level: 8, Long Mode - Compression Speed - CentOS Stream 9: 307.5 MB/s (more is better; SE +/- 0.78, N = 3)
Zstd Compression - Compression Level: 8, Long Mode - Decompression Speed - CentOS Stream 9: 3201.0 MB/s (more is better; SE +/- 10.41, N = 3)
Zstd Compression - Compression Level: 19, Long Mode - Compression Speed - CentOS Stream 9: 43.4 MB/s (more is better; SE +/- 0.45, N = 5)
Zstd Compression - Compression Level: 19, Long Mode - Decompression Speed - CentOS Stream 9: 2635.7 MB/s (more is better; SE +/- 4.30, N = 5)
1. *** zstd command line interface 64-bits v1.5.1, by Yann Collet ***
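The MB/s values above are simply input bytes processed divided by elapsed time. A minimal sketch of that measurement; since the Python standard library has no zstd bindings, zlib stands in here purely to illustrate how the throughput metric is computed (zlib level 3 is not comparable to zstd level 3):

```python
import os
import time
import zlib

# 2 MB mixed payload: 1 MB incompressible random bytes + 1 MB highly compressible
data = os.urandom(1 << 20) + b"A" * (1 << 20)

start = time.perf_counter()
compressed = zlib.compress(data, level=3)
elapsed = time.perf_counter() - start

mb_per_s = len(data) / elapsed / 1e6  # input bytes per second, in MB/s
print(f"compressed {len(data)} -> {len(compressed)} bytes at {mb_per_s:.1f} MB/s")
```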
GraphicsMagick 1.3.38 - Operation: Rotate - CentOS Stream 9: 1030 Iterations Per Minute (more is better; SE +/- 7.69, N = 15)
GraphicsMagick 1.3.38 - Operation: Sharpen - CentOS Stream 9: 641 Iterations Per Minute (more is better; SE +/- 1.76, N = 3)
GraphicsMagick 1.3.38 - Operation: Enhanced - CentOS Stream 9: 1153 Iterations Per Minute (more is better; SE +/- 1.76, N = 3)
GraphicsMagick 1.3.38 - Operation: Resizing - CentOS Stream 9: 2748 Iterations Per Minute (more is better; SE +/- 27.10, N = 3)
GraphicsMagick 1.3.38 - Operation: Noise-Gaussian - CentOS Stream 9: 738 Iterations Per Minute (more is better; SE +/- 0.88, N = 3)
GraphicsMagick 1.3.38 - Operation: HWB Color Space - CentOS Stream 9: 1138 Iterations Per Minute (more is better; SE +/- 28.49, N = 12)
1. (CC) gcc options: -fopenmp -O2 -ljpeg -lXext -lX11 -lbz2 -lz -lm -lpthread
SVT-AV1
SVT-AV1 1.2 - Encoder Mode: Preset 4 - Input: Bosphorus 4K - CentOS Stream 9: 1.327 Frames Per Second (more is better; SE +/- 0.001, N = 3)
SVT-AV1 1.2 - Encoder Mode: Preset 8 - Input: Bosphorus 4K - CentOS Stream 9: 38.68 Frames Per Second (more is better; SE +/- 0.30, N = 3)
SVT-AV1 1.2 - Encoder Mode: Preset 10 - Input: Bosphorus 4K - CentOS Stream 9: 65.62 Frames Per Second (more is better; SE +/- 0.26, N = 3)
SVT-AV1 1.2 - Encoder Mode: Preset 12 - Input: Bosphorus 4K - CentOS Stream 9: 92.83 Frames Per Second (more is better; SE +/- 0.83, N = 3)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
SVT-HEVC This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.
SVT-HEVC 1.5.0 - Tuning: 7 - Input: Bosphorus 4K - CentOS Stream 9: 86.67 Frames Per Second (more is better; SE +/- 1.08, N = 4)
SVT-HEVC 1.5.0 - Tuning: 10 - Input: Bosphorus 4K - CentOS Stream 9: 113.23 Frames Per Second (more is better; SE +/- 1.63, N = 3)
1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt
SVT-VP9 This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.
SVT-VP9 0.3 - Tuning: VMAF Optimized - Input: Bosphorus 4K - CentOS Stream 9: 112.93 Frames Per Second (more is better; SE +/- 0.07, N = 3)
SVT-VP9 0.3 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 4K - CentOS Stream 9: 115.50 Frames Per Second (more is better; SE +/- 1.37, N = 4)
SVT-VP9 0.3 - Tuning: Visual Quality Optimized - Input: Bosphorus 4K - CentOS Stream 9: 99.27 Frames Per Second (more is better; SE +/- 1.23, N = 3)
1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
OSPRay Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.
OSPRay 2.10 - Benchmark: particle_volume/ao/real_time - CentOS Stream 9: 24.35 Items Per Second (more is better; SE +/- 0.07, N = 3)
OSPRay 2.10 - Benchmark: particle_volume/pathtracer/real_time - CentOS Stream 9: 100.67 Items Per Second (more is better; SE +/- 0.72, N = 3)
OSPRay 2.10 - Benchmark: gravity_spheres_volume/dim_512/ao/real_time - CentOS Stream 9: 22.42 Items Per Second (more is better; SE +/- 0.06, N = 3)
OSPRay 2.10 - Benchmark: gravity_spheres_volume/dim_512/scivis/real_time - CentOS Stream 9: 22.02 Items Per Second (more is better; SE +/- 0.15, N = 3)
OSPRay 2.10 - Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time - CentOS Stream 9: 25.59 Items Per Second (more is better; SE +/- 0.04, N = 3)
Stockfish This is a test of Stockfish, an advanced open-source C++11 chess engine benchmark that can scale up to 512 CPU threads. Learn more via the OpenBenchmarking.org test page.
Stockfish 15 - Total Time - CentOS Stream 9: 179473129 Nodes Per Second (more is better; SE +/- 2364357.21, N = 15). 1. (CXX) g++ options: -lgcov -m64 -lpthread -fno-exceptions -std=c++17 -fno-peel-loops -fno-tracer -pedantic -O3 -msse -msse3 -mpopcnt -mavx2 -mavx512f -mavx512bw -mavx512vnni -mavx512dq -mavx512vl -msse4.1 -mssse3 -msse2 -mbmi2 -flto -flto=jobserver
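With the 160 hardware threads on this dual-socket system, the aggregate node rate above works out to roughly 1.1 million nodes per second per thread:

```python
total_nodes_per_s = 179_473_129  # Stockfish result above
threads = 160                    # 2 x 80 threads on this system
per_thread = total_nodes_per_s / threads
print(f"{per_thread:,.0f} nodes/s per thread")
```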
Build: allmodconfig
CentOS Stream 9: The test quit with a non-zero exit status.
oneDNN This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.
oneDNN 2.6 - Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU - CentOS Stream 9: 5.40603 ms (fewer is better; SE +/- 0.32475, N = 15; MIN: 3.28)
oneDNN 2.6 - Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU - CentOS Stream 9: 2.38563 ms (fewer is better; SE +/- 0.07640, N = 15; MIN: 1.7)
oneDNN 2.6 - Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU - CentOS Stream 9: 2.15938 ms (fewer is better; SE +/- 0.01538, N = 3; MIN: 2.04)
oneDNN 2.6 - Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU - CentOS Stream 9: 3.81155 ms (fewer is better; SE +/- 0.01173, N = 3; MIN: 3.53)
oneDNN 2.6 - Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU - CentOS Stream 9: 3.68404 ms (fewer is better; SE +/- 0.03477, N = 14; MIN: 3.54)
oneDNN 2.6 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU - CentOS Stream 9: 697.28 ms (fewer is better; SE +/- 6.94, N = 12; MIN: 605.85)
oneDNN 2.6 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU - CentOS Stream 9: 447.62 ms (fewer is better; SE +/- 7.22, N = 15; MIN: 376.51)
oneDNN 2.6 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU - CentOS Stream 9: 37.96 ms (fewer is better; SE +/- 5.68, N = 15; MIN: 3.48)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl -lpthread
OSPRay Studio Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.
OSPRay Studio 0.11 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - CentOS Stream 9: 20152 ms (fewer is better; SE +/- 58.89, N = 3)
OSPRay Studio 0.11 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - CentOS Stream 9: 40580 ms (fewer is better; SE +/- 74.23, N = 3)
OSPRay Studio 0.11 - Camera: 2 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - CentOS Stream 9: 20261 ms (fewer is better; SE +/- 49.21, N = 3)
OSPRay Studio 0.11 - Camera: 2 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - CentOS Stream 9: 40852 ms (fewer is better; SE +/- 38.89, N = 3)
OSPRay Studio 0.11 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - CentOS Stream 9: 23967 ms (fewer is better; SE +/- 79.25, N = 3)
OSPRay Studio 0.11 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - CentOS Stream 9: 48319 ms (fewer is better; SE +/- 81.93, N = 3)
1. (CXX) g++ options: -O3 -ldl
WebP2 Image Encode This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.
WebP2 Image Encode 20220422 - Encode Settings: Default - CentOS Stream 9: 2.667 seconds (fewer is better; SE +/- 0.033, N = 15). 1. (CXX) g++ options: -fno-rtti -O3
OpenSSL OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. The system/openssl test profile relies on benchmarking the system/OS-supplied openssl binary rather than the pts/openssl test profile that uses a locally-built OpenSSL for benchmarking. Learn more via the OpenBenchmarking.org test page.
OpenSSL - CentOS Stream 9: 16866.1 sign/s (more is better; SE +/- 205.54, N = 4)
OpenSSL - CentOS Stream 9: 1112427.2 verify/s (more is better; SE +/- 4686.71, N = 4)
1. OpenSSL 3.0.1 14 Dec 2021 (Library: OpenSSL 3.0.1 14 Dec 2021)
ClickHouse ClickHouse is an open-source, high performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ with the 100 million rows web analytics dataset. The reported value is the query processing time using the geometric mean of all queries performed. Learn more via the OpenBenchmarking.org test page.
ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, First Run / Cold Cache - CentOS Stream 9: 231.48 Queries Per Minute, Geo Mean (more is better; SE +/- 2.21, N = 15; MIN: 41.47 / MAX: 5454.55)
ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, Second Run - CentOS Stream 9: 244.38 Queries Per Minute, Geo Mean (more is better; SE +/- 1.48, N = 15; MIN: 44.09 / MAX: 5454.55)
ClickHouse 22.5.4.19 - 100M Rows Web Analytics Dataset, Third Run - CentOS Stream 9: 243.95 Queries Per Minute, Geo Mean (more is better; SE +/- 1.95, N = 15; MIN: 42.11 / MAX: 6000)
1. ClickHouse server version 22.5.4.19 (official build).
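The ClickHouse summary is a geometric mean across all queries in the suite, which keeps one very fast or very slow query from dominating the figure the way an arithmetic mean would (note the wide MIN/MAX spread above). A sketch with hypothetical per-query rates:

```python
import statistics

# Hypothetical per-query rates (queries per minute) with a wide spread
qpm = [41.47, 180.0, 310.0, 950.0, 5454.55]
geo = statistics.geometric_mean(qpm)
arith = statistics.fmean(qpm)
print(f"geometric mean: {geo:.2f}, arithmetic mean: {arith:.2f}")
```

The geometric mean lands near the "typical" query, while the arithmetic mean is pulled far upward by the single outlier.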
Apache Spark This is a benchmark of Apache Spark with its PySpark interface. Apache Spark is an open-source unified analytics engine for large-scale data processing and dealing with big data. This test profile benchmarks Apache Spark in a single-system configuration using spark-submit. The test makes use of DIYBigData's pyspark-benchmark (https://github.com/DIYBigData/pyspark-benchmark/) for generating test data and running various Apache Spark operations. Learn more via the OpenBenchmarking.org test page.
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 500 - SHA-512 Benchmark Time - CentOS Stream 9: 88.83 seconds (fewer is better; SE +/- 0.53, N = 3)
Apache Spark 3.3 - Row Count: 40000000 - Partitions: 500 - Calculate Pi Benchmark Using Dataframe - CentOS Stream 9: 2.79 seconds (fewer is better; SE +/- 0.09, N = 3)
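The "Calculate Pi" workload is commonly implemented as the classic Monte Carlo estimate: sample random points in the unit square and count how many fall inside the quarter circle. Assuming that implementation, here is a minimal single-machine sketch of the same idea in plain Python rather than a distributed Spark dataframe:

```python
import random

def estimate_pi(samples, seed=42):
    """Monte Carlo Pi: fraction of random points inside the quarter circle, times 4."""
    rng = random.Random(seed)
    inside = sum(
        1 for _ in range(samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * inside / samples

print(estimate_pi(200_000))  # close to 3.14
```

Spark parallelizes exactly this embarrassingly parallel loop across partitions, which is why it makes a clean compute-throughput benchmark.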
Dragonflydb Dragonfly is an open-source database server, a "modern Redis replacement" that aims to be the fastest memory store while remaining compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, memtier_benchmark is used, a NoSQL Redis/Memcache traffic generation and benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.
Clients: 50 - Set To Get Ratio: 1:1
CentOS Stream 9: The test run did not produce a result.
Clients: 50 - Set To Get Ratio: 1:5
CentOS Stream 9: The test run did not produce a result.
Clients: 50 - Set To Get Ratio: 5:1
CentOS Stream 9: The test run did not produce a result.
Clients: 200 - Set To Get Ratio: 1:1
CentOS Stream 9: The test run did not produce a result.
Clients: 200 - Set To Get Ratio: 1:5
CentOS Stream 9: The test run did not produce a result.
Clients: 200 - Set To Get Ratio: 5:1
CentOS Stream 9: The test run did not produce a result.
Redis 7.0.4 - Test: SET - Parallel Connections: 50 - CentOS Stream 9: 2189377.08 Requests Per Second (more is better; SE +/- 29696.58, N = 3)
Redis 7.0.4 - Test: GET - Parallel Connections: 500 - CentOS Stream 9: 2018201.09 Requests Per Second (more is better; SE +/- 89203.76, N = 15)
Redis 7.0.4 - Test: SET - Parallel Connections: 500 - CentOS Stream 9: 1931278.62 Requests Per Second (more is better; SE +/- 47157.16, N = 12)
Redis 7.0.4 - Test: GET - Parallel Connections: 1000 - CentOS Stream 9: 2406986.65 Requests Per Second (more is better; SE +/- 26860.41, N = 5)
Redis 7.0.4 - Test: SET - Parallel Connections: 1000 - CentOS Stream 9: 1847194.12 Requests Per Second (more is better; SE +/- 55692.46, N = 12)
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
ASTC Encoder ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.
ASTC Encoder 4.0 - Preset: Fast - CentOS Stream 9: 799.11 MT/s (more is better; SE +/- 3.69, N = 3)
ASTC Encoder 4.0 - Preset: Exhaustive - CentOS Stream 9: 4.5054 MT/s (more is better; SE +/- 0.0017, N = 3)
1. (CXX) g++ options: -O3 -flto -pthread
GROMACS The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.
GROMACS 2022.1 - Implementation: MPI CPU - Input: water_GMX50_bare - CentOS Stream 9: 8.996 Ns Per Day (more is better; SE +/- 0.002, N = 3). 1. (CXX) g++ options: -O3
PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 250 - Mode: Read Only - Average Latency - CentOS Stream 9: 0.150 ms (fewer is better; SE +/- 0.001, N = 3)
PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 500 - Mode: Read Only - CentOS Stream 9: 1855656 TPS (more is better; SE +/- 30425.21, N = 12)
PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 500 - Mode: Read Only - Average Latency - CentOS Stream 9: 0.270 ms (fewer is better; SE +/- 0.005, N = 12)
PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 250 - Mode: Read Write - CentOS Stream 9: 20745 TPS (more is better; SE +/- 21.44, N = 3)
PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 250 - Mode: Read Write - Average Latency - CentOS Stream 9: 12.05 ms (fewer is better; SE +/- 0.01, N = 3)
PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 500 - Mode: Read Write - CentOS Stream 9: 18710 TPS (more is better; SE +/- 32.56, N = 3)
PostgreSQL pgbench 14.0 - Scaling Factor: 100 - Clients: 500 - Mode: Read Write - Average Latency - CentOS Stream 9: 26.72 ms (fewer is better; SE +/- 0.05, N = 3)
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
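For a closed-loop benchmark like pgbench, where each client issues its next transaction only after the previous one completes, average latency and TPS are tied together by the client count: latency ≈ clients / TPS. The read-write numbers above are consistent with that relation:

```python
def avg_latency_ms(clients, tps):
    """Closed-loop relation: average latency in ms = clients / TPS * 1000."""
    return clients / tps * 1000.0

print(round(avg_latency_ms(250, 20745), 2))  # 12.05 ms, matching the report
print(round(avg_latency_ms(500, 18710), 2))  # 26.72 ms, matching the report
```

Doubling the clients roughly doubled latency while TPS fell slightly, i.e. the server was already near saturation at 250 read-write clients.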
Protocol: Redis - Clients: 100 - Set To Get Ratio: 5:1
CentOS Stream 9: The test run did not produce a result.
memtier_benchmark 1.4 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:10 - CentOS Stream 9: 1398073.70 Ops/sec (more is better; SE +/- 67672.39, N = 12). 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
Protocol: Redis - Clients: 500 - Set To Get Ratio: 5:1
CentOS Stream 9: The test run did not produce a result. E: error: failed to prepare thread 112 for test.
Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:10
CentOS Stream 9: The test run did not produce a result.
Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:10
CentOS Stream 9: The test run did not produce a result. E: error: failed to prepare thread 112 for test.
Stress-NG 0.14 - Test: NUMA - CentOS Stream 9: 10.37 Bogo Ops/s (more is better; SE +/- 0.02, N = 3)
Stress-NG 0.14 - Test: Futex - CentOS Stream 9: 1088788.92 Bogo Ops/s (more is better; SE +/- 73263.26, N = 15)
Stress-NG 0.14 - Test: MEMFD - CentOS Stream 9: 4098.84 Bogo Ops/s (more is better; SE +/- 35.16, N = 3)
Stress-NG 0.14 - Test: Atomic - CentOS Stream 9: 187775.77 Bogo Ops/s (more is better; SE +/- 3961.98, N = 15)
Stress-NG 0.14 - Test: Crypto - CentOS Stream 9: 83808.91 Bogo Ops/s (more is better; SE +/- 289.31, N = 3)
Stress-NG 0.14 - Test: Malloc - CentOS Stream 9: 306750258.84 Bogo Ops/s (more is better; SE +/- 452266.97, N = 3)
Stress-NG 0.14 - Test: Forking - CentOS Stream 9: 63484.45 Bogo Ops/s (more is better; SE +/- 123.25, N = 3)
1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread
Test: IO_uring
CentOS Stream 9: The test run did not produce a result.
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14 Test: SENDFILE CentOS Stream 9 300K 600K 900K 1200K 1500K SE +/- 2669.03, N = 3 1271967.05 1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14 Test: CPU Cache CentOS Stream 9 4 8 12 16 20 SE +/- 0.13, N = 10 16.26 1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14 Test: CPU Stress CentOS Stream 9 30K 60K 90K 120K 150K SE +/- 758.69, N = 3 135517.46 1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14 Test: Semaphores CentOS Stream 9 1.5M 3M 4.5M 6M 7.5M SE +/- 27158.37, N = 3 7186364.51 1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14 Test: Matrix Math CentOS Stream 9 60K 120K 180K 240K 300K SE +/- 512.51, N = 3 286293.40 1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14 Test: Vector Math CentOS Stream 9 70K 140K 210K 280K 350K SE +/- 944.66, N = 3 322923.09 1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14 Test: x86_64 RdRand CentOS Stream 9 140K 280K 420K 560K 700K SE +/- 2562.02, N = 3 667284.36 1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14 Test: Memory Copying CentOS Stream 9 3K 6K 9K 12K 15K SE +/- 5.23, N = 3 12812.45 1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14 Test: Socket Activity CentOS Stream 9 500 1000 1500 2000 2500 SE +/- 900.65, N = 15 2460.37 1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14 Test: Context Switching CentOS Stream 9 1.3M 2.6M 3.9M 5.2M 6.5M SE +/- 78706.86, N = 3 6233126.45 1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14 Test: Glibc C String Functions CentOS Stream 9 2M 4M 6M 8M 10M SE +/- 103735.47, N = 4 9473078.17 1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14 Test: Glibc Qsort Data Sorting CentOS Stream 9 200 400 600 800 1000 SE +/- 2.69, N = 3 934.26 1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14 Test: System V Message Passing CentOS Stream 9 1.5M 3M 4.5M 6M 7.5M SE +/- 85352.98, N = 4 7093379.73 1. (CC) gcc options: -O2 -std=gnu99 -lm -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread
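Each result above is reported as a mean with a standard error (SE) over N runs. A minimal sketch of how a mean and its standard error can be computed from repeated runs (the sample values below are hypothetical, not taken from this report):

```python
import math
import statistics

def mean_and_se(samples):
    """Return the sample mean and its standard error (sample stdev / sqrt(n))."""
    n = len(samples)
    mean = statistics.fmean(samples)
    se = statistics.stdev(samples) / math.sqrt(n)
    return mean, se

# Three hypothetical bogo-ops/s runs (illustrative values only).
runs = [306.2e6, 306.9e6, 307.1e6]
mean, se = mean_and_se(runs)
print(f"{mean:.2f} (SE +/- {se:.2f}, N = {len(runs)})")
```

This mirrors the "value (SE +/- ..., N = ...)" shape each line above uses.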
Mobile Neural Network MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking rather than any GPU-accelerated test. MNN can make use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.
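MNN's AVX-512 path only helps when the CPU advertises the relevant feature flags. A small illustrative helper for checking a Linux /proc/cpuinfo-style flags line (the example string is a hypothetical excerpt, not read from this system):

```python
def has_avx512(flags_line: str) -> bool:
    """Return True if the AVX-512 foundation flag (avx512f) appears in a
    /proc/cpuinfo-style 'flags' line."""
    return "avx512f" in set(flags_line.split())

# Hypothetical excerpt of a cpuinfo flags line (illustrative only).
example = "fpu sse sse2 avx avx2 avx512f avx512dq avx512bw avx512vl"
print(has_avx512(example))             # avx512f present
print(has_avx512("fpu sse sse2 avx"))  # no AVX-512 support
```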
Mobile Neural Network 2.1 (OpenBenchmarking.org, ms, Fewer Is Better, CentOS Stream 9):
nasnet: 12.10 (SE +/- 0.23, N = 15, MIN: 10.54 / MAX: 23.03)
mobilenetV3: 1.753 (SE +/- 0.020, N = 15, MIN: 1.61 / MAX: 4.19)
squeezenetv1.1: 2.356 (SE +/- 0.050, N = 15, MIN: 2.03 / MAX: 5.76)
resnet-v2-50: 8.663 (SE +/- 0.088, N = 15, MIN: 7.71 / MAX: 20.48)
SqueezeNetV1.0: 3.956 (SE +/- 0.075, N = 15, MIN: 3.51 / MAX: 9.33)
MobileNetV2_224: 2.663 (SE +/- 0.014, N = 15, MIN: 2.48 / MAX: 5.57)
mobilenet-v1-1.0: 2.090 (SE +/- 0.047, N = 15, MIN: 1.76 / MAX: 3.93)
inception-v3: 20.09 (SE +/- 0.19, N = 15, MIN: 17.31 / MAX: 37.29)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
TNN TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.
TNN 0.3 (OpenBenchmarking.org, ms, Fewer Is Better, Target: CPU, CentOS Stream 9):
DenseNet: 3955.05 (SE +/- 27.70, N = 3, MIN: 3833.99 / MAX: 5510.15)
MobileNet v2: 378.88 (SE +/- 4.68, N = 4, MIN: 371.88 / MAX: 634.44)
SqueezeNet v2: 75.88 (SE +/- 0.78, N = 3, MIN: 74.63 / MAX: 111.7)
SqueezeNet v1.1: 366.49 (SE +/- 0.03, N = 3, MIN: 366.26 / MAX: 366.87)
1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl
Blender Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing is currently supported via NVIDIA OptiX and NVIDIA CUDA, as well as HIP for AMD Radeon GPUs. Learn more via the OpenBenchmarking.org test page.
Blender 3.2 (OpenBenchmarking.org, Seconds, Fewer Is Better):
Blend File: BMW27 - Compute: CPU-Only - CentOS Stream 9: 25.04 (SE +/- 0.03, N = 3)
OpenVINO This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.
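The throughput (FPS) and average latency (ms) reported for each model below are linked through the number of inference requests kept in flight: with ideal pipelining, FPS is roughly nireq * 1000 / latency_ms. A minimal sketch of that relationship (the nireq value used in the example is a hypothetical choice, not taken from this run's configuration):

```python
def approx_fps(latency_ms: float, nireq: int = 1) -> float:
    """Rough throughput estimate: with `nireq` inference requests in flight
    and a mean per-request latency in milliseconds, an ideally pipelined
    device completes about nireq * (1000 / latency_ms) inferences/second."""
    return nireq * 1000.0 / latency_ms

# Illustrative only: a 4.52 ms mean latency with 20 hypothetical requests
# in flight corresponds to roughly 4425 FPS under ideal pipelining.
print(round(approx_fps(4.52, nireq=20)))
```

This is why the low-latency INT8 models below also post the highest FPS figures.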
OpenVINO 2022.2.dev (OpenBenchmarking.org, Device: CPU, CentOS Stream 9; FPS: More Is Better, latency in ms: Fewer Is Better):
Face Detection FP16: 24.29 FPS (SE +/- 0.02, N = 3); latency 819.63 ms (SE +/- 0.67, N = 3, MIN: 519.3 / MAX: 967.18)
Person Detection FP16: 13.92 FPS (SE +/- 0.00, N = 3); latency 1424.57 ms (SE +/- 0.41, N = 3, MIN: 1046.08 / MAX: 1657.29)
Person Detection FP32: 13.67 FPS (SE +/- 0.02, N = 3); latency 1451.62 ms (SE +/- 1.03, N = 3, MIN: 1039.96 / MAX: 1708.95)
Vehicle Detection FP16: 1071.70 FPS (SE +/- 14.44, N = 12); latency 18.67 ms (SE +/- 0.29, N = 12, MIN: 11.54 / MAX: 79.43)
Face Detection FP16-INT8: 83.26 FPS (SE +/- 0.07, N = 3); latency 239.86 ms (SE +/- 0.22, N = 3, MIN: 178.86 / MAX: 348.97)
Vehicle Detection FP16-INT8: 4414.94 FPS (SE +/- 1.33, N = 3); latency 4.52 ms (SE +/- 0.00, N = 3, MIN: 4.11 / MAX: 44.74)
Weld Porosity Detection FP16: 2478.96 FPS (SE +/- 1.20, N = 3); latency 32.00 ms (SE +/- 0.01, N = 3, MIN: 21.78 / MAX: 67.35)
Machine Translation EN To DE FP16: 233.33 FPS (SE +/- 0.51, N = 3); latency 85.47 ms (SE +/- 0.18, N = 3, MIN: 76.11 / MAX: 195.12)
Weld Porosity Detection FP16-INT8: 9657.99 FPS (SE +/- 8.29, N = 3); latency 8.27 ms (SE +/- 0.01, N = 3, MIN: 7.23 / MAX: 27.1)
Person Vehicle Bike Detection FP16: 1478.64 FPS (SE +/- 39.85, N = 15); latency 13.60 ms (SE +/- 0.30, N = 15, MIN: 8.57 / MAX: 68.28)
Age Gender Recognition Retail 0013 FP16: 47224.77 FPS (SE +/- 99.47, N = 3); latency 1.36 ms (SE +/- 0.00, N = 3, MIN: 0.99 / MAX: 13.44)
Age Gender Recognition Retail 0013 FP16-INT8: 42731.93 FPS (SE +/- 1567.95, N = 15); latency 1.50 ms (SE +/- 0.05, N = 15, MIN: 0.34 / MAX: 29.48)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
nginx This is a benchmark of the lightweight Nginx HTTP(S) web server. This Nginx web server benchmark test profile makes use of the Golang "Bombardier" program to issue HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.
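The measurement pattern (fixed duration, N concurrent clients, requests counted and divided by elapsed time) can be sketched with the standard library alone. This is a toy stand-in for Bombardier against a throwaway local server, not the actual test profile; the duration and client count are arbitrary choices:

```python
import http.server
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

class QuietHandler(http.server.SimpleHTTPRequestHandler):
    def log_message(self, *args):  # silence per-request logging
        pass

# Throwaway local HTTP server on an ephemeral port.
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), QuietHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"

DURATION = 0.5  # seconds of load (arbitrary for the sketch)
CLIENTS = 8     # concurrent clients (arbitrary for the sketch)

def client_loop(deadline: float) -> int:
    """Issue requests back-to-back until the deadline; return the count."""
    count = 0
    while time.monotonic() < deadline:
        with urllib.request.urlopen(url) as resp:
            resp.read()
        count += 1
    return count

deadline = time.monotonic() + DURATION
with ThreadPoolExecutor(max_workers=CLIENTS) as pool:
    totals = list(pool.map(client_loop, [deadline] * CLIENTS))
server.shutdown()
rps = sum(totals) / DURATION
print(f"{rps:.0f} requests/sec with {CLIENTS} clients")
```

The reported "Requests Per Second" figure is exactly this total-count-over-duration ratio, at much larger client counts.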
nginx 1.21.1 (OpenBenchmarking.org, Requests Per Second, More Is Better):
Concurrent Requests: 1000 - CentOS Stream 9: 200945.49 (SE +/- 1519.57, N = 3)
1. (CC) gcc options: -lcrypt -lz -O3 -march=native
ONNX Runtime ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inference and training accelerator. This test profile runs ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.
ONNX Runtime 1.11 (OpenBenchmarking.org, Inferences Per Minute, More Is Better, Device: CPU, CentOS Stream 9):
GPT-2 - Executor: Parallel: 5269 (SE +/- 32.87, N = 3)
GPT-2 - Executor: Standard: 11045 (SE +/- 388.59, N = 12)
yolov4 - Executor: Parallel: 630 (SE +/- 1.04, N = 3)
yolov4 - Executor: Standard: 694 (SE +/- 1.17, N = 3)
bertsquad-12 - Executor: Parallel: 799 (SE +/- 2.02, N = 3)
bertsquad-12 - Executor: Standard: 1093 (SE +/- 0.50, N = 3)
fcn-resnet101-11 - Executor: Parallel: 236 (SE +/- 0.17, N = 3)
fcn-resnet101-11 - Executor: Standard: 443 (SE +/- 1.17, N = 3)
ArcFace ResNet-100 - Executor: Parallel: 1693 (SE +/- 3.09, N = 3)
ArcFace ResNet-100 - Executor: Standard: 1881 (SE +/- 16.82, N = 12)
super-resolution-10 - Executor: Parallel: 3259 (SE +/- 4.91, N = 3)
super-resolution-10 - Executor: Standard: 12260 (SE +/- 43.63, N = 3)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
Apache HTTP Server This is a test of the Apache HTTPD web server. This Apache HTTPD web server benchmark test profile makes use of the Golang "Bombardier" program to issue HTTP requests over a fixed period of time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.
Apache HTTP Server 2.4.48 (OpenBenchmarking.org, Requests Per Second, More Is Better):
Concurrent Requests: 1000 - CentOS Stream 9: 131349.60 (SE +/- 1558.40, N = 15)
1. (CC) gcc options: -shared -fPIC -O2
PyHPC Benchmarks PyHPC-Benchmarks is a suite of Python high-performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.
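The "Seconds, Fewer Is Better" numbers below are wall-clock times for running a compute kernel. The timing pattern can be sketched with the standard library alone; the kernel here is a trivial stand-in, not the actual equation-of-state code:

```python
import timeit

def toy_kernel(n: int = 100_000) -> int:
    """Stand-in compute kernel: a simple sum-of-squares reduction
    (illustrative only, not the real PyHPC benchmark kernel)."""
    return sum(i * i for i in range(n))

# Time single invocations several times and keep the best, the usual
# way a micro-benchmark harness reports a per-run wall-clock figure.
best = min(timeit.repeat(toy_kernel, repeat=5, number=1))
print(f"toy_kernel: {best:.6f} s (fewer is better)")
```

Each backend below (JAX, Numba, Numpy, Aesara, PyTorch, TensorFlow) is timed running the same two kernels in this fashion.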
PyHPC Benchmarks 3.0 (OpenBenchmarking.org, Seconds, Fewer Is Better, Device: CPU, Project Size: 4194304, CentOS Stream 9):
JAX - Equation of State: 0.031 (SE +/- 0.000, N = 3)
JAX - Isoneutral Mixing: 0.864 (SE +/- 0.004, N = 3)
Numba - Equation of State: 0.264 (SE +/- 0.002, N = 3)
Numba - Isoneutral Mixing: 1.375 (SE +/- 0.001, N = 3)
Numpy - Equation of State: 1.936 (SE +/- 0.001, N = 3)
Numpy - Isoneutral Mixing: 2.878 (SE +/- 0.033, N = 3)
Aesara - Equation of State: 0.303 (SE +/- 0.001, N = 3)
Aesara - Isoneutral Mixing: 2.078 (SE +/- 0.024, N = 3)
PyTorch - Equation of State: 0.109 (SE +/- 0.001, N = 3)
PyTorch - Isoneutral Mixing: 2.060 (SE +/- 0.004, N = 3)
TensorFlow - Equation of State: 0.222 (SE +/- 0.003, N = 4)
TensorFlow - Isoneutral Mixing: the test run did not produce a result.
InfluxDB This is a benchmark of InfluxDB, an open-source time-series database optimized for fast, high-availability storage for IoT and other use cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.
InfluxDB 1.8.2 (OpenBenchmarking.org, val/sec, More Is Better):
Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 - CentOS Stream 9: 666008.7 (SE +/- 2481.53, N = 3)
Testing initiated at 31 August 2022 15:18.