2 x Intel Xeon Max 9480 testing with a Supermicro X13DEM v1.10 (1.3 BIOS) and ASPEED on Ubuntu 23.04 via the Phoronix Test Suite.
v6.4 Processor: 2 x Intel Xeon Max 9480 @ 3.50GHz (112 Cores / 224 Threads), Motherboard: Supermicro X13DEM v1.10 (1.3 BIOS), Chipset: Intel Device 1bce, Memory: 512GB, Disk: 2 x 7682GB INTEL SSDPF2KX076TZ, Graphics: ASPEED, Monitor: VE228, Network: 2 x Broadcom BCM57508 NetXtreme-E 10Gb/25Gb/40Gb/50Gb/100Gb/200Gb
OS: Ubuntu 23.04, Kernel: 6.4.0-060400-generic (x86_64), Desktop: GNOME Shell 44.0, Display Server: X Server 1.21.1.7, Compiler: GCC 12.2.0, File-System: ext4, Screen Resolution: 1920x1080
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-Pa930Z/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-Pa930Z/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_cpufreq performance - CPU Microcode: 0x2c0001d1
Java Notes: OpenJDK Runtime Environment (build 17.0.6+10-Ubuntu-1ubuntu2)
Python Notes: Python 3.11.2
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected
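All of the results in this article were generated through the Phoronix Test Suite. A minimal sketch of that workflow is below; the exact test-profile names (e.g. pts/npb, pts/stress-ng) are assumptions based on the suites covered here and should be confirmed with list-available-tests on your system.

  # Hedged sketch: install the Phoronix Test Suite (if packaged for your distribution)
  # and run one of the test profiles used in this article.
  sudo apt-get install phoronix-test-suite
  phoronix-test-suite list-available-tests | grep -i -e npb -e stress-ng   # confirm profile names first
  phoronix-test-suite benchmark pts/npb   # prompts for the test/class options, runs the benchmark, records results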
Crypto++
Crypto++ 8.8 (OpenBenchmarking.org) - MiB/second, More Is Better - v6.4:
Test: Unkeyed Algorithms - 441.20 (SE +/- 0.02, N = 3). 1. (CXX) g++ options: -g2 -O3 -fPIC -fno-devirtualize -pthread -pipe
NAS Parallel Benchmarks
NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting among the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.
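The MPI flavor of NPB can also be built and run by hand. The sketch below assumes the stock NPB 3.4 MPI makefiles and binary naming (bin/ep.C.x); the rank count is illustrative, and some kernels (e.g. BT/SP) constrain the allowed number of ranks.

  # Hedged sketch of a direct NPB-MPI run, outside the Phoronix Test Suite.
  cd NPB3.4-MPI
  make ep CLASS=C              # build the EP kernel for problem class C
  mpirun -np 112 bin/ep.C.x    # the "Mop/s total" figure it prints is what is charted below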
NAS Parallel Benchmarks 3.4 (OpenBenchmarking.org) - Total Mop/s, More Is Better - v6.4:
Test / Class: BT.C - 322119.99 (SE +/- 2411.99, N = 11)
Test / Class: CG.C - 51894.85 (SE +/- 525.17, N = 6)
Test / Class: EP.C - 10844.90 (SE +/- 222.04, N = 12)
Test / Class: EP.D - 10703.40 (SE +/- 632.03, N = 12)
Test / Class: FT.C - 103248.13 (SE +/- 1023.16, N = 15)
Test / Class: IS.D - 3073.61 (SE +/- 127.93, N = 14)
Test / Class: LU.C - 269542.21 (SE +/- 2141.41, N = 3)
Test / Class: MG.C - 239408.11 (SE +/- 2713.74, N = 15)
Test / Class: SP.B - 176009.15 (SE +/- 1564.28, N = 13)
Test / Class: SP.C - 193221.95 (SE +/- 1944.01, N = 5)
All NPB results: 1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.4
Rodinia
Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.
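For the OpenMP LavaMD result below, a direct invocation looks roughly like the following; the -cores/-boxes1d argument names and the problem size are recalled from the Rodinia 3.1 sources and should be treated as assumptions.

  # Hedged sketch of running Rodinia's OpenMP LavaMD kernel directly.
  cd rodinia_3.1/openmp/lavaMD
  make
  ./lavaMD -cores 224 -boxes1d 10   # prints the elapsed seconds that become the reported result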
Rodinia 3.1 (OpenBenchmarking.org) - Seconds, Fewer Is Better - v6.4:
Test: OpenMP LavaMD - 35.24 (SE +/- 0.23, N = 3). 1. (CXX) g++ options: -O2 -lOpenCL
OpenFOAM
OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.
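The two OpenFOAM metrics below separate mesh generation from the solver run. A rough, hedged outline of such a decomposed parallel run is shown here; the case directory, solver choice (simpleFoam) and rank count are illustrative assumptions, not the exact commands the test profile issues.

  # Hedged outline of an OpenFOAM parallel run; "Mesh Time" roughly maps to the
  # meshing stage and "Execution Time" to the solver stage.
  cd drivaerFastback-small
  decomposePar                                         # split the case across MPI ranks
  mpirun -np 112 snappyHexMesh -overwrite -parallel    # mesh generation
  mpirun -np 112 simpleFoam -parallel                  # flow solution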
OpenFOAM 10 (OpenBenchmarking.org) - Seconds, Fewer Is Better - v6.4:
Input: drivaerFastback, Small Mesh Size - Mesh Time: 39.36
Input: drivaerFastback, Small Mesh Size - Execution Time: 37.84
Input: drivaerFastback, Medium Mesh Size - Mesh Time: 175.93
Input: drivaerFastback, Medium Mesh Size - Execution Time: 195.89
All OpenFOAM results: 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfiniteVolume -lmeshTools -lparallel -llagrangian -lregionModels -lgenericPatchFields -lOpenFOAM -ldl -lm
OpenRadioss
OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/ and https://github.com/OpenRadioss/ModelExchange/tree/main/Examples. This test currently uses a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.
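OpenRadioss runs as a starter/engine pair. The sketch below is loosely based on the reference binaries published on GitHub; the executable names, input-deck naming (*_0000.rad / *_0001.rad) and thread count are assumptions to verify against the model's own run scripts.

  # Hedged sketch of an OpenRadioss Bumper Beam run with the reference binaries.
  export OMP_NUM_THREADS=28
  ./starter_linux64_gf -i BUMPER_0000.rad -np 1   # starter: reads the input deck, writes restart files
  ./engine_linux64_gf -i BUMPER_0001.rad          # engine: explicit analysis; elapsed seconds is the metric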
OpenRadioss 2023.09.15 (OpenBenchmarking.org) - Seconds, Fewer Is Better - v6.4:
Model: Bumper Beam - 95.17 (SE +/- 0.64, N = 3)
nekRS
nekRS is an open-source Navier-Stokes solver based on the spectral element method. nekRS supports both CPU and GPU/accelerator execution, though this test profile is currently configured for CPU execution. nekRS is part of Nek5000 from the Mathematics and Computer Science (MCS) division at Argonne National Laboratory. This nekRS benchmark is primarily relevant to large core count HPC servers and otherwise may be very time consuming on smaller systems. Learn more via the OpenBenchmarking.org test page.
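nekRS is MPI-driven and takes its case definition via a setup (.par) file. The invocation below is a hedged sketch; the case file name and rank count are assumptions.

  # Hedged sketch of a nekRS CPU run for the Kershaw case.
  mpirun -np 112 nekrs --setup kershaw.par   # the per-rank FLOPS figure ("flops/rank") is what is charted below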
nekRS 23.0 (OpenBenchmarking.org) - flops/rank, More Is Better - v6.4:
Input: Kershaw - 5865983333 (SE +/- 38898803.89, N = 3)
Input: TurboPipe Periodic - 4098953333 (SE +/- 40424400.36, N = 3)
All nekRS results: 1. (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -rdynamic -lmpi_cxx -lmpi
AOM AV1
AOM AV1 3.7 (OpenBenchmarking.org) - Frames Per Second, More Is Better - v6.4:
Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K - 7.07 (SE +/- 0.10, N = 15)
Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K - 41.51 (SE +/- 0.79, N = 15)
Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K - 14.30 (SE +/- 0.38, N = 15)
Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K - 41.67 (SE +/- 1.53, N = 15)
Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K - 47.54 (SE +/- 1.51, N = 15)
Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K - 44.23 (SE +/- 1.50, N = 12)
Encoder Mode: Speed 11 Realtime - Input: Bosphorus 4K - 49.09 (SE +/- 1.22, N = 15)
All AOM AV1 results: 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
SVT-AV1
SVT-AV1 1.7 (OpenBenchmarking.org) - Frames Per Second, More Is Better - v6.4:
Encoder Mode: Preset 4 - Input: Bosphorus 4K - 4.132 (SE +/- 0.031, N = 15)
Encoder Mode: Preset 8 - Input: Bosphorus 4K - 55.03 (SE +/- 3.17, N = 12)
Encoder Mode: Preset 12 - Input: Bosphorus 4K - 133.93 (SE +/- 8.17, N = 15)
Encoder Mode: Preset 13 - Input: Bosphorus 4K - 145.92 (SE +/- 9.48, N = 12)
Encoder Mode: Preset 4 - Input: Bosphorus 1080p - 11.59 (SE +/- 0.04, N = 3)
Encoder Mode: Preset 8 - Input: Bosphorus 1080p - 110.86 (SE +/- 4.48, N = 15)
Encoder Mode: Preset 12 - Input: Bosphorus 1080p - 426.84 (SE +/- 29.20, N = 15)
Encoder Mode: Preset 13 - Input: Bosphorus 1080p - 518.77 (SE +/- 28.12, N = 15)
All SVT-AV1 results: 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
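For the encoder results above, the standalone SVT-AV1 encoder can be exercised directly as sketched below; the input file name is an assumption (the test uses the Bosphorus clips), and only the basic options are shown.

  # Hedged sketch of a standalone SVT-AV1 encode matching the "Preset 12, 1080p" case.
  SvtAv1EncApp --preset 12 -i Bosphorus_1920x1080.y4m -b output.ivf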
VVenC
VVenC, the Fraunhofer Versatile Video Encoder, is a fast/efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.
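vvencapp is the simplified command-line front end of VVenC. The call below is a hedged sketch; the option spellings, the raw-YUV input details and the frame rate are assumptions to double-check against vvencapp --help.

  # Hedged sketch of a VVenC "faster" preset encode of a 1080p raw YUV clip.
  vvencapp --preset faster -i Bosphorus_1920x1080.yuv -s 1920x1080 -r 60 -o out.266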
VVenC 1.9 (OpenBenchmarking.org) - Frames Per Second, More Is Better - v6.4:
Video Input: Bosphorus 4K - Video Preset: Fast - 5.075 (SE +/- 0.058, N = 3)
Video Input: Bosphorus 4K - Video Preset: Faster - 8.450 (SE +/- 0.097, N = 15)
Video Input: Bosphorus 1080p - Video Preset: Fast - 16.34 (SE +/- 0.19, N = 3)
Video Input: Bosphorus 1080p - Video Preset: Faster - 28.27 (SE +/- 0.34, N = 4)
All VVenC results: 1. (CXX) g++ options: -O3 -flto=auto -fno-fat-lto-objects
Apache IoTDB
Apache IoTDB 1.2 (OpenBenchmarking.org) - v6.4. All configurations below use Batch Size Per Write: 100; throughput is point/sec (More Is Better) and Average Latency is Fewer Is Better:
Device Count: 100 - Sensor Count: 200 - Client Number: 100 - Average Latency: 126.00 (SE +/- 2.71, N = 15; MAX: 30405.42)
Device Count: 100 - Sensor Count: 500 - Client Number: 100 - point/sec: 30189985 (SE +/- 332901.55, N = 15); Average Latency: 130.81 (SE +/- 1.85, N = 15; MAX: 27828.94)
Device Count: 100 - Sensor Count: 800 - Client Number: 100 - point/sec: 39122053 (SE +/- 432863.04, N = 15); Average Latency: 155.22 (SE +/- 1.48, N = 15; MAX: 27068.23)
Device Count: 200 - Sensor Count: 200 - Client Number: 100 - point/sec: 24582941 (SE +/- 171233.24, N = 12); Average Latency: 70.77 (SE +/- 0.55, N = 12; MAX: 23941.04)
Device Count: 200 - Sensor Count: 500 - Client Number: 100 - point/sec: 43036898 (SE +/- 358083.32, N = 15); Average Latency: 97.16 (SE +/- 0.96, N = 15; MAX: 24088.47)
Device Count: 200 - Sensor Count: 800 - Client Number: 100 - point/sec: 52514008 (SE +/- 506247.74, N = 13); Average Latency: 125.95 (SE +/- 1.53, N = 13; MAX: 23943.82)
Device Count: 500 - Sensor Count: 200 - Client Number: 100 - point/sec: 41675265 (SE +/- 387575.50, N = 15); Average Latency: 42.68 (SE +/- 0.46, N = 15; MAX: 15117.6)
Device Count: 500 - Sensor Count: 200 - Client Number: 400 - point/sec: 40520901 (SE +/- 245644.87, N = 3); Average Latency: 163.36 (SE +/- 2.09, N = 3; MAX: 27171.99)
Device Count: 500 - Sensor Count: 500 - Client Number: 100 - point/sec: 59376956 (SE +/- 581785.82, N = 15); Average Latency: 77.00 (SE +/- 0.85, N = 15; MAX: 12622.02)
Device Count: 500 - Sensor Count: 500 - Client Number: 400 - point/sec: 58096600 (SE +/- 450916.16, N = 10); Average Latency: 265.99 (SE +/- 3.08, N = 10; MAX: 30115.56)
Device Count: 500 - Sensor Count: 800 - Client Number: 100 - point/sec: 67045377 (SE +/- 46596.93, N = 3); Average Latency: 111.75 (SE +/- 0.10, N = 3; MAX: 10157.01)
Device Count: 500 - Sensor Count: 800 - Client Number: 400 - point/sec: 66098942 (SE +/- 525580.37, N = 3); Average Latency: 370.82 (SE +/- 4.26, N = 3; MAX: 30958.08)
Device Count: 800 - Sensor Count: 200 - Client Number: 100 - point/sec: 51248299 (SE +/- 482700.73, N = 15); Average Latency: 35.31 (SE +/- 0.38, N = 15; MAX: 23938.81)
Device Count: 800 - Sensor Count: 200 - Client Number: 400 - point/sec: 50423542 (SE +/- 396843.42, N = 15); Average Latency: 140.40 (SE +/- 1.11, N = 15; MAX: 27765.12)
Device Count: 800 - Sensor Count: 500 - Client Number: 100 - point/sec: 66287906 (SE +/- 690519.22, N = 3); Average Latency: 69.88 (SE +/- 0.22, N = 3; MAX: 23895.2)
Device Count: 800 - Sensor Count: 500 - Client Number: 400 - point/sec: 65784478 (SE +/- 524197.72, N = 12); Average Latency: 277.40 (SE +/- 2.23, N = 12; MAX: 30379.43)
Device Count: 800 - Sensor Count: 800 - Client Number: 100 - point/sec: 72605513 (SE +/- 265706.48, N = 3); Average Latency: 104.93 (SE +/- 0.13, N = 3; MAX: 23900.89)
Device Count: 800 - Sensor Count: 800 - Client Number: 400 - point/sec: 73678024 (SE +/- 645314.77, N = 12); Average Latency: 399.86 (SE +/- 3.41, N = 12; MAX: 31195.45)
PostgreSQL
PostgreSQL 16 (OpenBenchmarking.org) - v6.4. TPS is More Is Better; Average Latency (ms) is Fewer Is Better:
Scaling Factor: 100 - Clients: 800 - Mode: Read Only - Average Latency: 1.237 ms (SE +/- 0.032, N = 9)
Scaling Factor: 100 - Clients: 1000 - Mode: Read Only - 640235 TPS (SE +/- 15922.26, N = 9); Average Latency: 1.570 ms (SE +/- 0.042, N = 9)
Scaling Factor: 100 - Clients: 800 - Mode: Read Write - 45581 TPS (SE +/- 557.77, N = 3); Average Latency: 17.56 ms (SE +/- 0.22, N = 3)
Scaling Factor: 1000 - Clients: 800 - Mode: Read Only - 701213 TPS (SE +/- 9464.82, N = 3); Average Latency: 1.141 ms (SE +/- 0.016, N = 3)
Scaling Factor: 100 - Clients: 1000 - Mode: Read Write - 42947 TPS (SE +/- 535.00, N = 4); Average Latency: 23.30 ms (SE +/- 0.29, N = 4)
Scaling Factor: 1000 - Clients: 1000 - Mode: Read Only - 710079 TPS (SE +/- 8915.31, N = 12); Average Latency: 1.411 ms (SE +/- 0.018, N = 12)
Scaling Factor: 1000 - Clients: 800 - Mode: Read Write - 20416 TPS (SE +/- 240.76, N = 12); Average Latency: 39.25 ms (SE +/- 0.47, N = 12)
Scaling Factor: 1000 - Clients: 1000 - Mode: Read Write - 19884 TPS (SE +/- 218.15, N = 12); Average Latency: 50.36 ms (SE +/- 0.55, N = 12)
All PostgreSQL results: 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
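These PostgreSQL numbers come from pgbench. A hedged reproduction sketch follows; the thread count and run length are arbitrary choices here, not the exact values the test profile uses. As a sanity check, the reported average latency is consistent with clients / TPS (e.g. 1000 clients / 640235 TPS is roughly 1.56 ms versus the measured 1.570 ms).

  # Hedged pgbench sketch: scaling factor 1000, 800 clients, read-only and read-write runs.
  createdb pgbench_test
  pgbench -i -s 1000 pgbench_test               # initialize at scaling factor 1000
  pgbench -S -c 800 -j 112 -T 60 pgbench_test   # read-only (SELECT-only) workload
  pgbench -c 800 -j 112 -T 60 pgbench_test      # default TPC-B-like read/write workload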
TensorFlow
This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.
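A hedged sketch of running tf_cnn_benchmarks directly with the same device, model and batch size as the result below; the batch count is an arbitrary choice here.

  # Hedged sketch of the tf_cnn_benchmarks CPU run mirroring the charted configuration.
  git clone https://github.com/tensorflow/benchmarks
  cd benchmarks/scripts/tf_cnn_benchmarks
  python3 tf_cnn_benchmarks.py --device=cpu --model=resnet50 --batch_size=16 --num_batches=100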
TensorFlow 2.12 (OpenBenchmarking.org) - images/sec, More Is Better - v6.4:
Device: CPU - Batch Size: 16 - Model: ResNet-50 - 36.85 (SE +/- 0.35, N = 3)
Stress-NG
Stress-NG 0.16.04 (OpenBenchmarking.org) - Bogo Ops/s, More Is Better - v6.4:
Test: MMAP - 7996.78 (SE +/- 47.78, N = 3)
Test: NUMA - 514.09 (SE +/- 7.82, N = 15)
Test: Pipe - 31203885.98 (SE +/- 487787.75, N = 15)
Test: Poll - 5307662.30 (SE +/- 324560.78, N = 12)
Test: Zlib - 8220.83 (SE +/- 22.76, N = 3)
Test: Futex - 72753.47 (SE +/- 3713.25, N = 13)
Test: MEMFD - 1055.35 (SE +/- 26.29, N = 15)
Test: Mutex - 23140484.14 (SE +/- 381162.66, N = 15)
Test: Atomic - 17.75 (SE +/- 1.12, N = 12)
Test: Crypto - 152827.15 (SE +/- 4552.49, N = 12)
Test: Malloc - 217211109.03 (SE +/- 129847.43, N = 3)
Test: Cloning - 6428.75 (SE +/- 1413.09, N = 12)
Test: Forking - 28486.68 (SE +/- 3958.39, N = 12)
Test: Pthread - 36288.24 (SE +/- 43.26, N = 3)
Test: AVL Tree - 790.23 (SE +/- 1.81, N = 3)
Test: IO_uring - 2619140.96 (SE +/- 33896.79, N = 15)
Test: SENDFILE - 1084831.27 (SE +/- 34998.26, N = 15)
Test: CPU Cache - 938764.06 (SE +/- 7749.59, N = 3)
Test: CPU Stress - 200494.80 (SE +/- 341.26, N = 3)
Test: Semaphores - 177471711.51 (SE +/- 1308655.40, N = 3)
Test: Matrix Math - 396715.18 (SE +/- 15424.87, N = 12)
Test: Vector Math - 381412.82 (SE +/- 3943.55, N = 5)
Test: AVX-512 VNNI - 11752668.18 (SE +/- 48287.23, N = 3)
Test: Function Call - 63249.99 (SE +/- 811.29, N = 15)
Test: x86_64 RdRand - 637179.52 (SE +/- 674.33, N = 3)
Test: Floating Point - 31253.93 (SE +/- 348.48, N = 15)
Test: Matrix 3D Math - 28901.02 (SE +/- 329.39, N = 3)
Test: Memory Copying - 25816.50 (SE +/- 629.40, N = 15)
Test: Vector Shuffle - 551599.32 (SE +/- 1079.72, N = 3)
Test: Mixed Scheduler - 74205.17 (SE +/- 6830.21, N = 15)
Test: Socket Activity - 3083.04 (SE +/- 1161.86, N = 15)
Test: Wide Vector Math - 5961543.12 (SE +/- 72348.80, N = 3)
Test: Context Switching - 13968013.67 (SE +/- 1050064.40, N = 12)
Test: Fused Multiply-Add - 253366692.27 (SE +/- 2984105.55, N = 15)
Test: Vector Floating Point - 244931.41 (SE +/- 620.60, N = 3)
Test: Glibc C String Functions - 81547074.59 (SE +/- 906580.48, N = 15)
Test: Glibc Qsort Data Sorting - 1875.83 (SE +/- 0.72, N = 3)
Test: System V Message Passing - 18811154.40 (SE +/- 54732.60, N = 3)
All Stress-NG results: 1. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz
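Each Stress-NG line above corresponds to a single stressor run in isolation. A hedged sketch of invoking a couple of them directly follows; the run length is an arbitrary choice here.

  # Hedged stress-ng sketch: run individual stressors and report bogo-ops/s.
  stress-ng --mmap 0 --timeout 30s --metrics-brief     # an instance count of 0 means one per online CPU
  stress-ng --matrix 0 --timeout 30s --metrics-brief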
NCNN
NCNN 20230517 (OpenBenchmarking.org) - ms, Fewer Is Better - v6.4:
Target: CPU-v2-v2 - Model: mobilenet-v2 - 13.47 (SE +/- 0.22, N = 9; MIN: 11.45 / MAX: 59.13)
Target: CPU-v3-v3 - Model: mobilenet-v3 - 16.26 (SE +/- 0.21, N = 9; MIN: 13.09 / MAX: 150.4)
Target: CPU - Model: shufflenet-v2 - 27.45 (SE +/- 7.18, N = 9; MIN: 13.7 / MAX: 2722.65)
Target: CPU - Model: mnasnet - 13.54 (SE +/- 1.16, N = 9; MIN: 10.12 / MAX: 1632.45)
Target: CPU - Model: efficientnet-b0 - 21.53 (SE +/- 1.11, N = 9; MIN: 16.55 / MAX: 1422.37)
Target: CPU - Model: blazeface - 9.39 (SE +/- 1.12, N = 9; MIN: 7.14 / MAX: 1661.44)
Target: CPU - Model: googlenet - 34.58 (SE +/- 4.26, N = 9; MIN: 20.75 / MAX: 3103.94)
Target: CPU - Model: vgg16 - 63.23 (SE +/- 5.87, N = 9; MIN: 32.56 / MAX: 1162.81)
Target: CPU - Model: resnet18 - 15.69 (SE +/- 0.78, N = 9; MIN: 11.72 / MAX: 98.03)
Target: CPU - Model: alexnet - 22.78 (SE +/- 7.87, N = 9; MIN: 7.72 / MAX: 654.7)
Target: CPU - Model: resnet50 - 45.37 (SE +/- 11.02, N = 9; MIN: 21.66 / MAX: 2788.11)
Target: CPU - Model: yolov4-tiny - 44.93 (SE +/- 3.97, N = 9; MIN: 26.15 / MAX: 1320)
Target: CPU - Model: squeezenet_ssd - 26.70 (SE +/- 1.85, N = 9; MIN: 20.19 / MAX: 1823.29)
Target: CPU - Model: regnety_400m - 151.43 (SE +/- 21.29, N = 9; MIN: 61.81 / MAX: 15083.91)
Target: CPU - Model: vision_transformer - 125.89 (SE +/- 3.72, N = 9; MIN: 72.76 / MAX: 4577.82)
Target: CPU - Model: FastestDet - 18.43 (SE +/- 0.91, N = 9; MIN: 14.45 / MAX: 73.8)
All NCNN results: 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
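NCNN ships a small benchmark tool, benchncnn, that reports per-model inference latency like the figures above. The positional-argument form below (loop count, threads, powersave, GPU device, cooling-down) is recalled from the ncnn sources and should be treated as an assumption.

  # Hedged benchncnn sketch: 8 loops, all CPU threads, no powersave, CPU only (-1), no cooling-down pause.
  ./benchncnn 8 $(nproc) 0 -1 0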
OpenVINO
OpenVINO 2023.1 (OpenBenchmarking.org) - v6.4. FPS is More Is Better; latency (ms) is Fewer Is Better:
Model: Face Detection FP16 - Device: CPU - 125.69 FPS (SE +/- 0.59, N = 3); 293.6 ms (SE +/- 1.25, N = 3; MIN: 175.19 / MAX: 761.95)
Model: Person Detection FP16 - Device: CPU - 474.05 FPS (SE +/- 5.93, N = 3); 77.97 ms (SE +/- 0.97, N = 3; MIN: 47.82 / MAX: 365.81)
Model: Person Detection FP32 - Device: CPU - 475.66 FPS (SE +/- 5.84, N = 3); 77.70 ms (SE +/- 0.95, N = 3; MIN: 49.99 / MAX: 537.56)
Model: Vehicle Detection FP16 - Device: CPU - 2572.63 FPS (SE +/- 24.17, N = 6); 14.35 ms (SE +/- 0.13, N = 6; MIN: 9.07 / MAX: 112.91)
Model: Face Detection FP16-INT8 - Device: CPU - 331.20 FPS (SE +/- 0.28, N = 3); 337.22 ms (SE +/- 0.31, N = 3; MIN: 252.67 / MAX: 470.11)
Model: Face Detection Retail FP16 - Device: CPU - 11342.54 FPS (SE +/- 16.48, N = 3); 9.85 ms (SE +/- 0.01, N = 3; MIN: 7.51 / MAX: 55.05)
Model: Road Segmentation ADAS FP16 - Device: CPU - 1112.46 FPS (SE +/- 13.29, N = 3); 33.21 ms (SE +/- 0.39, N = 3; MIN: 23.34 / MAX: 169.71)
Model: Vehicle Detection FP16-INT8 - Device: CPU - 5018.03 FPS (SE +/- 20.34, N = 3); 22.28 ms (SE +/- 0.09, N = 3; MIN: 14.79 / MAX: 71.65)
Model: Weld Porosity Detection FP16 - Device: CPU - 16690.36 FPS (SE +/- 240.02, N = 3); 6.60 ms (SE +/- 0.09, N = 3; MIN: 4.28 / MAX: 87.64)
Model: Face Detection Retail FP16-INT8 - Device: CPU - 16182.55 FPS (SE +/- 27.81, N = 3); 6.89 ms (SE +/- 0.02, N = 3; MIN: 5.52 / MAX: 36.52)
Model: Road Segmentation ADAS FP16-INT8 - Device: CPU - 1503.54 FPS (SE +/- 2.60, N = 3); 74.40 ms (SE +/- 0.13, N = 3; MIN: 53.51 / MAX: 281.3)
Model: Machine Translation EN To DE FP16 - Device: CPU - 667.64 FPS (SE +/- 5.12, N = 3); 55.31 ms (SE +/- 0.43, N = 3; MIN: 38.54 / MAX: 312.66)
Model: Weld Porosity Detection FP16-INT8 - Device: CPU - 25464.62 FPS (SE +/- 231.74, N = 15); 4.23 ms (SE +/- 0.03, N = 15; MIN: 2.61 / MAX: 111.71)
Model: Person Vehicle Bike Detection FP16 - Device: CPU - 6316.44 FPS (SE +/- 8.07, N = 3); 17.70 ms (SE +/- 0.02, N = 3; MIN: 12.28 / MAX: 42.46)
Model: Handwritten English Recognition FP16 - Device: CPU - 3597.65 FPS (SE +/- 9.62, N = 3); 31.10 ms (SE +/- 0.09, N = 3; MIN: 24.91 / MAX: 99.63)
Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU - 74954.03 FPS (SE +/- 1782.53, N = 15); 0.57 ms (SE +/- 0.01, N = 15; MIN: 0.29 / MAX: 74.65)
Model: Handwritten English Recognition FP16-INT8 - Device: CPU - 2452.14 FPS (SE +/- 5.85, N = 3); 45.63 ms (SE +/- 0.11, N = 3; MIN: 38.64 / MAX: 91.4)
Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU - 112475.34 FPS (SE +/- 1041.05, N = 15); 0.34 ms (SE +/- 0.00, N = 15; MIN: 0.25 / MAX: 36.53)
All OpenVINO results: 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
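OpenVINO's own benchmark_app underlies throughput/latency pairs like the ones above. A hedged sketch follows; the model file name is a placeholder and the run length is an arbitrary choice.

  # Hedged OpenVINO benchmark_app sketch: CPU device, throughput hint, 60-second run.
  benchmark_app -m model.xml -d CPU -hint throughput -t 60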
Testing initiated at 4 October 2023 15:43 by user phoronix.