AMD Ryzen 9 7950X3D 16-Core testing with an ASRockRack B650D4U-2L2T/BCM (2.09 BIOS) and ASPEED 512MB on Ubuntu 22.04 via the Phoronix Test Suite.
a Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: amd-pstate-epp performance (EPP: performance) - CPU Microcode: 0xa601203
Java Notes: OpenJDK Runtime Environment (build 11.0.20+8-post-Ubuntu-1ubuntu122.04)
Python Notes: Python 3.10.12
Security Notes: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Mitigation of safe RET no microcode + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
b c d Processor: AMD Ryzen 9 7950X3D 16-Core @ 5.76GHz (16 Cores / 32 Threads)
Motherboard: ASRockRack B650D4U-2L2T/BCM (2.09 BIOS)
Chipset: AMD Device 14d8
Memory: 2 x 32 GB DDR5-4800MT/s MTC20C2085S1EC48BA1
Disk: 3201GB Micron_7450_MTFDKCC3T2TFS + 0GB Virtual HDisk0 + 0GB Virtual HDisk1 + 0GB Virtual HDisk2 + 0GB Virtual HDisk3
Graphics: ASPEED 512MB
Audio: AMD Device 1640
Monitor: VA2431
Network: 2 x Intel I210 + 2 x Broadcom BCM57416 NetXtreme-E Dual-Media 10G RDMA
OS: Ubuntu 22.04, Kernel: 6.6.0-rc4-phx-amd-pref-core (x86_64), Desktop: GNOME Shell 42.9, Display Server: X Server, Vulkan: 1.3.238, Compiler: GCC 11.4.0, File-System: ext4, Screen Resolution: 1920x1200
QuantLib
QuantLib is an open-source library/framework for quantitative finance, covering modeling, trading, and risk management scenarios. QuantLib is written in C++ with Boost, and its built-in benchmark reports a QuantLib Benchmark Index score. Learn more via the OpenBenchmarking.org test page.
QuantLib 1.32, Configuration: Multi-Threaded (MFLOPS, more is better; SE +/- 83.63, N = 3): d: 82159.4, a: 82133.5, b: 82020.0, c: 81958.0
QuantLib 1.32, Configuration: Single-Threaded (MFLOPS, more is better; SE +/- 9.84, N = 3): b: 4042.6, d: 4033.0, c: 4025.4, a: 3980.3
All QuantLib results: 1. (CXX) g++ options: -O3 -march=native -fPIE -pie
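The SE +/- figures reported throughout are standard errors of the mean over the N benchmark runs. A minimal sketch of that computation (the run values below are made up for illustration, not taken from these results):

```python
import math
import statistics

def standard_error(runs):
    """Standard error of the mean: sample stdev divided by sqrt(N)."""
    return statistics.stdev(runs) / math.sqrt(len(runs))

# Hypothetical N = 3 runs of a throughput benchmark
runs = [82100.0, 82150.0, 82200.0]
mean = statistics.mean(runs)
se = standard_error(runs)
print(f"{mean:.1f} MFLOPS, SE +/- {se:.2f}, N = {len(runs)}")  # 82150.0 MFLOPS, SE +/- 28.87, N = 3
```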
CloverLeaf 1.3, Input: clover_bm16 (Seconds, fewer is better; SE +/- 0.59, N = 3): c: 1494.53, b: 1495.65, d: 1496.30, a: 1496.78
CloverLeaf 1.3, Input: clover_bm64_short (Seconds, fewer is better; SE +/- 0.02, N = 3): d: 175.86, a: 175.91, b: 175.92, c: 175.92
All CloverLeaf results: 1. (F9X) gfortran options: -O3 -march=native -funroll-loops -fopenmp
QMCPACK
QMCPACK is an open-source, production-level many-body ab initio Quantum Monte Carlo (QMC) code for computing the electronic structure of atoms, molecules, and solids, and it makes use of MPI for this benchmark. QMCPACK development is supported by the U.S. Department of Energy. Learn more via the OpenBenchmarking.org test page.
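QMCPACK's algorithms are far more sophisticated, but the core idea behind any Monte Carlo method, estimating a quantity from random samples, can be sketched with a toy estimate of pi (illustrative only, unrelated to QMCPACK's actual workloads):

```python
import random

def estimate_pi(samples, seed=42):
    """Monte Carlo: the fraction of random points in the unit square
    that land inside the quarter circle approximates pi/4."""
    rng = random.Random(seed)
    inside = sum(
        1 for _ in range(samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * inside / samples

print(estimate_pi(100_000))  # close to 3.14159
```

The statistical error shrinks as 1/sqrt(samples), which is why QMC runs report per-run uncertainty alongside the result.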
QMCPACK 3.17.1, Input: H4_ae (Total Execution Time - Seconds, fewer is better; SE +/- 0.09, N = 15): a: 12.16, b: 12.51, d: 12.62, c: 12.63
QMCPACK 3.17.1, Input: Li2_STO_ae (Total Execution Time - Seconds, fewer is better; SE +/- 1.12, N = 3): d: 135.02, a: 135.60, c: 135.60, b: 135.95
QMCPACK 3.17.1, Input: LiH_ae_MSD (Total Execution Time - Seconds, fewer is better; SE +/- 0.40, N = 3): a: 73.47, d: 73.49, c: 73.60, b: 74.10
QMCPACK 3.17.1, Input: simple-H2O (Total Execution Time - Seconds, fewer is better; SE +/- 0.03, N = 3): a: 18.24, d: 18.30, c: 18.39, b: 18.43
QMCPACK 3.17.1, Input: O_ae_pyscf_UHF (Total Execution Time - Seconds, fewer is better; SE +/- 1.01, N = 3): d: 129.65, c: 129.98, b: 131.36, a: 132.77
QMCPACK 3.17.1, Input: FeCO6_b3lyp_gms (Total Execution Time - Seconds, fewer is better; SE +/- 0.20, N = 3): c: 124.98, d: 125.13, a: 125.46, b: 125.49
All QMCPACK results: 1. (CXX) g++ options: -fopenmp -foffload=disable -finline-limit=1000 -fstrict-aliasing -funroll-all-loops -ffast-math -march=native -O3 -lm -ldl
FFmpeg
This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile uses a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/], a benchmark for video-as-a-service workloads. The test profile offers a range of vbench scenarios based on freely distributable video content, using either the x264 or x265 video encoder for transcoding. Learn more via the OpenBenchmarking.org test page.
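The FPS metric comes from FFmpeg's own encoder progress output. As an illustration, the fps field can be parsed from a progress line like this (the sample log line below is hypothetical, not taken from these runs):

```python
import re

# FFmpeg writes progress lines of roughly this shape to stderr
log_line = ("frame=  480 fps=284 q=28.0 size=    2048kB "
            "time=00:00:16.00 bitrate=1048.6kbits/s speed=9.47x")

def parse_fps(line):
    """Extract the fps value from an FFmpeg progress line, or None."""
    m = re.search(r"fps=\s*([\d.]+)", line)
    return float(m.group(1)) if m else None

print(parse_fps(log_line))  # 284.0
```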
FFmpeg 6.1, Encoder: libx264, Scenario: Live (FPS, more is better; SE +/- 1.97, N = 3): c: 284.24, a: 282.96, b: 282.92, d: 276.55
FFmpeg 6.1, Encoder: libx265, Scenario: Live (FPS, more is better; SE +/- 0.76, N = 3): b: 180.06, c: 180.02, a: 179.55, d: 179.29
FFmpeg 6.1, Encoder: libx264, Scenario: Upload (FPS, more is better; SE +/- 0.14, N = 3): a: 16.90, c: 16.81, b: 16.67, d: 16.64
FFmpeg 6.1, Encoder: libx265, Scenario: Upload (FPS, more is better; SE +/- 0.08, N = 3): d: 33.57, a: 33.54, b: 33.53, c: 33.46
FFmpeg 6.1, Encoder: libx264, Scenario: Platform (FPS, more is better; SE +/- 0.08, N = 3): a: 63.98, b: 63.85, d: 63.75, c: 63.69
FFmpeg 6.1, Encoder: libx265, Scenario: Platform (FPS, more is better; SE +/- 0.09, N = 3): d: 68.63, a: 68.39, b: 68.36, c: 68.29
FFmpeg 6.1, Encoder: libx264, Scenario: Video On Demand (FPS, more is better; SE +/- 0.11, N = 3): d: 64.26, a: 64.16, c: 64.14, b: 63.78
FFmpeg 6.1, Encoder: libx265, Scenario: Video On Demand (FPS, more is better; SE +/- 0.07, N = 3): d: 68.71, b: 68.57, a: 68.48, c: 68.14
All FFmpeg results: 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
WebP2 Image Encode
This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as the eventual successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading. Learn more via the OpenBenchmarking.org test page.
WebP2 Image Encode 20220823, Encode Settings: Default (MP/s, more is better; SE +/- 0.18, N = 3): d: 13.71, a: 13.51, c: 13.38, b: 13.05. 1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl
easyWave
The easyWave software simulates tsunami generation and propagation in the context of early warning systems. easyWave supports OpenMP for CPU multi-threading; GPU ports also exist but are not currently incorporated into this test profile. The software is run with one of the example/reference input files while measuring CPU execution time. Learn more via the OpenBenchmarking.org test page.
easyWave r34, Input: e2Asean Grid + BengkuluSept2007 Source - Time: 240 (Seconds, fewer is better; SE +/- 0.004, N = 3): b: 2.089, d: 2.090, c: 2.101, a: 2.148
easyWave r34, Input: e2Asean Grid + BengkuluSept2007 Source - Time: 1200 (Seconds, fewer is better; SE +/- 0.33, N = 3): d: 80.10, c: 80.87, b: 81.03, a: 81.63
All easyWave results: 1. (CXX) g++ options: -O3 -fopenmp
Embree
Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL), supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.
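Embree's kernels work at the level of ray/primitive intersection tests. A toy, scalar ray-sphere intersection illustrates the kind of math involved (Embree's actual kernels are heavily vectorized C++; this is only a sketch of the geometry):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Distance along a normalized ray to the nearest sphere
    intersection, or None if the ray misses."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c  # quadratic coefficient a == 1 for unit direction
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t >= 0 else None

# Ray from the origin along +z toward a unit sphere centered at z = 5
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```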
Embree 4.3, Binary: Pathtracer, Model: Crown (Frames Per Second, more is better; SE +/- 0.06, N = 3): d: 30.51 (min 30.28 / max 31.02), a: 30.50 (min 30.29 / max 31), b: 30.45 (min 30.1 / max 31.05), c: 30.39 (min 30.17 / max 30.88)
Embree 4.3, Binary: Pathtracer ISPC, Model: Crown (Frames Per Second, more is better; SE +/- 0.04, N = 3): b: 31.97 (min 31.62 / max 32.67), a: 31.92 (min 31.66 / max 32.67), d: 31.85 (min 31.57 / max 32.5), c: 31.70 (min 31.4 / max 32.36)
Embree 4.3, Binary: Pathtracer, Model: Asian Dragon (Frames Per Second, more is better; SE +/- 0.06, N = 3): a: 30.93 (min 30.8 / max 31.38), b: 30.89 (min 30.64 / max 31.42), c: 30.83 (min 30.69 / max 31.19), d: 30.83 (min 30.69 / max 31.27)
Embree 4.3, Binary: Pathtracer, Model: Asian Dragon Obj (Frames Per Second, more is better; SE +/- 0.04, N = 3): a: 27.58 (min 27.4 / max 28.05), c: 27.54 (min 27.35 / max 27.99), d: 27.53 (min 27.38 / max 27.96), b: 27.52 (min 27.3 / max 28.13)
Embree 4.3, Binary: Pathtracer ISPC, Model: Asian Dragon (Frames Per Second, more is better; SE +/- 0.07, N = 3): a: 33.78 (min 33.54 / max 34.43), b: 33.73 (min 33.38 / max 34.7), d: 33.72 (min 33.46 / max 34.55), c: 33.65 (min 33.44 / max 34.17)
Embree 4.3, Binary: Pathtracer ISPC, Model: Asian Dragon Obj (Frames Per Second, more is better; SE +/- 0.05, N = 3): b: 28.71 (min 28.44 / max 29.67), d: 28.70 (min 28.49 / max 29.4), c: 28.64 (min 28.43 / max 29.3), a: 28.61 (min 28.41 / max 29.24)
OpenVKL
OpenVKL is the Intel Open Volume Kernel Library, which offers high-performance volume computation kernels and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.
OpenVKL 2.0.0, Benchmark: vklBenchmarkCPU ISPC (Items / Sec, more is better; SE +/- 0.00, N = 3): c: 604 (min 46 / max 8291), d: 603 (min 46 / max 8277), b: 603 (min 46 / max 8279), a: 603 (min 46 / max 8290)
OpenVKL 2.0.0, Benchmark: vklBenchmarkCPU Scalar (Items / Sec, more is better; SE +/- 0.67, N = 3): c: 242 (min 17 / max 4423), b: 242 (min 16 / max 4422), a: 242 (min 17 / max 4418), d: 241 (min 16 / max 4419)
oneDNN
This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library), and before that MKL-DNN, prior to being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.
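The results below report an average time per iteration plus the minimum observed (MIN). A generic micro-benchmark harness of that shape looks roughly like the following; this is only a sketch of the measurement pattern, not benchdnn's internals:

```python
import time

def bench(fn, reps=10):
    """Time fn() reps times; return (mean, minimum) in milliseconds."""
    times = []
    for _ in range(reps):
        start = time.perf_counter()
        fn()
        times.append((time.perf_counter() - start) * 1000.0)
    return sum(times) / len(times), min(times)

# Dummy workload standing in for a oneDNN primitive
mean_ms, min_ms = bench(lambda: sum(i * i for i in range(100_000)))
print(f"avg: {mean_ms:.3f} ms, MIN: {min_ms:.3f} ms")
```

Reporting the minimum alongside the mean helps separate the primitive's best-case cost from scheduling and cache noise.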
oneDNN 3.3, Harness: IP Shapes 1D, Data Type: f32, Engine: CPU (ms, fewer is better; SE +/- 0.02190, N = 3): a: 1.89379 (min 1.7), c: 1.92566 (min 1.71), d: 1.93518 (min 1.72), b: 1.94087 (min 1.73)
oneDNN 3.3, Harness: IP Shapes 3D, Data Type: f32, Engine: CPU (ms, fewer is better; SE +/- 0.01747, N = 3): a: 2.80183 (min 2.76), c: 2.80377 (min 2.76), b: 2.81737 (min 2.75), d: 2.81873 (min 2.78)
oneDNN 3.3, Harness: IP Shapes 1D, Data Type: u8s8f32, Engine: CPU (ms, fewer is better; SE +/- 0.010372, N = 15): b: 0.498741 (min 0.39), d: 0.520575 (min 0.43), a: 0.523659 (min 0.43), c: 0.534576 (min 0.43)
oneDNN 3.3, Harness: IP Shapes 3D, Data Type: u8s8f32, Engine: CPU (ms, fewer is better; SE +/- 0.003205, N = 13): c: 0.248788 (min 0.24), d: 0.256962 (min 0.24), a: 0.262956 (min 0.25), b: 0.279128 (min 0.24)
oneDNN 3.3, Harness: IP Shapes 1D, Data Type: bf16bf16bf16, Engine: CPU (ms, fewer is better; SE +/- 0.002235, N = 3): b: 0.693123 (min 0.64), a: 0.694296 (min 0.65), d: 0.695171 (min 0.65), c: 0.695339 (min 0.65)
oneDNN 3.3, Harness: IP Shapes 3D, Data Type: bf16bf16bf16, Engine: CPU (ms, fewer is better; SE +/- 0.01251, N = 5): d: 1.17668 (min 1.1), c: 1.18372 (min 1.11), b: 1.18558 (min 1.08), a: 1.23712 (min 1.17)
oneDNN 3.3, Harness: Convolution Batch Shapes Auto, Data Type: f32, Engine: CPU (ms, fewer is better; SE +/- 0.00942, N = 3): c: 4.01114 (min 3.96), a: 4.01781 (min 3.96), b: 4.03418 (min 3.94), d: 4.04319 (min 3.98)
oneDNN 3.3, Harness: Deconvolution Batch shapes_1d, Data Type: f32, Engine: CPU (ms, fewer is better; SE +/- 0.00759, N = 3): a: 2.98536, d: 2.98785, c: 3.00906, b: 3.01099 (min 2.51 for all)
oneDNN 3.3, Harness: Deconvolution Batch shapes_3d, Data Type: f32, Engine: CPU (ms, fewer is better; SE +/- 0.00014, N = 3): c: 2.59835, b: 2.59838, d: 2.59886, a: 2.59896 (min 2.59 for all)
oneDNN 3.3, Harness: Convolution Batch Shapes Auto, Data Type: u8s8f32, Engine: CPU (ms, fewer is better; SE +/- 0.01733, N = 3): b: 3.71650 (min 3.63), c: 3.72781 (min 3.66), d: 3.76192 (min 3.67), a: 3.76779 (min 3.68)
oneDNN 3.3, Harness: Deconvolution Batch shapes_1d, Data Type: u8s8f32, Engine: CPU (ms, fewer is better; SE +/- 0.000061, N = 3): a: 0.455531, d: 0.455553, c: 0.456250, b: 0.456347 (min 0.44 for all)
oneDNN 3.3, Harness: Deconvolution Batch shapes_3d, Data Type: u8s8f32, Engine: CPU (ms, fewer is better; SE +/- 0.000222, N = 3): d: 0.645231, c: 0.646223, b: 0.646866, a: 0.647550 (min 0.64 for all)
oneDNN 3.3, Harness: Recurrent Neural Network Training, Data Type: f32, Engine: CPU (ms, fewer is better; SE +/- 0.51, N = 3): d: 1224.00 (min 1220.91), a: 1227.58 (min 1224.62), b: 1229.80 (min 1224.55), c: 1234.14 (min 1229.87)
oneDNN 3.3, Harness: Recurrent Neural Network Inference, Data Type: f32, Engine: CPU (ms, fewer is better; SE +/- 0.32, N = 3): a: 620.71 (min 617.84), d: 625.50 (min 622.67), c: 632.39 (min 629.36), b: 636.50 (min 632.82)
oneDNN 3.3, Harness: Recurrent Neural Network Training, Data Type: u8s8f32, Engine: CPU (ms, fewer is better; SE +/- 0.95, N = 3): d: 1224.11 (min 1220.23), a: 1231.93 (min 1227.81), c: 1236.28 (min 1233.21), b: 1236.68 (min 1230.73)
oneDNN 3.3, Harness: Convolution Batch Shapes Auto, Data Type: bf16bf16bf16, Engine: CPU (ms, fewer is better; SE +/- 0.00130, N = 3): a: 1.09425, b: 1.09760, d: 1.09782, c: 1.09942 (min 1.07 for all)
oneDNN 3.3, Harness: Deconvolution Batch shapes_1d, Data Type: bf16bf16bf16, Engine: CPU (ms, fewer is better; SE +/- 0.00064, N = 3): c: 2.33922 (min 2.31), d: 2.34110 (min 2.3), b: 2.34125 (min 2.31), a: 2.34329 (min 2.31)
oneDNN 3.3, Harness: Deconvolution Batch shapes_3d, Data Type: bf16bf16bf16, Engine: CPU (ms, fewer is better; SE +/- 0.00176, N = 3): d: 1.48153 (min 1.47), a: 1.48385 (min 1.47), b: 1.48496 (min 1.47), c: 1.48752 (min 1.48)
oneDNN 3.3, Harness: Recurrent Neural Network Inference, Data Type: u8s8f32, Engine: CPU (ms, fewer is better; SE +/- 1.22, N = 3): c: 624.14 (min 621.15), a: 633.91 (min 629.4), d: 634.66 (min 631.75), b: 635.30 (min 629.42)
oneDNN 3.3, Harness: Recurrent Neural Network Training, Data Type: bf16bf16bf16, Engine: CPU (ms, fewer is better; SE +/- 2.80, N = 3): c: 1228.25 (min 1224.56), b: 1231.68 (min 1222.84), d: 1234.57 (min 1231), a: 1236.29 (min 1232.2)
oneDNN 3.3, Harness: Recurrent Neural Network Inference, Data Type: bf16bf16bf16, Engine: CPU (ms, fewer is better; SE +/- 2.14, N = 3): c: 628.31 (min 625.84), a: 632.07 (min 628.15), b: 633.61 (min 626.34), d: 635.31 (min 632.15)
All oneDNN results: 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OSPRay Studio
Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.
OSPRay Studio 0.13, Camera: 1, Resolution: 4K, Samples Per Pixel: 1, Renderer: Path Tracer, Acceleration: CPU (ms, fewer is better; SE +/- 3.51, N = 3): c: 4239, a: 4251, d: 4251, b: 4267
OSPRay Studio 0.13, Camera: 2, Resolution: 4K, Samples Per Pixel: 1, Renderer: Path Tracer, Acceleration: CPU (ms, fewer is better; SE +/- 4.41, N = 3): d: 4297, a: 4303, c: 4316, b: 4323
OSPRay Studio 0.13, Camera: 3, Resolution: 4K, Samples Per Pixel: 1, Renderer: Path Tracer, Acceleration: CPU (ms, fewer is better; SE +/- 4.26, N = 3): a: 5025, b: 5027, c: 5035, d: 5042
OSPRay Studio 0.13, Camera: 1, Resolution: 4K, Samples Per Pixel: 16, Renderer: Path Tracer, Acceleration: CPU (ms, fewer is better; SE +/- 88.19, N = 3): a: 71605, d: 71802, b: 72346, c: 72590
OSPRay Studio 0.13, Camera: 1, Resolution: 4K, Samples Per Pixel: 32, Renderer: Path Tracer, Acceleration: CPU (ms, fewer is better; SE +/- 200.00, N = 3): d: 139631, a: 139956, c: 140119, b: 140342
OSPRay Studio 0.13, Camera: 2, Resolution: 4K, Samples Per Pixel: 16, Renderer: Path Tracer, Acceleration: CPU (ms, fewer is better; SE +/- 124.54, N = 3): d: 72762, b: 72815, c: 72888, a: 73112
OSPRay Studio 0.13, Camera: 2, Resolution: 4K, Samples Per Pixel: 32, Renderer: Path Tracer, Acceleration: CPU (ms, fewer is better; SE +/- 201.33, N = 3): d: 141360, a: 141550, c: 141698, b: 141773
OSPRay Studio 0.13, Camera: 3, Resolution: 4K, Samples Per Pixel: 16, Renderer: Path Tracer, Acceleration: CPU (ms, fewer is better; SE +/- 192.43, N = 3): a: 84397, d: 84573, b: 84660, c: 84786
OSPRay Studio 0.13, Camera: 3, Resolution: 4K, Samples Per Pixel: 32, Renderer: Path Tracer, Acceleration: CPU (ms, fewer is better; SE +/- 70.29, N = 3): c: 164625, d: 164645, b: 165034, a: 165253
OSPRay Studio 0.13, Camera: 1, Resolution: 1080p, Samples Per Pixel: 1, Renderer: Path Tracer, Acceleration: CPU (ms, fewer is better; SE +/- 0.58, N = 3): d: 1068, a: 1069, b: 1069, c: 1069
OSPRay Studio 0.13, Camera: 2, Resolution: 1080p, Samples Per Pixel: 1, Renderer: Path Tracer, Acceleration: CPU (ms, fewer is better; SE +/- 0.58, N = 3): a: 1083, b: 1083, c: 1083, d: 1085
OSPRay Studio 0.13, Camera: 3, Resolution: 1080p, Samples Per Pixel: 1, Renderer: Path Tracer, Acceleration: CPU (ms, fewer is better; SE +/- 2.65, N = 3): d: 1264, c: 1265, b: 1266, a: 1267
OSPRay Studio 0.13, Camera: 1, Resolution: 1080p, Samples Per Pixel: 16, Renderer: Path Tracer, Acceleration: CPU (ms, fewer is better; SE +/- 12.41, N = 3): d: 17075, c: 17097, a: 17116, b: 17116
OSPRay Studio 0.13, Camera: 1, Resolution: 1080p, Samples Per Pixel: 32, Renderer: Path Tracer, Acceleration: CPU (ms, fewer is better; SE +/- 262.57, N = 3): b: 38279, c: 38448, a: 38499, d: 38512
OSPRay Studio 0.13, Camera: 2, Resolution: 1080p, Samples Per Pixel: 16, Renderer: Path Tracer, Acceleration: CPU (ms, fewer is better; SE +/- 12.45, N = 3): c: 17235, a: 17266, d: 17289, b: 17316
OSPRay Studio 0.13, Camera: 2, Resolution: 1080p, Samples Per Pixel: 32, Renderer: Path Tracer, Acceleration: CPU (ms, fewer is better; SE +/- 34.12, N = 3): b: 38740, c: 38798, a: 38852, d: 39062
OSPRay Studio 0.13, Camera: 3, Resolution: 1080p, Samples Per Pixel: 16, Renderer: Path Tracer, Acceleration: CPU (ms, fewer is better; SE +/- 20.65, N = 3): d: 20139, a: 20148, c: 20186, b: 20226
OSPRay Studio 0.13, Camera: 3, Resolution: 1080p, Samples Per Pixel: 32, Renderer: Path Tracer, Acceleration: CPU (ms, fewer is better; SE +/- 116.17, N = 3): d: 44177, b: 44437, a: 44670, c: 44682
Cpuminer-Opt
Cpuminer-Opt is a fork of cpuminer-multi carrying a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the processor across a variety of cryptocurrencies. The benchmark reports the hash speed of CPU mining for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.
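Hash speed here is simply hashes completed per second. As a rough illustration (pure Python, orders of magnitude slower than cpuminer-opt's optimized kernels), a Bitcoin-style double-SHA-256 hash rate can be measured like this:

```python
import hashlib
import time

def hash_rate(n=50_000):
    """Measure double-SHA-256 hashes per second over n nonces; return kH/s."""
    header = b"\x00" * 80  # dummy 80-byte block header
    start = time.perf_counter()
    for nonce in range(n):
        data = header + nonce.to_bytes(4, "little")
        hashlib.sha256(hashlib.sha256(data).digest()).digest()
    elapsed = time.perf_counter() - start
    return n / elapsed / 1000.0  # kH/s

print(f"{hash_rate():.1f} kH/s")
```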
Cpuminer-Opt 23.5, Algorithm: Magi (kH/s, more is better; SE +/- 0.61, N = 3): a: 640.30, c: 636.52, d: 635.79, b: 635.66
Cpuminer-Opt 23.5, Algorithm: scrypt (kH/s, more is better; SE +/- 0.25, N = 3): d: 305.36, c: 305.22, b: 304.62, a: 304.46
Cpuminer-Opt 23.5, Algorithm: Deepcoin (kH/s, more is better; SE +/- 21.85, N = 3): c: 7982.94, a: 7978.24, d: 7973.87, b: 7965.21
Cpuminer-Opt 23.5, Algorithm: Ringcoin (kH/s, more is better; SE +/- 2.45, N = 3): a: 3455.06, c: 3367.45, b: 3355.73, d: 3350.86
Cpuminer-Opt 23.5, Algorithm: Blake-2 S (kH/s, more is better; SE +/- 3.33, N = 3): d: 135200, c: 134800, a: 134660, b: 134177
Cpuminer-Opt 23.5, Algorithm: Garlicoin (kH/s, more is better; SE +/- 2.78, N = 3): d: 1844.59, a: 1796.71, c: 1783.94, b: 1783.76
Cpuminer-Opt 23.5, Algorithm: Skeincoin (kH/s, more is better; SE +/- 5.77, N = 3): d: 34470, b: 34440, a: 34440, c: 34430
Cpuminer-Opt 23.5, Algorithm: Myriad-Groestl (kH/s, more is better; SE +/- 50.00, N = 3): b: 11490, a: 11440, c: 11400, d: 11390
Cpuminer-Opt 23.5, Algorithm: LBC, LBRY Credits (kH/s, more is better; SE +/- 3.33, N = 3): a: 15850, c: 15790, b: 15783, d: 15780
Cpuminer-Opt 23.5, Algorithm: Quad SHA-256, Pyrite (kH/s, more is better; SE +/- 16.67, N = 3): b: 62127, c: 62080, a: 62080, d: 62050
Cpuminer-Opt 23.5, Algorithm: Triple SHA-256, Onecoin (kH/s, more is better; SE +/- 3.33, N = 3): b: 105897, c: 105890, a: 105880, d: 105860
All Cpuminer-Opt results: 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenSSL
OpenSSL is an open-source toolkit implementing the SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. The system/openssl test profile benchmarks the system/OS-supplied openssl binary, rather than the pts/openssl test profile, which uses a locally-built OpenSSL. Learn more via the OpenBenchmarking.org test page.
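The byte/s figures are bulk throughput: bytes digested or encrypted per second (the openssl binary measures this via its `speed` command). A minimal sketch of the same style of measurement using Python's hashlib:

```python
import hashlib
import time

def sha256_throughput(total_mb=64, chunk_kb=16):
    """Digest total_mb of data in chunk_kb pieces; return bytes/second."""
    chunk = b"\x00" * (chunk_kb * 1024)
    iterations = (total_mb * 1024) // chunk_kb
    h = hashlib.sha256()
    start = time.perf_counter()
    for _ in range(iterations):
        h.update(chunk)
    h.digest()
    return (iterations * len(chunk)) / (time.perf_counter() - start)

print(f"{sha256_throughput():.0f} byte/s")
```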
OpenSSL, Algorithm: SHA256 (byte/s, more is better): d: 32930127410, b: 32907394910, c: 32833153190, a: 32631708710
OpenSSL, Algorithm: SHA512 (byte/s, more is better): c: 10649075500, d: 10639231370, a: 10638469960, b: 10635835530
OpenSSL, Algorithm: RSA4096 (sign/s, more is better): b: 5508.4, a: 5501.2, d: 5500.8, c: 5468.4
OpenSSL, Algorithm: RSA4096 (verify/s, more is better): b: 359044.2, c: 358701.3, a: 358677.5, d: 358447.1
OpenSSL, Algorithm: ChaCha20 (byte/s, more is better): a: 125488652010, c: 125338580040, d: 125277063850, b: 125127430960
OpenSSL, Algorithm: AES-128-GCM (byte/s, more is better): d: 99011030490, c: 98987719340, a: 98960081440, b: 98958335180
OpenSSL, Algorithm: AES-256-GCM (byte/s, more is better): b: 92512692630, a: 92480786980, c: 92479315690, d: 92479134650
OpenSSL, Algorithm: ChaCha20-Poly1305 (byte/s, more is better): b: 89582688390, d: 89488403930, c: 89478928250, a: 89461502230
All OpenSSL results: 1. OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
RabbitMQ
RabbitMQ is an open-source message broker. This test profile uses RabbitMQ PerfTest, with the RabbitMQ server and the PerfTest client running on the same host, as a system/CPU performance benchmark. Learn more via the OpenBenchmarking.org test page.
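Every PerfTest run below failed with Connection refused, meaning nothing was listening on the broker's TCP port (RabbitMQ's default is 5672). A small sketch of the kind of reachability check that identifies this failure mode (host and port values are illustrative):

```python
import socket

def broker_reachable(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds; a refused
    or timed-out connection (no listener) returns False."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers ConnectionRefusedError and timeouts
        return False

print(broker_reachable("127.0.0.1", 5672))
```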
Scenario: Simple 2 Publishers + 4 Consumers
a / b / c / d: The test quit with a non-zero exit status. E: java.net.ConnectException: Connection refused (Connection refused)
Scenario: 10 Queues, 100 Producers, 100 Consumers
a / b / c / d: The test quit with a non-zero exit status. E: java.net.ConnectException: Connection refused (Connection refused)
Scenario: 60 Queues, 100 Producers, 100 Consumers
a / b / c / d: The test quit with a non-zero exit status. E: java.net.ConnectException: Connection refused (Connection refused)
Scenario: 120 Queues, 400 Producers, 400 Consumers
a / b / c / d: The test quit with a non-zero exit status. E: java.net.ConnectException: Connection refused (Connection refused)
Scenario: 200 Queues, 400 Producers, 400 Consumers
a / b / c / d: The test quit with a non-zero exit status. E: java.net.ConnectException: Connection refused (Connection refused)
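Every failed run above reports java.net.ConnectException: Connection refused, which the JVM raises when the target host is reachable but no process is listening on the broker's TCP port (the OS returns ECONNREFUSED). A minimal sketch of that condition, in Python rather than Java, with a hypothetical host/port pair:

```python
import socket

def try_connect(host: str, port: int, timeout: float = 1.0) -> str:
    """Attempt a TCP connection and classify the outcome.

    "connection refused" (ECONNREFUSED) means the host answered but no
    process is bound to the port -- the same condition that
    java.net.ConnectException reports in the failed runs above.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "connected"
    except ConnectionRefusedError:
        return "connection refused"
    except OSError as exc:  # unreachable host, timeout, etc.
        return f"error: {exc}"

# Port 1 on localhost almost never has a listener, so this
# typically reports a refused connection.
print(try_connect("127.0.0.1", 1))
```

In the benchmark's case, this points at the message broker failing to start (or exiting early) before the producer/consumer clients attempted to connect.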
PyTorch
OpenBenchmarking.org batches/sec, More Is Better - PyTorch 2.1, Device: CPU (per-run minimum and maximum in parentheses)

Batch Size: 1 - Model: ResNet-50: b 70.39 (min 65.98 / max 71.59), a 69.41 (min 64.47 / max 71.04), c 69.05 (min 64.46 / max 70.37), d 68.99 (min 65.18 / max 70.36)
Batch Size: 1 - Model: ResNet-152: d 27.63 (min 26.37 / max 27.91), c 27.02 (min 26.41 / max 27.29), a 26.99 (min 25.92 / max 27.21), b 26.73 (min 25.73 / max 27.4)
Batch Size: 16 - Model: ResNet-50: b 46.73 (min 43.51 / max 47.16), c 45.94 (min 36.22 / max 46.45), a 45.70 (min 43.16 / max 46.31), d 45.16 (min 42.72 / max 46.38)
Batch Size: 32 - Model: ResNet-50: d 47.70 (min 44.77 / max 48.1), b 45.98 (min 43.4 / max 46.68), c 45.91 (min 41.91 / max 46.67), a 44.97 (min 42.18 / max 46.53)
Batch Size: 64 - Model: ResNet-50: b 47.01 (min 42.75 / max 47.57), c 46.10 (min 35.44 / max 46.83), a 45.45 (min 34.94 / max 45.89), d 45.43 (min 43.43 / max 46.04)
Batch Size: 16 - Model: ResNet-152: a 18.13 (min 17.59 / max 18.32), c 18.03 (min 17.54 / max 18.23), d 18.02 (min 17.66 / max 18.18), b 18.02 (min 17.53 / max 18.23)
Batch Size: 256 - Model: ResNet-50: d 47.06 (min 42.87 / max 47.87), b 46.29 (min 42.39 / max 46.81), c 46.16 (min 42.11 / max 46.89), a 45.09 (min 43.25 / max 45.61)
Batch Size: 32 - Model: ResNet-152: d 18.14 (min 17.74 / max 18.22), b 18.06 (min 17.57 / max 18.26), c 18.05 (min 17.55 / max 18.12), a 18.04 (min 17.52 / max 18.19)
Batch Size: 512 - Model: ResNet-50: d 46.93 (min 44.02 / max 47.5), c 46.41 (min 43.7 / max 46.95), b 46.36 (min 42.96 / max 47.26), a 46.31 (min 43.53 / max 47.23)
Batch Size: 64 - Model: ResNet-152: d 18.19 (min 17.82 / max 18.26), b 18.07 (min 16.99 / max 18.18), a 17.80 (min 17.33 / max 18.03), c 17.75 (min 17.45 / max 17.89)
Batch Size: 256 - Model: ResNet-152: d 18.19 (min 14.43 / max 18.68), c 18.11 (min 17.68 / max 18.22), b 18.11 (min 17.67 / max 18.23), a 17.96 (min 17.36 / max 18.16)
Batch Size: 512 - Model: ResNet-152: a 18.30 (min 17.67 / max 18.39), c 18.16 (min 17.66 / max 18.27), d 17.95 (min 17.53 / max 18.17), b 17.81 (min 17.54 / max 18.01)
Batch Size: 1 - Model: Efficientnet_v2_l: d 14.25 (min 14.08 / max 14.36), a 14.20 (min 14.07 / max 14.34), b 14.12 (min 13.18 / max 14.23), c 14.01 (min 13.86 / max 14.16)
Batch Size: 16 - Model: Efficientnet_v2_l: d 10.97 (min 9.39 / max 11.28), b 10.86 (min 9.5 / max 11.07), c 10.72 (min 9.34 / max 10.88), a 10.62 (min 9.39 / max 10.82)
Batch Size: 32 - Model: Efficientnet_v2_l: d 10.92 (min 9.26 / max 11.09), b 10.89 (min 9.54 / max 11.07), c 10.87 (min 9.55 / max 11.02), a 10.81 (min 9.24 / max 10.96)
Batch Size: 64 - Model: Efficientnet_v2_l: b 10.93 (min 8.84 / max 11.11), d 10.90 (min 9.57 / max 11.05), a 10.86 (min 9.49 / max 11), c 10.82 (min 9.26 / max 10.96)
Batch Size: 256 - Model: Efficientnet_v2_l: b 10.93 (min 9.59 / max 11.11), c 10.88 (min 9.61 / max 11.08), a 10.84 (min 9.5 / max 10.98), d 10.79 (min 9.49 / max 10.96)
Batch Size: 512 - Model: Efficientnet_v2_l: b 11.04 (min 9.87 / max 11.21), d 10.94 (min 9.62 / max 11.08), a 10.93 (min 9.18 / max 11.07), c 10.85 (min 9.35 / max 11.06)
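The PyTorch figures above are throughput numbers: how many forward passes over a batch complete per second at a given batch size. A framework-free sketch of how such a figure is derived (the workload callable here is a stand-in for model inference, not the actual Phoronix harness):

```python
import time

def measure_batches_per_sec(run_batch, iterations: int = 100) -> float:
    """Time `iterations` calls of run_batch() and return batches/sec.

    run_batch stands in for one forward pass over a batch; the real
    benchmark runs a torchvision model (ResNet, EfficientNet) instead.
    """
    start = time.perf_counter()
    for _ in range(iterations):
        run_batch()
    elapsed = time.perf_counter() - start
    return iterations / elapsed

# Dummy workload: summing a list stands in for model inference.
data = list(range(10_000))
rate = measure_batches_per_sec(lambda: sum(data))
print(f"{rate:.2f} batches/sec")
```

Note this measures batches, not images, per second: at batch size 64, one "batch/sec" corresponds to 64 images/sec, which is why larger batch sizes report smaller batches/sec values for the same model.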
OpenVINO
This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network inference, using its built-in benchmarking support to measure the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.
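OpenVINO's built-in benchmark reports each model twice below: once as throughput (FPS, more is better) and once as latency (ms, fewer is better). The two are related but not reciprocal, because the benchmark issues several inference requests in parallel. A small sketch of the serial (single-stream) relationship, with made-up latency samples:

```python
def summarize(latencies_ms):
    """Return (average latency in ms, throughput in FPS) for a
    strictly serial run.

    latencies_ms: per-inference wall-clock times in milliseconds.
    In a single-stream run, FPS = 1000 / average latency; with
    several parallel infer requests (the usual CPU default for
    OpenVINO's benchmark), measured throughput exceeds this
    serial estimate -- which is why the FPS and ms rows below
    don't simply invert each other.
    """
    avg = sum(latencies_ms) / len(latencies_ms)
    fps = 1000.0 / avg
    return avg, fps

avg, fps = summarize([8.0, 10.0, 12.0])
print(avg, fps)  # avg 10.0 ms -> 100.0 FPS (serial estimate)
```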
OpenBenchmarking.org results, OpenVINO 2023.2.dev, Device: CPU. FPS: More Is Better; latency in ms: Fewer Is Better (per-run minimum and maximum in parentheses). All runs: 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

Model: Face Detection FP16 - FPS: a 13.41, d 13.39, b 13.38, c 13.33
Model: Face Detection FP16 - ms: d 594.98 (min 577.06 / max 624.47), b 595.36 (min 575.04 / max 623.57), a 595.55 (min 576.09 / max 622.5), c 597.20 (min 574.49 / max 623.82)
Model: Person Detection FP16 - FPS: d 94.99, c 94.29, a 94.20, b 93.95
Model: Person Detection FP16 - ms: d 84.14 (min 49.25 / max 110.52), c 84.74 (min 44.18 / max 109.72), a 84.83 (min 51.55 / max 110.45), b 85.07 (min 55.85 / max 113.02)
Model: Person Detection FP32 - FPS: b 94.66, d 94.28, a 94.05, c 93.76
Model: Person Detection FP32 - ms: b 84.43 (min 38.96 / max 118.46), d 84.74 (min 43.16 / max 116.66), a 84.99 (min 54.5 / max 109.88), c 85.27 (min 56.98 / max 111.16)
Model: Vehicle Detection FP16 - FPS: a 1034.25, d 1033.33, b 1032.48, c 1032.00
Model: Vehicle Detection FP16 - ms: a 7.71 (min 4.99 / max 14.68), d 7.72 (min 4.51 / max 13.75), b 7.73 (min 4.78 / max 16.94), c 7.73 (min 4.84 / max 13.08)
Model: Face Detection FP16-INT8 - FPS: c 25.56, b 25.56, a 25.52, d 25.50
Model: Face Detection FP16-INT8 - ms: c 312.48 (min 296.91 / max 323.74), b 312.53 (min 300.51 / max 321.6), a 313.05 (min 299.21 / max 324.17), d 313.25 (min 299.83 / max 323.76)
Model: Face Detection Retail FP16 - FPS: a 3070.77, d 3069.35, c 3067.52, b 3063.55
Model: Face Detection Retail FP16 - ms: a 2.49 (min 1.35 / max 6.3), d 2.49 (min 1.35 / max 9.24), b 2.50 (min 1.34 / max 9.48), c 2.50 (min 1.34 / max 9.59)
Model: Road Segmentation ADAS FP16 - FPS: a 439.75, c 437.76, b 435.77, d 434.98
Model: Road Segmentation ADAS FP16 - ms: a 18.16 (min 9.71 / max 27.64), c 18.24 (min 12.24 / max 26.29), b 18.32 (min 12.74 / max 30.01), d 18.36 (min 9.65 / max 27.17)
Model: Vehicle Detection FP16-INT8 - FPS: d 1619.16, c 1617.82, b 1617.79, a 1613.22
Model: Vehicle Detection FP16-INT8 - ms: c 4.90 (min 2.76 / max 13.8), d 4.90 (min 2.75 / max 9.12), b 4.91 (min 2.77 / max 14.1), a 4.92 (min 2.75 / max 10.58)
Model: Weld Porosity Detection FP16 - FPS: a 1353.91, c 1353.62, b 1352.89, d 1351.17
Model: Weld Porosity Detection FP16 - ms: a 11.79 (min 6.37 / max 23.3), c 11.79 (min 6.2 / max 21.55), b 11.80 (min 7.57 / max 15.99), d 11.81 (min 6.78 / max 18.07)
Model: Face Detection Retail FP16-INT8 - FPS: d 4544.57, b 4537.36, a 4527.76, c 4512.34
Model: Face Detection Retail FP16-INT8 - ms: d 3.42 (min 1.94 / max 6.89), b 3.43 (min 1.96 / max 8.28), a 3.44 (min 1.95 / max 10.99), c 3.44 (min 1.94 / max 11.06)
Model: Road Segmentation ADAS FP16-INT8 - FPS: c 532.94, a 524.58, b 522.64, d 521.46
Model: Road Segmentation ADAS FP16-INT8 - ms: c 14.99 (min 11.64 / max 20.21), a 15.23 (min 11.89 / max 21.11), b 15.28 (min 9.17 / max 19.71), d 15.32 (min 12.62 / max 21)
Model: Machine Translation EN To DE FP16 - FPS: d 131.09, b 130.95, c 130.06, a 130.05
Model: Machine Translation EN To DE FP16 - ms: d 60.94 (min 46.99 / max 72.58), b 60.98 (min 27.92 / max 70.86), a 61.38 (min 46.13 / max 71.27), c 61.38 (min 44.4 / max 70.65)
Model: Weld Porosity Detection FP16-INT8 - FPS: c 2609.57, a 2608.39, b 2604.12, d 2603.35
Model: Weld Porosity Detection FP16-INT8 - ms: a 6.10 (min 3.19 / max 11.01), c 6.10 (min 3.18 / max 11.91), b 6.11 (min 3.19 / max 13.98), d 6.11 (min 3.18 / max 11.79)
Model: Person Vehicle Bike Detection FP16 - FPS: c 1591.84, b 1587.74, d 1572.56, a 1572.38
Model: Person Vehicle Bike Detection FP16 - ms: c 5.00 (min 3.63 / max 11.88), b 5.02 (min 3.6 / max 10.68), a 5.06 (min 3.62 / max 13.34), d 5.06 (min 3.24 / max 9.55)
Model: Handwritten English Recognition FP16 - FPS: a 739.56, c 733.81, d 733.36, b 731.42
Model: Handwritten English Recognition FP16 - ms: a 21.61 (min 15.02 / max 30.51), c 21.77 (min 17.91 / max 28.95), d 21.79 (min 14.67 / max 31.51), b 21.85 (min 14.62 / max 38.4)
Model: Age Gender Recognition Retail 0013 FP16 - FPS: a 33511.79, d 33491.83, b 33483.53, c 33482.54
Model: Age Gender Recognition Retail 0013 FP16 - ms: a 0.43 (min 0.22 / max 4.2), b 0.43 (min 0.22 / max 4.19), c 0.43 (min 0.22 / max 7.73), d 0.43 (min 0.22 / max 5.02)
Model: Handwritten English Recognition FP16-INT8 - FPS: c 587.47, b 585.55, a 584.10, d 579.56
Model: Handwritten English Recognition FP16-INT8 - ms: c 27.21 (min 22.22 / max 34.96), b 27.29 (min 21.85 / max 35.83), a 27.36 (min 19.64 / max 35.39), d 27.57 (min 20.3 / max 33.35)
Model: Age Gender Recognition Retail 0013 FP16-INT8 - FPS: d 47453.43, a 47376.59, b 47316.17, c 47255.36
Model: Age Gender Recognition Retail 0013 FP16-INT8 - ms: a 0.29 (min 0.17 / max 7.87), b 0.29 (min 0.17 / max 7.59), d 0.29 (min 0.17 / max 7.66), c 0.30 (min 0.17 / max 7.08)
a
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: amd-pstate-epp performance (EPP: performance) - CPU Microcode: 0xa601203
Java Notes: OpenJDK Runtime Environment (build 11.0.20+8-post-Ubuntu-1ubuntu122.04)
Python Notes: Python 3.10.12
Security Notes: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Mitigation of safe RET no microcode + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 21 November 2023 16:01 by user root.
b
Kernel, Compiler, Processor, Java, Python, and Security Notes: identical to configuration a above.
Testing initiated at 21 November 2023 20:09 by user root.
c
Kernel, Compiler, Processor, Java, Python, and Security Notes: identical to configuration a above.
Testing initiated at 22 November 2023 05:51 by user root.
d Processor: AMD Ryzen 9 7950X3D 16-Core @ 5.76GHz (16 Cores / 32 Threads), Motherboard: ASRockRack B650D4U-2L2T/BCM (2.09 BIOS), Chipset: AMD Device 14d8, Memory: 2 x 32 GB DDR5-4800MT/s MTC20C2085S1EC48BA1, Disk: 3201GB Micron_7450_MTFDKCC3T2TFS + 0GB Virtual HDisk0 + 0GB Virtual HDisk1 + 0GB Virtual HDisk2 + 0GB Virtual HDisk3, Graphics: ASPEED 512MB, Audio: AMD Device 1640, Monitor: VA2431, Network: 2 x Intel I210 + 2 x Broadcom BCM57416 NetXtreme-E Dual-Media 10G RDMA
OS: Ubuntu 22.04, Kernel: 6.6.0-rc4-phx-amd-pref-core (x86_64), Desktop: GNOME Shell 42.9, Display Server: X Server, Vulkan: 1.3.238, Compiler: GCC 11.4.0, File-System: ext4, Screen Resolution: 1920x1200
Kernel, Compiler, Processor, Java, Python, and Security Notes: identical to configuration a above.
Testing initiated at 22 November 2023 09:53 by user root.