AMD Ryzen 9 7950X3D 16-Core testing with an ASRockRack B650D4U-2L2T/BCM (2.09 BIOS) and ASPEED 512MB on Ubuntu 22.04 via the Phoronix Test Suite.
a Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: amd-pstate-epp performance (EPP: performance) - CPU Microcode: 0xa601203
Java Notes: OpenJDK Runtime Environment (build 11.0.20+8-post-Ubuntu-1ubuntu122.04)
Python Notes: Python 3.10.12
Security Notes: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Mitigation of safe RET no microcode + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
b c d Processor: AMD Ryzen 9 7950X3D 16-Core @ 5.76GHz (16 Cores / 32 Threads), Motherboard: ASRockRack B650D4U-2L2T/BCM (2.09 BIOS), Chipset: AMD Device 14d8, Memory: 2 x 32 GB DDR5-4800MT/s MTC20C2085S1EC48BA1, Disk: 3201GB Micron_7450_MTFDKCC3T2TFS + 0GB Virtual HDisk0 + 0GB Virtual HDisk1 + 0GB Virtual HDisk2 + 0GB Virtual HDisk3, Graphics: ASPEED 512MB, Audio: AMD Device 1640, Monitor: VA2431, Network: 2 x Intel I210 + 2 x Broadcom BCM57416 NetXtreme-E Dual-Media 10G RDMA
OS: Ubuntu 22.04, Kernel: 6.6.0-rc4-phx-amd-pref-core (x86_64), Desktop: GNOME Shell 42.9, Display Server: X Server, Vulkan: 1.3.238, Compiler: GCC 11.4.0, File-System: ext4, Screen Resolution: 1920x1200
QuantLib
QuantLib is an open-source library/framework for quantitative finance, covering modeling, trading, and risk management scenarios. QuantLib is written in C++ with Boost, and its built-in benchmark reports the QuantLib Benchmark Index score. Learn more via the OpenBenchmarking.org test page.
QuantLib 1.32 (MFLOPS, more is better):
  Configuration: Multi-Threaded - SE +/- 83.63, N = 3: c: 81958.0 | b: 82020.0 | a: 82133.5 | d: 82159.4
  Configuration: Single-Threaded - SE +/- 9.84, N = 3: a: 3980.3 | c: 4025.4 | d: 4033.0 | b: 4042.6
  1. (CXX) g++ options: -O3 -march=native -fPIE -pie
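Each result in this article reports the mean of N benchmark runs together with the standard error (SE) of that mean, i.e. the sample standard deviation divided by sqrt(N). As a minimal sketch of that arithmetic (the three per-run scores below are hypothetical, not taken from this article):

```python
import statistics
from math import sqrt

# Hypothetical per-run QuantLib scores for one system (N = 3 runs)
runs = [81900.0, 81950.0, 82024.0]

mean = statistics.fmean(runs)
# Standard error of the mean: sample stdev / sqrt(N)
se = statistics.stdev(runs) / sqrt(len(runs))

print(f"{mean:.1f} SE +/- {se:.2f}, N = {len(runs)}")  # prints 81958.0 SE +/- 36.02, N = 3
```

A small SE relative to the gap between systems is what makes a ranking meaningful; here the SE values are typically far smaller than the reported means.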
CloverLeaf
CloverLeaf 1.3 (Seconds, fewer is better):
  Input: clover_bm16 - SE +/- 0.59, N = 3: a: 1496.78 | d: 1496.30 | b: 1495.65 | c: 1494.53
  Input: clover_bm64_short - SE +/- 0.02, N = 3: c: 175.92 | b: 175.92 | a: 175.91 | d: 175.86
  1. (F9X) gfortran options: -O3 -march=native -funroll-loops -fopenmp
QMCPACK
QMCPACK is a modern, high-performance, open-source production-level many-body ab initio Quantum Monte Carlo (QMC) code for computing the electronic structure of atoms, molecules, and solids. This benchmark makes use of MPI and exercises example inputs such as the H2O case. QMCPACK is supported by the U.S. Department of Energy. Learn more via the OpenBenchmarking.org test page.
QMCPACK 3.17.1 (Total Execution Time - Seconds, fewer is better):
  Input: H4_ae - SE +/- 0.09, N = 15: c: 12.63 | d: 12.62 | b: 12.51 | a: 12.16
  Input: Li2_STO_ae - SE +/- 1.12, N = 3: b: 135.95 | c: 135.60 | a: 135.60 | d: 135.02
  Input: LiH_ae_MSD - SE +/- 0.40, N = 3: b: 74.10 | c: 73.60 | d: 73.49 | a: 73.47
  Input: simple-H2O - SE +/- 0.03, N = 3: b: 18.43 | c: 18.39 | d: 18.30 | a: 18.24
  Input: O_ae_pyscf_UHF - SE +/- 1.01, N = 3: a: 132.77 | b: 131.36 | c: 129.98 | d: 129.65
  Input: FeCO6_b3lyp_gms - SE +/- 0.20, N = 3: b: 125.49 | a: 125.46 | d: 125.13 | c: 124.98
  1. (CXX) g++ options: -fopenmp -foffload=disable -finline-limit=1000 -fstrict-aliasing -funroll-all-loops -ffast-math -march=native -O3 -lm -ldl
FFmpeg
This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile makes use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/], a benchmark for video-as-a-service workloads. The test profile offers a range of vbench scenarios based on freely distributable video content and can use either the x264 or x265 video encoder for transcoding. Learn more via the OpenBenchmarking.org test page.
FFmpeg 6.1 (FPS, more is better):
  Encoder: libx264 - Scenario: Live - SE +/- 1.97, N = 3: d: 276.55 | b: 282.92 | a: 282.96 | c: 284.24
  Encoder: libx265 - Scenario: Live - SE +/- 0.76, N = 3: d: 179.29 | a: 179.55 | c: 180.02 | b: 180.06
  Encoder: libx264 - Scenario: Upload - SE +/- 0.14, N = 3: d: 16.64 | b: 16.67 | c: 16.81 | a: 16.90
  Encoder: libx265 - Scenario: Upload - SE +/- 0.08, N = 3: c: 33.46 | b: 33.53 | a: 33.54 | d: 33.57
  Encoder: libx264 - Scenario: Platform - SE +/- 0.08, N = 3: c: 63.69 | d: 63.75 | b: 63.85 | a: 63.98
  Encoder: libx265 - Scenario: Platform - SE +/- 0.09, N = 3: c: 68.29 | b: 68.36 | a: 68.39 | d: 68.63
  Encoder: libx264 - Scenario: Video On Demand - SE +/- 0.11, N = 3: b: 63.78 | c: 64.14 | a: 64.16 | d: 64.26
  Encoder: libx265 - Scenario: Video On Demand - SE +/- 0.07, N = 3: c: 68.14 | a: 68.48 | b: 68.57 | d: 68.71
  1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
WebP2 Image Encode
This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as, ultimately, the successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading. Learn more via the OpenBenchmarking.org test page.
WebP2 Image Encode 20220823 (MP/s, more is better):
  Encode Settings: Default - SE +/- 0.18, N = 3: b: 13.05 | c: 13.38 | a: 13.51 | d: 13.71
  1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl
easyWave
The easyWave software allows simulating tsunami generation and propagation in the context of early warning systems. easyWave supports OpenMP for CPU multi-threading; GPU ports also exist but are not currently incorporated into this test profile. The easyWave tsunami generation software is run with one of the example/reference input files while measuring the CPU execution time. Learn more via the OpenBenchmarking.org test page.
easyWave r34 (Seconds, fewer is better):
  Input: e2Asean Grid + BengkuluSept2007 Source - Time: 240 - SE +/- 0.004, N = 3: a: 2.148 | c: 2.101 | d: 2.090 | b: 2.089
  Input: e2Asean Grid + BengkuluSept2007 Source - Time: 1200 - SE +/- 0.33, N = 3: a: 81.63 | b: 81.03 | c: 80.87 | d: 80.10
  1. (CXX) g++ options: -O3 -fopenmp
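The spread between the four systems here is small; for the Time: 240 result above, the gap between the slowest (a, 2.148 s) and fastest (b, 2.089 s) works out to under 3%. A minimal sketch of that arithmetic:

```python
# easyWave "Time: 240" results (seconds, fewer is better)
results = {"a": 2.148, "c": 2.101, "d": 2.090, "b": 2.089}

slowest = max(results.values())
fastest = min(results.values())
spread_pct = (slowest - fastest) / fastest * 100

print(f"spread: {spread_pct:.2f}%")  # prints spread: 2.82%
```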
Embree
Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL), supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.
Embree 4.3 (Frames Per Second, more is better):
  Binary: Pathtracer - Model: Crown - SE +/- 0.06, N = 3: c: 30.39 (min 30.17 / max 30.88) | b: 30.45 (min 30.1 / max 31.05) | a: 30.50 (min 30.29 / max 31) | d: 30.51 (min 30.28 / max 31.02)
  Binary: Pathtracer ISPC - Model: Crown - SE +/- 0.04, N = 3: c: 31.70 (min 31.4 / max 32.36) | d: 31.85 (min 31.57 / max 32.5) | a: 31.92 (min 31.66 / max 32.67) | b: 31.97 (min 31.62 / max 32.67)
  Binary: Pathtracer - Model: Asian Dragon - SE +/- 0.06, N = 3: d: 30.83 (min 30.69 / max 31.27) | c: 30.83 (min 30.69 / max 31.19) | b: 30.89 (min 30.64 / max 31.42) | a: 30.93 (min 30.8 / max 31.38)
  Binary: Pathtracer - Model: Asian Dragon Obj - SE +/- 0.04, N = 3: b: 27.52 (min 27.3 / max 28.13) | d: 27.53 (min 27.38 / max 27.96) | c: 27.54 (min 27.35 / max 27.99) | a: 27.58 (min 27.4 / max 28.05)
  Binary: Pathtracer ISPC - Model: Asian Dragon - SE +/- 0.07, N = 3: c: 33.65 (min 33.44 / max 34.17) | d: 33.72 (min 33.46 / max 34.55) | b: 33.73 (min 33.38 / max 34.7) | a: 33.78 (min 33.54 / max 34.43)
  Binary: Pathtracer ISPC - Model: Asian Dragon Obj - SE +/- 0.05, N = 3: a: 28.61 (min 28.41 / max 29.24) | c: 28.64 (min 28.43 / max 29.3) | d: 28.70 (min 28.49 / max 29.4) | b: 28.71 (min 28.44 / max 29.67)
OpenVKL
OpenVKL is the Intel Open Volume Kernel Library, which offers high-performance volume computation kernels and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.
OpenVKL 2.0.0 (Items / Sec, more is better):
  Benchmark: vklBenchmarkCPU ISPC - SE +/- 0.00, N = 3: a: 603 (min 46 / max 8290) | b: 603 (min 46 / max 8279) | d: 603 (min 46 / max 8277) | c: 604 (min 46 / max 8291)
  Benchmark: vklBenchmarkCPU Scalar - SE +/- 0.67, N = 3: d: 241 (min 16 / max 4419) | a: 242 (min 17 / max 4418) | b: 242 (min 16 / max 4422) | c: 242 (min 17 / max 4423)
oneDNN
This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.
oneDNN 3.3 (ms, fewer is better; Engine: CPU for all harnesses):
  Harness: IP Shapes 1D - Data Type: f32 - SE +/- 0.02190, N = 3: b: 1.94087 (min 1.73) | d: 1.93518 (min 1.72) | c: 1.92566 (min 1.71) | a: 1.89379 (min 1.7)
  Harness: IP Shapes 3D - Data Type: f32 - SE +/- 0.01747, N = 3: d: 2.81873 (min 2.78) | b: 2.81737 (min 2.75) | c: 2.80377 (min 2.76) | a: 2.80183 (min 2.76)
  Harness: IP Shapes 1D - Data Type: u8s8f32 - SE +/- 0.010372, N = 15: c: 0.534576 (min 0.43) | a: 0.523659 (min 0.43) | d: 0.520575 (min 0.43) | b: 0.498741 (min 0.39)
  Harness: IP Shapes 3D - Data Type: u8s8f32 - SE +/- 0.003205, N = 13: b: 0.279128 (min 0.24) | a: 0.262956 (min 0.25) | d: 0.256962 (min 0.24) | c: 0.248788 (min 0.24)
  Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - SE +/- 0.002235, N = 3: c: 0.695339 (min 0.65) | d: 0.695171 (min 0.65) | a: 0.694296 (min 0.65) | b: 0.693123 (min 0.64)
  Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - SE +/- 0.01251, N = 5: a: 1.23712 (min 1.17) | b: 1.18558 (min 1.08) | c: 1.18372 (min 1.11) | d: 1.17668 (min 1.1)
  Harness: Convolution Batch Shapes Auto - Data Type: f32 - SE +/- 0.00942, N = 3: d: 4.04319 (min 3.98) | b: 4.03418 (min 3.94) | a: 4.01781 (min 3.96) | c: 4.01114 (min 3.96)
  Harness: Deconvolution Batch shapes_1d - Data Type: f32 - SE +/- 0.00759, N = 3: b: 3.01099 (min 2.51) | c: 3.00906 (min 2.51) | d: 2.98785 (min 2.51) | a: 2.98536 (min 2.51)
  Harness: Deconvolution Batch shapes_3d - Data Type: f32 - SE +/- 0.00014, N = 3: a: 2.59896 (min 2.59) | d: 2.59886 (min 2.59) | b: 2.59838 (min 2.59) | c: 2.59835 (min 2.59)
  Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - SE +/- 0.01733, N = 3: a: 3.76779 (min 3.68) | d: 3.76192 (min 3.67) | c: 3.72781 (min 3.66) | b: 3.71650 (min 3.63)
  Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - SE +/- 0.000061, N = 3: b: 0.456347 (min 0.44) | c: 0.456250 (min 0.44) | d: 0.455553 (min 0.44) | a: 0.455531 (min 0.44)
  Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - SE +/- 0.000222, N = 3: a: 0.647550 (min 0.64) | b: 0.646866 (min 0.64) | c: 0.646223 (min 0.64) | d: 0.645231 (min 0.64)
  Harness: Recurrent Neural Network Training - Data Type: f32 - SE +/- 0.51, N = 3: c: 1234.14 (min 1229.87) | b: 1229.80 (min 1224.55) | a: 1227.58 (min 1224.62) | d: 1224.00 (min 1220.91)
  Harness: Recurrent Neural Network Inference - Data Type: f32 - SE +/- 0.32, N = 3: b: 636.50 (min 632.82) | c: 632.39 (min 629.36) | d: 625.50 (min 622.67) | a: 620.71 (min 617.84)
  Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - SE +/- 0.95, N = 3: b: 1236.68 (min 1230.73) | c: 1236.28 (min 1233.21) | a: 1231.93 (min 1227.81) | d: 1224.11 (min 1220.23)
  Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - SE +/- 0.00130, N = 3: c: 1.09942 (min 1.07) | d: 1.09782 (min 1.07) | b: 1.09760 (min 1.07) | a: 1.09425 (min 1.07)
  Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - SE +/- 0.00064, N = 3: a: 2.34329 (min 2.31) | b: 2.34125 (min 2.31) | d: 2.34110 (min 2.3) | c: 2.33922 (min 2.31)
  Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - SE +/- 0.00176, N = 3: c: 1.48752 (min 1.48) | b: 1.48496 (min 1.47) | a: 1.48385 (min 1.47) | d: 1.48153 (min 1.47)
  Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - SE +/- 1.22, N = 3: b: 635.30 (min 629.42) | d: 634.66 (min 631.75) | a: 633.91 (min 629.4) | c: 624.14 (min 621.15)
  Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - SE +/- 2.80, N = 3: a: 1236.29 (min 1232.2) | d: 1234.57 (min 1231) | b: 1231.68 (min 1222.84) | c: 1228.25 (min 1224.56)
  Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - SE +/- 2.14, N = 3: d: 635.31 (min 632.15) | b: 633.61 (min 626.34) | a: 632.07 (min 628.15) | c: 628.31 (min 625.84)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OSPRay Studio
Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.
OSPRay Studio 0.13 (ms, fewer is better; Renderer: Path Tracer - Acceleration: CPU for all runs):
  Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - SE +/- 3.51, N = 3: b: 4267 | d: 4251 | a: 4251 | c: 4239
  Camera: 2 - Resolution: 4K - Samples Per Pixel: 1 - SE +/- 4.41, N = 3: b: 4323 | c: 4316 | a: 4303 | d: 4297
  Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - SE +/- 4.26, N = 3: d: 5042 | c: 5035 | b: 5027 | a: 5025
  Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - SE +/- 88.19, N = 3: c: 72590 | b: 72346 | d: 71802 | a: 71605
  Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - SE +/- 200.00, N = 3: b: 140342 | c: 140119 | a: 139956 | d: 139631
  Camera: 2 - Resolution: 4K - Samples Per Pixel: 16 - SE +/- 124.54, N = 3: a: 73112 | c: 72888 | b: 72815 | d: 72762
  Camera: 2 - Resolution: 4K - Samples Per Pixel: 32 - SE +/- 201.33, N = 3: b: 141773 | c: 141698 | a: 141550 | d: 141360
  Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - SE +/- 192.43, N = 3: c: 84786 | b: 84660 | d: 84573 | a: 84397
  Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - SE +/- 70.29, N = 3: a: 165253 | b: 165034 | d: 164645 | c: 164625
  Camera: 1 - Resolution: 1080p - Samples Per Pixel: 1 - SE +/- 0.58, N = 3: c: 1069 | b: 1069 | a: 1069 | d: 1068
  Camera: 2 - Resolution: 1080p - Samples Per Pixel: 1 - SE +/- 0.58, N = 3: d: 1085 | c: 1083 | b: 1083 | a: 1083
  Camera: 3 - Resolution: 1080p - Samples Per Pixel: 1 - SE +/- 2.65, N = 3: a: 1267 | b: 1266 | c: 1265 | d: 1264
  Camera: 1 - Resolution: 1080p - Samples Per Pixel: 16 - SE +/- 12.41, N = 3: b: 17116 | a: 17116 | c: 17097 | d: 17075
  Camera: 1 - Resolution: 1080p - Samples Per Pixel: 32 - SE +/- 262.57, N = 3: d: 38512 | a: 38499 | c: 38448 | b: 38279
  Camera: 2 - Resolution: 1080p - Samples Per Pixel: 16 - SE +/- 12.45, N = 3: b: 17316 | d: 17289 | a: 17266 | c: 17235
  Camera: 2 - Resolution: 1080p - Samples Per Pixel: 32 - SE +/- 34.12, N = 3: d: 39062 | a: 38852 | c: 38798 | b: 38740
  Camera: 3 - Resolution: 1080p - Samples Per Pixel: 16 - SE +/- 20.65, N = 3: b: 20226 | c: 20186 | a: 20148 | d: 20139
  Camera: 3 - Resolution: 1080p - Samples Per Pixel: 32 - SE +/- 116.17, N = 3: c: 44682 | a: 44670 | b: 44437 | d: 44177
Cpuminer-Opt
Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor across a variety of cryptocurrencies. The benchmark reports the hash speed of the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.
Cpuminer-Opt 23.5 (kH/s, more is better):
  Algorithm: Magi - SE +/- 0.61, N = 3: b: 635.66 | d: 635.79 | c: 636.52 | a: 640.30
  Algorithm: scrypt - SE +/- 0.25, N = 3: a: 304.46 | b: 304.62 | c: 305.22 | d: 305.36
  Algorithm: Deepcoin - SE +/- 21.85, N = 3: b: 7965.21 | d: 7973.87 | a: 7978.24 | c: 7982.94
  Algorithm: Ringcoin - SE +/- 2.45, N = 3: d: 3350.86 | b: 3355.73 | c: 3367.45 | a: 3455.06
  Algorithm: Blake-2 S - SE +/- 3.33, N = 3: b: 134177 | a: 134660 | c: 134800 | d: 135200
  Algorithm: Garlicoin - SE +/- 2.78, N = 3: b: 1783.76 | c: 1783.94 | a: 1796.71 | d: 1844.59
  Algorithm: Skeincoin - SE +/- 5.77, N = 3: c: 34430 | a: 34440 | b: 34440 | d: 34470
  Algorithm: Myriad-Groestl - SE +/- 50.00, N = 3: d: 11390 | c: 11400 | a: 11440 | b: 11490
  Algorithm: LBC, LBRY Credits - SE +/- 3.33, N = 3: d: 15780 | b: 15783 | c: 15790 | a: 15850
  Algorithm: Quad SHA-256, Pyrite - SE +/- 16.67, N = 3: d: 62050 | a: 62080 | c: 62080 | b: 62127
  Algorithm: Triple SHA-256, Onecoin - SE +/- 3.33, N = 3: d: 105860 | a: 105880 | c: 105890 | b: 105897
  1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenSSL
OpenSSL is an open-source toolkit that implements the SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. The system/openssl test profile relies on benchmarking the system/OS-supplied openssl binary rather than the pts/openssl test profile, which uses a locally-built OpenSSL. Learn more via the OpenBenchmarking.org test page.
OpenSSL (more is better):
  Algorithm: SHA256 (byte/s): a: 32631708710 | c: 32833153190 | b: 32907394910 | d: 32930127410
  Algorithm: SHA512 (byte/s): b: 10635835530 | a: 10638469960 | d: 10639231370 | c: 10649075500
  Algorithm: RSA4096 (sign/s): c: 5468.4 | d: 5500.8 | a: 5501.2 | b: 5508.4
  Algorithm: RSA4096 (verify/s): d: 358447.1 | a: 358677.5 | c: 358701.3 | b: 359044.2
  Algorithm: ChaCha20 (byte/s): b: 125127430960 | d: 125277063850 | c: 125338580040 | a: 125488652010
  Algorithm: AES-128-GCM (byte/s): b: 98958335180 | a: 98960081440 | c: 98987719340 | d: 99011030490
  Algorithm: AES-256-GCM (byte/s): d: 92479134650 | c: 92479315690 | a: 92480786980 | b: 92512692630
  Algorithm: ChaCha20-Poly1305 (byte/s): a: 89461502230 | c: 89478928250 | d: 89488403930 | b: 89582688390
  1. OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
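The raw byte/s figures are easier to read when converted to GB/s (dividing by 1e9, i.e. decimal gigabytes). For example, system a's SHA256 result above converts as follows:

```python
# System "a" SHA256 throughput from the OpenSSL results above
sha256_bytes_per_sec = 32631708710

gb_per_sec = sha256_bytes_per_sec / 1e9  # decimal GB per second
print(f"{gb_per_sec:.2f} GB/s")  # prints 32.63 GB/s
```

By the same conversion, the ChaCha20 results sit around 125 GB/s and AES-128-GCM around 99 GB/s.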
RabbitMQ
RabbitMQ is an open-source message broker. This test profile makes use of RabbitMQ PerfTest, with the RabbitMQ server and the PerfTest client running on the same host, effectively serving as a system/CPU performance benchmark. Learn more via the OpenBenchmarking.org test page.
Scenario: Simple 2 Publishers + 4 Consumers
a, b, c, d: The test quit with a non-zero exit status. E: java.net.ConnectException: Connection refused (Connection refused)
Scenario: 10 Queues, 100 Producers, 100 Consumers
a, b, c, d: The test quit with a non-zero exit status. E: java.net.ConnectException: Connection refused (Connection refused)
Scenario: 60 Queues, 100 Producers, 100 Consumers
a, b, c, d: The test quit with a non-zero exit status. E: java.net.ConnectException: Connection refused (Connection refused)
Scenario: 120 Queues, 400 Producers, 400 Consumers
a, b, c, d: The test quit with a non-zero exit status. E: java.net.ConnectException: Connection refused (Connection refused)
Scenario: 200 Queues, 400 Producers, 400 Consumers
a, b, c, d: The test quit with a non-zero exit status. E: java.net.ConnectException: Connection refused (Connection refused)
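Every run above failed before any measurement was taken because the benchmark's message broker was not accepting connections. A quick way to confirm broker reachability before re-running is a plain TCP connect check. This is a minimal sketch; the host and port are assumptions (61616 is ActiveMQ's default OpenWire port and is not stated in the results above):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers ConnectionRefusedError, timeouts, DNS failures
        return False

if __name__ == "__main__":
    # Hypothetical broker location; adjust for the system under test.
    if not port_open("localhost", 61616):
        print("broker not reachable: expect java.net.ConnectException in the benchmark")
```

Running such a check before the producer/consumer scenarios start would distinguish a missing broker (as seen here) from a failure inside the benchmark itself.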
PyTorch
OpenBenchmarking.org, PyTorch 2.1, Device: CPU, batches/sec (more is better). Runs are listed in ascending order; parenthesized figures are the reported MIN–MAX range.
Batch Size: 1 - Model: ResNet-50 | d: 68.99 (65.18–70.36), c: 69.05 (64.46–70.37), a: 69.41 (64.47–71.04), b: 70.39 (65.98–71.59)
Batch Size: 1 - Model: ResNet-152 | b: 26.73 (25.73–27.4), a: 26.99 (25.92–27.21), c: 27.02 (26.41–27.29), d: 27.63 (26.37–27.91)
Batch Size: 16 - Model: ResNet-50 | d: 45.16 (42.72–46.38), a: 45.70 (43.16–46.31), c: 45.94 (36.22–46.45), b: 46.73 (43.51–47.16)
Batch Size: 32 - Model: ResNet-50 | a: 44.97 (42.18–46.53), c: 45.91 (41.91–46.67), b: 45.98 (43.4–46.68), d: 47.70 (44.77–48.1)
Batch Size: 64 - Model: ResNet-50 | d: 45.43 (43.43–46.04), a: 45.45 (34.94–45.89), c: 46.10 (35.44–46.83), b: 47.01 (42.75–47.57)
Batch Size: 16 - Model: ResNet-152 | b: 18.02 (17.53–18.23), d: 18.02 (17.66–18.18), c: 18.03 (17.54–18.23), a: 18.13 (17.59–18.32)
Batch Size: 256 - Model: ResNet-50 | a: 45.09 (43.25–45.61), c: 46.16 (42.11–46.89), b: 46.29 (42.39–46.81), d: 47.06 (42.87–47.87)
Batch Size: 32 - Model: ResNet-152 | a: 18.04 (17.52–18.19), c: 18.05 (17.55–18.12), b: 18.06 (17.57–18.26), d: 18.14 (17.74–18.22)
Batch Size: 512 - Model: ResNet-50 | a: 46.31 (43.53–47.23), b: 46.36 (42.96–47.26), c: 46.41 (43.7–46.95), d: 46.93 (44.02–47.5)
Batch Size: 64 - Model: ResNet-152 | c: 17.75 (17.45–17.89), a: 17.80 (17.33–18.03), b: 18.07 (16.99–18.18), d: 18.19 (17.82–18.26)
Batch Size: 256 - Model: ResNet-152 | a: 17.96 (17.36–18.16), b: 18.11 (17.67–18.23), c: 18.11 (17.68–18.22), d: 18.19 (14.43–18.68)
Batch Size: 512 - Model: ResNet-152 | b: 17.81 (17.54–18.01), d: 17.95 (17.53–18.17), c: 18.16 (17.66–18.27), a: 18.30 (17.67–18.39)
Batch Size: 1 - Model: Efficientnet_v2_l | c: 14.01 (13.86–14.16), b: 14.12 (13.18–14.23), a: 14.20 (14.07–14.34), d: 14.25 (14.08–14.36)
Batch Size: 16 - Model: Efficientnet_v2_l | a: 10.62 (9.39–10.82), c: 10.72 (9.34–10.88), b: 10.86 (9.5–11.07), d: 10.97 (9.39–11.28)
Batch Size: 32 - Model: Efficientnet_v2_l | a: 10.81 (9.24–10.96), c: 10.87 (9.55–11.02), b: 10.89 (9.54–11.07), d: 10.92 (9.26–11.09)
Batch Size: 64 - Model: Efficientnet_v2_l | c: 10.82 (9.26–10.96), a: 10.86 (9.49–11), d: 10.90 (9.57–11.05), b: 10.93 (8.84–11.11)
Batch Size: 256 - Model: Efficientnet_v2_l | d: 10.79 (9.49–10.96), a: 10.84 (9.5–10.98), c: 10.88 (9.61–11.08), b: 10.93 (9.59–11.11)
Batch Size: 512 - Model: Efficientnet_v2_l | c: 10.85 (9.35–11.06), a: 10.93 (9.18–11.07), d: 10.94 (9.62–11.08), b: 11.04 (9.87–11.21)
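The batches/sec figures above are averages, with the MIN/MAX spread coming from per-window sampling during the run. A small helper illustrating how such numbers are derived from raw per-batch timings (a generic sketch of the metric's arithmetic, not the Phoronix Test Suite's actual implementation):

```python
def batches_per_sec(window_times):
    """Given elapsed seconds for timing windows of one batch each,
    return (average, minimum, maximum) throughput in batches/sec."""
    if not window_times:
        raise ValueError("need at least one timing window")
    rates = [1.0 / t for t in window_times]
    return (sum(rates) / len(rates), min(rates), max(rates))

# Example: three windows taking 0.50 s, 0.25 s and 0.20 s per batch.
avg, lo, hi = batches_per_sec([0.50, 0.25, 0.20])
# rates are 2, 4 and 5 batches/sec, so avg ~= 3.67, lo = 2.0, hi = 5.0
```

Averaging the per-window rates rather than a single end-to-end rate is why the reported averages can sit well inside a wide MIN–MAX band, as in the ResNet-50 Batch Size 64 results.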
OpenVINO
This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural-network inference, using its built-in benchmarking support and analyzing the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.
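Because throughput and latency are measured together, Little's law (concurrency = throughput x time in system) gives a rough consistency check and an estimate of how many inference requests were in flight at once. A sketch using two figures from the results below (the inferred stream count is an estimate, not something the benchmark reports):

```python
def inflight_requests(fps: float, latency_ms: float) -> float:
    """Little's law: average concurrent requests = arrival rate x time in system."""
    return fps * latency_ms / 1000.0

# Face Detection FP16, run a: 13.41 FPS at 595.55 ms average latency
print(round(inflight_requests(13.41, 595.55)))   # prints 8
# Person Detection FP16, run d: 94.99 FPS at 84.14 ms average latency
print(round(inflight_requests(94.99, 84.14)))    # prints 8
```

Both models land near the same concurrency level, suggesting the benchmark ran with a fixed number of parallel inference requests across models.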
OpenBenchmarking.org, OpenVINO 2023.2.dev, Device: CPU. FPS (more is better) and average latency in ms (fewer is better); parenthesized figures are the reported MIN–MAX range. All results built with (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
Model: Face Detection FP16 | FPS: c: 13.33, b: 13.38, d: 13.39, a: 13.41
Model: Face Detection FP16 | latency (ms): c: 597.20 (574.49–623.82), a: 595.55 (576.09–622.5), b: 595.36 (575.04–623.57), d: 594.98 (577.06–624.47)
Model: Person Detection FP16 | FPS: b: 93.95, a: 94.20, c: 94.29, d: 94.99
Model: Person Detection FP16 | latency (ms): b: 85.07 (55.85–113.02), a: 84.83 (51.55–110.45), c: 84.74 (44.18–109.72), d: 84.14 (49.25–110.52)
Model: Person Detection FP32 | FPS: c: 93.76, a: 94.05, d: 94.28, b: 94.66
Model: Person Detection FP32 | latency (ms): c: 85.27 (56.98–111.16), a: 84.99 (54.5–109.88), d: 84.74 (43.16–116.66), b: 84.43 (38.96–118.46)
Model: Vehicle Detection FP16 | FPS: c: 1032.00, b: 1032.48, d: 1033.33, a: 1034.25
Model: Vehicle Detection FP16 | latency (ms): c: 7.73 (4.84–13.08), b: 7.73 (4.78–16.94), d: 7.72 (4.51–13.75), a: 7.71 (4.99–14.68)
Model: Face Detection FP16-INT8 | FPS: d: 25.50, a: 25.52, b: 25.56, c: 25.56
Model: Face Detection FP16-INT8 | latency (ms): d: 313.25 (299.83–323.76), a: 313.05 (299.21–324.17), b: 312.53 (300.51–321.6), c: 312.48 (296.91–323.74)
Model: Face Detection Retail FP16 | FPS: b: 3063.55, c: 3067.52, d: 3069.35, a: 3070.77
Model: Face Detection Retail FP16 | latency (ms): c: 2.50 (1.34–9.59), b: 2.50 (1.34–9.48), d: 2.49 (1.35–9.24), a: 2.49 (1.35–6.3)
Model: Road Segmentation ADAS FP16 | FPS: d: 434.98, b: 435.77, c: 437.76, a: 439.75
Model: Road Segmentation ADAS FP16 | latency (ms): d: 18.36 (9.65–27.17), b: 18.32 (12.74–30.01), c: 18.24 (12.24–26.29), a: 18.16 (9.71–27.64)
Model: Vehicle Detection FP16-INT8 | FPS: a: 1613.22, b: 1617.79, c: 1617.82, d: 1619.16
Model: Vehicle Detection FP16-INT8 | latency (ms): a: 4.92 (2.75–10.58), b: 4.91 (2.77–14.1), d: 4.90 (2.75–9.12), c: 4.90 (2.76–13.8)
Model: Weld Porosity Detection FP16 | FPS: d: 1351.17, b: 1352.89, c: 1353.62, a: 1353.91
Model: Weld Porosity Detection FP16 | latency (ms): d: 11.81 (6.78–18.07), b: 11.80 (7.57–15.99), c: 11.79 (6.2–21.55), a: 11.79 (6.37–23.3)
Model: Face Detection Retail FP16-INT8 | FPS: c: 4512.34, a: 4527.76, b: 4537.36, d: 4544.57
Model: Face Detection Retail FP16-INT8 | latency (ms): c: 3.44 (1.94–11.06), a: 3.44 (1.95–10.99), b: 3.43 (1.96–8.28), d: 3.42 (1.94–6.89)
Model: Road Segmentation ADAS FP16-INT8 | FPS: d: 521.46, b: 522.64, a: 524.58, c: 532.94
Model: Road Segmentation ADAS FP16-INT8 | latency (ms): d: 15.32 (12.62–21), b: 15.28 (9.17–19.71), a: 15.23 (11.89–21.11), c: 14.99 (11.64–20.21)
Model: Machine Translation EN To DE FP16 | FPS: a: 130.05, c: 130.06, b: 130.95, d: 131.09
Model: Machine Translation EN To DE FP16 | latency (ms): c: 61.38 (44.4–70.65), a: 61.38 (46.13–71.27), b: 60.98 (27.92–70.86), d: 60.94 (46.99–72.58)
Model: Weld Porosity Detection FP16-INT8 | FPS: d: 2603.35, b: 2604.12, a: 2608.39, c: 2609.57
Model: Weld Porosity Detection FP16-INT8 | latency (ms): d: 6.11 (3.18–11.79), b: 6.11 (3.19–13.98), c: 6.10 (3.18–11.91), a: 6.10 (3.19–11.01)
Model: Person Vehicle Bike Detection FP16 | FPS: a: 1572.38, d: 1572.56, b: 1587.74, c: 1591.84
Model: Person Vehicle Bike Detection FP16 | latency (ms): d: 5.06 (3.24–9.55), a: 5.06 (3.62–13.34), b: 5.02 (3.6–10.68), c: 5.00 (3.63–11.88)
Model: Handwritten English Recognition FP16 | FPS: b: 731.42, d: 733.36, c: 733.81, a: 739.56
Model: Handwritten English Recognition FP16 | latency (ms): b: 21.85 (14.62–38.4), d: 21.79 (14.67–31.51), c: 21.77 (17.91–28.95), a: 21.61 (15.02–30.51)
Model: Age Gender Recognition Retail 0013 FP16 | FPS: c: 33482.54, b: 33483.53, d: 33491.83, a: 33511.79
Model: Age Gender Recognition Retail 0013 FP16 | latency (ms): d: 0.43 (0.22–5.02), c: 0.43 (0.22–7.73), b: 0.43 (0.22–4.19), a: 0.43 (0.22–4.2)
Model: Handwritten English Recognition FP16-INT8 | FPS: d: 579.56, a: 584.10, b: 585.55, c: 587.47
Model: Handwritten English Recognition FP16-INT8 | latency (ms): d: 27.57 (20.3–33.35), a: 27.36 (19.64–35.39), b: 27.29 (21.85–35.83), c: 27.21 (22.22–34.96)
Model: Age Gender Recognition Retail 0013 FP16-INT8 | FPS: c: 47255.36, b: 47316.17, a: 47376.59, d: 47453.43
Model: Age Gender Recognition Retail 0013 FP16-INT8 | latency (ms): c: 0.30 (0.17–7.08), d: 0.29 (0.17–7.66), b: 0.29 (0.17–7.59), a: 0.29 (0.17–7.87)
a Kernel Notes: Transparent Huge Pages: madviseCompiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -vProcessor Notes: Scaling Governor: amd-pstate-epp performance (EPP: performance) - CPU Microcode: 0xa601203Java Notes: OpenJDK Runtime Environment (build 11.0.20+8-post-Ubuntu-1ubuntu122.04)Python Notes: Python 3.10.12Security Notes: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Mitigation of safe RET no microcode + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 21 November 2023 16:01 by user root.
b Kernel, Compiler, Processor, Java, Python and Security Notes: identical to run a above.
Testing initiated at 21 November 2023 20:09 by user root.
c Kernel, Compiler, Processor, Java, Python and Security Notes: identical to run a above.
Testing initiated at 22 November 2023 05:51 by user root.
d Processor: AMD Ryzen 9 7950X3D 16-Core @ 5.76GHz (16 Cores / 32 Threads), Motherboard: ASRockRack B650D4U-2L2T/BCM (2.09 BIOS), Chipset: AMD Device 14d8, Memory: 2 x 32 GB DDR5-4800MT/s MTC20C2085S1EC48BA1, Disk: 3201GB Micron_7450_MTFDKCC3T2TFS + 0GB Virtual HDisk0 + 0GB Virtual HDisk1 + 0GB Virtual HDisk2 + 0GB Virtual HDisk3, Graphics: ASPEED 512MB, Audio: AMD Device 1640, Monitor: VA2431, Network: 2 x Intel I210 + 2 x Broadcom BCM57416 NetXtreme-E Dual-Media 10G RDMA
OS: Ubuntu 22.04, Kernel: 6.6.0-rc4-phx-amd-pref-core (x86_64), Desktop: GNOME Shell 42.9, Display Server: X Server, Vulkan: 1.3.238, Compiler: GCC 11.4.0, File-System: ext4, Screen Resolution: 1920x1200
Kernel, Compiler, Processor, Java, Python and Security Notes: identical to run a above.
Testing initiated at 22 November 2023 09:53 by user root.