Benchmarks of 2 x Intel Xeon Gold 5220R on a TYAN S7106 (V2.01.B40 BIOS) motherboard with ASPEED graphics, running Ubuntu 20.04, via the Phoronix Test Suite.
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-yTrUTS/gcc-9-9.4.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x5003102
Java Notes: OpenJDK Runtime Environment (build 11.0.14+9-Ubuntu-0ubuntu2.20.04)
Security Notes: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Mitigation of TSX disabled
Processor: 2 x Intel Xeon Gold 5220R @ 3.90GHz (36 Cores / 72 Threads)
Motherboard: TYAN S7106 (V2.01.B40 BIOS)
Chipset: Intel Sky Lake-E DMI3 Registers
Memory: 94GB
Disk: 500GB Samsung SSD 860
Graphics: ASPEED
Monitor: VE228
Network: 2 x Intel I210 + 2 x QLogic cLOM8214 1/10GbE
OS: Ubuntu 20.04
Kernel: 5.9.0-050900rc6-generic (x86_64) 20200920
Desktop: GNOME Shell 3.36.4
Display Server: X Server 1.20.13
Compiler: GCC 9.4.0
File-System: ext4
Screen Resolution: 1920x1080
Xeon Gold April Benchmarks - Phoronix Test Suite - OpenBenchmarking.org
Result Overview (runs A-D): relative performance across the Ethr and perf-bench tests, on a scale from 100% to 206% of the slowest result in each test.
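The overview's percentage scale can be reproduced from the raw results. A minimal sketch in Python, assuming each test is normalized against its slowest run (an assumption about the scale; the exact normalization OpenBenchmarking.org applies may differ):

```python
# Sketch: map each run's result onto a relative percentage scale,
# assuming the slowest run in each test is the 100% baseline
# (assumption; OpenBenchmarking.org's exact normalization may differ).

def relative_percent(values, higher_is_better=True):
    """Return each run's result as a percentage of the worst run."""
    if higher_is_better:
        baseline = min(values.values())          # slowest run = 100%
        return {run: 100.0 * v / baseline for run, v in values.items()}
    # For lower-is-better metrics (latency in us, times in ms), invert.
    baseline = max(values.values())
    return {run: 100.0 * baseline / v for run, v in values.items()}

# TCP - Connections/s - 16 threads, values from the results below
conns = {"A": 1011, "B": 1013, "C": 1010, "D": 2080}
rel = relative_percent(conns)
print(round(rel["D"]))  # → 206, matching the overview's upper bound
```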
Detailed Results (runs A-D)
The full results table covers the oneDNN 2.6, Ethr 1.0, perf-bench, avifenc, InfluxDB 1.8.2, and Java JMH tests; the individual results are charted in the sections that follow. Results appearing only in this table:
avifenc: 0 - A: 113.423, B: 112.78, C: 110.916
avifenc: 2 - A: 58.897, B: 59.148, C: 60
avifenc: 6 - A: 6.489, B: 6.766, C: 6.524
avifenc: 6, Lossless - A: 10.899, B: 10.914, C: 11.348
avifenc: 10, Lossless - A: 7.488, B: 7.397, C: 7.343
onednn: Deconvolution Batch shapes_1d - f32 - CPU - A: 11.0833, B: 11.0784, C: 11.0815
perf-bench: Futex Hash - A: 2825232, B: 2824913, C: 2824285, D: 2824783
java-jmh: Throughput - A: 53387527398.055, B: 53514068363.933, C: 53315949803.277
ethr: TCP - Connections/s - 4 - A: 1010, B: 1010, C: 1010, D: 1011
ethr: TCP - Connections/s - 2 - A: 1010, B: 1010, C: 1010, D: 1010
ethr: TCP - Connections/s - 1 - A: 1010, B: 1010, C: 1010, D: 1010
oneDNN
This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, using its built-in benchdnn functionality. The result reported is the total perf time. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.
oneDNN 2.6 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): A: 0.239859, B: 0.239911, C: 7.478440; MIN: 0.25
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl
oneDNN 2.6 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, fewer is better): B: 1.76689 (MIN: 1.68), A: 1.77024 (MIN: 1.69), C: 4.15803 (MIN: 1.69)
Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Connections/s - Threads: 16 (Connections/sec, more is better): D: 2080, B: 1013, A: 1011, C: 1010; MIN: 1010, MIN: 1010 / MAX: 1020, MIN: 1010
Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Connections/s - Threads: 32 (Connections/sec, more is better): A: 2083 (MIN: 1010), C: 1656 (MIN: 1010), D: 1012 (MIN: 1010 / MAX: 1020), B: 1012 (MIN: 1010 / MAX: 1020)
Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Latency - Threads: 4 (us, fewer is better): A: 32.22 (MIN: 28.6 / MAX: 33.81), C: 41.88 (MIN: 37.48 / MAX: 49.72), B: 42.05 (MIN: 35.62 / MAX: 50.47), D: 43.09 (MIN: 38.43 / MAX: 49.14)
Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Latency - Threads: 64 (us, fewer is better): D: 32.26 (MIN: 28.92 / MAX: 47.21), B: 40.86 (MIN: 32.67 / MAX: 45.14), C: 41.89 (MIN: 31.68 / MAX: 45.62), A: 43.03 (MIN: 33.02 / MAX: 50.73)
Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Bandwidth - Threads: 1 (Gbits/sec, more is better): A: 23.38 (MIN: 21.34 / MAX: 25.05), D: 23.14 (MIN: 22.13 / MAX: 24.35), C: 23.10 (MIN: 21.11 / MAX: 24.02), B: 17.72 (MIN: 14.68 / MAX: 22.62)
Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Latency - Threads: 2 (us, fewer is better): D: 32.24 (MIN: 29.44 / MAX: 40.51), C: 41.36 (MIN: 37.14 / MAX: 46.94), A: 41.77 (MIN: 34.05 / MAX: 52.7), B: 41.80 (MIN: 37.72 / MAX: 49.34)
Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Bandwidth - Threads: 2 (Gbits/sec, more is better): D: 25.74 (MIN: 16.46 / MAX: 43.62), B: 24.94 (MIN: 16.75 / MAX: 43.96), A: 23.51 (MIN: 14.62 / MAX: 43.04), C: 20.67 (MIN: 13.75 / MAX: 37.78)
Ethr 1.0 - Server Address: localhost - Protocol: UDP - Test: Bandwidth - Threads: 4 (Gbits/sec, more is better): B: 1601.86 (MIN: 24.41), C: 1426.07 (MIN: 24.04), D: 1377.40 (MIN: 24.11), A: 1291.93 (MIN: 23.68)
perf-bench
This test profile runs Linux perf-bench, the benchmarking support built into the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.
perf-bench - Benchmark: Epoll Wait (ops/sec, more is better): A: 7708, C: 7556, B: 6350, D: 6314
1. (CC) gcc options: -pthread -shared -lunwind-x86_64 -lunwind -llzma -Xlinker -export-dynamic -O6 -ggdb3 -funwind-tables -std=gnu99 -fPIC -lnuma
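The spread between the best and worst runs of a result like this can be quantified as a percentage delta; a quick sketch using the Epoll Wait figures above:

```python
# Percent change from the fastest to the slowest run of
# perf-bench Epoll Wait (ops/sec, values from the result above).
best, worst = 7708, 6314   # run A vs. run D
delta = 100.0 * (worst - best) / best
print(f"{delta:.1f}%")     # → -18.1%, i.e. ~18% fewer ops/sec in the worst run
```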
Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Latency - Threads: 16 (us, fewer is better): D: 35.97 (MIN: 29.59 / MAX: 45.4), A: 41.35 (MIN: 36.81 / MAX: 44.28), B: 41.70 (MIN: 32.41 / MAX: 45.69), C: 42.91 (MIN: 30.22 / MAX: 51.22)
oneDNN 2.6 - Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): B: 3.40714 (MIN: 2.86), C: 3.81944 (MIN: 2.93), A: 4.05176 (MIN: 2.94)
Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Bandwidth - Threads: 32 (Gbits/sec, more is better): A: 15.24 (MIN: 5.94 / MAX: 261.6), C: 15.09 (MIN: 4.73 / MAX: 269.41), D: 14.55 (MIN: 4.94 / MAX: 249.01), B: 13.03 (MIN: 5.14 / MAX: 243.16)
perf-bench - Benchmark: Memset 1MB (GB/sec, more is better): B: 59.80, A: 56.89, D: 53.02, C: 52.07
Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Latency - Threads: 1 (us, fewer is better): A: 37.54 (MIN: 29.88 / MAX: 48.56), B: 39.87 (MIN: 31.88 / MAX: 47.22), C: 41.08 (MIN: 32.43 / MAX: 50.94), D: 43.11 (MIN: 38.3 / MAX: 49.42)
oneDNN 2.6 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): A: 779.53 (MIN: 769.69), C: 788.89 (MIN: 765.69), B: 890.07 (MIN: 769.73)
Ethr 1.0 - Server Address: localhost - Protocol: UDP - Test: Bandwidth - Threads: 16 (Packets/sec, more is better): B: 2267200 (MIN: 2220000 / MAX: 2310000), A: 2172000 (MIN: 1940000 / MAX: 2280000), C: 2035600 (MIN: 1790000 / MAX: 2270000), D: 2003200 (MIN: 1930000 / MAX: 2090000)
Ethr 1.0 - Server Address: localhost - Protocol: UDP - Test: Bandwidth - Threads: 16 (Gbits/sec, more is better): B: 33.88 (MIN: 12.22 / MAX: 295.21), A: 32.86 (MIN: 11.43 / MAX: 291.97), C: 30.73 (MIN: 12.32 / MAX: 290.81), D: 30.25 (MIN: 12.07 / MAX: 268.09)
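Because Ethr reports the same UDP run in both Packets/sec and Gbits/sec, the implied average datagram size can be derived. A sketch using run A's 16-thread figures, assuming Gbits means 10^9 bits (an assumption about Ethr's units):

```python
# Implied average UDP datagram size for run A at 16 threads,
# from the paired Packets/sec and Gbits/sec results above.
gbits_per_sec = 32.86          # run A, Gbits/sec (assumed 1e9 bits)
packets_per_sec = 2_172_000    # run A, Packets/sec

bits_per_packet = gbits_per_sec * 1e9 / packets_per_sec
bytes_per_packet = bits_per_packet / 8
print(round(bytes_per_packet))  # → 1891, i.e. roughly 1.9 kB per datagram
```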
oneDNN 2.6 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better): C: 779.14 (MIN: 771.23), B: 781.33 (MIN: 775.04), A: 865.52 (MIN: 766.63)
oneDNN 2.6 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): A: 1.27139 (MIN: 1.18), B: 1.36355 (MIN: 1.27), C: 1.41028 (MIN: 1.32)
Ethr 1.0 - Server Address: localhost - Protocol: UDP - Test: Bandwidth - Threads: 8 (Gbits/sec, more is better): B: 40.97 (MIN: 18.81 / MAX: 192.13), D: 40.60 (MIN: 18.23 / MAX: 199.35), A: 40.16 (MIN: 18.92 / MAX: 191.69), C: 37.47 (MIN: 18.98 / MAX: 175.77)
Ethr 1.0 - Server Address: localhost - Protocol: UDP - Test: Bandwidth - Threads: 8 (Packets/sec, more is better): D: 1428000 (MIN: 1260000 / MAX: 1560000), B: 1425600 (MIN: 1220000 / MAX: 1500000), A: 1398400 (MIN: 1230000 / MAX: 1500000), C: 1308400 (MIN: 1210000 / MAX: 1370000)
perf-bench - Benchmark: Memcpy 1MB (GB/sec, more is better): B: 18.29, D: 17.05, A: 16.80, C: 16.79
oneDNN 2.6 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): B: 1367.03 (MIN: 1359.37), A: 1377.51 (MIN: 1371.47), C: 1473.77 (MIN: 1371.19)
Ethr 1.0 - Server Address: localhost - Protocol: UDP - Test: Bandwidth - Threads: 2 (Packets/sec, more is better): C: 387067 (MIN: 338890 / MAX: 426850), A: 378141 (MIN: 349770 / MAX: 426690), B: 368719 (MIN: 335020 / MAX: 393720), D: 366784 (MIN: 337700 / MAX: 385930)
Ethr 1.0 - Server Address: localhost - Protocol: UDP - Test: Bandwidth - Threads: 2 (Gbits/sec, more is better): C: 33.27 (MIN: 21.65 / MAX: 54.64), A: 32.75 (MIN: 22.31 / MAX: 54.62), B: 31.86 (MIN: 21.41 / MAX: 52.55), D: 31.66 (MIN: 21.42 / MAX: 51.04)
Ethr 1.0 - Server Address: localhost - Protocol: UDP - Test: Bandwidth - Threads: 1 (Gbits/sec, more is better): C: 19.80 (MIN: 17.79 / MAX: 25.36), A: 19.75 (MIN: 18.01 / MAX: 25.67), D: 19.68 (MIN: 17.78 / MAX: 24.79), B: 18.88 (MIN: 17.69 / MAX: 21.31)
Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Bandwidth - Threads: 16 (Gbits/sec, more is better): A: 21.58 (MIN: 8.82 / MAX: 183.6), D: 21.46 (MIN: 8.37 / MAX: 184.83), C: 20.93 (MIN: 8.39 / MAX: 185.03), B: 20.73 (MIN: 8.03 / MAX: 182.13)
oneDNN 2.6 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): B: 1378.88 (MIN: 1370.45), A: 1391.25 (MIN: 1375.6), C: 1434.55 (MIN: 1353.51)
perf-bench - Benchmark: Futex Lock-Pi (ops/sec, more is better): A: 111, C: 107, B: 107
Ethr 1.0 - Server Address: localhost - Protocol: UDP - Test: Bandwidth - Threads: 32 (Gbits/sec, more is better): C: 21.47 (MIN: 5.68 / MAX: 364.15), B: 21.18 (MIN: 6.11 / MAX: 361.13), D: 21.08 (MIN: 5.84 / MAX: 360.82), A: 20.71 (MIN: 5.71 / MAX: 358.92)
Ethr 1.0 - Server Address: localhost - Protocol: UDP - Test: Bandwidth - Threads: 4 (Packets/sec, more is better): B: 841225 (MIN: 798610 / MAX: 902140), C: 820398 (MIN: 778870 / MAX: 857010), D: 820263 (MIN: 777610 / MAX: 866340), A: 812667 (MIN: 778180 / MAX: 866540)
oneDNN 2.6 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): C: 2.52027 (MIN: 2.25), B: 2.53489 (MIN: 2.3), A: 2.60341 (MIN: 2.36)
Ethr 1.0 - Server Address: localhost - Protocol: UDP - Test: Bandwidth - Threads: 64 (Packets/sec, more is better): A: 2996400 (MIN: 2780000 / MAX: 3170000), C: 2950800 (MIN: 2900000 / MAX: 2980000), B: 2940000 (MIN: 2910000 / MAX: 2990000), D: 2907600 (MIN: 2890000 / MAX: 2940000)
oneDNN 2.6 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): C: 1.19641 (MIN: 1.12), A: 1.22818 (MIN: 1.14), B: 1.23269 (MIN: 1.15)
Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Bandwidth - Threads: 64 (Gbits/sec, more is better): D: 8.74 (MIN: 2.35 / MAX: 299.41), B: 8.67 (MIN: 2.19 / MAX: 291.04), C: 8.65 (MIN: 1.55 / MAX: 291.46), A: 8.49 (MIN: 1.22 / MAX: 296.78)
Ethr 1.0 - Server Address: localhost - Protocol: UDP - Test: Bandwidth - Threads: 64 (Gbits/sec, more is better): A: 11.91 (MIN: 4.84 / MAX: 405.65), C: 11.73 (MIN: 4.43 / MAX: 381.48), B: 11.71 (MIN: 4.19 / MAX: 382.52), D: 11.57 (MIN: 4.84 / MAX: 376.44)
Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Latency - Threads: 32 (us, fewer is better): C: 41.20 (MIN: 38.11 / MAX: 45.61), A: 41.50 (MIN: 37.34 / MAX: 50.59), D: 42.02 (MIN: 30.7 / MAX: 52.35), B: 42.39 (MIN: 35.95 / MAX: 46.82)
Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Bandwidth - Threads: 8 (Gbits/sec, more is better): A: 25.34 (MIN: 12.61 / MAX: 127.4), D: 25.16 (MIN: 12.8 / MAX: 126.04), B: 24.93 (MIN: 12.3 / MAX: 117.07), C: 24.73 (MIN: 11.99 / MAX: 115.22)
Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Latency - Threads: 8 (us, fewer is better): A: 40.25 (MIN: 31.18 / MAX: 45.88), C: 41.09 (MIN: 32.56 / MAX: 47.82), B: 41.13 (MIN: 31.32 / MAX: 48.35), D: 41.20 (MIN: 31.05 / MAX: 46.46)
InfluxDB
This is a benchmark of InfluxDB, an open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile uses InfluxDB Inch to facilitate the benchmarks. Learn more via the OpenBenchmarking.org test page.
InfluxDB 1.8.2 - Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, more is better): A: 1040989.9, B: 1028876.4, C: 1021955.0
Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Bandwidth - Threads: 4 (Gbits/sec, more is better): D: 26.09 (MIN: 13.23 / MAX: 79.39), A: 25.87 (MIN: 14.13 / MAX: 78.06), C: 25.84 (MIN: 14.08 / MAX: 75.21), B: 25.63 (MIN: 13.97 / MAX: 77.66)
Ethr 1.0 - Server Address: localhost - Protocol: UDP - Test: Bandwidth - Threads: 32 (Packets/sec, more is better): C: 2802400 (MIN: 2740000 / MAX: 2840000), D: 2773600 (MIN: 2670000 / MAX: 2820000), A: 2762800 (MIN: 2690000 / MAX: 2800000), B: 2755600 (MIN: 2630000 / MAX: 2820000)
oneDNN 2.6 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, fewer is better): C: 3.70428 (MIN: 3.65), A: 3.72950 (MIN: 3.68), B: 3.76678 (MIN: 3.72)
perf-bench - Benchmark: Sched Pipe (ops/sec, more is better): C: 183185, D: 182091, B: 181416, A: 180154
oneDNN 2.6 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, fewer is better): C: 1382.23 (MIN: 1361.51), A: 1396.33 (MIN: 1377.62), B: 1404.64 (MIN: 1353.93)
oneDNN 2.6 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better): B: 7.46401 (MIN: 7.38), C: 7.47071 (MIN: 7.38), A: 7.53974 (MIN: 7.45)
InfluxDB 1.8.2 - Concurrent Streams: 1024 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, more is better): C: 1082050.0, A: 1079356.2, B: 1071590.1
oneDNN 2.6 - Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): B: 5.67367 (MIN: 5.54), C: 5.68132 (MIN: 5.54), A: 5.71974 (MIN: 5.55)
InfluxDB 1.8.2 - Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, more is better): B: 786195.2, A: 783450.1, C: 780902.8
perf-bench - Benchmark: Syscall Basic (ops/sec, more is better): C: 16828687, B: 16775264, A: 16717638
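A throughput figure like Syscall Basic inverts directly into a per-call cost; a quick conversion using run C's result above:

```python
# Convert perf-bench Syscall Basic throughput (ops/sec) into the
# average cost of a single basic syscall, using run C's result above.
ops_per_sec = 16_828_687
ns_per_call = 1e9 / ops_per_sec
print(f"{ns_per_call:.1f} ns")  # → 59.4 ns per syscall
```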
oneDNN 2.6, Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (OpenBenchmarking.org ms, Fewer Is Better)
  A: 2.78136 (MIN: 2.75)
  C: 2.79349 (MIN: 2.74)
  B: 2.79740 (MIN: 2.75)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl

oneDNN 2.6, Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (OpenBenchmarking.org ms, Fewer Is Better)
  C: 6.91711 (MIN: 6.85)
  B: 6.95150 (MIN: 6.84)
  A: 6.95526 (MIN: 6.88)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl

oneDNN 2.6, Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (OpenBenchmarking.org ms, Fewer Is Better)
  C: 0.469779 (MIN: 0.45)
  A: 0.471479 (MIN: 0.45)
  B: 0.472238 (MIN: 0.45)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl

oneDNN 2.6, Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU (OpenBenchmarking.org ms, Fewer Is Better)
  B: 8.73941 (MIN: 8.63)
  A: 8.74834 (MIN: 8.64)
  C: 8.78465 (MIN: 8.65)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl

oneDNN 2.6, Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (OpenBenchmarking.org ms, Fewer Is Better)
  B: 781.42 (MIN: 775.98)
  C: 781.50 (MIN: 772.15)
  A: 785.44 (MIN: 773.81)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl

oneDNN 2.6, Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (OpenBenchmarking.org ms, Fewer Is Better)
  A: 0.555970 (MIN: 0.53)
  B: 0.556711 (MIN: 0.54)
  C: 0.558546 (MIN: 0.54)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl

oneDNN 2.6, Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (OpenBenchmarking.org ms, Fewer Is Better)
  A: 0.691164 (MIN: 0.68)
  B: 0.691175 (MIN: 0.68)
  C: 0.693769 (MIN: 0.68)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl
Ethr
Ethr 1.0, Server Address: localhost - Protocol: TCP - Test: Connections/s - Threads: 8 (OpenBenchmarking.org Connections/sec, More Is Better)
  A: 1012
  B: 1011
  D: 1010
  C: 1010
MIN: 1010 / MAX: 1020 (reported for two of the four runs)

Ethr 1.0, Server Address: localhost - Protocol: TCP - Test: Connections/s - Threads: 64 (OpenBenchmarking.org Connections/sec, More Is Better)
  A: 1016 (MIN: 1010 / MAX: 1020)
  D: 1015 (MIN: 1010 / MAX: 1020)
  C: 1014 (MIN: 1010 / MAX: 1020)
  B: 1014 (MIN: 1010 / MAX: 1020)
oneDNN
oneDNN 2.6, Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU (OpenBenchmarking.org ms, Fewer Is Better)
  B: 9.53841 (MIN: 9.46)
  C: 9.54348 (MIN: 9.45)
  A: 9.55623 (MIN: 9.46)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl

oneDNN 2.6, Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU (OpenBenchmarking.org ms, Fewer Is Better)
  B: 6.35880 (MIN: 6.32)
  A: 6.36114 (MIN: 6.32)
  C: 6.36553 (MIN: 6.32)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl
Ethr
Ethr 1.0, Server Address: localhost - Protocol: TCP - Test: Connections/s - Threads: 4 (OpenBenchmarking.org Connections/sec, More Is Better)
  D: 1011
  C: 1010
  B: 1010
  A: 1010
MIN: 1010 (reported for one run)
oneDNN
oneDNN 2.6, Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (OpenBenchmarking.org ms, Fewer Is Better)
  B: 11.08 (MIN: 7.46)
  C: 11.08 (MIN: 9.51)
  A: 11.08 (MIN: 8.7)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl
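Unlike the other oneDNN harnesses, shapes_1d f32 reports averages well above its minimums, which suggests heavy intra-run jitter. A rough check of how far each average sits above its MIN, using the numbers reported above:

```python
# Average vs. reported minimum for Deconvolution Batch shapes_1d, f32 (ms, from above).
results = {"B": (11.08, 7.46), "C": (11.08, 9.51), "A": (11.08, 8.7)}
for run, (avg, mn) in results.items():
    pct_above = (avg - mn) / mn * 100  # how much the average exceeds the best sample
    print(f"{run}: average {pct_above:.0f}% above MIN")
```

The average runs roughly 17-49% above the minimum here, versus low single digits for the other harnesses.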
perf-bench
This test profile runs Linux perf-bench, the benchmarking support built into the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.
perf-bench, Benchmark: Futex Hash (OpenBenchmarking.org ops/sec, More Is Better)
  A: 2825232
  B: 2824913
  D: 2824783
  C: 2824285
1. (CC) gcc options: -pthread -shared -lunwind-x86_64 -lunwind -llzma -Xlinker -export-dynamic -O6 -ggdb3 -funwind-tables -std=gnu99 -fPIC -lnuma
Ethr
Ethr 1.0, Server Address: localhost - Protocol: TCP - Test: Connections/s - Threads: 2 (OpenBenchmarking.org Connections/sec, More Is Better)
  D: 1010
  C: 1010
  B: 1010
  A: 1010

Ethr 1.0, Server Address: localhost - Protocol: TCP - Test: Connections/s - Threads: 1 (OpenBenchmarking.org Connections/sec, More Is Better)
  D: 1010
  C: 1010
  B: 1010
  A: 1010
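Across every thread count tested, the localhost TCP connections/s rate stays in a narrow band around 1010, so adding threads buys almost nothing here. A quick check using the best result at each thread count, taken from the Ethr results above:

```python
# Best Ethr localhost TCP connections/s at each thread count (from the results above).
best = {1: 1010, 2: 1010, 4: 1011, 8: 1012, 64: 1016}
gain_pct = (max(best.values()) - min(best.values())) / min(best.values()) * 100
print(f"64 threads vs. 1 thread: +{gain_pct:.1f}%")  # well under 1%
```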
A
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-yTrUTS/gcc-9-9.4.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x5003102
Java Notes: OpenJDK Runtime Environment (build 11.0.14+9-Ubuntu-0ubuntu2.20.04)
Security Notes: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Mitigation of TSX disabled
Testing initiated at 20 April 2022 14:41 by user phoronix.
B
Kernel, Compiler, Processor, Java, and Security Notes: identical to A.
Testing initiated at 20 April 2022 17:52 by user phoronix.
C
Kernel, Compiler, Processor, Java, and Security Notes: identical to A.
Testing initiated at 20 April 2022 19:01 by user phoronix.
D Processor: 2 x Intel Xeon Gold 5220R @ 3.90GHz (36 Cores / 72 Threads), Motherboard: TYAN S7106 (V2.01.B40 BIOS), Chipset: Intel Sky Lake-E DMI3 Registers, Memory: 94GB, Disk: 500GB Samsung SSD 860, Graphics: ASPEED, Monitor: VE228, Network: 2 x Intel I210 + 2 x QLogic cLOM8214 1/10GbE
OS: Ubuntu 20.04, Kernel: 5.9.0-050900rc6-generic (x86_64) 20200920, Desktop: GNOME Shell 3.36.4, Display Server: X Server 1.20.13, Compiler: GCC 9.4.0, File-System: ext4, Screen Resolution: 1920x1080
Kernel, Compiler, Processor, Java, and Security Notes: identical to A.
Testing initiated at 20 April 2022 20:02 by user phoronix.