2 x Intel Xeon Gold 5220R testing with a TYAN S7106 (V2.01.B40 BIOS) and ASPEED on Ubuntu 20.04 via the Phoronix Test Suite.
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-yTrUTS/gcc-9-9.4.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x5003102
Java Notes: OpenJDK Runtime Environment (build 11.0.14+9-Ubuntu-0ubuntu2.20.04)
Security Notes: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Mitigation of TSX disabled
Processor: 2 x Intel Xeon Gold 5220R @ 3.90GHz (36 Cores / 72 Threads), Motherboard: TYAN S7106 (V2.01.B40 BIOS), Chipset: Intel Sky Lake-E DMI3 Registers, Memory: 94GB, Disk: 500GB Samsung SSD 860, Graphics: ASPEED, Monitor: VE228, Network: 2 x Intel I210 + 2 x QLogic cLOM8214 1/10GbE
OS: Ubuntu 20.04, Kernel: 5.9.0-050900rc6-generic (x86_64) 20200920, Desktop: GNOME Shell 3.36.4, Display Server: X Server 1.20.13, Compiler: GCC 9.4.0, File-System: ext4, Screen Resolution: 1920x1080
Xeon Gold April Benchmarks - OpenBenchmarking.org, Phoronix Test Suite
[Result Overview chart: relative performance of configurations A, B, C, and D across the Ethr and perf-bench tests, spanning roughly 100% to 206% of the slowest result; individual results are charted below.]
[Composite "xeon gold april" results table: raw per-configuration values (A, B, C, D) for every test run, covering oneDNN 2.6, Ethr 1.0, perf-bench, avifenc, InfluxDB 1.8.2, and Java JMH throughput; configuration D was not run for all tests. The individual charted results follow below.]
oneDNN This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.
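The data-type labels in the harness names below encode the numeric formats used: "u8s8f32", for example, denotes uint8 activations multiplied by int8 weights with a float32 result. A plain-Python sketch of that arithmetic (this is only an illustration of the data-type convention, not the oneDNN API; the function name and scale parameter are hypothetical):

```python
# Illustration of the "u8s8f32" quantized matmul convention:
# uint8 activations x int8 weights, integer accumulation, float32 output.

def matmul_u8s8f32(a_u8, b_s8, scale=1.0):
    """a_u8: MxK rows of uint8 values; b_s8: KxN rows of int8 values."""
    M, K, N = len(a_u8), len(a_u8[0]), len(b_s8[0])
    out = [[0.0] * N for _ in range(M)]
    for i in range(M):
        for j in range(N):
            acc = 0  # real implementations accumulate in int32
            for k in range(K):
                acc += a_u8[i][k] * b_s8[k][j]
            out[i][j] = acc * scale  # dequantize to float32
    return out

print(matmul_u8s8f32([[1, 2]], [[3], [-4]]))  # [[-5.0]]
```

Libraries such as oneDNN fuse the integer accumulation and dequantization into vectorized kernels; the point here is only what the three type fields in the label refer to.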
oneDNN 2.6 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): C: 7.478440, B: 0.239911, A: 0.239859. MIN: 0.25. 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl
oneDNN 2.6 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, fewer is better): C: 4.15803 (MIN: 1.69), A: 1.77024 (MIN: 1.69), B: 1.76689 (MIN: 1.68)
Ethr
Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Connections/s - Threads: 16 (Connections/sec, more is better): C: 1010, A: 1011, B: 1013, D: 2080. MIN/MAX notes: MIN: 1010; MIN: 1010 / MAX: 1020; MIN: 1010.
Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Connections/s - Threads: 32 (Connections/sec, more is better): B: 1012 (MIN: 1010 / MAX: 1020), D: 1012 (MIN: 1010 / MAX: 1020), C: 1656 (MIN: 1010), A: 2083 (MIN: 1010)
Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Latency - Threads: 4 (us, fewer is better): D: 43.09 (MIN: 38.43 / MAX: 49.14), B: 42.05 (MIN: 35.62 / MAX: 50.47), C: 41.88 (MIN: 37.48 / MAX: 49.72), A: 32.22 (MIN: 28.6 / MAX: 33.81)
Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Latency - Threads: 64 (us, fewer is better): A: 43.03 (MIN: 33.02 / MAX: 50.73), C: 41.89 (MIN: 31.68 / MAX: 45.62), B: 40.86 (MIN: 32.67 / MAX: 45.14), D: 32.26 (MIN: 28.92 / MAX: 47.21)
Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Bandwidth - Threads: 1 (Gbits/sec, more is better): B: 17.72 (MIN: 14.68 / MAX: 22.62), C: 23.10 (MIN: 21.11 / MAX: 24.02), D: 23.14 (MIN: 22.13 / MAX: 24.35), A: 23.38 (MIN: 21.34 / MAX: 25.05)
Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Latency - Threads: 2 (us, fewer is better): B: 41.80 (MIN: 37.72 / MAX: 49.34), A: 41.77 (MIN: 34.05 / MAX: 52.7), C: 41.36 (MIN: 37.14 / MAX: 46.94), D: 32.24 (MIN: 29.44 / MAX: 40.51)
Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Bandwidth - Threads: 2 (Gbits/sec, more is better): C: 20.67 (MIN: 13.75 / MAX: 37.78), A: 23.51 (MIN: 14.62 / MAX: 43.04), B: 24.94 (MIN: 16.75 / MAX: 43.96), D: 25.74 (MIN: 16.46 / MAX: 43.62)
Ethr 1.0 - Server Address: localhost - Protocol: UDP - Test: Bandwidth - Threads: 4 (Gbits/sec, more is better): A: 1291.93 (MIN: 23.68), D: 1377.40 (MIN: 24.11), C: 1426.07 (MIN: 24.04), B: 1601.86 (MIN: 24.41)
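Ethr drives these localhost tests with concurrent client threads against a local server; a crude single-connection analog of its TCP loopback bandwidth measurement can be sketched with plain sockets (this is a rough sanity-check sketch, not Ethr's methodology, and the function name and sizes are illustrative):

```python
# Crude single-connection localhost TCP bandwidth check, in the spirit of
# Ethr's "Bandwidth - Threads: 1" test: push bytes through loopback and
# divide by elapsed wall time.
import socket
import threading
import time

def loopback_gbits(total=64 << 20, chunk=1 << 16):
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)

    def sink():
        conn, _ = srv.accept()
        while conn.recv(1 << 20):  # drain until the client closes
            pass
        conn.close()

    t = threading.Thread(target=sink)
    t.start()
    cli = socket.create_connection(srv.getsockname())
    payload = b"\x00" * chunk
    start = time.perf_counter()
    sent = 0
    while sent < total:
        cli.sendall(payload)
        sent += chunk
    cli.close()
    t.join()
    srv.close()
    return sent * 8 / (time.perf_counter() - start) / 1e9

print(f"{loopback_gbits():.2f} Gbits/sec")
```

Numbers from such a sketch will not match Ethr's, which uses multiple threads, warm-up, and fixed measurement intervals, but it shows why loopback results reflect memory-copy and scheduler behavior rather than any NIC.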
perf-bench This test profile is used for running Linux perf-bench, the benchmark support within the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.
perf-bench - Benchmark: Epoll Wait (ops/sec, more is better): D: 6314, B: 6350, C: 7556, A: 7708. 1. (CC) gcc options: -pthread -shared -lunwind-x86_64 -lunwind -llzma -Xlinker -export-dynamic -O6 -ggdb3 -funwind-tables -std=gnu99 -fPIC -lnuma
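The Epoll Wait numbers count epoll wakeups per second. A minimal userspace analog, assuming a Linux host (the real `perf bench epoll wait` harness uses many file descriptors and threads; this single-pipe sketch only illustrates the wait/wake cycle being timed):

```python
# Rough analog of perf-bench "Epoll Wait": repeatedly make a pipe readable,
# block in epoll until it fires, consume the byte, and count cycles/sec.
import os
import select
import time

def epoll_wait_ops(duration=0.2):
    r, w = os.pipe()
    ep = select.epoll()  # Linux-only
    ep.register(r, select.EPOLLIN)
    ops = 0
    end = time.perf_counter() + duration
    while time.perf_counter() < end:
        os.write(w, b"x")   # arm the event
        ep.poll()           # block until the fd is readable
        os.read(r, 1)       # consume, re-arming for the next cycle
        ops += 1
    ep.close()
    os.close(r)
    os.close(w)
    return ops / duration

print(f"{epoll_wait_ops():.0f} wakeups/sec")
```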
Ethr
Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Latency - Threads: 16 (us, fewer is better): C: 42.91 (MIN: 30.22 / MAX: 51.22), B: 41.70 (MIN: 32.41 / MAX: 45.69), A: 41.35 (MIN: 36.81 / MAX: 44.28), D: 35.97 (MIN: 29.59 / MAX: 45.4)
oneDNN
oneDNN 2.6 - Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): A: 4.05176 (MIN: 2.94), C: 3.81944 (MIN: 2.93), B: 3.40714 (MIN: 2.86)
Ethr
Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Bandwidth - Threads: 32 (Gbits/sec, more is better): B: 13.03 (MIN: 5.14 / MAX: 243.16), D: 14.55 (MIN: 4.94 / MAX: 249.01), C: 15.09 (MIN: 4.73 / MAX: 269.41), A: 15.24 (MIN: 5.94 / MAX: 261.6)
perf-bench
perf-bench - Benchmark: Memset 1MB (GB/sec, more is better): C: 52.07, D: 53.02, A: 56.89, B: 59.80
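The Memset 1MB throughput can be sanity-checked from userspace with a crude sketch that calls libc's memset through ctypes (results will differ from `perf bench mem memset`, which controls alignment and cache state; the function name and rep count here are illustrative):

```python
# Crude userspace analog of perf-bench "Memset 1MB": repeatedly zero a
# 1 MB buffer via libc memset and report GB/sec.
import ctypes
import time

def memset_gb_per_sec(size=1 << 20, reps=500):
    buf = (ctypes.c_char * size)()          # 1 MB zero-initialized buffer
    start = time.perf_counter()
    for _ in range(reps):
        ctypes.memset(buf, 0, size)         # calls the C library's memset
    elapsed = time.perf_counter() - start
    return size * reps / elapsed / 1e9

print(f"{memset_gb_per_sec():.1f} GB/sec")
```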
Ethr
Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Latency - Threads: 1 (us, fewer is better): D: 43.11 (MIN: 38.3 / MAX: 49.42), C: 41.08 (MIN: 32.43 / MAX: 50.94), B: 39.87 (MIN: 31.88 / MAX: 47.22), A: 37.54 (MIN: 29.88 / MAX: 48.56)
oneDNN
oneDNN 2.6 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): B: 890.07 (MIN: 769.73), C: 788.89 (MIN: 765.69), A: 779.53 (MIN: 769.69)
Ethr
Ethr 1.0 - Server Address: localhost - Protocol: UDP - Test: Bandwidth - Threads: 16 (Packets/sec, more is better): D: 2003200 (MIN: 1930000 / MAX: 2090000), C: 2035600 (MIN: 1790000 / MAX: 2270000), A: 2172000 (MIN: 1940000 / MAX: 2280000), B: 2267200 (MIN: 2220000 / MAX: 2310000)
Ethr 1.0 - Server Address: localhost - Protocol: UDP - Test: Bandwidth - Threads: 16 (Gbits/sec, more is better): D: 30.25 (MIN: 12.07 / MAX: 268.09), C: 30.73 (MIN: 12.32 / MAX: 290.81), A: 32.86 (MIN: 11.43 / MAX: 291.97), B: 33.88 (MIN: 12.22 / MAX: 295.21)
oneDNN
oneDNN 2.6 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better): A: 865.52 (MIN: 766.63), B: 781.33 (MIN: 775.04), C: 779.14 (MIN: 771.23)
oneDNN 2.6 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): C: 1.41028 (MIN: 1.32), B: 1.36355 (MIN: 1.27), A: 1.27139 (MIN: 1.18)
Ethr
Ethr 1.0 - Server Address: localhost - Protocol: UDP - Test: Bandwidth - Threads: 8 (Gbits/sec, more is better): C: 37.47 (MIN: 18.98 / MAX: 175.77), A: 40.16 (MIN: 18.92 / MAX: 191.69), D: 40.60 (MIN: 18.23 / MAX: 199.35), B: 40.97 (MIN: 18.81 / MAX: 192.13)
Ethr 1.0 - Server Address: localhost - Protocol: UDP - Test: Bandwidth - Threads: 8 (Packets/sec, more is better): C: 1308400 (MIN: 1210000 / MAX: 1370000), A: 1398400 (MIN: 1230000 / MAX: 1500000), B: 1425600 (MIN: 1220000 / MAX: 1500000), D: 1428000 (MIN: 1260000 / MAX: 1560000)
perf-bench
perf-bench - Benchmark: Memcpy 1MB (GB/sec, more is better): C: 16.79, A: 16.80, D: 17.05, B: 18.29
oneDNN
oneDNN 2.6 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): C: 1473.77 (MIN: 1371.19), A: 1377.51 (MIN: 1371.47), B: 1367.03 (MIN: 1359.37)
Ethr
Ethr 1.0 - Server Address: localhost - Protocol: UDP - Test: Bandwidth - Threads: 2 (Packets/sec, more is better): D: 366784 (MIN: 337700 / MAX: 385930), B: 368719 (MIN: 335020 / MAX: 393720), A: 378141 (MIN: 349770 / MAX: 426690), C: 387067 (MIN: 338890 / MAX: 426850)
Ethr 1.0 - Server Address: localhost - Protocol: UDP - Test: Bandwidth - Threads: 2 (Gbits/sec, more is better): D: 31.66 (MIN: 21.42 / MAX: 51.04), B: 31.86 (MIN: 21.41 / MAX: 52.55), A: 32.75 (MIN: 22.31 / MAX: 54.62), C: 33.27 (MIN: 21.65 / MAX: 54.64)
Ethr 1.0 - Server Address: localhost - Protocol: UDP - Test: Bandwidth - Threads: 1 (Gbits/sec, more is better): B: 18.88 (MIN: 17.69 / MAX: 21.31), D: 19.68 (MIN: 17.78 / MAX: 24.79), A: 19.75 (MIN: 18.01 / MAX: 25.67), C: 19.80 (MIN: 17.79 / MAX: 25.36)
Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Bandwidth - Threads: 16 (Gbits/sec, more is better): B: 20.73 (MIN: 8.03 / MAX: 182.13), C: 20.93 (MIN: 8.39 / MAX: 185.03), D: 21.46 (MIN: 8.37 / MAX: 184.83), A: 21.58 (MIN: 8.82 / MAX: 183.6)
oneDNN
oneDNN 2.6 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): C: 1434.55 (MIN: 1353.51), A: 1391.25 (MIN: 1375.6), B: 1378.88 (MIN: 1370.45)
perf-bench
perf-bench - Benchmark: Futex Lock-Pi (ops/sec, more is better): B: 107, C: 107, A: 111
Ethr
Ethr 1.0 - Server Address: localhost - Protocol: UDP - Test: Bandwidth - Threads: 32 (Gbits/sec, more is better): A: 20.71 (MIN: 5.71 / MAX: 358.92), D: 21.08 (MIN: 5.84 / MAX: 360.82), B: 21.18 (MIN: 6.11 / MAX: 361.13), C: 21.47 (MIN: 5.68 / MAX: 364.15)
Ethr 1.0 - Server Address: localhost - Protocol: UDP - Test: Bandwidth - Threads: 4 (Packets/sec, more is better): A: 812667 (MIN: 778180 / MAX: 866540), D: 820263 (MIN: 777610 / MAX: 866340), C: 820398 (MIN: 778870 / MAX: 857010), B: 841225 (MIN: 798610 / MAX: 902140)
oneDNN
oneDNN 2.6 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): A: 2.60341 (MIN: 2.36), B: 2.53489 (MIN: 2.3), C: 2.52027 (MIN: 2.25)
Ethr
Ethr 1.0 - Server Address: localhost - Protocol: UDP - Test: Bandwidth - Threads: 64 (Packets/sec, more is better): D: 2907600 (MIN: 2890000 / MAX: 2940000), B: 2940000 (MIN: 2910000 / MAX: 2990000), C: 2950800 (MIN: 2900000 / MAX: 2980000), A: 2996400 (MIN: 2780000 / MAX: 3170000)
oneDNN
oneDNN 2.6 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): B: 1.23269 (MIN: 1.15), A: 1.22818 (MIN: 1.14), C: 1.19641 (MIN: 1.12)
Ethr
Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Bandwidth - Threads: 64 (Gbits/sec, more is better): A: 8.49 (MIN: 1.22 / MAX: 296.78), C: 8.65 (MIN: 1.55 / MAX: 291.46), B: 8.67 (MIN: 2.19 / MAX: 291.04), D: 8.74 (MIN: 2.35 / MAX: 299.41)
Ethr 1.0 - Server Address: localhost - Protocol: UDP - Test: Bandwidth - Threads: 64 (Gbits/sec, more is better): D: 11.57 (MIN: 4.84 / MAX: 376.44), B: 11.71 (MIN: 4.19 / MAX: 382.52), C: 11.73 (MIN: 4.43 / MAX: 381.48), A: 11.91 (MIN: 4.84 / MAX: 405.65)
Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Latency - Threads: 32 (us, fewer is better): B: 42.39 (MIN: 35.95 / MAX: 46.82), D: 42.02 (MIN: 30.7 / MAX: 52.35), A: 41.50 (MIN: 37.34 / MAX: 50.59), C: 41.20 (MIN: 38.11 / MAX: 45.61)
Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Bandwidth - Threads: 8 (Gbits/sec, more is better): C: 24.73 (MIN: 11.99 / MAX: 115.22), B: 24.93 (MIN: 12.3 / MAX: 117.07), D: 25.16 (MIN: 12.8 / MAX: 126.04), A: 25.34 (MIN: 12.61 / MAX: 127.4)
Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Latency - Threads: 8 (us, fewer is better): D: 41.20 (MIN: 31.05 / MAX: 46.46), B: 41.13 (MIN: 31.32 / MAX: 48.35), C: 41.09 (MIN: 32.56 / MAX: 47.82), A: 40.25 (MIN: 31.18 / MAX: 45.88)
InfluxDB This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.
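The "Tags: 2,5000,1" parameter describes inch's tag-cardinality spec: three tag keys with 2, 5000, and 1 distinct values respectively, giving 10,000 unique series per measurement. A sketch of the InfluxDB line-protocol points such a workload writes (the measurement and field names here are illustrative, not necessarily what inch emits):

```python
# Build an InfluxDB line-protocol point: measurement,tag=val,... field=val timestamp
def line_protocol_point(measurement, tags, field_value, ts_ns):
    tag_str = ",".join(f"{k}={v}" for k, v in tags.items())
    return f"{measurement},{tag_str} value={field_value} {ts_ns}"

# One point from a hypothetical 2,5000,1 tag space:
print(line_protocol_point(
    "m0",
    {"tag0": "value1", "tag1": "value4999", "tag2": "value0"},
    1.0,
    1600000000000000000,
))
```

Batch Size: 10000 means inch groups that many such lines per HTTP write, and Concurrent Streams controls how many writers run in parallel.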
InfluxDB 1.8.2 - Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, more is better): C: 1021955.0, B: 1028876.4, A: 1040989.9
Ethr
Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Bandwidth - Threads: 4 (Gbits/sec, more is better): B: 25.63 (MIN: 13.97 / MAX: 77.66), C: 25.84 (MIN: 14.08 / MAX: 75.21), A: 25.87 (MIN: 14.13 / MAX: 78.06), D: 26.09 (MIN: 13.23 / MAX: 79.39)
Ethr 1.0 - Server Address: localhost - Protocol: UDP - Test: Bandwidth - Threads: 32 (Packets/sec, more is better): B: 2755600 (MIN: 2630000 / MAX: 2820000), A: 2762800 (MIN: 2690000 / MAX: 2800000), D: 2773600 (MIN: 2670000 / MAX: 2820000), C: 2802400 (MIN: 2740000 / MAX: 2840000)
oneDNN
oneDNN 2.6 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, fewer is better): B: 3.76678 (MIN: 3.72), A: 3.72950 (MIN: 3.68), C: 3.70428 (MIN: 3.65)
perf-bench
perf-bench - Benchmark: Sched Pipe (ops/sec, more is better): A: 180154, B: 181416, D: 182091, C: 183185
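Sched Pipe measures how fast the scheduler can switch between two processes bouncing a token over a pipe. A minimal Unix analog using `os.fork` (the real `perf bench sched pipe` harness pins and loops millions of times; the function name and iteration count here are illustrative):

```python
# Minimal analog of perf-bench "Sched Pipe": parent and child bounce one
# byte over two pipes, so every iteration forces two context switches.
import os
import time

def sched_pipe_ops(iters=2000):
    p2c_r, p2c_w = os.pipe()  # parent -> child
    c2p_r, c2p_w = os.pipe()  # child -> parent
    pid = os.fork()
    if pid == 0:  # child: echo each byte back
        for _ in range(iters):
            os.write(c2p_w, os.read(p2c_r, 1))
        os._exit(0)
    start = time.perf_counter()
    for _ in range(iters):
        os.write(p2c_w, b"x")
        os.read(c2p_r, 1)
    elapsed = time.perf_counter() - start
    os.waitpid(pid, 0)
    return iters / elapsed

print(f"{sched_pipe_ops():.0f} ops/sec")
```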
oneDNN
oneDNN 2.6 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, fewer is better): B: 1404.64 (MIN: 1353.93), A: 1396.33 (MIN: 1377.62), C: 1382.23 (MIN: 1361.51)
oneDNN 2.6 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better): A: 7.53974 (MIN: 7.45), C: 7.47071 (MIN: 7.38), B: 7.46401 (MIN: 7.38)
InfluxDB
InfluxDB 1.8.2 - Concurrent Streams: 1024 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, more is better): B: 1071590.1, A: 1079356.2, C: 1082050.0
oneDNN
oneDNN 2.6 - Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): A: 5.71974 (MIN: 5.55), C: 5.68132 (MIN: 5.54), B: 5.67367 (MIN: 5.54)
InfluxDB
InfluxDB 1.8.2 - Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, more is better): C: 780902.8, A: 783450.1, B: 786195.2
perf-bench
perf-bench - Benchmark: Syscall Basic (ops/sec, more is better): A: 16717638, B: 16775264, C: 16828687
oneDNN This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.6 Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU B C A 0.6294 1.2588 1.8882 2.5176 3.147 2.79740 2.79349 2.78136 MIN: 2.75 MIN: 2.74 MIN: 2.75 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.6 Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU A B C 2 4 6 8 10 6.95526 6.95150 6.91711 MIN: 6.88 MIN: 6.84 MIN: 6.85 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.6 Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU B A C 0.1063 0.2126 0.3189 0.4252 0.5315 0.472238 0.471479 0.469779 MIN: 0.45 MIN: 0.45 MIN: 0.45 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.6 Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU C A B 2 4 6 8 10 8.78465 8.74834 8.73941 MIN: 8.65 MIN: 8.64 MIN: 8.63 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.6 Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU A C B 200 400 600 800 1000 785.44 781.50 781.42 MIN: 773.81 MIN: 772.15 MIN: 775.98 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better
oneDNN 2.6 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU
  C: 0.558546 (MIN: 0.54)
  B: 0.556711 (MIN: 0.54)
  A: 0.555970 (MIN: 0.53)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better
oneDNN 2.6 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU
  C: 0.693769 (MIN: 0.68)
  B: 0.691175 (MIN: 0.68)
  A: 0.691164 (MIN: 0.68)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl
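Setting the f32 and u8s8f32 deconvolution results side by side shows roughly a 4x gain from the lower-precision int8 path. A small sketch (run A values for the shapes_3d harness, copied from the charts above; Python used purely for illustration):

```python
# oneDNN Deconvolution Batch shapes_3d times for run A, in ms.
f32_ms = 2.78136       # Data Type: f32
u8s8f32_ms = 0.691164  # Data Type: u8s8f32 (int8 inputs/weights, f32 output)

speedup = f32_ms / u8s8f32_ms
print(f"u8s8f32 runs {speedup:.1f}x faster than f32")  # ~4.0x
```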
Ethr
OpenBenchmarking.org Connections/sec, More Is Better
Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Connections/s - Threads: 8
  C: 1010
  D: 1010
  B: 1011
  A: 1012
  MIN: 1010 / MAX: 1020
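The four 8-thread runs differ by at most two connections per second, well inside the reported 1010-1020 MIN/MAX band. A quick check (values copied from the result above; Python used purely for illustration):

```python
# Ethr TCP connections/s at 8 threads, copied from the result above.
results = {"C": 1010, "D": 1010, "B": 1011, "A": 1012}

band = (1010, 1020)  # reported MIN / MAX
assert all(band[0] <= v <= band[1] for v in results.values())

spread = max(results.values()) - min(results.values())
print(f"spread: {spread} connections/s ({spread / min(results.values()):.2%})")
```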
OpenBenchmarking.org Connections/sec, More Is Better
Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Connections/s - Threads: 64
  B: 1014 (MIN: 1010 / MAX: 1020)
  C: 1014 (MIN: 1010 / MAX: 1020)
  D: 1015 (MIN: 1010 / MAX: 1020)
  A: 1016 (MIN: 1010 / MAX: 1020)
oneDNN
OpenBenchmarking.org ms, Fewer Is Better
oneDNN 2.6 - Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU
  A: 9.55623 (MIN: 9.46)
  C: 9.54348 (MIN: 9.45)
  B: 9.53841 (MIN: 9.46)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl
OpenBenchmarking.org ms, Fewer Is Better
oneDNN 2.6 - Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU
  C: 6.36553 (MIN: 6.32)
  A: 6.36114 (MIN: 6.32)
  B: 6.35880 (MIN: 6.32)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl
Ethr
OpenBenchmarking.org Connections/sec, More Is Better
Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Connections/s - Threads: 4
  A: 1010
  B: 1010
  C: 1010
  D: 1011
  MIN: 1010
oneDNN
OpenBenchmarking.org ms, Fewer Is Better
oneDNN 2.6 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU
  A: 11.08 (MIN: 8.7)
  C: 11.08 (MIN: 9.51)
  B: 11.08 (MIN: 7.46)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl
perf-bench
This test profile runs Linux perf-bench, the benchmarking support within the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ops/sec, More Is Better
perf-bench - Benchmark: Futex Hash
  C: 2824285
  D: 2824783
  B: 2824913
  A: 2825232
1. (CC) gcc options: -pthread -shared -lunwind-x86_64 -lunwind -llzma -Xlinker -export-dynamic -O6 -ggdb3 -funwind-tables -std=gnu99 -fPIC -lnuma
Ethr
OpenBenchmarking.org Connections/sec, More Is Better
Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Connections/s - Threads: 2
  A: 1010
  B: 1010
  C: 1010
  D: 1010
OpenBenchmarking.org Connections/sec, More Is Better
Ethr 1.0 - Server Address: localhost - Protocol: TCP - Test: Connections/s - Threads: 1
  A: 1010
  B: 1010
  C: 1010
  D: 1010
A: Testing initiated at 20 April 2022 14:41 by user phoronix. (Kernel, compiler, processor, Java, and security notes identical to those listed above.)
B: Testing initiated at 20 April 2022 17:52 by user phoronix. (Kernel, compiler, processor, Java, and security notes identical to those listed above.)
C: Testing initiated at 20 April 2022 19:01 by user phoronix. (Kernel, compiler, processor, Java, and security notes identical to those listed above.)
D: Testing initiated at 20 April 2022 20:02 by user phoronix. (Hardware, software, and system notes identical to those listed above.)