lnxbuild-9-9900 Intel Core i9-9900 testing with a Gigabyte Z370 AORUS Ultra Gaming-CF (F16a BIOS) and Sapphire AMD Radeon RX 470/480/570/570X/580/580X/590 8GB on ManjaroLinux 20.2 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2012162-HA-LNXBUILD968&export=pdf&gru&sor .
lnxbuild-9-9900 - Test configurations: lnxbuild-i9-9900-20201213, numpy, graphics-magick-2.02, cython-i9-9900, x265, ramspeed, lz4, onednn, onednn-2nd, onednn-100A, onednn-110A, numpy-110A, build-linux-ker-110A, build-linux-kernel-110A, build-linux-kernel-110A-2nd, onednn-50A, onednn-75A, onednn-85A, onednn-95A, onednn-95A-perf, build-linux-ker-95A-perf, numpy-95A

System Details
  Processor: Intel Core i9-9900 @ 5.00GHz (8 Cores / 16 Threads)
  Motherboard: Gigabyte Z370 AORUS Ultra Gaming-CF (F16a BIOS)
  Chipset: Intel 8th/9th
  Memory: 64GB
  Disk: 960GB Corsair Force MP510 + 1000GB KINGSTON SA2000M81000G + 512GB Crucial CT512MX1 + 120GB INTEL SSDSC2CT12 + 3001GB TOSHIBA HDWA130 + 320GB Western Digital WD3200AAJS-0
  Graphics: Sapphire AMD Radeon RX 470/480/570/570X/580/580X/590 8GB (1340/2000MHz)
  Audio: Realtek ALC1220
  Monitor: Q3279WG5B
  Network: Intel I219-V + Realtek RTL8125 2.5GbE
  OS: ManjaroLinux 20.2
  Kernel: 5.4.74-1-MANJARO (x86_64)
  Desktop: Xfce 4.14
  Display Server: X Server 1.20.9
  OpenGL: 4.6 Mesa 20.2.1 (LLVM 10.0.1)
  Vulkan: 1.2.131
  Compiler: GCC 10.2.0 + Clang 10.0.1 + LLVM 10.0.1
  File-System: ext4
  Screen Resolution: 2560x1440

Compiler Details - lnxbuild-i9-9900-20201213, graphics-magick-2.02, x265, ramspeed, lz4, onednn, onednn-2nd, onednn-100A, onednn-110A, build-linux-ker-110A, build-linux-kernel-110A, build-linux-kernel-110A-2nd, onednn-50A, onednn-75A, onednn-85A, onednn-95A, onednn-95A-perf, build-linux-ker-95A-perf: --disable-libssp --disable-libstdcxx-pch --disable-libunwind-exceptions --disable-werror --enable-__cxa_atexit --enable-cet=auto --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-default-ssp --enable-gnu-indirect-function --enable-gnu-unique-object --enable-install-libiberty --enable-languages=c,c++,ada,fortran,go,lto,objc,obj-c++,d --enable-lto --enable-multilib --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-isl --with-linker-hash-style=gnu

Processor Details - CPU Microcode: 0xd6 for all configurations. Scaling Governor: intel_pstate powersave for all configurations except onednn-95A-perf, build-linux-ker-95A-perf, and numpy-95A, which use intel_pstate performance.

Security Details
  itlb_multihit: KVM: Mitigation of Split huge pages
  l1tf: Not affected
  mds: Not affected
  meltdown: Not affected
  spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp
  spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization
  spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling
  srbds: Mitigation of TSX disabled
  tsx_async_abort: Mitigation of TSX disabled

Python Details - numpy, numpy-110A, numpy-95A: Python 3.8.6
Results Summary (per configuration; each value is detailed in the sections below)
  lnxbuild-i9-9900-20201213 - Timed Linux Kernel Compilation: 92.993 s
  numpy - Numpy Benchmark: 407.95
  graphics-magick-2.02 - GraphicsMagick Swirl: 385, Rotate: 903, Sharpen: 125, Enhanced: 198, Resizing: 916, Noise-Gaussian: 250, HWB Color Space: 991 Iterations/Min
  cython-i9-9900 - Cython benchmark: 19.270 s
  x265 - Bosphorus 4K: 13.74 FPS, Bosphorus 1080p: 58.71 FPS
  ramspeed - Integer Add/Copy/Scale/Triad/Average: 29055.45 / 27155.63 / 27346.45 / 29067.31 / 28109.86 MB/s; Floating Point Add/Copy/Scale/Triad/Average: 29031.40 / 27112.48 / 26991.05 / 28851.25 / 27859.36 MB/s
  lz4 - Level 1 Compression/Decompression: 9023.62 / 10993.3 MB/s; Level 3: 55.33 / 10670.5 MB/s; Level 9: 54.22 / 10681.8 MB/s
  onednn - RNN Training f32: 3524.47 ms, u8s8f32: 3603.60 ms
  onednn-2nd - RNN Training f32: 3376.72 ms
  onednn-100A - RNN Training f32/u8s8f32/bf16bf16bf16: 4138.19 / 4090.90 / 4072.84 ms
  onednn-110A - RNN Inference f32/u8s8f32/bf16bf16bf16: 2230.75 / 2213.44 / 2258.73 ms
  numpy-110A - Numpy Benchmark: 415.80
  build-linux-kernel-110A - Timed Linux Kernel Compilation: 104.180 s; build-linux-kernel-110A-2nd: 104.585 s
  onednn-85A - RNN Training f32/u8s8f32/bf16bf16bf16: 4712.01 / 4619.16 / 4611.11 ms
  onednn-95A-perf - RNN Training f32/u8s8f32/bf16bf16bf16: 4347.41 / 4268.15 / 4279.40 ms
  build-linux-ker-95A-perf - Timed Linux Kernel Compilation: 111.861 s
  numpy-95A - Numpy Benchmark: 414.96
x265 3.4 (Frames Per Second, More Is Better)
  Video Input: Bosphorus 4K - x265: 13.74 (SE +/- 0.19, N = 3)
  Video Input: Bosphorus 1080p - x265: 58.71 (SE +/- 0.33, N = 3)
  1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
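Each result in this report is a mean over N runs with a standard error (e.g. "SE +/- 0.19, N = 3" above). The SE can be recomputed from raw run values as the sample standard deviation divided by the square root of N. A minimal sketch using only the standard library; the three run values below are hypothetical, not taken from this result file:

```python
import math
import statistics

def standard_error(samples):
    """Standard error of the mean: sample stdev / sqrt(N)."""
    return statistics.stdev(samples) / math.sqrt(len(samples))

# Hypothetical FPS readings from three runs of a benchmark
runs = [13.55, 13.74, 13.93]
mean = statistics.mean(runs)
se = standard_error(runs)
print(f"{mean:.2f} (SE +/- {se:.2f}, N = {len(runs)})")  # 13.74 (SE +/- 0.11, N = 3)
```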
GraphicsMagick 1.3.33 (Iterations Per Minute, More Is Better) - configuration graphics-magick-2.02
  Operation: Swirl: 385 (SE +/- 2.96, N = 3)
  Operation: Rotate: 903 (SE +/- 17.09, N = 3)
  Operation: Sharpen: 125 (SE +/- 0.88, N = 3)
  Operation: Enhanced: 198 (SE +/- 0.58, N = 3)
  Operation: Resizing: 916 (SE +/- 1.45, N = 3)
  Operation: Noise-Gaussian: 250 (SE +/- 0.88, N = 3)
  Operation: HWB Color Space: 991 (SE +/- 4.16, N = 3)
  1. (CC) gcc options: -fopenmp -O2 -pthread -lwebp -lwebpmux -llcms2 -ltiff -lfreetype -ljasper -ljpeg -lwmflite -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lzstd -lm -lpthread
RAMspeed SMP 3.5.0 (MB/s, More Is Better) - configuration ramspeed
  Type: Add - Benchmark: Integer: 29055.45 (SE +/- 48.95, N = 3)
  Type: Copy - Benchmark: Integer: 27155.63 (SE +/- 71.40, N = 3)
  Type: Scale - Benchmark: Integer: 27346.45 (SE +/- 42.10, N = 3)
  Type: Triad - Benchmark: Integer: 29067.31 (SE +/- 51.19, N = 3)
  Type: Average - Benchmark: Integer: 28109.86 (SE +/- 12.51, N = 3)
  Type: Add - Benchmark: Floating Point: 29031.40 (SE +/- 33.84, N = 3)
  Type: Copy - Benchmark: Floating Point: 27112.48 (SE +/- 118.85, N = 3)
  Type: Scale - Benchmark: Floating Point: 26991.05 (SE +/- 93.32, N = 3)
  Type: Triad - Benchmark: Floating Point: 28851.25 (SE +/- 81.41, N = 3)
  Type: Average - Benchmark: Floating Point: 27859.36 (SE +/- 172.31, N = 3)
  1. (CC) gcc options: -O3 -march=native
LZ4 Compression 1.9.3 (MB/s, More Is Better) - configuration lz4
  Compression Level: 1 - Compression Speed: 9023.62 (SE +/- 33.80, N = 3)
  Compression Level: 1 - Decompression Speed: 10993.3 (SE +/- 16.15, N = 3)
  Compression Level: 3 - Compression Speed: 55.33 (SE +/- 0.74, N = 3)
  Compression Level: 3 - Decompression Speed: 10670.5 (SE +/- 29.16, N = 3)
  Compression Level: 9 - Compression Speed: 54.22 (SE +/- 0.65, N = 3)
  Compression Level: 9 - Decompression Speed: 10681.8 (SE +/- 19.59, N = 3)
  1. (CC) gcc options: -O3
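The LZ4 numbers above show the usual codec trade-off: higher compression levels cost compression throughput while decompression stays fast. The benchmark itself drives the lz4 library; the sketch below illustrates how per-level compression/decompression throughput can be measured in principle, using the standard library's zlib as a stand-in since the lz4 Python bindings are not part of the standard library. Timing results will vary by machine, so none are asserted here:

```python
import time
import zlib

def throughput_mbps(fn, data, repeats=5):
    """Best-of-N wall-clock throughput in MB/s for a single call fn(data)."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(data)
        best = min(best, time.perf_counter() - t0)
    return len(data) / best / 1e6

# Compressible payload (repeated text), roughly 2 MB
payload = b"the quick brown fox jumps over the lazy dog " * 50_000
compressed = zlib.compress(payload, 1)

print(f"compress (level 1): {throughput_mbps(lambda d: zlib.compress(d, 1), payload):.1f} MB/s")
print(f"decompress:         {throughput_mbps(zlib.decompress, compressed):.1f} MB/s")
print(f"ratio:              {len(payload) / len(compressed):.1f}x")
```

As in the LZ4 results, raising the level (e.g. `zlib.compress(d, 9)`) sharply lowers compression speed while barely affecting decompression.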
Numpy Benchmark (Score, More Is Better)
  numpy-110A: 415.80 (SE +/- 0.46, N = 3)
  numpy-95A: 414.96 (SE +/- 2.64, N = 3)
  numpy: 407.95 (SE +/- 2.49, N = 3)
oneDNN 2.0 (ms, Fewer Is Better)
  Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU
    onednn-2nd: 3376.72 (SE +/- 3.87, N = 3, MIN: 3279.05)
    onednn: 3524.47 (SE +/- 3.52, N = 3, MIN: 3399.85)
    onednn-100A: 4138.19 (SE +/- 47.28, N = 3, MIN: 3969.52)
    onednn-95A-perf: 4347.41 (SE +/- 35.82, N = 3, MIN: 4183.07)
    onednn-85A: 4712.01 (SE +/- 75.66, N = 4, MIN: 4509.55)
  Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU
    onednn: 3603.60 (SE +/- 64.64, N = 3, MIN: 3422.52)
    onednn-100A: 4090.90 (SE +/- 12.66, N = 3, MIN: 3976.87)
    onednn-95A-perf: 4268.15 (SE +/- 1.11, N = 3, MIN: 4186.6)
    onednn-85A: 4619.16 (SE +/- 11.04, N = 3, MIN: 4506.42)
  Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU
    onednn-100A: 4072.84 (SE +/- 4.63, N = 3, MIN: 3971.73)
    onednn-95A-perf: 4279.40 (SE +/- 4.23, N = 3, MIN: 4184.41)
    onednn-85A: 4611.11 (SE +/- 13.05, N = 3, MIN: 4509.24)
  Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU
    onednn-110A: 2230.75 (SE +/- 30.45, N = 6, MIN: 1639.68)
  Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU
    onednn-110A: 2213.44 (SE +/- 5.34, N = 3, MIN: 2125.38)
  Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU
    onednn-110A: 2258.73 (SE +/- 39.39, N = 3, MIN: 2123.73)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
Timed Linux Kernel Compilation 5.4 - Time To Compile (Seconds, Fewer Is Better)
  lnxbuild-i9-9900-20201213: 92.99 (SE +/- 0.47, N = 3)
  build-linux-kernel-110A: 104.18 (SE +/- 0.75, N = 3)
  build-linux-kernel-110A-2nd: 104.59 (SE +/- 0.94, N = 3)
  build-linux-ker-95A-perf: 111.86 (SE +/- 0.80, N = 3)
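Since the kernel-compile times are reported on different configurations, the relative slowdown against the fastest run is often more informative than the raw seconds. A small sketch computing it from the values reported in this file (treating lnxbuild-i9-9900-20201213 at 92.99 s as the baseline):

```python
# Compile times in seconds, taken from the Timed Linux Kernel Compilation results
baseline = 92.99  # lnxbuild-i9-9900-20201213
others = {
    "build-linux-kernel-110A": 104.18,
    "build-linux-kernel-110A-2nd": 104.59,
    "build-linux-ker-95A-perf": 111.86,
}

def slowdown_pct(secs, base):
    """Percentage increase in compile time relative to the baseline run."""
    return (secs - base) / base * 100

for name, secs in others.items():
    print(f"{name}: +{slowdown_pct(secs, baseline):.1f}% vs baseline")
```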
Cython benchmark 0.27 (Seconds, Fewer Is Better)
  cython-i9-9900: 19.27 (SE +/- 0.09, N = 3)
Phoronix Test Suite v10.8.5