1080XE Linux

Intel Core i9-10980XE testing with an ASRock X299 Steel Legend (P1.30 BIOS) and NVIDIA NV132 11GB on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2003275-NI-1080XELIN76
Result: Intel Core i9-10980XE, tested March 26, 2020; total test duration: 7 hours, 5 minutes.


1080XE Linux - OpenBenchmarking.org - Phoronix Test Suite

  Processor: Intel Core i9-10980XE @ 4.80GHz (18 Cores / 36 Threads)
  Motherboard: ASRock X299 Steel Legend (P1.30 BIOS)
  Chipset: Intel Sky Lake-E DMI3 Registers
  Memory: 32GB
  Disk: Samsung SSD 970 PRO 512GB
  Graphics: NVIDIA NV132 11GB
  Audio: Realtek ALC1220
  Monitor: DELL P2415Q
  Network: Intel I219-V + Intel I211
  OS: Ubuntu 20.04
  Kernel: 5.4.0-18-generic (x86_64)
  Desktop: GNOME Shell 3.35.91
  Display Server: X Server 1.20.7
  Display Driver: modesetting 1.20.7
  OpenGL: 4.3 Mesa 20.0.0
  Compiler: GCC 9.3.0
  File-System: ext4
  Screen Resolution: 3840x2160

1080XE Linux Benchmarks - System Logs:
  Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
  Processor notes: Scaling Governor: intel_pstate powersave; CPU Microcode: 0x500012c
  Python: Python 3.8.2
  Security: itlb_multihit: KVM: Mitigation of Split huge pages; l1tf: Not affected; mds: Not affected; meltdown: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling; tsx_async_abort: Mitigation of TSX disabled

Condensed results overview: every result in this run is broken out individually in the per-test graphs below.

Timed GCC Compilation

This test times how long it takes to build the GNU Compiler Collection (GCC). Learn more via the OpenBenchmarking.org test page.

Timed GCC Compilation 9.3.0, Time To Compile (Seconds, Fewer Is Better): 954.72 (SE +/- 0.78, N = 3)

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises more than 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.
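
The "Windowed Gaussian" detector that appears among this run's results is one of NAB's simplest baselines. As a rough illustration only (this is not NAB's actual code; the function name and window size are invented here), a detector of that flavor can be sketched in a few lines of Python:

```python
import math
from collections import deque

def windowed_gaussian_scores(series, window=32):
    """Anomaly score per point: tail probability of the value under a
    Gaussian fit to the previous `window` points (similar in spirit to
    NAB's 'Windowed Gaussian' baseline detector; illustration only)."""
    buf = deque(maxlen=window)
    scores = []
    for x in series:
        if len(buf) < 2:
            scores.append(0.0)         # not enough history yet
        else:
            mean = sum(buf) / len(buf)
            var = sum((v - mean) ** 2 for v in buf) / len(buf)
            std = math.sqrt(var) or 1e-9
            z = abs(x - mean) / std
            # two-sided Gaussian tail probability -> score in [0, 1)
            scores.append(math.erf(z / math.sqrt(2)))
        buf.append(x)
    return scores

steady = [10.0, 10.1, 9.9] * 10
scores = windowed_gaussian_scores(steady + [25.0])   # spike at the end
```

NAB then scores such per-point anomaly outputs against labeled anomaly windows; this test profile only measures detector runtime.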

Numenta Anomaly Benchmark 1.1, Detector: EXPoSE (Seconds, Fewer Is Better): 798.17 (SE +/- 3.29, N = 3)

MKL-DNN DNNL

This is a test of Intel MKL-DNN (DNNL / Deep Neural Network Library), an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The reported result is the total perf time. Learn more via the OpenBenchmarking.org test page.
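
The conv_* harnesses below time convolution layers. For orientation only, here is the underlying operation as a naive pure-Python direct convolution (strictly, cross-correlation, as DNN libraries compute it); MKL-DNN produces the same kind of result with vectorized, cache-blocked kernels and is orders of magnitude faster:

```python
def conv2d(image, kernel):
    """Direct 'valid' 2D convolution (no kernel flip, as in DNN libraries).
    Naive O(oh*ow*kh*kw) loop nest, for illustration only."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = ih - kh + 1, iw - kw + 1
    out = [[0.0] * ow for _ in range(oh)]
    for y in range(oh):
        for x in range(ow):
            acc = 0.0
            for ky in range(kh):
                for kx in range(kw):
                    acc += image[y + ky][x + kx] * kernel[ky][kx]
            out[y][x] = acc
    return out

# 3x3 box filter over a 4x4 ramp: each output is the sum of a 3x3 window
img = [[float(r * 4 + c) for c in range(4)] for r in range(4)]
box = [[1.0] * 3 for _ in range(3)]
result = conv2d(img, box)
```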

MKL-DNN DNNL 1.1, Harness: Convolution Batch conv_all - Data Type: bf16bf16bf16 (ms, Fewer Is Better): 4773.63 (SE +/- 0.22, N = 3; MIN: 4768.87). 1. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -fopenmp -pie -lpthread -ldl

MKL-DNN DNNL 1.1, Harness: Convolution Batch conv_all - Data Type: u8s8f32 (ms, Fewer Is Better): 3766.52 (SE +/- 4.25, N = 3; MIN: 3754.67)

MKL-DNN DNNL 1.1, Harness: Convolution Batch conv_all - Data Type: f32 (ms, Fewer Is Better): 1138.42 (SE +/- 0.02, N = 3; MIN: 1131.88)

MKL-DNN DNNL 1.1, Harness: Deconvolution Batch deconv_all - Data Type: bf16bf16bf16 (ms, Fewer Is Better): 3756.50 (SE +/- 0.44, N = 3; MIN: 3751.93)

MKL-DNN DNNL 1.1, Harness: Deconvolution Batch deconv_all - Data Type: f32 (ms, Fewer Is Better): 1373.07 (SE +/- 0.20, N = 3; MIN: 1368.75)

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 10.0, Time To Compile (Seconds, Fewer Is Better): 369.53 (SE +/- 4.00, N = 3)

FFTW

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions. Learn more via the OpenBenchmarking.org test page.
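
For reference, the transform FFTW computes can be written as a naive O(n^2) DFT; FFTW's contribution is producing the same result in O(n log n) with heavily tuned, planned codelets. A minimal Python sketch of the definition:

```python
import cmath

def dft(x):
    """Naive O(n^2) discrete Fourier transform, directly from the
    definition X[k] = sum_t x[t] * exp(-2*pi*i*k*t/n). Useful only as a
    correctness reference for what FFTW computes fast."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

signal = [1.0, 0.0, -1.0, 0.0]   # one cycle of a cosine, n = 4
spec = dft(signal)                # energy lands in bins 1 and 3
```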

FFTW 3.3.6, Build: Float + SSE - Size: 2D FFT Size 4096 (Mflops, More Is Better): 18701 (SE +/- 208.86, N = 3). 1. (CC) gcc options: -pthread -O3 -fomit-frame-pointer -mtune=native -malign-double -fstrict-aliasing -fno-schedule-insns -ffast-math -lm

LevelDB

LevelDB is a key-value storage library developed by Google that supports Snappy data compression along with other modern features. Learn more via the OpenBenchmarking.org test page.
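
LevelDB's write path buffers sorted keys in an in-memory memtable and flushes them as immutable sorted runs (SSTables). The toy sketch below mimics only that split; the class name, flush threshold, and structure are invented for illustration and this is in no way LevelDB's actual implementation:

```python
import bisect

class MiniMemTable:
    """Toy LSM-style store: a sorted in-memory buffer that is flushed to
    an immutable sorted run once it grows past a threshold."""
    def __init__(self, flush_threshold=4):
        self.keys, self.vals = [], []
        self.runs = []                     # flushed immutable sorted runs
        self.flush_threshold = flush_threshold

    def put(self, key, value):
        i = bisect.bisect_left(self.keys, key)
        if i < len(self.keys) and self.keys[i] == key:
            self.vals[i] = value           # overwrite in place
        else:
            self.keys.insert(i, key)
            self.vals.insert(i, value)
        if len(self.keys) >= self.flush_threshold:
            self.runs.append(list(zip(self.keys, self.vals)))
            self.keys, self.vals = [], []  # fresh memtable

    def get(self, key):
        i = bisect.bisect_left(self.keys, key)
        if i < len(self.keys) and self.keys[i] == key:
            return self.vals[i]            # memtable hit
        for run in reversed(self.runs):    # newest flushed run wins
            for k, v in run:
                if k == key:
                    return v
        return None

db = MiniMemTable()
for n in range(6):                         # 4 keys flush, 2 stay in memory
    db.put(f"key{n}", n)
```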

LevelDB 1.22, Benchmark: Sequential Fill (Microseconds Per Op, Fewer Is Better): 382.17 (SE +/- 0.92, N = 3). 1. (CXX) g++ options: -O3 -lsnappy -lpthread

LevelDB 1.22, Benchmark: Sequential Fill (MB/s, More Is Better): 10.4 (SE +/- 0.00, N = 3)

FFTW

FFTW 3.3.6, Build: Stock - Size: 2D FFT Size 4096 (Mflops, More Is Better): 5815.0 (SE +/- 31.92, N = 3)

LevelDB

LevelDB 1.22, Benchmark: Random Delete (Microseconds Per Op, Fewer Is Better): 371.89 (SE +/- 0.34, N = 3)

MKL-DNN DNNL

MKL-DNN DNNL 1.1, Harness: Convolution Batch conv_googlenet_v3 - Data Type: bf16bf16bf16 (ms, Fewer Is Better): 225.27 (SE +/- 0.07, N = 3; MIN: 224.08)

MKL-DNN DNNL 1.1, Harness: Convolution Batch conv_googlenet_v3 - Data Type: u8s8f32 (ms, Fewer Is Better): 20.01 (SE +/- 0.06, N = 3; MIN: 19.6)

MKL-DNN DNNL 1.1, Harness: Convolution Batch conv_googlenet_v3 - Data Type: f32 (ms, Fewer Is Better): 64.69 (SE +/- 0.11, N = 3; MIN: 63.5)

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark 1.1, Detector: Earthgecko Skyline (Seconds, Fewer Is Better): 90.62 (SE +/- 0.30, N = 3)

MKL-DNN DNNL

MKL-DNN DNNL 1.1, Harness: Convolution Batch conv_3d - Data Type: u8s8f32 (ms, Fewer Is Better): 7651.80 (SE +/- 5.18, N = 3; MIN: 7630.39)

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 6.3.6, Test: Random Fill Sync (Op/s, More Is Better): 4691 (SE +/- 30.56, N = 3). 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fno-builtin-memcmp -fno-rtti -rdynamic -lpthread

toyBrot Fractal Generator

ToyBrot is a Mandelbrot fractal generator supporting C++ threads/tasks, OpenMP, Threaded Building Blocks, and other targets. Learn more via the OpenBenchmarking.org test page.
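
The kernel every toyBrot backend parallelizes across threads or tasks is the per-pixel escape-time iteration z -> z^2 + c. A minimal sketch of that iteration (function name and iteration cap are arbitrary choices here, not toyBrot's code):

```python
def mandelbrot_escape(c, max_iter=100):
    """Escape-time iteration z -> z^2 + c for one point of the complex
    plane; returns the iteration count at which |z| exceeds 2, or
    max_iter if the point never escapes (i.e. is in the set)."""
    z = 0j
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return i
    return max_iter

inside = mandelbrot_escape(0 + 0j)    # origin is in the set: never escapes
outside = mandelbrot_escape(2 + 2j)   # escapes on the first iteration
```

Each pixel is independent, which is why the workload maps cleanly onto OpenMP, C++ threads, or task runtimes.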

toyBrot Fractal Generator, Implementation: OpenMP (ms, Fewer Is Better): 61076. 1. (CXX) g++ options: -lpthread

Facebook RocksDB

Facebook RocksDB 6.3.6, Test: Random Fill (Op/s, More Is Better): 1333309 (SE +/- 2264.37, N = 3)

Facebook RocksDB 6.3.6, Test: Read While Writing (Op/s, More Is Better): 4169333 (SE +/- 42987.50, N = 3)

Facebook RocksDB 6.3.6, Test: Random Read (Op/s, More Is Better): 93013702 (SE +/- 36505.29, N = 3)

toyBrot Fractal Generator

toyBrot Fractal Generator, Implementation: C++ Threads (ms, Fewer Is Better): 57441 (SE +/- 43.30, N = 3)

toyBrot Fractal Generator, Implementation: C++ Tasks (ms, Fewer Is Better): 57380 (SE +/- 20.10, N = 3)

FFTW

FFTW 3.3.6, Build: Float + SSE - Size: 2D FFT Size 2048 (Mflops, More Is Better): 23142 (SE +/- 143.22, N = 3)

FFTW 3.3.6, Build: Stock - Size: 2D FFT Size 2048 (Mflops, More Is Better): 6225.7 (SE +/- 83.85, N = 3)

MKL-DNN DNNL

MKL-DNN DNNL 1.1, Harness: IP Batch All - Data Type: f32 (ms, Fewer Is Better): 17.41 (SE +/- 0.01, N = 3; MIN: 17.14)

MKL-DNN DNNL 1.1, Harness: IP Batch All - Data Type: bf16bf16bf16 (ms, Fewer Is Better): 20.31 (SE +/- 0.07, N = 3; MIN: 18.9)

MKL-DNN DNNL 1.1, Harness: IP Batch All - Data Type: u8s8f32 (ms, Fewer Is Better): 5.91631 (SE +/- 0.05056, N = 3; MIN: 5.65)

MKL-DNN DNNL 1.1, Harness: Convolution Batch conv_3d - Data Type: bf16bf16bf16 (ms, Fewer Is Better): 19.91 (SE +/- 0.04, N = 3; MIN: 19.58)

MKL-DNN DNNL 1.1, Harness: Convolution Batch conv_3d - Data Type: f32 (ms, Fewer Is Better): 12.84 (SE +/- 0.03, N = 3; MIN: 12.65)

LevelDB

LevelDB 1.22, Benchmark: Overwrite (Microseconds Per Op, Fewer Is Better): 378.30 (SE +/- 1.27, N = 3)

LevelDB 1.22, Benchmark: Overwrite (MB/s, More Is Better): 10.5 (SE +/- 0.03, N = 3)

LevelDB 1.22, Benchmark: Random Fill (Microseconds Per Op, Fewer Is Better): 377.42 (SE +/- 0.49, N = 3)

LevelDB 1.22, Benchmark: Random Fill (MB/s, More Is Better): 10.5 (SE +/- 0.03, N = 3)

MKL-DNN DNNL

MKL-DNN DNNL 1.1, Harness: Recurrent Neural Network Training - Data Type: f32 (ms, Fewer Is Better): 152.41 (SE +/- 0.13, N = 3; MIN: 150.81)

lzbench

lzbench is an in-memory benchmark of various compressors. The file used for compression is a Linux kernel source tree tarball. Learn more via the OpenBenchmarking.org test page.
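
lzbench reports, per codec and level, a compression ratio and compression/decompression throughput in MB/s. The same two quantities can be measured for any codec; the sketch below uses Python's stdlib zlib as a stand-in (lzbench itself is C/C++ and benchmarks the codecs listed in these results, and the payload here is an arbitrary compressible buffer, not the kernel tarball lzbench uses):

```python
import time
import zlib

def compress_throughput(data, level):
    """Return (compression ratio, compression throughput in MB/s) for
    zlib at the given level -- the two figures lzbench reports per
    codec/level pair (illustration only; single run, no warm-up)."""
    t0 = time.perf_counter()
    packed = zlib.compress(data, level)
    elapsed = time.perf_counter() - t0
    megabytes = len(data) / 1e6
    return len(data) / len(packed), megabytes / max(elapsed, 1e-9)

payload = b"int main(void) { return 0; }\n" * 20000   # highly repetitive
ratio, mbps = compress_throughput(payload, level=6)
```

Real benchmarking would repeat the run many times and report a trimmed average, as lzbench does.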

lzbench 1.8, Test: XZ 0 - Process: Decompression (MB/s, More Is Better): 127. 1. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

lzbench 1.8, Test: XZ 0 - Process: Compression (MB/s, More Is Better): 47

LevelDB

LevelDB 1.22, Benchmark: Hot Read (Microseconds Per Op, Fewer Is Better): 26.84 (SE +/- 0.40, N = 4)

LevelDB 1.22, Benchmark: Seek Random (Microseconds Per Op, Fewer Is Better): 32.74 (SE +/- 0.18, N = 3)

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark 1.1, Detector: Bayesian Changepoint (Seconds, Fewer Is Better): 31.99 (SE +/- 0.10, N = 3)

lzbench

lzbench 1.8, Test: Crush 0 - Process: Decompression (MB/s, More Is Better): 528 (SE +/- 0.58, N = 3)

lzbench 1.8, Test: Crush 0 - Process: Compression (MB/s, More Is Better): 120 (SE +/- 0.67, N = 3)

MKL-DNN DNNL

MKL-DNN DNNL 1.1, Harness: Deconvolution Batch deconv_3d - Data Type: u8s8f32 (ms, Fewer Is Better): 4725.22 (SE +/- 9.37, N = 3; MIN: 4698.92)

LevelDB

LevelDB 1.22, Benchmark: Random Read (Microseconds Per Op, Fewer Is Better): 27.40 (SE +/- 0.08, N = 3)

lzbench

lzbench 1.8, Test: Brotli 2 - Process: Decompression (MB/s, More Is Better): 819

lzbench 1.8, Test: Brotli 2 - Process: Compression (MB/s, More Is Better): 220

Facebook RocksDB

Facebook RocksDB 6.3.6, Test: Sequential Fill (Op/s, More Is Better): 1520282 (SE +/- 11182.63, N = 3)

lzbench

lzbench 1.8, Test: Libdeflate 1 - Process: Decompression (MB/s, More Is Better): 1328

lzbench 1.8, Test: Libdeflate 1 - Process: Compression (MB/s, More Is Better): 245

lzbench 1.8, Test: Brotli 0 - Process: Decompression (MB/s, More Is Better): 706 (SE +/- 0.58, N = 3)

lzbench 1.8, Test: Brotli 0 - Process: Compression (MB/s, More Is Better): 502 (SE +/- 1.86, N = 3)

lzbench 1.8, Test: Zstd 8 - Process: Decompression (MB/s, More Is Better): 1484 (SE +/- 1.15, N = 3)

lzbench 1.8, Test: Zstd 8 - Process: Compression (MB/s, More Is Better): 92 (SE +/- 0.33, N = 3)

lzbench 1.8, Test: Zstd 1 - Process: Decompression (MB/s, More Is Better): 1501

lzbench 1.8, Test: Zstd 1 - Process: Compression (MB/s, More Is Better): 551 (SE +/- 2.40, N = 3)

MKL-DNN DNNL

MKL-DNN DNNL 1.1, Harness: Deconvolution Batch deconv_1d - Data Type: bf16bf16bf16 (ms, Fewer Is Better): 8.63381 (SE +/- 0.01423, N = 3; MIN: 8.51)

MKL-DNN DNNL 1.1, Harness: Deconvolution Batch deconv_1d - Data Type: u8s8f32 (ms, Fewer Is Better): 0.460571 (SE +/- 0.002075, N = 3; MIN: 0.44)

MKL-DNN DNNL 1.1, Harness: Deconvolution Batch deconv_1d - Data Type: f32 (ms, Fewer Is Better): 1.82225 (SE +/- 0.01038, N = 3; MIN: 1.78)

MKL-DNN DNNL 1.1, Harness: IP Batch 1D - Data Type: f32 (ms, Fewer Is Better): 5.38576 (SE +/- 0.07604, N = 4; MIN: 4.52)

MKL-DNN DNNL 1.1, Harness: Convolution Batch conv_alexnet - Data Type: bf16bf16bf16 (ms, Fewer Is Better): 875.60 (SE +/- 2.41, N = 3; MIN: 870.67)

FFTW

FFTW 3.3.6, Build: Float + SSE - Size: 1D FFT Size 512 (Mflops, More Is Better): 53750 (SE +/- 541.81, N = 15)

MKL-DNN DNNL

MKL-DNN DNNL 1.1, Harness: Convolution Batch conv_alexnet - Data Type: u8s8f32 (ms, Fewer Is Better): 41.45 (SE +/- 0.12, N = 3; MIN: 40.76)

FFTW

FFTW 3.3.6, Build: Float + SSE - Size: 2D FFT Size 1024 (Mflops, More Is Better): 25883 (SE +/- 280.62, N = 3)

FFTW 3.3.6, Build: Stock - Size: 1D FFT Size 2048 (Mflops, More Is Better): 8639.4 (SE +/- 152.55, N = 12)

MKL-DNN DNNL

MKL-DNN DNNL 1.1, Harness: Convolution Batch conv_alexnet - Data Type: f32 (ms, Fewer Is Better): 126.74 (SE +/- 0.59, N = 3; MIN: 125.1)

FFTW

FFTW 3.3.6, Build: Stock - Size: 1D FFT Size 1024 (Mflops, More Is Better): 9090.2 (SE +/- 128.62, N = 15)

MKL-DNN DNNL

MKL-DNN DNNL 1.1, Harness: IP Batch 1D - Data Type: bf16bf16bf16 (ms, Fewer Is Better): 5.70514 (SE +/- 0.04695, N = 3; MIN: 5.53)

MKL-DNN DNNL 1.1, Harness: IP Batch 1D - Data Type: u8s8f32 (ms, Fewer Is Better): 0.649125 (SE +/- 0.006861, N = 3; MIN: 0.62)

FFTW

FFTW 3.3.6 - Build: Float + SSE - Size: 1D FFT Size 32 (Mflops, More Is Better)
Intel Core i9-10980XE: 15473 (SE +/- 891.25, N = 15)

FFTW 3.3.6 - Build: Float + SSE - Size: 2D FFT Size 32 (Mflops, More Is Better)
Intel Core i9-10980XE: 39480 (SE +/- 829.25, N = 15)

SMHasher

SMHasher is a hash function tester. Learn more via the OpenBenchmarking.org test page.
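SMHasher reports two metrics per hash: small-key latency (cycles/hash) and bulk throughput (MiB/sec). To make those concrete, here is FNV-1a, a small non-cryptographic hash (not one of the hashes in the results below), with a crude bulk timing; real SMHasher times the native C implementations directly.

```python
import time

# FNV-1a (32-bit): a minimal non-cryptographic hash of the kind SMHasher
# stress-tests for quality and speed.
FNV_OFFSET, FNV_PRIME = 0x811C9DC5, 0x01000193

def fnv1a32(data: bytes) -> int:
    h = FNV_OFFSET
    for b in data:
        h = ((h ^ b) * FNV_PRIME) & 0xFFFFFFFF
    return h

# Known FNV-1a test vectors.
assert fnv1a32(b"") == 0x811C9DC5
assert fnv1a32(b"a") == 0xE40C292C

# Rough bulk-throughput measurement (interpreter overhead dominates here;
# SMHasher's MiB/sec numbers time compiled code over large keys).
buf = b"\x00" * (1 << 20)  # 1 MiB
t0 = time.perf_counter()
fnv1a32(buf)
elapsed = time.perf_counter() - t0
print(f"{1 / elapsed:.1f} MiB/sec (interpreted Python)")
```

Note how the two metrics can diverge: a hash with heavy per-call setup (like MeowHash's AES pipeline) can post middling cycles/hash on tiny keys yet the highest MiB/sec on bulk data, which is exactly the pattern in the results below.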

SMHasher 2020-02-29 - Hash: MeowHash (cycles/hash, Fewer Is Better)
Intel Core i9-10980XE: 45.48 (SE +/- 0.00, N = 3)
(CXX) g++ options: -march=native -O3 -lpthread

SMHasher 2020-02-29 - Hash: MeowHash (MiB/sec, More Is Better)
Intel Core i9-10980XE: 49652.25 (SE +/- 8.69, N = 3)

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.
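The detectors being timed are conceptually simple; as a sketch of the idea behind the "Windowed Gaussian" detector (my own simplification, not NAB's actual implementation), each point can be scored by its z-score against a sliding window of recent values:

```python
from collections import deque
from statistics import mean, stdev

def windowed_gaussian_scores(series, window=4):
    """Anomaly score = |z-score| of each point vs. the preceding window.
    A simplified sketch of a windowed-Gaussian detector, not NAB's code."""
    buf = deque(maxlen=window)
    scores = []
    for x in series:
        if len(buf) >= 2:
            mu, sigma = mean(buf), stdev(buf)
            scores.append(abs(x - mu) / sigma if sigma > 0 else 0.0)
        else:
            scores.append(0.0)  # not enough history to score yet
        buf.append(x)
    return scores

data = [10, 10, 11, 10, 10, 50, 10]   # spike at index 5
scores = windowed_gaussian_scores(data)
print(scores.index(max(scores)))      # the spike gets the highest score
```

NAB's timing results below measure how long each detector takes to sweep all 50+ data files, so lightweight detectors like this finish in seconds while heavier ones dominate the run.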

Numenta Anomaly Benchmark 1.1 - Detector: Relative Entropy (Seconds, Fewer Is Better)
Intel Core i9-10980XE: 13.55 (SE +/- 0.09, N = 3)

FFTW

FFTW 3.3.6 - Build: Stock - Size: 1D FFT Size 128 (Mflops, More Is Better)
Intel Core i9-10980XE: 7368.5 (SE +/- 85.14, N = 15)

SMHasher

SMHasher 2020-02-29 - Hash: Spooky32 (cycles/hash, Fewer Is Better)
Intel Core i9-10980XE: 35.38 (SE +/- 0.01, N = 3)

SMHasher 2020-02-29 - Hash: Spooky32 (MiB/sec, More Is Better)
Intel Core i9-10980XE: 18569.11 (SE +/- 3.05, N = 3)

FFTW

FFTW 3.3.6 - Build: Float + SSE - Size: 1D FFT Size 128 (Mflops, More Is Better)
Intel Core i9-10980XE: 23930 (SE +/- 227.07, N = 13)

FFTW 3.3.6 - Build: Float + SSE - Size: 1D FFT Size 1024 (Mflops, More Is Better)
Intel Core i9-10980XE: 64013 (SE +/- 665.39, N = 8)

FFTW 3.3.6 - Build: Stock - Size: 1D FFT Size 32 (Mflops, More Is Better)
Intel Core i9-10980XE: 9348.4 (SE +/- 251.82, N = 15)

SMHasher

SMHasher 2020-02-29 - Hash: fasthash32 (cycles/hash, Fewer Is Better)
Intel Core i9-10980XE: 27.56 (SE +/- 0.00, N = 3)

SMHasher 2020-02-29 - Hash: fasthash32 (MiB/sec, More Is Better)
Intel Core i9-10980XE: 9128.99 (SE +/- 0.11, N = 3)

SMHasher 2020-02-29 - Hash: t1ha2_atonce (cycles/hash, Fewer Is Better)
Intel Core i9-10980XE: 27.44 (SE +/- 0.01, N = 3)

SMHasher 2020-02-29 - Hash: t1ha2_atonce (MiB/sec, More Is Better)
Intel Core i9-10980XE: 21360.37 (SE +/- 0.24, N = 3)

FFTW

FFTW 3.3.6 - Build: Stock - Size: 2D FFT Size 1024 (Mflops, More Is Better)
Intel Core i9-10980XE: 7470.8 (SE +/- 43.99, N = 3)

SMHasher

SMHasher 2020-02-29 - Hash: t1ha0_aes_avx2 (cycles/hash, Fewer Is Better)
Intel Core i9-10980XE: 27.90 (SE +/- 0.00, N = 3)

SMHasher 2020-02-29 - Hash: t1ha0_aes_avx2 (MiB/sec, More Is Better)
Intel Core i9-10980XE: 64427.43 (SE +/- 7.58, N = 3)

SMHasher 2020-02-29 - Hash: wyhash (cycles/hash, Fewer Is Better)
Intel Core i9-10980XE: 21.05 (SE +/- 0.00, N = 3)

SMHasher 2020-02-29 - Hash: wyhash (MiB/sec, More Is Better)
Intel Core i9-10980XE: 23045.63 (SE +/- 9.37, N = 3)

LevelDB

LevelDB is a key-value storage library developed by Google that can use Snappy for data compression, among other modern features. Learn more via the OpenBenchmarking.org test page.
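The Fill Sync benchmark below writes each key-value pair with a synchronous flush, so it is dominated by storage durability latency rather than CPU, which is why its numbers sit orders of magnitude above the async fill modes. A rough stdlib-only sketch of that access pattern (plain file appends with os.fsync, not LevelDB's actual log/SSTable machinery):

```python
import os
import tempfile
import time

def fill_sync(pairs):
    """Append key=value records, flushing to disk after every write --
    a sketch of the access pattern LevelDB's Fill Sync benchmark measures."""
    fd, path = tempfile.mkstemp()
    per_op = []
    try:
        for key, value in pairs:
            t0 = time.perf_counter()
            os.write(fd, f"{key}={value}\n".encode())
            os.fsync(fd)  # durability barrier: this is what makes it slow
            per_op.append(time.perf_counter() - t0)
    finally:
        os.close(fd)
        os.unlink(path)
    return sum(per_op) / len(per_op) * 1e6  # mean microseconds per op

print(f"{fill_sync((str(i), 'x' * 100) for i in range(50)):.1f} us/op")
```

The absolute numbers from a sketch like this depend almost entirely on the drive and filesystem; on the NVMe SSD in this system, LevelDB reports roughly 7.8 ms per synced op below.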

LevelDB 1.22 - Benchmark: Fill Sync (Microseconds Per Op, Fewer Is Better)
Intel Core i9-10980XE: 7817.14 (SE +/- 239.07, N = 3)
(CXX) g++ options: -O3 -lsnappy -lpthread

LevelDB 1.22 - Benchmark: Fill Sync (MB/s, More Is Better)
Intel Core i9-10980XE: 0.5 (SE +/- 0.00, N = 3)

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark 1.1 - Detector: Windowed Gaussian (Seconds, Fewer Is Better)
Intel Core i9-10980XE: 7.873 (SE +/- 0.083, N = 3)

FFTW

FFTW 3.3.6 - Build: Float + SSE - Size: 1D FFT Size 4096 (Mflops, More Is Better)
Intel Core i9-10980XE: 59791 (SE +/- 531.83, N = 3)

FFTW 3.3.6 - Build: Float + SSE - Size: 2D FFT Size 512 (Mflops, More Is Better)
Intel Core i9-10980XE: 26566 (SE +/- 52.40, N = 3)

FFTW 3.3.6 - Build: Float + SSE - Size: 1D FFT Size 2048 (Mflops, More Is Better)
Intel Core i9-10980XE: 62453 (SE +/- 890.34, N = 3)

FFTW 3.3.6 - Build: Stock - Size: 1D FFT Size 4096 (Mflops, More Is Better)
Intel Core i9-10980XE: 8742.1 (SE +/- 55.78, N = 3)

FFTW 3.3.6 - Build: Float + SSE - Size: 2D FFT Size 128 (Mflops, More Is Better)
Intel Core i9-10980XE: 32346 (SE +/- 52.19, N = 3)

FFTW 3.3.6 - Build: Stock - Size: 2D FFT Size 512 (Mflops, More Is Better)
Intel Core i9-10980XE: 7532.7 (SE +/- 42.17, N = 3)

FFTW 3.3.6 - Build: Float + SSE - Size: 2D FFT Size 256 (Mflops, More Is Better)
Intel Core i9-10980XE: 27248 (SE +/- 200.82, N = 3)

FFTW 3.3.6 - Build: Float + SSE - Size: 1D FFT Size 256 (Mflops, More Is Better)
Intel Core i9-10980XE: 41174 (SE +/- 649.23, N = 3)

FFTW 3.3.6 - Build: Stock - Size: 2D FFT Size 128 (Mflops, More Is Better)
Intel Core i9-10980XE: 9082.7 (SE +/- 150.73, N = 3)

FFTW 3.3.6 - Build: Stock - Size: 2D FFT Size 64 (Mflops, More Is Better)
Intel Core i9-10980XE: 7688.6 (SE +/- 23.46, N = 3)

FFTW 3.3.6 - Build: Stock - Size: 1D FFT Size 64 (Mflops, More Is Better)
Intel Core i9-10980XE: 7741.3 (SE +/- 26.45, N = 3)

MKL-DNN DNNL

MKL-DNN DNNL 1.1 - Harness: Deconvolution Batch deconv_3d - Data Type: bf16bf16bf16 (ms, Fewer Is Better)
Intel Core i9-10980XE: 10.83 (SE +/- 0.04, N = 3, MIN: 10.63)

MKL-DNN DNNL 1.1 - Harness: Deconvolution Batch deconv_3d - Data Type: f32 (ms, Fewer Is Better)
Intel Core i9-10980XE: 2.62800 (SE +/- 0.02031, N = 3, MIN: 2.56)

FFTW

FFTW 3.3.6 - Build: Stock - Size: 2D FFT Size 256 (Mflops, More Is Better)
Intel Core i9-10980XE: 7821.0 (SE +/- 26.80, N = 3)

FFTW 3.3.6 - Build: Float + SSE - Size: 2D FFT Size 64 (Mflops, More Is Better)
Intel Core i9-10980XE: 41365 (SE +/- 109.88, N = 3)

FFTW 3.3.6 - Build: Stock - Size: 2D FFT Size 32 (Mflops, More Is Better)
Intel Core i9-10980XE: 8174.5 (SE +/- 37.38, N = 3)

FFTW 3.3.6 - Build: Float + SSE - Size: 1D FFT Size 64 (Mflops, More Is Better)
Intel Core i9-10980XE: 20197 (SE +/- 212.92, N = 3)

FFTW 3.3.6 - Build: Stock - Size: 1D FFT Size 512 (Mflops, More Is Better)
Intel Core i9-10980XE: 9159.9 (SE +/- 72.43, N = 3)

FFTW 3.3.6 - Build: Stock - Size: 1D FFT Size 256 (Mflops, More Is Better)
Intel Core i9-10980XE: 8319.0 (SE +/- 12.99, N = 3)

112 Results Shown

Timed GCC Compilation
Numenta Anomaly Benchmark
MKL-DNN DNNL:
  Convolution Batch conv_all - bf16bf16bf16
  Convolution Batch conv_all - u8s8f32
  Convolution Batch conv_all - f32
  Deconvolution Batch deconv_all - bf16bf16bf16
  Deconvolution Batch deconv_all - f32
Timed LLVM Compilation
FFTW
LevelDB:
  Seq Fill:
    Microseconds Per Op
    MB/s
FFTW
LevelDB
MKL-DNN DNNL:
  Convolution Batch conv_googlenet_v3 - bf16bf16bf16
  Convolution Batch conv_googlenet_v3 - u8s8f32
  Convolution Batch conv_googlenet_v3 - f32
Numenta Anomaly Benchmark
MKL-DNN DNNL
Facebook RocksDB
toyBrot Fractal Generator
Facebook RocksDB:
  Rand Fill
  Read While Writing
  Rand Read
toyBrot Fractal Generator:
  C++ Threads
  C++ Tasks
FFTW:
  Float + SSE - 2D FFT Size 2048
  Stock - 2D FFT Size 2048
MKL-DNN DNNL:
  IP Batch All - f32
  IP Batch All - bf16bf16bf16
  IP Batch All - u8s8f32
  Convolution Batch conv_3d - bf16bf16bf16
  Convolution Batch conv_3d - f32
LevelDB:
  Overwrite:
    Microseconds Per Op
    MB/s
  Rand Fill:
    Microseconds Per Op
    MB/s
MKL-DNN DNNL
lzbench:
  XZ 0 - Decompression
  XZ 0 - Compression
LevelDB:
  Hot Read
  Seek Rand
Numenta Anomaly Benchmark
lzbench:
  Crush 0 - Decompression
  Crush 0 - Compression
MKL-DNN DNNL
LevelDB
lzbench:
  Brotli 2 - Decompression
  Brotli 2 - Compression
Facebook RocksDB
lzbench:
  Libdeflate 1 - Decompression
  Libdeflate 1 - Compression
  Brotli 0 - Decompression
  Brotli 0 - Compression
  Zstd 8 - Decompression
  Zstd 8 - Compression
  Zstd 1 - Decompression
  Zstd 1 - Compression
MKL-DNN DNNL:
  Deconvolution Batch deconv_1d - bf16bf16bf16
  Deconvolution Batch deconv_1d - u8s8f32
  Deconvolution Batch deconv_1d - f32
  IP Batch 1D - f32
  Convolution Batch conv_alexnet - bf16bf16bf16
FFTW
MKL-DNN DNNL
FFTW:
  Float + SSE - 2D FFT Size 1024
  Stock - 1D FFT Size 2048
MKL-DNN DNNL
FFTW
MKL-DNN DNNL:
  IP Batch 1D - bf16bf16bf16
  IP Batch 1D - u8s8f32
FFTW:
  Float + SSE - 1D FFT Size 32
  Float + SSE - 2D FFT Size 32
SMHasher:
  MeowHash:
    cycles/hash
    MiB/sec
Numenta Anomaly Benchmark
FFTW
SMHasher:
  Spooky32:
    cycles/hash
    MiB/sec
FFTW:
  Float + SSE - 1D FFT Size 128
  Float + SSE - 1D FFT Size 1024
  Stock - 1D FFT Size 32
SMHasher:
  fasthash32:
    cycles/hash
    MiB/sec
  t1ha2_atonce:
    cycles/hash
    MiB/sec
FFTW
SMHasher:
  t1ha0_aes_avx2:
    cycles/hash
    MiB/sec
  wyhash:
    cycles/hash
    MiB/sec
LevelDB:
  Fill Sync:
    Microseconds Per Op
    MB/s
Numenta Anomaly Benchmark
FFTW:
  Float + SSE - 1D FFT Size 4096
  Float + SSE - 2D FFT Size 512
  Float + SSE - 1D FFT Size 2048
  Stock - 1D FFT Size 4096
  Float + SSE - 2D FFT Size 128
  Stock - 2D FFT Size 512
  Float + SSE - 2D FFT Size 256
  Float + SSE - 1D FFT Size 256
  Stock - 2D FFT Size 128
  Stock - 2D FFT Size 64
  Stock - 1D FFT Size 64
MKL-DNN DNNL:
  Deconvolution Batch deconv_3d - bf16bf16bf16
  Deconvolution Batch deconv_3d - f32
FFTW:
  Stock - 2D FFT Size 256
  Float + SSE - 2D FFT Size 64
  Stock - 2D FFT Size 32
  Float + SSE - 1D FFT Size 64
  Stock - 1D FFT Size 512
  Stock - 1D FFT Size 256