Core i7 5960X 2021

Intel Core i7-5960X testing with an ASRock X99 Extreme3 (P3.70 BIOS) and AMD FirePro V7900 2GB on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2101034-HA-COREI759672
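The comparison command above can be wrapped in a small shell snippet. A sketch only: the package name is an assumption and varies by distribution, and the guard simply skips the run if the Phoronix Test Suite is not installed.

```shell
# Result ID of this public OpenBenchmarking.org result file
RESULT_ID="2101034-HA-COREI759672"

# Assumes the Phoronix Test Suite is already installed (many distros
# package it as "phoronix-test-suite"; it is also available upstream
# as a tarball). This fetches the result file and runs the same tests
# locally, merging your numbers into the comparison:
command -v phoronix-test-suite >/dev/null \
  && phoronix-test-suite benchmark "$RESULT_ID" \
  || echo "phoronix-test-suite is not installed"
```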

Run Management

Result Identifier      Date               Test Duration
Core i7 5960X          January 01 2021    21 Hours, 10 Minutes
Intel Core i7 5960X    January 02 2021    21 Hours, 32 Minutes
R2                     January 03 2021    9 Minutes



Processor: Intel Core i7-5960X @ 3.50GHz (8 Cores / 16 Threads)
Motherboard: ASRock X99 Extreme3 (P3.70 BIOS)
Chipset: Intel Xeon E7 v3/Xeon
Memory: 16GB
Disk: 120GB INTEL SSDSC2BW12
Graphics: AMD FirePro V7900 2GB
Audio: Realtek ALC1150
Monitor: VA2431
Network: Intel I218-V
OS: Ubuntu 20.04
Kernel: 5.4.0-58-generic (x86_64)
Desktop: GNOME Shell 3.36.4
Display Server: X Server 1.20.8
Display Driver: modesetting 1.20.8
OpenGL: 4.3 Mesa 20.0.8 (LLVM 10.0.0)
Compiler: GCC 9.3.0
File-System: ext4
Screen Resolution: 1920x1080

System Logs
- Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Disk Notes: MQ-DEADLINE / errors=remount-ro,relatime,rw / Block Size: 4096
- Processor Notes: Scaling Governor: intel_pstate powersave - CPU Microcode: 0x44
- Graphics Notes: GLAMOR
- Java Notes: OpenJDK Runtime Environment (build 11.0.9.1+1-Ubuntu-0ubuntu1.20.04)
- Python Notes: Python 3.8.5
- Security Notes: itlb_multihit: KVM: Mitigation of Split huge pages + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes SMT vulnerable + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full generic retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Core i7 5960X 2021 - condensed results table (Core i7 5960X / Intel Core i7 5960X / R2): the individual per-test results are presented below.

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: CPU Cache - Bogo Ops/s, More Is Better:
  Core i7 5960X: 15.05 (SE +/- 0.22, N = 12; Min: 13.23 / Avg: 15.05 / Max: 15.8)
  Intel Core i7 5960X: 15.87 (SE +/- 0.27, N = 3; Min: 15.33 / Avg: 15.87 / Max: 16.2)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc
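Paired results like the CPU Cache numbers above are easiest to read as a relative percentage. A minimal Python sketch (the helper name is ours; the 15.05 and 15.87 values come from the table):

```python
def percent_diff(baseline: float, other: float) -> float:
    """Relative difference of `other` versus `baseline`, in percent."""
    return (other - baseline) / baseline * 100.0

# Stress-NG CPU Cache (Bogo Ops/s, more is better)
core_i7_run = 15.05
intel_core_i7_run = 15.87
print(f"{percent_diff(core_i7_run, intel_core_i7_run):.1f}%")  # 5.4%
```

For a "Fewer Is Better" metric such as the oneDNN times, a positive result from the same helper would instead indicate a regression.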

Sockperf

This is a network socket API performance benchmark. Learn more via the OpenBenchmarking.org test page.

Sockperf 3.4 - Test: Throughput - Messages Per Second, More Is Better:
  Core i7 5960X: 254524 (SE +/- 2828.28, N = 25; Min: 234736 / Avg: 254523.6 / Max: 298427)
  Intel Core i7 5960X: 265126 (SE +/- 3312.61, N = 5; Min: 255921 / Avg: 265125.6 / Max: 272850)
  1. (CXX) g++ options: --param -O3 -rdynamic -ldl -lpthread
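Each result reports a standard error ("SE +/- …") over N runs. Assuming the conventional definition (sample standard deviation divided by the square root of N), that figure can be reproduced as follows; the per-run values below are hypothetical, since the result file only publishes SE, N, and min/avg/max, not the individual runs:

```python
import math
import statistics

def standard_error(samples: list[float]) -> float:
    """Standard error of the mean: sample standard deviation / sqrt(N)."""
    return statistics.stdev(samples) / math.sqrt(len(samples))

# Hypothetical three-run sample around the Sockperf average above
runs = [254523.0, 251000.0, 258000.0]
print(round(standard_error(runs), 1))
```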

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU - ms, Fewer Is Better:
  Core i7 5960X: 8.11200 (SE +/- 0.01791, N = 3; Min: 8.09 / Avg: 8.11 / Max: 8.15) MIN: 8.03
  Intel Core i7 5960X: 8.43616 (SE +/- 0.13591, N = 3; Min: 8.16 / Avg: 8.44 / Max: 8.57) MIN: 8.1
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 0.7.1 - Throughput Test: PartialTweets - GB/s, More Is Better:
  Core i7 5960X: 0.61 (SE +/- 0.00, N = 3; Min: 0.61 / Avg: 0.61 / Max: 0.61)
  Intel Core i7 5960X: 0.59 (SE +/- 0.01, N = 4; Min: 0.57 / Avg: 0.59 / Max: 0.61)
  1. (CXX) g++ options: -O3 -pthread

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: Socket Activity - Bogo Ops/s, More Is Better:
  Core i7 5960X: 5807.57 (SE +/- 7.65, N = 3; Min: 5796.18 / Avg: 5807.57 / Max: 5822.11)
  Intel Core i7 5960X: 5989.77 (SE +/- 53.14, N = 15; Min: 5832.87 / Avg: 5989.77 / Max: 6523.67)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU - ms, Fewer Is Better:
  Core i7 5960X: 7.26157 (SE +/- 0.02465, N = 3; Min: 7.23 / Avg: 7.26 / Max: 7.31) MIN: 7.14
  Intel Core i7 5960X: 7.05375 (SE +/- 0.00982, N = 3; Min: 7.04 / Avg: 7.05 / Max: 7.07) MIN: 6.87
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OSBench

OSBench is a collection of micro-benchmarks for measuring operating system primitives like time to create threads/processes, launching programs, creating files, and memory allocation. Learn more via the OpenBenchmarking.org test page.

OSBench - Test: Create Processes - us Per Event, Fewer Is Better:
  Core i7 5960X: 29.90 (SE +/- 0.21, N = 3; Min: 29.66 / Avg: 29.9 / Max: 30.31)
  Intel Core i7 5960X: 30.76 (SE +/- 0.32, N = 3; Min: 30.43 / Avg: 30.76 / Max: 31.4)
  1. (CC) gcc options: -lm

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9 - Test: SET - Requests Per Second, More Is Better:
  Core i7 5960X: 1374575.84 (SE +/- 22789.67, N = 3; Min: 1335540.75 / Avg: 1374575.84 / Max: 1414472.38)
  Intel Core i7 5960X: 1338958.34 (SE +/- 17223.22, N = 3; Min: 1308942.38 / Avg: 1338958.34 / Max: 1368601.88)
  1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

x265

This is a simple test of the x265 encoder, run on the CPU with 1080p and 4K options, measuring H.265 video encode performance. Learn more via the OpenBenchmarking.org test page.

x265 3.4 - Video Input: Bosphorus 4K - Frames Per Second, More Is Better:
  Core i7 5960X: 8.48 (SE +/- 0.04, N = 3; Min: 8.41 / Avg: 8.48 / Max: 8.53)
  Intel Core i7 5960X: 8.28 (SE +/- 0.09, N = 3; Min: 8.16 / Avg: 8.28 / Max: 8.46)
  1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

C-Blosc

A simple, compressed, fast and persistent data store library for C. Learn more via the OpenBenchmarking.org test page.

C-Blosc 2.0 Beta 5 - Compressor: blosclz - MB/s, More Is Better:
  Core i7 5960X: 7503.4 (SE +/- 39.16, N = 3; Min: 7425.2 / Avg: 7503.37 / Max: 7546.7)
  Intel Core i7 5960X: 7671.4 (SE +/- 39.97, N = 3; Min: 7591.8 / Avg: 7671.43 / Max: 7717.3)
  1. (CXX) g++ options: -rdynamic

7-Zip Compression

This is a test of 7-Zip using p7zip with its integrated benchmark feature or upstream 7-Zip for the Windows x64 build. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 16.02 - Compress Speed Test - MIPS, More Is Better:
  Core i7 5960X: 38632 (SE +/- 377.33, N = 3; Min: 37984 / Avg: 38632.33 / Max: 39291)
  Intel Core i7 5960X: 37818 (SE +/- 315.48, N = 3; Min: 37214 / Avg: 37818 / Max: 38278)
  1. (CXX) g++ options: -pipe -lpthread

Mlpack Benchmark

Mlpack benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

Mlpack Benchmark - Benchmark: scikit_linearridgeregression - Seconds, Fewer Is Better:
  Core i7 5960X: 3.36 (SE +/- 0.03, N = 3; Min: 3.31 / Avg: 3.36 / Max: 3.41)
  Intel Core i7 5960X: 3.43 (SE +/- 0.06, N = 3; Min: 3.37 / Avg: 3.43 / Max: 3.55)

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine driven by neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.26 - Backend: Eigen - Nodes Per Second, More Is Better:
  Core i7 5960X: 960 (SE +/- 9.02, N = 3; Min: 946 / Avg: 960.33 / Max: 977)
  Intel Core i7 5960X: 942 (SE +/- 4.73, N = 3; Min: 935 / Avg: 942 / Max: 951)
  1. (CXX) g++ options: -flto -pthread

PostMark

This is a test of NetApp's PostMark benchmark designed to simulate small-file testing similar to the tasks endured by web and mail servers. This test profile will set PostMark to perform 25,000 transactions with 500 files simultaneously with the file sizes ranging between 5 and 512 kilobytes. Learn more via the OpenBenchmarking.org test page.
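The small-file transaction mix described above can be imitated in a few lines of Python. A toy sketch only (the helper name and parameters are ours, and it is not the PostMark benchmark itself): it creates, reads back, and deletes files in the same 5 to 512 kilobyte size range.

```python
import os
import random
import tempfile

def small_file_workload(n_files: int = 50, min_kb: int = 5, max_kb: int = 512) -> int:
    """Create, read back, and delete files sized between min_kb and max_kb
    kilobytes, returning total bytes written. A toy stand-in for the
    create/read/append/delete transaction mix PostMark performs."""
    total = 0
    with tempfile.TemporaryDirectory() as workdir:
        for i in range(n_files):
            size = random.randint(min_kb, max_kb) * 1024
            path = os.path.join(workdir, f"file_{i}")
            with open(path, "wb") as f:
                f.write(os.urandom(size))   # create
            total += size
            with open(path, "rb") as f:
                f.read()                    # read back
            os.remove(path)                 # delete
    return total

print(small_file_workload(n_files=10))
```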

PostMark 1.51 - Disk Transaction Performance - TPS, More Is Better:
  Core i7 5960X: 4716
  Intel Core i7 5960X: 4629
  R2: 4658
  SE +/- 29.00, N = 3; Min: 4629 / Avg: 4658 / Max: 4716
  1. (CC) gcc options: -O3

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: blazeface - ms, Fewer Is Better:
  Core i7 5960X: 2.67 (SE +/- 0.02, N = 3; Min: 2.65 / Avg: 2.67 / Max: 2.71) MIN: 2.63 / MAX: 2.73
  Intel Core i7 5960X: 2.72 (SE +/- 0.01, N = 3; Min: 2.71 / Avg: 2.72 / Max: 2.73) MIN: 2.63 / MAX: 18.96
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OSBench

OSBench is a collection of micro-benchmarks for measuring operating system primitives like time to create threads/processes, launching programs, creating files, and memory allocation. Learn more via the OpenBenchmarking.org test page.

OSBench - Test: Launch Programs - us Per Event, Fewer Is Better:
  Core i7 5960X: 42.19 (SE +/- 0.15, N = 3; Min: 41.96 / Avg: 42.19 / Max: 42.46)
  Intel Core i7 5960X: 42.97 (SE +/- 0.09, N = 3; Min: 42.87 / Avg: 42.97 / Max: 43.15)
  1. (CC) gcc options: -lm

Mlpack Benchmark

Mlpack benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

Mlpack Benchmark - Benchmark: scikit_ica - Seconds, Fewer Is Better:
  Core i7 5960X: 72.49 (SE +/- 0.89, N = 3; Min: 70.81 / Avg: 72.49 / Max: 73.85)
  Intel Core i7 5960X: 71.18 (SE +/- 0.31, N = 3; Min: 70.63 / Avg: 71.18 / Max: 71.69)

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.0.1 - Test: Server Rack - Acceleration: CPU-only - Seconds, Fewer Is Better:
  Core i7 5960X: 0.226 (SE +/- 0.000, N = 3; Min: 0.23 / Avg: 0.23 / Max: 0.23)
  Intel Core i7 5960X: 0.222 (SE +/- 0.001, N = 3; Min: 0.22 / Avg: 0.22 / Max: 0.22)

Kvazaar

This is a test of Kvazaar, a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and is developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.0 - Video Input: Bosphorus 1080p - Video Preset: Ultra Fast - Frames Per Second, More Is Better:
  Core i7 5960X: 65.18 (SE +/- 0.34, N = 3; Min: 64.51 / Avg: 65.18 / Max: 65.64)
  Intel Core i7 5960X: 64.09 (SE +/- 0.28, N = 3; Min: 63.77 / Avg: 64.09 / Max: 64.65)
  1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Stockfish

This is a test of Stockfish, an advanced open-source C++11 chess engine that can scale up to 128 CPU cores. Learn more via the OpenBenchmarking.org test page.

Stockfish 12 - Total Time - Nodes Per Second, More Is Better:
  Core i7 5960X: 13458394 (SE +/- 152307.62, N = 3; Min: 13237847 / Avg: 13458394.33 / Max: 13750635)
  Intel Core i7 5960X: 13235585 (SE +/- 225942.99, N = 3; Min: 12809394 / Avg: 13235584.67 / Max: 13578764)
  1. (CXX) g++ options: -m64 -lpthread -fno-exceptions -std=c++17 -pedantic -O3 -msse -msse3 -mpopcnt -msse4.1 -mssse3 -msse2 -flto -flto=jobserver

x265

This is a simple test of the x265 encoder, run on the CPU with 1080p and 4K options, measuring H.265 video encode performance. Learn more via the OpenBenchmarking.org test page.

x265 3.4 - Video Input: Bosphorus 1080p - Frames Per Second, More Is Better:
  Core i7 5960X: 34.07 (SE +/- 0.09, N = 3; Min: 33.91 / Avg: 34.07 / Max: 34.22)
  Intel Core i7 5960X: 33.51 (SE +/- 0.10, N = 3; Min: 33.31 / Avg: 33.51 / Max: 33.63)
  1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

simdjson

This is a benchmark of SIMDJSON, a high-performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
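The GB/s figures below are parsing throughput: bytes of input parsed divided by elapsed time. A minimal sketch of how such a figure is derived, using Python's standard-library json module as a stand-in for simdjson (the function name and sample document are illustrative, not part of the benchmark):

```python
import json
import time

def parse_throughput_gbps(doc: bytes, iterations: int = 50) -> float:
    """Parse the same document repeatedly and report GB/s of input
    consumed -- the style of metric simdjson benchmarks quote."""
    start = time.perf_counter()
    for _ in range(iterations):
        json.loads(doc)
    elapsed = time.perf_counter() - start
    return (len(doc) * iterations) / elapsed / 1e9

# Illustrative document loosely resembling the DistinctUserID workload.
doc = json.dumps([{"id": i, "user": f"u{i}"} for i in range(1000)]).encode()
print(f"{parse_throughput_gbps(doc):.3f} GB/s")
```

Python's json is far slower than simdjson's SIMD-accelerated parser, so the absolute number will not match the results below; only the metric's construction is the same.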

simdjson 0.7.1 - Throughput Test: DistinctUserID (GB/s, More Is Better)
Core i7 5960X: 0.60 (SE +/- 0.01, N = 3; Min: 0.59 / Avg: 0.6 / Max: 0.61)
Intel Core i7 5960X: 0.61 (SE +/- 0.01, N = 3; Min: 0.59 / Avg: 0.61 / Max: 0.62)
1. (CXX) g++ options: -O3 -pthread

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

Coremark 1.0 - CoreMark Size 666 - Iterations Per Second (Iterations/Sec, More Is Better)
Core i7 5960X: 251430.97 (SE +/- 3372.91, N = 15; Min: 216900.14 / Avg: 251430.97 / Max: 258927.61)
Intel Core i7 5960X: 255508.61 (SE +/- 1028.89, N = 3; Min: 254277.69 / Avg: 255508.61 / Max: 257552.18)
1. (CC) gcc options: -O2 -lrt" -lrt

DaCapo Benchmark

This test runs the DaCapo Benchmarks, which are written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: Tradesoap (msec, Fewer Is Better)
Core i7 5960X: 5290 (SE +/- 77.20, N = 3; Min: 5137 / Avg: 5289.67 / Max: 5386)
Intel Core i7 5960X: 5209

toyBrot Fractal Generator

ToyBrot is a Mandelbrot fractal generator supporting C++ threads/tasks, OpenMP, Threaded Building Blocks, and other targets. Learn more via the OpenBenchmarking.org test page.

toyBrot Fractal Generator - Implementation: OpenMP (ms, Fewer Is Better)
Core i7 5960X: 158555 (SE +/- 2038.18, N = 3; Min: 156503 / Avg: 158554.67 / Max: 162631)
Intel Core i7 5960X: 156231 (SE +/- 255.65, N = 3; Min: 155899 / Avg: 156231.33 / Max: 156734)
1. (CXX) g++ options: -lpthread -isystem -fexceptions -std=c++14

eSpeak-NG Speech Engine

This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.

eSpeak-NG Speech Engine 20200907 - Text-To-Speech Synthesis (Seconds, Fewer Is Better)
Core i7 5960X: 41.92 (SE +/- 0.37, N = 11; Min: 38.6 / Avg: 41.92 / Max: 43.8)
Intel Core i7 5960X: 41.31 (SE +/- 0.33, N = 4; Min: 40.41 / Avg: 41.31 / Max: 41.84)
1. (CC) gcc options: -O2 -std=c99

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0 - Preset: Fast (Seconds, Fewer Is Better)
Core i7 5960X: 7.60 (SE +/- 0.07, N = 3; Min: 7.49 / Avg: 7.6 / Max: 7.73)
Intel Core i7 5960X: 7.71 (SE +/- 0.04, N = 3; Min: 7.63 / Avg: 7.71 / Max: 7.75)
1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.1 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
Core i7 5960X: 106.41 (SE +/- 0.40, N = 3; Min: 105.78 / Avg: 106.41 / Max: 107.14)
Intel Core i7 5960X: 107.94 (SE +/- 0.37, N = 3; Min: 107.24 / Avg: 107.94 / Max: 108.52)
1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

Numpy Benchmark

This is a test to obtain the general Numpy performance. Learn more via the OpenBenchmarking.org test page.

Numpy Benchmark (Score, More Is Better)
Core i7 5960X: 269.69 (SE +/- 0.52, N = 3; Min: 268.98 / Avg: 269.69 / Max: 270.7)
Intel Core i7 5960X: 265.88 (SE +/- 1.34, N = 3; Min: 264.3 / Avg: 265.88 / Max: 268.55)

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second, More Is Better)
Core i7 5960X: 11.40 (SE +/- 0.08, N = 3; Min: 11.28 / Avg: 11.4 / Max: 11.55; MIN: 11.24 / MAX: 11.67)
Intel Core i7 5960X: 11.56 (SE +/- 0.04, N = 3; Min: 11.5 / Avg: 11.56 / Max: 11.63; MIN: 11.44 / MAX: 11.74)

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It is composed of over 50 labeled real-world and artificial timeseries data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.
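To make the task concrete, here is a toy streaming anomaly detector -- a simple rolling mean/standard-deviation threshold, not one of NAB's actual detectors -- that flags points deviating sharply from recent history; all names and the sample series are illustrative:

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(stream, window=20, k=3.0):
    """Flag indices whose value lies more than k rolling standard
    deviations from the rolling mean of the previous `window` points.
    A toy stand-in for the detectors NAB times."""
    history = deque(maxlen=window)
    flagged = []
    for i, x in enumerate(stream):
        if len(history) == window:
            m, s = mean(history), stdev(history)
            if s > 0 and abs(x - m) > k * s:
                flagged.append(i)
        history.append(x)
    return flagged

series = [10.0 + 0.1 * (i % 5) for i in range(100)]
series[60] = 50.0  # injected spike
print(detect_anomalies(series))  # the spike at index 60 is flagged
```

NAB's real detectors (HTM, Bayesian changepoint, etc.) are far more sophisticated; this only sketches the streaming detect-as-you-go structure the benchmark exercises.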

Numenta Anomaly Benchmark 1.1 - Detector: Bayesian Changepoint (Seconds, Fewer Is Better)
Core i7 5960X: 63.39 (SE +/- 0.89, N = 3; Min: 62.34 / Avg: 63.39 / Max: 65.16)
Intel Core i7 5960X: 62.55 (SE +/- 0.64, N = 3; Min: 61.28 / Avg: 62.55 / Max: 63.29)

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.26 - Backend: BLAS (Nodes Per Second, More Is Better)
Core i7 5960X: 997 (SE +/- 3.38, N = 3; Min: 993 / Avg: 997.33 / Max: 1004)
Intel Core i7 5960X: 1010 (SE +/- 4.58, N = 3; Min: 1004 / Avg: 1010 / Max: 1019)
1. (CXX) g++ options: -flto -pthread

DaCapo Benchmark

This test runs the DaCapo Benchmarks, which are written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: H2 (msec, Fewer Is Better)
Core i7 5960X: 5123 (SE +/- 54.95, N = 7; Min: 4926 / Avg: 5122.57 / Max: 5340)
Intel Core i7 5960X: 5189 (SE +/- 42.50, N = 13; Min: 4915 / Avg: 5188.62 / Max: 5454)

x264

This is a simple test of the x264 encoder run on the CPU (OpenCL support disabled) with a sample video file. Learn more via the OpenBenchmarking.org test page.

x264 2019-12-17 - H.264 Video Encoding (Frames Per Second, More Is Better)
Core i7 5960X: 57.75 (SE +/- 0.72, N = 5; Min: 54.95 / Avg: 57.75 / Max: 58.94)
Intel Core i7 5960X: 57.02 (SE +/- 0.74, N = 3; Min: 55.54 / Avg: 57.02 / Max: 57.84)
1. (CC) gcc options: -ldl -lavformat -lavcodec -lavutil -lswscale -m64 -lm -lpthread -O3 -ffast-math -std=gnu99 -fPIC -fomit-frame-pointer -fno-tree-vectorize

Tachyon

This is a test of the threaded Tachyon parallel ray-tracing system, measuring the time to ray-trace a sample scene. Learn more via the OpenBenchmarking.org test page.

Tachyon 0.99b6 - Total Time (Seconds, Fewer Is Better)
Core i7 5960X: 148.38 (SE +/- 0.21, N = 3; Min: 148.01 / Avg: 148.38 / Max: 148.74)
Intel Core i7 5960X: 150.28 (SE +/- 1.82, N = 6; Min: 147.9 / Avg: 150.28 / Max: 159.35)
1. (CC) gcc options: -m64 -O3 -fomit-frame-pointer -ffast-math -ltachyon -lm -lpthread

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer - Model: Asian Dragon Obj (Frames Per Second, More Is Better)
Core i7 5960X: 8.8579 (SE +/- 0.0201, N = 3; Min: 8.82 / Avg: 8.86 / Max: 8.89; MIN: 8.79 / MAX: 8.97)
Intel Core i7 5960X: 8.7514 (SE +/- 0.1169, N = 3; Min: 8.52 / Avg: 8.75 / Max: 8.87; MIN: 8.4 / MAX: 8.96)

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.
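The SADD throughput figure below is the kind of number Redis's bundled redis-benchmark utility reports. A typical invocation (assuming a local redis-server is already listening on the default port; the request count here is illustrative):

```shell
# Benchmark only the SADD command with 100k requests,
# printing a one-line requests-per-second summary (-q).
redis-benchmark -t sadd -n 100000 -q
```

The `-t` flag restricts the run to the named command, `-n` sets the total request count, and `-q` limits output to the final requests-per-second figure.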

Redis 6.0.9 - Test: SADD (Requests Per Second, More Is Better)
Core i7 5960X: 1569448.96 (SE +/- 5713.11, N = 3; Min: 1558380 / Avg: 1569448.96 / Max: 1577438.5)
Intel Core i7 5960X: 1587452.71 (SE +/- 21162.41, N = 3; Min: 1548532.5 / Avg: 1587452.71 / Max: 1621316.12)
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Darktable

Darktable is an open-source photography / workflow application. This test will use any system-installed Darktable program or, on Windows, will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.0.1 - Test: Server Room - Acceleration: CPU-only (Seconds, Fewer Is Better)
Core i7 5960X: 5.474 (SE +/- 0.065, N = 3; Min: 5.38 / Avg: 5.47 / Max: 5.6)
Intel Core i7 5960X: 5.536 (SE +/- 0.053, N = 3; Min: 5.43 / Avg: 5.54 / Max: 5.59)

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 Alpha - Speed: 10 (Frames Per Second, More Is Better)
Core i7 5960X: 2.313 (SE +/- 0.019, N = 3; Min: 2.28 / Avg: 2.31 / Max: 2.33)
Intel Core i7 5960X: 2.289 (SE +/- 0.011, N = 3; Min: 2.27 / Avg: 2.29 / Max: 2.3)

CLOMP

CLOMP is the C version of the Livermore OpenMP benchmark developed to measure OpenMP overheads and other performance impacts due to threading in order to influence future system designs. This particular test profile configuration is currently set to look at the OpenMP static schedule speed-up across all available CPU cores using the recommended test configuration. Learn more via the OpenBenchmarking.org test page.

CLOMP 1.2 - Static OMP Speedup (Speedup, More Is Better)
Core i7 5960X: 10.3 (SE +/- 0.09, N = 15; Min: 9.7 / Avg: 10.27 / Max: 10.9)
Intel Core i7 5960X: 10.2 (SE +/- 0.12, N = 3; Min: 10 / Avg: 10.2 / Max: 10.4)
1. (CC) gcc options: -fopenmp -O3 -lm

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Resizing (Iterations Per Minute, More Is Better)
Core i7 5960X: 619
Intel Core i7 5960X: 613 (SE +/- 1.20, N = 3; Min: 611 / Avg: 612.67 / Max: 615)
1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

WireGuard + Linux Networking Stack Stress Test

This is a benchmark of the WireGuard secure VPN tunnel and Linux networking stack stress test. The test runs on the local host but does require root permissions. It works by creating three network namespaces: ns0 has a loopback device, while ns1 and ns2 each have WireGuard devices. Those two WireGuard devices send traffic through the loopback device of ns0. The end result is that the test winds up exercising encryption and decryption at the same time -- a pretty CPU- and scheduler-heavy workload. Learn more via the OpenBenchmarking.org test page.
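The three-namespace topology described above can be roughly sketched with iproute2 commands. This is an illustrative outline only (the actual test script differs in detail, and it requires root plus a WireGuard-capable kernel); namespace and interface names are placeholders:

```shell
# Rough sketch of the topology: ns0 holds the loopback that carries
# the tunnel traffic; ns1 and ns2 each get a WireGuard device.
ip netns add ns0
ip netns add ns1
ip netns add ns2
ip -n ns0 link set lo up                 # loopback in ns0 transits the traffic
ip -n ns1 link add wg0 type wireguard    # WireGuard device in ns1
ip -n ns2 link add wg0 type wireguard    # WireGuard device in ns2
# Peer/key configuration (wg set ...) and addressing would follow here.
```

Traffic sent between the two wg0 devices is encrypted on one side and decrypted on the other while looping through ns0, which is what makes the workload so CPU-heavy.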

WireGuard + Linux Networking Stack Stress Test (Seconds, Fewer Is Better)
Core i7 5960X: 267.33 (SE +/- 2.10, N = 3; Min: 264.62 / Avg: 267.33 / Max: 271.47)
Intel Core i7 5960X: 264.85 (SE +/- 1.56, N = 3; Min: 262.05 / Avg: 264.85 / Max: 267.44)

Warsow

This is a benchmark of Warsow, a popular open-source first-person shooter. This game uses the QFusion engine. Learn more via the OpenBenchmarking.org test page.

Warsow 2.5 Beta - Resolution: 1920 x 1080 (Frames Per Second, More Is Better)
Core i7 5960X: 175.5 (SE +/- 2.10, N = 3; Min: 171.3 / Avg: 175.5 / Max: 177.7)
Intel Core i7 5960X: 177.1 (SE +/- 0.12, N = 3; Min: 176.9 / Avg: 177.07 / Max: 177.3)

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - AES-XTS 256b Encryption (MiB/s, More Is Better)
Core i7 5960X: 1735.2 (SE +/- 10.18, N = 3; Min: 1719.5 / Avg: 1735.23 / Max: 1754.3)
Intel Core i7 5960X: 1750.7 (SE +/- 3.68, N = 3; Min: 1743.6 / Avg: 1750.67 / Max: 1756)

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 Alpha - Speed: 6 (Frames Per Second, More Is Better)
Core i7 5960X: 1.029 (SE +/- 0.001, N = 3; Min: 1.03 / Avg: 1.03 / Max: 1.03)
Intel Core i7 5960X: 1.020 (SE +/- 0.004, N = 3; Min: 1.01 / Avg: 1.02 / Max: 1.03)

lzbench

lzbench is an in-memory benchmark of various compressors. The file used for compression is a Linux kernel source tree tarball. Learn more via the OpenBenchmarking.org test page.
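lzbench's MB/s figures measure in-memory (de)compression throughput against a fixed payload. A minimal sketch of the same measurement using Python's standard-library zlib as a stand-in compressor (Zstd is not in the stdlib; payload and function names are illustrative):

```python
import time
import zlib

def decompress_mbps(blob: bytes, iterations: int = 20) -> float:
    """Repeatedly decompress an in-memory blob and report throughput
    in MB/s of decompressed output, as lzbench does."""
    data = zlib.decompress(blob)  # also serves as a correctness check below
    start = time.perf_counter()
    for _ in range(iterations):
        zlib.decompress(blob)
    elapsed = time.perf_counter() - start
    return (len(data) * iterations) / elapsed / 1e6

# Stand-in for the kernel source tarball lzbench uses.
payload = b"kernel source tree stand-in " * 100_000
blob = zlib.compress(payload, level=1)
assert zlib.decompress(blob) == payload  # round-trip sanity check
print(f"{decompress_mbps(blob):.1f} MB/s")
```

zlib's absolute throughput differs from Zstd's, so the number will not match the results below; the point is how the metric is constructed.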

lzbench 1.8 - Test: Zstd 1 - Process: Decompression (MB/s, More Is Better)
Core i7 5960X: 1153
Intel Core i7 5960X: 1143 (SE +/- 9.07, N = 3; Min: 1125 / Avg: 1143 / Max: 1154)
1. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: MMAP (Bogo Ops/s, More Is Better)
Core i7 5960X: 112.55 (SE +/- 1.67, N = 4; Min: 107.63 / Avg: 112.55 / Max: 114.76)
Intel Core i7 5960X: 113.52 (SE +/- 0.71, N = 3; Min: 112.13 / Avg: 113.52 / Max: 114.46)
1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Default (Encode Time - Seconds, Fewer Is Better)
Core i7 5960X: 1.997 (SE +/- 0.001, N = 3; Min: 1.99 / Avg: 2 / Max: 2)
Intel Core i7 5960X: 2.014 (SE +/- 0.020, N = 3; Min: 1.99 / Avg: 2.01 / Max: 2.05)
1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

Sockperf

This is a network socket API performance benchmark. Learn more via the OpenBenchmarking.org test page.

Sockperf 3.4 - Test: Latency Ping Pong (usec, Fewer Is Better)
Core i7 5960X: 4.758 (SE +/- 0.045, N = 25; Min: 4.4 / Avg: 4.76 / Max: 5.35)
Intel Core i7 5960X: 4.718 (SE +/- 0.058, N = 5; Min: 4.62 / Avg: 4.72 / Max: 4.94)
1. (CXX) g++ options: --param -O3 -rdynamic -ldl -lpthread

lzbench

lzbench is an in-memory benchmark of various compressors. The file used for compression is a Linux kernel source tree tarball. Learn more via the OpenBenchmarking.org test page.

lzbench 1.8 - Test: Zstd 8 - Process: Decompression (MB/s, More Is Better)
Core i7 5960X: 1194 (SE +/- 16.50, N = 3; Min: 1161 / Avg: 1193.67 / Max: 1214)
Intel Core i7 5960X: 1204 (SE +/- 7.64, N = 3; Min: 1189 / Avg: 1204 / Max: 1214)
1. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0 - Encoder Mode: Speed 6 Realtime (Frames Per Second, More Is Better)
Core i7 5960X: 12.00 (SE +/- 0.10, N = 3; Min: 11.81 / Avg: 12 / Max: 12.11)
Intel Core i7 5960X: 12.10 (SE +/- 0.06, N = 3; Min: 11.99 / Avg: 12.1 / Max: 12.16)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1 - Model: Person Detection 0106 FP32 - Device: CPU (FPS, More Is Better)
Core i7 5960X: 1.22 (SE +/- 0.00, N = 3; Min: 1.22 / Avg: 1.22 / Max: 1.22)
Intel Core i7 5960X: 1.21 (SE +/- 0.01, N = 3; Min: 1.19 / Avg: 1.21 / Max: 1.22)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
Core i7 5960X: 3.68836 (SE +/- 0.02254, N = 3; Min: 3.65 / Avg: 3.69 / Max: 3.73; MIN: 3.53)
Intel Core i7 5960X: 3.71825 (SE +/- 0.01948, N = 3; Min: 3.68 / Avg: 3.72 / Max: 3.74; MIN: 3.56)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Monkey Audio Encoding

This test times how long it takes to encode a sample WAV file to Monkey's Audio APE format. Learn more via the OpenBenchmarking.org test page.

Monkey Audio Encoding 3.99.6 - WAV To APE (Seconds, Fewer Is Better)
Core i7 5960X: 15.73 (SE +/- 0.06, N = 5; Min: 15.6 / Avg: 15.73 / Max: 15.93)
Intel Core i7 5960X: 15.86 (SE +/- 0.07, N = 5; Min: 15.67 / Avg: 15.86 / Max: 16.05)
1. (CXX) g++ options: -O3 -pedantic -rdynamic -lrt

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 Alpha - Speed: 1 (Frames Per Second, More Is Better)
Core i7 5960X: 0.256 (SE +/- 0.001, N = 3; Min: 0.26 / Avg: 0.26 / Max: 0.26)
Intel Core i7 5960X: 0.254 (SE +/- 0.001, N = 3; Min: 0.25 / Avg: 0.25 / Max: 0.26)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
Core i7 5960X: 11.78 (SE +/- 0.02, N = 3; Min: 11.75 / Avg: 11.78 / Max: 11.83; MIN: 11.57)
Intel Core i7 5960X: 11.87 (SE +/- 0.02, N = 3; Min: 11.84 / Avg: 11.87 / Max: 11.9; MIN: 11.63)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.7.0 - Video Input: Summer Nature 1080p (FPS, More Is Better)
Core i7 5960X: 333.86 (SE +/- 0.74, N = 3; Min: 333 / Avg: 333.86 / Max: 335.34; MIN: 282.31 / MAX: 365.67)
Intel Core i7 5960X: 331.30 (SE +/- 2.29, N = 3; Min: 327.71 / Avg: 331.3 / Max: 335.57; MIN: 244.66 / MAX: 366.61)
1. (CC) gcc options: -pthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
Core i7 5960X: 2625.39 (SE +/- 1.01, N = 3; Min: 2623.38 / Avg: 2625.39 / Max: 2626.6; MIN: 2621.37)
Intel Core i7 5960X: 2605.46 (SE +/- 0.90, N = 3; Min: 2603.73 / Avg: 2605.46 / Max: 2606.78; MIN: 2593.51)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.90 - Blend File: Barbershop - Compute: CPU-Only (Seconds, Fewer Is Better)
Core i7 5960X: 954.31 (SE +/- 1.57, N = 3; Min: 951.19 / Avg: 954.31 / Max: 956.25)
Intel Core i7 5960X: 947.10 (SE +/- 2.21, N = 3; Min: 944.56 / Avg: 947.1 / Max: 951.51)

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP Leukocyte (Seconds, Fewer Is Better)
Core i7 5960X: 191.13 (SE +/- 0.62, N = 3; Min: 190.01 / Avg: 191.13 / Max: 192.14)
Intel Core i7 5960X: 189.70 (SE +/- 1.45, N = 3; Min: 186.81 / Avg: 189.7 / Max: 191.17)
1. (CXX) g++ options: -O2 -lOpenCL

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12 - Settings: ETC1S (Seconds, Fewer Is Better)
Core i7 5960X: 70.25 (SE +/- 0.45, N = 3; Min: 69.47 / Avg: 70.25 / Max: 71.05)
Intel Core i7 5960X: 70.78 (SE +/- 0.36, N = 3; Min: 70.17 / Avg: 70.78 / Max: 71.43)
1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: HWB Color Space (Iterations Per Minute, More Is Better)
Core i7 5960X: 549 (SE +/- 0.58, N = 3; Min: 548 / Avg: 549 / Max: 550)
Intel Core i7 5960X: 545 (SE +/- 2.03, N = 3; Min: 542 / Avg: 545.33 / Max: 549)
1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better)
Core i7 5960X: 5.55 (SE +/- 0.04, N = 3; Min: 5.49 / Avg: 5.55 / Max: 5.62; MIN: 5.44 / MAX: 7.3)
Intel Core i7 5960X: 5.59 (SE +/- 0.02, N = 3; Min: 5.55 / Avg: 5.59 / Max: 5.62; MIN: 5.47 / MAX: 7.38)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer - Model: Asian Dragon (Frames Per Second, More Is Better)
Core i7 5960X: 9.4860 (SE +/- 0.0553, N = 3; Min: 9.38 / Avg: 9.49 / Max: 9.54; MIN: 9.35 / MAX: 9.64)
Intel Core i7 5960X: 9.5522 (SE +/- 0.0238, N = 3; Min: 9.52 / Avg: 9.55 / Max: 9.6; MIN: 9.47 / MAX: 9.7)

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code and offers Cargo-like features. Learn more via the OpenBenchmarking.org test page.

Build2 0.13 - Time To Compile (Seconds, Fewer Is Better)
Core i7 5960X: 180.71 (SE +/- 1.51, N = 3; Min: 177.8 / Avg: 180.71 / Max: 182.91)
Intel Core i7 5960X: 181.94 (SE +/- 1.32, N = 3; Min: 179.3 / Avg: 181.94 / Max: 183.26)

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP Streamcluster (Seconds, Fewer Is Better)
Core i7 5960X: 27.94 (SE +/- 0.31, N = 15; Min: 25.27 / Avg: 27.94 / Max: 29.37)
Intel Core i7 5960X: 28.13 (SE +/- 0.48, N = 3; Min: 27.65 / Avg: 28.13 / Max: 29.08)
1. (CXX) g++ options: -O2 -lOpenCL

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: MEMFD (Bogo Ops/s, More Is Better)
Core i7 5960X: 636.02 (SE +/- 1.13, N = 3; Min: 634.55 / Avg: 636.02 / Max: 638.23)
Intel Core i7 5960X: 631.87 (SE +/- 2.71, N = 3; Min: 626.59 / Avg: 631.87 / Max: 635.54)
1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.4 Alpha - Speed: 5 (Frames Per Second, More Is Better)
Core i7 5960X: 0.768 (SE +/- 0.002, N = 3; Min: 0.76 / Avg: 0.77 / Max: 0.77)
Intel Core i7 5960X: 0.763 (SE +/- 0.006, N = 3; Min: 0.75 / Avg: 0.76 / Max: 0.77)

FFTE

FFTE is a package by Daisuke Takahashi to compute Discrete Fourier Transforms of 1-, 2- and 3- dimensional sequences of length (2^p)*(3^q)*(5^r). Learn more via the OpenBenchmarking.org test page.
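
The supported transform lengths are exactly the 5-smooth numbers. A small sketch (the helper name is illustrative, not part of FFTE) checks whether a given length qualifies:

```python
def is_ffte_length(n: int) -> bool:
    """Return True if n == (2**p) * (3**q) * (5**r) for some p, q, r >= 0,
    i.e. n factors entirely into 2s, 3s, and 5s."""
    if n < 1:
        return False
    for factor in (2, 3, 5):
        while n % factor == 0:
            n //= factor
    return n == 1

# N=256 (this test's per-dimension transform size) is 2**8, so it qualifies.
print(is_ffte_length(256))  # True
print(is_ffte_length(7))    # False
```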

FFTE 7.0 - N=256, 3D Complex FFT Routine (MFLOPS, More Is Better)
Core i7 5960X: 29005.40 (SE +/- 32.84, N = 3; Min: 28954.23 / Avg: 29005.4 / Max: 29066.64)
Intel Core i7 5960X: 29193.83 (SE +/- 35.83, N = 3; Min: 29137.48 / Avg: 29193.83 / Max: 29260.33)
1. (F9X) gfortran options: -O3 -fomit-frame-pointer -fopenmp

GLmark2

This is a test of Linaro's glmark2 port, currently using the X11 OpenGL 2.0 target. GLmark2 is a basic OpenGL benchmark. Learn more via the OpenBenchmarking.org test page.

GLmark2 2020.04 - Resolution: 1920 x 1080 (Score, More Is Better)
Core i7 5960X: 1709 | Intel Core i7 5960X: 1703 | R2: 1698

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.0 - Video Input: Bosphorus 1080p - Video Preset: Very Fast (Frames Per Second, More Is Better)
Core i7 5960X: 34.24 (SE +/- 0.13, N = 3; Min: 33.98 / Avg: 34.24 / Max: 34.37)
Intel Core i7 5960X: 34.46 (SE +/- 0.18, N = 3; Min: 34.12 / Avg: 34.46 / Max: 34.73)
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Kvazaar 2.0 - Video Input: Bosphorus 4K - Video Preset: Very Fast (Frames Per Second, More Is Better)
Core i7 5960X: 9.58 (SE +/- 0.02, N = 3; Min: 9.56 / Avg: 9.58 / Max: 9.61)
Intel Core i7 5960X: 9.64 (SE +/- 0.03, N = 3; Min: 9.6 / Avg: 9.64 / Max: 9.69)
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Opus Codec Encoding

Opus is an open, lossy audio codec designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.

Opus Codec Encoding 1.3.1 - WAV To Opus Encode (Seconds, Fewer Is Better)
Core i7 5960X: 10.83 (SE +/- 0.07, N = 5; Min: 10.58 / Avg: 10.83 / Max: 10.95)
Intel Core i7 5960X: 10.76 (SE +/- 0.07, N = 5; Min: 10.58 / Avg: 10.76 / Max: 10.93)
1. (CXX) g++ options: -fvisibility=hidden -logg -lm

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0 - Encoder Mode: Speed 4 Two-Pass (Frames Per Second, More Is Better)
Core i7 5960X: 1.64 (SE +/- 0.00, N = 3; Min: 1.63 / Avg: 1.64 / Max: 1.64)
Intel Core i7 5960X: 1.63 (SE +/- 0.01, N = 3; Min: 1.61 / Avg: 1.63 / Max: 1.65)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.7.3 - Encoder Speed: 10 (Seconds, Fewer Is Better)
Core i7 5960X: 7.711 (SE +/- 0.020, N = 3; Min: 7.68 / Avg: 7.71 / Max: 7.75)
Intel Core i7 5960X: 7.664 (SE +/- 0.017, N = 3; Min: 7.63 / Avg: 7.66 / Max: 7.69)
1. (CXX) g++ options: -O3 -fPIC

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.
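
Beyond cipher throughput, the cryptsetup benchmark also reports PBKDF2 iterations per second. A rough stdlib-Python analogue of that style of measurement (the function name and parameters are illustrative, not cryptsetup's actual code):

```python
import hashlib
import time

def pbkdf2_iters_per_second(iterations: int = 20_000) -> float:
    """Estimate sustained PBKDF2-HMAC-SHA256 iterations per second,
    similar in spirit to the PBKDF2 lines of `cryptsetup benchmark`."""
    start = time.perf_counter()
    hashlib.pbkdf2_hmac("sha256", b"passphrase", b"0123456789abcdef", iterations)
    elapsed = time.perf_counter() - start
    return iterations / elapsed

print(f"~{pbkdf2_iters_per_second():,.0f} PBKDF2-SHA256 iterations/second")
```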

Cryptsetup - Serpent-XTS 256b Encryption (MiB/s, More Is Better)
Core i7 5960X: 556.2 (SE +/- 2.40, N = 3; Min: 552.2 / Avg: 556.2 / Max: 560.5)
Intel Core i7 5960X: 559.6 (SE +/- 0.27, N = 3; Min: 559.1 / Avg: 559.63 / Max: 560)

OSBench

OSBench is a collection of micro-benchmarks for measuring operating system primitives like time to create threads/processes, launching programs, creating files, and memory allocation. Learn more via the OpenBenchmarking.org test page.
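
The "Ns Per Event" metric is simply total elapsed time divided by the number of operations. A minimal Python sketch of the same measurement idea (OSBench itself is C code timing raw malloc/free, so its numbers exclude the interpreter overhead this version adds):

```python
import time

def ns_per_allocation(events: int = 100_000) -> float:
    """Time repeated small allocations and report nanoseconds per event,
    mirroring how OSBench reports its memory-allocation result."""
    start = time.perf_counter_ns()
    for _ in range(events):
        buf = bytearray(64)  # small allocation, released on the next iteration
    elapsed_ns = time.perf_counter_ns() - start
    return elapsed_ns / events

print(f"{ns_per_allocation():.1f} ns per allocation (Python overhead included)")
```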

OSBench - Test: Memory Allocations (Ns Per Event, Fewer Is Better)
Core i7 5960X: 91.90 (SE +/- 0.02, N = 3; Min: 91.86 / Avg: 91.9 / Max: 91.93)
Intel Core i7 5960X: 91.34 (SE +/- 0.03, N = 3; Min: 91.3 / Avg: 91.34 / Max: 91.41)
1. (CC) gcc options: -lm

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14 - ATPase Simulation - 327,506 Atoms (days/ns, Fewer Is Better)
Core i7 5960X: 2.45825 (SE +/- 0.00589, N = 3; Min: 2.45 / Avg: 2.46 / Max: 2.47)
Intel Core i7 5960X: 2.44335 (SE +/- 0.01433, N = 3; Min: 2.41 / Avg: 2.44 / Max: 2.46)

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Noise-Gaussian (Iterations Per Minute, More Is Better)
Core i7 5960X: 167 (SE +/- 0.33, N = 3; Min: 166 / Avg: 166.67 / Max: 167)
Intel Core i7 5960X: 166
1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.7.0 - Video Input: Chimera 1080p (FPS, More Is Better)
Core i7 5960X: 404.38 (SE +/- 5.24, N = 3; Min: 398.84 / Avg: 404.38 / Max: 414.85; MIN: 304.9 / MAX: 544.84)
Intel Core i7 5960X: 406.81 (SE +/- 3.37, N = 3; Min: 400.44 / Avg: 406.81 / Max: 411.9; MIN: 302.07 / MAX: 572.73)
1. (CC) gcc options: -pthread

toyBrot Fractal Generator

ToyBrot is a Mandelbrot fractal generator supporting C++ threads/tasks, OpenMP, Threaded Building Blocks, and other targets. Learn more via the OpenBenchmarking.org test page.
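
The underlying workload is the classic escape-time Mandelbrot iteration. A minimal single-threaded Python sketch of that core loop (toyBrot itself is C++ and spreads this work across TBB tasks and threads):

```python
def mandelbrot_iterations(c: complex, max_iter: int = 100) -> int:
    """Return how many iterations z = z*z + c takes to escape |z| > 2,
    or max_iter if the point never escapes (i.e. lies in the set)."""
    z = 0j
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return i
    return max_iter

# Points inside the set never escape; points far outside escape immediately.
print(mandelbrot_iterations(0 + 0j))  # 100
print(mandelbrot_iterations(2 + 2j))  # 0
```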

toyBrot Fractal Generator - Implementation: TBB (ms, Fewer Is Better)
Core i7 5960X: 154244 (SE +/- 1315.36, N = 3; Min: 152631 / Avg: 154243.67 / Max: 156850)
Intel Core i7 5960X: 153363 (SE +/- 263.82, N = 3; Min: 152868 / Avg: 153362.67 / Max: 153769)
1. (CXX) g++ options: -lpthread -isystem -fexceptions -std=c++14

GNU Octave Benchmark

This test profile measures how long it takes to complete several reference GNU Octave files via octave-benchmark. GNU Octave is used for numerical computations and is an open-source alternative to MATLAB. Learn more via the OpenBenchmarking.org test page.

GNU Octave Benchmark 5.2.0 (Seconds, Fewer Is Better)
Core i7 5960X: 10.12 (SE +/- 0.08, N = 5; Min: 10.01 / Avg: 10.12 / Max: 10.44)
Intel Core i7 5960X: 10.07 (SE +/- 0.07, N = 5; Min: 9.94 / Avg: 10.07 / Max: 10.32)

Timed MAFFT Alignment

This test performs an alignment of 100 pyruvate decarboxylase sequences. Learn more via the OpenBenchmarking.org test page.

Timed MAFFT Alignment 7.471 - Multiple Sequence Alignment - LSU RNA (Seconds, Fewer Is Better)
Core i7 5960X: 12.10 (SE +/- 0.07, N = 3; Min: 12.03 / Avg: 12.1 / Max: 12.24)
Intel Core i7 5960X: 12.03 (SE +/- 0.13, N = 3; Min: 11.87 / Avg: 12.03 / Max: 12.3)
1. (CC) gcc options: -std=c99 -O3 -lm -lpthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support and analyzing the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1 - Model: Face Detection 0106 FP16 - Device: CPU (FPS, More Is Better)
Core i7 5960X: 1.77 (SE +/- 0.00, N = 3; Min: 1.77 / Avg: 1.77 / Max: 1.78)
Intel Core i7 5960X: 1.78 (SE +/- 0.00, N = 3; Min: 1.77 / Avg: 1.78 / Max: 1.78)

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.1 - Tuning: VMAF Optimized - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
Core i7 5960X: 98.73 (SE +/- 1.28, N = 5; Min: 93.76 / Avg: 98.73 / Max: 100.82)
Intel Core i7 5960X: 98.18 (SE +/- 1.16, N = 5; Min: 93.57 / Avg: 98.18 / Max: 99.9)
1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support and analyzing the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1 - Model: Age Gender Recognition Retail 0013 FP32 - Device: CPU (FPS, More Is Better)
Core i7 5960X: 4844.10 (SE +/- 6.14, N = 3; Min: 4835.29 / Avg: 4844.1 / Max: 4855.92)
Intel Core i7 5960X: 4817.69 (SE +/- 15.62, N = 3; Min: 4786.66 / Avg: 4817.69 / Max: 4836.37)

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: django_template (Milliseconds, Fewer Is Better)
Core i7 5960X: 57.8 (SE +/- 0.09, N = 3; Min: 57.6 / Avg: 57.77 / Max: 57.9)
Intel Core i7 5960X: 58.1 (SE +/- 0.18, N = 3; Min: 57.8 / Avg: 58.13 / Max: 58.4)

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless, Highest Compression (Encode Time - Seconds, Fewer Is Better)
Core i7 5960X: 45.24 (SE +/- 0.08, N = 3; Min: 45.13 / Avg: 45.24 / Max: 45.4)
Intel Core i7 5960X: 45.47 (SE +/- 0.05, N = 3; Min: 45.41 / Avg: 45.47 / Max: 45.56)
1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.7.3 - Encoder Speed: 2 (Seconds, Fewer Is Better)
Core i7 5960X: 82.63 (SE +/- 0.30, N = 3; Min: 82.05 / Avg: 82.63 / Max: 83.05)
Intel Core i7 5960X: 82.21 (SE +/- 0.18, N = 3; Min: 81.85 / Avg: 82.21 / Max: 82.43)
1. (CXX) g++ options: -O3 -fPIC

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
Core i7 5960X: 2.66455 (SE +/- 0.02103, N = 3; Min: 2.64 / Avg: 2.66 / Max: 2.71; MIN: 2.62)
Intel Core i7 5960X: 2.67815 (SE +/- 0.00555, N = 3; Min: 2.67 / Avg: 2.68 / Max: 2.69; MIN: 2.64)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: Jython (msec, Fewer Is Better)
Core i7 5960X: 5365 (SE +/- 68.02, N = 4; Min: 5165 / Avg: 5364.5 / Max: 5469)
Intel Core i7 5960X: 5338 (SE +/- 58.68, N = 7; Min: 5192 / Avg: 5338 / Max: 5664)

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: 2to3 (Milliseconds, Fewer Is Better)
Core i7 5960X: 398 (SE +/- 0.88, N = 3; Min: 396 / Avg: 397.67 / Max: 399)
Intel Core i7 5960X: 396 (SE +/- 0.58, N = 3; Min: 395 / Avg: 396 / Max: 397)

Dolfyn

Dolfyn is a Computational Fluid Dynamics (CFD) code employing modern numerical simulation techniques. The Dolfyn test profile measures the execution time of the computational fluid dynamics demos bundled with Dolfyn. Learn more via the OpenBenchmarking.org test page.

Dolfyn 0.527 - Computational Fluid Dynamics (Seconds, Fewer Is Better)
Core i7 5960X: 25.91 (SE +/- 0.25, N = 3; Min: 25.46 / Avg: 25.91 / Max: 26.32)
Intel Core i7 5960X: 26.04 (SE +/- 0.09, N = 3; Min: 25.94 / Avg: 26.04 / Max: 26.22)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: Matrix Math (Bogo Ops/s, More Is Better)
Core i7 5960X: 34048.29 (SE +/- 132.33, N = 3; Min: 33792.57 / Avg: 34048.29 / Max: 34235.19)
Intel Core i7 5960X: 33878.70 (SE +/- 38.46, N = 3; Min: 33803.92 / Avg: 33878.7 / Max: 33931.67)
1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: NASNet Mobile (Microseconds, Fewer Is Better)
Core i7 5960X: 254023 (SE +/- 387.62, N = 3; Min: 253249 / Avg: 254023 / Max: 254448)
Intel Core i7 5960X: 252760 (SE +/- 1437.23, N = 3; Min: 249920 / Avg: 252760.33 / Max: 254563)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: Glibc C String Functions (Bogo Ops/s, More Is Better)
Core i7 5960X: 899599.36 (SE +/- 5106.20, N = 3; Min: 889467.44 / Avg: 899599.36 / Max: 905773.51)
Intel Core i7 5960X: 895166.04 (SE +/- 5538.77, N = 3; Min: 889290.45 / Avg: 895166.04 / Max: 906236.61)
1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.
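
The regex_compile result below times how quickly Python builds regular expressions. A small stdlib sketch in the same spirit (this is NOT PyPerformance's harness; the function name and pattern are illustrative):

```python
import re
import timeit

def time_regex_compile(repeats: int = 1000) -> float:
    """Time re.compile on a non-trivial pattern, in the spirit of
    PyPerformance's regex_compile benchmark."""
    pattern = r"(?P<scheme>https?)://(?P<host>[\w.-]+)(?::(?P<port>\d+))?(?P<path>/\S*)?"
    def compile_once():
        re.purge()  # clear the compiled-pattern cache so each call does real work
        re.compile(pattern)
    return timeit.timeit(compile_once, number=repeats)

seconds = time_regex_compile()
print(f"{seconds * 1000:.1f} ms for 1000 compiles")
```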

PyPerformance 1.0.0 - Benchmark: regex_compile (Milliseconds, Fewer Is Better)
Core i7 5960X: 206
Intel Core i7 5960X: 205

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer ISPC - Model: Crown (Frames Per Second, More Is Better)
Core i7 5960X: 9.2421 (SE +/- 0.0302, N = 3; Min: 9.19 / Avg: 9.24 / Max: 9.29; MIN: 9.14 / MAX: 9.43)
Intel Core i7 5960X: 9.1980 (SE +/- 0.0099, N = 3; Min: 9.18 / Avg: 9.2 / Max: 9.22; MIN: 9.13 / MAX: 9.33)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
Core i7 5960X: 3.28920 (SE +/- 0.00846, N = 3; Min: 3.27 / Avg: 3.29 / Max: 3.3; MIN: 3.24)
Intel Core i7 5960X: 3.30497 (SE +/- 0.00412, N = 3; Min: 3.3 / Avg: 3.3 / Max: 3.31; MIN: 3.27)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: resnet18 (ms, Fewer Is Better)
Core i7 5960X: 14.87 (SE +/- 0.01, N = 3; Min: 14.85 / Avg: 14.87 / Max: 14.88; MIN: 14.7 / MAX: 15.33)
Intel Core i7 5960X: 14.80 (SE +/- 0.01, N = 3; Min: 14.77 / Avg: 14.8 / Max: 14.82; MIN: 14.66 / MAX: 16.71)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

librsvg

RSVG/librsvg is an SVG vector graphics library. This test profile times how long it takes to complete various operations by rsvg-convert. Learn more via the OpenBenchmarking.org test page.

librsvg - Operation: SVG Files To PNG (Seconds, Fewer Is Better)
Core i7 5960X: 35.81 (SE +/- 0.06, N = 3; Min: 35.72 / Avg: 35.81 / Max: 35.94)
Intel Core i7 5960X: 35.64 (SE +/- 0.10, N = 3; Min: 35.47 / Avg: 35.64 / Max: 35.8)
1. rsvg-convert version 2.48.9

Rodinia

Rodinia is a suite focused on accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA, and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP HotSpot3D (Seconds, Fewer Is Better)
Core i7 5960X: 128.31 (SE +/- 0.03, N = 3; Min: 128.28 / Avg: 128.31 / Max: 128.37)
Intel Core i7 5960X: 128.91 (SE +/- 0.45, N = 3; Min: 128.01 / Avg: 128.91 / Max: 129.46)
1. (CXX) g++ options: -O2 -lOpenCL

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.
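
The measurement shape is: compress the sample once, then time how fast it decompresses. A stdlib-Python sketch of that pattern follows; note that LZ4 has no standard-library binding, so zlib stands in here purely to illustrate the measurement, and the function name and sizes are illustrative:

```python
import time
import zlib

def decompression_mb_per_s(level: int = 9, size_mb: int = 8) -> float:
    """Compress a sample buffer at the given level, then time decompression
    and report MB/s -- the same shape of measurement as the LZ4 test.
    (zlib stands in for LZ4, which has no stdlib binding.)"""
    data = (b"The quick brown fox jumps over the lazy dog. " * 1000) * (size_mb * 23)
    compressed = zlib.compress(data, level)
    start = time.perf_counter()
    restored = zlib.decompress(compressed)
    elapsed = time.perf_counter() - start
    assert restored == data  # round-trip sanity check
    return (len(data) / 1e6) / elapsed

print(f"{decompression_mb_per_s():.0f} MB/s decompression (zlib level 9)")
```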

LZ4 Compression 1.9.3 - Compression Level: 9 - Decompression Speed (MB/s, More Is Better)
Core i7 5960X: 6736.3 (SE +/- 2.69, N = 3; Min: 6732.3 / Avg: 6736.27 / Max: 6741.4)
Intel Core i7 5960X: 6705.3 (SE +/- 6.70, N = 3; Min: 6692.1 / Avg: 6705.27 / Max: 6714)
1. (CC) gcc options: -O3

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.

InfluxDB 1.8.2 - Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, More Is Better)
Core i7 5960X: 1025635.2 (SE +/- 1129.39, N = 3; Min: 1023744.8 / Avg: 1025635.23 / Max: 1027651.1)
Intel Core i7 5960X: 1021035.5 (SE +/- 2872.10, N = 3; Min: 1017212 / Avg: 1021035.47 / Max: 1026659.7)

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - Serpent-XTS 512b Encryption (MiB/s, More Is Better)
Core i7 5960X: 557.7 (SE +/- 0.87, N = 3; Min: 556.5 / Avg: 557.7 / Max: 559.4)
Intel Core i7 5960X: 560.2 (SE +/- 0.15, N = 3; Min: 559.9 / Avg: 560.17 / Max: 560.4)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: Glibc Qsort Data Sorting (Bogo Ops/s, More Is Better)
Core i7 5960X: 112.14 (SE +/- 0.15, N = 3; Min: 111.87 / Avg: 112.14 / Max: 112.37)
Intel Core i7 5960X: 111.64 (SE +/- 0.17, N = 3; Min: 111.33 / Avg: 111.64 / Max: 111.9)
1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
Core i7 5960X: 2621.57 (SE +/- 1.00, N = 3; Min: 2619.68 / Avg: 2621.57 / Max: 2623.09; MIN: 2617.66)
Intel Core i7 5960X: 2609.96 (SE +/- 2.55, N = 3; Min: 2606.74 / Avg: 2609.96 / Max: 2614.99; MIN: 2604.86)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

XZ Compression

This test measures the time needed to compress a sample file (an Ubuntu file-system image) using XZ compression. Learn more via the OpenBenchmarking.org test page.
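
Python's standard library exposes the same LZMA/XZ compressor, so the essence of this test can be sketched in a few lines (the function name and sample data are illustrative; the real test compresses an Ubuntu file-system image):

```python
import lzma
import time

def xz_compress_time(data: bytes, preset: int = 9) -> float:
    """Time XZ (LZMA) compression at the given preset, mirroring the
    level-9 compression run this test performs."""
    start = time.perf_counter()
    compressed = lzma.compress(data, preset=preset)
    elapsed = time.perf_counter() - start
    assert lzma.decompress(compressed) == data  # round-trip sanity check
    return elapsed

sample = b"some highly repetitive sample data " * 100_000  # ~3.5 MB
print(f"level 9: {xz_compress_time(sample):.2f} s")
```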

XZ Compression 5.2.4 - Compressing ubuntu-16.04.3-server-i386.img, Compression Level 9 (Seconds, Fewer Is Better)
Core i7 5960X: 32.27 (SE +/- 0.07, N = 3; Min: 32.18 / Avg: 32.27 / Max: 32.4)
Intel Core i7 5960X: 32.13 (SE +/- 0.05, N = 3; Min: 32.04 / Avg: 32.13 / Max: 32.22)
1. (CC) gcc options: -pthread -fvisibility=hidden -O2

OSBench

OSBench is a collection of micro-benchmarks for measuring operating system primitives like time to create threads/processes, launching programs, creating files, and memory allocation. Learn more via the OpenBenchmarking.org test page.
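
The Create Threads result below reports microseconds per thread created. The same measurement idea can be sketched with Python's threading module (OSBench itself is C calling pthread primitives directly, so its numbers exclude the interpreter overhead this adds):

```python
import threading
import time

def us_per_thread_create(count: int = 200) -> float:
    """Measure the average cost of creating and joining a thread,
    reported in microseconds per event like OSBench's Create Threads test."""
    start = time.perf_counter()
    for _ in range(count):
        t = threading.Thread(target=lambda: None)
        t.start()
        t.join()
    elapsed = time.perf_counter() - start
    return elapsed / count * 1e6

print(f"{us_per_thread_create():.1f} us per thread create/join")
```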

OSBench - Test: Create Threads (us Per Event, Fewer Is Better)
Core i7 5960X: 16.16 (SE +/- 0.03, N = 3; Min: 16.12 / Avg: 16.16 / Max: 16.21)
Intel Core i7 5960X: 16.23 (SE +/- 0.05, N = 3; Min: 16.14 / Avg: 16.23 / Max: 16.3)
1. (CC) gcc options: -lm

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - Twofish-XTS 256b Encryption (MiB/s, More Is Better)
Core i7 5960X: 347.9 (SE +/- 1.25, N = 3; Min: 345.4 / Avg: 347.9 / Max: 349.2)
Intel Core i7 5960X: 349.4 (SE +/- 0.20, N = 3; Min: 349 / Avg: 349.4 / Max: 349.6)

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: pathlib (Milliseconds, Fewer Is Better)
Core i7 5960X: 23.2 (SE +/- 0.03, N = 3; Min: 23.2 / Avg: 23.23 / Max: 23.3)
Intel Core i7 5960X: 23.3 (SE +/- 0.00, N = 3; Min: 23.3 / Avg: 23.3 / Max: 23.3)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
Core i7 5960X: 14.06 (SE +/- 0.01, N = 3; Min: 14.05 / Avg: 14.06 / Max: 14.09; MIN: 13.91)
Intel Core i7 5960X: 14.00 (SE +/- 0.01, N = 3; Min: 13.99 / Avg: 14 / Max: 14.03; MIN: 13.88)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.0.1 - Test: Masskrug - Acceleration: CPU-only (Seconds, Fewer Is Better)
Core i7 5960X: 6.902 (SE +/- 0.002, N = 3; Min: 6.9 / Avg: 6.9 / Max: 6.91)
Intel Core i7 5960X: 6.931 (SE +/- 0.024, N = 3; Min: 6.89 / Avg: 6.93 / Max: 6.97)

Apache CouchDB

This is a bulk insertion benchmark of Apache CouchDB. CouchDB is a document-oriented NoSQL database implemented in Erlang. Learn more via the OpenBenchmarking.org test page.

Apache CouchDB 3.1.1 - Bulk Size: 100 - Inserts: 1000 - Rounds: 24 (Seconds, Fewer Is Better)
Core i7 5960X: 157.53 (SE +/- 1.80, N = 3; Min: 154.35 / Avg: 157.53 / Max: 160.56)
Intel Core i7 5960X: 156.87 (SE +/- 1.28, N = 3; Min: 155.41 / Avg: 156.87 / Max: 159.41)
1. (CXX) g++ options: -std=c++14 -lmozjs-68 -lm -lerl_interface -lei -fPIC -MMD

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total time reported by benchdnn. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
Core i7 5960X: 4817.88 (SE +/- 0.78, N = 3; Min: 4816.39 / Avg: 4817.88 / Max: 4819; MIN: 4812.81)
Intel Core i7 5960X: 4798.14 (SE +/- 1.20, N = 3; Min: 4796.6 / Avg: 4798.14 / Max: 4800.5; MIN: 4792.84)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Highest Compression (Encode Time - Seconds, Fewer Is Better)
Core i7 5960X: 9.276 (SE +/- 0.012, N = 3; Min: 9.25 / Avg: 9.28 / Max: 9.29)
Intel Core i7 5960X: 9.238 (SE +/- 0.012, N = 3; Min: 9.22 / Avg: 9.24 / Max: 9.26)
1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

RNNoise

RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26 minute long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.

RNNoise 2020-06-28 (Seconds, Fewer Is Better)
Core i7 5960X: 28.84 (SE +/- 0.41, N = 4; Min: 27.62 / Avg: 28.84 / Max: 29.3)
Intel Core i7 5960X: 28.72 (SE +/- 0.40, N = 4; Min: 27.53 / Avg: 28.72 / Max: 29.22)
1. (CC) gcc options: -O2 -pedantic -fvisibility=hidden

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.5 - Compression Level: 3 (MB/s, More Is Better)
Core i7 5960X: 3389.4 (SE +/- 9.53, N = 3; Min: 3374.8 / Avg: 3389.4 / Max: 3407.3)
Intel Core i7 5960X: 3375.8 (SE +/- 5.25, N = 3; Min: 3370.4 / Avg: 3375.8 / Max: 3386.3)
1. (CC) gcc options: -O3 -pthread -lz -llzma

lzbench

lzbench is an in-memory benchmark of various compressors. The file used for compression is a Linux kernel source tree tarball. Learn more via the OpenBenchmarking.org test page.

lzbench 1.8 - Test: Brotli 0 - Process: Decompression (MB/s, More Is Better)
Core i7 5960X: 513
Intel Core i7 5960X: 511 (SE +/- 2.19, N = 3; Min: 507 / Avg: 511.33 / Max: 514)
1. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3
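lzbench measures each codec entirely in memory on a pre-loaded buffer. A minimal sketch of that measurement pattern, using zlib from the Python standard library as a stand-in (the Brotli, Zstd, and Crush codecs benchmarked here do not ship with Python):

```python
import time
import zlib

def decompress_throughput_mb_s(data: bytes, rounds: int = 5) -> float:
    """In-memory decompression throughput in MB/s, lzbench-style:
    compress once up front, then time repeated decompressions."""
    comp = zlib.compress(data, level=6)
    start = time.perf_counter()
    for _ in range(rounds):
        zlib.decompress(comp)
    elapsed = time.perf_counter() - start
    return len(data) * rounds / elapsed / 1e6

data = b"phoronix test suite " * 50_000   # ~1 MB of compressible input
speed = decompress_throughput_mb_s(data)
```

Throughput is expressed relative to the uncompressed size, which is how lzbench reports its MB/s figures.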

Timed Eigen Compilation

This test times how long it takes to build all Eigen examples. The Eigen examples are compiled serially. Eigen is a C++ template library for linear algebra. Learn more via the OpenBenchmarking.org test page.

Timed Eigen Compilation 3.3.9 - Time To Compile (Seconds, Fewer Is Better)
Core i7 5960X: 104.72 (SE +/- 0.20, N = 3; Min: 104.48 / Avg: 104.72 / Max: 105.12)
Intel Core i7 5960X: 105.12 (SE +/- 0.20, N = 3; Min: 104.81 / Avg: 105.12 / Max: 105.51)

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8 - Encoder Mode: Enc Mode 8 - Input: 1080p (Frames Per Second, More Is Better)
Core i7 5960X: 18.94 (SE +/- 0.01, N = 3; Min: 18.93 / Avg: 18.94 / Max: 18.96)
Intel Core i7 5960X: 18.86 (SE +/- 0.08, N = 3; Min: 18.78 / Avg: 18.86 / Max: 19.03)
1. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1 - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (FPS, More Is Better)
Core i7 5960X: 4833.49 (SE +/- 8.71, N = 3; Min: 4817.65 / Avg: 4833.49 / Max: 4847.68)
Intel Core i7 5960X: 4814.94 (SE +/- 11.64, N = 3; Min: 4794.94 / Avg: 4814.94 / Max: 4835.26)

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: pickle_pure_python (Milliseconds, Fewer Is Better)
Core i7 5960X: 522
Intel Core i7 5960X: 520
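The pickle_pure_python benchmark times the pure-Python pickle implementation rather than the C accelerator. A rough sketch, using CPython's internal `pickle._Pickler` class (an implementation detail, not public API) as one way to bypass the accelerator:

```python
import io
import pickle
import time

def pure_python_pickle_ms(obj, rounds: int = 100) -> float:
    """Time repeated pure-Python pickling of obj, in milliseconds.
    pickle._Pickler is CPython's pure-Python Pickler, so the fast
    C _pickle implementation is never invoked."""
    start = time.perf_counter()
    for _ in range(rounds):
        buf = io.BytesIO()
        pickle._Pickler(buf, protocol=pickle.HIGHEST_PROTOCOL).dump(obj)
    return (time.perf_counter() - start) * 1000

sample = {"key%d" % i: list(range(10)) for i in range(100)}
elapsed_ms = pure_python_pickle_ms(sample)
```

The pure-Python path is dramatically slower than `pickle.dumps`, which is the point: it exercises interpreter throughput rather than the C extension.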

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total time reported by benchdnn. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
Core i7 5960X: 9.66102 (SE +/- 0.00401, N = 3; Min: 9.65 / Avg: 9.66 / Max: 9.67; MIN: 9.6)
Intel Core i7 5960X: 9.69747 (SE +/- 0.01823, N = 3; Min: 9.68 / Avg: 9.7 / Max: 9.73; MIN: 9.64)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: regnety_400m (ms, Fewer Is Better)
Core i7 5960X: 21.69 (SE +/- 0.10, N = 3; Min: 21.51 / Avg: 21.69 / Max: 21.84; MIN: 21.42 / MAX: 38.41)
Intel Core i7 5960X: 21.61 (SE +/- 0.07, N = 3; Min: 21.49 / Avg: 21.61 / Max: 21.73; MIN: 21.39 / MAX: 22.14)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.

InfluxDB 1.8.2 - Concurrent Streams: 1024 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, More Is Better)
Core i7 5960X: 1091771.7 (SE +/- 2519.30, N = 3; Min: 1089119.1 / Avg: 1091771.7 / Max: 1096807.9)
Intel Core i7 5960X: 1087810.2 (SE +/- 5561.09, N = 3; Min: 1079115.1 / Avg: 1087810.23 / Max: 1098163.9)

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - AES-XTS 512b Decryption (MiB/s, More Is Better)
Core i7 5960X: 1431.4 (SE +/- 3.33, N = 3; Min: 1426.2 / Avg: 1431.4 / Max: 1437.6)
Intel Core i7 5960X: 1436.5 (SE +/- 0.67, N = 3; Min: 1435.2 / Avg: 1436.5 / Max: 1437.4)

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12 - Settings: UASTC Level 0 (Seconds, Fewer Is Better)
Core i7 5960X: 11.26 (SE +/- 0.06, N = 3; Min: 11.15 / Avg: 11.25 / Max: 11.33)
Intel Core i7 5960X: 11.30 (SE +/- 0.04, N = 3; Min: 11.22 / Avg: 11.3 / Max: 11.34)
1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: mnasnet (ms, Fewer Is Better)
Core i7 5960X: 5.66 (SE +/- 0.01, N = 3; Min: 5.65 / Avg: 5.66 / Max: 5.68; MIN: 5.53 / MAX: 6.76)
Intel Core i7 5960X: 5.68 (SE +/- 0.02, N = 3; Min: 5.65 / Avg: 5.68 / Max: 5.72; MIN: 5.57 / MAX: 8.34)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: mobilenet-v1-1.0 (ms, Fewer Is Better)
Core i7 5960X: 9.967 (SE +/- 0.004, N = 3; Min: 9.96 / Avg: 9.97 / Max: 9.97; MIN: 9.93 / MAX: 16.08)
Intel Core i7 5960X: 10.002 (SE +/- 0.053, N = 3; Min: 9.95 / Avg: 10 / Max: 10.11; MIN: 9.92 / MAX: 58.94)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial timeseries data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

Numenta Anomaly Benchmark 1.1 - Detector: Relative Entropy (Seconds, Fewer Is Better)
Core i7 5960X: 30.11 (SE +/- 0.47, N = 3; Min: 29.24 / Avg: 30.11 / Max: 30.87)
Intel Core i7 5960X: 30.21 (SE +/- 0.24, N = 3; Min: 29.82 / Avg: 30.21 / Max: 30.66)

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.0 - Video Input: Bosphorus 4K - Video Preset: Ultra Fast (Frames Per Second, More Is Better)
Core i7 5960X: 17.33 (SE +/- 0.07, N = 3; Min: 17.2 / Avg: 17.33 / Max: 17.42)
Intel Core i7 5960X: 17.27 (SE +/- 0.16, N = 3; Min: 16.96 / Avg: 17.27 / Max: 17.46)
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP CFD Solver (Seconds, Fewer Is Better)
Core i7 5960X: 34.16 (SE +/- 0.09, N = 3; Min: 34 / Avg: 34.16 / Max: 34.32)
Intel Core i7 5960X: 34.27 (SE +/- 0.07, N = 3; Min: 34.14 / Avg: 34.27 / Max: 34.36)
1. (CXX) g++ options: -O2 -lOpenCL

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: json_loads (Milliseconds, Fewer Is Better)
Core i7 5960X: 29.5 (SE +/- 0.00, N = 3; Min: 29.5 / Avg: 29.5 / Max: 29.5)
Intel Core i7 5960X: 29.6 (SE +/- 0.10, N = 3; Min: 29.5 / Avg: 29.6 / Max: 29.8)
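The json_loads benchmark repeatedly deserializes a fixed JSON document; a minimal sketch of that pattern (the document below is illustrative, not the one pyperformance ships):

```python
import json
import time

def json_loads_ms(doc: str, rounds: int = 1000) -> float:
    """Time repeated json.loads calls on a fixed document, in milliseconds."""
    start = time.perf_counter()
    for _ in range(rounds):
        json.loads(doc)
    return (time.perf_counter() - start) * 1000

doc = json.dumps({"users": [{"id": i, "name": "u%d" % i} for i in range(100)]})
elapsed_ms = json_loads_ms(doc)
```

Because the input is fixed, run-to-run variance is tiny, consistent with the SE +/- 0.00 shown for the first system.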

Darktable

Darktable is an open-source photography / workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.0.1 - Test: Boat - Acceleration: CPU-only (Seconds, Fewer Is Better)
Core i7 5960X: 11.83 (SE +/- 0.01, N = 3; Min: 11.82 / Avg: 11.83 / Max: 11.84)
Intel Core i7 5960X: 11.87 (SE +/- 0.02, N = 3; Min: 11.83 / Avg: 11.87 / Max: 11.9)

Numenta Anomaly Benchmark

Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial timeseries data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.

Numenta Anomaly Benchmark 1.1 - Detector: Windowed Gaussian (Seconds, Fewer Is Better)
Core i7 5960X: 15.69 (SE +/- 0.09, N = 3; Min: 15.52 / Avg: 15.69 / Max: 15.82)
Intel Core i7 5960X: 15.64 (SE +/- 0.08, N = 3; Min: 15.5 / Avg: 15.64 / Max: 15.78)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: yolov4-tiny (ms, Fewer Is Better)
Core i7 5960X: 30.71 (SE +/- 0.44, N = 3; Min: 30.21 / Avg: 30.71 / Max: 31.59; MIN: 29.82 / MAX: 34.1)
Intel Core i7 5960X: 30.61 (SE +/- 0.56, N = 3; Min: 29.81 / Avg: 30.61 / Max: 31.68; MIN: 29.52 / MAX: 32.42)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

RawTherapee

RawTherapee is a cross-platform, open-source multi-threaded RAW image processing program. Learn more via the OpenBenchmarking.org test page.

RawTherapee - Total Benchmark Time (Seconds, Fewer Is Better)
Core i7 5960X: 82.03 (SE +/- 0.10, N = 3; Min: 81.85 / Avg: 82.03 / Max: 82.2)
Intel Core i7 5960X: 81.76 (SE +/- 0.13, N = 3; Min: 81.62 / Avg: 81.76 / Max: 82.03)
1. RawTherapee, version 5.8, command line.

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.7.3 - Encoder Speed: 8 (Seconds, Fewer Is Better)
Core i7 5960X: 8.064 (SE +/- 0.032, N = 3; Min: 8 / Avg: 8.06 / Max: 8.11)
Intel Core i7 5960X: 8.090 (SE +/- 0.005, N = 3; Min: 8.08 / Avg: 8.09 / Max: 8.1)
1. (CXX) g++ options: -O3 -fPIC

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: Memory Copying (Bogo Ops/s, More Is Better)
Core i7 5960X: 2184.91 (SE +/- 1.48, N = 3; Min: 2182.45 / Avg: 2184.91 / Max: 2187.55)
Intel Core i7 5960X: 2178.06 (SE +/- 2.60, N = 3; Min: 2173.09 / Avg: 2178.06 / Max: 2181.86)
1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc
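Stress-ng's memory-copying stressor shuffles buffers with tuned C loops. A crude stand-in that measures bulk copy bandwidth from Python (bytearray slice assignment, not the stressor's actual memcpy variants) shows the measurement idea:

```python
import time

def memcopy_mb_s(size_mb: int = 8, rounds: int = 5) -> float:
    """Rough memory-copy bandwidth in MB/s via bytearray slice assignment.
    A stand-in for stress-ng's memcpy stressor, which uses C loops."""
    src = bytearray(size_mb * 1024 * 1024)
    dst = bytearray(len(src))
    start = time.perf_counter()
    for _ in range(rounds):
        dst[:] = src                      # one full buffer copy per round
    return size_mb * rounds / (time.perf_counter() - start)

bandwidth = memcopy_mb_s()
```

Stress-ng reports "bogo ops" (stressor iterations) per second rather than MB/s, so absolute numbers are only comparable within the same stressor.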

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: vgg16 (ms, Fewer Is Better)
Core i7 5960X: 51.20 (SE +/- 0.09, N = 3; Min: 51.05 / Avg: 51.2 / Max: 51.37; MIN: 50.82 / MAX: 64.91)
Intel Core i7 5960X: 51.04 (SE +/- 0.08, N = 3; Min: 50.9 / Avg: 51.04 / Max: 51.17; MIN: 50.67 / MAX: 59)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20201218 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better)
Core i7 5960X: 6.43 (SE +/- 0.02, N = 3; Min: 6.4 / Avg: 6.43 / Max: 6.45; MIN: 6.32 / MAX: 8.18)
Intel Core i7 5960X: 6.45 (SE +/- 0.02, N = 3; Min: 6.41 / Avg: 6.45 / Max: 6.48; MIN: 6.32 / MAX: 9.51)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer - Model: Crown (Frames Per Second, More Is Better)
Core i7 5960X: 8.1345 (SE +/- 0.0271, N = 3; Min: 8.1 / Avg: 8.13 / Max: 8.19; MIN: 8.06 / MAX: 8.27)
Intel Core i7 5960X: 8.1595 (SE +/- 0.0253, N = 3; Min: 8.12 / Avg: 8.16 / Max: 8.21; MIN: 8.08 / MAX: 8.31)

Caffe

This is a benchmark of the Caffe deep learning framework; it currently supports the AlexNet and GoogLeNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

Caffe 2020-02-13 - Model: AlexNet - Acceleration: CPU - Iterations: 100 (Milli-Seconds, Fewer Is Better)
Core i7 5960X: 64264 (SE +/- 178.22, N = 3; Min: 63908 / Avg: 64264.33 / Max: 64450)
Intel Core i7 5960X: 64070 (SE +/- 98.17, N = 3; Min: 63898 / Avg: 64070 / Max: 64238)
1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100 (Encode Time - Seconds, Fewer Is Better)
Core i7 5960X: 3.049 (SE +/- 0.001, N = 3; Min: 3.05 / Avg: 3.05 / Max: 3.05)
Intel Core i7 5960X: 3.058 (SE +/- 0.004, N = 3; Min: 3.05 / Avg: 3.06 / Max: 3.06)
1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.7.0 - Video Input: Summer Nature 4K (FPS, More Is Better)
Core i7 5960X: 119.70 (SE +/- 0.20, N = 3; Min: 119.48 / Avg: 119.7 / Max: 120.09; MIN: 111.56 / MAX: 136.4)
Intel Core i7 5960X: 119.35 (SE +/- 1.06, N = 3; Min: 117.43 / Avg: 119.35 / Max: 121.09; MIN: 99.66 / MAX: 136.1)
1. (CC) gcc options: -pthread

KeyDB

A benchmark of KeyDB as a multi-threaded fork of the Redis server. The KeyDB benchmark is conducted using memtier-benchmark. Learn more via the OpenBenchmarking.org test page.

KeyDB 6.0.16 (Ops/sec, More Is Better)
Core i7 5960X: 450609.36 (SE +/- 1058.53, N = 3; Min: 448514.95 / Avg: 450609.36 / Max: 451924.09)
Intel Core i7 5960X: 451926.80 (SE +/- 833.34, N = 3; Min: 450631.04 / Avg: 451926.8 / Max: 453482.48)
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Timed GDB GNU Debugger Compilation

This test times how long it takes to build the GNU Debugger (GDB) in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed GDB GNU Debugger Compilation 9.1 - Time To Compile (Seconds, Fewer Is Better)
Core i7 5960X: 168.22 (SE +/- 0.24, N = 3; Min: 167.8 / Avg: 168.22 / Max: 168.61)
Intel Core i7 5960X: 168.72 (SE +/- 0.20, N = 3; Min: 168.32 / Avg: 168.71 / Max: 168.93)

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 1 - Decompression Speed (MB/s, More Is Better)
Core i7 5960X: 6931.3 (SE +/- 3.10, N = 3; Min: 6925.1 / Avg: 6931.3 / Max: 6934.7)
Intel Core i7 5960X: 6912.4 (SE +/- 10.08, N = 3; Min: 6893 / Avg: 6912.37 / Max: 6926.9)
1. (CC) gcc options: -O3

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - AES-XTS 256b Decryption (MiB/s, More Is Better)
Core i7 5960X: 1755.8 (SE +/- 3.99, N = 3; Min: 1750.1 / Avg: 1755.83 / Max: 1763.5)
Intel Core i7 5960X: 1760.6 (SE +/- 2.48, N = 3; Min: 1756.7 / Avg: 1760.6 / Max: 1765.2)

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.0 - Video Input: Bosphorus 1080p - Video Preset: Medium (Frames Per Second, More Is Better)
Core i7 5960X: 14.73 (SE +/- 0.02, N = 3; Min: 14.7 / Avg: 14.73 / Max: 14.76)
Intel Core i7 5960X: 14.69 (SE +/- 0.03, N = 3; Min: 14.65 / Avg: 14.69 / Max: 14.75)
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Nettle

GNU Nettle is a low-level cryptographic library. Learn more via the OpenBenchmarking.org test page.

Nettle 3.5.1 - Test: aes256 (Mbyte/s, More Is Better)
Core i7 5960X: 4389.01 (SE +/- 10.02, N = 3; Min: 4369.09 / Avg: 4389.01 / Max: 4400.89; MIN: 2499.98 / MAX: 8087.46)
Intel Core i7 5960X: 4377.45 (SE +/- 6.19, N = 3; Min: 4370.6 / Avg: 4377.45 / Max: 4389.81; MIN: 2499.28 / MAX: 8079.83)
1. (CC) gcc options: -O2 -ggdb3 -lnettle -lgmp -lm -lcrypto

OCRMyPDF

OCRMyPDF adds an optical character recognition (OCR) text layer to scanned PDF files, producing new PDFs whose text is selectable, searchable, and copy-paste capable. OCRMyPDF leverages the Tesseract OCR engine and is written in Python. Learn more via the OpenBenchmarking.org test page.

OCRMyPDF 9.6.0+dfsg - Processing 60 Page PDF Document (Seconds, Fewer Is Better)
Core i7 5960X: 38.98 (SE +/- 0.08, N = 3; Min: 38.86 / Avg: 38.98 / Max: 39.14)
Intel Core i7 5960X: 38.88 (SE +/- 0.18, N = 3; Min: 38.58 / Avg: 38.88 / Max: 39.2)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total time reported by benchdnn. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
Core i7 5960X: 2618.82 (SE +/- 0.05, N = 3; Min: 2618.72 / Avg: 2618.82 / Max: 2618.9; MIN: 2616.48)
Intel Core i7 5960X: 2612.01 (SE +/- 2.72, N = 3; Min: 2606.67 / Avg: 2612.01 / Max: 2615.53; MIN: 2603.11)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

lzbench

lzbench is an in-memory benchmark of various compressors. The file used for compression is a Linux kernel source tree tarball. Learn more via the OpenBenchmarking.org test page.

lzbench 1.8 - Test: Zstd 1 - Process: Compression (MB/s, More Is Better)
Core i7 5960X: 392 (SE +/- 0.88, N = 3; Min: 390 / Avg: 391.67 / Max: 393)
Intel Core i7 5960X: 393 (SE +/- 0.33, N = 3; Min: 392 / Avg: 392.67 / Max: 393)
1. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: alexnet (ms, Fewer Is Better)
Core i7 5960X: 11.87 (SE +/- 0.04, N = 3; Min: 11.83 / Avg: 11.87 / Max: 11.95; MIN: 11.76 / MAX: 16.6)
Intel Core i7 5960X: 11.84 (SE +/- 0.01, N = 3; Min: 11.82 / Avg: 11.84 / Max: 11.85; MIN: 11.76 / MAX: 13.13)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - PBKDF2-sha512 (Iterations Per Second, More Is Better)
Core i7 5960X: 1315137 (SE +/- 4681.45, N = 3; Min: 1305823 / Avg: 1315136.67 / Max: 1320624)
Intel Core i7 5960X: 1318413 (SE +/- 1461.09, N = 3; Min: 1315653 / Avg: 1318413.33 / Max: 1320624)
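Cryptsetup's benchmark reports how many PBKDF2 iterations per second the CPU sustains for each hash. The same kind of figure can be estimated with `hashlib.pbkdf2_hmac` (the batch size below is an arbitrary choice, and absolute numbers will not match cryptsetup's C implementation):

```python
import hashlib
import time

def pbkdf2_sha512_iters_per_sec(batch: int = 50_000) -> float:
    """Estimate PBKDF2-sha512 iterations per second by timing one
    derivation with a fixed iteration count."""
    salt = b"\x00" * 16
    start = time.perf_counter()
    hashlib.pbkdf2_hmac("sha512", b"passphrase", salt, batch)
    return batch / (time.perf_counter() - start)

rate = pbkdf2_sha512_iters_per_sec()
```

A higher rate means a passphrase can afford more KDF iterations for the same unlock latency, which is why more is better here.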

Montage Astronomical Image Mosaic Engine

Montage is an open-source astronomical image mosaic engine. This BSD-licensed astronomy software is developed by the California Institute of Technology, Pasadena. Learn more via the OpenBenchmarking.org test page.

Montage Astronomical Image Mosaic Engine 6.0 - Mosaic of M17, K band, 1.5 deg x 1.5 deg (Seconds, Fewer Is Better)
Core i7 5960X: 104.39 (SE +/- 0.26, N = 3; Min: 103.92 / Avg: 104.39 / Max: 104.83)
Intel Core i7 5960X: 104.64 (SE +/- 0.13, N = 3; Min: 104.43 / Avg: 104.64 / Max: 104.86)
1. (CC) gcc options: -std=gnu99 -lcfitsio -lm -O2

Crafty

This is a performance test of Crafty, an advanced open-source chess engine. Learn more via the OpenBenchmarking.org test page.

Crafty 25.2 - Elapsed Time (Nodes Per Second, more is better)
    Core i7 5960X: 6415547 (SE +/- 17334.79, N = 3; Min: 6387290 / Avg: 6415546.67 / Max: 6447072)
    Intel Core i7 5960X: 6430777 (SE +/- 15888.98, N = 3; Min: 6399127 / Avg: 6430776.67 / Max: 6449072)
    1. (CC) gcc options: -pthread -lstdc++ -fprofile-use -lm

lzbench

lzbench is an in-memory benchmark of various compressors. The file used for compression is a Linux kernel source tree tarball. Learn more via the OpenBenchmarking.org test page.

lzbench 1.8 - Test: Crush 0 - Process: Decompression (MB/s, more is better)
    Core i7 5960X: 432
    Intel Core i7 5960X: 431 (SE +/- 0.33, N = 3; Min: 431 / Avg: 431.33 / Max: 432)
    1. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - AES-XTS 512b Encryption (MiB/s, more is better)
    Core i7 5960X: 1440.7 (SE +/- 3.81, N = 3; Min: 1434 / Avg: 1440.7 / Max: 1447.2)
    Intel Core i7 5960X: 1444.0 (SE +/- 0.53, N = 3; Min: 1443 / Avg: 1444 / Max: 1444.8)

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0 - Preset: Thorough (Seconds, fewer is better)
    Core i7 5960X: 44.30 (SE +/- 0.07, N = 3; Min: 44.19 / Avg: 44.3 / Max: 44.43)
    Intel Core i7 5960X: 44.20 (SE +/- 0.02, N = 3; Min: 44.17 / Avg: 44.2 / Max: 44.25)
    1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

Node.js V8 Web Tooling Benchmark

Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, such as Babel, TypeScript, and Babylon. This test profile can test the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.

Node.js V8 Web Tooling Benchmark (runs/s, more is better)
    Core i7 5960X: 8.98 (SE +/- 0.01, N = 3; Min: 8.96 / Avg: 8.98 / Max: 9)
    Intel Core i7 5960X: 8.96 (SE +/- 0.04, N = 3; Min: 8.88 / Avg: 8.96 / Max: 9.01)
    1. Nodejs v10.19.0

G'MIC

G'MIC is an open-source framework for image processing. Learn more via the OpenBenchmarking.org test page.

G'MIC - Test: Plotting Isosurface Of A 3D Volume, 1000 Times (Seconds, fewer is better)
    Core i7 5960X: 24.86 (SE +/- 0.03, N = 3; Min: 24.82 / Avg: 24.86 / Max: 24.91)
    Intel Core i7 5960X: 24.80 (SE +/- 0.07, N = 3; Min: 24.69 / Avg: 24.8 / Max: 24.94)
    1. Version 2.4.5, Copyright (c) 2008-2019, David Tschumperle.

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, fewer is better)
    Core i7 5960X: 4.77635 (SE +/- 0.00689, N = 3; Min: 4.77 / Avg: 4.78 / Max: 4.79; MIN: 4.7)
    Intel Core i7 5960X: 4.78658 (SE +/- 0.00899, N = 3; Min: 4.78 / Avg: 4.79 / Max: 4.8; MIN: 4.72)
    1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.7.3 - Encoder Speed: 0 (Seconds, fewer is better)
    Core i7 5960X: 138.33 (SE +/- 0.20, N = 3; Min: 137.98 / Avg: 138.33 / Max: 138.66)
    Intel Core i7 5960X: 138.05 (SE +/- 0.24, N = 3; Min: 137.7 / Avg: 138.04 / Max: 138.51)
    1. (CXX) g++ options: -O3 -fPIC

asmFish

This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.

asmFish 2018-07-23 - 1024 Hash Memory, 26 Depth (Nodes/second, more is better)
    Core i7 5960X: 20511091 (SE +/- 247551.23, N = 3; Min: 20072566 / Avg: 20511090.67 / Max: 20929393)
    Intel Core i7 5960X: 20551853 (SE +/- 22586.13, N = 3; Min: 20513432 / Avg: 20551852.67 / Max: 20591637)

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.1 - Tuning: Visual Quality Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better)
    Core i7 5960X: 77.54 (SE +/- 0.31, N = 3; Min: 77.1 / Avg: 77.54 / Max: 78.15)
    Intel Core i7 5960X: 77.39 (SE +/- 0.31, N = 3; Min: 76.77 / Avg: 77.39 / Max: 77.77)
    1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: googlenet (ms, fewer is better)
    Core i7 5960X: 16.29 (SE +/- 0.02, N = 3; Min: 16.25 / Avg: 16.29 / Max: 16.31; MIN: 16.04 / MAX: 18.76)
    Intel Core i7 5960X: 16.26 (SE +/- 0.02, N = 3; Min: 16.22 / Avg: 16.26 / Max: 16.29; MIN: 16.06 / MAX: 18.7)
    1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Botan

Botan is a cross-platform open-source C++ crypto library that supports most publicly known cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.

Botan 2.13.0 - Test: Blowfish (MiB/s, more is better)
    Core i7 5960X: 345.74 (SE +/- 0.31, N = 3; Min: 345.33 / Avg: 345.74 / Max: 346.34)
    Intel Core i7 5960X: 345.12 (SE +/- 0.16, N = 3; Min: 344.86 / Avg: 345.11 / Max: 345.42)
    1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.7.0 - Video Input: Chimera 1080p 10-bit (FPS, more is better)
    Core i7 5960X: 62.37 (SE +/- 0.09, N = 3; Min: 62.18 / Avg: 62.37 / Max: 62.47; MIN: 39.64 / MAX: 157.72)
    Intel Core i7 5960X: 62.26 (SE +/- 0.10, N = 3; Min: 62.1 / Avg: 62.26 / Max: 62.44; MIN: 39.52 / MAX: 158.83)
    1. (CC) gcc options: -pthread

BYTE Unix Benchmark

This is a test of BYTE. Learn more via the OpenBenchmarking.org test page.

BYTE Unix Benchmark 3.6 - Computational Test: Dhrystone 2 (LPS, more is better)
    Core i7 5960X: 34534903.1 (SE +/- 33693.88, N = 3; Min: 34477418.9 / Avg: 34534903.07 / Max: 34594100.2)
    Intel Core i7 5960X: 34474027.6 (SE +/- 106236.98, N = 3; Min: 34292818.2 / Avg: 34474027.63 / Max: 34660711.7)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, fewer is better)
    Core i7 5960X: 6.74491 (SE +/- 0.03107, N = 3; Min: 6.71 / Avg: 6.74 / Max: 6.81; MIN: 6.65)
    Intel Core i7 5960X: 6.75679 (SE +/- 0.01260, N = 3; Min: 6.74 / Avg: 6.76 / Max: 6.78; MIN: 6.68)
    1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
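Stress-NG reports its results in "bogo ops/s": how many iterations of a stressor's work loop complete per second of wall-clock time. The sketch below illustrates only that reporting scheme; the work unit is an arbitrary placeholder, not stress-ng's actual forking or CPU stressor code.

```python
import time

def bogo_ops_per_sec(duration: float = 0.25) -> float:
    """Run an arbitrary CPU work unit repeatedly for `duration` seconds
    and return the completed-iterations-per-second rate, mirroring how
    stress-ng derives its bogo ops/s figures."""
    deadline = time.perf_counter() + duration
    ops = 0
    while time.perf_counter() < deadline:
        sum(i * i for i in range(1_000))  # placeholder work unit
        ops += 1
    return ops / duration

print(f"{bogo_ops_per_sec():.0f} bogo ops/s")
```

Because the work unit is arbitrary, bogo ops/s figures are only comparable between runs of the same stressor, which is exactly how they are used in the tables below.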

Stress-NG 0.11.07 - Test: Forking (Bogo Ops/s, more is better)
    Core i7 5960X: 54563.71 (SE +/- 151.74, N = 3; Min: 54263.88 / Avg: 54563.71 / Max: 54754.26)
    Intel Core i7 5960X: 54468.05 (SE +/- 52.93, N = 3; Min: 54365.4 / Avg: 54468.05 / Max: 54541.82)
    1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - Twofish-XTS 512b Decryption (MiB/s, more is better)
    Core i7 5960X: 349.5 (SE +/- 0.24, N = 3; Min: 349.2 / Avg: 349.53 / Max: 350)
    Intel Core i7 5960X: 350.1 (SE +/- 0.05, N = 2; Min: 350 / Avg: 350.05 / Max: 350.1)

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3 - Target: CPU - Model: MobileNet v2 (ms, fewer is better)
    Core i7 5960X: 335.59 (SE +/- 1.16, N = 3; Min: 334.01 / Avg: 335.59 / Max: 337.85; MIN: 333.35 / MAX: 385.75)
    Intel Core i7 5960X: 335.02 (SE +/- 0.35, N = 3; Min: 334.52 / Avg: 335.02 / Max: 335.69; MIN: 333.18 / MAX: 376.69)
    1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

lzbench

lzbench is an in-memory benchmark of various compressors. The file used for compression is a Linux kernel source tree tarball. Learn more via the OpenBenchmarking.org test page.

lzbench 1.8 - Test: Brotli 2 - Process: Decompression (MB/s, more is better)
    Core i7 5960X: 597 (SE +/- 1.20, N = 3; Min: 595 / Avg: 597.33 / Max: 599)
    Intel Core i7 5960X: 598 (SE +/- 0.58, N = 3; Min: 597 / Avg: 598 / Max: 599)
    1. (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

Nebular Empirical Analysis Tool

NEAT is the Nebular Empirical Analysis Tool for empirical analysis of ionised nebulae, with uncertainty propagation. Learn more via the OpenBenchmarking.org test page.

Nebular Empirical Analysis Tool 2020-02-29 (Seconds, fewer is better)
    Core i7 5960X: 25.93 (SE +/- 0.04, N = 3; Min: 25.85 / Avg: 25.93 / Max: 25.99)
    Intel Core i7 5960X: 25.88 (SE +/- 0.05, N = 3; Min: 25.79 / Avg: 25.88 / Max: 25.97)
    1. (F9X) gfortran options: -cpp -ffree-line-length-0 -Jsource/ -fopenmp -O3 -fno-backtrace

Mlpack Benchmark

Mlpack benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

Mlpack Benchmark - Benchmark: scikit_svm (Seconds, fewer is better)
    Core i7 5960X: 24.50 (SE +/- 0.01, N = 3; Min: 24.49 / Avg: 24.5 / Max: 24.51)
    Intel Core i7 5960X: 24.54 (SE +/- 0.08, N = 3; Min: 24.43 / Avg: 24.54 / Max: 24.69)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
    Core i7 5960X: 4806.44 (SE +/- 3.75, N = 3; Min: 4801.67 / Avg: 4806.44 / Max: 4813.84; MIN: 4797.93)
    Intel Core i7 5960X: 4798.64 (SE +/- 4.93, N = 3; Min: 4790.88 / Avg: 4798.64 / Max: 4807.79; MIN: 4786.84)
    1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

PHPBench 0.8.1 - PHP Benchmark Suite (Score, more is better)
    Core i7 5960X: 574201 (SE +/- 1485.01, N = 3; Min: 571662 / Avg: 574201 / Max: 576805)
    Intel Core i7 5960X: 575109 (SE +/- 2114.75, N = 3; Min: 571050 / Avg: 575109 / Max: 578168)

Botan

Botan is a cross-platform open-source C++ crypto library that supports most publicly known cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.

Botan 2.13.0 - Test: AES-256 (MiB/s, more is better)
    Core i7 5960X: 3096.63 (SE +/- 3.45, N = 3; Min: 3090.67 / Avg: 3096.63 / Max: 3102.61)
    Intel Core i7 5960X: 3101.44 (SE +/- 1.75, N = 3; Min: 3098.77 / Avg: 3101.44 / Max: 3104.75)
    1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 - Test: OpenMP LavaMD (Seconds, fewer is better)
    Core i7 5960X: 393.92 (SE +/- 0.30, N = 3; Min: 393.46 / Avg: 393.92 / Max: 394.48)
    Intel Core i7 5960X: 393.32 (SE +/- 0.02, N = 3; Min: 393.27 / Avg: 393.32 / Max: 393.35)
    1. (CXX) g++ options: -O2 -lOpenCL

G'MIC

G'MIC is an open-source framework for image processing. Learn more via the OpenBenchmarking.org test page.

G'MIC - Test: 3D Elevated Function In Random Colors, 100 Times (Seconds, fewer is better)
    Core i7 5960X: 82.47 (SE +/- 0.08, N = 3; Min: 82.39 / Avg: 82.47 / Max: 82.63)
    Intel Core i7 5960X: 82.35 (SE +/- 0.07, N = 3; Min: 82.24 / Avg: 82.35 / Max: 82.47)
    1. Version 2.4.5, Copyright (c) 2008-2019, David Tschumperle.

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.
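The compression-speed figures below are throughput: input bytes divided by elapsed compression time. LZ4 has no Python stdlib binding, so the sketch below uses stdlib zlib at its fastest level purely as a stand-in to show how an MB/s figure of this kind is computed; the synthetic sample buffer here is not the Ubuntu ISO this profile actually compresses, and zlib level 1 is much slower than LZ4.

```python
import time
import zlib

# Synthetic, highly compressible sample (~9 MB); the real test profile
# feeds an Ubuntu ISO through LZ4 instead.
data = b"phoronix-test-suite " * 450_000
start = time.perf_counter()
compressed = zlib.compress(data, level=1)  # fastest zlib level, a stand-in for LZ4 -1
elapsed = time.perf_counter() - start
print(f"{len(data) / elapsed / 1e6:.1f} MB/s, "
      f"ratio {len(data) / len(compressed):.1f}x")
```

Throughput computed this way depends heavily on how compressible the input is, which is why benchmarks like this one and lzbench fix a single reference input.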

LZ4 Compression 1.9.3 - Compression Level: 1 - Compression Speed (MB/s, more is better)
    Core i7 5960X: 5709.81 (SE +/- 3.84, N = 3; Min: 5702.73 / Avg: 5709.81 / Max: 5715.94)
    Intel Core i7 5960X: 5701.29 (SE +/- 2.86, N = 3; Min: 5697.47 / Avg: 5701.29 / Max: 5706.88)
    1. (CC) gcc options: -O3

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - Serpent-XTS 256b Decryption (MiB/s, more is better)
    Core i7 5960X: 540.9 (SE +/- 0.31, N = 3; Min: 540.5 / Avg: 540.9 / Max: 541.5)
    Intel Core i7 5960X: 541.7 (SE +/- 0.55, N = 3; Min: 540.7 / Avg: 541.7 / Max: 542.6)

Cryptsetup - Serpent-XTS 512b Decryption (MiB/s, more is better)
    Core i7 5960X: 541.4 (SE +/- 0.69, N = 3; Min: 540.4 / Avg: 541.37 / Max: 542.7)
    Intel Core i7 5960X: 542.2 (SE +/- 0.09, N = 3; Min: 542 / Avg: 542.17 / Max: 542.3)

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 29Oct2020 - Model: Rhodopsin Protein (ns/day, more is better)
    Core i7 5960X: 0.692 (SE +/- 0.001, N = 3; Min: 0.69 / Avg: 0.69 / Max: 0.69)
    Intel Core i7 5960X: 0.691 (SE +/- 0.001, N = 3; Min: 0.69 / Avg: 0.69 / Max: 0.69)
    1. (CXX) g++ options: -O3 -lm

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0 - Preset: Medium (Seconds, fewer is better)
    Core i7 5960X: 7.06 (SE +/- 0.01, N = 3; Min: 7.05 / Avg: 7.06 / Max: 7.07)
    Intel Core i7 5960X: 7.05 (SE +/- 0.01, N = 3; Min: 7.04 / Avg: 7.05 / Max: 7.06)
    1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 10.0 - Time To Compile (Seconds, fewer is better)
    Core i7 5960X: 921.51 (SE +/- 0.74, N = 3; Min: 920.35 / Avg: 921.51 / Max: 922.89)
    Intel Core i7 5960X: 922.80 (SE +/- 2.26, N = 3; Min: 919.51 / Avg: 922.79 / Max: 927.12)

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: resnet-v2-50 (ms, fewer is better)
    Core i7 5960X: 55.29 (SE +/- 0.03, N = 3; Min: 55.22 / Avg: 55.29 / Max: 55.34; MIN: 55.11 / MAX: 79.33)
    Intel Core i7 5960X: 55.36 (SE +/- 0.04, N = 3; Min: 55.3 / Avg: 55.36 / Max: 55.44; MIN: 54.74 / MAX: 121.41)
    1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: mobilenet (ms, fewer is better)
    Core i7 5960X: 21.72 (SE +/- 0.17, N = 3; Min: 21.48 / Avg: 21.72 / Max: 22.05; MIN: 21.38 / MAX: 23.07)
    Intel Core i7 5960X: 21.75 (SE +/- 0.18, N = 3; Min: 21.47 / Avg: 21.75 / Max: 22.09; MIN: 21.38 / MAX: 23.8)
    1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit for deploying neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1 - Model: Face Detection 0106 FP16 - Device: CPU (ms, fewer is better)
    Core i7 5960X: 2240.29 (SE +/- 0.34, N = 3; Min: 2239.74 / Avg: 2240.29 / Max: 2240.91)
    Intel Core i7 5960X: 2243.37 (SE +/- 0.77, N = 3; Min: 2242.21 / Avg: 2243.37 / Max: 2244.82)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: shufflenet-v2 (ms, fewer is better)
    Core i7 5960X: 7.30 (SE +/- 0.01, N = 3; Min: 7.27 / Avg: 7.3 / Max: 7.32; MIN: 7.24 / MAX: 8.92)
    Intel Core i7 5960X: 7.29 (SE +/- 0.01, N = 3; Min: 7.28 / Avg: 7.29 / Max: 7.3; MIN: 7.24 / MAX: 9.34)
    1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: SENDFILE (Bogo Ops/s, more is better)
    Core i7 5960X: 88992.62 (SE +/- 158.84, N = 3; Min: 88676.96 / Avg: 88992.62 / Max: 89181.33)
    Intel Core i7 5960X: 88872.21 (SE +/- 164.31, N = 3; Min: 88545.4 / Avg: 88872.21 / Max: 89065.44)
    1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc

Blender

Blender is an open-source 3D creation software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL or CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.90 - Blend File: BMW27 - Compute: CPU-Only (Seconds, fewer is better)
    Core i7 5960X: 237.64 (SE +/- 0.41, N = 3; Min: 236.98 / Avg: 237.64 / Max: 238.39)
    Intel Core i7 5960X: 237.96 (SE +/- 0.55, N = 3; Min: 236.89 / Avg: 237.96 / Max: 238.71)

Sunflow Rendering System

This test runs benchmarks of the Sunflow Rendering System. The Sunflow Rendering System is an open-source render engine for photo-realistic image synthesis with a ray-tracing core. Learn more via the OpenBenchmarking.org test page.

Sunflow Rendering System 0.07.2 - Global Illumination + Image Synthesis (Seconds, fewer is better)
    Core i7 5960X: 1.514 (SE +/- 0.011, N = 3; Min: 1.49 / Avg: 1.51 / Max: 1.53; MIN: 1.4 / MAX: 2.07)
    Intel Core i7 5960X: 1.516 (SE +/- 0.010, N = 3; Min: 1.5 / Avg: 1.52 / Max: 1.54; MIN: 1.42 / MAX: 2.11)

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

Timed FFmpeg Compilation 4.2.2 - Time To Compile (Seconds, fewer is better)
    Core i7 5960X: 83.49 (SE +/- 0.18, N = 3; Min: 83.27 / Avg: 83.49 / Max: 83.85)
    Intel Core i7 5960X: 83.60 (SE +/- 0.09, N = 3; Min: 83.45 / Avg: 83.6 / Max: 83.75)

YafaRay

YafaRay is an open-source, physically based Monte Carlo ray-tracing engine. Learn more via the OpenBenchmarking.org test page.

YafaRay 3.4.1 - Total Time For Sample Scene (Seconds, fewer is better)
    Core i7 5960X: 222.32 (SE +/- 0.46, N = 3; Min: 221.86 / Avg: 222.32 / Max: 223.23)
    Intel Core i7 5960X: 222.61 (SE +/- 0.47, N = 3; Min: 221.89 / Avg: 222.61 / Max: 223.5)
    1. (CXX) g++ options: -std=c++11 -O3 -ffast-math -rdynamic -ldl -lImath -lIlmImf -lIex -lHalf -lz -lIlmThread -lxml2 -lfreetype -lpthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit for deploying neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1 - Model: Person Detection 0106 FP32 - Device: CPU (ms, fewer is better)
    Core i7 5960X: 3257.87 (SE +/- 3.01, N = 3; Min: 3252.06 / Avg: 3257.87 / Max: 3262.15)
    Intel Core i7 5960X: 3253.79 (SE +/- 2.81, N = 3; Min: 3248.2 / Avg: 3253.79 / Max: 3256.99)

Git

This test measures the time needed to carry out some sample Git operations on an example, static repository that happens to be a copy of the GNOME GTK tool-kit repository. Learn more via the OpenBenchmarking.org test page.

Git - Time To Complete Common Git Commands (Seconds, fewer is better)
    Core i7 5960X: 64.06 (SE +/- 0.08, N = 3; Min: 63.91 / Avg: 64.06 / Max: 64.17)
    Intel Core i7 5960X: 64.14 (SE +/- 0.12, N = 3; Min: 63.95 / Avg: 64.14 / Max: 64.37)
    1. git version 2.25.1

Botan

Botan is a cross-platform open-source C++ crypto library that supports most publicly known cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.

Botan 2.13.0 - Test: Twofish (MiB/s, more is better)
    Core i7 5960X: 287.06 (SE +/- 0.13, N = 3; Min: 286.86 / Avg: 287.06 / Max: 287.29)
    Intel Core i7 5960X: 287.40 (SE +/- 0.24, N = 3; Min: 287.14 / Avg: 287.4 / Max: 287.87)
    1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless (Encode Time - Seconds, fewer is better)
    Core i7 5960X: 21.86 (SE +/- 0.01, N = 3; Min: 21.85 / Avg: 21.86 / Max: 21.87)
    Intel Core i7 5960X: 21.84 (SE +/- 0.02, N = 3; Min: 21.81 / Avg: 21.84 / Max: 21.87)
    1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

Timed PHP Compilation

This test times how long it takes to build PHP 7. Learn more via the OpenBenchmarking.org test page.

Timed PHP Compilation 7.4.2 - Time To Compile (Seconds, Fewer Is Better)
  Core i7 5960X: 86.87 (SE +/- 0.20, N = 3; Min 86.47 / Max 87.09)
  Intel Core i7 5960X: 86.97 (SE +/- 0.18, N = 3; Min 86.76 / Max 87.32)
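The timed-compilation tests in this report all reduce to wall-clock timing of a build command. A minimal sketch of the pattern, using a trivial placeholder command in place of an actual PHP build (the real test would time something like `make -j` over the PHP source tree):

```python
import subprocess
import sys
import time

def time_command(cmd):
    """Run a command to completion and return its wall-clock duration in seconds."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - start

# Placeholder workload; a compile test would pass e.g. ["make", "-j8"]
elapsed = time_command([sys.executable, "-c", "pass"])
print(f"Time To Compile: {elapsed:.2f} seconds")
```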

Chaos Group V-RAY

This is a test of Chaos Group's V-RAY benchmark. Learn more via the OpenBenchmarking.org test page.

Chaos Group V-RAY 4.10.07 - Mode: CPU (Ksamples, More Is Better)
  Core i7 5960X: 9805 (SE +/- 23.78, N = 3; Min 9758 / Max 9833)
  Intel Core i7 5960X: 9816 (SE +/- 54.11, N = 3; Min 9709 / Max 9882)

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3 - Target: CPU - Model: SqueezeNet v1.1 (ms, Fewer Is Better)
  Core i7 5960X: 319.49 (SE +/- 0.07, N = 3; Min 319.39 / Max 319.63; MIN: 319.19 / MAX: 320.37)
  Intel Core i7 5960X: 319.84 (SE +/- 0.14, N = 3; Min 319.7 / Max 320.12; MIN: 319.47 / MAX: 320.71)
  1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: CPU Stress (Bogo Ops/s, More Is Better)
  Core i7 5960X: 3181.58 (SE +/- 22.73, N = 3; Min 3140.61 / Max 3219.11)
  Intel Core i7 5960X: 3185.11 (SE +/- 12.76, N = 3; Min 3171.39 / Max 3210.61)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc
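Stress-NG's "Bogo Ops/s" is simply the count of completed work-loop iterations divided by elapsed time. A rough sketch of the idea (the workload here is an arbitrary arithmetic loop, not stress-ng's actual CPU stressor):

```python
import time

def bogo_ops_per_second(duration=0.2):
    """Count iterations of a dummy work loop completed per second."""
    ops = 0
    end = time.perf_counter() + duration
    while time.perf_counter() < end:
        _ = sum(i * i for i in range(100))  # arbitrary unit of work
        ops += 1
    return ops / duration

print(f"Bogo Ops/s: {bogo_ops_per_second():.2f}")
```

Because the unit of work is arbitrary, bogo ops are only comparable between systems running the same stressor, which is exactly how this report uses them.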

WavPack Audio Encoding

This test times how long it takes to encode a sample WAV file to WavPack format with very high quality settings. Learn more via the OpenBenchmarking.org test page.

WavPack Audio Encoding 5.3 - WAV To WavPack (Seconds, Fewer Is Better)
  Core i7 5960X: 17.64 (SE +/- 0.03, N = 5; Min 17.57 / Max 17.71)
  Intel Core i7 5960X: 17.65 (SE +/- 0.04, N = 5; Min 17.56 / Max 17.72)
  1. (CXX) g++ options: -rdynamic

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 - Acceleration: CPU - Scene: Supercar (M samples/s, More Is Better)
  Core i7 5960X: 2.799 (SE +/- 0.001, N = 3; Min 2.80 / Max 2.80)
  Intel Core i7 5960X: 2.802 (SE +/- 0.007, N = 3; Min 2.79 / Max 2.81)

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2 - Device Inference Score (Score, More Is Better)
  Core i7 5960X: 955
  Intel Core i7 5960X: 956

Botan

Botan is a cross-platform open-source C++ crypto library that supports most publicly known cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.

Botan 2.13.0 - Test: CAST-256 (MiB/s, More Is Better)
  Core i7 5960X: 109.51 (SE +/- 0.05, N = 3; Min 109.42 / Max 109.56)
  Intel Core i7 5960X: 109.63 (SE +/- 0.07, N = 3; Min 109.51 / Max 109.76)
  1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: NUMA (Bogo Ops/s, More Is Better)
  Core i7 5960X: 157.35 (SE +/- 1.39, N = 3; Min 154.58 / Max 158.74)
  Intel Core i7 5960X: 157.19 (SE +/- 1.55, N = 3; Min 155.6 / Max 160.29)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.

LibRaw 0.20 - Post-Processing Benchmark (Mpix/sec, More Is Better)
  Core i7 5960X: 29.54 (SE +/- 0.04, N = 3; Min 29.46 / Max 29.59)
  Intel Core i7 5960X: 29.57 (SE +/- 0.02, N = 3; Min 29.54 / Max 29.6)
  1. (CXX) g++ options: -O2 -fopenmp -ljpeg -lz -lm

Nettle

GNU Nettle is a low-level cryptographic library. Learn more via the OpenBenchmarking.org test page.

Nettle 3.5.1 - Test: sha512 (Mbyte/s, More Is Better)
  Core i7 5960X: 404.61 (SE +/- 0.25, N = 3; Min 404.11 / Max 404.92)
  Intel Core i7 5960X: 404.20 (SE +/- 0.40, N = 3; Min 403.44 / Max 404.8)
  1. (CC) gcc options: -O2 -ggdb3 -lnettle -lgmp -lm -lcrypto

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2 - Device Training Score (Score, More Is Better)
  Core i7 5960X: 996
  Intel Core i7 5960X: 995

toyBrot Fractal Generator

ToyBrot is a Mandelbrot fractal generator supporting C++ threads/tasks, OpenMP, Threaded Building Blocks, and other targets. Learn more via the OpenBenchmarking.org test page.

toyBrot Fractal Generator - Implementation: C++ Tasks (ms, Fewer Is Better)
  Core i7 5960X: 155693 (SE +/- 71.52, N = 3; Min 155595 / Max 155832)
  Intel Core i7 5960X: 155848 (SE +/- 92.48, N = 3; Min 155671 / Max 155983)
  1. (CXX) g++ options: -lpthread -isystem -fexceptions -std=c++14

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1 - Model: Face Detection 0106 FP32 - Device: CPU (ms, Fewer Is Better)
  Core i7 5960X: 2244.95 (SE +/- 1.03, N = 3; Min 2243.14 / Max 2246.7)
  Intel Core i7 5960X: 2242.86 (SE +/- 0.64, N = 3; Min 2241.82 / Max 2244.04)

Timed MPlayer Compilation

This test times how long it takes to build the MPlayer open-source media player program. Learn more via the OpenBenchmarking.org test page.

Timed MPlayer Compilation 1.4 - Time To Compile (Seconds, Fewer Is Better)
  Core i7 5960X: 56.12 (SE +/- 0.05, N = 3; Min 56.02 / Max 56.18)
  Intel Core i7 5960X: 56.17 (SE +/- 0.07, N = 3; Min 56.09 / Max 56.32)

John The Ripper

This is a benchmark of John The Ripper, which is a password cracker. Learn more via the OpenBenchmarking.org test page.

John The Ripper 1.9.0-jumbo-1 - Test: MD5 (Real C/S, More Is Better)
  Core i7 5960X: 723917 (SE +/- 631.86, N = 3; Min 722688 / Max 724787)
  Intel Core i7 5960X: 724570 (SE +/- 214.95, N = 3; Min 724289 / Max 724992)
  1. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -pthread -lm -lz -ldl -lcrypt -lbz2
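The "Real C/S" metric is candidate hashes checked per second, and at its core such a measurement is just hashing in a loop against the clock. A sketch using Python's stdlib MD5 (orders of magnitude slower than John The Ripper's optimized SIMD code paths, but the same principle):

```python
import hashlib
import time

def md5_hashes_per_second(duration=0.2):
    """Measure raw MD5 digests computed per second over a fixed interval."""
    count = 0
    end = time.perf_counter() + duration
    while time.perf_counter() < end:
        hashlib.md5(b"candidate-password-%d" % count).hexdigest()
        count += 1
    return count / duration

print(f"MD5 c/s: {md5_hashes_per_second():.0f}")
```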

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8 - Encoder Mode: Enc Mode 4 - Input: 1080p (Frames Per Second, More Is Better)
  Core i7 5960X: 2.279 (SE +/- 0.002, N = 3; Min 2.28 / Max 2.28)
  Intel Core i7 5960X: 2.277 (SE +/- 0.002, N = 3; Min 2.28 / Max 2.28)
  1. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

Hugin

Hugin is an open-source, cross-platform panorama photo stitcher software package. This test profile times how long it takes to run the assistant and panorama photo stitching on a set of images. Learn more via the OpenBenchmarking.org test page.

Hugin - Panorama Photo Assistant + Stitching Time (Seconds, Fewer Is Better)
  Core i7 5960X: 67.60 (SE +/- 0.31, N = 3; Min 67.1 / Max 68.16)
  Intel Core i7 5960X: 67.66 (SE +/- 0.18, N = 3; Min 67.35 / Max 67.96)

Botan

Botan is a cross-platform open-source C++ crypto library that supports most publicly known cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.

Botan 2.13.0 - Test: KASUMI (MiB/s, More Is Better)
  Core i7 5960X: 75.79 (SE +/- 0.04, N = 3; Min 75.73 / Max 75.87)
  Intel Core i7 5960X: 75.85 (SE +/- 0.02, N = 3; Min 75.82 / Max 75.89)
  1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 - Acceleration: CPU - Scene: Bedroom (M samples/s, More Is Better)
  Core i7 5960X: 1.226 (SE +/- 0.001, N = 3; Min 1.22 / Max 1.23)
  Intel Core i7 5960X: 1.225 (SE +/- 0.002, N = 3; Min 1.22 / Max 1.23)

Timed Apache Compilation

This test times how long it takes to build the Apache HTTPD web server. Learn more via the OpenBenchmarking.org test page.

Timed Apache Compilation 2.4.41 - Time To Compile (Seconds, Fewer Is Better)
  Core i7 5960X: 33.40 (SE +/- 0.11, N = 3; Min 33.28 / Max 33.62)
  Intel Core i7 5960X: 33.43 (SE +/- 0.08, N = 3; Min 33.28 / Max 33.54)

John The Ripper

This is a benchmark of John The Ripper, which is a password cracker. Learn more via the OpenBenchmarking.org test page.

John The Ripper 1.9.0-jumbo-1 - Test: Blowfish (Real C/S, More Is Better)
  Core i7 5960X: 11378 (SE +/- 6.94, N = 3; Min 11366 / Max 11390)
  Intel Core i7 5960X: 11387 (SE +/- 3.00, N = 3; Min 11381 / Max 11390)
  1. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -pthread -lm -lz -ldl -lcrypt -lbz2

Aircrack-ng

Aircrack-ng is a tool for assessing WiFi/WLAN network security. Learn more via the OpenBenchmarking.org test page.

Aircrack-ng 1.5.2 (k/s, More Is Better)
  Core i7 5960X: 23565.77 (SE +/- 2.43, N = 3; Min 23560.94 / Max 23568.56)
  Intel Core i7 5960X: 23584.39 (SE +/- 34.51, N = 3; Min 23548.93 / Max 23653.4)
  1. (CXX) g++ options: -O3 -fvisibility=hidden -masm=intel -fcommon -rdynamic -lpthread -lz -lcrypto -lhwloc -ldl -lm -pthread

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: inception-v3 (ms, Fewer Is Better)
  Core i7 5960X: 56.55 (SE +/- 0.00, N = 3; Min 56.54 / Max 56.55; MIN: 56.37 / MAX: 89.68)
  Intel Core i7 5960X: 56.59 (SE +/- 0.02, N = 3; Min 56.55 / Max 56.63; MIN: 56.42 / MAX: 76.05)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: squeezenet_ssd (ms, Fewer Is Better)
  Core i7 5960X: 26.34 (SE +/- 0.09, N = 3; Min 26.25 / Max 26.52; MIN: 26.16 / MAX: 41.34)
  Intel Core i7 5960X: 26.32 (SE +/- 0.07, N = 3; Min 26.24 / Max 26.47; MIN: 26.18 / MAX: 41.7)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OSBench

OSBench is a collection of micro-benchmarks for measuring operating system primitives like time to create threads/processes, launching programs, creating files, and memory allocation. Learn more via the OpenBenchmarking.org test page.

OSBench - Test: Create Files (us Per Event, Fewer Is Better)
  Core i7 5960X: 18.55 (SE +/- 0.17, N = 3; Min 18.29 / Max 18.86)
  Intel Core i7 5960X: 18.56 (SE +/- 0.10, N = 3; Min 18.37 / Max 18.71)
  1. (CC) gcc options: -lm

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12 - Settings: UASTC Level 3 (Seconds, Fewer Is Better)
  Core i7 5960X: 94.75 (SE +/- 0.00, N = 3; Min 94.74 / Max 94.75)
  Intel Core i7 5960X: 94.68 (SE +/- 0.00, N = 3; Min 94.67 / Max 94.68)
  1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: Malloc (Bogo Ops/s, More Is Better)
  Core i7 5960X: 63037538.09 (SE +/- 230662.60, N = 3; Min 62581633.78 / Max 63326557.08)
  Intel Core i7 5960X: 62995660.61 (SE +/- 170434.98, N = 3; Min 62655456.21 / Max 63184201.15)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc

Nettle

GNU Nettle is a low-level cryptographic library. Learn more via the OpenBenchmarking.org test page.

Nettle 3.5.1 - Test: chacha (Mbyte/s, More Is Better)
  Core i7 5960X: 780.03 (SE +/- 0.30, N = 3; Min 779.57 / Max 780.59; MIN: 406.66 / MAX: 2076.44)
  Intel Core i7 5960X: 779.55 (SE +/- 0.07, N = 3; Min 779.4 / Max 779.64; MIN: 406.61 / MAX: 2072.37)
  1. (CC) gcc options: -O2 -ggdb3 -lnettle -lgmp -lm -lcrypto

Hierarchical INTegration

This test runs the U.S. Department of Energy's Ames Laboratory Hierarchical INTegration (HINT) benchmark. Learn more via the OpenBenchmarking.org test page.

Hierarchical INTegration 1.0 - Test: FLOAT (QUIPs, More Is Better)
  Core i7 5960X: 320864866.36 (SE +/- 336091.10, N = 3; Min 320486564.1 / Max 321535200.66)
  Intel Core i7 5960X: 320668576.44 (SE +/- 557628.44, N = 3; Min 319871821.24 / Max 321742775.23)
  1. (CC) gcc options: -O3 -march=native -lm

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: MobileNetV2_224 (ms, Fewer Is Better)
  Core i7 5960X: 4.955 (SE +/- 0.005, N = 3; Min 4.95 / Max 4.96; MIN: 4.91 / MAX: 28.96)
  Intel Core i7 5960X: 4.952 (SE +/- 0.006, N = 3; Min 4.95 / Max 4.96; MIN: 4.91 / MAX: 29.29)
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
  Core i7 5960X: 4805.52 (SE +/- 1.11, N = 3; Min 4804.24 / Max 4807.74; MIN: 4798.05)
  Intel Core i7 5960X: 4802.62 (SE +/- 1.94, N = 3; Min 4800.61 / Max 4806.49; MIN: 4795.99)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Timed GCC Compilation

This test times how long it takes to build the GNU Compiler Collection (GCC). Learn more via the OpenBenchmarking.org test page.

Timed GCC Compilation 9.3.0 - Time To Compile (Seconds, Fewer Is Better)
  Core i7 5960X: 1477.82 (SE +/- 3.21, N = 3; Min 1471.41 / Max 1481.26)
  Intel Core i7 5960X: 1476.95 (SE +/- 3.26, N = 3; Min 1470.6 / Max 1481.38)

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - Twofish-XTS 256b Decryption (MiB/s, More Is Better)
  Core i7 5960X: 349.9 (SE +/- 0.12, N = 3; Min 349.7 / Max 350.1)
  Intel Core i7 5960X: 350.1 (SE +/- 0.15, N = 3; Min 349.8 / Max 350.3)

G'MIC

G'MIC is an open-source framework for image processing. Learn more via the OpenBenchmarking.org test page.

G'MIC - Test: 2D Function Plotting, 1000 Times (Seconds, Fewer Is Better)
  Core i7 5960X: 138.35 (SE +/- 0.16, N = 3; Min 138.07 / Max 138.63)
  Intel Core i7 5960X: 138.42 (SE +/- 0.38, N = 3; Min 137.68 / Max 138.95)
  1. Version 2.4.5, Copyright (c) 2008-2019, David Tschumperle.

Nettle

GNU Nettle is a low-level cryptographic library. Learn more via the OpenBenchmarking.org test page.

Nettle 3.5.1 - Test: poly1305-aes (Mbyte/s, More Is Better)
  Core i7 5960X: 2071.87 (SE +/- 1.73, N = 3; Min 2069.73 / Max 2075.3)
  Intel Core i7 5960X: 2073.00 (SE +/- 1.38, N = 3; Min 2070.74 / Max 2075.49)
  1. (CC) gcc options: -O2 -ggdb3 -lnettle -lgmp -lm -lcrypto

Crypto++

Crypto++ is a C++ class library of cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.

Crypto++ 8.2 - Test: Unkeyed Algorithms (MiB/second, More Is Better)
  Core i7 5960X: 268.74 (SE +/- 0.20, N = 3; Min 268.36 / Max 269.01)
  Intel Core i7 5960X: 268.88 (SE +/- 0.05, N = 3; Min 268.78 / Max 268.96)
  1. (CXX) g++ options: -g2 -O3 -fPIC -pthread -pipe

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 3 - Compression Speed (MB/s, More Is Better)
  Core i7 5960X: 40.07 (SE +/- 0.00, N = 3; Min 40.06 / Max 40.07)
  Intel Core i7 5960X: 40.09 (SE +/- 0.01, N = 3; Min 40.07 / Max 40.11)
  1. (CC) gcc options: -O3
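The compression-speed figure here is the compressor's throughput over the sample ISO. Measuring that follows the same pattern for any codec; a sketch using zlib from the Python standard library as a stand-in (LZ4 bindings are not in the stdlib, so the absolute numbers will differ greatly from the lz4 results above):

```python
import time
import zlib

def compression_speed(data, level=3):
    """Return (throughput in MB/s, compression ratio) for zlib at the given level."""
    start = time.perf_counter()
    compressed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    mb_per_s = (len(data) / 1e6) / elapsed
    return mb_per_s, len(data) / len(compressed)

data = b"phoronix-test-suite " * 100_000  # ~2 MB of compressible stand-in data
speed, ratio = compression_speed(data)
print(f"Compression Speed: {speed:.2f} MB/s, ratio {ratio:.2f}x")
```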

Inkscape

Inkscape is an open-source vector graphics editor. This test profile times how long it takes to complete various operations by Inkscape. Learn more via the OpenBenchmarking.org test page.

Inkscape - Operation: SVG Files To PNG (Seconds, Fewer Is Better)
  Core i7 5960X: 33.52 (SE +/- 0.20, N = 3; Min 33.28 / Max 33.91)
  Intel Core i7 5960X: 33.50 (SE +/- 0.22, N = 3; Min 33.11 / Max 33.86)
  1. Inkscape 0.92.5 (2060ec1f9f, 2020-04-08)

toyBrot Fractal Generator

ToyBrot is a Mandelbrot fractal generator supporting C++ threads/tasks, OpenMP, Threaded Building Blocks, and other targets. Learn more via the OpenBenchmarking.org test page.

toyBrot Fractal Generator - Implementation: C++ Threads (ms, Fewer Is Better)
  Core i7 5960X: 154510 (SE +/- 27.54, N = 3; Min 154455 / Max 154540)
  Intel Core i7 5960X: 154442 (SE +/- 48.05, N = 3; Min 154382 / Max 154537)
  1. (CXX) g++ options: -lpthread -isystem -fexceptions -std=c++14

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0 - Encoder Mode: Speed 8 Realtime (Frames Per Second, More Is Better)
  Core i7 5960X: 22.90 (SE +/- 0.08, N = 3; Min 22.81 / Max 23.07)
  Intel Core i7 5960X: 22.91 (SE +/- 0.07, N = 3; Min 22.77 / Max 22.98)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.

SQLite Speedtest 3.30 - Timed Time - Size 1,000 (Seconds, Fewer Is Better)
  Core i7 5960X: 85.73 (SE +/- 0.35, N = 3; Min 85.19 / Max 86.39)
  Intel Core i7 5960X: 85.69 (SE +/- 0.23, N = 3; Min 85.27 / Max 86.04)
  1. (CC) gcc options: -O2 -ldl -lz -lpthread

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: resnet50 (ms, Fewer Is Better)
  Core i7 5960X: 28.90 (SE +/- 0.02, N = 3; Min 28.87 / Max 28.92; MIN: 28.32 / MAX: 32.85)
  Intel Core i7 5960X: 28.89 (SE +/- 0.03, N = 3; Min 28.85 / Max 28.96; MIN: 28.56 / MAX: 32.09)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - PBKDF2-whirlpool (Iterations Per Second, More Is Better)
  Core i7 5960X: 531014 (SE +/- 179.33, N = 3; Min 530655 / Max 531193)
  Intel Core i7 5960X: 531193

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1 - Model: Person Detection 0106 FP16 - Device: CPU (ms, Fewer Is Better)
  Core i7 5960X: 3256.01 (SE +/- 1.13, N = 3; Min 3253.85 / Max 3257.69)
  Intel Core i7 5960X: 3257.02 (SE +/- 3.76, N = 3; Min 3252.8 / Max 3264.51)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  Core i7 5960X: 3.19413 (SE +/- 0.00260, N = 3; Min 3.19 / Max 3.2; MIN: 3.13)
  Intel Core i7 5960X: 3.19319 (SE +/- 0.00284, N = 3; Min 3.19 / Max 3.2; MIN: 3.14)
  1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Caffe

This is a benchmark of the Caffe deep learning framework and currently supports the AlexNet and GoogleNet models and execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

Caffe 2020-02-13 - Model: GoogleNet - Acceleration: CPU - Iterations: 100 (Milli-Seconds, Fewer Is Better)
  Core i7 5960X: 159788 (SE +/- 196.27, N = 3; Min 159407 / Max 160061)
  Intel Core i7 5960X: 159835 (SE +/- 305.53, N = 3; Min 159405 / Max 160426)
  1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 3 - Decompression Speed (MB/s, More Is Better)
  Core i7 5960X: 6723.3 (SE +/- 8.58, N = 3; Min 6707.5 / Max 6737)
  Intel Core i7 5960X: 6721.4 (SE +/- 3.52, N = 3; Min 6715.2 / Max 6727.4)
  1. (CC) gcc options: -O3

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12 - Settings: UASTC Level 2 + RDO Post-Processing (Seconds, Fewer Is Better)
  Core i7 5960X: 1019.89 (SE +/- 0.31, N = 3; Min: 1019.26 / Avg: 1019.89 / Max: 1020.23)
  Intel Core i7 5960X: 1019.61 (SE +/- 0.24, N = 3; Min: 1019.31 / Avg: 1019.61 / Max: 1020.09)
  Compiled with: (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
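Stress-NG reports throughput as "Bogo Ops/s" — bogus operations completed per second of stress time. A hedged sketch of the shape of that metric (the no-op lambda is a hypothetical stand-in for a real stressor such as an atomic-op loop):

```python
import time

def bogo_ops_per_s(op, duration=0.1):
    """Count how many times op() completes within a fixed wall-clock
    window and report operations per second, mirroring the shape of
    Stress-NG's Bogo Ops/s figure."""
    deadline = time.perf_counter() + duration
    ops = 0
    while time.perf_counter() < deadline:
        op()
        ops += 1
    return ops / duration

# hypothetical stressor standing in for a Stress-NG test method
rate = bogo_ops_per_s(lambda: None)
```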

Stress-NG 0.11.07 - Test: Atomic (Bogo Ops/s, More Is Better)
  Core i7 5960X: 201760.55 (SE +/- 106.18, N = 3; Min: 201570.61 / Avg: 201760.55 / Max: 201937.78)
  Intel Core i7 5960X: 201708.75 (SE +/- 51.95, N = 3; Min: 201625.93 / Avg: 201708.75 / Max: 201804.48)
  Compiled with: (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 9 - Compression Speed (MB/s, More Is Better)
  Core i7 5960X: 39.23 (SE +/- 0.01, N = 3; Min: 39.21 / Avg: 39.23 / Max: 39.24)
  Intel Core i7 5960X: 39.22 (SE +/- 0.01, N = 3; Min: 39.21 / Avg: 39.22 / Max: 39.23)
  Compiled with: (CC) gcc options: -O3

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: Crypto (Bogo Ops/s, More Is Better)
  Core i7 5960X: 1264.22 (SE +/- 0.77, N = 3; Min: 1262.7 / Avg: 1264.22 / Max: 1265.23)
  Intel Core i7 5960X: 1264.48 (SE +/- 1.77, N = 3; Min: 1261.07 / Avg: 1264.48 / Max: 1267.03)
  Compiled with: (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test covering both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.0 - Preset: Exhaustive (Seconds, Fewer Is Better)
  Core i7 5960X: 355.99 (SE +/- 0.05, N = 3; Min: 355.9 / Avg: 355.99 / Max: 356.06)
  Intel Core i7 5960X: 355.92 (SE +/- 0.07, N = 3; Min: 355.85 / Avg: 355.92 / Max: 356.06)
  Compiled with: (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.0 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
  Core i7 5960X: 6.55973 (SE +/- 0.01209, N = 3; Min: 6.54 / Avg: 6.56 / Max: 6.57; MIN: 6.5)
  Intel Core i7 5960X: 6.56081 (SE +/- 0.02357, N = 3; Min: 6.53 / Avg: 6.56 / Max: 6.61; MIN: 6.49)
  Compiled with: (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
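The TensorFlow Lite figures below are average inference times in microseconds. As a generic, hedged sketch of how such an average is collected (the lambda is a hypothetical stand-in for one model inference call):

```python
import time

def mean_inference_us(run_once, iterations=3):
    """Average wall-clock time of run_once() in microseconds, the unit
    used by the TensorFlow Lite results in this file."""
    times = []
    for _ in range(iterations):
        start = time.perf_counter()
        run_once()  # stand-in for a single model inference
        times.append((time.perf_counter() - start) * 1e6)
    return sum(times) / len(times)

# hypothetical workload standing in for a Mobilenet inference
avg_us = mean_inference_us(lambda: sum(range(10_000)))
```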

TensorFlow Lite 2020-08-23 - Model: Mobilenet Quant (Microseconds, Fewer Is Better)
  Core i7 5960X: 195193 (SE +/- 18.52, N = 3; Min: 195173 / Avg: 195193 / Max: 195230)
  Intel Core i7 5960X: 195163 (SE +/- 70.53, N = 3; Min: 195024 / Avg: 195163.33 / Max: 195252)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: Semaphores (Bogo Ops/s, More Is Better)
  Core i7 5960X: 1314298.47 (SE +/- 276.15, N = 3; Min: 1313958.07 / Avg: 1314298.47 / Max: 1314845.33)
  Intel Core i7 5960X: 1314448.24 (SE +/- 800.12, N = 3; Min: 1313147.95 / Avg: 1314448.24 / Max: 1315906.18)
  Compiled with: (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.

InfluxDB 1.8.2 - Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 (val/sec, More Is Better)
  Core i7 5960X: 560118.7 (SE +/- 2528.38, N = 3; Min: 555136.5 / Avg: 560118.7 / Max: 563359.1)
  Intel Core i7 5960X: 560056.2 (SE +/- 3405.39, N = 3; Min: 553711.5 / Avg: 560056.2 / Max: 565373)

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2020-09-17 - Model: SqueezeNetV1.0 (ms, Fewer Is Better)
  Core i7 5960X: 9.247 (SE +/- 0.005, N = 3; Min: 9.24 / Avg: 9.25 / Max: 9.25; MIN: 9.18 / MAX: 12.45)
  Intel Core i7 5960X: 9.246 (SE +/- 0.003, N = 3; Min: 9.24 / Avg: 9.25 / Max: 9.25; MIN: 9.19 / MAX: 10.98)
  Compiled with: (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: Inception ResNet V2 (Microseconds, Fewer Is Better)
  Core i7 5960X: 3881103 (SE +/- 157.09, N = 3; Min: 3880790 / Avg: 3881103.33 / Max: 3881280)
  Intel Core i7 5960X: 3881410 (SE +/- 708.12, N = 3; Min: 3880020 / Avg: 3881410 / Max: 3882340)

TensorFlow Lite 2020-08-23 - Model: SqueezeNet (Microseconds, Fewer Is Better)
  Core i7 5960X: 298339 (SE +/- 43.33, N = 3; Min: 298296 / Avg: 298339.33 / Max: 298426)
  Intel Core i7 5960X: 298320 (SE +/- 27.39, N = 3; Min: 298266 / Avg: 298320 / Max: 298355)

Timed HMMer Search

This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of the Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.

Timed HMMer Search 3.3.1 - Pfam Database Search (Seconds, Fewer Is Better)
  Core i7 5960X: 172.36 (SE +/- 0.23, N = 3; Min: 171.89 / Avg: 172.36 / Max: 172.6)
  Intel Core i7 5960X: 172.36 (SE +/- 0.41, N = 3; Min: 171.69 / Avg: 172.36 / Max: 173.12)
  Compiled with: (CC) gcc options: -O3 -pthread -lhmmer -leasel -lm

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: Inception V4 (Microseconds, Fewer Is Better)
  Core i7 5960X: 4293077 (SE +/- 456.70, N = 3; Min: 4292610 / Avg: 4293076.67 / Max: 4293990)
  Intel Core i7 5960X: 4292933 (SE +/- 264.34, N = 3; Min: 4292630 / Avg: 4292933.33 / Max: 4293460)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: RdRand (Bogo Ops/s, More Is Better)
  Core i7 5960X: 243866.72 (SE +/- 2.30, N = 3; Min: 243862.97 / Avg: 243866.72 / Max: 243870.9)
  Intel Core i7 5960X: 243859.91 (SE +/- 5.72, N = 3; Min: 243850.52 / Avg: 243859.91 / Max: 243870.25)
  Compiled with: (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs. Learn more via the OpenBenchmarking.org test page.

Embree 3.9.0 - Binary: Pathtracer ISPC - Model: Asian Dragon Obj (Frames Per Second, More Is Better)
  Core i7 5960X: 10.18 (SE +/- 0.01, N = 3; Min: 10.17 / Avg: 10.18 / Max: 10.2; MIN: 10.13 / MAX: 10.31)
  Intel Core i7 5960X: 10.18 (SE +/- 0.04, N = 3; Min: 10.13 / Avg: 10.18 / Max: 10.25; MIN: 10.08 / MAX: 10.39)

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 - Model: Mobilenet Float (Microseconds, Fewer Is Better)
  Core i7 5960X: 201338 (SE +/- 36.09, N = 3; Min: 201266 / Avg: 201337.67 / Max: 201381)
  Intel Core i7 5960X: 201337 (SE +/- 32.83, N = 3; Min: 201271 / Avg: 201336.67 / Max: 201370)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: Vector Math (Bogo Ops/s, More Is Better)
  Core i7 5960X: 46301.30 (SE +/- 5.13, N = 3; Min: 46291.11 / Avg: 46301.3 / Max: 46307.41)
  Intel Core i7 5960X: 46301.39 (SE +/- 6.83, N = 3; Min: 46287.72 / Avg: 46301.39 / Max: 46308.28)
  Compiled with: (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

AI Benchmark Alpha 0.1.2 - Device AI Score (Score, More Is Better)
  Core i7 5960X: 1951
  Intel Core i7 5960X: 1951

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: python_startup (Milliseconds, Fewer Is Better)
  Core i7 5960X: 15.7 (SE +/- 0.00, N = 3; Min: 15.7 / Avg: 15.7 / Max: 15.7)
  Intel Core i7 5960X: 15.7 (SE +/- 0.03, N = 3; Min: 15.6 / Avg: 15.67 / Max: 15.7)

PyPerformance 1.0.0 - Benchmark: crypto_pyaes (Milliseconds, Fewer Is Better)
  Core i7 5960X: 128
  Intel Core i7 5960X: 128

PyPerformance 1.0.0 - Benchmark: raytrace (Milliseconds, Fewer Is Better)
  Core i7 5960X: 558 (SE +/- 1.15, N = 3; Min: 556 / Avg: 558 / Max: 560)
  Intel Core i7 5960X: 558 (SE +/- 0.58, N = 3; Min: 557 / Avg: 558 / Max: 559)

PyPerformance 1.0.0 - Benchmark: nbody (Milliseconds, Fewer Is Better)
  Core i7 5960X: 150
  Intel Core i7 5960X: 150

PyPerformance 1.0.0 - Benchmark: float (Milliseconds, Fewer Is Better)
  Core i7 5960X: 135
  Intel Core i7 5960X: 135

PyPerformance 1.0.0 - Benchmark: chaos (Milliseconds, Fewer Is Better)
  Core i7 5960X: 125
  Intel Core i7 5960X: 125

PyPerformance 1.0.0 - Benchmark: go (Milliseconds, Fewer Is Better)
  Core i7 5960X: 294 (SE +/- 0.58, N = 3; Min: 293 / Avg: 294 / Max: 295)
  Intel Core i7 5960X: 294 (SE +/- 0.33, N = 3; Min: 293 / Avg: 293.67 / Max: 294)

OpenVINO

This is a test of Intel OpenVINO, a toolkit for deploying and benchmarking neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2021.1 - Model: Age Gender Recognition Retail 0013 FP32 - Device: CPU (ms, Fewer Is Better)
  Core i7 5960X: 0.81 (SE +/- 0.00, N = 3; Min: 0.81 / Avg: 0.81 / Max: 0.81)
  Intel Core i7 5960X: 0.81 (SE +/- 0.00, N = 3; Min: 0.81 / Avg: 0.81 / Max: 0.82)

OpenVINO 2021.1 - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (ms, Fewer Is Better)
  Core i7 5960X: 0.81 (SE +/- 0.00, N = 3; Min: 0.81 / Avg: 0.81 / Max: 0.81)
  Intel Core i7 5960X: 0.81 (SE +/- 0.00, N = 3; Min: 0.81 / Avg: 0.81 / Max: 0.82)

OpenVINO 2021.1 - Model: Person Detection 0106 FP16 - Device: CPU (FPS, More Is Better)
  Core i7 5960X: 1.22 (SE +/- 0.00, N = 3; Min: 1.22 / Avg: 1.22 / Max: 1.23)
  Intel Core i7 5960X: 1.22 (SE +/- 0.01, N = 3; Min: 1.21 / Avg: 1.22 / Max: 1.23)

OpenVINO 2021.1 - Model: Face Detection 0106 FP32 - Device: CPU (FPS, More Is Better)
  Core i7 5960X: 1.77 (SE +/- 0.00, N = 3; Min: 1.77 / Avg: 1.77 / Max: 1.77)
  Intel Core i7 5960X: 1.77 (SE +/- 0.00, N = 3; Min: 1.77 / Avg: 1.77 / Max: 1.78)

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: efficientnet-b0 (ms, Fewer Is Better)
  Core i7 5960X: 8.58 (SE +/- 0.00, N = 3; Min: 8.57 / Avg: 8.58 / Max: 8.58; MIN: 8.5 / MAX: 9.01)
  Intel Core i7 5960X: 8.58 (SE +/- 0.02, N = 3; Min: 8.56 / Avg: 8.58 / Max: 8.61; MIN: 8.51 / MAX: 8.79)
  Compiled with: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12 - Settings: UASTC Level 2 (Seconds, Fewer Is Better)
  Core i7 5960X: 50.69 (SE +/- 0.01, N = 3; Min: 50.67 / Avg: 50.69 / Max: 50.7)
  Intel Core i7 5960X: 50.69 (SE +/- 0.01, N = 3; Min: 50.67 / Avg: 50.69 / Max: 50.7)
  Compiled with: (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

Cryptsetup

This is a test profile for running the cryptsetup benchmark to report on the system's cryptography performance. Learn more via the OpenBenchmarking.org test page.

Cryptsetup - Twofish-XTS 512b Encryption (MiB/s, More Is Better)
  Core i7 5960X: 349.1 (SE +/- 0.27, N = 3; Min: 348.8 / Avg: 349.07 / Max: 349.6)
  Intel Core i7 5960X: 349.1 (SE +/- 0.20, N = 3; Min: 348.7 / Avg: 349.1 / Max: 349.3)

LuxCoreRender

LuxCoreRender is an open-source physically based renderer. This test profile is focused on running LuxCoreRender on the CPU as opposed to the OpenCL version. Learn more via the OpenBenchmarking.org test page.

LuxCoreRender 2.3 - Scene: Rainbow Colors and Prism (M samples/sec, More Is Better)
  Core i7 5960X: 1.24 (SE +/- 0.00, N = 3; Min: 1.24 / Avg: 1.24 / Max: 1.25; MIN: 1.21 / MAX: 1.3)
  Intel Core i7 5960X: 1.24 (SE +/- 0.00, N = 3; Min: 1.24 / Avg: 1.24 / Max: 1.25; MIN: 1.21 / MAX: 1.3)

LuxCoreRender 2.3 - Scene: DLSC (M samples/sec, More Is Better)
  Core i7 5960X: 1.14 (SE +/- 0.00, N = 3; Min: 1.14 / Avg: 1.14 / Max: 1.14; MIN: 1.1 / MAX: 1.15)
  Intel Core i7 5960X: 1.14 (SE +/- 0.00, N = 3; Min: 1.14 / Avg: 1.14 / Max: 1.15; MIN: 1.09 / MAX: 1.16)

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library, which offers high-performance volume computation kernels and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 0.9 - Benchmark: vklBenchmark (Items / Sec, More Is Better)
  Core i7 5960X: 84.83 (SE +/- 0.14, N = 3; Min: 84.58 / Avg: 84.83 / Max: 85.08; MIN: 1 / MAX: 335)
  Intel Core i7 5960X: 84.83 (SE +/- 0.13, N = 3; Min: 84.67 / Avg: 84.83 / Max: 85.08; MIN: 1 / MAX: 336)

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray tracing and is part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

Intel Open Image Denoise 1.2.0 - Scene: Memorial (Images / Sec, More Is Better)
  Core i7 5960X: 6.35 (SE +/- 0.00, N = 3; Min: 6.35 / Avg: 6.35 / Max: 6.36)
  Intel Core i7 5960X: 6.35 (SE +/- 0.00, N = 3; Min: 6.34 / Avg: 6.35 / Max: 6.35)

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8 - Encoder Mode: Enc Mode 0 - Input: 1080p (Frames Per Second, More Is Better)
  Core i7 5960X: 0.084 (SE +/- 0.000, N = 3; Min: 0.08 / Avg: 0.08 / Max: 0.08)
  Intel Core i7 5960X: 0.084 (SE +/- 0.000, N = 3; Min: 0.08 / Avg: 0.08 / Max: 0.08)
  Compiled with: (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.0 - Video Input: Bosphorus 4K - Video Preset: Medium (Frames Per Second, More Is Better)
  Core i7 5960X: 3.40 (SE +/- 0.01, N = 3; Min: 3.39 / Avg: 3.4 / Max: 3.41)
  Intel Core i7 5960X: 3.40 (SE +/- 0.01, N = 3; Min: 3.39 / Avg: 3.4 / Max: 3.41)
  Compiled with: (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

AOM AV1

This is a simple test of the AOMedia AV1 encoder run on the CPU with a sample video file. Learn more via the OpenBenchmarking.org test page.

AOM AV1 2.0 - Encoder Mode: Speed 6 Two-Pass (Frames Per Second, More Is Better)
  Core i7 5960X: 2.57 (SE +/- 0.00, N = 3; Min: 2.56 / Avg: 2.57 / Max: 2.57)
  Intel Core i7 5960X: 2.57 (SE +/- 0.01, N = 3; Min: 2.55 / Avg: 2.57 / Max: 2.58)
  Compiled with: (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Enhanced (Iterations Per Minute, More Is Better)
  Core i7 5960X: 96
  Intel Core i7 5960X: 96

GraphicsMagick 1.3.33 - Operation: Sharpen (Iterations Per Minute, More Is Better)
  Core i7 5960X: 85
  Intel Core i7 5960X: 85
  One run set reported SE +/- 0.33, N = 3 (Min: 84 / Avg: 84.67 / Max: 85)

GraphicsMagick 1.3.33 - Operation: Rotate (Iterations Per Minute, More Is Better)
  Core i7 5960X: 582
  Intel Core i7 5960X: 582
  One run set reported SE +/- 0.33, N = 3 (Min: 582 / Avg: 582.33 / Max: 583)

GraphicsMagick 1.3.33 - Operation: Swirl (Iterations Per Minute, More Is Better)
  Core i7 5960X: 250 (SE +/- 0.33, N = 3; Min: 249 / Avg: 249.67 / Max: 250)
  Intel Core i7 5960X: 250 (SE +/- 0.33, N = 3; Min: 249 / Avg: 249.67 / Max: 250)

All GraphicsMagick results compiled with: (CC) gcc options: -fopenmp -O2 -pthread -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

Zstd Compression

This test measures the time needed to compress a sample file (an Ubuntu ISO) using Zstd compression. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.4.5 - Compression Level: 19 (MB/s, More Is Better)
  Core i7 5960X: 37.2 (SE +/- 0.09, N = 3; Min: 37.1 / Avg: 37.23 / Max: 37.4)
  Intel Core i7 5960X: 37.2 (SE +/- 0.03, N = 3; Min: 37.1 / Avg: 37.17 / Max: 37.2)
  Compiled with: (CC) gcc options: -O3 -pthread -lz -llzma

simdjson

This is a benchmark of simdjson, a high-performance JSON parser. simdjson aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
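The GB/s throughput figures below are simply bytes of JSON parsed per second of wall-clock time. A hedged sketch of that computation, using Python's stdlib json module as a stand-in for simdjson (which is a C++ library):

```python
import json
import time

def parse_gb_per_s(doc: str) -> float:
    """Parse throughput in GB/s, the unit reported by the simdjson
    results below. Stdlib json stands in for simdjson here."""
    payload = doc.encode()
    start = time.perf_counter()
    json.loads(payload)
    elapsed = time.perf_counter() - start
    return len(payload) / elapsed / 1e9

# hypothetical document standing in for the LargeRandom test input
doc = json.dumps([{"id": i, "name": f"item-{i}"} for i in range(50_000)])
rate = parse_gb_per_s(doc)
```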

simdjson 0.7.1 - Throughput Test: LargeRandom (GB/s, More Is Better)
  Core i7 5960X: 0.36 (SE +/- 0.00, N = 3; Min: 0.36 / Avg: 0.36 / Max: 0.36)
  Intel Core i7 5960X: 0.36 (SE +/- 0.00, N = 3; Min: 0.36 / Avg: 0.36 / Max: 0.36)

simdjson 0.7.1 - Throughput Test: Kostya (GB/s, More Is Better)
  Core i7 5960X: 0.54 (SE +/- 0.00, N = 3; Min: 0.54 / Avg: 0.54 / Max: 0.54)
  Intel Core i7 5960X: 0.54 (SE +/- 0.00, N = 3; Min: 0.54 / Avg: 0.54 / Max: 0.54)

Both compiled with: (CXX) g++ options: -O3 -pthread

lzbench

lzbench is an in-memory benchmark of various compressors. The file used for compression is a Linux kernel source tree tarball. Learn more via the OpenBenchmarking.org test page.

lzbench 1.8 - Test: Libdeflate 1 - Process: Decompression (MB/s, More Is Better)
  Core i7 5960X: 995 (SE +/- 0.67, N = 3; Min: 994 / Avg: 994.67 / Max: 996)
  Intel Core i7 5960X: 995 (SE +/- 0.33, N = 3; Min: 994 / Avg: 994.67 / Max: 995)

lzbench 1.8 - Test: Libdeflate 1 - Process: Compression (MB/s, More Is Better)
  Core i7 5960X: 185
  Intel Core i7 5960X: 185
  One run set reported SE +/- 0.67, N = 3 (Min: 184 / Avg: 185.33 / Max: 186)

lzbench 1.8 - Test: Brotli 2 - Process: Compression (MB/s, More Is Better)
  Core i7 5960X: 152
  Intel Core i7 5960X: 152

lzbench 1.8 - Test: Brotli 0 - Process: Compression (MB/s, More Is Better)
  Core i7 5960X: 366 (SE +/- 1.33, N = 3; Min: 363 / Avg: 365.67 / Max: 367)
  Intel Core i7 5960X: 366 (SE +/- 0.58, N = 3; Min: 365 / Avg: 366 / Max: 367)

lzbench 1.8 - Test: Crush 0 - Process: Compression (MB/s, More Is Better)
  Core i7 5960X: 82 (SE +/- 0.33, N = 3; Min: 81 / Avg: 81.67 / Max: 82)
  Intel Core i7 5960X: 82 (SE +/- 0.33, N = 3; Min: 81 / Avg: 81.67 / Max: 82)

lzbench 1.8 - Test: Zstd 8 - Process: Compression (MB/s, More Is Better)
  Core i7 5960X: 68
  Intel Core i7 5960X: 68

lzbench 1.8 - Test: XZ 0 - Process: Decompression (MB/s, More Is Better)
  Core i7 5960X: 97
  Intel Core i7 5960X: 97
  One run set reported SE +/- 0.33, N = 3 (Min: 96 / Avg: 96.67 / Max: 97)

lzbench 1.8 - Test: XZ 0 - Process: Compression (MB/s, More Is Better)
  Core i7 5960X: 34
  Intel Core i7 5960X: 34

All lzbench results compiled with: (CXX) g++ options: -pthread -fomit-frame-pointer -fstrict-aliasing -ffast-math -O3

Mlpack Benchmark

Mlpack provides benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

Mlpack Benchmark - Benchmark: scikit_qda (Seconds, Fewer Is Better)
  Core i7 5960X: 65.92 (SE +/- 0.22, N = 3; Min: 65.7 / Avg: 65.92 / Max: 66.37)
  Intel Core i7 5960X: 67.42 (SE +/- 1.55, N = 12; Min: 64.94 / Avg: 67.42 / Max: 84.27)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.11.07 - Test: System V Message Passing (Bogo Ops/s, more is better)
  Core i7 5960X: 6757856.83 (SE +/- 28385.67, N = 3; Min: 6727637.43 / Max: 6814587.8)
  Intel Core i7 5960X: 6549886.19 (SE +/- 148172.66, N = 15; Min: 4827156.34 / Max: 6848555.24)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc

Stress-NG 0.11.07 - Test: Context Switching (Bogo Ops/s, more is better)
  Core i7 5960X: 2860383.27 (SE +/- 24413.26, N = 3; Min: 2821769.67 / Max: 2905569.87)
  Intel Core i7 5960X: 2944645.27 (SE +/- 55634.55, N = 15; Min: 2654675.11 / Max: 3448214.96)
  1. (CC) gcc options: -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lz -ldl -lpthread -lc
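The wide Min/Max spread on the Intel System V Message Passing run is the kind of result the "Do Not Show Noisy Results" view option filters out. The per-run standard deviation can be recovered from a reported SE (stdev = SE × sqrt(N)), giving a relative-spread figure to compare runs of different N; a sketch using the numbers above (`rel_stdev_pct` is my own helper, not a PTS function):

```python
from math import sqrt

def rel_stdev_pct(se, n, mean):
    """Relative sample standard deviation in percent, recovered from a
    reported standard error of the mean: stdev = SE * sqrt(N)."""
    return 100.0 * se * sqrt(n) / mean

# Figures from the System V Message Passing result above
core_i7 = rel_stdev_pct(28385.67, 3, 6757856.83)
intel   = rel_stdev_pct(148172.66, 15, 6549886.19)
print(f"Core i7 run:  {core_i7:.2f}% relative stdev")
print(f"Intel run:    {intel:.2f}% relative stdev")
```

The first run varies well under 1% between repeats, while the second spreads close to 9%, so its average should be read with more caution.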

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9 - Test: GET (Requests Per Second, more is better)
  Core i7 5960X: 1798887.05 (SE +/- 18732.01, N = 15; Min: 1579778.75 / Max: 1869697.12)
  Intel Core i7 5960X: 1565730.85 (SE +/- 34691.30, N = 15; Min: 1233085.12 / Max: 1692263.88)
  1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Redis 6.0.9 - Test: LPUSH (Requests Per Second, more is better)
  Core i7 5960X: 1192557.05 (SE +/- 25198.30, N = 15; Min: 953410.88 / Max: 1250280)
  Intel Core i7 5960X: 1147031.87 (SE +/- 30657.02, N = 12; Min: 951474.81 / Max: 1234923.5)
  1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Redis 6.0.9 - Test: LPOP (Requests Per Second, more is better)
  Core i7 5960X: 1772198.18 (SE +/- 36196.65, N = 15; Min: 1404494.38 / Max: 1930687.25)
  Intel Core i7 5960X: 1159344.72 (SE +/- 26363.62, N = 15; Min: 943396.25 / Max: 1275673.5)
  1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Sockperf

This is a network socket API performance benchmark. Learn more via the OpenBenchmarking.org test page.

Sockperf 3.4 - Test: Latency Under Load (usec, fewer is better)
  Core i7 5960X: 25.12 (SE +/- 1.00, N = 25; Min: 12.29 / Max: 30.96)
  Intel Core i7 5960X: 24.77 (SE +/- 1.23, N = 20; Min: 12.38 / Max: 29.9)
  1. (CXX) g++ options: --param -O3 -rdynamic -ldl -lpthread
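When rolling many per-test comparisons like these into one number, the usual aggregate is the geometric mean of per-test performance ratios (the "Show Overall Geometric Mean" view option). A sketch, using ratios computed from the three Redis averages above (`geomean` is my own helper):

```python
from math import prod

def geomean(ratios):
    """Geometric mean of per-test performance ratios; 1.0 means parity.
    Unlike an arithmetic mean, it is unchanged by inverting the ratios."""
    return prod(ratios) ** (1.0 / len(ratios))

# Core i7 5960X / Intel Core i7 5960X ratios from the Redis results above
ratios = [
    1798887.05 / 1565730.85,  # GET
    1192557.05 / 1147031.87,  # LPUSH
    1772198.18 / 1159344.72,  # LPOP
]
print(f"Geometric mean ratio: {geomean(ratios):.2f}")
```

Over just these three Redis tests the first result set comes out roughly 22% ahead; the full-page geometric mean spans all 313 results listed below.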

313 Results Shown

Stress-NG
Sockperf
oneDNN
simdjson
Stress-NG
oneDNN
OSBench
Redis
x265
C-Blosc
7-Zip Compression
Mlpack Benchmark
LeelaChessZero
PostMark
NCNN
OSBench
Mlpack Benchmark
Darktable
Kvazaar
Stockfish
x265
simdjson
Coremark
DaCapo Benchmark
toyBrot Fractal Generator
eSpeak-NG Speech Engine
ASTC Encoder
SVT-VP9
Numpy Benchmark
Embree
Numenta Anomaly Benchmark
LeelaChessZero
DaCapo Benchmark
x264
Tachyon
Embree
Redis
Darktable
rav1e
CLOMP
GraphicsMagick
WireGuard + Linux Networking Stack Stress Test
Warsow
Cryptsetup
rav1e
lzbench
Stress-NG
WebP Image Encode
Sockperf
lzbench
AOM AV1
OpenVINO
oneDNN
Monkey Audio Encoding
rav1e
oneDNN
dav1d
oneDNN
Blender
Rodinia
Basis Universal
GraphicsMagick
NCNN
Embree
Build2
Rodinia
Stress-NG
rav1e
FFTE
GLmark2
Kvazaar:
  Bosphorus 1080p - Very Fast
  Bosphorus 4K - Very Fast
Opus Codec Encoding
AOM AV1
libavif avifenc
Cryptsetup
OSBench
NAMD
GraphicsMagick
dav1d
toyBrot Fractal Generator
GNU Octave Benchmark
Timed MAFFT Alignment
OpenVINO
SVT-VP9
OpenVINO
PyPerformance
WebP Image Encode
libavif avifenc
oneDNN
DaCapo Benchmark
PyPerformance
Dolfyn
Stress-NG
TensorFlow Lite
Stress-NG
PyPerformance
Embree
oneDNN
NCNN
librsvg
Rodinia
LZ4 Compression
InfluxDB
Cryptsetup
Stress-NG
oneDNN
XZ Compression
OSBench
Cryptsetup
PyPerformance
oneDNN
Darktable
Apache CouchDB
oneDNN
WebP Image Encode
RNNoise
Zstd Compression
lzbench
Timed Eigen Compilation
SVT-AV1
OpenVINO
PyPerformance
oneDNN
NCNN
InfluxDB
Cryptsetup
Basis Universal
NCNN
Mobile Neural Network
Numenta Anomaly Benchmark
Kvazaar
Rodinia
PyPerformance
Darktable
Numenta Anomaly Benchmark
NCNN
RawTherapee
libavif avifenc
Stress-NG
NCNN:
  CPU - vgg16
  CPU-v2-v2 - mobilenet-v2
Embree
Caffe
WebP Image Encode
dav1d
KeyDB
Timed GDB GNU Debugger Compilation
LZ4 Compression
Cryptsetup
Kvazaar
Nettle
OCRMyPDF
oneDNN
lzbench
NCNN
Cryptsetup
Montage Astronomical Image Mosaic Engine
Crafty
lzbench
Cryptsetup
ASTC Encoder
Node.js V8 Web Tooling Benchmark
G'MIC
oneDNN
libavif avifenc
asmFish
SVT-VP9
NCNN
Botan
dav1d
BYTE Unix Benchmark
oneDNN
Stress-NG
Cryptsetup
TNN
lzbench
Nebular Empirical Analysis Tool
Mlpack Benchmark
oneDNN
PHPBench
Botan
Rodinia
G'MIC
LZ4 Compression
Cryptsetup:
  Serpent-XTS 256b Decryption
  Serpent-XTS 512b Decryption
LAMMPS Molecular Dynamics Simulator
ASTC Encoder
Timed LLVM Compilation
Mobile Neural Network
NCNN
OpenVINO
NCNN
Stress-NG
Blender
Sunflow Rendering System
Timed FFmpeg Compilation
YafaRay
OpenVINO
Git
Botan
WebP Image Encode
Timed PHP Compilation
Chaos Group V-RAY
TNN
Stress-NG
WavPack Audio Encoding
IndigoBench
AI Benchmark Alpha
Botan
Stress-NG
LibRaw
Nettle
AI Benchmark Alpha
toyBrot Fractal Generator
OpenVINO
Timed MPlayer Compilation
John The Ripper
SVT-AV1
Hugin
Botan
IndigoBench
Timed Apache Compilation
John The Ripper
Aircrack-ng
Mobile Neural Network
NCNN
OSBench
Basis Universal
Stress-NG
Nettle
Hierarchical INTegration
Mobile Neural Network
oneDNN
Timed GCC Compilation
Cryptsetup
G'MIC
Nettle
Crypto++
LZ4 Compression
Inkscape
toyBrot Fractal Generator
AOM AV1
SQLite Speedtest
NCNN
Cryptsetup
OpenVINO
oneDNN
Caffe
LZ4 Compression
Basis Universal
Stress-NG
LZ4 Compression
Stress-NG
ASTC Encoder
oneDNN
TensorFlow Lite
Stress-NG
InfluxDB
Mobile Neural Network
TensorFlow Lite:
  Inception ResNet V2
  SqueezeNet
Timed HMMer Search
TensorFlow Lite
Stress-NG
Embree
TensorFlow Lite
Stress-NG
AI Benchmark Alpha
PyPerformance:
  python_startup
  crypto_pyaes
  raytrace
  nbody
  float
  chaos
  go
OpenVINO:
  Age Gender Recognition Retail 0013 FP32 - CPU
  Age Gender Recognition Retail 0013 FP16 - CPU
  Person Detection 0106 FP16 - CPU
  Face Detection 0106 FP32 - CPU
NCNN
Basis Universal
Cryptsetup
LuxCoreRender:
  Rainbow Colors and Prism
  DLSC
OpenVKL
Intel Open Image Denoise
SVT-AV1
Kvazaar
AOM AV1
GraphicsMagick:
  Enhanced
  Sharpen
  Rotate
  Swirl
Zstd Compression
simdjson:
  LargeRand
  Kostya
lzbench:
  Libdeflate 1 - Decompression
  Libdeflate 1 - Compression
  Brotli 2 - Compression
  Brotli 0 - Compression
  Crush 0 - Compression
  Zstd 8 - Compression
  XZ 0 - Decompression
  XZ 0 - Compression
Mlpack Benchmark
Stress-NG:
  System V Message Passing
  Context Switching
Redis:
  GET
  LPUSH
  LPOP
Sockperf