Core i9 12900K Core Configuration

Benchmarks by Michael Larabel for a future article.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2111076-TJ-COREI912907
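
As a quick reference, a minimal invocation looks like the following sketch (this assumes the Phoronix Test Suite is already installed and that you accept the prompts to install any missing test dependencies):

    # Run the same test selection locally and compare against this public result file
    phoronix-test-suite benchmark 2111076-TJ-COREI912907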

Result Identifier    Date                Test Duration
1 P + HT + 8 E       November 05 2021    11 Hours, 13 Minutes
8 P + HT             November 06 2021    6 Hours, 53 Minutes
8 P + 8 E            November 05 2021    11 Hours, 31 Minutes
8 P + HT + 8 E       November 04 2021    9 Hours, 46 Minutes



Core i9 12900K Core Configuration - System Details

Processor: Intel Core i9-12900K @ 6.50GHz
  1 P + HT + 8 E: 9 Cores / 10 Threads
  8 P + HT: 8 Cores / 16 Threads
  8 P + 8 E: 16 Cores
  8 P + HT + 8 E: 16 Cores / 24 Threads
Motherboard: ASUS ROG STRIX Z690-E GAMING WIFI (0702 BIOS)
Chipset: Intel Device 7aa7
Memory: 64GB
Disk: 1000GB Western Digital WDS100T1X0E-00AFY0
Graphics: Gigabyte AMD Radeon RX 6800/6800 XT / 6900 16GB (2575/1000MHz)
Audio: Intel Device 7ad0
Monitor: ASUS VP28U
Network: Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
OS: Ubuntu 21.10
Kernel: 5.15.0-051500rc6daily20211023-generic (x86_64) 20211022
Desktop: GNOME Shell 40.5
Display Server: X Server 1.20.13 + Wayland
OpenGL: 4.6 Mesa 22.0.0-devel (git-c2d522b 2021-10-23 impish-oibaf-ppa) (LLVM 12.0.1 DRM 3.42 5.15.0-051500rc6daily20211023-generic)
Vulkan: 1.2.195
Compiler: GCC 11.2.0
File-System: ext4
Screen Resolution: 3840x2160

System Logs / Notes:
- Transparent Huge Pages: madvise
- CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native"
- GCC configure options: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-ZPT0kp/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-ZPT0kp/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: intel_pstate powersave
- CPU Microcode: 0x12
- Thermald 2.4.6
- BAR1 / Visible vRAM Size: 16368 MB
- Python 3.9.7
- Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Result Overview chart (relative scale 100% to 267%) comparing the four core configurations across: OSPray, ONNX Runtime, OpenSSL, Embree, NAMD, Stockfish, PlaidML, ASTC Encoder, Intel Open Image Denoise, SVT-HEVC, Coremark, oneDNN, Xmrig, SVT-VP9, Cpuminer-Opt, SVT-AV1, Timed Linux Kernel Compilation, ET: Legacy, 7-Zip Compression, GROMACS, Timed Godot Game Engine Compilation, Sysbench, JPEG XL Decoding libjxl, Timed Mesa Compilation, NCNN, OpenCV, WireGuard + Linux Networking Stack Stress Test, Zstd Compression, Unvanquished, LeelaChessZero, Tesseract, AOM AV1, Xonotic, Selenium, JPEG XL libjxl, libavif avifenc, RawTherapee, LibRaw, Hugin, DeepSpeech, and Darmstadt Automotive Parallel Heterogeneous Suite.

Per Watt Result Overview chart (performance-per-watt geometric means, relative scale 100% to 183%) comparing the four core configurations across: Sysbench, Cpuminer-Opt, SVT-VP9, SVT-HEVC, Coremark, OSPray, ET: Legacy, JPEG XL Decoding libjxl, LeelaChessZero, Darmstadt Automotive Parallel Heterogeneous Suite, OpenSSL, Stockfish, Intel Open Image Denoise, AOM AV1, Zstd Compression, Tesseract, 7-Zip Compression, SVT-AV1, LibRaw, PlaidML, ONNX Runtime, Xonotic, Embree, Xmrig, Selenium, JPEG XL libjxl, GROMACS, and Unvanquished.

Core i9 12900K Core Configuration consolidated results table (all tests, all four core configurations): see the per-test results below and the full result file on OpenBenchmarking.org for the complete data.

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray 1.8.5, Demo: Magnetic Reconnection - Renderer: SciVis (FPS, More Is Better):
  1 P + HT + 8 E: 7.58 (SE +/- 0.03, N = 3)
  8 P + HT: 31.25 (SE +/- 0.00, N = 6)
  8 P + 8 E: 19.23 (SE +/- 0.00, N = 4)
  8 P + HT + 8 E: 26.32 (SE +/- 0.00, N = 5)

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.9.1, Model: super-resolution-10 - Device: CPU (Inferences Per Minute, More Is Better):
  1 P + HT + 8 E: 1941 (SE +/- 2.80, N = 3)
  8 P + HT: 4687 (SE +/- 17.95, N = 3)
  8 P + 8 E: 7802 (SE +/- 14.41, N = 3)
  8 P + HT + 8 E: 5006 (SE +/- 76.35, N = 12)

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray 1.8.5, Demo: XFrog Forest - Renderer: Path Tracer (FPS, More Is Better):
  1 P + HT + 8 E: 0.72 (SE +/- 0.00, N = 3)
  8 P + HT: 2.05 (SE +/- 0.01, N = 3)
  8 P + 8 E: 1.85 (SE +/- 0.00, N = 3)
  8 P + HT + 8 E: 2.48 (SE +/- 0.00, N = 3)

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test covering both compression and decompression. Learn more via the OpenBenchmarking.org test page.
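
As a rough sketch of the kind of invocation behind this profile (not the exact Phoronix Test Suite wrapper; the file names are placeholders and the binary may be named astcenc-avx2 or astcenc-sse4.1 depending on the build):

    # Round-trip test: compress an LDR image at 6x6 block size with the exhaustive preset, then decompress it
    astcenc -tl input.png roundtrip.png 6x6 -exhaustive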

ASTC Encoder 3.2, Preset: Exhaustive (Seconds, Fewer Is Better):
  1 P + HT + 8 E: 111.63 (SE +/- 0.01, N = 3)
  8 P + HT: 42.30 (SE +/- 0.41, N = 3)
  8 P + 8 E: 39.02 (SE +/- 0.08, N = 3)
  8 P + HT + 8 E: 32.75 (SE +/- 0.13, N = 3)

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.9.1, Model: yolov4 - Device: CPU (Inferences Per Minute, More Is Better):
  1 P + HT + 8 E: 226 (SE +/- 0.00, N = 3)
  8 P + HT: 509 (SE +/- 0.33, N = 3)
  8 P + 8 E: 757 (SE +/- 0.33, N = 3)
  8 P + HT + 8 E: 640 (SE +/- 9.36, N = 12)

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray 1.8.5, Demo: XFrog Forest - Renderer: SciVis (FPS, More Is Better):
  1 P + HT + 8 E: 1.41 (SE +/- 0.00, N = 3)
  8 P + HT: 3.82 (SE +/- 0.01, N = 3)
  8 P + 8 E: 3.51 (SE +/- 0.00, N = 3)
  8 P + HT + 8 E: 4.71 (SE +/- 0.01, N = 3)

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.
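
The underlying measurement can be reproduced by hand with the built-in speed tool; a minimal sketch (thread count is an assumption, adjust as needed):

    # RSA 4096-bit sign/verify throughput across all available CPU threads
    openssl speed -multi $(nproc) rsa4096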

OpenSSL 3.0, Algorithm: RSA4096 (sign/s, More Is Better):
  1 P + HT + 8 E: 1311.0 (SE +/- 0.09, N = 3)
  8 P + HT: 3248.5 (SE +/- 35.74, N = 4)
  8 P + 8 E: 3789.5 (SE +/- 0.34, N = 3)
  8 P + HT + 8 E: 4272.1 (SE +/- 16.07, N = 3)

OpenSSL 3.0, Algorithm: RSA4096 (verify/s, More Is Better):
  1 P + HT + 8 E: 85945.0 (SE +/- 8.82, N = 3)
  8 P + HT: 214809.2 (SE +/- 848.81, N = 4)
  8 P + 8 E: 241292.1 (SE +/- 27.11, N = 3)
  8 P + HT + 8 E: 278159.7 (SE +/- 1645.32, N = 3)

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray 1.8.5, Demo: San Miguel - Renderer: SciVis (FPS, More Is Better):
  1 P + HT + 8 E: 9.26 (SE +/- 0.00, N = 3)
  8 P + HT: 25.00 (SE +/- 0.00, N = 3)
  8 P + 8 E: 21.28 (SE +/- 0.00, N = 3)
  8 P + HT + 8 E: 28.57 (SE +/- 0.00, N = 3)

OSPray 1.8.5, Demo: San Miguel - Renderer: Path Tracer (FPS, More Is Better):
  1 P + HT + 8 E: 0.76 (SE +/- 0.00, N = 3)
  8 P + HT: 1.99 (SE +/- 0.01, N = 3)
  8 P + 8 E: 1.70 (SE +/- 0.00, N = 3)
  8 P + HT + 8 E: 2.33 (SE +/- 0.00, N = 3)

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the processor across a wide variety of cryptocurrencies. The benchmark reports the hash speed for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.
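
For context, cpuminer-opt can be run in an offline benchmark mode without connecting to a pool; a hedged sketch (the algorithm name and thread count here are assumptions and vary by release):

    # Offline benchmark of the blake2s algorithm on 24 threads
    cpuminer -a blake2s --benchmark -t 24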

Cpuminer-Opt 3.18, Algorithm: Blake-2 S (kH/s, More Is Better):
  1 P + HT + 8 E: 177443 (SE +/- 951.53, N = 3)
  8 P + HT: 419797 (SE +/- 1649.16, N = 3)
  8 P + 8 E: 471880 (SE +/- 2070.53, N = 3)
  8 P + HT + 8 E: 542007 (SE +/- 2502.49, N = 3)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2, Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, Fewer Is Better):
  1 P + HT + 8 E: 3638.97 (SE +/- 0.75, N = 3)
  8 P + HT: 1197.42 (SE +/- 3.30, N = 3)
  8 P + 8 E: 2372.95 (SE +/- 9.92, N = 3)
  8 P + HT + 8 E: 1615.78 (SE +/- 0.96, N = 3)

oneDNN 2.1.2, Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better):
  1 P + HT + 8 E: 3639.20 (SE +/- 0.46, N = 3)
  8 P + HT: 1197.97 (SE +/- 3.65, N = 3)
  8 P + 8 E: 2362.08 (SE +/- 0.86, N = 3)
  8 P + HT + 8 E: 1615.76 (SE +/- 0.52, N = 3)

oneDNN 2.1.2, Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, Fewer Is Better):
  1 P + HT + 8 E: 6875.26 (SE +/- 7.46, N = 3)
  8 P + HT: 2301.33 (SE +/- 3.29, N = 3)
  8 P + 8 E: 4532.54 (SE +/- 0.55, N = 3)
  8 P + HT + 8 E: 2878.71 (SE +/- 0.34, N = 3)

oneDNN 2.1.2, Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better):
  1 P + HT + 8 E: 6870.90 (SE +/- 3.10, N = 3)
  8 P + HT: 2311.44 (SE +/- 2.63, N = 3)
  8 P + 8 E: 4533.21 (SE +/- 0.18, N = 3)
  8 P + HT + 8 E: 2880.93 (SE +/- 1.27, N = 3)

Xmrig

Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.
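
Recent XMRig releases include an offline benchmark mode that is roughly what this profile exercises; a minimal sketch (the --bench option is available in XMRig 6.4 and newer):

    # 1M-hash RandomX benchmark without connecting to a pool
    xmrig --bench=1M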

Xmrig 6.12.1, Variant: Wownero - Hash Count: 1M (H/s, More Is Better):
  1 P + HT + 8 E: 3637.1 (SE +/- 2.85, N = 3)
  8 P + HT: 9070.8 (SE +/- 26.33, N = 3)
  8 P + 8 E: 8015.4 (SE +/- 2.95, N = 3)
  8 P + HT + 8 E: 10624.1 (SE +/- 14.90, N = 3)

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray 1.8.5, Demo: NASA Streamlines - Renderer: SciVis (FPS, More Is Better):
  1 P + HT + 8 E: 12.20 (SE +/- 0.00, N = 3)
  8 P + HT: 29.41 (SE +/- 0.00, N = 5)
  8 P + 8 E: 27.03 (SE +/- 0.00, N = 5)
  8 P + HT + 8 E: 35.51 (SE +/- 0.21, N = 6)

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 3.13, Binary: Pathtracer ISPC - Model: Crown (Frames Per Second, More Is Better):
  1 P + HT + 8 E: 7.3820 (SE +/- 0.0112, N = 3)
  8 P + HT: 17.8540 (SE +/- 0.0219, N = 3)
  8 P + 8 E: 15.3303 (SE +/- 0.0471, N = 3)
  8 P + HT + 8 E: 21.2518 (SE +/- 0.0383, N = 3)

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray 1.8.5, Demo: NASA Streamlines - Renderer: Path Tracer (FPS, More Is Better):
  1 P + HT + 8 E: 2.43 (SE +/- 0.00, N = 3)
  8 P + HT: 5.20 (SE +/- 0.02, N = 3)
  8 P + 8 E: 5.34 (SE +/- 0.01, N = 3)
  8 P + HT + 8 E: 6.99 (SE +/- 0.00, N = 3)

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the processor across a wide variety of cryptocurrencies. The benchmark reports the hash speed for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.18, Algorithm: Deepcoin (kH/s, More Is Better):
  1 P + HT + 8 E: 5064.82 (SE +/- 14.03, N = 3)
  8 P + HT: 9842.67 (SE +/- 0.93, N = 3)
  8 P + 8 E: 11487.00 (SE +/- 8.82, N = 3)
  8 P + HT + 8 E: 14130.00 (SE +/- 17.32, N = 3)

Cpuminer-Opt 3.18, Algorithm: LBC, LBRY Credits (kH/s, More Is Better):
  1 P + HT + 8 E: 15650 (SE +/- 195.02, N = 3)
  8 P + HT: 30274 (SE +/- 235.34, N = 15)
  8 P + 8 E: 37663 (SE +/- 101.71, N = 3)
  8 P + HT + 8 E: 43567 (SE +/- 196.50, N = 3)

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 3.13, Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second, More Is Better):
  1 P + HT + 8 E: 8.8244 (SE +/- 0.0104, N = 3)
  8 P + HT: 20.7006 (SE +/- 0.1319, N = 3)
  8 P + 8 E: 17.8801 (SE +/- 0.0535, N = 3)
  8 P + HT + 8 E: 24.5257 (SE +/- 0.0607, N = 3)

Embree 3.13, Binary: Pathtracer - Model: Crown (Frames Per Second, More Is Better):
  1 P + HT + 8 E: 7.1654 (SE +/- 0.0215, N = 3)
  8 P + HT: 15.0860 (SE +/- 0.0161, N = 3)
  8 P + 8 E: 14.4230 (SE +/- 0.0188, N = 3)
  8 P + HT + 8 E: 19.7558 (SE +/- 0.0667, N = 3)

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14, ATPase Simulation - 327,506 Atoms (days/ns, Fewer Is Better):
  1 P + HT + 8 E: 2.07712 (SE +/- 0.00335, N = 3)
  8 P + HT: 1.10885 (SE +/- 0.00248, N = 3)
  8 P + 8 E: 0.90384 (SE +/- 0.00440, N = 3)
  8 P + HT + 8 E: 0.79125 (SE +/- 0.00323, N = 3)

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 3.13, Binary: Pathtracer - Model: Asian Dragon (Frames Per Second, More Is Better):
  1 P + HT + 8 E: 8.4449 (SE +/- 0.0501, N = 3)
  8 P + HT: 16.4602 (SE +/- 0.0184, N = 3)
  8 P + 8 E: 16.1409 (SE +/- 0.0516, N = 3)
  8 P + HT + 8 E: 21.9375 (SE +/- 0.2045, N = 3)

PlaidML

This test profile uses the PlaidML deep learning framework developed by Intel to run various benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML, FP16: No - Mode: Inference - Network: VGG19 - Device: CPU (FPS, More Is Better):
  1 P + HT + 8 E: 8.10 (SE +/- 0.10, N = 3)
  8 P + HT: 19.68 (SE +/- 0.06, N = 3)
  8 P + 8 E: 19.73 (SE +/- 0.11, N = 3)
  8 P + HT + 8 E: 20.58 (SE +/- 0.08, N = 3)

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray-tracing and part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

Intel Open Image Denoise 1.4.0, Run: RT.ldr_alb_nrm.3840x2160 (Images / Sec, More Is Better):
  1 P + HT + 8 E: 0.19 (SE +/- 0.00, N = 3)
  8 P + HT: 0.48 (SE +/- 0.00, N = 3)
  8 P + 8 E: 0.34 (SE +/- 0.00, N = 3)
  8 P + HT + 8 E: 0.34 (SE +/- 0.00, N = 3)

Intel Open Image Denoise 1.4.0, Run: RT.hdr_alb_nrm.3840x2160 (Images / Sec, More Is Better):
  1 P + HT + 8 E: 0.19 (SE +/- 0.00, N = 3)
  8 P + HT: 0.48 (SE +/- 0.00, N = 3)
  8 P + 8 E: 0.34 (SE +/- 0.00, N = 3)
  8 P + HT + 8 E: 0.34 (SE +/- 0.00, N = 3)

Stockfish

This is a test of Stockfish, an advanced open-source C++11 chess engine benchmark that can scale up to 512 CPU threads. Learn more via the OpenBenchmarking.org test page.
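
The measurement comes from Stockfish's built-in bench command; run standalone it looks something like this (the scaled-up arguments follow the documented order of hash size in MB, thread count, and search depth):

    # Default built-in benchmark (16 MB hash, 1 thread, depth 13)
    stockfish bench
    # Scaled up: 128 MB hash, 24 threads, depth 26
    stockfish bench 128 24 26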

Stockfish 13, Total Time (Nodes Per Second, More Is Better):
  1 P + HT + 8 E: 19190130 (SE +/- 167801.45, N = 3)
  8 P + HT: 31639934 (SE +/- 212526.58, N = 15)
  8 P + 8 E: 38408790 (SE +/- 323337.19, N = 3)
  8 P + HT + 8 E: 48294830 (SE +/- 367596.14, N = 15)

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the processor across a wide variety of cryptocurrencies. The benchmark reports the hash speed for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.18, Algorithm: x25x (kH/s, More Is Better):
  1 P + HT + 8 E: 268.01 (SE +/- 0.61, N = 3)
  8 P + HT: 445.77 (SE +/- 4.47, N = 3)
  8 P + 8 E: 595.04 (SE +/- 0.13, N = 3)
  8 P + HT + 8 E: 669.24 (SE +/- 1.16, N = 3)

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20210720, Target: CPU - Model: efficientnet-b0 (ms, Fewer Is Better):
  1 P + HT + 8 E: 7.11 (SE +/- 0.01, N = 3)
  8 P + HT: 2.86 (SE +/- 0.04, N = 3)
  8 P + 8 E: 4.72 (SE +/- 0.01, N = 12)
  8 P + HT + 8 E: 4.35 (SE +/- 0.00, N = 8)

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.
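
A comparable standalone run of the reference encoder might look like the following sketch; the option names are an assumption based on the SVT-HEVC application's documented parameters and the input file name is a placeholder:

    # Encode a raw 1080p YUV source at encoder mode (tuning) 7
    SvtHevcEncApp -i Bosphorus_1920x1080.yuv -w 1920 -h 1080 -encMode 7 -b output.265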

SVT-HEVC 1.5.0, Tuning: 7 - Input: Bosphorus 1080p (Frames Per Second, More Is Better):
  1 P + HT + 8 E: 92.20 (SE +/- 0.17, N = 6)
  8 P + HT: 180.54 (SE +/- 0.72, N = 9)
  8 P + 8 E: 190.82 (SE +/- 0.50, N = 9)
  8 P + HT + 8 E: 226.98 (SE +/- 1.44, N = 9)

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the processor across a wide variety of cryptocurrencies. The benchmark reports the hash speed for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.18, Algorithm: Skeincoin (kH/s, More Is Better):
  1 P + HT + 8 E: 46900 (SE +/- 32.15, N = 3)
  8 P + HT: 75593 (SE +/- 83.53, N = 3)
  8 P + 8 E: 99650 (SE +/- 43.59, N = 3)
  8 P + HT + 8 E: 115290 (SE +/- 745.01, N = 3)

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.
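
A comparable standalone run of the reference encoder is sketched below; the option names reflect recent SvtAv1EncApp releases and the input file name is a placeholder:

    # Preset 8 encode of a 1080p Y4M source to an IVF bitstream
    SvtAv1EncApp --preset 8 -i Bosphorus_1920x1080.y4m -b output.ivf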

SVT-AV1 0.8.7, Encoder Mode: Preset 8 - Input: Bosphorus 1080p (Frames Per Second, More Is Better):
  1 P + HT + 8 E: 38.72 (SE +/- 0.10, N = 4)
  8 P + HT: 94.99 (SE +/- 0.27, N = 6)
  8 P + 8 E: 81.58 (SE +/- 0.25, N = 6)
  8 P + HT + 8 E: 89.22 (SE +/- 0.37, N = 6)

PlaidML

This test profile uses the PlaidML deep learning framework developed by Intel to run various benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML, FP16: No - Mode: Inference - Network: VGG16 - Device: CPU (FPS, More Is Better):
  1 P + HT + 8 E: 10.30 (SE +/- 0.03, N = 3)
  8 P + HT: 24.08 (SE +/- 0.06, N = 3)
  8 P + 8 E: 23.98 (SE +/- 0.21, N = 8)
  8 P + HT + 8 E: 24.90 (SE +/- 0.25, N = 6)

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the processor across a wide variety of cryptocurrencies. The benchmark reports the hash speed for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.18, Algorithm: Garlicoin (kH/s, More Is Better):
  1 P + HT + 8 E: 1215.28 (SE +/- 3.48, N = 3)
  8 P + HT: 2514.88 (SE +/- 16.70, N = 15)
  8 P + 8 E: 2473.24 (SE +/- 3.35, N = 3)
  8 P + HT + 8 E: 2902.28 (SE +/- 8.88, N = 3)

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

Coremark 1.0, CoreMark Size 666 - Iterations Per Second (Iterations/Sec, More Is Better):
  1 P + HT + 8 E: 304050.68 (SE +/- 679.52, N = 3)
  8 P + HT: 469735.30 (SE +/- 5428.02, N = 3)
  8 P + 8 E: 511767.64 (SE +/- 5111.49, N = 3)
  8 P + HT + 8 E: 724700.47 (SE +/- 2264.99, N = 3)

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8.7, Encoder Mode: Preset 4 - Input: Bosphorus 1080p (Frames Per Second, More Is Better):
  1 P + HT + 8 E: 3.116 (SE +/- 0.010, N = 3)
  8 P + HT: 7.372 (SE +/- 0.028, N = 3)
  8 P + 8 E: 5.793 (SE +/- 0.043, N = 3)
  8 P + HT + 8 E: 6.024 (SE +/- 0.010, N = 3)

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3, Tuning: VMAF Optimized - Input: Bosphorus 1080p (Frames Per Second, More Is Better):
  1 P + HT + 8 E: 158.67 (SE +/- 0.29, N = 8)
  8 P + HT: 276.24 (SE +/- 0.59, N = 10)
  8 P + 8 E: 241.47 (SE +/- 0.64, N = 10)
  8 P + HT + 8 E: 369.93 (SE +/- 0.68, N = 11)

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0, Tuning: 10 - Input: Bosphorus 1080p (Frames Per Second, More Is Better):
  1 P + HT + 8 E: 201.15 (SE +/- 0.20, N = 9)
  8 P + HT: 381.09 (SE +/- 0.35, N = 11)
  8 P + 8 E: 390.73 (SE +/- 0.61, N = 11)
  8 P + HT + 8 E: 466.33 (SE +/- 0.39, N = 12)

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray-tracing and part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

Intel Open Image Denoise 1.4.0, Run: RTLightmap.hdr.4096x4096 (Images / Sec, More Is Better):
  1 P + HT + 8 E: 0.10 (SE +/- 0.00, N = 3)
  8 P + HT: 0.23 (SE +/- 0.00, N = 3)
  8 P + 8 E: 0.17 (SE +/- 0.00, N = 3)
  8 P + HT + 8 E: 0.17 (SE +/- 0.00, N = 3)

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3, Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p (Frames Per Second, More Is Better):
  1 P + HT + 8 E: 160.68 (SE +/- 0.37, N = 8)
  8 P + HT: 281.90 (SE +/- 0.67, N = 10)
  8 P + 8 E: 256.22 (SE +/- 0.72, N = 10)
  8 P + HT + 8 E: 369.28 (SE +/- 5.68, N = 15)

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the processor across a wide variety of cryptocurrencies. The benchmark reports the hash speed for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.18, Algorithm: Ringcoin (kH/s, More Is Better):
  1 P + HT + 8 E: 1793.22 (SE +/- 4.00, N = 3)
  8 P + HT: 2564.02 (SE +/- 28.61, N = 3)
  8 P + 8 E: 3599.13 (SE +/- 1.60, N = 3)
  8 P + HT + 8 E: 4028.40 (SE +/- 53.56, N = 3)

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
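
The equivalent manual measurement is simply timing zstd on the same disk image; a minimal sketch using standard zstd options (level 19 with long mode, matching this profile's configuration):

    # Level 19 with long-distance matching across all cores, keeping the input file
    time zstd -19 --long -T0 -k FreeBSD-12.2-RELEASE-amd64-memstick.img -o image.zst
    # Timed decompression, discarding the output
    time zstd -d -c image.zst > /dev/null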

Zstd Compression 1.5.0, Compression Level: 19, Long Mode - Compression Speed (MB/s, More Is Better):
  1 P + HT + 8 E: 19.7 (SE +/- 0.00, N = 3)
  8 P + HT: 43.2 (SE +/- 0.25, N = 3)
  8 P + 8 E: 30.5 (SE +/- 0.36, N = 3)
  8 P + HT + 8 E: 31.3 (SE +/- 0.03, N = 3)

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested. Learn more via the OpenBenchmarking.org test page.
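
Reproducing this by hand is straightforward; a minimal sketch from inside a kernel source tree:

    # Generate the default configuration for the host architecture, then time a parallel build
    make defconfig
    time make -j$(nproc)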

Timed Linux Kernel Compilation 5.14, Time To Compile (Seconds, Fewer Is Better):
  1 P + HT + 8 E: 104.78 (SE +/- 0.31, N = 3)
  8 P + HT: 62.85 (SE +/- 0.31, N = 3)
  8 P + 8 E: 54.84 (SE +/- 0.71, N = 3)
  8 P + HT + 8 E: 48.08 (SE +/- 0.40, N = 3)

ET: Legacy

ETLegacy is an open-source engine evolution of Wolfenstein: Enemy Territory, a World War II-era first-person shooter that was released for free by Splash Damage using the id Tech 3 engine. Learn more via the OpenBenchmarking.org test page.

ET: Legacy 2.78, Resolution: 3840 x 2160 (Frames Per Second, More Is Better):
  1 P + HT + 8 E: 349.9 (SE +/- 0.90, N = 3)
  8 P + HT: 762.0 (SE +/- 1.04, N = 4)
  8 P + 8 E: 754.8 (SE +/- 2.41, N = 4)
  8 P + HT + 8 E: 746.2 (SE +/- 2.49, N = 4)

ET: Legacy 2.78, Resolution: 1920 x 1080 (Frames Per Second, More Is Better):
  1 P + HT + 8 E: 353.5 (SE +/- 1.34, N = 3)
  8 P + HT: 763.1 (SE +/- 0.74, N = 4)
  8 P + 8 E: 751.6 (SE +/- 2.40, N = 4)
  8 P + HT + 8 E: 743.9 (SE +/- 7.99, N = 4)

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the processor across a wide variety of cryptocurrencies. The benchmark reports the hash speed for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.18Algorithm: Magi1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E120240360480600SE +/- 0.30, N = 3SE +/- 2.98, N = 4SE +/- 0.36, N = 3SE +/- 0.26, N = 3315.86258.12456.66557.051. (CXX) g++ options: -O3 -march=native -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.18Algorithm: Magi1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E100200300400500Min: 315.52 / Avg: 315.86 / Max: 316.46Min: 249.17 / Avg: 258.12 / Max: 261.13Min: 455.97 / Avg: 456.66 / Max: 457.2Min: 556.54 / Avg: 557.05 / Max: 557.331. (CXX) g++ options: -O3 -march=native -lcurl -lz -lpthread -lssl -lcrypto -lgmp

7-Zip Compression

This is a test of 7-Zip using p7zip with its integrated benchmark feature or upstream 7-Zip for the Windows x64 build. Learn more via the OpenBenchmarking.org test page.
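
The integrated benchmark driven by this profile can be launched directly from a p7zip install with:

    7z b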

OpenBenchmarking.orgMIPS, More Is Better7-Zip Compression 16.02Compress Speed Test1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E20K40K60K80K100KSE +/- 188.76, N = 3SE +/- 428.31, N = 3SE +/- 178.68, N = 3SE +/- 682.68, N = 3456077825573758970901. (CXX) g++ options: -pipe -lpthread
OpenBenchmarking.orgMIPS, More Is Better7-Zip Compression 16.02Compress Speed Test1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E20K40K60K80K100KMin: 45236 / Avg: 45606.67 / Max: 45854Min: 77407 / Avg: 78255 / Max: 78784Min: 73401 / Avg: 73758.33 / Max: 73940Min: 96052 / Avg: 97089.67 / Max: 983771. (CXX) g++ options: -pipe -lpthread

Xmrig

Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.
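
As a hedged approximation, recent xmrig 6.x releases expose a comparable RandomX measurement through their built-in benchmark mode, e.g.:

    xmrig --bench=1M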

OpenBenchmarking.orgH/s, More Is BetterXmrig 6.12.1Variant: Monero - Hash Count: 1M1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E15003000450060007500SE +/- 19.29, N = 3SE +/- 17.17, N = 3SE +/- 19.28, N = 3SE +/- 4.90, N = 33326.17015.36676.66298.01. (CXX) g++ options: -O3 -march=native -fexceptions -fno-rtti -maes -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
OpenBenchmarking.orgH/s, More Is BetterXmrig 6.12.1Variant: Monero - Hash Count: 1M1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E12002400360048006000Min: 3296.6 / Avg: 3326.13 / Max: 3362.4Min: 6983.8 / Avg: 7015.3 / Max: 7042.9Min: 6649.5 / Avg: 6676.6 / Max: 6713.9Min: 6290.9 / Avg: 6298 / Max: 6307.41. (CXX) g++ options: -O3 -march=native -fexceptions -fno-rtti -maes -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
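
NCNN ships a small benchmark utility, benchncnn, that loops over the bundled model files; a hedged example run is shown below (the positional arguments are loop count, thread count, power-save mode, and GPU device, with -1 meaning CPU only):

    ./benchncnn 8 24 0 -1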

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: mnasnet1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E0.87981.75962.63943.51924.399SE +/- 0.03, N = 3SE +/- 0.02, N = 3SE +/- 0.00, N = 12SE +/- 0.00, N = 83.911.862.962.49MIN: 3.82 / MAX: 4.83MIN: 1.81 / MAX: 2.06MIN: 2.93 / MAX: 3.31MIN: 2.45 / MAX: 3.011. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: mnasnet1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E246810Min: 3.85 / Avg: 3.91 / Max: 3.95Min: 1.84 / Avg: 1.86 / Max: 1.89Min: 2.95 / Avg: 2.96 / Max: 2.99Min: 2.48 / Avg: 2.49 / Max: 2.521. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.18Algorithm: Quad SHA-256, Pyrite1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E30K60K90K120K150KSE +/- 193.42, N = 3SE +/- 105.25, N = 3SE +/- 140.00, N = 3SE +/- 20.82, N = 363303792071038701321601. (CXX) g++ options: -O3 -march=native -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.18Algorithm: Quad SHA-256, Pyrite1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E20K40K60K80K100KMin: 62920 / Avg: 63303.33 / Max: 63540Min: 79010 / Avg: 79206.67 / Max: 79370Min: 103590 / Avg: 103870 / Max: 104010Min: 132130 / Avg: 132160 / Max: 1322001. (CXX) g++ options: -O3 -march=native -lcurl -lz -lpthread -lssl -lcrypto -lgmp

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.
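
A hedged sketch of the CPU MPI run with the water_GMX50 input (the .tpr file name and rank/thread split are illustrative, not the exact harness command):

    mpirun -np 8 gmx_mpi mdrun -ntomp 2 -s water_GMX50_bare.tpr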

OpenBenchmarking.orgNs Per Day, More Is BetterGROMACS 2021.2Implementation: MPI CPU - Input: water_GMX50_bare1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E0.40140.80281.20421.60562.007SE +/- 0.000, N = 3SE +/- 0.005, N = 3SE +/- 0.020, N = 3SE +/- 0.012, N = 30.8621.6391.7841.6321. (CXX) g++ options: -O3 -march=native
OpenBenchmarking.orgNs Per Day, More Is BetterGROMACS 2021.2Implementation: MPI CPU - Input: water_GMX50_bare1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E246810Min: 0.86 / Avg: 0.86 / Max: 0.86Min: 1.63 / Avg: 1.64 / Max: 1.65Min: 1.75 / Avg: 1.78 / Max: 1.81Min: 1.61 / Avg: 1.63 / Max: 1.651. (CXX) g++ options: -O3 -march=native

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format; this test encodes a sample YUV video file. Learn more via the OpenBenchmarking.org test page.
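
A hedged standalone equivalent of one of the encodes below (input/output file names are illustrative):

    SvtAv1EncApp --preset 8 -i Bosphorus_3840x2160.y4m -b output.ivf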

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 0.8.7Encoder Mode: Preset 8 - Input: Bosphorus 4K1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E612182430SE +/- 0.05, N = 3SE +/- 0.02, N = 3SE +/- 0.10, N = 3SE +/- 0.22, N = 311.5523.8920.5222.041. (CXX) g++ options: -O3 -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie
OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 0.8.7Encoder Mode: Preset 8 - Input: Bosphorus 4K1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E612182430Min: 11.45 / Avg: 11.55 / Max: 11.63Min: 23.87 / Avg: 23.89 / Max: 23.92Min: 20.39 / Avg: 20.52 / Max: 20.72Min: 21.62 / Avg: 22.04 / Max: 22.381. (CXX) g++ options: -O3 -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: resnet501 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E612182430SE +/- 0.03, N = 3SE +/- 0.06, N = 3SE +/- 0.05, N = 12SE +/- 0.02, N = 826.3012.7920.2614.92MIN: 25.99 / MAX: 33.88MIN: 12.68 / MAX: 14.66MIN: 19.67 / MAX: 28.17MIN: 14.82 / MAX: 22.431. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: resnet501 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E612182430Min: 26.24 / Avg: 26.3 / Max: 26.34Min: 12.72 / Avg: 12.79 / Max: 12.9Min: 19.94 / Avg: 20.26 / Max: 20.63Min: 14.87 / Avg: 14.92 / Max: 15.011. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread

Timed Godot Game Engine Compilation

This test times how long it takes to compile the Godot Game Engine. Godot is a popular, open-source, cross-platform 2D/3D game engine; it is built here using the SCons build system and targeting the X11 platform. Learn more via the OpenBenchmarking.org test page.
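
The SCons build being timed corresponds roughly to the following from a Godot 3.2.3 source checkout:

    scons platform=x11 -j$(nproc)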

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Godot Game Engine Compilation 3.2.3Time To Compile1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E306090120150SE +/- 0.54, N = 3SE +/- 0.18, N = 3SE +/- 0.66, N = 7SE +/- 0.77, N = 3133.9583.0172.3765.55
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Godot Game Engine Compilation 3.2.3Time To Compile1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E306090120150Min: 132.97 / Avg: 133.95 / Max: 134.81Min: 82.65 / Avg: 83.01 / Max: 83.27Min: 70.6 / Avg: 72.37 / Max: 75.21Min: 64.69 / Avg: 65.55 / Max: 67.09

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU-v2-v2 - Model: mobilenet-v21 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E0.9451.892.8353.784.725SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.00, N = 12SE +/- 0.01, N = 84.202.062.902.78MIN: 4.14 / MAX: 6.63MIN: 2.01 / MAX: 2.34MIN: 2.86 / MAX: 8.79MIN: 2.73 / MAX: 3.281. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU-v2-v2 - Model: mobilenet-v21 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E246810Min: 4.18 / Avg: 4.2 / Max: 4.21Min: 2.03 / Avg: 2.06 / Max: 2.1Min: 2.89 / Avg: 2.9 / Max: 2.93Min: 2.77 / Avg: 2.78 / Max: 2.821. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.
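
The same built-in speed benchmark can be run by hand; a multi-process SHA-256 run looks roughly like:

    openssl speed -multi $(nproc) -evp sha256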

OpenBenchmarking.orgbyte/s, More Is BetterOpenSSL 3.0Algorithm: SHA2561 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E5000M10000M15000M20000M25000MSE +/- 8776822.76, N = 3SE +/- 22960241.11, N = 3SE +/- 18842240.97, N = 3SE +/- 86656587.15, N = 3116179540401326790637720762805477236205183001. (CC) gcc options: -pthread -m64 -O3 -march=native -lssl -lcrypto -ldl
OpenBenchmarking.orgbyte/s, More Is BetterOpenSSL 3.0Algorithm: SHA2561 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E4000M8000M12000M16000M20000MMin: 11604784720 / Avg: 11617954040 / Max: 11634589800Min: 13224066240 / Avg: 13267906376.67 / Max: 13301660710Min: 20740738780 / Avg: 20762805476.67 / Max: 20800294180Min: 23455355030 / Avg: 23620518300 / Max: 237485852401. (CC) gcc options: -pthread -m64 -O3 -march=native -lssl -lcrypto -ldl

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.1.2Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E0.52921.05841.58762.11682.646SE +/- 0.00190, N = 4SE +/- 0.00130, N = 4SE +/- 0.00144, N = 4SE +/- 0.00071, N = 42.351921.192611.670731.26699MIN: 2.32MIN: 1.15MIN: 1.64MIN: 1.231. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.1.2Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E246810Min: 2.35 / Avg: 2.35 / Max: 2.36Min: 1.19 / Avg: 1.19 / Max: 1.2Min: 1.67 / Avg: 1.67 / Max: 1.67Min: 1.27 / Avg: 1.27 / Max: 1.271. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU-v3-v3 - Model: mobilenet-v31 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E0.8011.6022.4033.2044.005SE +/- 0.03, N = 3SE +/- 0.03, N = 3SE +/- 0.01, N = 12SE +/- 0.01, N = 83.561.812.712.41MIN: 3.48 / MAX: 4.4MIN: 1.76 / MAX: 2.2MIN: 2.63 / MAX: 3.02MIN: 2.36 / MAX: 4.241. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU-v3-v3 - Model: mobilenet-v31 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E246810Min: 3.51 / Avg: 3.56 / Max: 3.59Min: 1.78 / Avg: 1.81 / Max: 1.86Min: 2.65 / Avg: 2.71 / Max: 2.74Min: 2.39 / Avg: 2.41 / Max: 2.431. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread

Sysbench

This is a benchmark of Sysbench with the built-in CPU and memory sub-tests. Sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. Learn more via the OpenBenchmarking.org test page.
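
The CPU sub-test can be reproduced standalone with something like the following (thread count illustrative):

    sysbench cpu --threads=24 run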

OpenBenchmarking.orgEvents Per Second, More Is BetterSysbench 1.0.20Test: CPU1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E14K28K42K56K70KSE +/- 11.61, N = 3SE +/- 2.76, N = 3SE +/- 15.84, N = 3SE +/- 2.25, N = 332143.0135560.1761343.7063104.691. (CC) gcc options: -O2 -funroll-loops -O3 -march=native -rdynamic -ldl -laio -lm
OpenBenchmarking.orgEvents Per Second, More Is BetterSysbench 1.0.20Test: CPU1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E11K22K33K44K55KMin: 32130.57 / Avg: 32143.01 / Max: 32166.2Min: 35556.32 / Avg: 35560.17 / Max: 35565.52Min: 61325.29 / Avg: 61343.7 / Max: 61375.23Min: 63102.42 / Avg: 63104.69 / Max: 63109.191. (CC) gcc options: -O2 -funroll-loops -O3 -march=native -rdynamic -ldl -laio -lm

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: googlenet1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E3691215SE +/- 0.01, N = 3SE +/- 0.03, N = 3SE +/- 0.09, N = 12SE +/- 0.01, N = 813.206.7610.817.98MIN: 12.88 / MAX: 20.54MIN: 6.67 / MAX: 6.97MIN: 10.28 / MAX: 12MIN: 7.9 / MAX: 11.381. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: googlenet1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E48121620Min: 13.18 / Avg: 13.2 / Max: 13.21Min: 6.72 / Avg: 6.76 / Max: 6.82Min: 10.55 / Avg: 10.81 / Max: 11.73Min: 7.94 / Avg: 7.98 / Max: 8.031. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 3 - Compression Speed1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E11002200330044005500SE +/- 9.01, N = 3SE +/- 27.65, N = 3SE +/- 40.54, N = 5SE +/- 45.22, N = 62525.84930.63717.64432.91. (CC) gcc options: -O3 -march=native -pthread -lz -llzma
OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 3 - Compression Speed1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E9001800270036004500Min: 2509 / Avg: 2525.83 / Max: 2539.8Min: 4883.9 / Avg: 4930.57 / Max: 4979.6Min: 3619.2 / Avg: 3717.6 / Max: 3826Min: 4307.2 / Avg: 4432.93 / Max: 4612.51. (CC) gcc options: -O3 -march=native -pthread -lz -llzma

JPEG XL Decoding libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is suited for JPEG XL decode performance testing to a PNG output file; the pts/jpexl test is for encode performance. The JPEG XL encoding/decoding is done using the libjxl codebase. Learn more via the OpenBenchmarking.org test page.
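
A hedged example of the decode direction exercised here, using libjxl's djxl tool (file names illustrative):

    djxl input.jxl output.png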

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL Decoding libjxl 0.6.1CPU Threads: All1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E100200300400500SE +/- 2.49, N = 3SE +/- 0.06, N = 3SE +/- 0.47, N = 4SE +/- 0.21, N = 3251.74327.14477.30417.81
OpenBenchmarking.orgMP/s, More Is BetterJPEG XL Decoding libjxl 0.6.1CPU Threads: All1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E80160240320400Min: 246.76 / Avg: 251.74 / Max: 254.24Min: 327.01 / Avg: 327.14 / Max: 327.21Min: 476 / Avg: 477.3 / Max: 478.21Min: 417.4 / Avg: 417.81 / Max: 418.11

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 0.8.7Encoder Mode: Preset 4 - Input: Bosphorus 4K1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E0.48780.97561.46341.95122.439SE +/- 0.004, N = 3SE +/- 0.009, N = 3SE +/- 0.005, N = 3SE +/- 0.020, N = 31.1452.1681.8141.9041. (CXX) g++ options: -O3 -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie
OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 0.8.7Encoder Mode: Preset 4 - Input: Bosphorus 4K1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E246810Min: 1.14 / Avg: 1.15 / Max: 1.15Min: 2.15 / Avg: 2.17 / Max: 2.18Min: 1.81 / Avg: 1.81 / Max: 1.82Min: 1.87 / Avg: 1.9 / Max: 1.931. (CXX) g++ options: -O3 -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.
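
A hedged standalone equivalent of the speed-6 lossless encode below (file names illustrative):

    avifenc -s 6 --lossless input.jpg output.avif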

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.9.0Encoder Speed: 6, Lossless1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E1326395265SE +/- 0.20, N = 3SE +/- 0.32, N = 3SE +/- 0.07, N = 3SE +/- 0.42, N = 359.2943.6736.0132.281. (CXX) g++ options: -O3 -fPIC -lm
OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.9.0Encoder Speed: 6, Lossless1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E1224364860Min: 59.06 / Avg: 59.29 / Max: 59.7Min: 43.07 / Avg: 43.67 / Max: 44.18Min: 35.92 / Avg: 36.01 / Max: 36.14Min: 31.67 / Avg: 32.28 / Max: 33.081. (CXX) g++ options: -O3 -fPIC -lm

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: squeezenet_ssd1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E510152025SE +/- 0.04, N = 3SE +/- 0.00, N = 3SE +/- 0.22, N = 12SE +/- 0.10, N = 820.8011.4218.0112.19MIN: 20.49 / MAX: 21.4MIN: 11.36 / MAX: 11.82MIN: 17.46 / MAX: 24.95MIN: 11.99 / MAX: 118.191. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: squeezenet_ssd1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E510152025Min: 20.71 / Avg: 20.8 / Max: 20.84Min: 11.42 / Avg: 11.42 / Max: 11.43Min: 17.54 / Avg: 18.01 / Max: 20.28Min: 12.05 / Avg: 12.19 / Max: 12.921. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test covering both compression and decompression. Learn more via the OpenBenchmarking.org test page.
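
A hedged example of a thorough-preset compression with astcenc (file names and block size illustrative):

    astcenc -cl input.png output.astc 6x6 -thorough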

OpenBenchmarking.orgSeconds, Fewer Is BetterASTC Encoder 3.2Preset: Thorough1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E3691215SE +/- 0.0075, N = 4SE +/- 0.0248, N = 6SE +/- 0.0249, N = 6SE +/- 0.0126, N = 612.21627.67827.65116.82961. (CXX) g++ options: -O3 -march=native -flto -pthread
OpenBenchmarking.orgSeconds, Fewer Is BetterASTC Encoder 3.2Preset: Thorough1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E48121620Min: 12.2 / Avg: 12.22 / Max: 12.24Min: 7.58 / Avg: 7.68 / Max: 7.76Min: 7.58 / Avg: 7.65 / Max: 7.73Min: 6.79 / Avg: 6.83 / Max: 6.871. (CXX) g++ options: -O3 -march=native -flto -pthread

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.18Algorithm: Myriad-Groestl1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E4K8K12K16K20KSE +/- 141.36, N = 4SE +/- 170.40, N = 15SE +/- 32.81, N = 3SE +/- 21.86, N = 312540.0017016.009544.7813917.001. (CXX) g++ options: -O3 -march=native -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.18Algorithm: Myriad-Groestl1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E3K6K9K12K15KMin: 12220 / Avg: 12540 / Max: 12890Min: 16210 / Avg: 17016 / Max: 18480Min: 9481.89 / Avg: 9544.78 / Max: 9592.43Min: 13890 / Avg: 13916.67 / Max: 139601. (CXX) g++ options: -O3 -march=native -lcurl -lz -lpthread -lssl -lcrypto -lgmp

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: resnet181 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E3691215SE +/- 0.01, N = 3SE +/- 0.00, N = 3SE +/- 0.03, N = 12SE +/- 0.00, N = 813.467.6011.578.53MIN: 13.19 / MAX: 15.82MIN: 7.56 / MAX: 7.87MIN: 11.1 / MAX: 19.26MIN: 8.45 / MAX: 15.511. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: resnet181 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E48121620Min: 13.45 / Avg: 13.46 / Max: 13.47Min: 7.59 / Avg: 7.6 / Max: 7.6Min: 11.37 / Avg: 11.57 / Max: 11.76Min: 8.51 / Avg: 8.53 / Max: 8.551. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.
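
lc0 includes a built-in benchmark mode; a hedged example pinned to the CPU BLAS backend is:

    lc0 benchmark --backend=blas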

OpenBenchmarking.orgNodes Per Second, More Is BetterLeelaChessZero 0.28Backend: BLAS1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E2004006008001000SE +/- 9.02, N = 3SE +/- 9.68, N = 9SE +/- 14.24, N = 3SE +/- 4.33, N = 36471031106110921. (CXX) g++ options: -flto -O3 -march=native -pthread
OpenBenchmarking.orgNodes Per Second, More Is BetterLeelaChessZero 0.28Backend: BLAS1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E2004006008001000Min: 630 / Avg: 646.67 / Max: 661Min: 993 / Avg: 1030.67 / Max: 1075Min: 1033 / Avg: 1061.33 / Max: 1078Min: 1083 / Avg: 1091.67 / Max: 10961. (CXX) g++ options: -flto -O3 -march=native -pthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: alexnet1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E3691215SE +/- 0.10, N = 3SE +/- 0.00, N = 3SE +/- 0.02, N = 12SE +/- 0.01, N = 811.176.629.068.88MIN: 10.93 / MAX: 11.74MIN: 6.59 / MAX: 6.73MIN: 8.98 / MAX: 17.04MIN: 8.85 / MAX: 9.71. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: alexnet1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E3691215Min: 10.96 / Avg: 11.17 / Max: 11.27Min: 6.62 / Avg: 6.62 / Max: 6.62Min: 9.01 / Avg: 9.06 / Max: 9.24Min: 8.87 / Avg: 8.88 / Max: 8.911. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 19 - Compression Speed1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E1122334455SE +/- 0.00, N = 3SE +/- 0.28, N = 3SE +/- 0.19, N = 3SE +/- 0.20, N = 328.346.937.942.21. (CC) gcc options: -O3 -march=native -pthread -lz -llzma
OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 19 - Compression Speed1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E1020304050Min: 28.3 / Avg: 28.3 / Max: 28.3Min: 46.3 / Avg: 46.87 / Max: 47.2Min: 37.5 / Avg: 37.87 / Max: 38.1Min: 41.9 / Avg: 42.23 / Max: 42.61. (CC) gcc options: -O3 -march=native -pthread -lz -llzma

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.1.2Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E246810SE +/- 0.00348, N = 7SE +/- 0.00185, N = 7SE +/- 0.00260, N = 7SE +/- 0.00286, N = 78.105094.905817.594787.48850MIN: 7.93MIN: 4.86MIN: 7.45MIN: 7.391. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.1.2Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E3691215Min: 8.09 / Avg: 8.11 / Max: 8.11Min: 4.9 / Avg: 4.91 / Max: 4.91Min: 7.59 / Avg: 7.59 / Max: 7.61Min: 7.48 / Avg: 7.49 / Max: 7.51. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.1.2Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E246810SE +/- 0.00121, N = 7SE +/- 0.00153, N = 7SE +/- 0.00282, N = 7SE +/- 0.00204, N = 78.181635.060687.682647.13769MIN: 8.12MIN: 5.02MIN: 7.6MIN: 7.051. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.1.2Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E3691215Min: 8.18 / Avg: 8.18 / Max: 8.18Min: 5.06 / Avg: 5.06 / Max: 5.07Min: 7.68 / Avg: 7.68 / Max: 7.7Min: 7.13 / Avg: 7.14 / Max: 7.141. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: vgg161 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E1020304050SE +/- 0.05, N = 3SE +/- 0.02, N = 3SE +/- 0.37, N = 12SE +/- 0.03, N = 842.6426.7232.4728.19MIN: 42.14 / MAX: 48.63MIN: 26.63 / MAX: 27.5MIN: 30.85 / MAX: 161.13MIN: 27.95 / MAX: 33.831. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: vgg161 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E918273645Min: 42.54 / Avg: 42.64 / Max: 42.71Min: 26.69 / Avg: 26.72 / Max: 26.75Min: 31.01 / Avg: 32.47 / Max: 35.32Min: 28.06 / Avg: 28.19 / Max: 28.271. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.
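
A hedged example of a single encode with libjxl's cjxl tool at one of the effort levels tested (file names illustrative):

    cjxl -e 7 input.png output.jxl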

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL libjxl 0.6.1Input: PNG - Encode Speed: 81 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E0.24980.49960.74940.99921.249SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.01, N = 9SE +/- 0.00, N = 30.741.111.110.701. (CXX) g++ options: -O3 -march=native -funwind-tables -O2 -pthread -fPIE -pie
OpenBenchmarking.orgMP/s, More Is BetterJPEG XL libjxl 0.6.1Input: PNG - Encode Speed: 81 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E246810Min: 0.74 / Avg: 0.74 / Max: 0.74Min: 1.1 / Avg: 1.11 / Max: 1.11Min: 1.03 / Avg: 1.11 / Max: 1.12Min: 0.7 / Avg: 0.7 / Max: 0.71. (CXX) g++ options: -O3 -march=native -funwind-tables -O2 -pthread -fPIE -pie

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.
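
A hedged approximation of a realtime libaom encode at one of the speed levels tested below (input name illustrative):

    aomenc --rt --cpu-used=8 --threads=24 -o output.ivf Bosphorus_3840x2160.y4m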

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.2Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E1428425670SE +/- 0.35, N = 4SE +/- 0.06, N = 5SE +/- 0.36, N = 5SE +/- 0.53, N = 1540.2562.0763.4559.901. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm
OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.2Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E1224364860Min: 39.2 / Avg: 40.25 / Max: 40.69Min: 61.98 / Avg: 62.07 / Max: 62.3Min: 62.15 / Avg: 63.45 / Max: 64.07Min: 54.74 / Avg: 59.9 / Max: 62.721. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNodes Per Second, More Is BetterLeelaChessZero 0.28Backend: Eigen1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E400800120016002000SE +/- 10.58, N = 3SE +/- 6.23, N = 3SE +/- 20.61, N = 4SE +/- 17.57, N = 312081883168318041. (CXX) g++ options: -flto -O3 -march=native -pthread
OpenBenchmarking.orgNodes Per Second, More Is BetterLeelaChessZero 0.28Backend: Eigen1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E30060090012001500Min: 1188 / Avg: 1208 / Max: 1224Min: 1875 / Avg: 1882.67 / Max: 1895Min: 1627 / Avg: 1683 / Max: 1725Min: 1770 / Avg: 1804.33 / Max: 18281. (CXX) g++ options: -flto -O3 -march=native -pthread

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.9.0Encoder Speed: 61 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E3691215SE +/- 0.196, N = 15SE +/- 0.019, N = 5SE +/- 0.032, N = 5SE +/- 0.055, N = 513.2019.0899.8068.5041. (CXX) g++ options: -O3 -fPIC -lm
OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.9.0Encoder Speed: 61 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E48121620Min: 12.95 / Avg: 13.2 / Max: 15.95Min: 9.05 / Avg: 9.09 / Max: 9.15Min: 9.7 / Avg: 9.81 / Max: 9.89Min: 8.37 / Avg: 8.5 / Max: 8.631. (CXX) g++ options: -O3 -fPIC -lm

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.2Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E20406080100SE +/- 0.79, N = 12SE +/- 0.06, N = 6SE +/- 1.17, N = 15SE +/- 0.81, N = 1560.9691.5391.7989.241. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm
OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.2Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E20406080100Min: 55.85 / Avg: 60.96 / Max: 64.32Min: 91.26 / Avg: 91.53 / Max: 91.65Min: 78.88 / Avg: 91.79 / Max: 95.98Min: 79.07 / Avg: 89.24 / Max: 92.011. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.2Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E50100150200250SE +/- 1.53, N = 15SE +/- 0.44, N = 10SE +/- 1.58, N = 9SE +/- 2.81, N = 15169.59243.71191.47205.831. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm
OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.2Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E4080120160200Min: 159.67 / Avg: 169.59 / Max: 180.38Min: 241.86 / Avg: 243.71 / Max: 245.7Min: 184.78 / Avg: 191.47 / Max: 199.43Min: 188.93 / Avg: 205.83 / Max: 220.161. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm

RawTherapee

RawTherapee is a cross-platform, open-source multi-threaded RAW image processing program. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterRawTherapeeTotal Benchmark Time1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E1122334455SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.77, N = 12SE +/- 0.02, N = 350.8638.3645.9836.081. RawTherapee, version 5.8, command line.
OpenBenchmarking.orgSeconds, Fewer Is BetterRawTherapeeTotal Benchmark Time1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E1020304050Min: 50.83 / Avg: 50.86 / Max: 50.89Min: 38.33 / Avg: 38.36 / Max: 38.38Min: 40.94 / Avg: 45.98 / Max: 48.31Min: 36.04 / Avg: 36.08 / Max: 36.111. RawTherapee, version 5.8, command line.

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL libjxl 0.6.1Input: PNG - Encode Speed: 71 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E3691215SE +/- 0.07, N = 12SE +/- 0.07, N = 3SE +/- 0.04, N = 3SE +/- 0.01, N = 38.0310.4311.188.821. (CXX) g++ options: -O3 -march=native -funwind-tables -O2 -pthread -fPIE -pie
OpenBenchmarking.orgMP/s, More Is BetterJPEG XL libjxl 0.6.1Input: PNG - Encode Speed: 71 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E3691215Min: 7.76 / Avg: 8.03 / Max: 8.55Min: 10.32 / Avg: 10.43 / Max: 10.55Min: 11.13 / Avg: 11.18 / Max: 11.25Min: 8.81 / Avg: 8.82 / Max: 8.831. (CXX) g++ options: -O3 -march=native -funwind-tables -O2 -pthread -fPIE -pie

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: shufflenet-v21 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E0.6841.3682.0522.7363.42SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.00, N = 12SE +/- 0.00, N = 83.042.192.612.52MIN: 3.01 / MAX: 3.78MIN: 2.15 / MAX: 2.34MIN: 2.57 / MAX: 4.83MIN: 2.48 / MAX: 2.721. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: shufflenet-v21 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E246810Min: 3.04 / Avg: 3.04 / Max: 3.05Min: 2.18 / Avg: 2.19 / Max: 2.2Min: 2.59 / Avg: 2.61 / Max: 2.64Min: 2.51 / Avg: 2.52 / Max: 2.531. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread

Darmstadt Automotive Parallel Heterogeneous Suite

DAPHNE is the Darmstadt Automotive Parallel HeterogeNEous Benchmark Suite, providing OpenCL / CUDA / OpenMP test cases of automotive workloads for evaluating programming models in the context of autonomous driving capabilities. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTest Cases Per Minute, More Is BetterDarmstadt Automotive Parallel Heterogeneous SuiteBackend: OpenMP - Kernel: NDT Mapping1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E30060090012001500SE +/- 0.40, N = 3SE +/- 0.73, N = 4SE +/- 1.02, N = 4SE +/- 0.87, N = 41058.351341.131271.141280.031. (CXX) g++ options: -O3 -std=c++11 -fopenmp
OpenBenchmarking.orgTest Cases Per Minute, More Is BetterDarmstadt Automotive Parallel Heterogeneous SuiteBackend: OpenMP - Kernel: NDT Mapping1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E2004006008001000Min: 1057.63 / Avg: 1058.35 / Max: 1059.02Min: 1339.52 / Avg: 1341.13 / Max: 1342.89Min: 1270.08 / Avg: 1271.14 / Max: 1274.2Min: 1277.52 / Avg: 1280.03 / Max: 1281.241. (CXX) g++ options: -O3 -std=c++11 -fopenmp

DeepSpeech

Mozilla DeepSpeech is a speech-to-text engine powered by TensorFlow for machine learning and derived from Baidu's Deep Speech research paper. This test profile times the speech-to-text process for a roughly three minute audio recording. Learn more via the OpenBenchmarking.org test page.
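
A hedged example of the speech-to-text invocation with the DeepSpeech 0.6 command-line client (model and audio file names illustrative):

    deepspeech --model deepspeech-0.6.0-models.pbmm --audio recording.wav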

OpenBenchmarking.orgSeconds, Fewer Is BetterDeepSpeech 0.6Acceleration: CPU1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E1326395265SE +/- 0.48, N = 3SE +/- 0.15, N = 3SE +/- 0.65, N = 4SE +/- 0.50, N = 1559.6348.7153.3953.87
OpenBenchmarking.orgSeconds, Fewer Is BetterDeepSpeech 0.6Acceleration: CPU1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E1224364860Min: 59.12 / Avg: 59.63 / Max: 60.58Min: 48.55 / Avg: 48.71 / Max: 49Min: 51.96 / Avg: 53.39 / Max: 55.06Min: 51.52 / Avg: 53.87 / Max: 57.42

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.9.0Encoder Speed: 101 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E0.63611.27221.90832.54443.1805SE +/- 0.003, N = 10SE +/- 0.004, N = 10SE +/- 0.005, N = 10SE +/- 0.026, N = 152.8272.5192.4642.5181. (CXX) g++ options: -O3 -fPIC -lm
OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.9.0Encoder Speed: 101 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E246810Min: 2.81 / Avg: 2.83 / Max: 2.84Min: 2.5 / Avg: 2.52 / Max: 2.54Min: 2.45 / Avg: 2.46 / Max: 2.5Min: 2.46 / Avg: 2.52 / Max: 2.771. (CXX) g++ options: -O3 -fPIC -lm

Darmstadt Automotive Parallel Heterogeneous Suite

DAPHNE is the Darmstadt Automotive Parallel HeterogeNEous Benchmark Suite, providing OpenCL / CUDA / OpenMP test cases of automotive workloads for evaluating programming models in the context of autonomous driving capabilities. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTest Cases Per Minute, More Is BetterDarmstadt Automotive Parallel Heterogeneous SuiteBackend: OpenMP - Kernel: Euclidean Cluster1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E400800120016002000SE +/- 0.53, N = 4SE +/- 0.79, N = 4SE +/- 1.00, N = 4SE +/- 17.52, N = 151471.081640.031659.111617.131. (CXX) g++ options: -O3 -std=c++11 -fopenmp
OpenBenchmarking.orgTest Cases Per Minute, More Is BetterDarmstadt Automotive Parallel Heterogeneous SuiteBackend: OpenMP - Kernel: Euclidean Cluster1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E30060090012001500Min: 1469.9 / Avg: 1471.08 / Max: 1472.19Min: 1638.78 / Avg: 1640.03 / Max: 1642.33Min: 1657.22 / Avg: 1659.11 / Max: 1661.66Min: 1418.5 / Avg: 1617.13 / Max: 1650.771. (CXX) g++ options: -O3 -std=c++11 -fopenmp

Xonotic

Xonotic 0.8.2 - CPU Power Consumption Monitor (Watts, Fewer Is Better)
  1 P + HT + 8 E: Min 5.0 / Avg 21.8 / Max 34.9
  8 P + HT: Min 4.7 / Avg 37.4 / Max 48.1
  8 P + 8 E: Min 4.2 / Avg 28.0 / Max 37.9

Xonotic 0.8.2 - CPU Power Consumption Monitor (Watts, Fewer Is Better)
  1 P + HT + 8 E: Min 4.8 / Avg 20.0 / Max 25.5
  8 P + HT: Min 4.9 / Avg 36.2 / Max 48.4
  8 P + 8 E: Min 4.9 / Avg 28.9 / Max 36.2

Xonotic 0.8.2 - CPU Power Consumption Monitor (Watts, Fewer Is Better)
  1 P + HT + 8 E: Min 5.0 / Avg 21.8 / Max 32.2
  8 P + HT: Min 4.9 / Avg 35.0 / Max 46.0
  8 P + 8 E: Min 4.9 / Avg 29.1 / Max 36.2

CPU Power Consumption Monitor

CPU Power Consumption Monitor - Phoronix Test Suite System Monitoring (Watts)
  1 P + HT + 8 E: Min 2.41 / Avg 49.16 / Max 102.04
  8 P + HT: Min 2.43 / Avg 120.83 / Max 255.58
  8 P + 8 E: Min 2.47 / Avg 82.75 / Max 225.64
  8 P + HT + 8 E: Min 2.34 / Avg 82.91 / Max 258.28

Selenium

Selenium - CPU Power Consumption Monitor (Watts, Fewer Is Better)
  1 P + HT + 8 E: Min 4.9 / Avg 13.4 / Max 31.8
  8 P + HT: Min 4.9 / Avg 19.5 / Max 46.1
  8 P + 8 E: Min 4.6 / Avg 14.8 / Max 36.9
  8 P + HT + 8 E: Min 4.2 / Avg 15.5 / Max 39.0

OpenBenchmarking.orgms, Fewer Is BetterSeleniumBenchmark: Kraken - Browser: Google Chrome1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E150300450600750SE +/- 25.19, N = 14SE +/- 2.61, N = 3SE +/- 38.09, N = 15SE +/- 40.43, N = 15718.3442.8555.5553.01. chrome 95.0.4638.54
OpenBenchmarking.orgms, Fewer Is BetterSeleniumBenchmark: Kraken - Browser: Google Chrome1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E130260390520650Min: 459.3 / Avg: 718.29 / Max: 776.4Min: 438.6 / Avg: 442.83 / Max: 447.6Min: 441.7 / Avg: 555.52 / Max: 779.8Min: 442.8 / Avg: 553.03 / Max: 773.91. chrome 95.0.4638.54

Selenium - CPU Power Consumption Monitor (Watts, Fewer Is Better)
  1 P + HT + 8 E: Min 4.9 / Avg 8.0 / Max 22.6
  8 P + HT: Min 4.8 / Avg 8.7 / Max 26.4
  8 P + 8 E: Min 4.6 / Avg 8.3 / Max 22.7
  8 P + HT + 8 E: Min 4.6 / Avg 8.4 / Max 25.4

OpenBenchmarking.orgms, Fewer Is BetterSeleniumBenchmark: WASM imageConvolute - Browser: Google Chrome1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E714212835SE +/- 1.44, N = 15SE +/- 0.08, N = 6SE +/- 1.41, N = 15SE +/- 1.72, N = 1528.8318.2120.7724.221. chrome 95.0.4638.54
OpenBenchmarking.orgms, Fewer Is BetterSeleniumBenchmark: WASM imageConvolute - Browser: Google Chrome1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E612182430Min: 18.07 / Avg: 28.83 / Max: 33.77Min: 18.01 / Avg: 18.21 / Max: 18.48Min: 17.86 / Avg: 20.77 / Max: 31.47Min: 17.73 / Avg: 24.22 / Max: 31.611. chrome 95.0.4638.54

Selenium - CPU Power Consumption Monitor (Watts, Fewer Is Better)
  1 P + HT + 8 E: Min 2.6 / Avg 12.4 / Max 20.0
  8 P + HT: Min 4.8 / Avg 15.9 / Max 37.1
  8 P + 8 E: Min 4.6 / Avg 13.1 / Max 37.3
  8 P + HT + 8 E: Min 4.6 / Avg 14.6 / Max 36.9

OpenBenchmarking.orgms, Fewer Is BetterSeleniumBenchmark: WASM collisionDetection - Browser: Google Chrome1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E90180270360450SE +/- 0.03, N = 4SE +/- 0.27, N = 5SE +/- 25.65, N = 15SE +/- 23.23, N = 15413.42210.76345.53270.661. chrome 95.0.4638.54
OpenBenchmarking.orgms, Fewer Is BetterSeleniumBenchmark: WASM collisionDetection - Browser: Google Chrome1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E70140210280350Min: 413.37 / Avg: 413.42 / Max: 413.51Min: 209.9 / Avg: 210.76 / Max: 211.55Min: 206.74 / Avg: 345.53 / Max: 413.48Min: 207.52 / Avg: 270.66 / Max: 413.551. chrome 95.0.4638.54

Selenium - CPU Power Consumption Monitor (Watts, Fewer Is Better)
  1 P + HT + 8 E: Min 4.9 / Avg 17.2 / Max 71.3
  8 P + HT: Min 2.4 / Avg 24.6 / Max 108.8
  8 P + 8 E: Min 4.1 / Avg 19.3 / Max 114.0
  8 P + HT + 8 E: Min 2.4 / Avg 22.1 / Max 126.6

OpenBenchmarking.orgScore, Fewer Is BetterSeleniumBenchmark: PSPDFKit WASM - Browser: Google Chrome1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E6001200180024003000SE +/- 28.91, N = 15SE +/- 2.67, N = 3SE +/- 58.99, N = 15SE +/- 53.74, N = 1530292528287928451. chrome 95.0.4638.54
OpenBenchmarking.orgScore, Fewer Is BetterSeleniumBenchmark: PSPDFKit WASM - Browser: Google Chrome1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E5001000150020002500Min: 2809 / Avg: 3028.87 / Max: 3124Min: 2523 / Avg: 2528.33 / Max: 2531Min: 2543 / Avg: 2878.67 / Max: 3158Min: 2585 / Avg: 2845.2 / Max: 31141. chrome 95.0.4638.54

Selenium - CPU Power Consumption Monitor (Watts, Fewer Is Better)
  1 P + HT + 8 E: Min 4.77 / Avg 18.69 / Max 33.85
  8 P + HT: Min 4.78 / Avg 30.05 / Max 49.61
  8 P + 8 E: Min 4.41 / Avg 19.47 / Max 37.93
  8 P + HT + 8 E: Min 4.73 / Avg 26.44 / Max 38.67

Selenium - Benchmark: StyleBench - Browser: Google Chrome (Runs / Minute Per Watt, More Is Better): 1 P + HT + 8 E: 2.520; 8 P + HT: 2.160; 8 P + 8 E: 2.642; 8 P + HT + 8 E: 2.462

OpenBenchmarking.orgRuns / Minute, More Is BetterSeleniumBenchmark: StyleBench - Browser: Google Chrome1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E1530456075SE +/- 1.22, N = 15SE +/- 0.07, N = 3SE +/- 2.46, N = 15SE +/- 0.46, N = 347.1064.9051.4665.101. chrome 95.0.4638.54
OpenBenchmarking.orgRuns / Minute, More Is BetterSeleniumBenchmark: StyleBench - Browser: Google Chrome1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E1326395265Min: 40.6 / Avg: 47.1 / Max: 59.3Min: 64.8 / Avg: 64.87 / Max: 65Min: 40.73 / Avg: 51.46 / Max: 66.2Min: 64.2 / Avg: 65.1 / Max: 65.71. chrome 95.0.4638.54

Selenium - CPU Power Consumption Monitor (Watts, Fewer Is Better)
  1 P + HT + 8 E: Min 3.4 / Avg 16.7 / Max 34.8
  8 P + HT: Min 4.7 / Avg 25.6 / Max 47.6
  8 P + 8 E: Min 2.6 / Avg 17.3 / Max 37.7
  8 P + HT + 8 E: Min 2.6 / Avg 21.0 / Max 40.2

Selenium - Benchmark: Speedometer - Browser: Google Chrome (Runs Per Minute Per Watt, More Is Better): 1 P + HT + 8 E: 12.22; 8 P + HT: 11.27; 8 P + 8 E: 11.99; 8 P + HT + 8 E: 12.74

OpenBenchmarking.orgRuns Per Minute, More Is BetterSeleniumBenchmark: Speedometer - Browser: Google Chrome1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E60120180240300SE +/- 6.48, N = 12SE +/- 0.88, N = 3SE +/- 14.34, N = 12SE +/- 10.56, N = 152042892072671. chrome 95.0.4638.54
OpenBenchmarking.orgRuns Per Minute, More Is BetterSeleniumBenchmark: Speedometer - Browser: Google Chrome1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E50100150200250Min: 167 / Avg: 203.75 / Max: 237Min: 287 / Avg: 288.67 / Max: 290Min: 159 / Avg: 207.33 / Max: 290Min: 173 / Avg: 267.07 / Max: 2941. chrome 95.0.4638.54

Selenium - CPU Power Consumption Monitor (Watts, Fewer Is Better)
  1 P + HT + 8 E: Min 4.9 / Avg 16.4 / Max 36.2
  8 P + HT: Min 4.9 / Avg 25.3 / Max 43.9
  8 P + 8 E: Min 4.4 / Avg 17.1 / Max 39.5
  8 P + HT + 8 E: Min 4.4 / Avg 23.9 / Max 41.2

OpenBenchmarking.orgms, Fewer Is BetterSeleniumBenchmark: ARES-6 - Browser: Google Chrome1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E48121620SE +/- 0.39, N = 15SE +/- 0.09, N = 3SE +/- 0.81, N = 12SE +/- 0.10, N = 315.9910.4215.3210.651. chrome 95.0.4638.54
OpenBenchmarking.orgms, Fewer Is BetterSeleniumBenchmark: ARES-6 - Browser: Google Chrome1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E48121620Min: 12.49 / Avg: 15.99 / Max: 17.38Min: 10.33 / Avg: 10.42 / Max: 10.6Min: 10.33 / Avg: 15.32 / Max: 17.4Min: 10.55 / Avg: 10.65 / Max: 10.851. chrome 95.0.4638.54

Darmstadt Automotive Parallel Heterogeneous Suite

Darmstadt Automotive Parallel Heterogeneous Suite - CPU Power Consumption Monitor (Watts, Fewer Is Better)
  1 P + HT + 8 E: Min 4.8 / Avg 34.3 / Max 38.9
  8 P + HT: Min 4.6 / Avg 64.0 / Max 70.4
  8 P + 8 E: Min 4.7 / Avg 60.9 / Max 67.6
  8 P + HT + 8 E: Min 4.5 / Avg 56.3 / Max 70.1

Darmstadt Automotive Parallel Heterogeneous Suite - Backend: OpenMP - Kernel: Points2Image (Test Cases Per Minute Per Watt, More Is Better): 1 P + HT + 8 E: 1096.68; 8 P + HT: 624.87; 8 P + 8 E: 630.40; 8 P + HT + 8 E: 604.21

OpenBenchmarking.orgTest Cases Per Minute, More Is BetterDarmstadt Automotive Parallel Heterogeneous SuiteBackend: OpenMP - Kernel: Points2Image1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E9K18K27K36K45KSE +/- 151.50, N = 3SE +/- 206.03, N = 3SE +/- 75.89, N = 3SE +/- 790.95, N = 1537643.0039962.9538406.4534007.621. (CXX) g++ options: -O3 -std=c++11 -fopenmp
OpenBenchmarking.orgTest Cases Per Minute, More Is BetterDarmstadt Automotive Parallel Heterogeneous SuiteBackend: OpenMP - Kernel: Points2Image1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E7K14K21K28K35KMin: 37437.76 / Avg: 37643 / Max: 37938.67Min: 39699.08 / Avg: 39962.95 / Max: 40368.97Min: 38255.79 / Avg: 38406.45 / Max: 38497.82Min: 27983.51 / Avg: 34007.62 / Max: 37066.551. (CXX) g++ options: -O3 -std=c++11 -fopenmp

NCNN

NCNN 20210720 - CPU Power Consumption Monitor (Watts, Fewer Is Better)
  1 P + HT + 8 E: Min 4.8 / Avg 60.9 / Max 72.9
  8 P + HT: Min 4.6 / Avg 138.9 / Max 210.9
  8 P + 8 E: Min 4.8 / Avg 112.6 / Max 143.2
  8 P + HT + 8 E: Min 4.5 / Avg 125.0 / Max 168.9

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: regnety_400m1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E246810SE +/- 0.03, N = 3SE +/- 0.03, N = 3SE +/- 0.21, N = 12SE +/- 0.02, N = 86.884.656.065.93MIN: 6.81 / MAX: 7.46MIN: 4.57 / MAX: 10.7MIN: 5.75 / MAX: 17.93MIN: 5.78 / MAX: 13.841. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: regnety_400m1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E3691215Min: 6.83 / Avg: 6.88 / Max: 6.92Min: 4.6 / Avg: 4.65 / Max: 4.69Min: 5.8 / Avg: 6.06 / Max: 8.42Min: 5.85 / Avg: 5.93 / Max: 6.071. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: yolov4-tiny1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E510152025SE +/- 0.11, N = 3SE +/- 0.02, N = 3SE +/- 0.65, N = 12SE +/- 0.26, N = 818.6212.1215.7613.42MIN: 18.3 / MAX: 19.22MIN: 12 / MAX: 12.56MIN: 14.36 / MAX: 21.49MIN: 12.99 / MAX: 22.091. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: yolov4-tiny1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E510152025Min: 18.4 / Avg: 18.62 / Max: 18.73Min: 12.1 / Avg: 12.12 / Max: 12.15Min: 14.46 / Avg: 15.76 / Max: 21.17Min: 13.12 / Avg: 13.42 / Max: 15.211. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: blazeface1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E0.3150.630.9451.261.575SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.04, N = 12SE +/- 0.00, N = 81.400.901.101.09MIN: 1.38 / MAX: 1.6MIN: 0.88 / MAX: 1.12MIN: 1.04 / MAX: 1.66MIN: 1.06 / MAX: 1.431. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: blazeface1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E246810Min: 1.4 / Avg: 1.4 / Max: 1.41Min: 0.89 / Avg: 0.9 / Max: 0.93Min: 1.05 / Avg: 1.1 / Max: 1.52Min: 1.08 / Avg: 1.09 / Max: 1.111. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: mobilenet1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E48121620SE +/- 0.02, N = 3SE +/- 0.03, N = 3SE +/- 0.42, N = 12SE +/- 0.08, N = 814.117.4510.678.79MIN: 14.02 / MAX: 14.53MIN: 7.33 / MAX: 7.86MIN: 9.94 / MAX: 15.33MIN: 8.59 / MAX: 18.461. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: mobilenet1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E48121620Min: 14.07 / Avg: 14.11 / Max: 14.13Min: 7.41 / Avg: 7.45 / Max: 7.51Min: 10 / Avg: 10.67 / Max: 15.14Min: 8.66 / Avg: 8.79 / Max: 9.321. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread

OpenCV

OpenCV 4.5.4 - CPU Power Consumption Monitor (Watts, Fewer Is Better)
  1 P + HT + 8 E: Min 4.6 / Avg 38.3 / Max 67.7
  8 P + HT: Min 4.8 / Avg 68.8 / Max 151.0
  8 P + 8 E: Min 4.5 / Avg 58.5 / Max 140.0
  8 P + HT + 8 E: Min 4.4 / Avg 51.6 / Max 151.9

OpenBenchmarking.orgms, Fewer Is BetterOpenCV 4.5.4Test: Object Detection1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E11K22K33K44K55KSE +/- 741.36, N = 12SE +/- 549.85, N = 15SE +/- 2752.84, N = 15SE +/- 3180.69, N = 15320412746052954449331. (CXX) g++ options: -fPIC -O3 -march=native -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -fvisibility=hidden -shared
OpenBenchmarking.orgms, Fewer Is BetterOpenCV 4.5.4Test: Object Detection1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E9K18K27K36K45KMin: 28819 / Avg: 32041.17 / Max: 37034Min: 23471 / Avg: 27460.33 / Max: 30061Min: 34399 / Avg: 52954.4 / Max: 66707Min: 26764 / Avg: 44932.67 / Max: 666231. (CXX) g++ options: -fPIC -O3 -march=native -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -fvisibility=hidden -shared

OpenCV 4.5.4 - CPU Power Consumption Monitor (Watts, Fewer Is Better)
  1 P + HT + 8 E: Min 4.8 / Avg 63.4 / Max 92.8
  8 P + HT: Min 4.8 / Avg 119.0 / Max 241.0
  8 P + 8 E: Min 4.4 / Avg 84.5 / Max 152.7
  8 P + HT + 8 E: Min 4.9 / Avg 103.5 / Max 202.1

OpenBenchmarking.orgms, Fewer Is BetterOpenCV 4.5.4Test: DNN - Deep Neural Network1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E2K4K6K8K10KSE +/- 90.25, N = 8SE +/- 106.47, N = 15SE +/- 62.50, N = 15SE +/- 80.53, N = 15103234795770150481. (CXX) g++ options: -fPIC -O3 -march=native -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -fvisibility=hidden -shared
OpenBenchmarking.orgms, Fewer Is BetterOpenCV 4.5.4Test: DNN - Deep Neural Network1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E2K4K6K8K10KMin: 10140 / Avg: 10322.88 / Max: 10886Min: 4404 / Avg: 4795.2 / Max: 5582Min: 7242 / Avg: 7700.53 / Max: 8235Min: 4718 / Avg: 5047.87 / Max: 59961. (CXX) g++ options: -fPIC -O3 -march=native -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -fvisibility=hidden -shared

ONNX Runtime

ONNX Runtime 1.9.1 - CPU Power Consumption Monitor (Watts, Fewer Is Better)
  1 P + HT + 8 E: Min 4.9 / Avg 44.5 / Max 47.2
  8 P + HT: Min 4.8 / Avg 90.6 / Max 102.8
  8 P + 8 E: Min 4.9 / Avg 134.5 / Max 141.1
  8 P + HT + 8 E: Min 4.6 / Avg 120.0 / Max 136.1

ONNX Runtime 1.9.1 - Model: shufflenet-v2-10 - Device: CPU (Inferences Per Minute Per Watt, More Is Better): 1 P + HT + 8 E: 495.06; 8 P + HT: 428.04; 8 P + 8 E: 336.22; 8 P + HT + 8 E: 335.57

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.9.1Model: shufflenet-v2-10 - Device: CPU1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E10K20K30K40K50KSE +/- 19.55, N = 3SE +/- 54.24, N = 3SE +/- 162.14, N = 3SE +/- 905.56, N = 12220083878245234402571. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto -fno-fat-lto-objects -ldl -lrt
OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.9.1Model: shufflenet-v2-10 - Device: CPU1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E8K16K24K32K40KMin: 21970 / Avg: 22008 / Max: 22035Min: 38703.5 / Avg: 38781.83 / Max: 38886Min: 44916 / Avg: 45234.33 / Max: 45447Min: 36990 / Avg: 40256.67 / Max: 43450.51. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.9.1 - CPU Power Consumption Monitor (Watts, Fewer Is Better)
  1 P + HT + 8 E: Min 4.8 / Avg 55.0 / Max 59.3
  8 P + HT: Min 4.6 / Avg 106.6 / Max 122.7
  8 P + 8 E: Min 2.6 / Avg 199.7 / Max 225.2
  8 P + HT + 8 E: Min 5.1 / Avg 173.4 / Max 199.1

ONNX Runtime 1.9.1 - Model: fcn-resnet101-11 - Device: CPU (Inferences Per Minute Per Watt, More Is Better): 1 P + HT + 8 E: 0.582; 8 P + HT: 0.685; 8 P + 8 E: 0.611; 8 P + HT + 8 E: 0.560

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.9.1Model: fcn-resnet101-11 - Device: CPU1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E306090120150SE +/- 0.00, N = 3SE +/- 1.63, N = 12SE +/- 0.17, N = 3SE +/- 0.44, N = 33273122971. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto -fno-fat-lto-objects -ldl -lrt
OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.9.1Model: fcn-resnet101-11 - Device: CPU1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E20406080100Min: 32 / Avg: 32 / Max: 32Min: 65 / Avg: 72.63 / Max: 76.5Min: 122 / Avg: 122.17 / Max: 122.5Min: 96 / Avg: 96.67 / Max: 97.51. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto -fno-fat-lto-objects -ldl -lrt

Cpuminer-Opt

Cpuminer-Opt 3.18 - CPU Power Consumption Monitor (Watts, Fewer Is Better)
  1 P + HT + 8 E: Min 4.7 / Avg 44.0 / Max 75.0
  8 P + HT: Min 4.8 / Avg 111.7 / Max 145.6
  8 P + 8 E: Min 5.0 / Avg 67.2 / Max 135.9
  8 P + HT + 8 E: Min 5.1 / Avg 73.0 / Max 115.9

Cpuminer-Opt 3.18 - Algorithm: Triple SHA-256, Onecoin (kH/s Per Watt, More Is Better): 1 P + HT + 8 E: 3374.30; 8 P + HT: 1151.02; 8 P + 8 E: 3684.14; 8 P + HT + 8 E: 3629.99

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.18Algorithm: Triple SHA-256, Onecoin1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E60K120K180K240K300KSE +/- 6559.72, N = 12SE +/- 343.72, N = 3SE +/- 315.40, N = 3SE +/- 1251.19, N = 31484421285472476532648731. (CXX) g++ options: -O3 -march=native -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.18Algorithm: Triple SHA-256, Onecoin1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E50K100K150K200K250KMin: 76380 / Avg: 148441.67 / Max: 156410Min: 127940 / Avg: 128546.67 / Max: 129130Min: 247130 / Avg: 247653.33 / Max: 248220Min: 262550 / Avg: 264873.33 / Max: 2668401. (CXX) g++ options: -O3 -march=native -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Timed Mesa Compilation

Timed Mesa Compilation 21.0 - CPU Power Consumption Monitor (Watts, Fewer Is Better)
  1 P + HT + 8 E: Min 5.0 / Avg 78.7 / Max 90.1
  8 P + HT: Min 4.8 / Avg 148.2 / Max 227.2
  8 P + 8 E: Min 4.9 / Avg 145.5 / Max 194.0
  8 P + HT + 8 E: Min 4.6 / Avg 150.1 / Max 224.2

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Mesa Compilation 21.0Time To Compile1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E1224364860SE +/- 0.12, N = 3SE +/- 0.28, N = 3SE +/- 0.45, N = 14SE +/- 0.50, N = 1553.6836.5928.4528.39
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Mesa Compilation 21.0Time To Compile1 P + HT + 8 E8 P + HT8 P + 8 E8 P + HT + 8 E1122334455Min: 53.53 / Avg: 53.68 / Max: 53.92Min: 36.21 / Avg: 36.59 / Max: 37.14Min: 26.75 / Avg: 28.44 / Max: 31.99Min: 26.25 / Avg: 28.39 / Max: 31.54
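Timed Mesa Compilation is a wall-clock measurement of a full parallel build, so the knob that changes across these core configurations is the number of build jobs handed to the build system. A rough, hypothetical sketch of such a measurement (the actual test profile builds Mesa itself and is not reproduced here; the 'ninja' command and the 'build' directory are assumptions):

    import os
    import subprocess
    import time

    def timed_build(build_dir: str = "build", jobs: int = 0) -> float:
        """Run a parallel Ninja build and return the elapsed wall-clock seconds."""
        jobs = jobs or os.cpu_count()  # e.g. 24 on the 8 P + HT + 8 E configuration
        start = time.perf_counter()
        subprocess.run(["ninja", "-C", build_dir, f"-j{jobs}"], check=True)
        return time.perf_counter() - start

    # Example usage against an already-configured build tree:
    # print(f"Time To Compile: {timed_build():.2f} seconds")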

OSPray

OSPray 1.8.5 - CPU Power Consumption Monitor
Watts, Fewer Is Better
  1 P + HT + 8 E:  Min 4.9 / Avg 30.8 / Max 82.9
  8 P + HT:        Min 4.8 / Avg 29.2 / Max 130.4
  8 P + 8 E:       Min 4.2 / Avg 33.2 / Max 147.4
  8 P + HT + 8 E:  Min 4.7 / Avg 35.7 / Max 158.3

OSPray 1.8.5 - Demo: Magnetic Reconnection - Renderer: Path Tracer
FPS, More Is Better (FPS per Watt at end of each row)
  1 P + HT + 8 E:  136.90  (SE +/- 2.25, N = 15; Min 125 / Avg 136.9 / Max 142.86; reported MIN 83.33 / MAX 142.86; 4.448 per Watt)
  8 P + HT:        500.00  (SE +/- 0.00, N = 13; Min 500 / Avg 500 / Max 500; reported MAX 1000; 17.112 per Watt)
  8 P + 8 E:       333.33  (SE +/- 0.00, N = 12; Min 333.33 / Avg 333.33 / Max 333.33; reported MIN 250 / MAX 500; 10.048 per Watt)
  8 P + HT + 8 E:  500.00  (SE +/- 0.00, N = 12; Min 500 / Avg 500 / Max 500; reported MIN 250; 13.996 per Watt)

AOM AV1

AOM AV1 3.2 - CPU Power Consumption Monitor
Watts, Fewer Is Better
  1 P + HT + 8 E:  Min 4.8 / Avg 54.2 / Max 64.7
  8 P + HT:        Min 4.9 / Avg 117.0 / Max 141.5
  8 P + 8 E:       Min 4.4 / Avg 72.5 / Max 102.0
  8 P + HT + 8 E:  Min 2.4 / Avg 77.7 / Max 108.4

AOM AV1 3.2 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K
Frames Per Second, More Is Better (FPS per Watt at end of each row)
  1 P + HT + 8 E:  9.02   (SE +/- 0.11, N = 15; Min 8.3 / Avg 9.02 / Max 9.45; 0.166 per Watt)
  8 P + HT:        14.87  (SE +/- 0.14, N = 7; Min 14.2 / Avg 14.87 / Max 15.25; 0.127 per Watt)
  8 P + 8 E:       12.84  (SE +/- 0.30, N = 15; Min 10.63 / Avg 12.84 / Max 14.52; 0.177 per Watt)
  8 P + HT + 8 E:  12.72  (SE +/- 0.39, N = 15; Min 10.85 / Avg 12.72 / Max 14.87; 0.164 per Watt)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1 3.2 - CPU Power Consumption Monitor
Watts, Fewer Is Better
  1 P + HT + 8 E:  Min 4.44 / Avg 34.98 / Max 55.93
  8 P + HT:        Min 4.86 / Avg 81.28 / Max 101.55
  8 P + 8 E:       Min 4.48 / Avg 46.92 / Max 71.84
  8 P + HT + 8 E:  Min 4.34 / Avg 59.51 / Max 90.73

AOM AV1 3.2 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p
Frames Per Second, More Is Better (FPS per Watt at end of each row)
  1 P + HT + 8 E:  6.84   (SE +/- 0.25, N = 15; Min 5.46 / Avg 6.84 / Max 8.09; 0.196 per Watt)
  8 P + HT:        12.98  (SE +/- 0.17, N = 3; Min 12.71 / Avg 12.98 / Max 13.29; 0.160 per Watt)
  8 P + 8 E:       10.60  (SE +/- 0.56, N = 15; Min 6.78 / Avg 10.6 / Max 12.58; 0.226 per Watt)
  8 P + HT + 8 E:  10.47  (SE +/- 0.49, N = 15; Min 7.16 / Avg 10.47 / Max 12.69; 0.176 per Watt)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1 3.2 - CPU Power Consumption Monitor
Watts, Fewer Is Better
  1 P + HT + 8 E:  Min 4.9 / Avg 36.7 / Max 66.5
  8 P + HT:        Min 4.8 / Avg 63.3 / Max 138.7
  8 P + 8 E:       Min 4.5 / Avg 29.1 / Max 76.6
  8 P + HT + 8 E:  Min 4.7 / Avg 45.5 / Max 107.6

AOM AV1 3.2 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p
Frames Per Second, More Is Better (FPS per Watt at end of each row)
  1 P + HT + 8 E:  123.95  (SE +/- 0.90, N = 15; Min 115.51 / Avg 123.95 / Max 127.45; 3.379 per Watt)
  8 P + HT:        183.50  (SE +/- 0.53, N = 9; Min 181.45 / Avg 183.5 / Max 185.94; 2.897 per Watt)
  8 P + 8 E:       108.38  (SE +/- 1.75, N = 15; Min 97.55 / Avg 108.38 / Max 118.27; 3.722 per Watt)
  8 P + HT + 8 E:  150.85  (SE +/- 2.26, N = 15; Min 141.79 / Avg 150.85 / Max 169.15; 3.314 per Watt)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm

LibRaw

LibRaw 0.20 - CPU Power Consumption Monitor
Watts, Fewer Is Better
  1 P + HT + 8 E:  Min 5.0 / Avg 45.0 / Max 56.8
  8 P + HT:        Min 4.8 / Avg 64.0 / Max 88.0
  8 P + 8 E:       Min 4.1 / Avg 50.0 / Max 77.5
  8 P + HT + 8 E:  Min 4.5 / Avg 43.5 / Max 72.6

LibRaw 0.20 - Post-Processing Benchmark
Mpix/sec, More Is Better (Mpix/sec per Watt at end of each row)
  1 P + HT + 8 E:  75.02  (SE +/- 0.07, N = 4; Min 74.82 / Avg 75.02 / Max 75.1; 1.668 per Watt)
  8 P + HT:        84.35  (SE +/- 0.17, N = 4; Min 83.93 / Avg 84.35 / Max 84.67; 1.318 per Watt)
  8 P + 8 E:       77.94  (SE +/- 0.17, N = 4; Min 77.62 / Avg 77.94 / Max 78.38; 1.559 per Watt)
  8 P + HT + 8 E:  62.61  (SE +/- 2.34, N = 15; Min 54.77 / Avg 62.61 / Max 79.31; 1.438 per Watt)
1. (CXX) g++ options: -O3 -march=native -fopenmp -ljpeg -lz -lm

Hugin

Hugin - CPU Power Consumption Monitor
Watts, Fewer Is Better
  1 P + HT + 8 E:  Min 4.9 / Avg 46.1 / Max 90.4
  8 P + HT:        Min 4.9 / Avg 74.5 / Max 175.4
  8 P + 8 E:       Min 4.6 / Avg 51.4 / Max 163.6
  8 P + HT + 8 E:  Min 4.7 / Avg 63.2 / Max 160.6

Hugin - Panorama Photo Assistant + Stitching Time
Seconds, Fewer Is Better
  1 P + HT + 8 E:  37.44  (SE +/- 0.08, N = 3; Min 37.29 / Avg 37.44 / Max 37.52)
  8 P + HT:        31.73  (SE +/- 0.40, N = 3; Min 31.12 / Avg 31.73 / Max 32.49)
  8 P + 8 E:       40.66  (SE +/- 1.65, N = 15; Min 33.51 / Avg 40.66 / Max 46.99)
  8 P + HT + 8 E:  32.33  (SE +/- 0.16, N = 3; Min 32.01 / Avg 32.33 / Max 32.52)

JPEG XL libjxl

JPEG XL libjxl 0.6.1 - CPU Power Consumption Monitor
Watts, Fewer Is Better
  1 P + HT + 8 E:  Min 4.6 / Avg 17.8 / Max 25.1
  8 P + HT:        Min 4.8 / Avg 26.2 / Max 40.9
  8 P + 8 E:       Min 4.6 / Avg 22.9 / Max 40.3
  8 P + HT + 8 E:  Min 4.6 / Avg 18.9 / Max 27.3

JPEG XL libjxl 0.6.1 - Input: JPEG - Encode Speed: 8
MP/s, More Is Better (MP/s per Watt at end of each row)
  1 P + HT + 8 E:  28.41  (SE +/- 0.10, N = 5; Min 28.16 / Avg 28.41 / Max 28.68; 1.597 per Watt)
  8 P + HT:        43.06  (SE +/- 0.09, N = 6; Min 42.68 / Avg 43.06 / Max 43.28; 1.644 per Watt)
  8 P + 8 E:       40.22  (SE +/- 0.97, N = 15; Min 31.31 / Avg 40.22 / Max 44.97; 1.760 per Watt)
  8 P + HT + 8 E:  30.26  (SE +/- 0.05, N = 5; Min 30.13 / Avg 30.26 / Max 30.46; 1.604 per Watt)
1. (CXX) g++ options: -O3 -march=native -funwind-tables -O2 -pthread -fPIE -pie

JPEG XL libjxl 0.6.1 - CPU Power Consumption Monitor
Watts, Fewer Is Better
  1 P + HT + 8 E:  Min 4.8 / Avg 21.0 / Max 26.0
  8 P + HT:        Min 4.5 / Avg 32.8 / Max 46.6
  8 P + 8 E:       Min 3.9 / Avg 30.4 / Max 45.5
  8 P + HT + 8 E:  Min 4.5 / Avg 23.5 / Max 43.1

JPEG XL libjxl 0.6.1 - Input: JPEG - Encode Speed: 7
MP/s, More Is Better (MP/s per Watt at end of each row)
  1 P + HT + 8 E:  70.20   (SE +/- 0.28, N = 4; Min 69.42 / Avg 70.2 / Max 70.75; 3.343 per Watt)
  8 P + HT:        108.94  (SE +/- 0.41, N = 5; Min 107.42 / Avg 108.94 / Max 109.85; 3.319 per Watt)
  8 P + 8 E:       112.96  (SE +/- 1.19, N = 5; Min 108.74 / Avg 112.96 / Max 115.46; 3.712 per Watt)
  8 P + HT + 8 E:  78.37   (SE +/- 1.27, N = 15; Min 76.13 / Avg 78.37 / Max 91.26; 3.332 per Watt)
1. (CXX) g++ options: -O3 -march=native -funwind-tables -O2 -pthread -fPIE -pie

Zstd Compression

Zstd Compression 1.5.0 - CPU Power Consumption Monitor
Watts, Fewer Is Better
  1 P + HT + 8 E:  Min 4.9 / Avg 41.9 / Max 64.3
  8 P + HT:        Min 4.9 / Avg 77.8 / Max 131.9
  8 P + 8 E:       Min 4.8 / Avg 57.1 / Max 121.3
  8 P + HT + 8 E:  Min 4.9 / Avg 57.3 / Max 122.9

Zstd Compression 1.5.0 - Compression Level: 19, Long Mode - Decompression Speed
MB/s, More Is Better (MB/s per Watt at end of each row)
  1 P + HT + 8 E:  3121.6  (SE +/- 1.30, N = 3; Min 3119.7 / Avg 3121.63 / Max 3124.1; 74.47 per Watt)
  8 P + HT:        4622.8  (SE +/- 17.83, N = 3; Min 4587.2 / Avg 4622.83 / Max 4642; 59.44 per Watt)
  8 P + 8 E:       4088.0  (SE +/- 495.09, N = 3; Min 3097.8 / Avg 4087.97 / Max 4585.5; 71.60 per Watt)
  8 P + HT + 8 E:  3587.2  (SE +/- 492.55, N = 3; Min 3087.1 / Avg 3587.23 / Max 4572.3; 62.58 per Watt)
1. (CC) gcc options: -O3 -march=native -pthread -lz -llzma

Zstd Compression 1.5.0 - CPU Power Consumption Monitor
Watts, Fewer Is Better
  1 P + HT + 8 E:  Min 4.7 / Avg 45.6 / Max 70.2
  8 P + HT:        Min 5.0 / Avg 72.4 / Max 135.7
  8 P + 8 E:       Min 4.9 / Avg 58.4 / Max 125.5
  8 P + HT + 8 E:  Min 4.5 / Avg 61.7 / Max 124.7

Zstd Compression 1.5.0 - Compression Level: 19 - Decompression Speed
MB/s, More Is Better (MB/s per Watt at end of each row)
  1 P + HT + 8 E:  3753.9  (SE +/- 652.99, N = 3; Min 2447.9 / Avg 3753.87 / Max 4409.5; 82.25 per Watt)
  8 P + HT:        4461.3  (SE +/- 18.40, N = 3; Min 4425.1 / Avg 4461.27 / Max 4485.2; 61.64 per Watt)
  8 P + 8 E:       4460.0  (SE +/- 18.90, N = 3; Min 4422.6 / Avg 4459.97 / Max 4483.6; 76.31 per Watt)
  8 P + HT + 8 E:  3500.8  (SE +/- 496.73, N = 3; Min 2997.9 / Avg 3500.77 / Max 4494.2; 56.78 per Watt)
1. (CC) gcc options: -O3 -march=native -pthread -lz -llzma

Zstd Compression 1.5.0 - Compression Level: 8 - Compression Speed
MB/s, More Is Better
  1 P + HT + 8 E:  414.9  (SE +/- 16.69, N = 15; Min 354.6 / Avg 414.87 / Max 556.9)
  8 P + HT:        969.2  (SE +/- 7.36, N = 3; Min 954.7 / Avg 969.23 / Max 978.5)
  8 P + 8 E:       850.4  (SE +/- 8.69, N = 15; Min 788.1 / Avg 850.43 / Max 900.7)
  8 P + HT + 8 E:  787.0  (SE +/- 8.96, N = 4; Min 761.5 / Avg 787 / Max 803.4)
1. (CC) gcc options: -O3 -march=native -pthread -lz -llzma
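The Zstd figures are compression and decompression throughput in MB/s at the indicated compression level. Purely as an illustration of the metric (the benchmark drives zstd itself, not Python), a small sketch using the third-party python-zstandard bindings, which are an assumption here:

    import os
    import time
    import zstandard as zstd  # third-party 'zstandard' package, assumed installed

    def zstd_throughput(level: int = 19, size_mb: int = 16) -> tuple:
        # Repeated random block: large enough to time, compressible enough to be realistic.
        data = os.urandom(1024) * (size_mb * 1024)

        cctx = zstd.ZstdCompressor(level=level)
        start = time.perf_counter()
        compressed = cctx.compress(data)
        compress_mb_s = size_mb / (time.perf_counter() - start)

        dctx = zstd.ZstdDecompressor()
        start = time.perf_counter()
        dctx.decompress(compressed)
        decompress_mb_s = size_mb / (time.perf_counter() - start)
        return compress_mb_s, decompress_mb_s

    c, d = zstd_throughput(level=19)
    print(f"Level 19: compress {c:.1f} MB/s, decompress {d:.1f} MB/s")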

WireGuard + Linux Networking Stack Stress Test

WireGuard + Linux Networking Stack Stress Test - CPU Power Consumption Monitor
Watts, Fewer Is Better
  1 P + HT + 8 E:  Min 4.84 / Avg 49.72 / Max 71.54
  8 P + HT:        Min 4.82 / Avg 124.81 / Max 171.77
  8 P + 8 E:       Min 4.76 / Avg 78.54 / Max 126.46
  8 P + HT + 8 E:  Min 4.56 / Avg 96.88 / Max 137.74

WireGuard + Linux Networking Stack Stress Test
Seconds, Fewer Is Better
  1 P + HT + 8 E:  174.06  (SE +/- 1.69, N = 12; Min 165.09 / Avg 174.06 / Max 182.41)
  8 P + HT:        98.60   (SE +/- 0.21, N = 3; Min 98.32 / Avg 98.6 / Max 99)
  8 P + 8 E:       140.64  (SE +/- 5.38, N = 12; Min 117.09 / Avg 140.64 / Max 164.16)
  8 P + HT + 8 E:  107.44  (SE +/- 0.56, N = 3; Min 106.66 / Avg 107.44 / Max 108.53)

Unvanquished

Unvanquished 0.52.1 - CPU Power Consumption Monitor
Watts, Fewer Is Better
  1 P + HT + 8 E:  Min 5.0 / Avg 20.2 / Max 30.4
  8 P + HT:        Min 4.9 / Avg 34.2 / Max 48.5
  8 P + 8 E:       Min 4.9 / Avg 24.6 / Max 37.1
  8 P + HT + 8 E:  Min 5.0 / Avg 28.1 / Max 41.4

Unvanquished 0.52.1 - Resolution: 3840 x 2160 - Effects Quality: Ultra
Frames Per Second, More Is Better (FPS per Watt at end of each row)
  1 P + HT + 8 E:  252.4  (SE +/- 3.12, N = 3; Min 246.4 / Avg 252.43 / Max 256.8; 12.47 per Watt)
  8 P + HT:        451.2  (SE +/- 3.06, N = 3; Min 446.4 / Avg 451.23 / Max 456.9; 13.19 per Watt)
  8 P + 8 E:       324.0  (SE +/- 13.28, N = 15; Min 262.1 / Avg 324.01 / Max 441.1; 13.17 per Watt)
  8 P + HT + 8 E:  383.9  (SE +/- 16.01, N = 12; Min 271.3 / Avg 383.91 / Max 444.6; 13.66 per Watt)

Unvanquished 0.52.1 - CPU Power Consumption Monitor
Watts, Fewer Is Better
  1 P + HT + 8 E:  Min 5.0 / Avg 20.3 / Max 28.5
  8 P + HT:        Min 4.9 / Avg 33.9 / Max 46.4
  8 P + 8 E:       Min 4.9 / Avg 23.8 / Max 37.8
  8 P + HT + 8 E:  Min 4.9 / Avg 28.7 / Max 45.9

Unvanquished 0.52.1 - Resolution: 3840 x 2160 - Effects Quality: High
Frames Per Second, More Is Better (FPS per Watt at end of each row)
  1 P + HT + 8 E:  266.4  (SE +/- 3.46, N = 3; Min 260.9 / Avg 266.4 / Max 272.8; 13.12 per Watt)
  8 P + HT:        443.1  (SE +/- 3.38, N = 3; Min 437.2 / Avg 443.07 / Max 448.9; 13.09 per Watt)
  8 P + 8 E:       340.7  (SE +/- 14.80, N = 12; Min 276.9 / Avg 340.74 / Max 437.4; 14.32 per Watt)
  8 P + HT + 8 E:  376.9  (SE +/- 12.47, N = 15; Min 266.8 / Avg 376.91 / Max 443.5; 13.13 per Watt)

Unvanquished 0.52.1 - CPU Power Consumption Monitor
Watts, Fewer Is Better
  1 P + HT + 8 E:  Min 5.0 / Avg 20.7 / Max 32.0
  8 P + HT:        Min 4.9 / Avg 33.8 / Max 48.4
  8 P + 8 E:       Min 5.0 / Avg 23.9 / Max 38.4
  8 P + HT + 8 E:  Min 4.9 / Avg 27.0 / Max 43.5

Unvanquished 0.52.1 - Resolution: 3840 x 2160 - Effects Quality: Medium
Frames Per Second, More Is Better (FPS per Watt at end of each row)
  1 P + HT + 8 E:  280.7  (SE +/- 4.16, N = 12; Min 267.2 / Avg 280.74 / Max 309.5; 13.54 per Watt)
  8 P + HT:        461.1  (SE +/- 2.14, N = 3; Min 458.3 / Avg 461.1 / Max 465.3; 13.63 per Watt)
  8 P + 8 E:       361.5  (SE +/- 16.42, N = 15; Min 265.3 / Avg 361.5 / Max 459.1; 15.11 per Watt)
  8 P + HT + 8 E:  371.7  (SE +/- 17.98, N = 12; Min 277.6 / Avg 371.66 / Max 452.3; 13.78 per Watt)

Unvanquished 0.52.1 - CPU Power Consumption Monitor
Watts, Fewer Is Better
  1 P + HT + 8 E:  Min 5.0 / Avg 21.0 / Max 31.7
  8 P + HT:        Min 4.9 / Avg 34.4 / Max 47.8
  8 P + 8 E:       Min 5.0 / Avg 24.5 / Max 38.5
  8 P + HT + 8 E:  Min 4.9 / Avg 26.8 / Max 44.2

Unvanquished 0.52.1 - Resolution: 1920 x 1080 - Effects Quality: Ultra
Frames Per Second, More Is Better (FPS per Watt at end of each row)
  1 P + HT + 8 E:  280.6  (SE +/- 3.94, N = 13; Min 261.6 / Avg 280.62 / Max 300.3; 13.35 per Watt)
  8 P + HT:        457.8  (SE +/- 1.43, N = 3; Min 455.8 / Avg 457.83 / Max 460.6; 13.32 per Watt)
  8 P + 8 E:       353.2  (SE +/- 12.26, N = 15; Min 276.5 / Avg 353.23 / Max 453.3; 14.43 per Watt)
  8 P + HT + 8 E:  355.0  (SE +/- 16.79, N = 15; Min 264.7 / Avg 354.95 / Max 445.1; 13.26 per Watt)

Unvanquished 0.52.1 - CPU Power Consumption Monitor
Watts, Fewer Is Better
  1 P + HT + 8 E:  Min 4.9 / Avg 21.1 / Max 32.8
  8 P + HT:        Min 4.8 / Avg 33.5 / Max 45.6
  8 P + 8 E:       Min 4.9 / Avg 25.2 / Max 37.6
  8 P + HT + 8 E:  Min 4.9 / Avg 27.7 / Max 44.3

Unvanquished 0.52.1 - Resolution: 1920 x 1080 - Effects Quality: High
Frames Per Second, More Is Better (FPS per Watt at end of each row)
  1 P + HT + 8 E:  283.4  (SE +/- 5.62, N = 15; Min 262.9 / Avg 283.39 / Max 339.5; 13.43 per Watt)
  8 P + HT:        459.4  (SE +/- 4.57, N = 3; Min 450.4 / Avg 459.43 / Max 465.2; 13.73 per Watt)
  8 P + 8 E:       374.7  (SE +/- 16.36, N = 15; Min 264.5 / Avg 374.68 / Max 457.1; 14.85 per Watt)
  8 P + HT + 8 E:  373.9  (SE +/- 14.47, N = 15; Min 267.4 / Avg 373.87 / Max 452.5; 13.48 per Watt)

Unvanquished 0.52.1 - CPU Power Consumption Monitor
Watts, Fewer Is Better
  1 P + HT + 8 E:  Min 5.0 / Avg 21.1 / Max 30.2
  8 P + HT:        Min 4.9 / Avg 33.8 / Max 46.8
  8 P + 8 E:       Min 4.9 / Avg 25.4 / Max 37.6
  8 P + HT + 8 E:  Min 5.0 / Avg 28.7 / Max 45.0

Unvanquished 0.52.1 - Resolution: 1920 x 1080 - Effects Quality: Medium
Frames Per Second, More Is Better (FPS per Watt at end of each row)
  1 P + HT + 8 E:  285.5  (SE +/- 3.82, N = 3; Min 277.9 / Avg 285.47 / Max 290.1; 13.56 per Watt)
  8 P + HT:        467.5  (SE +/- 0.70, N = 3; Min 466.2 / Avg 467.47 / Max 468.6; 13.85 per Watt)
  8 P + 8 E:       367.3  (SE +/- 15.19, N = 15; Min 286.5 / Avg 367.33 / Max 467.5; 14.47 per Watt)
  8 P + HT + 8 E:  414.6  (SE +/- 11.08, N = 15; Min 353.3 / Avg 414.63 / Max 459.1; 14.47 per Watt)

Xonotic

Xonotic 0.8.2 - CPU Power Consumption Monitor
Watts, Fewer Is Better
  8 P + HT + 8 E:  Min 4.8 / Avg 37.5 / Max 51.2

Xonotic 0.8.2 - Resolution: 3840 x 2160 - Effects Quality: Ultimate
Frames Per Second, More Is Better (FPS per Watt at end of each row)
  1 P + HT + 8 E:  293.57  (SE +/- 18.66, N = 12; Min 251.99 / Avg 293.57 / Max 427.16; reported MIN 44 / MAX 1241; 13.46 per Watt)
  8 P + HT:        449.44  (SE +/- 5.78, N = 3; Min 438.48 / Avg 449.44 / Max 458.1; reported MIN 77 / MAX 1279; 12.01 per Watt)
  8 P + 8 E:       405.24  (SE +/- 20.23, N = 15; Min 254.29 / Avg 405.24 / Max 460.68; reported MIN 45 / MAX 1265; 14.45 per Watt)
  8 P + HT + 8 E:  452.59  (SE +/- 5.40, N = 3; Min 442.29 / Avg 452.59 / Max 460.54; reported MIN 78 / MAX 1269; 12.06 per Watt)

Xonotic 0.8.2 - CPU Power Consumption Monitor
Watts, Fewer Is Better
  8 P + HT + 8 E:  Min 4.8 / Avg 33.8 / Max 51.9

Xonotic 0.8.2 - Resolution: 3840 x 2160 - Effects Quality: Ultra
Frames Per Second, More Is Better (FPS per Watt at end of each row)
  1 P + HT + 8 E:  312.98  (SE +/- 1.66, N = 3; Min 310.28 / Avg 312.98 / Max 316; reported MIN 181 / MAX 726; 15.62 per Watt)
  8 P + HT:        527.70  (SE +/- 5.36, N = 3; Min 520.56 / Avg 527.7 / Max 538.19; reported MIN 302 / MAX 1392; 14.59 per Watt)
  8 P + 8 E:       534.62  (SE +/- 7.10, N = 3; Min 520.43 / Avg 534.62 / Max 541.97; reported MIN 305 / MAX 1356; 18.51 per Watt)
  8 P + HT + 8 E:  505.67  (SE +/- 16.73, N = 15; Min 318.76 / Avg 505.67 / Max 544.5; reported MIN 186 / MAX 1390; 14.96 per Watt)

Xonotic 0.8.2 - CPU Power Consumption Monitor
Watts, Fewer Is Better
  8 P + HT + 8 E:  Min 4.6 / Avg 32.8 / Max 50.3

Xonotic 0.8.2 - Resolution: 3840 x 2160 - Effects Quality: High
Frames Per Second, More Is Better (FPS per Watt at end of each row)
  1 P + HT + 8 E:  417.16  (SE +/- 18.57, N = 15; Min 352.89 / Avg 417.16 / Max 565.08; reported MIN 215 / MAX 1443; 19.16 per Watt)
  8 P + HT:        579.10  (SE +/- 3.51, N = 3; Min 575.1 / Avg 579.1 / Max 586.09; reported MIN 357 / MAX 1583; 16.56 per Watt)
  8 P + 8 E:       571.64  (SE +/- 6.36, N = 4; Min 556.09 / Avg 571.64 / Max 584.41; reported MIN 168 / MAX 1519; 19.64 per Watt)
  8 P + HT + 8 E:  561.17  (SE +/- 10.90, N = 12; Min 445 / Avg 561.17 / Max 579.14; reported MIN 226 / MAX 1581; 17.11 per Watt)

Tesseract

Tesseract 2014-05-12 - CPU Power Consumption Monitor
Watts, Fewer Is Better
  1 P + HT + 8 E:  Min 5.0 / Avg 20.1 / Max 29.9
  8 P + HT:        Min 4.9 / Avg 41.6 / Max 71.0
  8 P + 8 E:       Min 4.9 / Avg 29.4 / Max 39.1
  8 P + HT + 8 E:  Min 4.8 / Avg 33.4 / Max 55.0

Tesseract 2014-05-12 - Resolution: 1920 x 1080
Frames Per Second, More Is Better (FPS per Watt at end of each row)
  1 P + HT + 8 E:  631.20  (SE +/- 4.90, N = 3; Min 622.33 / Avg 631.2 / Max 639.26; 31.48 per Watt)
  8 P + HT:        997.77  (SE +/- 1.47, N = 3; Min 994.98 / Avg 997.77 / Max 1000; 24.00 per Watt)
  8 P + 8 E:       923.78  (SE +/- 35.53, N = 15; Min 624.58 / Avg 923.78 / Max 998.33; 31.41 per Watt)
  8 P + HT + 8 E:  988.51  (SE +/- 2.41, N = 3; Min 985.62 / Avg 988.51 / Max 993.29; 29.59 per Watt)

194 Results Shown

OSPray
ONNX Runtime
OSPray
ASTC Encoder
ONNX Runtime
OSPray
OpenSSL:
  RSA4096:
    sign/s
    verify/s
OSPray:
  San Miguel - SciVis
  San Miguel - Path Tracer
Cpuminer-Opt
oneDNN:
  Recurrent Neural Network Inference - f32 - CPU
  Recurrent Neural Network Inference - u8s8f32 - CPU
  Recurrent Neural Network Training - f32 - CPU
  Recurrent Neural Network Training - u8s8f32 - CPU
Xmrig
OSPray
Embree
OSPray
Cpuminer-Opt:
  Deepcoin
  LBC, LBRY Credits
Embree:
  Pathtracer ISPC - Asian Dragon
  Pathtracer - Crown
NAMD
Embree
PlaidML
Intel Open Image Denoise:
  RT.ldr_alb_nrm.3840x2160
  RT.hdr_alb_nrm.3840x2160
Stockfish
Cpuminer-Opt
NCNN
SVT-HEVC
Cpuminer-Opt
SVT-AV1
PlaidML
Cpuminer-Opt
Coremark
SVT-AV1
SVT-VP9
SVT-HEVC
Intel Open Image Denoise
SVT-VP9
Cpuminer-Opt
Zstd Compression
Timed Linux Kernel Compilation
ET: Legacy:
  3840 x 2160
  1920 x 1080
Cpuminer-Opt
7-Zip Compression
Xmrig
NCNN
Cpuminer-Opt
GROMACS
SVT-AV1
NCNN
Timed Godot Game Engine Compilation
NCNN
OpenSSL
oneDNN
NCNN
Sysbench
NCNN
Zstd Compression
JPEG XL Decoding libjxl
SVT-AV1
libavif avifenc
NCNN
ASTC Encoder
Cpuminer-Opt
NCNN
LeelaChessZero
NCNN
Zstd Compression
oneDNN:
  Convolution Batch Shapes Auto - u8s8f32 - CPU
  Convolution Batch Shapes Auto - f32 - CPU
NCNN
JPEG XL libjxl
AOM AV1
LeelaChessZero
libavif avifenc
AOM AV1:
  Speed 10 Realtime - Bosphorus 4K
  Speed 10 Realtime - Bosphorus 1080p
RawTherapee
JPEG XL libjxl
NCNN
Darmstadt Automotive Parallel Heterogeneous Suite
DeepSpeech
libavif avifenc
Darmstadt Automotive Parallel Heterogeneous Suite
Xonotic:
  CPU Power Consumption Monitor:
    Watts
    Watts
    Watts
  Phoronix Test Suite System Monitoring:
    Watts
  CPU Power Consumption Monitor:
    Watts
Selenium
Selenium
Selenium
Selenium
Selenium
Selenium
Selenium
Selenium:
  CPU Power Consumption Monitor
  StyleBench - Google Chrome
Selenium
Selenium:
  CPU Power Consumption Monitor
  Speedometer - Google Chrome
Selenium
Selenium
Selenium
Darmstadt Automotive Parallel Heterogeneous Suite:
  CPU Power Consumption Monitor
  OpenMP - Points2Image
Darmstadt Automotive Parallel Heterogeneous Suite
NCNN
NCNN:
  CPU - regnety_400m
  CPU - yolov4-tiny
  CPU - blazeface
  CPU - mobilenet
OpenCV
OpenCV
OpenCV
OpenCV
ONNX Runtime:
  CPU Power Consumption Monitor
  shufflenet-v2-10 - CPU
ONNX Runtime
ONNX Runtime:
  CPU Power Consumption Monitor
  fcn-resnet101-11 - CPU
ONNX Runtime
Cpuminer-Opt:
  CPU Power Consumption Monitor
  Triple SHA-256, Onecoin
Cpuminer-Opt
Timed Mesa Compilation
Timed Mesa Compilation
OSPray:
  CPU Power Consumption Monitor
  Magnetic Reconnection - Path Tracer
OSPray
AOM AV1:
  CPU Power Consumption Monitor
  Speed 6 Realtime - Bosphorus 4K
AOM AV1
AOM AV1:
  CPU Power Consumption Monitor
  Speed 6 Realtime - Bosphorus 1080p
AOM AV1
AOM AV1:
  CPU Power Consumption Monitor
  Speed 8 Realtime - Bosphorus 1080p
AOM AV1
LibRaw:
  CPU Power Consumption Monitor
  Post-Processing Benchmark
LibRaw
Hugin
Hugin
JPEG XL libjxl:
  CPU Power Consumption Monitor
  JPEG - 8
JPEG XL libjxl
JPEG XL libjxl:
  CPU Power Consumption Monitor
  JPEG - 7
JPEG XL libjxl
Zstd Compression:
  CPU Power Consumption Monitor
  19, Long Mode - Decompression Speed
Zstd Compression
Zstd Compression:
  CPU Power Consumption Monitor
  19 - Decompression Speed
Zstd Compression:
  19 - Decompression Speed
  8 - Compression Speed
WireGuard + Linux Networking Stack Stress Test
WireGuard + Linux Networking Stack Stress Test
Unvanquished:
  CPU Power Consumption Monitor
  3840 x 2160 - Ultra
Unvanquished
Unvanquished:
  CPU Power Consumption Monitor
  3840 x 2160 - High
Unvanquished
Unvanquished:
  CPU Power Consumption Monitor
  3840 x 2160 - Medium
Unvanquished
Unvanquished:
  CPU Power Consumption Monitor
  1920 x 1080 - Ultra
Unvanquished
Unvanquished:
  CPU Power Consumption Monitor
  1920 x 1080 - High
Unvanquished
Unvanquished:
  CPU Power Consumption Monitor
  1920 x 1080 - Medium
Unvanquished
Xonotic:
  CPU Power Consumption Monitor
  3840 x 2160 - Ultimate
Xonotic
Xonotic:
  CPU Power Consumption Monitor
  3840 x 2160 - Ultra
Xonotic
Xonotic:
  CPU Power Consumption Monitor
  3840 x 2160 - High
Xonotic
Tesseract:
  CPU Power Consumption Monitor
  1920 x 1080
Tesseract