Alder Lake Intel Linux Tests

Intel Core i5-12400 testing with an ASUS PRIME Z690-P WIFI D4 (0605 BIOS) and ASUS Intel ADL-S GT1 3GB on Ubuntu 21.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2201063-PTS-ALDERLAK06
Tests in this comparison span the following categories:

AV1: 4 tests
C++ Boost Tests: 2 tests
Web Browsers: 1 test
Chess Test Suite: 3 tests
Timed Code Compilation: 7 tests
C/C++ Compiler Tests: 14 tests
Compression Tests: 4 tests
CPU Massive: 23 tests
Creator Workloads: 24 tests
Cryptocurrency Benchmarks, CPU Mining Tests: 2 tests
Cryptography: 5 tests
Encoding: 7 tests
Game Development: 3 tests
HPC - High Performance Computing: 11 tests
Imaging: 9 tests
Common Kernel Benchmarks: 2 tests
Machine Learning: 8 tests
Multi-Core: 27 tests
NVIDIA GPU Compute: 6 tests
Intel oneAPI: 3 tests
Productivity: 2 tests
Programmer / Developer System Benchmarks: 11 tests
Python: 3 tests
Renderers: 6 tests
Rust Tests: 2 tests
Scientific Computing: 2 tests
Server: 4 tests
Server CPU Tests: 17 tests
Single-Threaded: 3 tests
Texture Compression: 2 tests
Video Encoding: 6 tests


Test Runs

Result Identifier    Date Run           Test Duration
Core i9 12900K       January 04 2022    10 Hours, 42 Minutes
Core i5 12600K       January 05 2022    11 Hours, 50 Minutes
Core i5 12400        January 06 2022    11 Hours, 5 Minutes

System Details

Core i9 12900K:
  Processor: Intel Core i9-12900K @ 6.50GHz (16 Cores / 24 Threads)
  Motherboard: ASUS ROG STRIX Z690-E GAMING WIFI (0811 BIOS)
  Chipset: Intel Device 7aa7
  Memory: 64GB
  Disk: 1000GB Generic
  Graphics: ASUS Intel ADL-S GT1 3GB (1550MHz)
  Audio: Intel Device 7ad0
  Monitor: ASUS VP28U
  Network: Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
  OS: Ubuntu 21.10
  Kernel: 5.16.0-051600rc8-generic (x86_64)
  Desktop: GNOME Shell 40.5
  Display Server: X Server 1.20.13 + Wayland
  OpenGL: 4.6 Mesa 21.2.2
  Vulkan: 1.2.182
  Compiler: GCC 11.2.0
  File-System: ext4
  Screen Resolution: 3840x2160

Core i5 12600K (differences from the Core i9 12900K system; other components as listed above):
  Processor: Intel Core i5-12600K @ 6.30GHz (10 Cores / 16 Threads)
  Motherboard: ASUS PRIME Z690-P WIFI D4 (0605 BIOS)
  Memory: 16GB
  Disk: 1000GB Western Digital WDS100T1X0E-00AFY0 + 1000GB Generic
  Graphics: ASUS Intel ADL-S GT1 3GB (1450MHz)
  Audio: Realtek ALC897
  Network: Realtek RTL8125 2.5GbE + Intel Device 7af0

Core i5 12400 (differences from the Core i5 12600K system; other components as listed above):
  Processor: Intel Core i5-12400 @ 5.60GHz (6 Cores / 12 Threads)

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-ZPT0kp/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-ZPT0kp/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Disk Details: MQ-DEADLINE / errors=remount-ro,relatime,rw / Block Size: 4096
Processor Details:
  Core i9 12900K: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x15 - Thermald 2.4.6
  Core i5 12600K: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x12 - Thermald 2.4.6
  Core i5 12400: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x12 - Thermald 2.4.6
Java Details: OpenJDK Runtime Environment (build 11.0.13+8-Ubuntu-0ubuntu1.21.10)
Python Details: Python 3.9.7
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Logarithmic Result Overview (Phoronix Test Suite): a combined overview chart of the Core i9 12900K, Core i5 12600K, and Core i5 12400 across all workloads in this comparison, from TensorFlow Lite and CoreMark through LZ4 Compression; per-test results follow below.

Per Watt Result Overview (Phoronix Test Suite): performance-per-Watt geometric means (P.W.G.M) for the Core i9 12900K, Core i5 12600K, and Core i5 12400 across the power-monitored workloads (Selenium, LeelaChessZero, LAMMPS, ONNX Runtime, Zstd and LZ4 compression, PlaidML, Xmrig, OpenSSL, CoreMark, and others), spanning roughly a 100% to 166% relative range.
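
The per-watt overview condenses each system's results into a performance-per-Watt geometric mean (the "P.W.G.M" entries). As a rough sketch of how such a composite figure is formed (the exact normalization the Phoronix Test Suite applies is not recorded in this result file, and the sample ratios below are hypothetical), a geometric mean can be computed like this:

    import math

    # Hypothetical per-test performance-per-Watt ratios, each normalized
    # against a baseline system (1.0 = baseline efficiency).
    per_watt_ratios = [1.22, 1.44, 1.10, 1.66, 1.31]

    # Geometric mean: the n-th root of the product of the ratios,
    # computed in log space to avoid overflow on long result lists.
    geo_mean = math.exp(sum(math.log(r) for r in per_watt_ratios) / len(per_watt_ratios))
    print(f"Performance-per-Watt geometric mean: {geo_mean:.3f}")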

Condensed result table: the complete set of raw results for every test in this comparison across the Core i9 12900K, Core i5 12600K, and Core i5 12400; the individual tests are broken out with their own results below.

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.
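
For reference on how the underlying benchmark can be driven (the exact arguments the Phoronix Test Suite passes are not recorded in this result file; the thread count and digest below are illustrative assumptions), a minimal Python sketch invoking the built-in "openssl speed" benchmark might look like:

    import subprocess

    # Run OpenSSL's built-in speed benchmark for SHA256 across 24 parallel
    # processes; "-multi 24" and "-evp sha256" are illustrative choices, not
    # necessarily what this test profile uses.
    result = subprocess.run(
        ["openssl", "speed", "-multi", "24", "-evp", "sha256"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)  # throughput figures for several block sizes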

OpenSSL 3.0 - Algorithm: SHA256 (byte/s, more is better):
  Core i9 12900K: 24194561627 (SE +/- 12372809.64, N = 3)
  Core i5 12600K: 14784567497 (SE +/- 18429785.49, N = 3)
  Core i5 12400: 8761398953 (SE +/- 15500411.55, N = 3)
  1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl
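
Each result above is reported as the mean of N runs together with its standard error (the "SE +/-" figures). A minimal sketch of how those two numbers relate, using three hypothetical run values rather than the actual OpenSSL samples:

    import math
    import statistics

    runs = [24180973640, 24184445170, 24219266070]  # hypothetical individual runs
    n = len(runs)
    mean = statistics.mean(runs)
    se = statistics.stdev(runs) / math.sqrt(n)  # standard error of the mean
    print(f"Avg: {mean:.2f}  SE +/- {se:.2f}  N = {n}")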

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.
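
Since the metric here is average inference time in microseconds, a minimal sketch of timing TensorFlow Lite CPU inference from Python is shown below; the model path, input handling, and iteration count are illustrative assumptions rather than the test profile's actual configuration.

    import time
    import numpy as np
    import tensorflow as tf

    # Load a TensorFlow Lite model and time repeated CPU inference runs.
    # "model.tflite" is a placeholder path, not one of the benchmark's models.
    interpreter = tf.lite.Interpreter(model_path="model.tflite")
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]

    data = np.random.random_sample(inp["shape"]).astype(np.float32)
    times = []
    for _ in range(50):
        interpreter.set_tensor(inp["index"], data)
        start = time.perf_counter()
        interpreter.invoke()
        times.append((time.perf_counter() - start) * 1e6)  # microseconds

    print(f"Average inference time: {sum(times) / len(times):.1f} us")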

TensorFlow Lite 2020-08-23 - Model: Inception ResNet V2 (Microseconds, fewer is better):
  Core i9 12900K: 1135970 (SE +/- 1232.94, N = 3)
  Core i5 12600K: 1816837 (SE +/- 1136.62, N = 3)
  Core i5 12400: 2921880 (SE +/- 684.13, N = 3)

TensorFlow Lite 2020-08-23 - Model: Mobilenet Quant (Microseconds, fewer is better):
  Core i9 12900K: 62049.1 (SE +/- 33.91, N = 3)
  Core i5 12600K: 101071.0 (SE +/- 34.15, N = 3)
  Core i5 12400: 158767.0 (SE +/- 17.37, N = 3)

TensorFlow Lite 2020-08-23 - Model: Inception V4 (Microseconds, fewer is better):
  Core i9 12900K: 1263303 (SE +/- 923.59, N = 3)
  Core i5 12600K: 2027390 (SE +/- 382.23, N = 3)
  Core i5 12400: 3229520 (SE +/- 361.66, N = 3)

TensorFlow Lite 2020-08-23 - Model: Mobilenet Float (Microseconds, fewer is better):
  Core i9 12900K: 58896.4 (SE +/- 43.51, N = 3)
  Core i5 12600K: 95135.7 (SE +/- 34.07, N = 3)
  Core i5 12400: 150209.0 (SE +/- 30.67, N = 3)

TensorFlow Lite 2020-08-23 - Model: SqueezeNet (Microseconds, fewer is better):
  Core i9 12900K: 87910.3 (SE +/- 57.01, N = 3)
  Core i5 12600K: 140924.0 (SE +/- 61.36, N = 3)
  Core i5 12400: 221127.0 (SE +/- 35.45, N = 3)

TensorFlow Lite 2020-08-23 - Model: NASNet Mobile (Microseconds, fewer is better):
  Core i9 12900K: 73445.2 (SE +/- 927.60, N = 3)
  Core i5 12600K: 108980.0 (SE +/- 216.29, N = 3)
  Core i5 12400: 182521.0 (SE +/- 105.69, N = 3)

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

Coremark 1.0 - CoreMark Size 666 - Iterations Per Second (Iterations/Sec, more is better):
  Core i9 12900K: 752115.82 (SE +/- 432.64, N = 3)
  Core i5 12600K: 460322.56 (SE +/- 1410.86, N = 3)
  Core i5 12400: 304747.31 (SE +/- 1423.21, N = 3)
  1. (CC) gcc options: -O2 -lrt" -lrt

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 21.06 - Test: Decompression Rating (MIPS, more is better):
  Core i9 12900K: 96602 (SE +/- 528.22, N = 3)
  Core i5 12600K: 61848 (SE +/- 259.95, N = 3)
  Core i5 12400: 41194 (SE +/- 149.29, N = 3)
  1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

Stockfish

This is a test of Stockfish, an advanced open-source C++11 chess benchmark that can scale up to 512 CPU threads. Learn more via the OpenBenchmarking.org test page.

Stockfish 13 - Total Time (Nodes Per Second, more is better):
  Core i9 12900K: 47524146 (SE +/- 478120.04, N = 6)
  Core i5 12600K: 30847674 (SE +/- 113304.69, N = 3)
  Core i5 12400: 20746943 (SE +/- 43155.70, N = 3)
  1. (CXX) g++ options: -lgcov -m64 -lpthread -fno-exceptions -std=c++17 -fprofile-use -fno-peel-loops -fno-tracer -pedantic -O3 -msse -msse3 -mpopcnt -mavx2 -msse4.1 -mssse3 -msse2 -mbmi2 -flto -flto=jobserver

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 - Acceleration: CPU - Scene: Supercar (M samples/s, more is better):
  Core i9 12900K: 8.378 (SE +/- 0.035, N = 3)
  Core i5 12600K: 5.241 (SE +/- 0.039, N = 3)
  Core i5 12400: 3.659 (SE +/- 0.004, N = 3)

Aircrack-ng

Aircrack-ng is a tool for assessing WiFi/WLAN network security. Learn more via the OpenBenchmarking.org test page.

Aircrack-ng 1.5.2 (k/s, more is better):
  Core i9 12900K: 50983.86 (SE +/- 213.61, N = 3)
  Core i5 12600K: 32917.19 (SE +/- 347.90, N = 3)
  Core i5 12400: 22304.51 (SE +/- 242.86, N = 15)
  1. (CXX) g++ options: -O3 -fvisibility=hidden -masm=intel -fcommon -rdynamic -lpthread -lz -lcrypto -ldl -lm -pthread

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 21.10.9 - Sample Rate: 480000 - Buffer Size: 1024 (Render Ratio, more is better):
  Core i9 12900K: 6.871543 (SE +/- 0.019777, N = 3)
  Core i5 12600K: 4.799675 (SE +/- 0.006987, N = 3)
  Core i5 12400: 3.012189 (SE +/- 0.001428, N = 3)
  1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.
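
The metric is inferences per minute on the CPU execution provider. A minimal Python sketch of that measurement with the onnxruntime API is shown below; the model file, input shape, and sampling window are illustrative assumptions rather than the test profile's actual settings.

    import time
    import numpy as np
    import onnxruntime as ort

    # Time CPU inference on an ONNX model and express it as inferences/minute.
    # "model.onnx" and the 1x3x224x224 input are stand-ins for the ONNX Zoo
    # networks this test profile actually runs.
    session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
    input_name = session.get_inputs()[0].name
    data = np.random.rand(1, 3, 224, 224).astype(np.float32)

    count, start = 0, time.perf_counter()
    while time.perf_counter() - start < 10.0:  # sample for ten seconds
        session.run(None, {input_name: data})
        count += 1

    elapsed = time.perf_counter() - start
    print(f"Inferences per minute: {count / elapsed * 60:.1f}")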

ONNX Runtime 1.10 - Model: fcn-resnet101-11 - Device: CPU (Inferences Per Minute, more is better):
  Core i9 12900K: 98 (SE +/- 0.00, N = 3)
  Core i5 12600K: 67 (SE +/- 0.17, N = 3)
  Core i5 12400: 43 (SE +/- 0.00, N = 3)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.9.0 - Encoder Speed: 6, Lossless (Seconds, fewer is better):
  Core i9 12900K: 29.36 (SE +/- 0.17, N = 3)
  Core i5 12600K: 42.77 (SE +/- 0.05, N = 3)
  Core i5 12400: 65.46 (SE +/- 0.13, N = 3)
  1. (CXX) g++ options: -O3 -fPIC -lm

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.10 - Model: yolov4 - Device: CPU (Inferences Per Minute, more is better):
  Core i9 12900K: 672 (SE +/- 1.92, N = 3)
  Core i5 12600K: 458 (SE +/- 0.17, N = 3)
  Core i5 12400: 302 (SE +/- 0.00, N = 3)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported. Learn more via the OpenBenchmarking.org test page.

Blender 3.0 - Blend File: BMW27 - Compute: CPU-Only (Seconds, fewer is better):
  Core i9 12900K: 74.66 (SE +/- 0.16, N = 3)
  Core i5 12600K: 116.27 (SE +/- 0.15, N = 3)
  Core i5 12400: 165.46 (SE +/- 0.16, N = 3)

Tungsten Renderer

Tungsten is a C++ physically based renderer that makes use of Intel's Embree ray tracing library. Learn more via the OpenBenchmarking.org test page.

Tungsten Renderer 0.2.2 - Scene: Hair (Seconds, fewer is better):
  Core i9 12900K: 14.39 (SE +/- 0.06, N = 4)
  Core i5 12600K: 21.87 (SE +/- 0.02, N = 3)
  Core i5 12400: 31.33 (SE +/- 0.01, N = 3)
  1. (CXX) g++ options: -std=c++0x -march=core2 -msse2 -msse3 -mssse3 -mno-sse4.1 -mno-sse4.2 -mno-sse4a -mno-avx -mno-fma -mno-bmi2 -mno-avx2 -mno-xop -mno-fma4 -mno-avx512f -mno-avx512vl -mno-avx512pf -mno-avx512er -mno-avx512cd -mno-avx512dq -mno-avx512bw -mno-avx512ifma -mno-avx512vbmi -fstrict-aliasing -O3 -rdynamic -lIlmImf -lIlmThread -lImath -lHalf -lIex -lz -ljpeg -ldl

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported. Learn more via the OpenBenchmarking.org test page.

Blender 3.0 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, fewer is better):
  Core i9 12900K: 110.24 (SE +/- 0.15, N = 3)
  Core i5 12600K: 170.37 (SE +/- 0.16, N = 3)
  Core i5 12400: 238.65 (SE +/- 0.07, N = 3)

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 21.10.9 - Sample Rate: 480000 - Buffer Size: 512 (Render Ratio, more is better):
  Core i9 12900K: 6.366737 (SE +/- 0.008920, N = 3)
  Core i5 12600K: 4.641808 (SE +/- 0.001307, N = 3)
  Core i5 12400: 2.950076 (SE +/- 0.003403, N = 3)
  1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

Tungsten Renderer

Tungsten is a C++ physically based renderer that makes use of Intel's Embree ray tracing library. Learn more via the OpenBenchmarking.org test page.

Tungsten Renderer 0.2.2 - Scene: Non-Exponential (Seconds, fewer is better):
  Core i9 12900K: 3.72173 (SE +/- 0.02843, N = 15)
  Core i5 12600K: 5.89450 (SE +/- 0.01340, N = 7)
  Core i5 12400: 7.79702 (SE +/- 0.05588, N = 6)
  1. (CXX) g++ options: -std=c++0x -march=core2 -msse2 -msse3 -mssse3 -mno-sse4.1 -mno-sse4.2 -mno-sse4a -mno-avx -mno-fma -mno-bmi2 -mno-avx2 -mno-xop -mno-fma4 -mno-avx512f -mno-avx512vl -mno-avx512pf -mno-avx512er -mno-avx512cd -mno-avx512dq -mno-avx512bw -mno-avx512ifma -mno-avx512vbmi -fstrict-aliasing -O3 -rdynamic -lIlmImf -lIlmThread -lImath -lHalf -lIex -lz -ljpeg -ldl

Chaos Group V-RAY

This is a test of Chaos Group's V-RAY benchmark. V-RAY is a commercial renderer that can integrate with various creator software products like SketchUp and 3ds Max. The V-RAY benchmark is standalone and supports CPU and NVIDIA CUDA/RTX based rendering. Learn more via the OpenBenchmarking.org test page.

Chaos Group V-RAY 5 - Mode: CPU (vsamples, more is better):
  Core i9 12900K: 20161 (SE +/- 52.31, N = 3)
  Core i5 12600K: 13114 (SE +/- 9.91, N = 3)
  Core i5 12400: 9657 (SE +/- 13.43, N = 3)

Timed MPlayer Compilation

This test times how long it takes to build the MPlayer open-source media player program. Learn more via the OpenBenchmarking.org test page.

Timed MPlayer Compilation 1.4 - Time To Compile (Seconds, fewer is better):
  Core i9 12900K: 17.98 (SE +/- 0.10, N = 3)
  Core i5 12600K: 26.55 (SE +/- 0.00, N = 3)
  Core i5 12400: 37.17 (SE +/- 0.01, N = 3)

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenSSL 3.0 - Algorithm: RSA4096 (verify/s, more is better):
  Core i9 12900K: 279362.3 (SE +/- 64.72, N = 3)
  Core i5 12600K: 180864.3 (SE +/- 58.76, N = 3)
  Core i5 12400: 135401.2 (SE +/- 5.04, N = 3)
  1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

OpenSSL 3.0 - Algorithm: RSA4096 (sign/s, more is better):
  Core i9 12900K: 4284.4 (SE +/- 16.47, N = 3)
  Core i5 12600K: 2785.9 (SE +/- 0.87, N = 3)
  Core i5 12400: 2092.3 (SE +/- 0.30, N = 3)
  1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

Appleseed

Appleseed is an open-source production renderer focused on physically-based global illumination, designed primarily for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Disney Material (Seconds, fewer is better):
  Core i9 12900K: 109.01
  Core i5 12600K: 161.78
  Core i5 12400: 223.18

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 - Acceleration: CPU - Scene: Bedroom (M samples/s, more is better):
  Core i9 12900K: 3.044 (SE +/- 0.012, N = 3)
  Core i5 12600K: 2.010 (SE +/- 0.004, N = 3)
  Core i5 12400: 1.491 (SE +/- 0.002, N = 3)

Appleseed

Appleseed is an open-source production renderer focused on physically-based global illumination, designed primarily for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta - Scene: Material Tester (Seconds, fewer is better):
  Core i9 12900K: 115.03
  Core i5 12600K: 171.90
  Core i5 12400: 231.89

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 3.2 - Preset: Exhaustive (Seconds, fewer is better):
  Core i9 12900K: 33.47 (SE +/- 0.06, N = 3)
  Core i5 12600K: 50.96 (SE +/- 0.01, N = 3)
  Core i5 12400: 67.21 (SE +/- 0.01, N = 3)
  1. (CXX) g++ options: -O3 -flto -pthread

Helsing

Helsing is an open-source POSIX vampire number generator. This test profile measures the time it takes to generate vampire numbers of varying digit lengths. Learn more via the OpenBenchmarking.org test page.
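
For context on what Helsing generates: a vampire number has an even number of digits and factors into two "fangs" of half that length whose combined digits are a permutation of the original number, with the restriction that the fangs may not both end in zero (for example, 1260 = 21 x 60). A minimal, unoptimized Python sketch of that check follows; Helsing itself is a multithreaded C generator, not this brute-force approach.

    from itertools import permutations

    def is_vampire(n: int) -> bool:
        """Return True if n is a vampire number."""
        s = str(n)
        if len(s) % 2:
            return False  # vampire numbers have an even digit count
        half = len(s) // 2
        for digits in set(permutations(s)):
            a = int("".join(digits[:half]))
            b = int("".join(digits[half:]))
            # Both fangs must keep their full length (no leading zeros) and
            # may not both end in zero.
            if (a * b == n and len(str(a)) == half and len(str(b)) == half
                    and not (a % 10 == 0 and b % 10 == 0)):
                return True
        return False

    print([n for n in range(1000, 2000) if is_vampire(n)])  # 1260, 1395, 1435, ...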

Helsing 1.0-beta - Digit Range: 12 digit (Seconds, fewer is better):
  Core i9 12900K: 3.070 (SE +/- 0.011, N = 9)
  Core i5 12600K: 4.664 (SE +/- 0.014, N = 8)
  Core i5 12400: 6.156 (SE +/- 0.014, N = 7)
  1. (CC) gcc options: -O2 -pthread

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 7 - Input: Bosphorus 1080p (Frames Per Second, more is better):
  Core i9 12900K: 233.21 (SE +/- 1.92, N = 9)
  Core i5 12600K: 158.97 (SE +/- 0.94, N = 8)
  Core i5 12400: 116.78 (SE +/- 0.57, N = 7)
  1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

Tungsten Renderer

Tungsten is a C++ physically based renderer that makes use of Intel's Embree ray tracing library. Learn more via the OpenBenchmarking.org test page.

Tungsten Renderer 0.2.2 - Scene: Volumetric Caustic (Seconds, fewer is better):
  Core i9 12900K: 4.98045 (SE +/- 0.04022, N = 9)
  Core i5 12600K: 7.05919 (SE +/- 0.01987, N = 6)
  Core i5 12400: 9.93768 (SE +/- 0.01686, N = 5)
  1. (CXX) g++ options: -std=c++0x -march=core2 -msse2 -msse3 -mssse3 -mno-sse4.1 -mno-sse4.2 -mno-sse4a -mno-avx -mno-fma -mno-bmi2 -mno-avx2 -mno-xop -mno-fma4 -mno-avx512f -mno-avx512vl -mno-avx512pf -mno-avx512er -mno-avx512cd -mno-avx512dq -mno-avx512bw -mno-avx512ifma -mno-avx512vbmi -fstrict-aliasing -O3 -rdynamic -lIlmImf -lIlmThread -lImath -lHalf -lIex -lz -ljpeg -ldl

Darktable

Darktable is an open-source photography / workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.6.0 - Test: Boat - Acceleration: CPU-only (Seconds, fewer is better):
  Core i9 12900K: 2.690 (SE +/- 0.009, N = 9)
  Core i5 12600K: 3.880 (SE +/- 0.013, N = 8)
  Core i5 12400: 5.283 (SE +/- 0.006, N = 7)

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 3.13 - Binary: Pathtracer - Model: Crown (Frames Per Second, more is better):
  Core i9 12900K: 18.8273 (SE +/- 0.0715, N = 3; MIN: 17.29 / MAX: 20.05)
  Core i5 12600K: 12.6227 (SE +/- 0.0463, N = 3; MIN: 12.03 / MAX: 13.01)
  Core i5 12400: 9.6674 (SE +/- 0.0108, N = 3; MIN: 9.6 / MAX: 9.8)

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.14 - Time To Compile (Seconds, fewer is better):
  Core i9 12900K: 47.85 (SE +/- 0.46, N = 3)
  Core i5 12600K: 68.59 (SE +/- 0.83, N = 3)
  Core i5 12400: 92.08 (SE +/- 0.54, N = 3)

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 10 - Input: Bosphorus 1080p (Frames Per Second, more is better):
  Core i9 12900K: 475.92 (SE +/- 0.58, N = 12)
  Core i5 12600K: 330.12 (SE +/- 0.32, N = 11)
  Core i5 12400: 250.22 (SE +/- 0.12, N = 10)
  1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better):
  Core i9 12900K: 381.57 (SE +/- 0.55, N = 11)
  Core i5 12600K: 229.13 (SE +/- 0.23, N = 10)
  Core i5 12400: 201.16 (SE +/- 0.13, N = 9)
  1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray 1.8.5 - Demo: NASA Streamlines - Renderer: SciVis (FPS, more is better):
  Core i9 12900K: 34.48 (SE +/- 0.00, N = 6; MIN: 32.26 / MAX: 35.71)
  Core i5 12600K: 23.26 (SE +/- 0.00, N = 5; MIN: 20 / MAX: 23.81)
  Core i5 12400: 18.27 (SE +/- 0.08, N = 4; MIN: 18.18 / MAX: 18.52)

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 13.0 - Build System: Ninja (Seconds, fewer is better):
  Core i9 12900K: 358.47 (SE +/- 0.70, N = 3)
  Core i5 12600K: 517.31 (SE +/- 0.49, N = 3)
  Core i5 12400: 664.57 (SE +/- 0.21, N = 3)

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 21.06 - Test: Compression Rating (MIPS, more is better):
  Core i9 12900K: 119413 (SE +/- 654.91, N = 3)
  Core i5 12600K: 77975 (SE +/- 224.01, N = 3)
  Core i5 12400: 65237 (SE +/- 265.46, N = 3)
  1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

Primesieve

Primesieve generates prime numbers using a highly optimized sieve of Eratosthenes implementation. Primesieve benchmarks the CPU's L1/L2 cache performance. Learn more via the OpenBenchmarking.org test page.
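
For reference, the sieve of Eratosthenes that primesieve builds on can be sketched in a few lines of Python; the real project uses a segmented, cache-tuned C++ implementation of the same idea, which is why this benchmark stresses the L1/L2 caches.

    # Basic sieve of Eratosthenes: cross off multiples of each prime and
    # count what remains. This is the unoptimized textbook form.
    def count_primes(limit: int) -> int:
        is_prime = bytearray([1]) * (limit + 1)
        is_prime[0:2] = b"\x00\x00"
        for p in range(2, int(limit ** 0.5) + 1):
            if is_prime[p]:
                is_prime[p * p :: p] = bytearray(len(is_prime[p * p :: p]))
        return sum(is_prime)

    print(count_primes(10**6))  # 78498 primes below one million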

Primesieve 7.7 - 1e12 Prime Number Generation (Seconds, fewer is better):
  Core i9 12900K: 16.34 (SE +/- 0.05, N = 4)
  Core i5 12600K: 24.25 (SE +/- 0.01, N = 3)
  Core i5 12400: 29.81 (SE +/- 0.03, N = 3)
  1. (CXX) g++ options: -O3

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8.7 - Encoder Mode: Preset 8 - Input: Bosphorus 1080p (Frames Per Second, more is better):
  Core i9 12900K: 93.45 (SE +/- 0.25, N = 6)
  Core i5 12600K: 75.55 (SE +/- 0.33, N = 6)
  Core i5 12400: 51.58 (SE +/- 0.25, N = 4)
  1. (CXX) g++ options: -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray 1.8.5 - Demo: San Miguel - Renderer: SciVis (FPS, more is better):
  Core i9 12900K: 28.57 (SE +/- 0.00, N = 3; MIN: 25.64)
  Core i5 12600K: 18.87 (SE +/- 0.00, N = 3; MIN: 17.24 / MAX: 19.23)
  Core i5 12400: 15.79 (SE +/- 0.08, N = 3; MIN: 15.63 / MAX: 15.87)

Timed Mesa Compilation

This test profile times how long it takes to compile Mesa with Meson/Ninja. To minimize build dependencies and avoid versioning conflicts, this test is just the core Mesa build without LLVM or the extra Gallium3D/Mesa drivers enabled. Learn more via the OpenBenchmarking.org test page.

Timed Mesa Compilation 21.0 - Time To Compile (Seconds, fewer is better):
  Core i9 12900K: 25.86 (SE +/- 0.12, N = 3)
  Core i5 12600K: 35.85 (SE +/- 0.06, N = 3)
  Core i5 12400: 46.56 (SE +/- 0.01, N = 3)

Xmrig

Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

Xmrig 6.12.1 - Variant: Monero - Hash Count: 1M (H/s, more is better):
  Core i9 12900K: 6125.9 (SE +/- 48.35, N = 3)
  Core i5 12400: 3807.9 (SE +/- 4.62, N = 3)
  Core i5 12600K: 3411.0 (SE +/- 1.47, N = 3)
  1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Timed Node.js Compilation

This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built on the Chrome V8 JavaScript engine, while Node.js itself is written in C/C++. Learn more via the OpenBenchmarking.org test page.

Timed Node.js Compilation 17.3 - Time To Compile (Seconds, fewer is better):
  Core i9 12900K: 316.87 (SE +/- 0.05, N = 3)
  Core i5 12600K: 451.83 (SE +/- 1.07, N = 3)
  Core i5 12400: 567.76 (SE +/- 0.17, N = 3)

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 3.13 - Binary: Pathtracer ISPC - Model: Crown (Frames Per Second, more is better):
  Core i9 12900K: 20.16 (SE +/- 0.11, N = 3; MIN: 18.48 / MAX: 21.53)
  Core i5 12600K: 13.93 (SE +/- 0.02, N = 3; MIN: 13.38 / MAX: 14.31)
  Core i5 12400: 11.49 (SE +/- 0.03, N = 3; MIN: 11.36 / MAX: 11.74)

Xmrig

Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

Xmrig 6.12.1 - Variant: Wownero - Hash Count: 1M (H/s, more is better):
  Core i9 12900K: 10428.9 (SE +/- 95.81, N = 3)
  Core i5 12600K: 7375.9 (SE +/- 45.80, N = 3)
  Core i5 12400: 6230.7 (SE +/- 36.22, N = 3)
  1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Timed Wasmer Compilation

This test times how long it takes to compile Wasmer. Wasmer is written in the Rust programming language and is a WebAssembly runtime implementation that supports WASI and EmScripten. This test profile builds Wasmer with the Cranelift and Singlepass compiler features enabled. Learn more via the OpenBenchmarking.org test page.

Timed Wasmer Compilation 1.0.2 - Time To Compile (Seconds, fewer is better):
  Core i9 12900K: 32.82 (SE +/- 0.16, N = 3)
  Core i5 12600K: 43.37 (SE +/- 0.06, N = 3)
  Core i5 12400: 54.83 (SE +/- 0.28, N = 3)
  1. (CC) gcc options: -m64 -pie -nodefaultlibs -ldl -lgcc_s -lutil -lrt -lpthread -lm -lc

Darktable

Darktable is an open-source photography / workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.6.0 - Test: Server Room - Acceleration: CPU-only (Seconds, fewer is better):
  Core i9 12900K: 2.019 (SE +/- 0.005, N = 9)
  Core i5 12600K: 2.781 (SE +/- 0.003, N = 9)
  Core i5 12400: 3.361 (SE +/- 0.002, N = 8)

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 29Oct2020 - Model: Rhodopsin Protein (ns/day, more is better):
  Core i9 12900K: 9.564 (SE +/- 0.040, N = 10)
  Core i5 12400: 6.730 (SE +/- 0.012, N = 9)
  Core i5 12600K: 5.867 (SE +/- 0.035, N = 9)
  1. (CXX) g++ options: -O3 -lm

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.10 - Model: shufflenet-v2-10 - Device: CPU (Inferences Per Minute, More Is Better)
Core i9 12900K: 46690 (SE +/- 194.60, N = 3; Min 46302.5 / Max 46912.5)
Core i5 12600K: 39771 (SE +/- 43.00, N = 3; Min 39685.5 / Max 39819)
Core i5 12400: 29274 (SE +/- 108.93, N = 3; Min 29111.5 / Max 29481)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

PlaidML

This test profile uses the PlaidML deep learning framework, developed by Intel, to run various benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML - FP16: No - Mode: Inference - Network: VGG19 - Device: CPU (FPS, More Is Better)
Core i9 12900K: 20.44 (SE +/- 0.02, N = 3; Min 20.41 / Max 20.47)
Core i5 12600K: 15.31 (SE +/- 0.04, N = 3; Min 15.26 / Max 15.38)
Core i5 12400: 12.93 (SE +/- 0.02, N = 3; Min 12.91 / Max 12.96)

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.10 - Model: super-resolution-10 - Device: CPU (Inferences Per Minute, More Is Better)
Core i9 12900K: 4677 (SE +/- 1.20, N = 3; Min 4675.5 / Max 4679.5)
Core i5 12600K: 4215 (SE +/- 2.24, N = 3; Min 4210.5 / Max 4218)
Core i5 12400: 2986 (SE +/- 6.25, N = 3; Min 2973.5 / Max 2992.5)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

PlaidML

This test profile uses the PlaidML deep learning framework, developed by Intel, to run various benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML - FP16: No - Mode: Inference - Network: VGG16 - Device: CPU (FPS, More Is Better)
Core i9 12900K: 24.66 (SE +/- 0.05, N = 3; Min 24.56 / Max 24.72)
Core i5 12600K: 18.61 (SE +/- 0.06, N = 3; Min 18.55 / Max 18.72)
Core i5 12400: 15.83 (SE +/- 0.02, N = 3; Min 15.8 / Max 15.85)

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program or, on Windows, will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.6.0 - Test: Masskrug - Acceleration: CPU-only (Seconds, Fewer Is Better)
Core i9 12900K: 2.894 (SE +/- 0.007, N = 9; Min 2.87 / Max 2.92)
Core i5 12600K: 3.912 (SE +/- 0.009, N = 8; Min 3.88 / Max 3.95)
Core i5 12400: 4.468 (SE +/- 0.003, N = 7; Min 4.46 / Max 4.48)

XZ Compression

This test measures the time needed to compress a sample file (an Ubuntu file-system image) using XZ compression. Learn more via the OpenBenchmarking.org test page.
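As a rough illustration of the same kind of measurement (not the test profile itself), XZ compression can be timed from Python's lzma module, assuming a local sample file stands in for the Ubuntu server image used by the test:

import lzma
import time

# Hypothetical input file standing in for the image the test profile compresses.
with open("sample.img", "rb") as f:
    data = f.read()

start = time.perf_counter()
compressed = lzma.compress(data, preset=9)  # XZ container, compression level 9
elapsed = time.perf_counter() - start

print(f"compressed {len(data)} -> {len(compressed)} bytes in {elapsed:.2f} s")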

XZ Compression 5.2.4 - Compressing ubuntu-16.04.3-server-i386.img, Compression Level 9 (Seconds, Fewer Is Better)
Core i9 12900K: 21.71 (SE +/- 0.08, N = 3; Min 21.62 / Max 21.86)
Core i5 12600K: 25.77 (SE +/- 0.00, N = 3; Min 25.76 / Max 25.77)
Core i5 12400: 33.01 (SE +/- 0.10, N = 3; Min 32.82 / Max 33.17)
1. (CC) gcc options: -fvisibility=hidden -O2

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.28 - Backend: BLAS (Nodes Per Second, More Is Better)
Core i9 12900K: 1135 (SE +/- 14.13, N = 4; Min 1094 / Max 1157)
Core i5 12600K: 959 (SE +/- 8.95, N = 3; Min 944 / Max 975)
Core i5 12400: 754 (SE +/- 4.41, N = 3; Min 747 / Max 762)
1. (CXX) g++ options: -flto -pthread

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.9.0 - Encoder Speed: 6 (Seconds, Fewer Is Better)
Core i9 12900K: 8.247 (SE +/- 0.026, N = 6; Min 8.17 / Max 8.34)
Core i5 12600K: 9.867 (SE +/- 0.032, N = 5; Min 9.76 / Max 9.96)
Core i5 12400: 12.332 (SE +/- 0.033, N = 4; Min 12.25 / Max 12.4)
1. (CXX) g++ options: -O3 -fPIC -lm

Meta Performance Per Watts

Meta Performance Per Watts (Performance Per Watts, More Is Better)
Core i9 12900K: 736.15
Core i5 12600K: 605.12
Core i5 12400: 502.88
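The result file does not spell out the formula behind this composite figure; as a hedged sketch only, a performance-per-watt style metric is simply a performance score divided by the average power draw recorded while that score was produced:

def perf_per_watt(score: float, avg_watts: float) -> float:
    """Performance-per-watt as a plain ratio of a performance score to the
    average power draw measured during the run."""
    return score / avg_watts

# Hypothetical example values, not taken from this result file:
print(perf_per_watt(score=1000.0, avg_watts=125.0))  # -> 8.0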

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 19, Long Mode - Compression Speed (MB/s, More Is Better)
Core i9 12900K: 39.0 (SE +/- 0.31, N = 3; Min 38.4 / Max 39.4)
Core i5 12600K: 32.9 (SE +/- 0.07, N = 3; Min 32.8 / Max 33)
Core i5 12400: 27.0 (SE +/- 0.03, N = 3; Min 27 / Max 27.1)
1. (CC) gcc options: -O3 -pthread -lz -llzma

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based, multi-threaded video encoder for the AV1 video format; this test encodes a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8.7 - Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
Core i9 12900K: 23.63 (SE +/- 0.18, N = 10; Min 22.04 / Max 23.95)
Core i5 12600K: 19.21 (SE +/- 0.20, N = 3; Min 18.95 / Max 19.6)
Core i5 12400: 16.59 (SE +/- 0.07, N = 3; Min 16.52 / Max 16.73)
1. (CXX) g++ options: -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie

Tungsten Renderer

Tungsten is a C++ physically based renderer that makes use of Intel's Embree ray tracing library. Learn more via the OpenBenchmarking.org test page.

Tungsten Renderer 0.2.2 - Scene: Water Caustic (Seconds, Fewer Is Better)
Core i9 12900K: 18.29 (SE +/- 0.09, N = 3; Min 18.12 / Max 18.4)
Core i5 12600K: 21.70 (SE +/- 0.06, N = 3; Min 21.58 / Max 21.77)
Core i5 12400: 25.83 (SE +/- 0.02, N = 3; Min 25.79 / Max 25.87)
1. (CXX) g++ options: -std=c++0x -march=core2 -msse2 -msse3 -mssse3 -mno-sse4.1 -mno-sse4.2 -mno-sse4a -mno-avx -mno-fma -mno-bmi2 -mno-avx2 -mno-xop -mno-fma4 -mno-avx512f -mno-avx512vl -mno-avx512pf -mno-avx512er -mno-avx512cd -mno-avx512dq -mno-avx512bw -mno-avx512ifma -mno-avx512vbmi -fstrict-aliasing -O3 -rdynamic -lIlmImf -lIlmThread -lImath -lHalf -lIex -lz -ljpeg -ldl

RawTherapee

RawTherapee is a cross-platform, open-source multi-threaded RAW image processing program. Learn more via the OpenBenchmarking.org test page.

RawTherapee - Total Benchmark Time (Seconds, Fewer Is Better)
Core i9 12900K: 36.66 (SE +/- 0.05, N = 3; Min 36.59 / Max 36.77)
Core i5 12600K: 45.70 (SE +/- 0.05, N = 3; Min 45.63 / Max 45.79)
Core i5 12400: 51.28 (SE +/- 0.01, N = 3; Min 51.26 / Max 51.29)
1. RawTherapee, version 5.8, command line.

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.2 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K (Frames Per Second, More Is Better)
Core i9 12900K: 12.07 (SE +/- 0.07, N = 3; Min 11.97 / Max 12.2)
Core i5 12600K: 10.56 (SE +/- 0.01, N = 3; Min 10.54 / Max 10.58)
Core i5 12400: 8.76 (SE +/- 0.01, N = 3; Min 8.74 / Max 8.77)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Timed GDB GNU Debugger Compilation

This test times how long it takes to build the GNU Debugger (GDB) in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed GDB GNU Debugger Compilation 10.2 - Time To Compile (Seconds, Fewer Is Better)
Core i9 12900K: 44.79 (SE +/- 0.08, N = 3; Min 44.71 / Max 44.94)
Core i5 12600K: 52.29 (SE +/- 0.61, N = 3; Min 51.59 / Max 53.5)
Core i5 12400: 61.60 (SE +/- 0.13, N = 3; Min 61.35 / Max 61.76)

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: DenseNet (ms, Fewer Is Better)
Core i9 12900K: 1733.72 (SE +/- 3.43, N = 3; Min 1729.16 / Max 1740.45; MIN: 1690.5 / MAX: 1836.99)
Core i5 12600K: 1974.00 (SE +/- 5.94, N = 3; Min 1964.32 / Max 1984.79; MIN: 1895.01 / MAX: 2094.31)
Core i5 12400: 2369.35 (SE +/- 1.01, N = 3; Min 2367.38 / Max 2370.75; MIN: 2323.03 / MAX: 2425.47)
1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.2 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better)
Core i9 12900K: 14.54 (SE +/- 0.15, N = 3; Min 14.25 / Max 14.69)
Core i5 12600K: 13.15 (SE +/- 0.07, N = 3; Min 13.02 / Max 13.24)
Core i5 12400: 11.15 (SE +/- 0.15, N = 3; Min 10.86 / Max 11.35)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.28 - Backend: Eigen (Nodes Per Second, More Is Better)
Core i9 12900K: 1841 (SE +/- 11.46, N = 3; Min 1825 / Max 1863)
Core i5 12600K: 1449 (SE +/- 14.15, N = 6; Min 1406 / Max 1504)
Core i5 12400: 1415 (SE +/- 15.72, N = 3; Min 1384 / Max 1435)
1. (CXX) g++ options: -flto -pthread

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test covering both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 3.2 - Preset: Thorough (Seconds, Fewer Is Better)
Core i9 12900K: 6.9266 (SE +/- 0.0080, N = 6; Min 6.91 / Max 6.95)
Core i5 12400: 7.0855 (SE +/- 0.0015, N = 5; Min 7.08 / Max 7.09)
Core i5 12600K: 8.9876 (SE +/- 0.0052, N = 5; Min 8.98 / Max 9.01)
1. (CXX) g++ options: -O3 -flto -pthread

Hugin

Hugin is an open-source, cross-platform panorama photo stitcher software package. This test profile times how long it takes to run the assistant and panorama photo stitching on a set of images. Learn more via the OpenBenchmarking.org test page.

Hugin - Panorama Photo Assistant + Stitching Time (Seconds, Fewer Is Better)
Core i9 12900K: 31.21 (SE +/- 0.04, N = 3; Min 31.14 / Max 31.27)
Core i5 12600K: 34.25 (SE +/- 0.06, N = 3; Min 34.13 / Max 34.32)
Core i5 12400: 40.04 (SE +/- 0.18, N = 3; Min 39.76 / Max 40.39)

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 21.10 - Test: OFDM_Test (Samples / Second, More Is Better)
Core i9 12900K: 195700000 (SE +/- 907377.17, N = 3; Min 193900000 / Max 196800000)
Core i5 12600K: 184100000 (SE +/- 1201388.09, N = 3; Min 181700000 / Max 185400000)
Core i5 12400: 152760000 (SE +/- 2061386.50, N = 15; Min 140000000 / Max 164900000)
1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -rdynamic -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lpthread -lm -lfftw3f

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.

LibRaw 0.20 - Post-Processing Benchmark (Mpix/sec, More Is Better)
Core i9 12900K: 57.73 (SE +/- 0.44, N = 3; Min 56.85 / Max 58.18)
Core i5 12600K: 52.79 (SE +/- 0.07, N = 3; Min 52.69 / Max 52.92)
Core i5 12400: 45.27 (SE +/- 0.05, N = 3; Min 45.19 / Max 45.35)
1. (CXX) g++ options: -O2 -fopenmp -ljpeg -lz -lm

rav1e

Xiph rav1e is a Rust-written AV1 video encoder that claims to be the fastest and safest AV1 encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.5 - Speed: 10 (Frames Per Second, More Is Better)
Core i9 12900K: 12.024 (SE +/- 0.075, N = 15; Min 11.19 / Max 12.47)
Core i5 12600K: 10.361 (SE +/- 0.083, N = 15; Min 9.74 / Max 10.72)
Core i5 12400: 9.496 (SE +/- 0.078, N = 9; Min 9.01 / Max 9.72)

DaCapo Benchmark

This test runs the DaCapo Benchmarks, which are written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 - Java Test: Jython (msec, Fewer Is Better)
Core i9 12900K: 1900 (SE +/- 14.73, N = 10; Min 1842 / Max 2001)
Core i5 12600K: 2184 (SE +/- 18.22, N = 20; Min 2017 / Max 2321)
Core i5 12400: 2344 (SE +/- 12.86, N = 9; Min 2307 / Max 2437)

Node.js V8 Web Tooling Benchmark

Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, such as Babel, TypeScript, and Babylon. This test profile can test the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.

Node.js V8 Web Tooling Benchmark (runs/s, More Is Better)
Core i9 12900K: 21.61 (SE +/- 0.09, N = 3; Min 21.45 / Max 21.76)
Core i5 12600K: 19.90 (SE +/- 0.02, N = 3; Min 19.86 / Max 19.93)
Core i5 12400: 17.64 (SE +/- 0.14, N = 3; Min 17.44 / Max 17.91)

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.2 - Model: mobilenet-v1-1.0 (ms, Fewer Is Better)
Core i5 12600K: 2.365 (SE +/- 0.007, N = 3; Min 2.35 / Max 2.38; MIN: 2.34 / MAX: 2.6)
Core i5 12400: 2.830 (SE +/- 0.001, N = 3; Min 2.83 / Max 2.83; MIN: 2.81 / MAX: 3.08)
Core i9 12900K: 2.887 (SE +/- 0.017, N = 3; Min 2.85 / Max 2.9; MIN: 2.84 / MAX: 7.74)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 19 - Compression Speed (MB/s, More Is Better)
Core i9 12900K: 43.7 (SE +/- 0.52, N = 4; Min 42.7 / Max 45)
Core i5 12600K: 35.9 (SE +/- 0.12, N = 3; Min 35.7 / Max 36.1)
Core i5 12400: 35.8 (SE +/- 0.17, N = 3; Min 35.5 / Max 36.1)
1. (CC) gcc options: -O3 -pthread -lz -llzma

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.2 - Model: MobileNetV2_224 (ms, Fewer Is Better)
Core i5 12400: 2.020 (SE +/- 0.004, N = 3; Min 2.01 / Max 2.03; MIN: 1.99 / MAX: 9.3)
Core i5 12600K: 2.259 (SE +/- 0.029, N = 3; Min 2.23 / Max 2.32; MIN: 2.21 / MAX: 3.04)
Core i9 12900K: 2.442 (SE +/- 0.013, N = 3; Min 2.42 / Max 2.46; MIN: 2.39 / MAX: 9.39)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 21.10 - Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM (UE Mb/s, More Is Better)
Core i9 12900K: 206.3 (SE +/- 0.97, N = 5; Min 202.8 / Max 208.2)
Core i5 12600K: 194.5 (SE +/- 0.21, N = 4; Min 194 / Max 194.9)
Core i5 12400: 172.4 (SE +/- 0.73, N = 4; Min 170.4 / Max 173.9)
1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -rdynamic -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lpthread -lm -lfftw3f

srsRAN 21.10 - Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM (eNb Mb/s, More Is Better)
Core i9 12900K: 201.8 (SE +/- 0.25, N = 4; Min 201.4 / Max 202.4)
Core i5 12600K: 188.6 (SE +/- 0.94, N = 4; Min 187 / Max 190.8)
Core i5 12400: 169.0 (SE +/- 0.26, N = 3; Min 168.6 / Max 169.5)
1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -rdynamic -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lpthread -lm -lfftw3f

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

JPEG XL libjxl 0.6.1 - Input: JPEG - Encode Speed: 8 (MP/s, More Is Better)
Core i9 12900K: 42.66 (SE +/- 0.27, N = 6; Min 42.05 / Max 43.8)
Core i5 12600K: 41.15 (SE +/- 0.23, N = 6; Min 40.3 / Max 41.83)
Core i5 12400: 35.77 (SE +/- 0.11, N = 6; Min 35.47 / Max 36.09)
1. (CXX) g++ options: -funwind-tables -O3 -O2 -pthread -fPIE -pie

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 21.10 - Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM (eNb Mb/s, More Is Better)
Core i9 12900K: 621.1 (SE +/- 3.11, N = 3; Min 617.3 / Max 627.3)
Core i5 12600K: 587.8 (SE +/- 3.85, N = 3; Min 581.5 / Max 594.8)
Core i5 12400: 522.5 (SE +/- 0.78, N = 3; Min 521.4 / Max 524)
1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -rdynamic -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lpthread -lm -lfftw3f

Crafty

This is a performance test of Crafty, an advanced open-source chess engine. Learn more via the OpenBenchmarking.org test page.

Crafty 25.2 - Elapsed Time (Nodes Per Second, More Is Better)
Core i9 12900K: 12633117 (SE +/- 29295.13, N = 4; Min 12547076 / Max 12678359)
Core i5 12600K: 11890651 (SE +/- 10715.53, N = 4; Min 11864010 / Max 11914488)
Core i5 12400: 10631427 (SE +/- 14452.06, N = 3; Min 10608266 / Max 10657983)
1. (CC) gcc options: -pthread -lstdc++ -fprofile-use -lm

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 1.0 - Throughput Test: PartialTweets (GB/s, More Is Better)
Core i9 12900K: 6.61 (SE +/- 0.00, N = 3; Min 6.6 / Max 6.61)
Core i5 12600K: 6.23 (SE +/- 0.01, N = 3; Min 6.22 / Max 6.24)
Core i5 12400: 5.57 (SE +/- 0.01, N = 3; Min 5.54 / Max 5.59)
1. (CXX) g++ options: -O3

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 21.10 - Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM (eNb Mb/s, More Is Better)
Core i9 12900K: 568.1 (SE +/- 4.35, N = 5; Min 553.9 / Max 578.4)
Core i5 12600K: 538.4 (SE +/- 2.08, N = 4; Min 533.4 / Max 541.8)
Core i5 12400: 478.8 (SE +/- 3.21, N = 4; Min 472.5 / Max 487.7)
1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -rdynamic -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lpthread -lm -lfftw3f

srsRAN 21.10 - Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM (UE Mb/s, More Is Better)
Core i9 12900K: 161.9 (SE +/- 0.18, N = 3; Min 161.6 / Max 162.2)
Core i5 12600K: 152.3 (SE +/- 0.48, N = 4; Min 151.2 / Max 153.5)
Core i5 12400: 136.5 (SE +/- 0.17, N = 3; Min 136.2 / Max 136.8)
1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -rdynamic -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lpthread -lm -lfftw3f

WireGuard + Linux Networking Stack Stress Test

This is a benchmark of the WireGuard secure VPN tunnel and Linux networking stack stress test. The test runs on the local host but does require root permissions to run. It works by creating three namespaces: ns0 has a loopback device, while ns1 and ns2 each have WireGuard devices. Those two WireGuard devices send traffic through the loopback device of ns0. The end result is that the test winds up exercising encryption and decryption at the same time -- a pretty CPU- and scheduler-heavy workload. A rough sketch of that topology is shown below. Learn more via the OpenBenchmarking.org test page.
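Below is a hedged sketch of that namespace topology built with iproute2 and wireguard-tools from Python; the device names, addresses, ports, and key files are illustrative assumptions, and the actual test profile drives a more elaborate kernel self-test script rather than this minimal setup.

import subprocess

def sh(cmd: str) -> str:
    """Run a shell command (requires root plus the iproute2 and wireguard-tools packages)."""
    return subprocess.run(cmd, shell=True, check=True,
                          capture_output=True, text=True).stdout.strip()

# Three namespaces: ns0 carries the loopback "wire", ns1/ns2 hold the tunnel endpoints.
for ns in ("ns0", "ns1", "ns2"):
    sh(f"ip netns add {ns}")
sh("ip -n ns0 link set lo up")

# Create each WireGuard device inside ns0 so its UDP socket stays bound there,
# then move the netdev itself into ns1/ns2 (illustrative names and addresses).
peers = {}
for ns, port, addr in (("ns1", 10000, "192.168.241.1"), ("ns2", 10001, "192.168.241.2")):
    dev = f"wg-{ns}"
    sh(f"ip -n ns0 link add {dev} type wireguard")
    sh(f"ip -n ns0 link set {dev} netns {ns}")
    priv = sh("wg genkey")
    pub = subprocess.run("wg pubkey", shell=True, input=priv, text=True,
                         capture_output=True, check=True).stdout.strip()
    peers[ns] = {"dev": dev, "port": port, "addr": addr, "priv": priv, "pub": pub}

# Point the two peers at each other over 127.0.0.1 inside ns0.
for ns, other in (("ns1", "ns2"), ("ns2", "ns1")):
    me, peer = peers[ns], peers[other]
    keyfile = f"/tmp/{ns}.key"
    with open(keyfile, "w") as f:
        f.write(me["priv"] + "\n")
    sh(f"ip netns exec {ns} wg set {me['dev']} private-key {keyfile} "
       f"listen-port {me['port']} peer {peer['pub']} "
       f"allowed-ips {peer['addr']}/32 endpoint 127.0.0.1:{peer['port']}")
    sh(f"ip -n {ns} addr add {me['addr']}/24 dev {me['dev']}")
    sh(f"ip -n {ns} link set {me['dev']} up")

# Traffic between ns1 and ns2 is now encrypted, looped through ns0, and decrypted.
print(sh("ip netns exec ns1 ping -c 1 192.168.241.2"))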

WireGuard + Linux Networking Stack Stress Test (Seconds, Fewer Is Better)
Core i9 12900K: 109.51 (SE +/- 1.44, N = 3; Min 106.71 / Max 111.48)
Core i5 12600K: 118.61 (SE +/- 0.57, N = 3; Min 117.82 / Max 119.7)
Core i5 12400: 129.84 (SE +/- 0.40, N = 3; Min 129.07 / Max 130.4)

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 1.0 - Throughput Test: LargeRandom (GB/s, More Is Better)
Core i9 12900K: 1.73 (SE +/- 0.00, N = 3; Min 1.72 / Max 1.73)
Core i5 12600K: 1.62 (SE +/- 0.00, N = 3; Min 1.62 / Max 1.62)
Core i5 12400: 1.46 (SE +/- 0.00, N = 3; Min 1.45 / Max 1.46)
1. (CXX) g++ options: -O3

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 21.10 - Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM (UE Mb/s, More Is Better)
Core i9 12900K: 225.2 (SE +/- 0.99, N = 4; Min 222.6 / Max 227.3)
Core i5 12600K: 213.7 (SE +/- 0.59, N = 4; Min 212 / Max 214.8)
Core i5 12400: 190.2 (SE +/- 0.55, N = 4; Min 189.1 / Max 191.2)
1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -rdynamic -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lpthread -lm -lfftw3f

libjpeg-turbo tjbench

tjbench is a JPEG decompression/compression benchmark that is part of libjpeg-turbo, a JPEG image codec library optimized for SIMD instructions on modern CPU architectures. Learn more via the OpenBenchmarking.org test page.

libjpeg-turbo tjbench 2.1.0 - Test: Decompression Throughput (Megapixels/sec, More Is Better)
Core i9 12900K: 277.50 (SE +/- 0.84, N = 3; Min 275.85 / Max 278.54)
Core i5 12600K: 261.90 (SE +/- 0.10, N = 3; Min 261.74 / Max 262.09)
Core i5 12400: 234.39 (SE +/- 0.13, N = 3; Min 234.26 / Max 234.64)
1. (CC) gcc options: -O3 -rdynamic -lm

SecureMark

SecureMark is an objective, standardized benchmarking framework developed by EEMBC for measuring the efficiency of cryptographic processing solutions. SecureMark-TLS benchmarks Transport Layer Security performance with a focus on IoT/edge computing. Learn more via the OpenBenchmarking.org test page.
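As a loose illustration of the kind of primitive such a TLS-focused benchmark exercises (not the SecureMark harness itself), here is a throughput toy for AES-128-GCM record encryption using the third-party Python cryptography package; the record size and iteration count are arbitrary assumptions.

import time

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aead = AESGCM(key)
# Nonce reuse is acceptable only because this is a throughput toy, never in real TLS.
nonce = b"\x00" * 12
record = b"\xaa" * 1024  # hypothetical 1 KiB application record

runs = 50_000
start = time.perf_counter()
for _ in range(runs):
    aead.encrypt(nonce, record, None)
elapsed = time.perf_counter() - start
print(f"{runs / elapsed:,.0f} records encrypted per second")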

SecureMark 1.0.4 - Benchmark: SecureMark-TLS (marks, More Is Better)
Core i9 12900K: 389601 (SE +/- 508.54, N = 3; Min 388661.09 / Max 390407.81)
Core i5 12600K: 367882 (SE +/- 427.66, N = 3; Min 367440.81 / Max 368737.53)
Core i5 12400: 329402 (SE +/- 204.22, N = 3; Min 328999.41 / Max 329661.75)
1. (CC) gcc options: -pedantic -O3

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: MobileNet v2 (ms, Fewer Is Better)
Core i9 12900K: 178.68 (SE +/- 0.36, N = 4; Min 177.83 / Max 179.57; MIN: 168.53 / MAX: 247.44)
Core i5 12600K: 193.89 (SE +/- 1.14, N = 4; Min 191.14 / Max 195.87; MIN: 180.78 / MAX: 254.16)
Core i5 12400: 211.29 (SE +/- 0.27, N = 4; Min 210.56 / Max 211.88; MIN: 205.1 / MAX: 218.73)
1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Highest Compression (Encode Time - Seconds, Fewer Is Better)
Core i9 12900K: 5.235 (SE +/- 0.007, N = 7; Min 5.23 / Max 5.28)
Core i5 12600K: 5.539 (SE +/- 0.002, N = 7; Min 5.53 / Max 5.55)
Core i5 12400: 6.190 (SE +/- 0.003, N = 6; Min 6.18 / Max 6.2)
1. (CC) gcc options: -fvisibility=hidden -O2 -lm -ljpeg -lpng16 -ltiff

PyBench

This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate as to Python's average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.
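As a hedged, minimal sketch of the same idea (timing a small pure-Python kernel over several rounds and averaging), consider the following; the loop sizes and round count are illustrative and not PyBench's own parameters.

import time

def nested_for_loops() -> int:
    # Tiny workload in the spirit of PyBench's NestedForLoops test.
    total = 0
    for i in range(100):
        for j in range(100):
            total += i * j
    return total

rounds = 20
times = []
for _ in range(rounds):
    start = time.perf_counter()
    for _ in range(1000):
        nested_for_loops()
    times.append(time.perf_counter() - start)

print(f"average per round: {sum(times) / len(times) * 1000:.1f} ms")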

PyBench 2018-02-16 - Total For Average Test Times (Milliseconds, Fewer Is Better)
Core i9 12900K: 499 (SE +/- 0.87, N = 4; Min 497 / Max 500)
Core i5 12600K: 528 (SE +/- 0.50, N = 4; Min 527 / Max 529)
Core i5 12400: 590 (SE +/- 0.85, N = 4; Min 588 / Max 592)

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: Kraken - Browser: Google Chrome (ms, Fewer Is Better)
Core i9 12900K: 429.1 (SE +/- 1.51, N = 3; Min 427 / Max 432)
Core i5 12600K: 456.6 (SE +/- 0.43, N = 3; Min 456 / Max 457.4)
Core i5 12400: 507.1 (SE +/- 1.75, N = 3; Min 503.6 / Max 509)
1. chrome 96.0.4664.110

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: SqueezeNet v1.1 (ms, Fewer Is Better)
Core i9 12900K: 153.15 (SE +/- 0.51, N = 5; Min 152.45 / Max 155.14; MIN: 149.14 / MAX: 158.84)
Core i5 12600K: 161.01 (SE +/- 0.18, N = 5; Min 160.65 / Max 161.63; MIN: 157.47 / MAX: 164.46)
Core i5 12400: 180.98 (SE +/- 0.13, N = 4; Min 180.64 / Max 181.2; MIN: 176.1 / MAX: 183.84)
1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet," focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

Etcpak 0.7 - Configuration: ETC2 (Mpx/s, More Is Better)
Core i9 12900K: 238.43 (SE +/- 0.03, N = 3; Min 238.39 / Max 238.47)
Core i5 12600K: 225.21 (SE +/- 0.01, N = 3; Min 225.19 / Max 225.22)
Core i5 12400: 201.76 (SE +/- 0.02, N = 3; Min 201.73 / Max 201.81)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 9 - Compression Speed (MB/s, More Is Better)
Core i9 12900K: 76.47 (SE +/- 0.14, N = 3; Min 76.18 / Max 76.61)
Core i5 12600K: 72.30 (SE +/- 0.03, N = 3; Min 72.25 / Max 72.33)
Core i5 12400: 64.74 (SE +/- 0.00, N = 3; Min 64.74 / Max 64.75)
1. (CC) gcc options: -O3

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: Octane - Browser: Google Chrome (Geometric Mean, More Is Better)
Core i9 12900K: 99096 (SE +/- 645.28, N = 3; Min 97839 / Max 99977)
Core i5 12600K: 94330 (SE +/- 162.20, N = 3; Min 94013 / Max 94549)
Core i5 12400: 83897 (SE +/- 274.81, N = 3; Min 83375 / Max 84307)
1. chrome 96.0.4664.110

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: SqueezeNet v2 (ms, Fewer Is Better)
Core i9 12900K: 40.81 (SE +/- 0.10, N = 9; Min 40.58 / Max 41.57; MIN: 39.54 / MAX: 42.45)
Core i5 12600K: 43.17 (SE +/- 0.03, N = 9; Min 42.98 / Max 43.29; MIN: 42.1 / MAX: 44.39)
Core i5 12400: 48.18 (SE +/- 0.03, N = 9; Min 48.08 / Max 48.33; MIN: 46.96 / MAX: 49.11)
1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

Chia Blockchain VDF

Chia is a blockchain and smart transaction platform based on proofs of space and time rather than the proofs of work used by other cryptocurrencies. This test profile benchmarks CPU performance for the Chia Verifiable Delay Function (Proof of Time) using the Chia VDF benchmark. Learn more via the OpenBenchmarking.org test page.
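Chia's VDF is evaluated by repeated squaring in a class group of unknown order. As a loose illustration of the "squarings per second" idea only, here is repeated modular squaring over a fixed odd modulus; the modulus, start value, and iteration count are arbitrary, and the real benchmark's class-group arithmetic is substantially different.

import time

# Illustrative parameters only: a fixed odd 2048-bit modulus and a small starting value.
modulus = (1 << 2048) - 159
x = 3
iterations = 200_000

start = time.perf_counter()
for _ in range(iterations):
    x = (x * x) % modulus   # one "squaring" step of a VDF-like computation
elapsed = time.perf_counter() - start

print(f"{iterations / elapsed:,.0f} squarings per second")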

Chia Blockchain VDF 1.0.1 - Test: Square Plain C++ (IPS, More Is Better)
Core i9 12900K: 225167 (SE +/- 648.93, N = 3; Min 224200 / Max 226400)
Core i5 12600K: 213133 (SE +/- 33.33, N = 3; Min 213100 / Max 213200)
Core i5 12400: 190733 (SE +/- 120.19, N = 3; Min 190500 / Max 190900)
1. (CXX) g++ options: -flto -no-pie -lgmpxx -lgmp -lboost_system -pthread

Unpacking Firefox

This simple test profile measures how long it takes to extract the .tar.xz source package of the Mozilla Firefox Web Browser. Learn more via the OpenBenchmarking.org test page.
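A minimal sketch of the same measurement with Python's tarfile module, assuming the Firefox source tarball is present locally:

import tarfile
import time

archive = "firefox-84.0.source.tar.xz"  # path assumed to exist locally

start = time.perf_counter()
with tarfile.open(archive, mode="r:xz") as tar:
    tar.extractall(path="firefox-src")
print(f"extracted in {time.perf_counter() - start:.2f} s")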

Unpacking Firefox 84.0 - Extracting: firefox-84.0.source.tar.xz (Seconds, Fewer Is Better)
Core i9 12900K: 13.24 (SE +/- 0.05, N = 4; Min 13.17 / Max 13.39)
Core i5 12600K: 13.86 (SE +/- 0.04, N = 4; Min 13.8 / Max 13.97)
Core i5 12400: 15.61 (SE +/- 0.08, N = 4; Min 15.49 / Max 15.83)

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 21.10 - Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM (UE Mb/s, More Is Better)
Core i9 12900K: 92.4 (SE +/- 0.05, N = 4; Min 92.3 / Max 92.5)
Core i5 12600K: 86.5 (SE +/- 0.19, N = 4; Min 85.9 / Max 86.7)
Core i5 12400: 78.4 (SE +/- 0.23, N = 3; Min 78 / Max 78.8)
1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -rdynamic -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lpthread -lm -lfftw3f

srsRAN 21.10 - Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM (UE Mb/s, More Is Better)
Core i9 12900K: 182.3 (SE +/- 0.71, N = 3; Min 180.9 / Max 183.3)
Core i5 12600K: 173.5 (SE +/- 0.45, N = 3; Min 172.6 / Max 174.1)
Core i5 12400: 154.7 (SE +/- 0.50, N = 3; Min 153.7 / Max 155.4)
1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -rdynamic -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lpthread -lm -lfftw3f

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 - Compression Level: 3 - Compression Speed (MB/s, More Is Better)
Core i9 12900K: 77.95 (SE +/- 0.46, N = 3; Min 77.03 / Max 78.47)
Core i5 12600K: 74.01 (SE +/- 0.04, N = 3; Min 73.94 / Max 74.07)
Core i5 12400: 66.28 (SE +/- 0.02, N = 3; Min 66.25 / Max 66.32)
1. (CC) gcc options: -O3

Cython Benchmark

Cython provides a superset of Python that is geared to deliver C-like levels of performance. This test profile makes use of Cython's bundled benchmark tests and runs an N-Queens sample test as a simple benchmark to the system's Cython performance. Learn more via the OpenBenchmarking.org test page.
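For context, the benchmark is a backtracking N-Queens solution counter; an ordinary Python version of that kind of kernel (not Cython's exact bundled code) looks roughly like this:

def n_queens(n: int) -> int:
    """Count the solutions to the N-Queens problem by backtracking with bitmasks."""
    def place(row: int, cols: int, diag1: int, diag2: int) -> int:
        if row == n:
            return 1
        count = 0
        for col in range(n):
            c, d1, d2 = 1 << col, 1 << (row + col), 1 << (row - col + n)
            if not (cols & c or diag1 & d1 or diag2 & d2):
                count += place(row + 1, cols | c, diag1 | d1, diag2 | d2)
        return count
    return place(0, 0, 0, 0)

print(n_queens(8))  # 92 solutions on a standard chessboard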

Cython Benchmark 0.29.21 - Test: N-Queens (Seconds, Fewer Is Better)
Core i9 12900K: 14.07 (SE +/- 0.02, N = 4; Min 14.05 / Max 14.12)
Core i5 12600K: 14.90 (SE +/- 0.01, N = 4; Min 14.88 / Max 14.92)
Core i5 12400: 16.54 (SE +/- 0.01, N = 4; Min 16.53 / Max 16.56)

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.2 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better)
Core i5 12600K: 51.20 (SE +/- 0.13, N = 4; Min 50.83 / Max 51.41)
Core i9 12900K: 49.61 (SE +/- 0.37, N = 4; Min 48.67 / Max 50.45)
Core i5 12400: 43.60 (SE +/- 0.02, N = 4; Min 43.55 / Max 43.62)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

PHPBench 0.8.1 - PHP Benchmark Suite (Score, More Is Better)
Core i9 12900K: 1422068 (SE +/- 5465.24, N = 4; Min 1408906 / Max 1434534)
Core i5 12600K: 1345390 (SE +/- 2958.80, N = 4; Min 1338515 / Max 1350528)
Core i5 12400: 1211698 (SE +/- 1357.44, N = 4; Min 1208446 / Max 1214605)

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless (Encode Time - Seconds, Fewer Is Better)
Core i9 12900K: 12.52 (SE +/- 0.00, N = 4; Min 12.51 / Max 12.52)
Core i5 12600K: 13.18 (SE +/- 0.01, N = 4; Min 13.17 / Max 13.19)
Core i5 12400: 14.66 (SE +/- 0.02, N = 4; Min 14.6 / Max 14.7)
1. (CC) gcc options: -fvisibility=hidden -O2 -lm -ljpeg -lpng16 -ltiff

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 21.10 - Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM (eNb Mb/s, More Is Better)
Core i9 12900K: 551.0 (SE +/- 1.36, N = 3; Min 548.5 / Max 553.2)
Core i5 12600K: 523.5 (SE +/- 5.75, N = 4; Min 506.3 / Max 530.6)
Core i5 12400: 470.9 (SE +/- 1.07, N = 3; Min 469 / Max 472.7)
1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -rdynamic -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lpthread -lm -lfftw3f

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: WASM imageConvolute - Browser: Google Chrome (ms, Fewer Is Better)
Core i9 12900K: 17.97 (SE +/- 0.05, N = 6; Min 17.79 / Max 18.18)
Core i5 12600K: 18.97 (SE +/- 0.09, N = 6; Min 18.81 / Max 19.4)
Core i5 12400: 21.01 (SE +/- 0.04, N = 6; Min 20.89 / Max 21.13)
1. chrome 96.0.4664.110

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20210720 - Target: CPU - Model: resnet50 (ms, Fewer Is Better)
Core i5 12400: 16.74 (SE +/- 0.05, N = 3; Min 16.69 / Max 16.84; MIN: 16.59 / MAX: 17.05)
Core i9 12900K: 18.23 (SE +/- 0.04, N = 15; Min 17.94 / Max 18.58; MIN: 17.75 / MAX: 19.32)
Core i5 12600K: 19.54 (SE +/- 0.14, N = 15; Min 17.61 / Max 19.87; MIN: 17.46 / MAX: 24.68)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN 21.10 - Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM (eNb Mb/s, More Is Better)
Core i9 12900K: 623.1 (SE +/- 3.84, N = 4; Min 615.6 / Max 631.6)
Core i5 12600K: 593.2 (SE +/- 1.62, N = 4; Min 588.7 / Max 596.3)
Core i5 12400: 534.0 (SE +/- 1.06, N = 4; Min 531.7 / Max 536.2)
1. (CXX) g++ options: -std=c++11 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -rdynamic -lmbedcrypto -lconfig++ -lsctp -lbladeRF -lpthread -lm -lfftw3f

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 1.0 - Throughput Test: DistinctUserID (GB/s, More Is Better)
Core i9 12900K: 7.64 (SE +/- 0.06, N = 11; Min 7.34 / Max 7.77)
Core i5 12600K: 7.30 (SE +/- 0.02, N = 3; Min 7.27 / Max 7.32)
Core i5 12400: 6.56 (SE +/- 0.00, N = 3; Min 6.56 / Max 6.56)
1. (CXX) g++ options: -O3

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.
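For a sense of what the Numba backend does, here is a hedged sketch of a Numba-jitted element-wise kernel over arrays of this project size; the polynomial and its coefficients are stand-ins, and the suite's real equation-of-state and isoneutral-mixing kernels are considerably more involved.

import time

import numpy as np
from numba import njit

@njit(cache=True)
def toy_equation_of_state(temp, salt, press):
    # Stand-in polynomial kernel, not the suite's real equation-of-state implementation.
    out = np.empty_like(temp)
    for i in range(temp.size):
        out[i] = 999.8 + 0.8 * salt[i] - 0.2 * temp[i] + 0.05 * temp[i] ** 2 + 4e-3 * press[i]
    return out

n = 4_194_304  # matches the "Project Size" used by these results
rng = np.random.default_rng(0)
temp, salt, press = (rng.random(n) for _ in range(3))

toy_equation_of_state(temp, salt, press)   # first call includes JIT compilation
start = time.perf_counter()
toy_equation_of_state(temp, salt, press)
print(f"{time.perf_counter() - start:.3f} s per pass")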

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numba - Project Size: 4194304 - Benchmark: Isoneutral Mixing (Seconds, Fewer Is Better)
Core i9 12900K: 0.797 (SE +/- 0.001, N = 3; Min 0.8 / Max 0.8)
Core i5 12600K: 0.829 (SE +/- 0.000, N = 3; Min 0.83 / Max 0.83)
Core i5 12400: 0.928 (SE +/- 0.001, N = 3; Min 0.93 / Max 0.93)

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

Selenium - Benchmark: Jetstream 2 - Browser: Google Chrome (Score, More Is Better)
Core i9 12900K: 261.80 (SE +/- 2.27, N = 3; Min 257.42 / Max 265.04)
Core i5 12600K: 250.65 (SE +/- 1.37, N = 3; Min 248.93 / Max 253.36)
Core i5 12400: 225.26 (SE +/- 1.07, N = 2; Min 224.19 / Max 226.33)
1. chrome 96.0.4664.110

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: CPU - Backend: Numba - Project Size: 4194304 - Benchmark: Equation of StateCore i9 12900KCore i5 12600KCore i5 124000.03580.07160.10740.14320.179SE +/- 0.000, N = 3SE +/- 0.000, N = 3SE +/- 0.000, N = 30.1370.1440.159
OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: CPU - Backend: Numba - Project Size: 4194304 - Benchmark: Equation of StateCore i9 12900KCore i5 12600KCore i5 1240012345Min: 0.14 / Avg: 0.14 / Max: 0.14Min: 0.14 / Avg: 0.14 / Max: 0.14Min: 0.16 / Avg: 0.16 / Max: 0.16

Unpacking The Linux Kernel

This test measures how long it takes to extract the .tar.xz Linux kernel package. Learn more via the OpenBenchmarking.org test page.
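
Roughly, the test times the equivalent of the following, sketched here with Python's tarfile module (the output directory name is a placeholder):

import tarfile
import time

start = time.perf_counter()
with tarfile.open("linux-4.15.tar.xz", "r:xz") as archive:  # .tar.xz handled by "r:xz"
    archive.extractall("linux-4.15-extracted")              # placeholder destination
print(f"extracted in {time.perf_counter() - start:.3f} s")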

OpenBenchmarking.orgSeconds, Fewer Is BetterUnpacking The Linux Kernellinux-4.15.tar.xzCore i9 12900KCore i5 12600KCore i5 124000.99921.99842.99763.99684.996SE +/- 0.006, N = 8SE +/- 0.007, N = 8SE +/- 0.013, N = 83.8273.9674.441
OpenBenchmarking.orgSeconds, Fewer Is BetterUnpacking The Linux Kernellinux-4.15.tar.xzCore i9 12900KCore i5 12600KCore i5 12400246810Min: 3.81 / Avg: 3.83 / Max: 3.86Min: 3.95 / Avg: 3.97 / Max: 4.01Min: 4.41 / Avg: 4.44 / Max: 4.51

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgGB/s, More Is Bettersimdjson 1.0Throughput Test: KostyaCore i9 12900KCore i5 12600KCore i5 124001.06882.13763.20644.27525.344SE +/- 0.01, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 34.754.564.101. (CXX) g++ options: -O3
OpenBenchmarking.orgGB/s, More Is Bettersimdjson 1.0Throughput Test: KostyaCore i9 12900KCore i5 12600KCore i5 12400246810Min: 4.74 / Avg: 4.75 / Max: 4.76Min: 4.56 / Avg: 4.56 / Max: 4.56Min: 4.1 / Avg: 4.1 / Max: 4.11. (CXX) g++ options: -O3

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRuns Per Minute, More Is BetterSeleniumBenchmark: Speedometer - Browser: Google ChromeCore i9 12900KCore i5 12600KCore i5 1240070140210280350SE +/- 1.76, N = 3SE +/- 2.03, N = 3SE +/- 1.33, N = 33032902621. chrome 96.0.4664.110
OpenBenchmarking.orgRuns Per Minute, More Is BetterSeleniumBenchmark: Speedometer - Browser: Google ChromeCore i9 12900KCore i5 12600KCore i5 1240050100150200250Min: 300 / Avg: 303.33 / Max: 306Min: 286 / Avg: 289.67 / Max: 293Min: 261 / Avg: 262.33 / Max: 2651. chrome 96.0.4664.110

Chia Blockchain VDF

Chia is a blockchain and smart transaction platform based on proofs of space and time rather than the proofs of work used by other cryptocurrencies. This test profile benchmarks CPU performance for the Chia Verifiable Delay Function (Proof of Time) using the Chia VDF benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgIPS, More Is BetterChia Blockchain VDF 1.0.1Test: Square Assembly OptimizedCore i9 12900KCore i5 12600KCore i5 1240050K100K150K200K250KSE +/- 1121.51, N = 3SE +/- 2216.85, N = 3SE +/- 2425.79, N = 32555672455332211671. (CXX) g++ options: -flto -no-pie -lgmpxx -lgmp -lboost_system -pthread
OpenBenchmarking.orgIPS, More Is BetterChia Blockchain VDF 1.0.1Test: Square Assembly OptimizedCore i9 12900KCore i5 12600KCore i5 1240040K80K120K160K200KMin: 253900 / Avg: 255566.67 / Max: 257700Min: 241100 / Avg: 245533.33 / Max: 247800Min: 216900 / Avg: 221166.67 / Max: 2253001. (CXX) g++ options: -flto -no-pie -lgmpxx -lgmp -lboost_system -pthread

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.2Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4KCore i5 12600KCore i9 12900KCore i5 1240020406080100SE +/- 0.15, N = 5SE +/- 0.52, N = 5SE +/- 0.02, N = 576.8876.6566.611. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.2Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4KCore i5 12600KCore i9 12900KCore i5 124001530456075Min: 76.46 / Avg: 76.88 / Max: 77.21Min: 75.62 / Avg: 76.65 / Max: 78.49Min: 66.56 / Avg: 66.61 / Max: 66.651. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program; otherwise, on Windows, it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterGIMP 2.10.24Test: resizeCore i9 12900KCore i5 12600KCore i5 124001.34782.69564.04345.39126.739SE +/- 0.045, N = 8SE +/- 0.048, N = 7SE +/- 0.060, N = 65.2085.4035.990
OpenBenchmarking.orgSeconds, Fewer Is BetterGIMP 2.10.24Test: resizeCore i9 12900KCore i5 12600KCore i5 12400246810Min: 5.09 / Avg: 5.21 / Max: 5.5Min: 5.3 / Avg: 5.4 / Max: 5.65Min: 5.91 / Avg: 5.99 / Max: 6.29

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression than legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL libjxl 0.6.1Input: PNG - Encode Speed: 8Core i9 12900KCore i5 12600KCore i5 124000.24530.49060.73590.98121.2265SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.00, N = 31.091.060.951. (CXX) g++ options: -funwind-tables -O3 -O2 -pthread -fPIE -pie
OpenBenchmarking.orgMP/s, More Is BetterJPEG XL libjxl 0.6.1Input: PNG - Encode Speed: 8Core i9 12900KCore i5 12600KCore i5 12400246810Min: 1.08 / Avg: 1.09 / Max: 1.12Min: 1.05 / Avg: 1.06 / Max: 1.07Min: 0.95 / Avg: 0.95 / Max: 0.951. (CXX) g++ options: -funwind-tables -O3 -O2 -pthread -fPIE -pie

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRuns / Minute, More Is BetterSeleniumBenchmark: StyleBench - Browser: Google ChromeCore i9 12900KCore i5 12600KCore i5 124001428425670SE +/- 0.12, N = 3SE +/- 0.32, N = 3SE +/- 0.15, N = 364.3062.6056.201. chrome 96.0.4664.110
OpenBenchmarking.orgRuns / Minute, More Is BetterSeleniumBenchmark: StyleBench - Browser: Google ChromeCore i9 12900KCore i5 12600KCore i5 124001326395265Min: 64.1 / Avg: 64.3 / Max: 64.5Min: 62.2 / Avg: 62.57 / Max: 63.2Min: 56 / Avg: 56.2 / Max: 56.51. chrome 96.0.4664.110

PlaidML

This test profile uses the PlaidML deep learning framework developed by Intel for offering up various benchmarks. Learn more via the OpenBenchmarking.org test page.
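
As a hedged sketch, a Keras model can be pointed at the PlaidML backend as shown below; this assumes the plaidml-keras package is installed and configured via plaidml-setup, and is not the exact code this test profile runs.

import numpy as np
import plaidml.keras
plaidml.keras.install_backend()  # must happen before importing keras

from keras.applications.resnet50 import ResNet50

model = ResNet50(weights=None)   # weights=None avoids a download for this sketch
batch = np.random.rand(1, 224, 224, 3).astype("float32")
print(model.predict(batch).shape)  # (1, 1000)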

OpenBenchmarking.orgFPS, More Is BetterPlaidMLFP16: No - Mode: Inference - Network: ResNet 50 - Device: CPUCore i9 12900KCore i5 12600KCore i5 124003691215SE +/- 0.03, N = 3SE +/- 0.05, N = 3SE +/- 0.03, N = 39.328.808.16
OpenBenchmarking.orgFPS, More Is BetterPlaidMLFP16: No - Mode: Inference - Network: ResNet 50 - Device: CPUCore i9 12900KCore i5 12600KCore i5 124003691215Min: 9.28 / Avg: 9.32 / Max: 9.37Min: 8.74 / Avg: 8.8 / Max: 8.9Min: 8.11 / Avg: 8.16 / Max: 8.2

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: CPU - Backend: TensorFlow - Project Size: 4194304 - Benchmark: Equation of StateCore i9 12900KCore i5 12600KCore i5 124000.02360.04720.07080.09440.118SE +/- 0.000, N = 3SE +/- 0.000, N = 3SE +/- 0.001, N = 30.0920.0940.105
OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: CPU - Backend: TensorFlow - Project Size: 4194304 - Benchmark: Equation of StateCore i9 12900KCore i5 12600KCore i5 1240012345Min: 0.09 / Avg: 0.09 / Max: 0.09Min: 0.09 / Avg: 0.09 / Max: 0.1Min: 0.1 / Avg: 0.1 / Max: 0.11

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program; otherwise, on Windows, it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterGIMP 2.10.24Test: rotateCore i9 12900KCore i5 12600KCore i5 12400246810SE +/- 0.004, N = 5SE +/- 0.005, N = 5SE +/- 0.017, N = 57.4797.7188.518
OpenBenchmarking.orgSeconds, Fewer Is BetterGIMP 2.10.24Test: rotateCore i9 12900KCore i5 12600KCore i5 124003691215Min: 7.47 / Avg: 7.48 / Max: 7.49Min: 7.7 / Avg: 7.72 / Max: 7.73Min: 8.47 / Avg: 8.52 / Max: 8.56

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.2Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4KCore i5 12600KCore i9 12900KCore i5 124001530456075SE +/- 0.23, N = 5SE +/- 0.32, N = 5SE +/- 0.04, N = 568.4068.3760.361. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.2Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4KCore i5 12600KCore i9 12900KCore i5 124001326395265Min: 67.83 / Avg: 68.4 / Max: 68.86Min: 67.46 / Avg: 68.37 / Max: 69.18Min: 60.26 / Avg: 60.36 / Max: 60.481. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.
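
For reference, the same codec is reachable from Python through the python-lz4 bindings; a minimal sketch follows, while the test itself drives the reference C implementation against an Ubuntu ISO.

import lz4.frame

data = b"phoronix test suite " * 100_000           # stand-in for the sample file
blob = lz4.frame.compress(data, compression_level=9)
assert lz4.frame.decompress(blob) == data
print(len(data), "->", len(blob), "bytes")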

OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 9 - Decompression SpeedCore i5 12600KCore i5 12400Core i9 12900K3K6K9K12K15KSE +/- 18.97, N = 3SE +/- 22.86, N = 3SE +/- 259.08, N = 313475.512982.911991.21. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 9 - Decompression SpeedCore i5 12600KCore i5 12400Core i9 12900K2K4K6K8K10KMin: 13442.1 / Avg: 13475.5 / Max: 13507.8Min: 12943.5 / Avg: 12982.87 / Max: 13022.7Min: 11649.4 / Avg: 11991.23 / Max: 12499.41. (CC) gcc options: -O3

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: CPU - Backend: Aesara - Project Size: 4194304 - Benchmark: Equation of StateCore i9 12900KCore i5 12600KCore i5 124000.04390.08780.13170.17560.2195SE +/- 0.000, N = 3SE +/- 0.000, N = 3SE +/- 0.000, N = 30.1740.1840.195
OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: CPU - Backend: Aesara - Project Size: 4194304 - Benchmark: Equation of StateCore i9 12900KCore i5 12600KCore i5 1240012345Min: 0.17 / Avg: 0.17 / Max: 0.17Min: 0.18 / Avg: 0.18 / Max: 0.18Min: 0.19 / Avg: 0.19 / Max: 0.2

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
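
A hedged sketch using the pyncnn Python bindings follows; the param/bin file names and the "data"/"prob" blob names are assumptions tied to an AlexNet-style model, and the benchmark itself uses the C++ runtime.

import numpy as np
import ncnn

net = ncnn.Net()
net.load_param("alexnet.param")  # hypothetical model files
net.load_model("alexnet.bin")

ex = net.create_extractor()
ex.input("data", ncnn.Mat(np.random.rand(3, 227, 227).astype(np.float32)))
ret, out = ex.extract("prob")    # returns (status, ncnn.Mat) in the Python binding
print(ret)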

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: alexnetCore i5 12400Core i9 12900KCore i5 12600K3691215SE +/- 0.01, N = 3SE +/- 0.04, N = 15SE +/- 0.04, N = 158.218.979.20MIN: 8.14 / MAX: 9.36MIN: 8.76 / MAX: 15.98MIN: 9.11 / MAX: 69.461. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: alexnetCore i5 12400Core i9 12900KCore i5 12600K3691215Min: 8.2 / Avg: 8.21 / Max: 8.22Min: 8.81 / Avg: 8.97 / Max: 9.16Min: 9.14 / Avg: 9.2 / Max: 9.831. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 3 - Decompression SpeedCore i5 12600KCore i5 12400Core i9 12900K3K6K9K12K15KSE +/- 15.80, N = 3SE +/- 16.48, N = 3SE +/- 275.85, N = 313431.312950.912007.71. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 3 - Decompression SpeedCore i5 12600KCore i5 12400Core i9 12900K2K4K6K8K10KMin: 13403.3 / Avg: 13431.27 / Max: 13458Min: 12918.6 / Avg: 12950.87 / Max: 12972.8Min: 11648.8 / Avg: 12007.67 / Max: 125501. (CC) gcc options: -O3

GNU Octave Benchmark

This test profile measures how long it takes to complete several reference GNU Octave files via octave-benchmark. GNU Octave is used for numerical computations and is an open-source alternative to MATLAB. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterGNU Octave Benchmark 6.2.0Core i9 12900KCore i5 12600KCore i5 124001.22672.45343.68014.90686.1335SE +/- 0.024, N = 7SE +/- 0.020, N = 7SE +/- 0.017, N = 74.8954.9325.452
OpenBenchmarking.orgSeconds, Fewer Is BetterGNU Octave Benchmark 6.2.0Core i9 12900KCore i5 12600KCore i5 12400246810Min: 4.8 / Avg: 4.9 / Max: 4.99Min: 4.9 / Avg: 4.93 / Max: 5.04Min: 5.4 / Avg: 5.45 / Max: 5.53

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program; otherwise, on Windows, it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterGIMP 2.10.24Test: auto-levelsCore i9 12900KCore i5 12600KCore i5 12400246810SE +/- 0.018, N = 5SE +/- 0.029, N = 5SE +/- 0.016, N = 58.0078.2368.902
OpenBenchmarking.orgSeconds, Fewer Is BetterGIMP 2.10.24Test: auto-levelsCore i9 12900KCore i5 12600KCore i5 124003691215Min: 7.95 / Avg: 8.01 / Max: 8.05Min: 8.19 / Avg: 8.24 / Max: 8.35Min: 8.85 / Avg: 8.9 / Max: 8.94

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
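
A minimal sketch with the python zstandard bindings, mirroring the level-19 configuration measured below; the benchmark itself compresses a FreeBSD disk image with the reference implementation.

import zstandard

data = b"freebsd memstick image " * 100_000        # stand-in for the sample file
blob = zstandard.ZstdCompressor(level=19).compress(data)
assert zstandard.ZstdDecompressor().decompress(blob) == data
print(len(data), "->", len(blob), "bytes")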

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 19 - Decompression SpeedCore i9 12900KCore i5 12600KCore i5 1240010002000300040005000SE +/- 2.83, N = 4SE +/- 3.82, N = 3SE +/- 3.32, N = 34550.24483.24096.01. (CC) gcc options: -O3 -pthread -lz -llzma
OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 19 - Decompression SpeedCore i9 12900KCore i5 12600KCore i5 124008001600240032004000Min: 4543.8 / Avg: 4550.18 / Max: 4555.1Min: 4478.8 / Avg: 4483.2 / Max: 4490.8Min: 4089.4 / Avg: 4096.03 / Max: 4099.61. (CC) gcc options: -O3 -pthread -lz -llzma

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: CPU - Backend: PyTorch - Project Size: 4194304 - Benchmark: Isoneutral MixingCore i5 12600KCore i9 12900KCore i5 124000.29790.59580.89371.19161.4895SE +/- 0.007, N = 3SE +/- 0.007, N = 15SE +/- 0.008, N = 31.1931.2171.324
OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: CPU - Backend: PyTorch - Project Size: 4194304 - Benchmark: Isoneutral MixingCore i5 12600KCore i9 12900KCore i5 12400246810Min: 1.18 / Avg: 1.19 / Max: 1.21Min: 1.19 / Avg: 1.22 / Max: 1.31Min: 1.31 / Avg: 1.32 / Max: 1.33

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 19, Long Mode - Decompression SpeedCore i9 12900KCore i5 12600KCore i5 1240010002000300040005000SE +/- 2.88, N = 3SE +/- 3.15, N = 3SE +/- 1.52, N = 34679.84604.24218.01. (CC) gcc options: -O3 -pthread -lz -llzma
OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 19, Long Mode - Decompression SpeedCore i9 12900KCore i5 12600KCore i5 124008001600240032004000Min: 4676.1 / Avg: 4679.83 / Max: 4685.5Min: 4598.1 / Avg: 4604.17 / Max: 4608.7Min: 4216.2 / Avg: 4217.97 / Max: 42211. (CC) gcc options: -O3 -pthread -lz -llzma

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: CPU - Backend: Aesara - Project Size: 4194304 - Benchmark: Isoneutral MixingCore i5 12600KCore i9 12900KCore i5 124000.30110.60220.90331.20441.5055SE +/- 0.001, N = 3SE +/- 0.002, N = 3SE +/- 0.013, N = 151.2091.2251.338
OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: CPU - Backend: Aesara - Project Size: 4194304 - Benchmark: Isoneutral MixingCore i5 12600KCore i9 12900KCore i5 12400246810Min: 1.21 / Avg: 1.21 / Max: 1.21Min: 1.22 / Avg: 1.23 / Max: 1.23Min: 1.31 / Avg: 1.34 / Max: 1.51

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: vgg16Core i9 12900KCore i5 12400Core i5 12600K816243240SE +/- 0.36, N = 15SE +/- 0.01, N = 3SE +/- 0.30, N = 1532.6335.1835.55MIN: 29.92 / MAX: 55.47MIN: 35.02 / MAX: 41.41MIN: 34 / MAX: 40.191. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: vgg16Core i9 12900KCore i5 12400Core i5 12600K816243240Min: 30.06 / Avg: 32.63 / Max: 34.22Min: 35.17 / Avg: 35.18 / Max: 35.19Min: 34.12 / Avg: 35.55 / Max: 37.231. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.
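
A hedged sketch with the MNN Python bindings follows; the model file name is a placeholder, and the benchmark itself runs the C++ runtime against its bundled models.

import MNN

interpreter = MNN.Interpreter("mobilenet_v3.mnn")    # hypothetical .mnn model file
session = interpreter.createSession()
input_tensor = interpreter.getSessionInput(session)
print("input shape:", input_tensor.getShape())
interpreter.runSession(session)                      # runs on whatever the input holds
output_tensor = interpreter.getSessionOutput(session)
print("output shape:", output_tensor.getShape())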

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 1.2Model: mobilenetV3Core i5 12400Core i9 12900KCore i5 12600K0.26980.53960.80941.07921.349SE +/- 0.007, N = 3SE +/- 0.007, N = 3SE +/- 0.016, N = 31.1031.1621.199MIN: 1.08 / MAX: 1.25MIN: 1.13 / MAX: 3.35MIN: 1.17 / MAX: 1.521. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 1.2Model: mobilenetV3Core i5 12400Core i9 12900KCore i5 12600K246810Min: 1.09 / Avg: 1.1 / Max: 1.12Min: 1.15 / Avg: 1.16 / Max: 1.17Min: 1.18 / Avg: 1.2 / Max: 1.231. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Equation of StateCore i5 12600KCore i9 12900KCore i5 124000.29680.59360.89041.18721.484SE +/- 0.000, N = 3SE +/- 0.001, N = 3SE +/- 0.014, N = 51.2161.2961.319
OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Equation of StateCore i5 12600KCore i9 12900KCore i5 12400246810Min: 1.22 / Avg: 1.22 / Max: 1.22Min: 1.29 / Avg: 1.3 / Max: 1.3Min: 1.3 / Avg: 1.32 / Max: 1.37

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program; otherwise, on Windows, it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterGIMP 2.10.24Test: unsharp-maskCore i9 12900KCore i5 12600KCore i5 124003691215SE +/- 0.022, N = 5SE +/- 0.013, N = 5SE +/- 0.007, N = 59.90610.01510.426
OpenBenchmarking.orgSeconds, Fewer Is BetterGIMP 2.10.24Test: unsharp-maskCore i9 12900KCore i5 12600KCore i5 124003691215Min: 9.85 / Avg: 9.91 / Max: 9.96Min: 9.98 / Avg: 10.02 / Max: 10.06Min: 10.4 / Avg: 10.43 / Max: 10.44

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Isoneutral MixingCore i5 12400Core i5 12600KCore i9 12900K0.43940.87881.31821.75762.197SE +/- 0.001, N = 3SE +/- 0.001, N = 3SE +/- 0.002, N = 31.8611.8791.953
OpenBenchmarking.orgSeconds, Fewer Is BetterPyHPC Benchmarks 3.0Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Isoneutral MixingCore i5 12400Core i5 12600KCore i9 12900K246810Min: 1.86 / Avg: 1.86 / Max: 1.86Min: 1.88 / Avg: 1.88 / Max: 1.88Min: 1.95 / Avg: 1.95 / Max: 1.96

Selenium

This test profile uses the Selenium WebDriver for running various browser benchmarks in different available web browsers such as Firefox and Google Chrome. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgScore, Fewer Is BetterSeleniumBenchmark: PSPDFKit WASM - Browser: Google ChromeCore i9 12900KCore i5 12600KCore i5 124006001200180024003000SE +/- 28.05, N = 3SE +/- 10.33, N = 3SE +/- 32.53, N = 32595261727001. chrome 96.0.4664.110
OpenBenchmarking.orgScore, Fewer Is BetterSeleniumBenchmark: PSPDFKit WASM - Browser: Google ChromeCore i9 12900KCore i5 12600KCore i5 124005001000150020002500Min: 2543 / Avg: 2595.33 / Max: 2639Min: 2602 / Avg: 2617.33 / Max: 2637Min: 2665 / Avg: 2700 / Max: 27651. chrome 96.0.4664.110

CPU Power Consumption Monitor

OpenBenchmarking.org - CPU Power Consumption Monitor - Phoronix Test Suite System Monitoring (Watts):
Core i5 12400: Min: 1.41 / Avg: 42.12 / Max: 85.86
Core i5 12600K: Min: 1.77 / Avg: 73.98 / Max: 137.61
Core i9 12900K: Min: 1.08 / Avg: 97.4 / Max: 238.27

OpenCV

OpenBenchmarking.org - OpenCV 4.5.4 - CPU Power Consumption Monitor (Watts, Fewer Is Better):
Core i5 12400: Min: 2.7 / Avg: 53.9 / Max: 79.8
Core i5 12600K: Min: 3.2 / Avg: 80.6 / Max: 118.0
Core i9 12900K: Min: 2.2 / Avg: 118.5 / Max: 190.6

OpenBenchmarking.org - OpenCV 4.5.4 - CPU Power Consumption Monitor (Watts, Fewer Is Better):
Core i5 12400: Min: 2.7 / Avg: 33.2 / Max: 62.1
Core i5 12600K: Min: 3.2 / Avg: 50.9 / Max: 103.4
Core i9 12900K: Min: 2.2 / Avg: 68.2 / Max: 151.7

NCNN

OpenBenchmarking.org - NCNN 20210720 - CPU Power Consumption Monitor (Watts, Fewer Is Better):
Core i5 12400: Min: 2.6 / Avg: 57.1 / Max: 72.0
Core i5 12600K: Min: 3.3 / Avg: 77.0 / Max: 97.9
Core i9 12900K: Min: 2.4 / Avg: 118.5 / Max: 157.3

Mobile Neural Network

OpenBenchmarking.org - Mobile Neural Network 1.2 - CPU Power Consumption Monitor (Watts, Fewer Is Better):
Core i5 12400: Min: 2.5 / Avg: 70.5 / Max: 76.8
Core i5 12600K: Min: 3.4 / Avg: 111.6 / Max: 120.0
Core i9 12900K: Min: 2.3 / Avg: 192.9 / Max: 218.7

SVT-VP9

OpenBenchmarking.org - SVT-VP9 0.3 - CPU Power Consumption Monitor (Watts, Fewer Is Better):
Core i5 12400: Min: 2.5 / Avg: 33.7 / Max: 77.4
Core i5 12600K: Min: 3.2 / Avg: 42.6 / Max: 107.5
Core i9 12900K: Min: 2.3 / Avg: 49.9 / Max: 185.9

OpenBenchmarking.org - SVT-VP9 0.3 - Tuning: VMAF Optimized - Input: Bosphorus 1080p (Frames Per Second Per Watt, More Is Better):
Core i9 12900K: 7.341
Core i5 12400: 5.853
Core i5 12600K: 5.182

Etcpak

OpenBenchmarking.org - Etcpak 0.7 - CPU Power Consumption Monitor (Watts, Fewer Is Better):
Core i5 12400: Min: 2.5 / Avg: 12.2 / Max: 22.9
Core i9 12900K: Min: 2.2 / Avg: 13.6 / Max: 29.0
Core i5 12600K: Min: 3.1 / Avg: 17.3 / Max: 35.1

OpenBenchmarking.org - Etcpak 0.7 - Configuration: DXT1 (Mpx/s Per Watt, More Is Better):
Core i9 12900K: 121.73
Core i5 12400: 113.26
Core i5 12600K: 90.17

OpenCV

This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.
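
For orientation, a minimal cv2.dnn round trip looks like the sketch below; the ONNX model file is a placeholder, and the built-in performance tests cover far more than this.

import cv2
import numpy as np

net = cv2.dnn.readNetFromONNX("model.onnx")          # hypothetical model file
image = np.zeros((224, 224, 3), dtype=np.uint8)
blob = cv2.dnn.blobFromImage(image, scalefactor=1.0 / 255, size=(224, 224))
net.setInput(blob)
out = net.forward()
print(out.shape)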

OpenBenchmarking.orgms, Fewer Is BetterOpenCV 4.5.4Test: DNN - Deep Neural NetworkCore i9 12900KCore i5 12600KCore i5 124002K4K6K8K10KSE +/- 165.18, N = 15SE +/- 324.13, N = 12SE +/- 255.92, N = 15884110012103491. (CXX) g++ options: -fPIC -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -shared
OpenBenchmarking.orgms, Fewer Is BetterOpenCV 4.5.4Test: DNN - Deep Neural NetworkCore i9 12900KCore i5 12600KCore i5 124002K4K6K8K10KMin: 7315 / Avg: 8841 / Max: 9577Min: 8476 / Avg: 10012.08 / Max: 12650Min: 9422 / Avg: 10349 / Max: 123121. (CXX) g++ options: -fPIC -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -shared

OpenBenchmarking.orgms, Fewer Is BetterOpenCV 4.5.4Test: Object DetectionCore i5 12400Core i9 12900KCore i5 12600K11K22K33K44K55KSE +/- 383.94, N = 4SE +/- 1178.15, N = 12SE +/- 563.33, N = 153156050173521051. (CXX) g++ options: -fPIC -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -shared
OpenBenchmarking.orgms, Fewer Is BetterOpenCV 4.5.4Test: Object DetectionCore i5 12400Core i9 12900KCore i5 12600K9K18K27K36K45KMin: 30748 / Avg: 31559.5 / Max: 32541Min: 43202 / Avg: 50172.5 / Max: 57714Min: 48175 / Avg: 52104.73 / Max: 570251. (CXX) g++ options: -fPIC -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -shared

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: regnety_400mCore i5 12400Core i9 12900KCore i5 12600K246810SE +/- 0.00, N = 3SE +/- 0.25, N = 15SE +/- 0.23, N = 155.976.877.39MIN: 5.92 / MAX: 6.73MIN: 5.93 / MAX: 21.93MIN: 6.21 / MAX: 9.311. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: regnety_400mCore i5 12400Core i9 12900KCore i5 12600K3691215Min: 5.96 / Avg: 5.97 / Max: 5.97Min: 5.99 / Avg: 6.87 / Max: 8.43Min: 6.25 / Avg: 7.39 / Max: 8.771. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: squeezenet_ssdCore i9 12900KCore i5 12400Core i5 12600K48121620SE +/- 0.10, N = 15SE +/- 0.01, N = 3SE +/- 0.75, N = 1513.3514.9317.09MIN: 12.43 / MAX: 33.5MIN: 14.8 / MAX: 21.08MIN: 13.76 / MAX: 56.351. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: squeezenet_ssdCore i9 12900KCore i5 12400Core i5 12600K48121620Min: 12.51 / Avg: 13.35 / Max: 14.14Min: 14.91 / Avg: 14.93 / Max: 14.95Min: 13.9 / Avg: 17.09 / Max: 21.611. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: yolov4-tinyCore i9 12900KCore i5 12400Core i5 12600K48121620SE +/- 0.24, N = 15SE +/- 0.02, N = 3SE +/- 0.27, N = 1516.2416.6817.02MIN: 14.78 / MAX: 32.95MIN: 16.53 / MAX: 17.3MIN: 15.52 / MAX: 20.891. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: yolov4-tinyCore i9 12900KCore i5 12400Core i5 12600K48121620Min: 14.91 / Avg: 16.24 / Max: 18.59Min: 16.64 / Avg: 16.68 / Max: 16.71Min: 15.63 / Avg: 17.02 / Max: 19.831. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: resnet18Core i5 12400Core i9 12900KCore i5 12600K3691215SE +/- 0.05, N = 3SE +/- 0.04, N = 15SE +/- 0.17, N = 159.6810.8310.97MIN: 9.58 / MAX: 10.16MIN: 10.44 / MAX: 17.26MIN: 9.31 / MAX: 19.911. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: resnet18Core i5 12400Core i9 12900KCore i5 12600K3691215Min: 9.63 / Avg: 9.68 / Max: 9.78Min: 10.59 / Avg: 10.83 / Max: 11.2Min: 9.35 / Avg: 10.97 / Max: 11.491. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: googlenetCore i5 12400Core i9 12900KCore i5 12600K3691215SE +/- 0.05, N = 3SE +/- 0.32, N = 15SE +/- 0.22, N = 159.149.7711.10MIN: 9.01 / MAX: 9.4MIN: 7.94 / MAX: 16.95MIN: 8.95 / MAX: 16.881. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: googlenetCore i5 12400Core i9 12900KCore i5 12600K3691215Min: 9.08 / Avg: 9.14 / Max: 9.23Min: 7.98 / Avg: 9.77 / Max: 10.86Min: 9.02 / Avg: 11.1 / Max: 11.691. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: blazefaceCore i5 12400Core i9 12900KCore i5 12600K0.32180.64360.96541.28721.609SE +/- 0.00, N = 3SE +/- 0.06, N = 15SE +/- 0.04, N = 151.081.371.43MIN: 1.06 / MAX: 1.24MIN: 1.09 / MAX: 1.82MIN: 1.13 / MAX: 1.691. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: blazefaceCore i5 12400Core i9 12900KCore i5 12600K246810Min: 1.08 / Avg: 1.08 / Max: 1.08Min: 1.11 / Avg: 1.37 / Max: 1.64Min: 1.15 / Avg: 1.43 / Max: 1.571. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: efficientnet-b0Core i5 12400Core i9 12900KCore i5 12600K246810SE +/- 0.00, N = 3SE +/- 0.15, N = 15SE +/- 0.16, N = 153.715.086.08MIN: 3.66 / MAX: 4.5MIN: 4.26 / MAX: 6.52MIN: 4.84 / MAX: 13.571. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: efficientnet-b0Core i5 12400Core i9 12900KCore i5 12600K246810Min: 3.7 / Avg: 3.71 / Max: 3.71Min: 4.3 / Avg: 5.08 / Max: 5.7Min: 4.88 / Avg: 6.08 / Max: 7.151. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: mnasnetCore i5 12400Core i9 12900KCore i5 12600K0.77631.55262.32893.10523.8815SE +/- 0.00, N = 3SE +/- 0.09, N = 13SE +/- 0.09, N = 142.413.003.45MIN: 2.37 / MAX: 3.19MIN: 2.51 / MAX: 10.7MIN: 2.88 / MAX: 10.911. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: mnasnetCore i5 12400Core i9 12900KCore i5 12600K246810Min: 2.41 / Avg: 2.41 / Max: 2.41Min: 2.54 / Avg: 3 / Max: 3.38Min: 2.91 / Avg: 3.45 / Max: 4.331. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: shufflenet-v2Core i5 12400Core i9 12900KCore i5 12600K0.71781.43562.15342.87123.589SE +/- 0.00, N = 3SE +/- 0.10, N = 14SE +/- 0.08, N = 152.872.963.19MIN: 2.82 / MAX: 3.62MIN: 2.51 / MAX: 4.26MIN: 2.8 / MAX: 4.411. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: shufflenet-v2Core i5 12400Core i9 12900KCore i5 12600K246810Min: 2.86 / Avg: 2.87 / Max: 2.87Min: 2.54 / Avg: 2.96 / Max: 3.46Min: 2.83 / Avg: 3.19 / Max: 3.531. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU-v3-v3 - Model: mobilenet-v3Core i5 12400Core i9 12900KCore i5 12600K0.721.442.162.883.6SE +/- 0.00, N = 3SE +/- 0.08, N = 15SE +/- 0.07, N = 152.422.783.20MIN: 2.38 / MAX: 3.21MIN: 2.4 / MAX: 10.46MIN: 2.67 / MAX: 4.51. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU-v3-v3 - Model: mobilenet-v3Core i5 12400Core i9 12900KCore i5 12600K246810Min: 2.42 / Avg: 2.42 / Max: 2.42Min: 2.44 / Avg: 2.78 / Max: 3.2Min: 2.71 / Avg: 3.2 / Max: 3.71. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU-v2-v2 - Model: mobilenet-v2Core i5 12400Core i9 12900KCore i5 12600K0.87531.75062.62593.50124.3765SE +/- 0.00, N = 3SE +/- 0.10, N = 15SE +/- 0.11, N = 152.653.303.89MIN: 2.59 / MAX: 3.43MIN: 2.71 / MAX: 5.56MIN: 3.04 / MAX: 12.071. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU-v2-v2 - Model: mobilenet-v2Core i5 12400Core i9 12900KCore i5 12600K246810Min: 2.64 / Avg: 2.65 / Max: 2.65Min: 2.75 / Avg: 3.3 / Max: 3.69Min: 3.09 / Avg: 3.89 / Max: 4.771. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: mobilenetCore i5 12400Core i9 12900KCore i5 12600K3691215SE +/- 0.00, N = 3SE +/- 0.28, N = 15SE +/- 0.31, N = 159.9610.9012.63MIN: 9.86 / MAX: 10.4MIN: 9.22 / MAX: 13.46MIN: 10.87 / MAX: 14.261. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20210720Target: CPU - Model: mobilenetCore i5 12400Core i9 12900KCore i5 12600K48121620Min: 9.96 / Avg: 9.96 / Max: 9.97Min: 9.3 / Avg: 10.9 / Max: 12.64Min: 10.94 / Avg: 12.63 / Max: 14.151. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 1.2Model: inception-v3Core i5 12400Core i9 12900KCore i5 12600K714212835SE +/- 0.05, N = 3SE +/- 0.60, N = 3SE +/- 1.43, N = 323.1023.7028.90MIN: 22.88 / MAX: 30.06MIN: 22.6 / MAX: 36.07MIN: 26.47 / MAX: 35.831. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 1.2Model: inception-v3Core i5 12400Core i9 12900KCore i5 12600K612182430Min: 23.02 / Avg: 23.1 / Max: 23.19Min: 22.89 / Avg: 23.7 / Max: 24.86Min: 26.55 / Avg: 28.9 / Max: 31.491. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 1.2Model: SqueezeNetV1.0Core i5 12400Core i9 12900KCore i5 12600K1.02292.04583.06874.09165.1145SE +/- 0.008, N = 3SE +/- 0.147, N = 3SE +/- 0.203, N = 33.5304.2554.546MIN: 3.48 / MAX: 6.67MIN: 3.9 / MAX: 12.18MIN: 4.31 / MAX: 5.041. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 1.2Model: SqueezeNetV1.0Core i5 12400Core i9 12900KCore i5 12600K246810Min: 3.52 / Avg: 3.53 / Max: 3.54Min: 3.98 / Avg: 4.25 / Max: 4.47Min: 4.34 / Avg: 4.55 / Max: 4.951. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 1.2Model: resnet-v2-50Core i5 12600KCore i5 12400Core i9 12900K510152025SE +/- 1.19, N = 3SE +/- 0.01, N = 3SE +/- 1.09, N = 318.7619.8920.01MIN: 17.48 / MAX: 23.66MIN: 19.77 / MAX: 36.32MIN: 18.57 / MAX: 31.931. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 1.2Model: resnet-v2-50Core i5 12600KCore i5 12400Core i9 12900K510152025Min: 17.54 / Avg: 18.76 / Max: 21.13Min: 19.87 / Avg: 19.89 / Max: 19.91Min: 18.86 / Avg: 20.01 / Max: 22.181. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 1.2Model: squeezenetv1.1Core i5 12400Core i5 12600KCore i9 12900K0.63971.27941.91912.55883.1985SE +/- 0.007, N = 3SE +/- 0.172, N = 3SE +/- 0.265, N = 32.3542.5772.843MIN: 2.32 / MAX: 2.59MIN: 2.38 / MAX: 9.24MIN: 2.28 / MAX: 3.691. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 1.2Model: squeezenetv1.1Core i5 12400Core i5 12600KCore i9 12900K246810Min: 2.34 / Avg: 2.35 / Max: 2.36Min: 2.4 / Avg: 2.58 / Max: 2.92Min: 2.32 / Avg: 2.84 / Max: 3.171. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format, using a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.3Tuning: VMAF Optimized - Input: Bosphorus 1080pCore i9 12900KCore i5 12600KCore i5 1240080160240320400SE +/- 10.38, N = 15SE +/- 1.53, N = 15SE +/- 1.18, N = 15366.29220.86197.401. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.3Tuning: VMAF Optimized - Input: Bosphorus 1080pCore i9 12900KCore i5 12600KCore i5 1240070140210280350Min: 221.06 / Avg: 366.29 / Max: 379.7Min: 199.8 / Avg: 220.86 / Max: 224.87Min: 180.95 / Avg: 197.4 / Max: 199.431. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet", focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMpx/s, More Is BetterEtcpak 0.7Configuration: DXT1Core i9 12900KCore i5 12600KCore i5 12400400800120016002000SE +/- 27.84, N = 15SE +/- 23.05, N = 15SE +/- 23.38, N = 151655.331556.071386.751. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread
OpenBenchmarking.orgMpx/s, More Is BetterEtcpak 0.7Configuration: DXT1Core i9 12900KCore i5 12600KCore i5 1240030060090012001500Min: 1480.02 / Avg: 1655.33 / Max: 1731.1Min: 1406.96 / Avg: 1556.07 / Max: 1631.43Min: 1273.35 / Avg: 1386.75 / Max: 1462.71. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

175 Results Shown

OpenSSL
TensorFlow Lite:
  Inception ResNet V2
  Mobilenet Quant
  Inception V4
  Mobilenet Float
  SqueezeNet
  NASNet Mobile
Coremark
7-Zip Compression
Stockfish
IndigoBench
Aircrack-ng
Stargate Digital Audio Workstation
ONNX Runtime
libavif avifenc
ONNX Runtime
Blender
Tungsten Renderer
Blender
Stargate Digital Audio Workstation
Tungsten Renderer
Chaos Group V-RAY
Timed MPlayer Compilation
OpenSSL:
  RSA4096:
    verify/s
    sign/s
Appleseed
IndigoBench
Appleseed
ASTC Encoder
Helsing
SVT-HEVC
Tungsten Renderer
Darktable
Embree
Timed Linux Kernel Compilation
SVT-HEVC
SVT-VP9
OSPray
Timed LLVM Compilation
7-Zip Compression
Primesieve
SVT-AV1
OSPray
Timed Mesa Compilation
Xmrig
Timed Node.js Compilation
Embree
Xmrig
Timed Wasmer Compilation
Darktable
LAMMPS Molecular Dynamics Simulator
ONNX Runtime
PlaidML
ONNX Runtime
PlaidML
Darktable
XZ Compression
LeelaChessZero
libavif avifenc
Meta Performance Per Watts
Zstd Compression
SVT-AV1
Tungsten Renderer
RawTherapee
AOM AV1
Timed GDB GNU Debugger Compilation
TNN
AOM AV1
LeelaChessZero
ASTC Encoder
Hugin
srsRAN
LibRaw
rav1e
DaCapo Benchmark
Node.js V8 Web Tooling Benchmark
Mobile Neural Network
Zstd Compression
Mobile Neural Network
srsRAN:
  4G PHY_DL_Test 100 PRB SISO 64-QAM
  5G PHY_DL_NR Test 52 PRB SISO 64-QAM
JPEG XL libjxl
srsRAN
Crafty
simdjson
srsRAN:
  4G PHY_DL_Test 100 PRB SISO 64-QAM
  4G PHY_DL_Test 100 PRB MIMO 64-QAM
WireGuard + Linux Networking Stack Stress Test
simdjson
srsRAN
libjpeg-turbo tjbench
SecureMark
TNN
WebP Image Encode
PyBench
Selenium
TNN
Etcpak
LZ4 Compression
Selenium
TNN
Chia Blockchain VDF
Unpacking Firefox
srsRAN:
  5G PHY_DL_NR Test 52 PRB SISO 64-QAM
  4G PHY_DL_Test 100 PRB MIMO 256-QAM
LZ4 Compression
Cython Benchmark
AOM AV1
PHPBench
WebP Image Encode
srsRAN
Selenium
NCNN
srsRAN
simdjson
PyHPC Benchmarks
Selenium
PyHPC Benchmarks
Unpacking The Linux Kernel
simdjson
Selenium
Chia Blockchain VDF
AOM AV1
GIMP
JPEG XL libjxl
Selenium
PlaidML
PyHPC Benchmarks
GIMP
AOM AV1
LZ4 Compression
PyHPC Benchmarks
NCNN
LZ4 Compression
GNU Octave Benchmark
GIMP
Zstd Compression
PyHPC Benchmarks
Zstd Compression
PyHPC Benchmarks
NCNN
Mobile Neural Network
PyHPC Benchmarks
GIMP
PyHPC Benchmarks
Selenium
CPU Power Consumption Monitor:
  Phoronix Test Suite System Monitoring
  CPU Power Consumption Monitor
  CPU Power Consumption Monitor
  CPU Power Consumption Monitor
  CPU Power Consumption Monitor
  CPU Power Consumption Monitor
  VMAF Optimized - Bosphorus 1080p
  CPU Power Consumption Monitor
  DXT1
OpenCV:
  DNN - Deep Neural Network
  Object Detection
NCNN:
  CPU - regnety_400m
  CPU - squeezenet_ssd
  CPU - yolov4-tiny
  CPU - resnet18
  CPU - googlenet
  CPU - blazeface
  CPU - efficientnet-b0
  CPU - mnasnet
  CPU - shufflenet-v2
  CPU-v3-v3 - mobilenet-v3
  CPU-v2-v2 - mobilenet-v2
  CPU - mobilenet
Mobile Neural Network:
  inception-v3
  SqueezeNetV1.0
  resnet-v2-50
  squeezenetv1.1
SVT-VP9
Etcpak