5600u 2021

Intel Core i7-5600U testing with a LENOVO 20BSCTO1WW (N14ET49W 1.27 BIOS) and Intel HD 5500 BDW GT2 3GB on Ubuntu 21.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2111225-TJ-5600U202106

Test categories represented in this result file:
AV1 (2 tests)
C/C++ Compiler Tests (2 tests)
CPU Massive (3 tests)
Creator Workloads (6 tests)
Cryptography (2 tests)
Encoding (2 tests)
HPC - High Performance Computing (2 tests)
Imaging (3 tests)
Multi-Core (2 tests)
Python Tests (2 tests)
Server CPU Tests (2 tests)
Video Encoding (2 tests)

Result Identifier | Date Run | Test Duration
A | November 21 2021 | 1 Hour, 58 Minutes
B | November 21 2021 | 7 Hours, 31 Minutes
C | November 22 2021 | 7 Hours, 59 Minutes
Average Test Duration: 5 Hours, 50 Minutes


5600u 2021 - System Details (OpenBenchmarking.org / Phoronix Test Suite)

Processor: Intel Core i7-5600U @ 3.20GHz (2 Cores / 4 Threads)
Motherboard: LENOVO 20BSCTO1WW (N14ET49W 1.27 BIOS)
Chipset: Intel Broadwell-U-OPI
Memory: 8GB
Disk: 128GB SAMSUNG MZNTE128
Graphics: Intel HD 5500 BDW GT2 3GB (950MHz)
Audio: Intel Broadwell-U Audio
Network: Intel I218-LM + Intel 7265
OS: Ubuntu 21.10
Kernel: 5.13.0-21-generic (x86_64)
Desktop: GNOME Shell 40.5
Display Server: X Server + Wayland
OpenGL: 4.6 Mesa 21.2.2
Vulkan: 1.2.182
Compiler: GCC 11.2.0
File-System: ext4
Screen Resolution: 1920x1080

5600u 2021 Benchmarks - System Logs
- Transparent Huge Pages: madvise
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-ZPT0kp/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-ZPT0kp/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: intel_cpufreq schedutil - CPU Microcode: 0x2f - Thermald 2.4.6
- Python 3.9.7
- Security: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes SMT vulnerable + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Mitigation of PTI + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full generic retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling + srbds: Mitigation of Microcode + tsx_async_abort: Mitigation of Clear buffers; SMT vulnerable

Result Overview (Phoronix Test Suite; runs A, B, C normalized, 100% to 116% scale) covering: Sockperf, RAR Compression, ASTC Encoder, JPEG XL Decoding libjxl, rav1e, ONNX Runtime, GIMP, OpenSSL, PyHPC Benchmarks, AOM AV1, Zstd Compression, BLAKE2, JPEG XL libjxl.

5600u 2021 - detailed results for runs A, B, and C are given per test in the sections that follow.

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inference and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.
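
As a rough illustration of what this test exercises (not the pts test-profile code itself), a minimal ONNX Runtime CPU inference loop in Python might look like the following; the model path and the float32 input are placeholder assumptions.

    # Minimal sketch: time CPU inference with ONNX Runtime.
    # Assumes the onnxruntime package and any ONNX model saved as "model.onnx".
    import time
    import numpy as np
    import onnxruntime as ort

    sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
    inp = sess.get_inputs()[0]
    # Replace dynamic dimensions with a concrete batch size of 1 for this sketch.
    shape = [d if isinstance(d, int) else 1 for d in inp.shape]
    data = np.random.rand(*shape).astype(np.float32)

    runs = 30
    start = time.perf_counter()
    for _ in range(runs):
        sess.run(None, {inp.name: data})
    elapsed = time.perf_counter() - start
    print(f"{runs / (elapsed / 60):.1f} inferences per minute")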

ONNX Runtime 1.9.1 - Model: fcn-resnet101-11 - Device: CPU (Inferences Per Minute, More Is Better)
A: 15 | B: 16 (SE +/- 0.00, N = 3; Min/Avg/Max: 15.5 / 15.5 / 15.5) | C: 16 (SE +/- 0.00, N = 3; Min/Avg/Max: 15.5 / 15.5 / 15.5)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

Zstd Compression

This test measures the time needed to compress/decompress a sample input file using Zstd compression supplied by the system or otherwise provided externally to the test profile. Learn more via the OpenBenchmarking.org test page.
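
The benchmark itself drives the zstd command-line tool; purely for orientation, a minimal sketch of level-based compression and decompression using the python-zstandard bindings (the sample data below is a stand-in, not the test's input file) could look like this:

    # Minimal sketch: compress/decompress a buffer with zstandard at level 19.
    # Assumes the "zstandard" Python package; the test profile uses the zstd CLI.
    import time
    import zstandard as zstd

    data = b"example payload " * 1_000_000  # stand-in for the sample input file

    cctx = zstd.ZstdCompressor(level=19)
    start = time.perf_counter()
    compressed = cctx.compress(data)
    comp_time = time.perf_counter() - start

    dctx = zstd.ZstdDecompressor()
    start = time.perf_counter()
    restored = dctx.decompress(compressed)
    decomp_time = time.perf_counter() - start

    assert restored == data
    mb = len(data) / 1e6
    print(f"compress: {mb / comp_time:.1f} MB/s, decompress: {mb / decomp_time:.1f} MB/s")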

Zstd Compression - Compression Level: 19 - Compression Speed (MB/s, More Is Better)
A: 8.14 | B: 8.47 (SE +/- 0.02, N = 3; Min/Avg/Max: 8.45 / 8.47 / 8.51) | C: 8.30 (SE +/- 0.04, N = 3; Min/Avg/Max: 8.25 / 8.3 / 8.38)
1. *** zstd command line interface 64-bits v1.4.8, by Yann Collet ***

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.
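
The PyHPC kernels themselves live in the benchmark suite; purely as an illustration of the kind of vectorized NumPy workload being timed (the kernel below is a toy stand-in, not the actual equation-of-state or isoneutral-mixing code), a sketch might look like this:

    # Illustrative sketch only: time a vectorized NumPy kernel over a given
    # "project size", in the spirit of the PyHPC CPU benchmarks.
    import time
    import numpy as np

    def toy_kernel(temp, salt, pressure):
        # Placeholder arithmetic standing in for an equation-of-state style kernel.
        return 1000.0 + 0.8 * salt - 0.2 * temp + 4.5e-3 * pressure

    size = 1_048_576
    rng = np.random.default_rng(0)
    temp, salt, pressure = (rng.random(size) for _ in range(3))

    start = time.perf_counter()
    toy_kernel(temp, salt, pressure)
    print(f"{time.perf_counter() - start:.4f} seconds for project size {size}")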

PyHPC Benchmarks 3.0 - Device: CPU - Backend: JAX - Project Size: 65536 - Benchmark: Isoneutral Mixing (Seconds, Fewer Is Better)
A: 0.026 | B: 0.025 (SE +/- 0.000, N = 15; Min/Avg/Max: 0.02 / 0.02 / 0.03) | C: 0.025 (SE +/- 0.000, N = 3; Min/Avg/Max: 0.02 / 0.02 / 0.03)

Zstd Compression

This test measures the time needed to compress/decompress a sample input file using Zstd compression supplied by the system or otherwise provided externally to the test profile. Learn more via the OpenBenchmarking.org test page.

Zstd Compression - Compression Level: 3 - Compression Speed (MB/s, More Is Better)
A: 608.2 | B: 627.1 (SE +/- 4.28, N = 3; Min/Avg/Max: 620.9 / 627.1 / 635.3) | C: 631.3 (SE +/- 4.00, N = 3; Min/Avg/Max: 626.7 / 631.33 / 639.3)
1. *** zstd command line interface 64-bits v1.4.8, by Yann Collet ***

Zstd Compression - Compression Level: 8 - Compression Speed (MB/s, More Is Better)
A: 112.7 | B: 108.6 (SE +/- 0.32, N = 3; Min/Avg/Max: 108.2 / 108.57 / 109.2) | C: 111.7 (SE +/- 0.70, N = 3; Min/Avg/Max: 110.3 / 111.7 / 112.5)
1. *** zstd command line interface 64-bits v1.4.8, by Yann Collet ***

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

PyHPC Benchmarks 3.0 - Device: CPU - Backend: TensorFlow - Project Size: 1048576 - Benchmark: Equation of State (Seconds, Fewer Is Better)
A: 0.054 | B: 0.056 (SE +/- 0.000, N = 3; Min/Avg/Max: 0.06 / 0.06 / 0.06) | C: 0.054 (SE +/- 0.000, N = 3; Min/Avg/Max: 0.05 / 0.05 / 0.06)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: JAX - Project Size: 4194304 - Benchmark: Equation of State (Seconds, Fewer Is Better)
A: 0.135 | B: 0.140 (SE +/- 0.001, N = 15; Min/Avg/Max: 0.14 / 0.14 / 0.15) | C: 0.135 (SE +/- 0.002, N = 3; Min/Avg/Max: 0.13 / 0.14 / 0.14)

Zstd Compression

This test measures the time needed to compress/decompress a sample input file using Zstd compression supplied by the system or otherwise provided externally to the test profile. Learn more via the OpenBenchmarking.org test page.

Zstd Compression - Compression Level: 19, Long Mode - Compression Speed (MB/s, More Is Better)
A: 7.46 | B: 7.36 (SE +/- 0.08, N = 3; Min/Avg/Max: 7.2 / 7.36 / 7.45) | C: 7.20 (SE +/- 0.06, N = 12; Min/Avg/Max: 6.86 / 7.2 / 7.44)
1. *** zstd command line interface 64-bits v1.4.8, by Yann Collet ***

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 65536 - Benchmark: Equation of State (Seconds, Fewer Is Better)
A: 0.029 | B: 0.030 (SE +/- 0.000, N = 3; Min/Avg/Max: 0.03 / 0.03 / 0.03) | C: 0.029 (SE +/- 0.000, N = 3; Min/Avg/Max: 0.03 / 0.03 / 0.03)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Aesara - Project Size: 65536 - Benchmark: Isoneutral Mixing (Seconds, Fewer Is Better)
A: 0.034 | B: 0.034 (SE +/- 0.000, N = 3; Min/Avg/Max: 0.03 / 0.03 / 0.03) | C: 0.033 (SE +/- 0.000, N = 3; Min/Avg/Max: 0.03 / 0.03 / 0.03)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: JAX - Project Size: 1048576 - Benchmark: Equation of State (Seconds, Fewer Is Better)
A: 0.035 | B: 0.034 (SE +/- 0.000, N = 4; Min/Avg/Max: 0.03 / 0.03 / 0.04) | C: 0.034 (SE +/- 0.000, N = 15; Min/Avg/Max: 0.03 / 0.03 / 0.04)

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. The system/openssl test profile relies on benchmarking the system/OS-supplied openssl binary rather than the pts/openssl test profile, which uses a locally built OpenSSL. Learn more via the OpenBenchmarking.org test page.
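
A hedged sketch of driving the same system-supplied openssl binary by hand is shown below; "openssl speed rsa4096" is a standard invocation of OpenSSL's built-in benchmark, though the exact arguments the test profile passes are not reproduced here.

    # Minimal sketch: run the system openssl binary's built-in speed benchmark
    # for RSA 4096 (requires openssl in PATH).
    import subprocess

    result = subprocess.run(
        ["openssl", "speed", "rsa4096"],
        capture_output=True,
        text=True,
        check=True,
    )
    # The sign/s and verify/s figures appear in the summary table at the end.
    print(result.stdout.splitlines()[-1])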

OpenSSL (verify/s, More Is Better)
A: 29310.6 | B: 28509.1 (SE +/- 456.05, N = 3; Min/Avg/Max: 27853 / 28509.13 / 29385.9) | C: 28569.9 (SE +/- 510.31, N = 3; Min/Avg/Max: 27819.1 / 28569.87 / 29544)
1. OpenSSL 1.1.1l 24 Aug 2021

Zstd Compression

This test measures the time needed to compress/decompress a sample input file using Zstd compression supplied by the system or otherwise provided externally to the test profile. Learn more via the OpenBenchmarking.org test page.

Zstd Compression - Compression Level: 3, Long Mode - Compression Speed (MB/s, More Is Better)
A: 260.1 | B: 253.3 (SE +/- 1.95, N = 3; Min/Avg/Max: 249.6 / 253.33 / 256.2) | C: 259.2 (SE +/- 0.55, N = 3; Min/Avg/Max: 258.3 / 259.2 / 260.2)
1. *** zstd command line interface 64-bits v1.4.8, by Yann Collet ***

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Aesara - Project Size: 1048576 - Benchmark: Equation of State (Seconds, Fewer Is Better)
A: 0.114 | B: 0.117 (SE +/- 0.001, N = 8; Min/Avg/Max: 0.11 / 0.12 / 0.12) | C: 0.116 (SE +/- 0.001, N = 3; Min/Avg/Max: 0.11 / 0.12 / 0.12)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 262144 - Benchmark: Equation of State (Seconds, Fewer Is Better)
A: 0.123 | B: 0.122 (SE +/- 0.000, N = 3; Min/Avg/Max: 0.12 / 0.12 / 0.12) | C: 0.125 (SE +/- 0.001, N = 3; Min/Avg/Max: 0.12 / 0.12 / 0.13)

Zstd Compression

This test measures the time needed to compress/decompress a sample input file using Zstd compression supplied by the system or otherwise provided externally to the test profile. Learn more via the OpenBenchmarking.org test page.

Zstd Compression - Compression Level: 8, Long Mode - Compression Speed (MB/s, More Is Better)
A: 111.0 | B: 108.5 (SE +/- 0.29, N = 3; Min/Avg/Max: 108 / 108.5 / 109) | C: 111.1 (SE +/- 0.03, N = 3; Min/Avg/Max: 111.1 / 111.13 / 111.2)
1. *** zstd command line interface 64-bits v1.4.8, by Yann Collet ***

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.
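
As an assumption-laden sketch (flag names are taken from aomenc's documented options, and the test profile's exact arguments may differ), a single-pass libaom encode of a Y4M clip could be timed like this; the input file name is a placeholder.

    # Minimal sketch: time a libaom (aomenc) encode of a Y4M input.
    # "Bosphorus_1080p.y4m" is a placeholder; --cpu-used=8 roughly corresponds
    # to the "Speed 8" modes reported in this section.
    import subprocess
    import time

    start = time.perf_counter()
    subprocess.run(
        ["aomenc", "--passes=1", "--cpu-used=8", "-o", "out.ivf", "Bosphorus_1080p.y4m"],
        check=True,
    )
    print(f"encode finished in {time.perf_counter() - start:.1f} s")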

AOM AV1 3.2 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
A: 29.97 | B: 29.41 (SE +/- 0.35, N = 4; Min/Avg/Max: 28.92 / 29.41 / 30.44) | C: 29.31 (SE +/- 0.34, N = 4; Min/Avg/Max: 28.73 / 29.31 / 30.3)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

ASTC Encoder

ASTC Encoder (astcenc) targets the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test covering both compression and decompression. Learn more via the OpenBenchmarking.org test page.
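
A standalone astcenc compression run could be sketched as follows; the binary name, input file, and 6x6 block size are assumptions (release builds ship variants such as astcenc-avx2), and the test profile's exact arguments are not reproduced here.

    # Minimal sketch: compress one image to ASTC with the "thorough" preset.
    # Binary name, input file, and 6x6 block size are placeholder assumptions.
    import subprocess
    import time

    start = time.perf_counter()
    subprocess.run(
        ["astcenc-avx2", "-cl", "input.png", "output.astc", "6x6", "-thorough"],
        check=True,
    )
    print(f"{time.perf_counter() - start:.2f} seconds")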

ASTC Encoder 3.2 - Preset: Thorough (Seconds, Fewer Is Better)
A: 44.34 | B: 45.32 (SE +/- 0.42, N = 3; Min/Avg/Max: 44.49 / 45.32 / 45.79) | C: 45.29 (SE +/- 0.42, N = 3; Min/Avg/Max: 44.46 / 45.29 / 45.74)
1. (CXX) g++ options: -O3 -flto -pthread

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inference and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.9.1 - Model: yolov4 - Device: CPU (Inferences Per Minute, More Is Better)
A: 93 | B: 91 (SE +/- 0.93, N = 3; Min/Avg/Max: 90 / 91.17 / 93) | C: 91 (SE +/- 0.77, N = 12; Min/Avg/Max: 89.5 / 90.58 / 99)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

RAR Compression

This test measures the time needed to archive/compress two copies of the Linux 5.14 kernel source tree using RAR/WinRAR compression. Learn more via the OpenBenchmarking.org test page.
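
A sketch of the equivalent manual operation is shown below; the directory path is a placeholder, and rar's "a" command simply adds files to an archive.

    # Minimal sketch: time archiving a source tree with the rar CLI.
    # "linux-5.14/" is a placeholder for the extracted kernel sources.
    import subprocess
    import time

    start = time.perf_counter()
    subprocess.run(["rar", "a", "kernel.rar", "linux-5.14/"], check=True)
    print(f"{time.perf_counter() - start:.2f} seconds")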

RAR Compression 6.0.2 - Linux Source Tree Archiving To RAR (Seconds, Fewer Is Better)
A: 90.96 | B: 92.67 (SE +/- 0.57, N = 13; Min/Avg/Max: 91.68 / 92.67 / 99.42) | C: 92.73 (SE +/- 0.79, N = 3; Min/Avg/Max: 91.89 / 92.73 / 94.3)

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenSSL 3.0 - Algorithm: RSA4096 (sign/s, More Is Better)
A: 439.2 | B: 431.2 (SE +/- 4.19, N = 3; Min/Avg/Max: 426.7 / 431.23 / 439.6) | C: 431.1 (SE +/- 3.88, N = 3; Min/Avg/Max: 427.1 / 431.13 / 438.9)
1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program; on Windows it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.24 - Test: resize (Seconds, Fewer Is Better)
A: 18.00 | B: 17.70 (SE +/- 0.14, N = 3; Min/Avg/Max: 17.52 / 17.7 / 17.98) | C: 17.68 (SE +/- 0.11, N = 3; Min/Avg/Max: 17.57 / 17.68 / 17.9)

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Aesara - Project Size: 4194304 - Benchmark: Equation of State (Seconds, Fewer Is Better)
A: 0.449 | B: 0.452 (SE +/- 0.002, N = 3; Min/Avg/Max: 0.45 / 0.45 / 0.46) | C: 0.457 (SE +/- 0.005, N = 5; Min/Avg/Max: 0.44 / 0.46 / 0.47)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 65536 - Benchmark: Isoneutral Mixing (Seconds, Fewer Is Better)
A: 0.058 | B: 0.058 (SE +/- 0.000, N = 3; Min/Avg/Max: 0.06 / 0.06 / 0.06) | C: 0.057 (SE +/- 0.000, N = 3; Min/Avg/Max: 0.06 / 0.06 / 0.06)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numba - Project Size: 1048576 - Benchmark: Isoneutral Mixing (Seconds, Fewer Is Better)
A: 0.436 | B: 0.434 (SE +/- 0.002, N = 3; Min/Avg/Max: 0.43 / 0.43 / 0.44) | C: 0.429 (SE +/- 0.001, N = 3; Min/Avg/Max: 0.43 / 0.43 / 0.43)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Aesara - Project Size: 1048576 - Benchmark: Isoneutral Mixing (Seconds, Fewer Is Better)
A: 0.582 | B: 0.574 (SE +/- 0.001, N = 3; Min/Avg/Max: 0.57 / 0.57 / 0.58) | C: 0.573 (SE +/- 0.001, N = 3; Min/Avg/Max: 0.57 / 0.57 / 0.58)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: PyTorch - Project Size: 4194304 - Benchmark: Isoneutral Mixing (Seconds, Fewer Is Better)
A: 2.565 | B: 2.604 (SE +/- 0.022, N = 12; Min/Avg/Max: 2.51 / 2.6 / 2.75) | C: 2.605 (SE +/- 0.022, N = 12; Min/Avg/Max: 2.54 / 2.6 / 2.82)

JPEG XL Decoding libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is suited for JPEG XL decode performance testing to a PNG output file; the pts/jpexl test profile covers encode performance. The JPEG XL encoding/decoding is done using the libjxl codebase. Learn more via the OpenBenchmarking.org test page.
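
A comparable manual decode with libjxl's djxl tool could be sketched as follows; the file names are placeholders, and thread-count control is left at the tool's default here rather than assuming specific flags.

    # Minimal sketch: time decoding a JPEG XL file to PNG with libjxl's djxl tool.
    # "sample.jxl" is a placeholder input.
    import subprocess
    import time

    start = time.perf_counter()
    subprocess.run(["djxl", "sample.jxl", "decoded.png"], check=True)
    print(f"{time.perf_counter() - start:.2f} seconds")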

JPEG XL Decoding libjxl 0.6.1 - CPU Threads: All (MP/s, More Is Better)
A: 40.77 | B: 40.73 (SE +/- 0.03, N = 3; Min/Avg/Max: 40.67 / 40.73 / 40.79) | C: 40.18 (SE +/- 0.40, N = 3; Min/Avg/Max: 39.38 / 40.18 / 40.64)

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. The system/openssl test profile relies on benchmarking the system/OS-supplied openssl binary rather than the pts/openssl test profile, which uses a locally built OpenSSL. Learn more via the OpenBenchmarking.org test page.

OpenSSL (sign/s, More Is Better)
A: 442.5 | B: 446.8 (SE +/- 5.23, N = 3; Min/Avg/Max: 437 / 446.77 / 454.9) | C: 448.7 (SE +/- 3.09, N = 3; Min/Avg/Max: 443.8 / 448.7 / 454.4)
1. OpenSSL 1.1.1l 24 Aug 2021

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numba - Project Size: 4194304 - Benchmark: Equation of State (Seconds, Fewer Is Better)
A: 0.357 | B: 0.358 (SE +/- 0.001, N = 3; Min/Avg/Max: 0.36 / 0.36 / 0.36) | C: 0.362 (SE +/- 0.002, N = 3; Min/Avg/Max: 0.36 / 0.36 / 0.37)

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.2 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
A: 40.01 | B: 39.65 (SE +/- 0.33, N = 3; Min/Avg/Max: 39.07 / 39.65 / 40.2) | C: 39.47 (SE +/- 0.30, N = 3; Min/Avg/Max: 38.92 / 39.47 / 39.94)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1 3.2 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
A: 3.07 | B: 3.03 (SE +/- 0.00, N = 3; Min/Avg/Max: 3.03 / 3.03 / 3.03) | C: 3.03 (SE +/- 0.01, N = 3; Min/Avg/Max: 3.01 / 3.03 / 3.05)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numba - Project Size: 4194304 - Benchmark: Isoneutral Mixing (Seconds, Fewer Is Better)
A: 1.824 | B: 1.845 (SE +/- 0.005, N = 3; Min/Avg/Max: 1.84 / 1.85 / 1.85) | C: 1.822 (SE +/- 0.002, N = 3; Min/Avg/Max: 1.82 / 1.82 / 1.82)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: JAX - Project Size: 262144 - Benchmark: Isoneutral Mixing (Seconds, Fewer Is Better)
A: 0.083 | B: 0.082 (SE +/- 0.001, N = 3; Min/Avg/Max: 0.08 / 0.08 / 0.08) | C: 0.082 (SE +/- 0.000, N = 3; Min/Avg/Max: 0.08 / 0.08 / 0.08)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: JAX - Project Size: 1048576 - Benchmark: Isoneutral Mixing (Seconds, Fewer Is Better)
A: 0.345 | B: 0.346 (SE +/- 0.001, N = 3; Min/Avg/Max: 0.35 / 0.35 / 0.35) | C: 0.342 (SE +/- 0.001, N = 3; Min/Avg/Max: 0.34 / 0.34 / 0.34)

rav1e

Xiph rav1e is a Rust-written AV1 video encoder that claims to be the fastest and safest AV1 encoder. Learn more via the OpenBenchmarking.org test page.
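
A standalone rav1e encode at the speed levels reported below could be sketched like this; the input file name is a placeholder, -s selects the speed preset, and -o names the output file.

    # Minimal sketch: time a rav1e encode at speed 10.
    import subprocess
    import time

    start = time.perf_counter()
    subprocess.run(["rav1e", "-s", "10", "-o", "out.ivf", "input.y4m"], check=True)
    print(f"{time.perf_counter() - start:.1f} s")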

rav1e 0.5 - Speed: 10 (Frames Per Second, More Is Better)
A: 2.640 | B: 2.617 (SE +/- 0.017, N = 3; Min/Avg/Max: 2.6 / 2.62 / 2.65) | C: 2.610 (SE +/- 0.014, N = 3; Min/Avg/Max: 2.59 / 2.61 / 2.64)

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numba - Project Size: 262144 - Benchmark: Isoneutral Mixing (Seconds, Fewer Is Better)
A: 0.096 | B: 0.095 (SE +/- 0.000, N = 3; Min/Avg/Max: 0.1 / 0.1 / 0.1) | C: 0.095 (SE +/- 0.001, N = 3; Min/Avg/Max: 0.09 / 0.1 / 0.1)

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.2 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better)
A: 12.43 | B: 12.56 (SE +/- 0.03, N = 3; Min/Avg/Max: 12.53 / 12.56 / 12.62) | C: 12.53 (SE +/- 0.01, N = 3; Min/Avg/Max: 12.51 / 12.53 / 12.56)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.
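
A comparable manual encode with libjxl's cjxl tool could be sketched as follows; file names are placeholders, and -e sets the effort level, which is only loosely analogous to the encode-speed settings reported below.

    # Minimal sketch: time a JPEG XL encode with cjxl at effort level 7.
    import subprocess
    import time

    start = time.perf_counter()
    subprocess.run(["cjxl", "input.png", "output.jxl", "-e", "7"], check=True)
    print(f"{time.perf_counter() - start:.2f} s")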

JPEG XL libjxl 0.6.1 - Input: JPEG - Encode Speed: 7 (MP/s, More Is Better)
A: 37.82 | B: 37.65 (SE +/- 0.05, N = 3; Min/Avg/Max: 37.57 / 37.65 / 37.74) | C: 37.45 (SE +/- 0.24, N = 3; Min/Avg/Max: 37 / 37.45 / 37.82)
1. (CXX) g++ options: -funwind-tables -O3 -O2 -fPIE -pie

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inference and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.9.1 - Model: shufflenet-v2-10 - Device: CPU (Inferences Per Minute, More Is Better)
A: 11456 | B: 11515 (SE +/- 65.71, N = 3; Min/Avg/Max: 11431.5 / 11514.83 / 11644.5) | C: 11410 (SE +/- 107.01, N = 3; Min/Avg/Max: 11285 / 11409.5 / 11622.5)
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 262144 - Benchmark: Isoneutral Mixing (Seconds, Fewer Is Better)
A: 0.220 | B: 0.220 (SE +/- 0.000, N = 3; Min/Avg/Max: 0.22 / 0.22 / 0.22) | C: 0.218 (SE +/- 0.001, N = 3; Min/Avg/Max: 0.22 / 0.22 / 0.22)

Zstd Compression

This test measures the time needed to compress/decompress a sample input file using Zstd compression supplied by the system or otherwise provided externally to the test profile. Learn more via the OpenBenchmarking.org test page.

Zstd Compression - Compression Level: 19 - Decompression Speed (MB/s, More Is Better)
A: 2184.1 | B: 2195.0 (SE +/- 4.79, N = 3; Min/Avg/Max: 2186.7 / 2194.97 / 2203.3) | C: 2203.0 (SE +/- 0.21, N = 3; Min/Avg/Max: 2202.7 / 2203 / 2203.4)
1. *** zstd command line interface 64-bits v1.4.8, by Yann Collet ***

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 1048576 - Benchmark: Isoneutral Mixing (Seconds, Fewer Is Better)
A: 0.947 | B: 0.955 (SE +/- 0.005, N = 3; Min/Avg/Max: 0.95 / 0.95 / 0.97) | C: 0.947 (SE +/- 0.001, N = 3; Min/Avg/Max: 0.95 / 0.95 / 0.95)

rav1e

Xiph rav1e is a Rust-written AV1 video encoder that claims to be the fastest and safest AV1 encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.5 - Speed: 6 (Frames Per Second, More Is Better)
A: 0.845 | B: 0.838 (SE +/- 0.004, N = 3; Min/Avg/Max: 0.83 / 0.84 / 0.85) | C: 0.838 (SE +/- 0.004, N = 3; Min/Avg/Max: 0.83 / 0.84 / 0.85)

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

PyHPC Benchmarks 3.0 - Device: CPU - Backend: TensorFlow - Project Size: 4194304 - Benchmark: Equation of State (Seconds, Fewer Is Better)
A: 0.247 | B: 0.249 (SE +/- 0.001, N = 3; Min/Avg/Max: 0.25 / 0.25 / 0.25) | C: 0.247 (SE +/- 0.000, N = 3; Min/Avg/Max: 0.25 / 0.25 / 0.25)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Aesara - Project Size: 262144 - Benchmark: Isoneutral Mixing (Seconds, Fewer Is Better)
A: 0.131 | B: 0.132 (SE +/- 0.000, N = 3; Min/Avg/Max: 0.13 / 0.13 / 0.13) | C: 0.132 (SE +/- 0.000, N = 3; Min/Avg/Max: 0.13 / 0.13 / 0.13)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: PyTorch - Project Size: 262144 - Benchmark: Isoneutral Mixing (Seconds, Fewer Is Better)
A: 0.135 | B: 0.134 (SE +/- 0.000, N = 3; Min/Avg/Max: 0.13 / 0.13 / 0.13) | C: 0.134 (SE +/- 0.000, N = 3; Min/Avg/Max: 0.13 / 0.13 / 0.14)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: JAX - Project Size: 4194304 - Benchmark: Isoneutral Mixing (Seconds, Fewer Is Better)
A: 1.578 | B: 1.585 (SE +/- 0.001, N = 3; Min/Avg/Max: 1.58 / 1.59 / 1.59) | C: 1.574 (SE +/- 0.003, N = 3; Min/Avg/Max: 1.57 / 1.57 / 1.58)

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.2 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K (Frames Per Second, More Is Better)
A: 1.45 | B: 1.44 (SE +/- 0.00, N = 3; Min/Avg/Max: 1.44 / 1.44 / 1.44) | C: 1.44 (SE +/- 0.00, N = 3; Min/Avg/Max: 1.44 / 1.44 / 1.44)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Equation of State (Seconds, Fewer Is Better)
A: 2.441 | B: 2.453 (SE +/- 0.002, N = 3; Min/Avg/Max: 2.45 / 2.45 / 2.46) | C: 2.437 (SE +/- 0.002, N = 3; Min/Avg/Max: 2.43 / 2.44 / 2.44)

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

JPEG XL libjxl 0.6.1 - Input: JPEG - Encode Speed: 8 (MP/s, More Is Better)
A: 16.26 | B: 16.34 (SE +/- 0.02, N = 3; Min/Avg/Max: 16.29 / 16.34 / 16.36) | C: 16.36 (SE +/- 0.02, N = 3; Min/Avg/Max: 16.33 / 16.36 / 16.39)
1. (CXX) g++ options: -funwind-tables -O3 -O2 -fPIE -pie

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 1048576 - Benchmark: Equation of State (Seconds, Fewer Is Better)
A: 0.515 | B: 0.516 (SE +/- 0.002, N = 3; Min/Avg/Max: 0.51 / 0.52 / 0.52) | C: 0.518 (SE +/- 0.001, N = 3; Min/Avg/Max: 0.52 / 0.52 / 0.52)

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.2 - Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better)
A: 14.22 | B: 14.17 (SE +/- 0.03, N = 3; Min/Avg/Max: 14.14 / 14.17 / 14.23) | C: 14.25 (SE +/- 0.04, N = 3; Min/Avg/Max: 14.17 / 14.25 / 14.3)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

rav1e

Xiph rav1e is a Rust-written AV1 video encoder that claims to be the fastest and safest AV1 encoder. Learn more via the OpenBenchmarking.org test page.

rav1e 0.5 - Speed: 5 (Frames Per Second, More Is Better)
A: 0.614 | B: 0.615 (SE +/- 0.002, N = 3; Min/Avg/Max: 0.61 / 0.62 / 0.62) | C: 0.612 (SE +/- 0.002, N = 3; Min/Avg/Max: 0.61 / 0.61 / 0.61)

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.2 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better)
A: 8.26 | B: 8.23 (SE +/- 0.01, N = 3; Min/Avg/Max: 8.2 / 8.23 / 8.25) | C: 8.22 (SE +/- 0.03, N = 3; Min/Avg/Max: 8.18 / 8.22 / 8.28)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1 3.2 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better)
A: 2.33 | B: 2.32 (SE +/- 0.01, N = 3; Min/Avg/Max: 2.31 / 2.32 / 2.34) | C: 2.32 (SE +/- 0.01, N = 3; Min/Avg/Max: 2.31 / 2.32 / 2.33)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program; on Windows it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.24 - Test: auto-levels (Seconds, Fewer Is Better)
A: 19.24 | B: 19.17 (SE +/- 0.01, N = 3; Min/Avg/Max: 19.15 / 19.17 / 19.2) | C: 19.16 (SE +/- 0.01, N = 3; Min/Avg/Max: 19.15 / 19.16 / 19.18)

GIMP 2.10.24 - Test: rotate (Seconds, Fewer Is Better)
A: 16.81 | B: 16.77 (SE +/- 0.02, N = 3; Min/Avg/Max: 16.73 / 16.77 / 16.8) | C: 16.74 (SE +/- 0.04, N = 3; Min/Avg/Max: 16.65 / 16.74 / 16.79)

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Aesara - Project Size: 4194304 - Benchmark: Isoneutral Mixing (Seconds, Fewer Is Better)
A: 2.480 | B: 2.483 (SE +/- 0.010, N = 3; Min/Avg/Max: 2.47 / 2.48 / 2.5) | C: 2.490 (SE +/- 0.007, N = 3; Min/Avg/Max: 2.48 / 2.49 / 2.5)

Zstd Compression

This test measures the time needed to compress/decompress a sample input file using Zstd compression supplied by the system or otherwise provided externally to the test profile. Learn more via the OpenBenchmarking.org test page.

Zstd Compression - Compression Level: 3 - Decompression Speed (MB/s, More Is Better)
A: 2445.2 | B: 2455.0 (SE +/- 1.48, N = 3; Min/Avg/Max: 2452.6 / 2455 / 2457.7) | C: 2454.1 (SE +/- 0.62, N = 3; Min/Avg/Max: 2452.9 / 2454.13 / 2454.9)
1. *** zstd command line interface 64-bits v1.4.8, by Yann Collet ***

Zstd Compression - Compression Level: 8 - Decompression Speed (MB/s, More Is Better)
A: 2523.3 | B: 2532.0 (SE +/- 1.17, N = 3; Min/Avg/Max: 2529.8 / 2532 / 2533.8) | C: 2531.2 (SE +/- 1.22, N = 3; Min/Avg/Max: 2529.2 / 2531.17 / 2533.4)
1. *** zstd command line interface 64-bits v1.4.8, by Yann Collet ***

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Isoneutral Mixing (Seconds, Fewer Is Better)
A: 3.849 | B: 3.850 (SE +/- 0.006, N = 3; Min/Avg/Max: 3.84 / 3.85 / 3.86) | C: 3.837 (SE +/- 0.002, N = 3; Min/Avg/Max: 3.83 / 3.84 / 3.84)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: PyTorch - Project Size: 1048576 - Benchmark: Isoneutral Mixing (Seconds, Fewer Is Better)
A: 0.616 | B: 0.618 (SE +/- 0.002, N = 3; Min/Avg/Max: 0.62 / 0.62 / 0.62) | C: 0.616 (SE +/- 0.003, N = 3; Min/Avg/Max: 0.61 / 0.62 / 0.62)

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.2 - Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
A: 44.37 | B: 44.24 (SE +/- 0.27, N = 3; Min/Avg/Max: 43.71 / 44.24 / 44.51) | C: 44.28 (SE +/- 0.41, N = 3; Min/Avg/Max: 43.48 / 44.28 / 44.79)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

JPEG XL Decoding libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is suited for JPEG XL decode performance testing to a PNG output file; the pts/jpexl test profile covers encode performance. The JPEG XL encoding/decoding is done using the libjxl codebase. Learn more via the OpenBenchmarking.org test page.

JPEG XL Decoding libjxl 0.6.1 - CPU Threads: 1 (MP/s, More Is Better)
A: 34.16 | B: 34.17 (SE +/- 0.03, N = 3; Min/Avg/Max: 34.12 / 34.17 / 34.23) | C: 34.07 (SE +/- 0.07, N = 3; Min/Avg/Max: 33.94 / 34.07 / 34.16)

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program; on Windows it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.24 - Test: unsharp-mask (Seconds, Fewer Is Better)
A: 22.17 | B: 22.14 (SE +/- 0.03, N = 3; Min/Avg/Max: 22.08 / 22.14 / 22.17) | C: 22.11 (SE +/- 0.01, N = 3; Min/Avg/Max: 22.08 / 22.11 / 22.12)

ASTC Encoder

ASTC Encoder (astcenc) targets the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test covering both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 3.2 - Preset: Medium (Seconds, Fewer Is Better)
A: 6.3797 | B: 6.3961 (SE +/- 0.0104, N = 3; Min/Avg/Max: 6.38 / 6.4 / 6.41) | C: 6.3901 (SE +/- 0.0093, N = 3; Min/Avg/Max: 6.37 / 6.39 / 6.4)
1. (CXX) g++ options: -O3 -flto -pthread

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenSSL 3.0 - Algorithm: RSA4096 (verify/s, More Is Better)
A: 27518.4 | B: 27587.6 (SE +/- 83.67, N = 3; Min/Avg/Max: 27424 / 27587.6 / 27699.9) | C: 27582.3 (SE +/- 47.22, N = 3; Min/Avg/Max: 27487.9 / 27582.33 / 27629.7)
1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

Zstd Compression

This test measures the time needed to compress/decompress a sample input file using Zstd compression supplied by the system or otherwise provided externally to the test profile. Learn more via the OpenBenchmarking.org test page.

Zstd Compression - Compression Level: 8, Long Mode - Decompression Speed (MB/s, More Is Better)
A: 2690.7 | B: 2697.3 (SE +/- 3.40, N = 3; Min/Avg/Max: 2692.8 / 2697.33 / 2704) | C: 2696.9 (SE +/- 2.29, N = 3; Min/Avg/Max: 2692.5 / 2696.9 / 2700.2)
1. *** zstd command line interface 64-bits v1.4.8, by Yann Collet ***

ASTC Encoder

ASTC Encoder (astcenc) targets the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test covering both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 3.2 - Preset: Exhaustive (Seconds, Fewer Is Better)
A: 425.14 | B: 426.14 (SE +/- 0.42, N = 3; Min/Avg/Max: 425.32 / 426.14 / 426.74) | C: 425.94 (SE +/- 0.58, N = 3; Min/Avg/Max: 424.83 / 425.94 / 426.81)
1. (CXX) g++ options: -O3 -flto -pthread

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenSSL 3.0 - Algorithm: SHA256 (byte/s, More Is Better)
A: 468102670 | B: 467779183 (SE +/- 227275.07, N = 3; Min/Avg/Max: 467446370 / 467779183.33 / 468213710) | C: 467121283 (SE +/- 290038.41, N = 3; Min/Avg/Max: 466821510 / 467121283.33 / 467701250)
1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

BLAKE2

This is a benchmark of BLAKE2 using the blake2s binary. BLAKE2 is a high-performance crypto alternative to MD5 and SHA-2/3. Learn more via the OpenBenchmarking.org test page.
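
The test itself times the blake2s reference binary in cycles per byte; purely for orientation, Python's hashlib exposes the same algorithm, so a rough throughput check (MB/s rather than cycles per byte) can be sketched as:

    # Minimal sketch: hash a buffer with BLAKE2s via hashlib and report MB/s.
    # The benchmark proper reports cycles per byte from the reference blake2s binary.
    import hashlib
    import time

    data = b"\x00" * (64 * 1024 * 1024)
    start = time.perf_counter()
    hashlib.blake2s(data).hexdigest()
    elapsed = time.perf_counter() - start
    print(f"{len(data) / 1e6 / elapsed:.1f} MB/s")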

BLAKE2 20170307 (Cycles Per Byte, Fewer Is Better)
A: 5.12 | B: 5.13 (SE +/- 0.00, N = 3; Min/Avg/Max: 5.13 / 5.13 / 5.13) | C: 5.12 (SE +/- 0.00, N = 3; Min/Avg/Max: 5.12 / 5.12 / 5.13)
1. (CC) gcc options: -O3 -march=native -lcrypto -lz

Zstd Compression

This test measures the time needed to compress/decompress a sample input file using Zstd compression supplied by the system or otherwise provided externally to the test profile. Learn more via the OpenBenchmarking.org test page.

Zstd Compression - Compression Level: 3, Long Mode - Decompression Speed (MB/s, More Is Better)
A: 2616.3 | B: 2614.2 (SE +/- 3.73, N = 3; Min/Avg/Max: 2607 / 2614.23 / 2619.4) | C: 2612.0 (SE +/- 5.65, N = 3; Min/Avg/Max: 2601.1 / 2612 / 2620)
1. *** zstd command line interface 64-bits v1.4.8, by Yann Collet ***

Zstd Compression - Compression Level: 19, Long Mode - Decompression Speed (MB/s, More Is Better)
A: 2262.7 | B: 2263.3 (SE +/- 4.68, N = 3; Min/Avg/Max: 2257.7 / 2263.3 / 2272.6) | C: 2263.7 (SE +/- 1.45, N = 12; Min/Avg/Max: 2256.1 / 2263.73 / 2270.9)
1. *** zstd command line interface 64-bits v1.4.8, by Yann Collet ***

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.
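
To make the shape of these measurements concrete, the sketch below times a small vectorised NumPy kernel at one project size; the kernel is a stand-in for illustration, not the suite's actual equation-of-state implementation.

    # Minimal sketch: time an element-wise NumPy kernel at a fixed problem size.
    import time
    import numpy as np

    def toy_kernel(temperature, salinity, pressure):
        # Illustrative nonlinear expression, not the real equation of state.
        return (1000.0 + 0.8 * salinity - 0.2 * temperature
                + 4.5e-3 * pressure - 1e-4 * temperature ** 2)

    size = 16384
    rng = np.random.default_rng(0)
    temperature, salinity, pressure = (rng.random(size) for _ in range(3))

    start = time.perf_counter()
    toy_kernel(temperature, salinity, pressure)
    print(f"size {size}: {time.perf_counter() - start:.6f} s")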

PyHPC Benchmarks 3.0 - Device: CPU - Backend: TensorFlow - Project Size: 262144 - Benchmark: Equation of State (Seconds, fewer is better)
  A: 0.013 | B: 0.013 (SE +/- 0.000, N = 15) | C: 0.013 (SE +/- 0.000, N = 15)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: TensorFlow - Project Size: 16384 - Benchmark: Equation of State (Seconds, fewer is better)
  A: 0.004 | B: 0.004 (SE +/- 0.000, N = 3) | C: 0.004 (SE +/- 0.000, N = 3)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: PyTorch - Project Size: 65536 - Benchmark: Isoneutral Mixing (Seconds, fewer is better)
  A: 0.033 | B: 0.033 (SE +/- 0.000, N = 3) | C: 0.033 (SE +/- 0.000, N = 3)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: PyTorch - Project Size: 65536 - Benchmark: Equation of State (Seconds, fewer is better)
  A: 0.003 | B: 0.003 (SE +/- 0.000, N = 3) | C: 0.003 (SE +/- 0.000, N = 3)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: PyTorch - Project Size: 16384 - Benchmark: Equation of State (Seconds, fewer is better)
  A: 0.001 | B: 0.001 (SE +/- 0.000, N = 3) | C: 0.001 (SE +/- 0.000, N = 3)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numba - Project Size: 1048576 - Benchmark: Equation of State (Seconds, fewer is better)
  A: 0.09 | B: 0.09 (SE +/- 0.00, N = 3) | C: 0.09 (SE +/- 0.00, N = 3)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Aesara - Project Size: 262144 - Benchmark: Equation of State (Seconds, fewer is better)
  A: 0.028 | B: 0.028 (SE +/- 0.000, N = 3) | C: 0.028 (SE +/- 0.000, N = 6)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numba - Project Size: 262144 - Benchmark: Equation of State (Seconds, fewer is better)
  A: 0.022 | B: 0.022 (SE +/- 0.000, N = 3) | C: 0.022 (SE +/- 0.000, N = 3)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Aesara - Project Size: 65536 - Benchmark: Equation of State (Seconds, fewer is better)
  A: 0.008 | B: 0.008 (SE +/- 0.000, N = 3) | C: 0.008 (SE +/- 0.000, N = 3)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Aesara - Project Size: 16384 - Benchmark: Isoneutral Mixing (Seconds, fewer is better)
  A: 0.008 | B: 0.008 (SE +/- 0.000, N = 3) | C: 0.008 (SE +/- 0.000, N = 3)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Aesara - Project Size: 16384 - Benchmark: Equation of State (Seconds, fewer is better)
  A: 0.002 | B: 0.002 (SE +/- 0.000, N = 3) | C: 0.002 (SE +/- 0.000, N = 3)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 16384 - Benchmark: Isoneutral Mixing (Seconds, fewer is better)
  A: 0.014 | B: 0.014 (SE +/- 0.000, N = 3) | C: 0.014 (SE +/- 0.000, N = 3)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numpy - Project Size: 16384 - Benchmark: Equation of State (Seconds, fewer is better)
  A: 0.007 | B: 0.007 (SE +/- 0.000, N = 3) | C: 0.007 (SE +/- 0.000, N = 3)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numba - Project Size: 65536 - Benchmark: Isoneutral Mixing (Seconds, fewer is better)
  A: 0.025 | B: 0.025 (SE +/- 0.000, N = 3) | C: 0.025 (SE +/- 0.000, N = 3)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numba - Project Size: 65536 - Benchmark: Equation of State (Seconds, fewer is better)
  A: 0.006 | B: 0.006 (SE +/- 0.000, N = 3) | C: 0.006 (SE +/- 0.000, N = 3)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numba - Project Size: 16384 - Benchmark: Isoneutral Mixing (Seconds, fewer is better)
  A: 0.006 | B: 0.006 (SE +/- 0.000, N = 3) | C: 0.006 (SE +/- 0.000, N = 3)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: Numba - Project Size: 16384 - Benchmark: Equation of State (Seconds, fewer is better)
  A: 0.002 | B: 0.002 (SE +/- 0.000, N = 3) | C: 0.002 (SE +/- 0.000, N = 3)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: JAX - Project Size: 262144 - Benchmark: Equation of State (Seconds, fewer is better)
  A: 0.007 | B: 0.007 (SE +/- 0.000, N = 3) | C: 0.007 (SE +/- 0.000, N = 3)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: JAX - Project Size: 65536 - Benchmark: Equation of State (Seconds, fewer is better)
  A: 0.002 | B: 0.002 (SE +/- 0.000, N = 3) | C: 0.002 (SE +/- 0.000, N = 3)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: JAX - Project Size: 16384 - Benchmark: Isoneutral Mixing (Seconds, fewer is better)
  A: 0.005 | B: 0.005 (SE +/- 0.000, N = 3) | C: 0.005 (SE +/- 0.000, N = 3)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: JAX - Project Size: 16384 - Benchmark: Equation of State (Seconds, fewer is better)
  A: 0.001 | B: 0.001 (SE +/- 0.000, N = 3) | C: 0.001 (SE +/- 0.000, N = 3)

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.
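
As a sketch of what a comparable encode might look like outside the test harness, the snippet below drives a two-pass libaom encode at --cpu-used=6; the input clip name and first-pass stats file are assumptions, and the test profile's exact aomenc arguments are not shown in this file.

    # Minimal sketch (assumed aomenc flags): two-pass AV1 encode at speed level 6.
    import subprocess

    subprocess.run(["aomenc", "--passes=2", "--cpu-used=6", "--fpf=av1_pass1.log",
                    "-o", "output.ivf", "Bosphorus_1920x1080.y4m"], check=True)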

AOM AV1 3.2 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, more is better)
  A: 4.62
  B: 4.62 (SE +/- 0.01, N = 3; min 4.61 / max 4.63)
  C: 4.62 (SE +/- 0.01, N = 3; min 4.61 / max 4.63)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG capabilities, offering better image quality and compression ratios than legacy JPEG. This test profile currently focuses on multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.
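
For orientation, a minimal sketch of encoding a PNG with the reference cjxl tool at an effort level corresponding to the "Encode Speed: 8" rows; the input file name is an assumption, and older libjxl releases spelled the effort option differently (check cjxl --help for the exact flag in 0.6.1).

    # Minimal sketch (assumed cjxl flag spelling): encode a PNG to JPEG XL and time it.
    import subprocess, time

    start = time.perf_counter()
    subprocess.run(["cjxl", "input.png", "output.jxl", "--effort=8"], check=True)
    print(f"encode took {time.perf_counter() - start:.2f} s")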

JPEG XL libjxl 0.6.1 - Input: PNG - Encode Speed: 8 (MP/s, more is better)
  A: 0.50
  B: 0.50 (SE +/- 0.00, N = 3; min 0.49 / max 0.5)
  C: 0.50 (SE +/- 0.00, N = 3; min 0.5 / max 0.51)
  1. (CXX) g++ options: -funwind-tables -O3 -O2 -fPIE -pie

JPEG XL libjxl 0.6.1 - Input: PNG - Encode Speed: 7 (MP/s, more is better)
  A: 3.16
  B: 3.16 (SE +/- 0.00, N = 3; min 3.16 / max 3.16)
  C: 3.16 (SE +/- 0.00, N = 3; min 3.16 / max 3.17)
  1. (CXX) g++ options: -funwind-tables -O3 -O2 -fPIE -pie

PyHPC Benchmarks

PyHPC-Benchmarks is a suite of Python high performance computing benchmarks for execution on CPUs and GPUs using various popular Python HPC libraries. The PyHPC CPU-based benchmarks focus on sequential CPU performance. Learn more via the OpenBenchmarking.org test page.

Device: CPU - Backend: TensorFlow - Project Size: 4194304 - Benchmark: Isoneutral Mixing

A: Test failed to run.

B: Test failed to run.

C: Test failed to run.

Device: CPU - Backend: TensorFlow - Project Size: 1048576 - Benchmark: Isoneutral Mixing

A: Test failed to run.

B: Test failed to run.

C: Test failed to run.

Device: CPU - Backend: TensorFlow - Project Size: 262144 - Benchmark: Isoneutral Mixing

A: Test failed to run.

B: Test failed to run.

C: Test failed to run.

Device: CPU - Backend: TensorFlow - Project Size: 65536 - Benchmark: Isoneutral Mixing

A: Test failed to run.

B: Test failed to run.

C: Test failed to run.

PyHPC Benchmarks 3.0 - Device: CPU - Backend: TensorFlow - Project Size: 65536 - Benchmark: Equation of State (Seconds, fewer is better)
  A: 0.004 | B: 0.006 (SE +/- 0.001, N = 15) | C: 0.006 (SE +/- 0.000, N = 12)

Device: CPU - Backend: TensorFlow - Project Size: 16384 - Benchmark: Isoneutral Mixing

A: Test failed to run.

B: Test failed to run.

C: Test failed to run.

PyHPC Benchmarks 3.0 - Device: CPU - Backend: PyTorch - Project Size: 4194304 - Benchmark: Equation of State (Seconds, fewer is better)
  A: 0.189
  B: 0.182 (SE +/- 0.002, N = 15; min 0.17 / max 0.2)
  C: 0.185 (SE +/- 0.004, N = 15; min 0.18 / max 0.23)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: PyTorch - Project Size: 1048576 - Benchmark: Equation of State (Seconds, fewer is better)
  A: 0.050
  B: 0.045 (SE +/- 0.001, N = 15; min 0.04 / max 0.05)
  C: 0.044 (SE +/- 0.001, N = 12; min 0.04 / max 0.05)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: PyTorch - Project Size: 262144 - Benchmark: Equation of State (Seconds, fewer is better)
  A: 0.010 | B: 0.010 (SE +/- 0.000, N = 15) | C: 0.010 (SE +/- 0.000, N = 15)

PyHPC Benchmarks 3.0 - Device: CPU - Backend: PyTorch - Project Size: 16384 - Benchmark: Isoneutral Mixing (Seconds, fewer is better)
  A: 0.009 | B: 0.009 (SE +/- 0.000, N = 3) | C: 0.009 (SE +/- 0.000, N = 15)

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.
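
For context, a single CPU inference of an ONNX Zoo model can be run with the ONNX Runtime Python API as sketched below; the model path and input shape are assumptions based on the super-resolution-10 model, and this is not necessarily how the test profile itself drives its measurements.

    # Minimal sketch: one CPU inference with the ONNX Runtime Python API.
    import numpy as np
    import onnxruntime as ort

    session = ort.InferenceSession("super-resolution-10.onnx",
                                   providers=["CPUExecutionProvider"])
    input_name = session.get_inputs()[0].name
    dummy = np.random.rand(1, 1, 224, 224).astype(np.float32)  # assumed input shape
    outputs = session.run(None, {input_name: dummy})
    print(outputs[0].shape)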

ONNX Runtime 1.9.1 - Model: super-resolution-10 - Device: CPU (Inferences Per Minute, more is better)
  A: 910
  B: 861 (SE +/- 21.99, N = 12; min 679.5 / max 909)
  C: 897 (SE +/- 7.02, N = 3; min 889 / max 911)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

Sockperf

This is a network socket API performance benchmark developed by Mellanox. This test profile runs both the client and server on the local host for evaluating individual system performance. Learn more via the OpenBenchmarking.org test page.
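
A minimal sketch of the loopback setup described above, launching a sockperf server and a ping-pong client against it from Python; the port, duration, and flag spellings are assumptions and may vary between sockperf versions.

    # Minimal sketch: sockperf server + ping-pong client on the local host.
    import subprocess, time

    server = subprocess.Popen(["sockperf", "server", "-p", "11111"])
    time.sleep(1)  # give the server a moment to start listening
    try:
        subprocess.run(["sockperf", "ping-pong", "-i", "127.0.0.1",
                        "-p", "11111", "-t", "10"], check=True)
    finally:
        server.terminate()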

Sockperf 3.7 - Test: Latency Under Load (usec, fewer is better)
  A: 17.33
  B: 13.21 (SE +/- 0.64, N = 25; min 7.34 / max 18.14)
  C: 13.74 (SE +/- 0.64, N = 25; min 7.17 / max 17.97)
  1. (CXX) g++ options: --param -O3 -rdynamic

Sockperf 3.7 - Test: Latency Ping Pong (usec, fewer is better)
  A: 12.188
  B: 9.708 (SE +/- 0.791, N = 25; min 5.43 / max 21.79)
  C: 8.876 (SE +/- 0.595, N = 25; min 5.68 / max 17.35)
  1. (CXX) g++ options: --param -O3 -rdynamic

Sockperf 3.7 - Test: Throughput (Messages Per Second, more is better)
  A: 350505
  B: 322628 (SE +/- 6303.08, N = 25; min 264306 / max 353339)
  C: 316802 (SE +/- 7025.28, N = 25; min 250980 / max 361978)
  1. (CXX) g++ options: --param -O3 -rdynamic

107 Results Shown

ONNX Runtime
Zstd Compression
PyHPC Benchmarks
Zstd Compression:
  3 - Compression Speed
  8 - Compression Speed
PyHPC Benchmarks:
  CPU - TensorFlow - 1048576 - Equation of State
  CPU - JAX - 4194304 - Equation of State
Zstd Compression
PyHPC Benchmarks:
  CPU - Numpy - 65536 - Equation of State
  CPU - Aesara - 65536 - Isoneutral Mixing
  CPU - JAX - 1048576 - Equation of State
OpenSSL
Zstd Compression
PyHPC Benchmarks:
  CPU - Aesara - 1048576 - Equation of State
  CPU - Numpy - 262144 - Equation of State
Zstd Compression
AOM AV1
ASTC Encoder
ONNX Runtime
RAR Compression
OpenSSL
GIMP
PyHPC Benchmarks:
  CPU - Aesara - 4194304 - Equation of State
  CPU - Numpy - 65536 - Isoneutral Mixing
  CPU - Numba - 1048576 - Isoneutral Mixing
  CPU - Aesara - 1048576 - Isoneutral Mixing
  CPU - PyTorch - 4194304 - Isoneutral Mixing
JPEG XL Decoding libjxl
OpenSSL
PyHPC Benchmarks
AOM AV1:
  Speed 9 Realtime - Bosphorus 1080p
  Speed 6 Realtime - Bosphorus 1080p
PyHPC Benchmarks:
  CPU - Numba - 4194304 - Isoneutral Mixing
  CPU - JAX - 262144 - Isoneutral Mixing
  CPU - JAX - 1048576 - Isoneutral Mixing
rav1e
PyHPC Benchmarks
AOM AV1
JPEG XL libjxl
ONNX Runtime
PyHPC Benchmarks
Zstd Compression
PyHPC Benchmarks
rav1e
PyHPC Benchmarks:
  CPU - TensorFlow - 4194304 - Equation of State
  CPU - Aesara - 262144 - Isoneutral Mixing
  CPU - PyTorch - 262144 - Isoneutral Mixing
  CPU - JAX - 4194304 - Isoneutral Mixing
AOM AV1
PyHPC Benchmarks
JPEG XL libjxl
PyHPC Benchmarks
AOM AV1
rav1e
AOM AV1:
  Speed 8 Realtime - Bosphorus 4K
  Speed 6 Realtime - Bosphorus 4K
GIMP:
  auto-levels
  rotate
PyHPC Benchmarks
Zstd Compression:
  3 - Decompression Speed
  8 - Decompression Speed
PyHPC Benchmarks:
  CPU - Numpy - 4194304 - Isoneutral Mixing
  CPU - PyTorch - 1048576 - Isoneutral Mixing
AOM AV1
JPEG XL Decoding libjxl
GIMP
ASTC Encoder
OpenSSL
Zstd Compression
ASTC Encoder
OpenSSL
BLAKE2
Zstd Compression:
  3, Long Mode - Decompression Speed
  19, Long Mode - Decompression Speed
PyHPC Benchmarks:
  CPU - TensorFlow - 262144 - Equation of State
  CPU - TensorFlow - 16384 - Equation of State
  CPU - PyTorch - 65536 - Isoneutral Mixing
  CPU - PyTorch - 65536 - Equation of State
  CPU - PyTorch - 16384 - Equation of State
  CPU - Numba - 1048576 - Equation of State
  CPU - Aesara - 262144 - Equation of State
  CPU - Numba - 262144 - Equation of State
  CPU - Aesara - 65536 - Equation of State
  CPU - Aesara - 16384 - Isoneutral Mixing
  CPU - Aesara - 16384 - Equation of State
  CPU - Numpy - 16384 - Isoneutral Mixing
  CPU - Numpy - 16384 - Equation of State
  CPU - Numba - 65536 - Isoneutral Mixing
  CPU - Numba - 65536 - Equation of State
  CPU - Numba - 16384 - Isoneutral Mixing
  CPU - Numba - 16384 - Equation of State
  CPU - JAX - 262144 - Equation of State
  CPU - JAX - 65536 - Equation of State
  CPU - JAX - 16384 - Isoneutral Mixing
  CPU - JAX - 16384 - Equation of State
AOM AV1
JPEG XL libjxl:
  PNG - 8
  PNG - 7
PyHPC Benchmarks:
  CPU - TensorFlow - 65536 - Equation of State
  CPU - PyTorch - 4194304 - Equation of State
  CPU - PyTorch - 1048576 - Equation of State
  CPU - PyTorch - 262144 - Equation of State
  CPU - PyTorch - 16384 - Isoneutral Mixing
ONNX Runtime
Sockperf:
  Latency Under Load
  Latency Ping Pong
  Throughput