Smoke Run


Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2110234-TJ-SMOKERUN374

Run Details

Result Identifier: 5900X
Date: October 23 2021
Test Duration: 4 Hours, 39 Minutes


Smoke Run Benchmarks - System Details (OpenBenchmarking.org / Phoronix Test Suite)

Processor: AMD Ryzen 9 5900X 12-Core @ 3.70GHz (12 Cores / 24 Threads)
Motherboard: ASUS ROG CROSSHAIR VIII HERO (3801 BIOS)
Chipset: AMD Starship/Matisse
Memory: 16GB
Disk: 1000GB Western Digital WDS100T1X0E-00AFY0 + 0GB Ultra USB 3.0
Graphics: Gigabyte AMD Radeon RX 6800/6800 XT / 6900 16GB (2575/1000MHz)
Audio: AMD Navi 21 HDMI Audio
Monitor: ASUS VP28U
Network: Realtek RTL8125 2.5GbE + Intel I211
OS: Ubuntu 21.10
Kernel: 5.13.0-20-generic (x86_64)
Desktop: GNOME Shell 40.5
Display Server: X Server + Wayland
OpenGL: 4.6 Mesa 22.0.0-devel (git-c2d522b 2021-10-23 impish-oibaf-ppa) (LLVM 12.0.1 DRM 3.41 5.13.0-20-generic)
Vulkan: 1.2.195
Compiler: GCC 11.2.0
File-System: ext4
Screen Resolution: 3840x2160

System Logs Notes:
- Transparent Huge Pages: madvise
- CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native"
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-ZPT0kp/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-ZPT0kp/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Disk: NONE / errors=remount-ro,relatime,rw / Block Size: 4096
- Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0xa201016
- OpenJDK Runtime Environment (build 11.0.12+7-Ubuntu-0ubuntu3)
- Python 3.9.7
- Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Results Overview: the complete set of results shown in the overview graphic is presented individually in the per-test sections below.

Unpacking The Linux Kernel

This test measures how long it takes to extract the .tar.xz Linux kernel package. Learn more via the OpenBenchmarking.org test page.
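For illustration only, a minimal Python sketch of the same measurement, assuming linux-4.15.tar.xz sits in the working directory (the actual test profile drives the system tar utility rather than Python):

import tarfile
import time

start = time.perf_counter()
with tarfile.open("linux-4.15.tar.xz", mode="r:xz") as archive:
    archive.extractall(path="linux-4.15-extracted")  # decompress xz and unpack the source tree
print(f"Extraction took {time.perf_counter() - start:.3f} seconds")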

Unpacking The Linux Kernel (linux-4.15.tar.xz): 5900X = 3.789 Seconds, Fewer Is Better

WireGuard + Linux Networking Stack Stress Test

This is a benchmark of the WireGuard secure VPN tunnel and Linux networking stack stress test. The test runs on the local host but does require root permissions to run. The way it works is it creates three namespaces. ns0 has a loopback device. ns1 and ns2 each have wireguard devices. Those two wireguard devices send traffic through the loopback device of ns0. The end result of this is that tests wind up testing encryption and decryption at the same time -- a pretty CPU and scheduler-heavy workflow. Learn more via the OpenBenchmarking.org test page.

WireGuard + Linux Networking Stack Stress Test: 5900X = 147.80 Seconds, Fewer Is Better

QuantLib

QuantLib is an open-source library/framework around quantitative finance for modeling, trading and risk management scenarios. QuantLib is written in C++ with Boost and its built-in benchmark used reports the QuantLib Benchmark Index benchmark score. Learn more via the OpenBenchmarking.org test page.

QuantLib 1.21: 5900X = 3401.3 MFLOPS, More Is Better

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.28 (Backend: BLAS): 5900X = 956 Nodes Per Second, More Is Better

LeelaChessZero 0.28 (Backend: Eigen): 5900X = 936 Nodes Per Second, More Is Better

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14 (ATPase Simulation - 327,506 Atoms): 5900X = 1.29865 days/ns, Fewer Is Better

Timed MrBayes Analysis

This test performs a bayesian analysis of a set of primate genome sequences in order to estimate their phylogeny. Learn more via the OpenBenchmarking.org test page.

Timed MrBayes Analysis 3.2.7 (Primate Phylogeny Analysis): 5900X = 84.45 Seconds, Fewer Is Better

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 (Encode Settings: Default): 5900X = 1.063 Seconds, Fewer Is Better

WebP Image Encode 1.1 (Encode Settings: Quality 100): 5900X = 1.769 Seconds, Fewer Is Better

WebP Image Encode 1.1 (Encode Settings: Quality 100, Lossless): 5900X = 13.78 Seconds, Fewer Is Better

WebP Image Encode 1.1 (Encode Settings: Quality 100, Highest Compression): 5900X = 5.337 Seconds, Fewer Is Better

WebP Image Encode 1.1 (Encode Settings: Quality 100, Lossless, Highest Compression): 5900X = 28.58 Seconds, Fewer Is Better

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 1.0 (Throughput Test: Kostya): 5900X = 3.64 GB/s, More Is Better

simdjson 1.0 (Throughput Test: LargeRandom): 5900X = 1.3 GB/s, More Is Better

simdjson 1.0 (Throughput Test: PartialTweets): 5900X = 5.55 GB/s, More Is Better

simdjson 1.0 (Throughput Test: DistinctUserID): 5900X = 5.8 GB/s, More Is Better

Xmrig

Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

Xmrig 6.12.1 (Variant: Monero - Hash Count: 1M): 5900X = 9058.8 H/s, More Is Better

Xmrig 6.12.1 (Variant: Wownero - Hash Count: 1M): 5900X = 11015 H/s, More Is Better

Chia Blockchain VDF

Chia is a blockchain and smart transaction platform based on proofs of space and time rather than the proofs of work used by other cryptocurrencies. This test profile benchmarks CPU performance for the Chia VDF using the Chia VDF benchmark. The Chia VDF is the Chia Verifiable Delay Function (Proof of Time). Learn more via the OpenBenchmarking.org test page.
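As a rough illustration of what a VDF benchmark stresses, the Python sketch below models the "Square" workload as repeated modular squaring; the real Chia VDF squares elements of a class group of binary quadratic forms, and the modulus and iteration count here are made-up values:

# Toy VDF: each squaring depends on the previous result, so the work
# is inherently sequential and cannot be parallelized away.
N = (2**127 - 1) * (2**61 - 1)   # hypothetical composite modulus
y = 3
iterations = 1_000_000
for _ in range(iterations):
    y = (y * y) % N
print("VDF output after", iterations, "squarings:", y)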

Chia Blockchain VDF 1.0.1 (Test: Square Plain C++): 5900X = 209600 IPS, More Is Better

Chia Blockchain VDF 1.0.1 (Test: Square Assembly Optimized): 5900X = 179600 IPS, More Is Better

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

DaCapo Benchmark 9.12-MR1 (Java Test: Jython): 5900X = 3012 msec, Fewer Is Better

Java Test: Eclipse

5900X: The test quit with a non-zero exit status.

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

LZ4 Compression 1.9.3 (Compression Level: 3 - Compression Speed): 5900X = 67.52 MB/s, More Is Better

LZ4 Compression 1.9.3 (Compression Level: 3 - Decompression Speed): 5900X = 12985 MB/s, More Is Better

LZ4 Compression 1.9.3 (Compression Level: 9 - Compression Speed): 5900X = 68.26 MB/s, More Is Better

LZ4 Compression 1.9.3 (Compression Level: 9 - Decompression Speed): 5900X = 13003.8 MB/s, More Is Better

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 (Compression Level: 8 - Compression Speed): 5900X = 1393 MB/s, More Is Better

Zstd Compression 1.5.0 (Compression Level: 8 - Decompression Speed): 5900X = 4627.9 MB/s, More Is Better

Zstd Compression 1.5.0 (Compression Level: 19 - Compression Speed): 5900X = 44.5 MB/s, More Is Better

Zstd Compression 1.5.0 (Compression Level: 19 - Decompression Speed): 5900X = 3943.5 MB/s, More Is Better

Zstd Compression 1.5.0 (Compression Level: 19, Long Mode - Compression Speed): 5900X = 34.4 MB/s, More Is Better

Zstd Compression 1.5.0 (Compression Level: 19, Long Mode - Decompression Speed): 5900X = 4018.3 MB/s, More Is Better

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

JPEG XL libjxl 0.5 (Input: PNG - Encode Speed: 7): 5900X = 10.55 MP/s, More Is Better

JPEG XL libjxl 0.5 (Input: PNG - Encode Speed: 8): 5900X = 1.11 MP/s, More Is Better

JPEG XL libjxl 0.5 (Input: JPEG - Encode Speed: 7): 5900X = 94.03 MP/s, More Is Better

JPEG XL libjxl 0.5 (Input: JPEG - Encode Speed: 8): 5900X = 35.14 MP/s, More Is Better

JPEG XL Decoding libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is suited for JPEG XL decode performance testing to a PNG output file; the pts/jpegxl test covers encode performance. The JPEG XL encoding/decoding is done using the libjxl codebase. Learn more via the OpenBenchmarking.org test page.

JPEG XL Decoding libjxl 0.5 (CPU Threads: 1): 5900X = 70.16 MP/s, More Is Better

JPEG XL Decoding libjxl 0.5 (CPU Threads: All): 5900X = 360.45 MP/s, More Is Better

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

GNU Radio 3.8.2.0 (Test: Five Back to Back FIR Filters): 5900X = 1076 MiB/s, More Is Better

GNU Radio 3.8.2.0 (Test: Signal Source (Cosine)): 5900X = 4660 MiB/s, More Is Better

GNU Radio 3.8.2.0 (Test: FIR Filter): 5900X = 934.2 MiB/s, More Is Better

GNU Radio 3.8.2.0 (Test: IIR Filter): 5900X = 821.7 MiB/s, More Is Better

GNU Radio 3.8.2.0 (Test: FM Deemphasis Filter): 5900X = 1034.3 MiB/s, More Is Better

GNU Radio 3.8.2.0 (Test: Hilbert Transform): 5900X = 509.7 MiB/s, More Is Better

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.

LibRaw 0.20 (Post-Processing Benchmark): 5900X = 74.52 Mpix/sec, More Is Better

Crafty

This is a performance test of Crafty, an advanced open-source chess engine. Learn more via the OpenBenchmarking.org test page.

Crafty 25.2 (Elapsed Time): 5900X = 11192300 Nodes Per Second, More Is Better

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.9.2 (Video Input: Summer Nature 4K): 5900X = 240.07 FPS, More Is Better (MIN: 198.24 / MAX: 249.89)

OSPray

Intel OSPray is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray 1.8.5 (Demo: San Miguel - Renderer: SciVis): 5900X = 23.81 FPS, More Is Better (MIN: 23.26 / MAX: 25.64)

OSPray 1.8.5 (Demo: NASA Streamlines - Renderer: SciVis): 5900X = 33.33 FPS, More Is Better (MIN: 32.26)

OSPray 1.8.5 (Demo: Magnetic Reconnection - Renderer: SciVis): 5900X = 17.86 FPS, More Is Better (MIN: 17.54)

TTSIOD 3D Renderer

A portable GPL 3D software renderer that supports OpenMP and Intel Threading Building Blocks with many different rendering modes. This version does not use OpenGL but is entirely CPU/software based. Learn more via the OpenBenchmarking.org test page.

TTSIOD 3D Renderer 2.3b (Phong Rendering With Soft-Shadow Mapping): 5900X = 913.91 FPS, More Is Better

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.2 (Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K): 5900X = 10.63 Frames Per Second, More Is Better

AOM AV1 3.2 (Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K): 5900X = 11.91 Frames Per Second, More Is Better

AOM AV1 3.2 (Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K): 5900X = 43.8 Frames Per Second, More Is Better

AOM AV1 3.2 (Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K): 5900X = 55.25 Frames Per Second, More Is Better

AOM AV1 3.2 (Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K): 5900X = 61.4 Frames Per Second, More Is Better

AOM AV1 3.2 (Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p): 5900X = 8.7 Frames Per Second, More Is Better

AOM AV1 3.2 (Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p): 5900X = 25.31 Frames Per Second, More Is Better

AOM AV1 3.2 (Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p): 5900X = 91.31 Frames Per Second, More Is Better

AOM AV1 3.2 (Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p): 5900X = 142.68 Frames Per Second, More Is Better

AOM AV1 3.2 (Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p): 5900X = 140.32 Frames Per Second, More Is Better

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 3.13 (Binary: Pathtracer - Model: Crown): 5900X = 18.76 Frames Per Second, More Is Better (MIN: 18.62 / MAX: 19.07)

Embree 3.13 (Binary: Pathtracer ISPC - Model: Crown): 5900X = 17.70 Frames Per Second, More Is Better (MIN: 17.55 / MAX: 18)

Embree 3.13 (Binary: Pathtracer - Model: Asian Dragon): 5900X = 20.09 Frames Per Second, More Is Better (MIN: 20.01 / MAX: 20.26)

Embree 3.13 (Binary: Pathtracer ISPC - Model: Asian Dragon): 5900X = 19.19 Frames Per Second, More Is Better (MIN: 19.12 / MAX: 19.35)

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.1 (Video Input: Bosphorus 4K - Video Preset: Medium): 5900X = 12.07 Frames Per Second, More Is Better

Kvazaar 2.1 (Video Input: Bosphorus 1080p - Video Preset: Medium): 5900X = 52.76 Frames Per Second, More Is Better

Kvazaar 2.1 (Video Input: Bosphorus 4K - Video Preset: Very Fast): 5900X = 26.86 Frames Per Second, More Is Better

Kvazaar 2.1 (Video Input: Bosphorus 4K - Video Preset: Ultra Fast): 5900X = 45.68 Frames Per Second, More Is Better

Kvazaar 2.1 (Video Input: Bosphorus 1080p - Video Preset: Very Fast): 5900X = 98.1 Frames Per Second, More Is Better

Kvazaar 2.1 (Video Input: Bosphorus 1080p - Video Preset: Ultra Fast): 5900X = 176.34 Frames Per Second, More Is Better

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8.7 (Encoder Mode: Preset 4 - Input: Bosphorus 4K): 5900X = 2.047 Frames Per Second, More Is Better

SVT-AV1 0.8.7 (Encoder Mode: Preset 8 - Input: Bosphorus 4K): 5900X = 22.38 Frames Per Second, More Is Better

SVT-AV1 0.8.7 (Encoder Mode: Preset 4 - Input: Bosphorus 1080p): 5900X = 6.696 Frames Per Second, More Is Better

SVT-AV1 0.8.7 (Encoder Mode: Preset 8 - Input: Bosphorus 1080p): 5900X = 83.18 Frames Per Second, More Is Better

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 (Tuning: 1 - Input: Bosphorus 1080p): 5900X = 13.95 Frames Per Second, More Is Better

SVT-HEVC 1.5.0 (Tuning: 7 - Input: Bosphorus 1080p): 5900X = 181.98 Frames Per Second, More Is Better

SVT-HEVC 1.5.0 (Tuning: 10 - Input: Bosphorus 1080p): 5900X = 328.23 Frames Per Second, More Is Better

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 (Tuning: VMAF Optimized - Input: Bosphorus 1080p): 5900X = 220.44 Frames Per Second, More Is Better

SVT-VP9 0.3 (Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p): 5900X = 231.43 Frames Per Second, More Is Better

SVT-VP9 0.3 (Tuning: Visual Quality Optimized - Input: Bosphorus 1080p): 5900X = 209.19 Frames Per Second, More Is Better

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

VP9 libvpx Encoding 1.10.0 (Speed: Speed 0 - Input: Bosphorus 4K): 5900X = 8.62 Frames Per Second, More Is Better

VP9 libvpx Encoding 1.10.0 (Speed: Speed 5 - Input: Bosphorus 4K): 5900X = 22.21 Frames Per Second, More Is Better

VP9 libvpx Encoding 1.10.0 (Speed: Speed 0 - Input: Bosphorus 1080p): 5900X = 18.19 Frames Per Second, More Is Better

VP9 libvpx Encoding 1.10.0 (Speed: Speed 5 - Input: Bosphorus 1080p): 5900X = 39.86 Frames Per Second, More Is Better

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance with x265. Learn more via the OpenBenchmarking.org test page.

x265 3.4 (Video Input: Bosphorus 4K): 5900X = 22.74 Frames Per Second, More Is Better

x265 3.4 (Video Input: Bosphorus 1080p): 5900X = 82.65 Frames Per Second, More Is Better

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray-tracing and part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

Intel Open Image Denoise 1.4.0 (Run: RT.ldr_alb_nrm.3840x2160): 5900X = 0.50 Images / Sec, More Is Better

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library that offers high-performance volume computation kernels and part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

Benchmark: vklBenchmark ISPC

5900X: Test failed to run.

Benchmark: vklBenchmark Scalar

5900X: Test failed to run.

Coremark

This is a test of EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

Coremark 1.0 (CoreMark Size 666 - Iterations Per Second): 5900X = 638694.23 Iterations/Sec, More Is Better

7-Zip Compression

This is a test of 7-Zip using p7zip with its integrated benchmark feature or upstream 7-Zip for the Windows x64 build. Learn more via the OpenBenchmarking.org test page.

7-Zip Compression 16.02 (Compress Speed Test): 5900X = 89390 MIPS, More Is Better

Stockfish

This is a test of Stockfish, an advanced open-source C++11 chess benchmark that can scale up to 512 CPU threads. Learn more via the OpenBenchmarking.org test page.

Stockfish 13 (Total Time): 5900X = 45287885 Nodes Per Second, More Is Better

asmFish

This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.

asmFish 2018-07-23 (1024 Hash Memory, 26 Depth): 5900X = 49547797 Nodes/second, More Is Better

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.9.0 (Encoder Speed: 2): 5900X = 25.48 Seconds, Fewer Is Better

libavif avifenc 0.9.0 (Encoder Speed: 6): 5900X = 9.354 Seconds, Fewer Is Better

libavif avifenc 0.9.0 (Encoder Speed: 10): 5900X = 2.978 Seconds, Fewer Is Better

libavif avifenc 0.9.0 (Encoder Speed: 6, Lossless): 5900X = 36.68 Seconds, Fewer Is Better

libavif avifenc 0.9.0 (Encoder Speed: 10, Lossless): 5900X = 4.996 Seconds, Fewer Is Better

Timed GDB GNU Debugger Compilation

This test times how long it takes to build the GNU Debugger (GDB) in a default configuration. Learn more via the OpenBenchmarking.org test page.

5900X: The test quit with a non-zero exit status.

Timed Godot Game Engine Compilation

This test times how long it takes to compile the Godot Game Engine. Godot is a popular, open-source, cross-platform 2D/3D game engine and is built using the SCons build system and targeting the X11 platform. Learn more via the OpenBenchmarking.org test page.

Timed Godot Game Engine Compilation 3.2.3 (Time To Compile): 5900X = 77.44 Seconds, Fewer Is Better

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.14 (Time To Compile): 5900X = 55.42 Seconds, Fewer Is Better

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 13.0 (Build System: Ninja): 5900X = 419.08 Seconds, Fewer Is Better

Timed LLVM Compilation 13.0 (Build System: Unix Makefiles): 5900X = 438.12 Seconds, Fewer Is Better

Timed Mesa Compilation

This test profile times how long it takes to compile Mesa with Meson/Ninja. To minimize build dependencies and avoid versioning conflicts, this test is just the core Mesa build without LLVM or the extra Gallium3D/Mesa drivers enabled. Learn more via the OpenBenchmarking.org test page.

Timed Mesa Compilation 21.0 (Time To Compile): 5900X = 32.03 Seconds, Fewer Is Better

Timed MPlayer Compilation

This test times how long it takes to build the MPlayer open-source media player program. Learn more via the OpenBenchmarking.org test page.

Timed MPlayer Compilation 1.4 (Time To Compile): 5900X = 20.53 Seconds, Fewer Is Better

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code and offers Cargo-like features. Learn more via the OpenBenchmarking.org test page.

5900X: The test quit with a non-zero exit status.

C-Ray

This is a test of C-Ray, a simple raytracer designed to test the floating-point CPU performance. This test is multi-threaded (16 threads per core), will shoot 8 rays per pixel for anti-aliasing, and will generate a 1600 x 1200 image. Learn more via the OpenBenchmarking.org test page.

C-Ray 1.1 (Total Time - 4K, 16 Rays Per Pixel): 5900X = 31.82 Seconds, Fewer Is Better

POV-Ray

This is a test of POV-Ray, the Persistence of Vision Raytracer. POV-Ray is used to create 3D graphics using ray-tracing. Learn more via the OpenBenchmarking.org test page.

POV-Ray 3.7.0.7 (Trace Time): 5900X = 27.63 Seconds, Fewer Is Better

Smallpt

Smallpt is a C++ global illumination renderer written in less than 100 lines of code. Global illumination is done via unbiased Monte Carlo path tracing and there is multi-threading support via the OpenMP library. Learn more via the OpenBenchmarking.org test page.
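The unbiased Monte Carlo idea behind path tracing can be shown with a much simpler Python estimate; this is only an analogy for how smallpt averages random light-path samples, not the renderer's own code:

import random

def mc_integral(f, a, b, samples=100_000):
    # Average f at uniformly random points; the estimate is unbiased and
    # its error shrinks roughly as 1/sqrt(samples).
    total = sum(f(random.uniform(a, b)) for _ in range(samples))
    return (b - a) * total / samples

# Example: integrate x^2 over [0, 1]; the exact answer is 1/3.
print(mc_integral(lambda x: x * x, 0.0, 1.0))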

Smallpt 1.0 (Global Illumination Renderer; 128 Samples): 5900X = 5.737 Seconds, Fewer Is Better

Numpy Benchmark

This is a test to obtain the general Numpy performance. Learn more via the OpenBenchmarking.org test page.

Numpy Benchmark: 5900X = 501.47 Score, More Is Better

Timed Wasmer Compilation

This test times how long it takes to compile Wasmer. Wasmer is written in the Rust programming language and is a WebAssembly runtime implementation that supports WASI and Emscripten. This test profile builds Wasmer with the Cranelift and Singlepass compiler features enabled. Learn more via the OpenBenchmarking.org test page.

5900X: The test quit with a non-zero exit status.

DeepSpeech

Mozilla DeepSpeech is a speech-to-text engine powered by TensorFlow for machine learning and derived from Baidu's Deep Speech research paper. This test profile times the speech-to-text process for a roughly three minute audio recording. Learn more via the OpenBenchmarking.org test page.

DeepSpeech 0.6 (Acceleration: CPU): 5900X = 61.71 Seconds, Fewer Is Better

eSpeak-NG Speech Engine

This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.

eSpeak-NG Speech Engine 20200907 (Text-To-Speech Synthesis): 5900X = 21.33 Seconds, Fewer Is Better

Helsing

Helsing is an open-source POSIX vampire number generator. This test profile measures the time it takes to generate vampire numbers between varying numbers of digits. Learn more via the OpenBenchmarking.org test page.
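A vampire number is an even-digit number equal to the product of two "fangs" of half its length whose combined digits are a permutation of the original (the fangs may not both end in zero). A small Python checker, unrelated to Helsing's own C implementation, makes the definition concrete:

def is_vampire(v: int) -> bool:
    digits = sorted(str(v))
    if len(digits) % 2:
        return False
    half = len(digits) // 2
    for a in range(10 ** (half - 1), int(v ** 0.5) + 1):
        if v % a:
            continue
        b = v // a
        if len(str(b)) != half:
            continue
        if a % 10 == 0 and b % 10 == 0:
            continue  # both fangs ending in zero is disallowed
        if sorted(str(a) + str(b)) == digits:
            return True
    return False

print([n for n in range(1000, 10000) if is_vampire(n)][:5])  # [1260, 1395, 1435, 1530, 1827]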

Helsing 1.0-beta (Digit Range: 12 digit): 5900X = 3.066 Seconds, Fewer Is Better

Ngspice

Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.

Ngspice 34 (Circuit: C2670): 5900X = 79.31 Seconds, Fewer Is Better

Ngspice 34 (Circuit: C7552): 5900X = 62.54 Seconds, Fewer Is Better

Tachyon

This is a test of the threaded Tachyon, a parallel ray-tracing system, measuring the time to ray-trace a sample scene. Learn more via the OpenBenchmarking.org test page.

Tachyon 0.99b6 (Total Time): 5900X = 52.82 Seconds, Fewer Is Better

VOSK Speech Recognition Toolkit

VOSK is an open-source offline speech recognition API/toolkit. VOSK supports speech recognition in 17 languages and has a variety of models available and interfaces for different programming languages. This test profile times the speech-to-text process for a roughly three minute audio recording. Learn more via the OpenBenchmarking.org test page.

VOSK Speech Recognition Toolkit 0.3.21: 5900X = 14.71 Seconds, Fewer Is Better

Google SynthMark

SynthMark is a cross-platform tool for benchmarking CPU performance under a variety of real-time audio workloads. It uses a polyphonic synthesizer model to provide standardized tests for latency, jitter and computational throughput. Learn more via the OpenBenchmarking.org test page.

Google SynthMark 20201109 (Test: VoiceMark_100): 5900X = 931.32 Voices, More Is Better

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.18 (Algorithm: Magi): 5900X = 720.71 kH/s, More Is Better

Cpuminer-Opt 3.18 (Algorithm: x25x): 5900X = 454.99 kH/s, More Is Better

Cpuminer-Opt 3.18 (Algorithm: Deepcoin): 5900X = 13990 kH/s, More Is Better

Cpuminer-Opt 3.18 (Algorithm: Ringcoin): 5900X = 3102.01 kH/s, More Is Better

Cpuminer-Opt 3.18 (Algorithm: Blake-2 S): 5900X = 626980 kH/s, More Is Better

Cpuminer-Opt 3.18 (Algorithm: Garlicoin): 5900X = 2373.81 kH/s, More Is Better

Cpuminer-Opt 3.18 (Algorithm: Skeincoin): 5900X = 126110 kH/s, More Is Better

Cpuminer-Opt 3.18 (Algorithm: Myriad-Groestl): 5900X = 22340 kH/s, More Is Better

Cpuminer-Opt 3.18 (Algorithm: LBC, LBRY Credits): 5900X = 40400 kH/s, More Is Better

Cpuminer-Opt 3.18 (Algorithm: Quad SHA-256, Pyrite): 5900X = 124680 kH/s, More Is Better

Cpuminer-Opt 3.18 (Algorithm: Triple SHA-256, Onecoin): 5900X = 295980 kH/s, More Is Better

SecureMark

SecureMark is an objective, standardized benchmarking framework for measuring the efficiency of cryptographic processing solutions developed by EEMBC. SecureMark-TLS is benchmarking Transport Layer Security performance with a focus on IoT/edge computing. Learn more via the OpenBenchmarking.org test page.

SecureMark 1.0.4 (Benchmark: SecureMark-TLS): 5900X = 314134 marks, More Is Better

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenSSL 3.0 (Algorithm: SHA256): 5900X = 19752956410 byte/s, More Is Better

OpenSSL 3.0 (Algorithm: RSA4096): 5900X = 3827.2 sign/s, More Is Better

OpenSSL 3.0 (Algorithm: RSA4096): 5900X = 249547.5 verify/s, More Is Better

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31 (Threads: 1 - Buffer Length: 256 - Filter Length: 57): 5900X = 79876000 samples/s, More Is Better

Liquid-DSP 2021.01.31 (Threads: 8 - Buffer Length: 256 - Filter Length: 57): 5900X = 606980000 samples/s, More Is Better

Liquid-DSP 2021.01.31 (Threads: 16 - Buffer Length: 256 - Filter Length: 57): 5900X = 905460000 samples/s, More Is Better

Liquid-DSP 2021.01.31 (Threads: 24 - Buffer Length: 256 - Filter Length: 57): 5900X = 966510000 samples/s, More Is Better

FinanceBench

FinanceBench is a collection of financial program benchmarks with support for benchmarking on the GPU via OpenCL and CPU benchmarking with OpenMP. The FinanceBench test cases are focused on the Black-Scholes-Merton process with an analytic European option engine, the QMC (Sobol) Monte-Carlo method (equity option example), Bonds (fixed-rate bond with flat forward curve), and Repo (securities repurchase agreement). FinanceBench was originally written by the Cavazos Lab at University of Delaware. Learn more via the OpenBenchmarking.org test page.
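To give a sense of the arithmetic involved, here is a hedged Python sketch of the closed-form Black-Scholes European call price; it is not FinanceBench's OpenMP code, and the spot/strike/rate/volatility numbers are illustrative only:

from math import erf, exp, log, sqrt

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(spot, strike, rate, vol, years):
    d1 = (log(spot / strike) + (rate + 0.5 * vol * vol) * years) / (vol * sqrt(years))
    d2 = d1 - vol * sqrt(years)
    return spot * norm_cdf(d1) - strike * exp(-rate * years) * norm_cdf(d2)

print(f"European call price: {bs_call(100.0, 105.0, 0.02, 0.25, 1.0):.4f}")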

FinanceBench 2016-07-25 (Benchmark: Repo OpenMP): 5900X = 27128.21 ms, Fewer Is Better

FinanceBench 2016-07-25 (Benchmark: Bonds OpenMP): 5900X = 40411.78 ms, Fewer Is Better

libjpeg-turbo tjbench

tjbench is a JPEG decompression/compression benchmark that is part of libjpeg-turbo, a JPEG image codec library optimized for SIMD instructions on modern CPU architectures. Learn more via the OpenBenchmarking.org test page.

libjpeg-turbo tjbench 2.1.0 (Test: Decompression Throughput): 5900X = 266.08 Megapixels/sec, More Is Better

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.

GROMACS 2021.2 (Implementation: MPI CPU - Input: water_GMX50_bare): 5900X = 1.277 Ns Per Day, More Is Better

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

TensorFlow Lite 2020-08-23 (Model: SqueezeNet): 5900X = 124609 Microseconds, Fewer Is Better

TensorFlow Lite 2020-08-23 (Model: Inception V4): 5900X = 1798800 Microseconds, Fewer Is Better

TensorFlow Lite 2020-08-23 (Model: NASNet Mobile): 5900X = 121623 Microseconds, Fewer Is Better

TensorFlow Lite 2020-08-23 (Model: Mobilenet Float): 5900X = 84553.3 Microseconds, Fewer Is Better

TensorFlow Lite 2020-08-23 (Model: Mobilenet Quant): 5900X = 92166.2 Microseconds, Fewer Is Better

TensorFlow Lite 2020-08-23 (Model: Inception ResNet V2): 5900X = 1617340 Microseconds, Fewer Is Better

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 3.2 (Preset: Thorough): 5900X = 7.63 Seconds, Fewer Is Better

ASTC Encoder 3.2 (Preset: Exhaustive): 5900X = 40.06 Seconds, Fewer Is Better

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.

SQLite Speedtest 3.30 (Timed Time - Size 1,000): 5900X = 44.45 Seconds, Fewer Is Better

Darktable

Darktable is an open-source photography/workflow application. This test will use any system-installed Darktable program, or on Windows it will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

Darktable 3.6.0 (Test: Boat - Acceleration: CPU-only): 5900X = 4.812 Seconds, Fewer Is Better

Darktable 3.6.0 (Test: Masskrug - Acceleration: CPU-only): 5900X = 5.453 Seconds, Fewer Is Better

OpenBenchmarking.orgSeconds, Fewer Is BetterDarktable 3.6.0Test: Server Rack - Acceleration: CPU-only5900X0.04950.0990.14850.1980.24750.22

OpenBenchmarking.orgSeconds, Fewer Is BetterDarktable 3.6.0Test: Server Room - Acceleration: CPU-only5900X1.13852.2773.41554.5545.69255.06

GEGL

GEGL is the Generic Graphics Library and is the library/framework used by GIMP and other applications like GNOME Photos. This test profile times how long it takes to complete various GEGL operations on a static set of sample JPEG images. Learn more via the OpenBenchmarking.org test page.

GEGL (Seconds, fewer is better):
  Operation: Crop - 5900X: 6.559
  Operation: Scale - 5900X: 4.675
  Operation: Reflect - 5900X: 22.78
  Operation: Color Enhance - 5900X: 40.81
  Operation: Rotate 90 Degrees - 5900X: 30.05

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program where available; otherwise on Windows it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.24 (Seconds, fewer is better):
  Test: resize - 5900X: 5.815
  Test: rotate - 5900X: 8.666
  Test: auto-levels - 5900X: 9.363
  Test: unsharp-mask - 5900X: 11.8

Hugin

Hugin is an open-source, cross-platform panorama photo stitcher software package. This test profile times how long it takes to run the assistant and panorama photo stitching on a set of images. Learn more via the OpenBenchmarking.org test page.

Hugin (Seconds, fewer is better):
  Panorama Photo Assistant + Stitching Time - 5900X: 34.61

Inkscape

Inkscape is an open-source vector graphics editor. This test profile times how long it takes to complete various operations by Inkscape. Learn more via the OpenBenchmarking.org test page.

Operation: SVG Files To PNG

5900X: The test quit with a non-zero exit status.

LibreOffice

Various benchmarking operations with the LibreOffice open-source office suite. Learn more via the OpenBenchmarking.org test page.

LibreOffice (Seconds, fewer is better):
  Test: 20 Documents To PDF - 5900X: 5.375
Notes: LibreOffice 7.2.1.2 20(Build:2)

OCRMyPDF

OCRMyPDF adds an optical character recognition (OCR) text layer to scanned PDF files, producing new PDFs with text that is selectable, searchable, and copy-paste capable. OCRMyPDF leverages the Tesseract OCR engine and is written in Python. Learn more via the OpenBenchmarking.org test page.
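Since OCRMyPDF is a Python package, the same processing can be driven from a script; a minimal sketch is shown below. The input/output file names are placeholders, and the test profile times the ocrmypdf command-line tool rather than this API call.

    import time
    import ocrmypdf  # OCRMyPDF's Python API; wraps the Tesseract OCR engine

    # Minimal sketch: add a searchable text layer to a scanned PDF and time it.
    start = time.perf_counter()
    ocrmypdf.ocr("scanned-input.pdf", "searchable-output.pdf")  # placeholder file names
    print(f"OCR pass took {time.perf_counter() - start:.2f} seconds")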

OCRMyPDF 10.3.1+dfsg (Seconds, fewer is better):
  Processing 60 Page PDF Document - 5900X: 13.62

GNU Octave Benchmark

This test profile measures how long it takes to complete several reference GNU Octave files via octave-benchmark. GNU Octave is used for numerical computations and is an open-source alternative to MATLAB. Learn more via the OpenBenchmarking.org test page.

GNU Octave Benchmark 6.2.0 (Seconds, fewer is better):
  5900X: 5.118

OpenSCAD

OpenSCAD is a programmer-focused solid 3D CAD modeller. OpenSCAD is free software and allows creating 3D CAD objects in a script-based modelling environment. This test profile will use the system-provided OpenSCAD program and time how long it takes to render different SCAD assets to PNG output. Learn more via the OpenBenchmarking.org test page.

OpenSCAD (Seconds, fewer is better):
  Render: Pistol - 5900X: 75.46
  Render: Retro Car - 5900X: 3.529
  Render: Mini-ITX Case - 5900X: 34.10
  Render: Projector Mount Swivel - 5900X: 6.72
  Render: Leonardo Phone Case Slim - 5900X: 13.76
Notes: OpenSCAD version 2021.01

RawTherapee

RawTherapee is a cross-platform, open-source multi-threaded RAW image processing program. Learn more via the OpenBenchmarking.org test page.

RawTherapee (Seconds, fewer is better):
  Total Benchmark Time - 5900X: 45.19
Notes: RawTherapee, version 5.8, command line.

Stress-NG

Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
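Stress-NG is normally run directly from the shell; the hedged Python sketch below simply wraps one such invocation. The --cpu 0 (one worker per online CPU), --timeout, and --metrics-brief flags are standard stress-ng options, while the 30-second duration is an arbitrary choice rather than what the test profile uses.

    import subprocess

    # Hedged sketch: run one stress-ng stressor and print its bogo-ops metrics.
    # --cpu 0 means "one worker per online CPU"; the 30 s timeout is arbitrary.
    cmd = ["stress-ng", "--cpu", "0", "--timeout", "30s", "--metrics-brief"]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    print(result.stderr or result.stdout)  # stress-ng prints its metrics on stderr by default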

Stress-NG 0.13.02 (Bogo Ops/s, more is better):
  Test: Crypto - 5900X: 3846.17
  Test: CPU Cache - 5900X: 108.39
  Test: CPU Stress - 5900X: 40070.99
  Test: Matrix Math - 5900X: 67767.35
  Test: Vector Math - 5900X: 102453.39
  Test: RdRand - 5900X: stress-ng: error: [591273] No stress workers invoked (one or more were unsupported)
Compiler notes: (CC) gcc options: -O3 -march=native -O2 -std=gnu99 -lm -laio -lcrypt -lrt -lsctp -lz -ldl -pthread -lc -latomic

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.2 (ms, fewer is better):
  Model: mobilenetV3 - 5900X: 1.969 (MIN: 1.94 / MAX: 2.13)
  Model: squeezenetv1.1 - 5900X: 3.509 (MIN: 3.45 / MAX: 4.01)
  Model: resnet-v2-50 - 5900X: 27.13 (MIN: 26.93 / MAX: 27.57)
  Model: SqueezeNetV1.0 - 5900X: 5.025 (MIN: 4.98 / MAX: 5.09)
  Model: MobileNetV2_224 - 5900X: 3.206 (MIN: 3.17 / MAX: 3.39)
  Model: mobilenet-v1-1.0 - 5900X: 4.052 (MIN: 4.01 / MAX: 4.85)
  Model: inception-v3 - 5900X: 24.93 (MIN: 24.7 / MAX: 26.04)
Compiler notes: (CXX) g++ options: -O3 -march=native -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

NCNN

NCNN is a high-performance neural network inference framework, developed by Tencent, that is optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20210720 (ms, fewer is better):
  Target: CPU - Model: mobilenet - 5900X: 11.17 (MIN: 10.95 / MAX: 11.57)
  Target: CPU-v2-v2 - Model: mobilenet-v2 - 5900X: 3.78 (MIN: 3.72 / MAX: 3.92)
  Target: CPU-v3-v3 - Model: mobilenet-v3 - 5900X: 3.36 (MIN: 3.31 / MAX: 3.49)
  Target: CPU - Model: shufflenet-v2 - 5900X: 3.75 (MIN: 3.72 / MAX: 3.84)
  Target: CPU - Model: mnasnet - 5900X: 3.34 (MIN: 3.3 / MAX: 3.5)
  Target: CPU - Model: efficientnet-b0 - 5900X: 4.63 (MIN: 4.58 / MAX: 4.79)
  Target: CPU - Model: blazeface - 5900X: 1.59 (MIN: 1.57 / MAX: 1.75)
  Target: CPU - Model: googlenet - 5900X: 11.23 (MIN: 11.09 / MAX: 11.41)
  Target: CPU - Model: vgg16 - 5900X: 52.05 (MIN: 51.63 / MAX: 52.47)
  Target: CPU - Model: resnet18 - 5900X: 12.41 (MIN: 12.33 / MAX: 12.54)
  Target: CPU - Model: alexnet - 5900X: 9.93 (MIN: 9.75 / MAX: 10.54)
  Target: CPU - Model: resnet50 - 5900X: 21.03 (MIN: 20.8 / MAX: 23.01)
  Target: CPU - Model: yolov4-tiny - 5900X: 19.12 (MIN: 18.82 / MAX: 27.94)
  Target: CPU - Model: squeezenet_ssd - 5900X: 13.17 (MIN: 12.95 / MAX: 13.48)
  Target: CPU - Model: regnety_400m - 5900X: 8.09 (MIN: 8.04 / MAX: 8.57)
Compiler notes: (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 (ms, fewer is better):
  Target: CPU - Model: DenseNet - 5900X: 2554.23 (MIN: 2515.71 / MAX: 2598.86)
  Target: CPU - Model: MobileNet v2 - 5900X: 220.94 (MIN: 219.98 / MAX: 221.89)
  Target: CPU - Model: SqueezeNet v2 - 5900X: 52.20 (MIN: 52.11 / MAX: 52.32)
  Target: CPU - Model: SqueezeNet v1.1 - 5900X: 214.53 (MIN: 214.42 / MAX: 214.69)
Compiler notes: (CXX) g++ options: -O3 -march=native -fopenmp -pthread -fvisibility=hidden -fvisibility=default -rdynamic -ldl

PlaidML

This test profile uses the PlaidML deep learning framework, developed by Intel, to run various inference benchmarks. Learn more via the OpenBenchmarking.org test page.

PlaidML (FPS, more is better):
  FP16: No - Mode: Inference - Network: VGG16 - Device: CPU - 5900X: 18.43
  FP16: No - Mode: Inference - Network: VGG19 - Device: CPU - 5900X: 15.14

FP16: No - Mode: Inference - Network: IMDB LSTM - Device: CPU

5900X: Test failed to run.

FP16: No - Mode: Inference - Network: Mobilenet - Device: CPU

5900X: Test failed to run.

PlaidML (FPS, more is better):
  FP16: No - Mode: Inference - Network: ResNet 50 - Device: CPU - 5900X: 9.64

FP16: No - Mode: Inference - Network: DenseNet 201 - Device: CPU

5900X: Test failed to run.

FP16: No - Mode: Inference - Network: Inception V3 - Device: CPU

5900X: Test failed to run.

FP16: No - Mode: Inference - Network: NASNet Large - Device: CPU

5900X: Test failed to run.

Sysbench

This is a benchmark of Sysbench with the built-in CPU and memory sub-tests. Sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. Learn more via the OpenBenchmarking.org test page.
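Sysbench's CPU and memory sub-tests are command-line driven; a hedged wrapper sketch is shown below. The cpu and memory sub-commands and the run action are standard sysbench usage, while the thread count and time limit are arbitrary assumptions rather than the profile's settings.

    import subprocess

    # Hedged sketch: run the sysbench CPU and memory sub-tests and show their reports.
    for subtest in ("cpu", "memory"):
        cmd = ["sysbench", subtest, "--threads=24", "--time=30", "run"]  # settings are arbitrary
        out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
        print(f"--- sysbench {subtest} ---\n{out}")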

Sysbench 1.0.20:
  Test: RAM / Memory - 5900X: 14008.99 MiB/sec (more is better)
  Test: CPU - 5900X: 68780.69 Events Per Second (more is better)
Compiler notes: (CC) gcc options: -O2 -funroll-loops -O3 -march=native -rdynamic -ldl -laio -lm

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

IndigoBench 4.4 (M samples/s, more is better):
  Acceleration: CPU - Scene: Bedroom - 5900X: 3.34
  Acceleration: CPU - Scene: Supercar - 5900X: 7

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles benchmark with various sample files. GPU computing via OpenCL, NVIDIA OptiX, and NVIDIA CUDA is supported. Learn more via the OpenBenchmarking.org test page.

Blender 2.92 (Seconds, fewer is better):
  Blend File: BMW27 - Compute: CPU-Only - 5900X: 97.67
  Blend File: Classroom - Compute: CPU-Only - 5900X: 284.46
  Blend File: Pabellon Barcelona - Compute: CPU-Only - 5900X: 310.42

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.
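The same kind of models can be exercised through ONNX Runtime's Python bindings; a minimal CPU-only sketch is shown below. The model file name, the float32 input assumption, and the run count are placeholders, and the test profile builds and drives ONNX Runtime itself rather than this script.

    import time
    import numpy as np
    import onnxruntime as ort

    # Minimal sketch: run a model from the ONNX Zoo on the CPU and report inferences/minute.
    sess = ort.InferenceSession("super-resolution-10.onnx",  # placeholder model file
                                providers=["CPUExecutionProvider"])
    inp = sess.get_inputs()[0]
    shape = [d if isinstance(d, int) else 1 for d in inp.shape]  # substitute 1 for dynamic dims
    dummy = np.zeros(shape, dtype=np.float32)  # assumes a float32 input tensor

    runs = 100
    start = time.perf_counter()
    for _ in range(runs):
        sess.run(None, {inp.name: dummy})
    elapsed = time.perf_counter() - start
    print(f"{runs / elapsed * 60:.0f} inferences per minute")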

ONNX Runtime 1.8.2 (Inferences Per Minute, more is better):
  Model: yolov4 - Device: OpenMP CPU - 5900X: 469
  Model: bertsquad-10 - Device: OpenMP CPU - 5900X: 722
  Model: fcn-resnet101-11 - Device: OpenMP CPU - 5900X: 111
  Model: shufflenet-v2-10 - Device: OpenMP CPU - 5900X: 24993
  Model: super-resolution-10 - Device: OpenMP CPU - 5900X: 6440
Compiler notes: (CXX) g++ options: -O3 -march=native -fopenmp -ffunction-sections -fdata-sections -ldl -lrt

PyBench

This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate as to Python's average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.

PyBench 2018-02-16 (Milliseconds, fewer is better):
  Total For Average Test Times - 5900X: 768

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.
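As a rough idea of what one of these micro-benchmarks measures, the sketch below times a json.loads workload in the spirit of the json_loads benchmark using the standard-library timeit module; the document size and repeat counts are arbitrary assumptions, and the real suite uses pyperf's calibrated measurement loop.

    import json
    import timeit

    # Illustrative micro-benchmark in the spirit of PyPerformance's json_loads test.
    document = json.dumps({"key-%d" % i: list(range(20)) for i in range(200)})  # arbitrary payload
    per_call = min(timeit.repeat(lambda: json.loads(document), number=1000, repeat=5)) / 1000
    print(f"json.loads: {per_call * 1e3:.3f} ms per call")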

PyPerformance 1.0.0 (Milliseconds, fewer is better):
  Benchmark: pathlib - 5900X: 12.8
  Benchmark: json_loads - 5900X: 17.4
  Benchmark: crypto_pyaes - 5900X: 78.9
  Benchmark: regex_compile - 5900X: 128
  Benchmark: python_startup - 5900X: 5.73

Natron

Natron is an open-source, cross-platform compositing software for visual effects (VFX) and motion graphics. Learn more via the OpenBenchmarking.org test page.

Natron 2.4 (FPS, more is better):
  Input: Spaceship - 5900X: 3.8

Appleseed

Appleseed is an open-source production renderer focused on physically-based global illumination, primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta (Seconds, fewer is better):
  Scene: Emily - 5900X: 227.65
  Scene: Disney Material - 5900X: 143.36
  Scene: Material Tester - 5900X: 136.99

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

5900X: ModuleNotFoundError: No module named 'tensorflow'

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

PHPBench 0.8.1 (Score, more is better):
  PHP Benchmark Suite - 5900X: 918270

RAR Compression

This test measures the time needed to archive/compress two copies of the Linux 5.14 kernel source tree using RAR/WinRAR compression. Learn more via the OpenBenchmarking.org test page.

RAR Compression 6.0.2 (Seconds, fewer is better):
  Linux Source Tree Archiving To RAR - 5900X: 47.27

Git

This test measures the time needed to carry out some sample Git operations on an example, static repository that happens to be a copy of the GNOME GTK tool-kit repository. Learn more via the OpenBenchmarking.org test page.
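As a hedged illustration of timing everyday Git commands from Python, see the sketch below; the repository path and command list are assumptions, and the test profile runs its own fixed sequence of operations against the bundled copy of the GTK repository.

    import subprocess
    import time

    # Hedged sketch: time a few everyday Git commands against a local repository.
    repo = "/path/to/gtk-clone"  # placeholder path to a local clone
    commands = (["git", "status"],
                ["git", "log", "--oneline", "-n", "1000"],
                ["git", "diff", "HEAD~10"])

    start = time.perf_counter()
    for cmd in commands:
        subprocess.run(cmd, cwd=repo, check=True, capture_output=True)
    print(f"commands completed in {time.perf_counter() - start:.2f} seconds")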

Git (Seconds, fewer is better):
  Time To Complete Common Git Commands - 5900X: 38.38
Notes: git version 2.32.0

PyHPC Benchmarks
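PyHPC Benchmarks times short vectorized array kernels (an ocean-model equation of state and isoneutral mixing) across Python array backends such as NumPy, Numba, JAX, PyTorch, Theano, Bohrium, and TensorFlow. Purely as an illustration of how such a backend comparison is timed, the sketch below benchmarks an arbitrary element-wise NumPy kernel at the suite's project sizes; it is not the suite's actual equation-of-state kernel.

    import time
    import numpy as np

    def toy_kernel(a, b, c):
        # Arbitrary element-wise kernel standing in for the suite's real ocean-model kernels.
        return np.sqrt(a * a + b * b) * np.exp(-c)

    for size in (262144, 1048576, 4194304):  # the project sizes used by the suite
        a, b, c = (np.random.rand(size) for _ in range(3))
        toy_kernel(a, b, c)                   # warm-up pass
        start = time.perf_counter()
        toy_kernel(a, b, c)
        print(f"size {size}: {time.perf_counter() - start:.4f} seconds")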

PyHPC Benchmarks 2.1 (Seconds, fewer is better):
  Device: CPU - Backend: JAX - Project Size: 262144 - Benchmark: Equation of State - 5900X: 0.002
  Device: CPU - Backend: JAX - Project Size: 262144 - Benchmark: Isoneutral Mixing - 5900X: 0.031
  Device: CPU - Backend: JAX - Project Size: 1048576 - Benchmark: Equation of State - 5900X: 0.009
  Device: CPU - Backend: JAX - Project Size: 1048576 - Benchmark: Isoneutral Mixing - 5900X: 0.164
  Device: CPU - Backend: JAX - Project Size: 4194304 - Benchmark: Equation of State - 5900X: 0.037
  Device: CPU - Backend: JAX - Project Size: 4194304 - Benchmark: Isoneutral Mixing - 5900X: 0.691
  Device: CPU - Backend: Numba - Project Size: 262144 - Benchmark: Equation of State - 5900X: 0.012
  Device: CPU - Backend: Numba - Project Size: 262144 - Benchmark: Isoneutral Mixing - 5900X: 0.052
  Device: CPU - Backend: Numpy - Project Size: 262144 - Benchmark: Equation of State - 5900X: 0.041
  Device: CPU - Backend: Numpy - Project Size: 262144 - Benchmark: Isoneutral Mixing - 5900X: 0.098
  Device: CPU - Backend: Numba - Project Size: 1048576 - Benchmark: Equation of State - 5900X: 0.048
  Device: CPU - Backend: Numba - Project Size: 1048576 - Benchmark: Isoneutral Mixing - 5900X: 0.231
  Device: CPU - Backend: Numba - Project Size: 4194304 - Benchmark: Equation of State - 5900X: 0.185
  Device: CPU - Backend: Numba - Project Size: 4194304 - Benchmark: Isoneutral Mixing - 5900X: 0.974
  Device: CPU - Backend: Numpy - Project Size: 1048576 - Benchmark: Equation of State - 5900X: 0.18
  Device: CPU - Backend: Numpy - Project Size: 1048576 - Benchmark: Isoneutral Mixing - 5900X: 0.441
  Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Equation of State - 5900X: 1.053
  Device: CPU - Backend: Numpy - Project Size: 4194304 - Benchmark: Isoneutral Mixing - 5900X: 1.972
  Device: CPU - Backend: Theano - Project Size: 262144 - Benchmark: Equation of State - 5900X: 0.017
  Device: CPU - Backend: Theano - Project Size: 262144 - Benchmark: Isoneutral Mixing - 5900X: 0.06
  Device: CPU - Backend: Bohrium - Project Size: 262144 - Benchmark: Equation of State - 5900X: 0.03
  Device: CPU - Backend: Bohrium - Project Size: 262144 - Benchmark: Isoneutral Mixing - 5900X: 0.093
  Device: CPU - Backend: PyTorch - Project Size: 262144 - Benchmark: Equation of State - 5900X: 0.004
  Device: CPU - Backend: PyTorch - Project Size: 262144 - Benchmark: Isoneutral Mixing - 5900X: 0.055
  Device: CPU - Backend: Theano - Project Size: 1048576 - Benchmark: Equation of State - 5900X: 0.067
  Device: CPU - Backend: Theano - Project Size: 1048576 - Benchmark: Isoneutral Mixing - 5900X: 0.299
  Device: CPU - Backend: Theano - Project Size: 4194304 - Benchmark: Equation of State - 5900X: 0.276
  Device: CPU - Backend: Theano - Project Size: 4194304 - Benchmark: Isoneutral Mixing - 5900X: 1.327
  Device: CPU - Backend: Bohrium - Project Size: 1048576 - Benchmark: Equation of State - 5900X: 0.074
  Device: CPU - Backend: Bohrium - Project Size: 1048576 - Benchmark: Isoneutral Mixing - 5900X: 0.292
  Device: CPU - Backend: Bohrium - Project Size: 4194304 - Benchmark: Equation of State - 5900X: 0.239
  Device: CPU - Backend: Bohrium - Project Size: 4194304 - Benchmark: Isoneutral Mixing - 5900X: 1.111
  Device: CPU - Backend: PyTorch - Project Size: 1048576 - Benchmark: Equation of State - 5900X: 0.018
  Device: CPU - Backend: PyTorch - Project Size: 1048576 - Benchmark: Isoneutral Mixing - 5900X: 0.278
  Device: CPU - Backend: PyTorch - Project Size: 4194304 - Benchmark: Equation of State - 5900X: 0.074
  Device: CPU - Backend: PyTorch - Project Size: 4194304 - Benchmark: Isoneutral Mixing - 5900X: 1.373
  Device: CPU - Backend: TensorFlow - Project Size: 262144 - Benchmark: Equation of State - 5900X: 0.005
  Device: CPU - Backend: TensorFlow - Project Size: 262144 - Benchmark: Isoneutral Mixing - 5900X: Test failed to run.
  Device: CPU - Backend: TensorFlow - Project Size: 1048576 - Benchmark: Equation of State - 5900X: 0.023
  Device: CPU - Backend: TensorFlow - Project Size: 1048576 - Benchmark: Isoneutral Mixing - 5900X: Test failed to run.
  Device: CPU - Backend: TensorFlow - Project Size: 4194304 - Benchmark: Equation of State - 5900X: 0.107
  Device: CPU - Backend: TensorFlow - Project Size: 4194304 - Benchmark: Isoneutral Mixing - 5900X: Test failed to run.

Unpacking Firefox

This simple test profile measures how long it takes to extract the .tar.xz source package of the Mozilla Firefox Web Browser. Learn more via the OpenBenchmarking.org test page.
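Extracting a .tar.xz archive can be reproduced with the standard-library tarfile module, as in the minimal sketch below; the archive name matches the test while the destination directory is an arbitrary placeholder.

    import tarfile
    import time

    # Minimal sketch: time extraction of a .tar.xz source archive.
    start = time.perf_counter()
    with tarfile.open("firefox-84.0.source.tar.xz", mode="r:xz") as archive:
        archive.extractall(path="firefox-src")  # destination directory is a placeholder
    print(f"extraction took {time.perf_counter() - start:.2f} seconds")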

Unpacking Firefox 84.0 (Seconds, fewer is better):
  Extracting: firefox-84.0.source.tar.xz - 5900X: 13.48

Tesseract OCR

Tesseract-OCR is the open-source optical character recognition (OCR) engine for the conversion of text within images to raw text output. This test profile relies upon a system-supplied Tesseract installation. Learn more via the OpenBenchmarking.org test page.
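Tesseract is usually invoked per image from the command line; an equivalent hedged sketch through the pytesseract wrapper is shown below. The image file names are placeholders, and the test profile shells out to the tesseract binary itself rather than using this wrapper.

    import glob
    import time
    import pytesseract  # thin wrapper around the system tesseract binary
    from PIL import Image

    # Hedged sketch: OCR a set of images and time the whole batch.
    images = sorted(glob.glob("ocr-images/*.png"))  # placeholder input set
    start = time.perf_counter()
    for path in images:
        text = pytesseract.image_to_string(Image.open(path))
    print(f"OCR of {len(images)} images took {time.perf_counter() - start:.2f} seconds")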

Tesseract OCR 4.1.1 (Seconds, fewer is better):
  Time To OCR 7 Images - 5900X: 18.65

Chaos Group V-RAY

This is a test of Chaos Group's V-RAY benchmark. V-RAY is a commercial renderer that can integrate with various creator software products like SketchUp and 3ds Max. The V-RAY benchmark is standalone and supports CPU and NVIDIA CUDA/RTX based rendering. Learn more via the OpenBenchmarking.org test page.

Chaos Group V-RAY 5 (vsamples, more is better):
  Mode: CPU - 5900X: 17197

OpenCV

This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.
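OpenCV ships its own C++ performance tests (the opencv_perf_* binaries); the hedged Python sketch below times a roughly comparable 2D-features workload, ORB keypoint detection and brute-force matching, using the cv2 bindings. The image names and ORB parameters are illustrative assumptions, not the parameters of the built-in perf tests.

    import time
    import cv2

    # Hedged sketch: time ORB feature detection + brute-force matching on two images.
    img1 = cv2.imread("scene1.png", cv2.IMREAD_GRAYSCALE)  # placeholder images
    img2 = cv2.imread("scene2.png", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=2000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    start = time.perf_counter()
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matches = matcher.match(des1, des2)
    print(f"{len(matches)} matches in {(time.perf_counter() - start) * 1000:.1f} ms")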

OpenCV 4.5.4 (ms, fewer is better):
  Test: Features 2D - 5900X: 89496
  Test: Object Detection - 5900X: 39247
  Test: DNN - Deep Neural Network - 5900X: 15975
Compiler notes: (CXX) g++ options: -fPIC -O3 -march=native -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -fvisibility=hidden -shared

273 Results Shown

Unpacking The Linux Kernel
WireGuard + Linux Networking Stack Stress Test
QuantLib
LeelaChessZero:
  BLAS
  Eigen
NAMD
Timed MrBayes Analysis
WebP Image Encode:
  Default
  Quality 100
  Quality 100, Lossless
  Quality 100, Highest Compression
  Quality 100, Lossless, Highest Compression
simdjson:
  Kostya
  LargeRand
  PartialTweets
  DistinctUserID
Xmrig:
  Monero - 1M
  Wownero - 1M
Chia Blockchain VDF:
  Square Plain C++
  Square Assembly Optimized
DaCapo Benchmark
LZ4 Compression:
  3 - Compression Speed
  3 - Decompression Speed
  9 - Compression Speed
  9 - Decompression Speed
Zstd Compression:
  8 - Compression Speed
  8 - Decompression Speed
  19 - Compression Speed
  19 - Decompression Speed
  19, Long Mode - Compression Speed
  19, Long Mode - Decompression Speed
JPEG XL libjxl:
  PNG - 7
  PNG - 8
  JPEG - 7
  JPEG - 8
JPEG XL Decoding libjxl:
  1
  All
GNU Radio:
  Five Back to Back FIR Filters
  Signal Source (Cosine)
  FIR Filter
  IIR Filter
  FM Deemphasis Filter
  Hilbert Transform
LibRaw
Crafty
dav1d
OSPray:
  San Miguel - SciVis
  NASA Streamlines - SciVis
  Magnetic Reconnection - SciVis
TTSIOD 3D Renderer
AOM AV1:
  Speed 6 Realtime - Bosphorus 4K
  Speed 6 Two-Pass - Bosphorus 4K
  Speed 8 Realtime - Bosphorus 4K
  Speed 9 Realtime - Bosphorus 4K
  Speed 10 Realtime - Bosphorus 4K
  Speed 6 Realtime - Bosphorus 1080p
  Speed 6 Two-Pass - Bosphorus 1080p
  Speed 8 Realtime - Bosphorus 1080p
  Speed 9 Realtime - Bosphorus 1080p
  Speed 10 Realtime - Bosphorus 1080p
Embree:
  Pathtracer - Crown
  Pathtracer ISPC - Crown
  Pathtracer - Asian Dragon
  Pathtracer ISPC - Asian Dragon
Kvazaar:
  Bosphorus 4K - Medium
  Bosphorus 1080p - Medium
  Bosphorus 4K - Very Fast
  Bosphorus 4K - Ultra Fast
  Bosphorus 1080p - Very Fast
  Bosphorus 1080p - Ultra Fast
SVT-AV1:
  Preset 4 - Bosphorus 4K
  Preset 8 - Bosphorus 4K
  Preset 4 - Bosphorus 1080p
  Preset 8 - Bosphorus 1080p
SVT-HEVC:
  1 - Bosphorus 1080p
  7 - Bosphorus 1080p
  10 - Bosphorus 1080p
SVT-VP9:
  VMAF Optimized - Bosphorus 1080p
  PSNR/SSIM Optimized - Bosphorus 1080p
  Visual Quality Optimized - Bosphorus 1080p
VP9 libvpx Encoding:
  Speed 0 - Bosphorus 4K
  Speed 5 - Bosphorus 4K
  Speed 0 - Bosphorus 1080p
  Speed 5 - Bosphorus 1080p
x265:
  Bosphorus 4K
  Bosphorus 1080p
Intel Open Image Denoise
Coremark
7-Zip Compression
Stockfish
asmFish
libavif avifenc:
  2
  6
  10
  6, Lossless
  10, Lossless
Timed Godot Game Engine Compilation
Timed Linux Kernel Compilation
Timed LLVM Compilation:
  Ninja
  Unix Makefiles
Timed Mesa Compilation
Timed MPlayer Compilation
C-Ray
POV-Ray
Smallpt
Numpy Benchmark
DeepSpeech
eSpeak-NG Speech Engine
Helsing
Ngspice:
  C2670
  C7552
Tachyon
VOSK Speech Recognition Toolkit
Google SynthMark
Cpuminer-Opt:
  Magi
  x25x
  Deepcoin
  Ringcoin
  Blake-2 S
  Garlicoin
  Skeincoin
  Myriad-Groestl
  LBC, LBRY Credits
  Quad SHA-256, Pyrite
  Triple SHA-256, Onecoin
SecureMark
OpenSSL:
  SHA256
  RSA4096
  RSA4096
Liquid-DSP:
  1 - 256 - 57
  8 - 256 - 57
  16 - 256 - 57
  24 - 256 - 57
FinanceBench:
  Repo OpenMP
  Bonds OpenMP
libjpeg-turbo tjbench
GROMACS
TensorFlow Lite:
  SqueezeNet
  Inception V4
  NASNet Mobile
  Mobilenet Float
  Mobilenet Quant
  Inception ResNet V2
ASTC Encoder:
  Thorough
  Exhaustive
SQLite Speedtest
Darktable:
  Boat - CPU-only
  Masskrug - CPU-only
  Server Rack - CPU-only
  Server Room - CPU-only
GEGL:
  Crop
  Scale
  Reflect
  Color Enhance
  Rotate 90 Degrees
GIMP:
  resize
  rotate
  auto-levels
  unsharp-mask
Hugin
LibreOffice
OCRMyPDF
GNU Octave Benchmark
OpenSCAD:
  Pistol
  Retro Car
  Mini-ITX Case
  Projector Mount Swivel
  Leonardo Phone Case Slim
RawTherapee
Stress-NG:
  Crypto
  CPU Cache
  CPU Stress
  Matrix Math
  Vector Math
Mobile Neural Network:
  mobilenetV3
  squeezenetv1.1
  resnet-v2-50
  SqueezeNetV1.0
  MobileNetV2_224
  mobilenet-v1-1.0
  inception-v3
NCNN:
  CPU - mobilenet
  CPU-v2-v2 - mobilenet-v2
  CPU-v3-v3 - mobilenet-v3
  CPU - shufflenet-v2
  CPU - mnasnet
  CPU - efficientnet-b0
  CPU - blazeface
  CPU - googlenet
  CPU - vgg16
  CPU - resnet18
  CPU - alexnet
  CPU - resnet50
  CPU - yolov4-tiny
  CPU - squeezenet_ssd
  CPU - regnety_400m
TNN:
  CPU - DenseNet
  CPU - MobileNet v2
  CPU - SqueezeNet v2
  CPU - SqueezeNet v1.1
PlaidML:
  No - Inference - VGG16 - CPU
  No - Inference - VGG19 - CPU
  No - Inference - ResNet 50 - CPU
Sysbench:
  RAM / Memory
  CPU
IndigoBench:
  CPU - Bedroom
  CPU - Supercar
Blender:
  BMW27 - CPU-Only
  Classroom - CPU-Only
  Pabellon Barcelona - CPU-Only
ONNX Runtime:
  yolov4 - OpenMP CPU
  bertsquad-10 - OpenMP CPU
  fcn-resnet101-11 - OpenMP CPU
  shufflenet-v2-10 - OpenMP CPU
  super-resolution-10 - OpenMP CPU
PyBench
PyPerformance:
  pathlib
  json_loads
  crypto_pyaes
  regex_compile
  python_startup
Natron
Appleseed:
  Emily
  Disney Material
  Material Tester
PHPBench
RAR Compression
Git
PyHPC Benchmarks:
  CPU - JAX - 262144 - Equation of State
  CPU - JAX - 262144 - Isoneutral Mixing
  CPU - JAX - 1048576 - Equation of State
  CPU - JAX - 1048576 - Isoneutral Mixing
  CPU - JAX - 4194304 - Equation of State
  CPU - JAX - 4194304 - Isoneutral Mixing
  CPU - Numba - 262144 - Equation of State
  CPU - Numba - 262144 - Isoneutral Mixing
  CPU - Numpy - 262144 - Equation of State
  CPU - Numpy - 262144 - Isoneutral Mixing
  CPU - Numba - 1048576 - Equation of State
  CPU - Numba - 1048576 - Isoneutral Mixing
  CPU - Numba - 4194304 - Equation of State
  CPU - Numba - 4194304 - Isoneutral Mixing
  CPU - Numpy - 1048576 - Equation of State
  CPU - Numpy - 1048576 - Isoneutral Mixing
  CPU - Numpy - 4194304 - Equation of State
  CPU - Numpy - 4194304 - Isoneutral Mixing
  CPU - Theano - 262144 - Equation of State
  CPU - Theano - 262144 - Isoneutral Mixing
  CPU - Bohrium - 262144 - Equation of State
  CPU - Bohrium - 262144 - Isoneutral Mixing
  CPU - PyTorch - 262144 - Equation of State
  CPU - PyTorch - 262144 - Isoneutral Mixing
  CPU - Theano - 1048576 - Equation of State
  CPU - Theano - 1048576 - Isoneutral Mixing
  CPU - Theano - 4194304 - Equation of State
  CPU - Theano - 4194304 - Isoneutral Mixing
  CPU - Bohrium - 1048576 - Equation of State
  CPU - Bohrium - 1048576 - Isoneutral Mixing
  CPU - Bohrium - 4194304 - Equation of State
  CPU - Bohrium - 4194304 - Isoneutral Mixing
  CPU - PyTorch - 1048576 - Equation of State
  CPU - PyTorch - 1048576 - Isoneutral Mixing
  CPU - PyTorch - 4194304 - Equation of State
  CPU - PyTorch - 4194304 - Isoneutral Mixing
  CPU - TensorFlow - 262144 - Equation of State
  CPU - TensorFlow - 1048576 - Equation of State
  CPU - TensorFlow - 4194304 - Equation of State
Unpacking Firefox
Tesseract OCR
Chaos Group V-RAY
OpenCV:
  Features 2D
  Object Detection
  DNN - Deep Neural Network