extra new ryzen zen4

Benchmarks for a future article.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2301091-PTS-EXTRANEW50
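
For scripted or repeated comparisons, the same invocation can be wrapped in a small Python helper; this is a minimal sketch, assuming phoronix-test-suite is installed and on the PATH (the result ID is the one quoted above, everything else is illustrative):

import subprocess

# Launch the Phoronix Test Suite comparison against the public result file
# referenced above. The run is interactive unless batch mode has been set up
# beforehand (e.g. via phoronix-test-suite batch-setup); assumes the
# phoronix-test-suite binary is on the PATH.
completed = subprocess.run(
    ["phoronix-test-suite", "benchmark", "2301091-PTS-EXTRANEW50"],
    check=False,
)
print("phoronix-test-suite exited with code", completed.returncode)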

Tests in this result file span the following categories:

Audio Encoding: 3 Tests
AV1: 2 Tests
BLAS (Basic Linear Algebra Sub-Routine) Tests: 2 Tests
C++ Boost Tests: 2 Tests
Chess Test Suite: 4 Tests
Timed Code Compilation: 12 Tests
C/C++ Compiler Tests: 23 Tests
Compression Tests: 2 Tests
CPU Massive: 33 Tests
Creator Workloads: 27 Tests
Cryptocurrency Benchmarks, CPU Mining Tests: 2 Tests
Cryptography: 4 Tests
Encoding: 11 Tests
Fortran Tests: 3 Tests
Game Development: 3 Tests
HPC - High Performance Computing: 18 Tests
Imaging: 6 Tests
Machine Learning: 8 Tests
Molecular Dynamics: 5 Tests
MPI Benchmarks: 5 Tests
Multi-Core: 41 Tests
NVIDIA GPU Compute: 7 Tests
Intel oneAPI: 2 Tests
OpenCL: 2 Tests
OpenMPI Tests: 9 Tests
Programmer / Developer System Benchmarks: 18 Tests
Python: 2 Tests
Raytracing: 2 Tests
Renderers: 6 Tests
Scientific Computing: 7 Tests
Software Defined Radio: 2 Tests
Server: 3 Tests
Server CPU Tests: 25 Tests
Single-Threaded: 5 Tests
Video Encoding: 8 Tests
Common Workstation Benchmarks: 4 Tests

Test Runs

Result Identifier - Date - Test Duration
7700 - December 31 2022 - 6 Hours, 29 Minutes
7900 - December 30 2022 - 5 Hours, 59 Minutes
Ryzen 7600 AMD - January 03 2023 - 1 Day, 4 Hours, 44 Minutes
AMD 7600 - January 04 2023 - 7 Hours, 11 Minutes
AMD 7700 - January 02 2023 - 6 Hours, 26 Minutes
Ryzen 7 7700 - January 01 2023 - 6 Hours, 29 Minutes
Ryzen 7600 - January 05 2023 - 7 Hours, 13 Minutes
Ryzen 9 7900 - December 29 2022 - 6 Hours
Average run duration - 9 Hours, 19 Minutes

System Configuration

All eight runs used the same ASUS ROG CROSSHAIR X670E HERO platform; the "7600" runs used the AMD Ryzen 5 7600, the "7700" runs the AMD Ryzen 7 7700, and the "7900" runs the AMD Ryzen 9 7900.

Processor: AMD Ryzen 5 7600 6-Core @ 5.17GHz (6 Cores / 12 Threads), AMD Ryzen 7 7700 8-Core @ 5.39GHz (8 Cores / 16 Threads), or AMD Ryzen 9 7900 12-Core @ 5.48GHz (12 Cores / 24 Threads), depending on the run
Motherboard: ASUS ROG CROSSHAIR X670E HERO (0805 BIOS)
Chipset: AMD Device 14d8
Memory: 32GB
Disk: 2000GB Samsung SSD 980 PRO 2TB (some runs list an additional 2000GB drive)
Graphics: AMD Radeon RX 6800 XT 16GB (2575/1000MHz)
Audio: AMD Navi 21 HDMI Audio
Monitor: ASUS MG28U
Network: Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
OS: Ubuntu 22.04
Kernel: 6.0.0-060000rc1daily20220820-generic (x86_64)
Desktop: GNOME Shell 42.2
Display Server: X Server 1.21.1.3 + Wayland
OpenGL: 4.6 Mesa 22.3.0-devel (git-4685385 2022-08-23 jammy-oibaf-ppa) (LLVM 14.0.6 DRM 3.48)
Vulkan: 1.3.224
Compiler: GCC 12.0.1 20220319
File-System: ext4
Screen Resolution: 3840x2160

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-OcsLtf/gcc-12-12-20220319/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-OcsLtf/gcc-12-12-20220319/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: amd-pstate performance (Boost: Enabled) - CPU Microcode: 0xa601203
Java Details: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)
Python Details: Python 3.10.4
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

Result Overview: Phoronix Test Suite comparison chart across all eight runs (results normalized, spanning roughly 100% to 189%). Benchmarks covered: C-Ray, Mobile Neural Network, Stockfish, OpenSSL, Coremark, IndigoBench, Tachyon, Xmrig, asmFish, Chaos Group V-RAY, Blender, ASTC Encoder, Aircrack-ng, NAMD, Stargate Digital Audio Workstation, Cpuminer-Opt, 7-Zip Compression, LAMMPS Molecular Dynamics Simulator, Timed LLVM Compilation, Timed Linux Kernel Compilation, Timed MPlayer Compilation, Primesieve, Appleseed, Build2, oneDNN, GROMACS, Timed Godot Game Engine Compilation, Timed FFmpeg Compilation, SVT-HEVC, Timed Mesa Compilation, NCNN, x264, SVT-VP9, Rodinia, NAS Parallel Benchmarks, libavif avifenc, OpenFOAM, x265, Kvazaar, Timed PHP Compilation, GPAW, Y-Cruncher, Timed Wasmer Compilation, SVT-AV1, Xcompact3d Incompact3d, Darktable, Neural Magic DeepSparse, JPEG XL Decoding libjxl, Timed GDB GNU Debugger Compilation, OpenVINO, nekRS, ONNX Runtime, Liquid-DSP, VP9 libvpx Encoding, GIMP, Timed CPython Compilation, Zstd Compression, GNU Radio, Algebraic Multi-Grid Benchmark, Timed Apache Compilation, JPEG XL libjxl, LeelaChessZero, Ngspice, DaCapo Benchmark, PHPBench, TNN, QuantLib, Crafty, simdjson, PyBench, Git, PyPerformance, WebP Image Encode, FLAC Audio Encoding, LAME MP3 Encoding.

Detailed per-test result table (every individual benchmark result for all eight runs) omitted here; individual results are shown below and the complete data set is available via the OpenBenchmarking.org result file 2301091-PTS-EXTRANEW50.

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  Ryzen 9 7900: 0.359992
  7900: 0.388582
  AMD 7700: 0.916936
  Ryzen 7 7700: 0.951769
  7700: 0.962285
  AMD 7600: 1.010390
  Ryzen 7600: 1.015800
  Ryzen 7600 AMD: 1.023050
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
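
The PTS profile drives NCNN's own benchmark harness, but the same CPU inference path can be exercised from Python through the ncnn bindings; a rough sketch, assuming the pip ncnn package and placeholder model/blob names rather than the profile's actual configuration:

import numpy as np
import ncnn  # pip install ncnn

# Placeholder model files; the PTS profile benchmarks NCNN's bundled models
# rather than anything loaded like this.
net = ncnn.Net()
net.opt.num_threads = 8  # CPU-threaded inference, as in this test profile
net.load_param("model.param")
net.load_model("model.bin")

# 224x224 RGB input; the input/output blob names depend on the model.
img = np.random.rand(3, 224, 224).astype(np.float32)
ex = net.create_extractor()
ex.input("data", ncnn.Mat(img))
ret, out = ex.extract("output")
print("ok" if ret == 0 else "extract failed", np.array(out).shape)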

NCNN 20220729 - Target: CPU - Model: blazeface (ms, fewer is better):
  Ryzen 7600 AMD: 0.53
  AMD 7600: 0.53
  Ryzen 7600: 0.53
  AMD 7700: 0.54
  Ryzen 7 7700: 0.57
  7700: 0.59
  7900: 1.39
  Ryzen 9 7900: 1.40
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking rather than any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.
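
MNN also ships Python bindings around its interpreter/session API; the following is a loose sketch of a single CPU inference, assuming a converted .mnn model file (names, shapes, and the exact API surface may differ between MNN releases):

import numpy as np
import MNN  # pip install MNN

# Load a converted .mnn model (file name is a placeholder) and run one
# CPU inference -- roughly what this profile times per model.
interpreter = MNN.Interpreter("mobilenet_v1.mnn")
session = interpreter.createSession({"numThread": 8})
input_tensor = interpreter.getSessionInput(session)

data = np.random.rand(1, 3, 224, 224).astype(np.float32)
tmp = MNN.Tensor((1, 3, 224, 224), MNN.Halide_Type_Float,
                 data, MNN.Tensor_DimensionType_Caffe)
input_tensor.copyFrom(tmp)

interpreter.runSession(session)
output = interpreter.getSessionOutput(session)
print(output.getShape())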

Mobile Neural Network 2.1 - Model: mobilenet-v1-1.0 (ms, fewer is better):
  AMD 7700: 1.482
  Ryzen 7 7700: 1.518
  7700: 1.558
  AMD 7600: 2.848
  Ryzen 7600 AMD: 2.854
  Ryzen 7600: 2.864
  Ryzen 9 7900: 3.560
  7900: 3.701
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, fewer is better):
  7900: 0.664113
  Ryzen 9 7900: 0.665636
  AMD 7700: 1.453490
  7700: 1.534410
  Ryzen 7 7700: 1.556490
  Ryzen 7600 AMD: 1.635430
  AMD 7600: 1.641670
  Ryzen 7600: 1.652570
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: FastestDet (ms, fewer is better):
  Ryzen 7600 AMD: 1.71
  AMD 7600: 1.71
  AMD 7700: 1.72
  Ryzen 7 7700: 1.76
  Ryzen 7600: 1.85
  7700: 1.90
  7900: 4.16
  Ryzen 9 7900: 4.17
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: CPU - Model: shufflenet-v2 (ms, fewer is better):
  AMD 7700: 1.45
  Ryzen 7 7700: 1.49
  7700: 1.53
  Ryzen 7600 AMD: 1.58
  AMD 7600: 1.58
  Ryzen 7600: 1.60
  7900: 3.48
  Ryzen 9 7900: 3.48
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.
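
For reference, the onnxruntime Python API exposes the same CPU inference path, and the "Standard" versus "Parallel" executor seen in the result titles appears to correspond to the session's execution mode; a minimal sketch with a placeholder model file:

import numpy as np
import onnxruntime as ort  # pip install onnxruntime

# Executor "Standard" vs. "Parallel" is assumed to map to the execution mode;
# the model path and input shape are placeholders, not the profile's models.
opts = ort.SessionOptions()
opts.execution_mode = ort.ExecutionMode.ORT_SEQUENTIAL  # or ORT_PARALLEL

sess = ort.InferenceSession("model.onnx", sess_options=opts,
                            providers=["CPUExecutionProvider"])
name = sess.get_inputs()[0].name
data = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = sess.run(None, {name: data})
print([o.shape for o in outputs])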

ONNX Runtime 1.11 - Model: bertsquad-12 - Device: CPU - Executor: Standard (Inferences Per Minute, more is better):
  AMD 7700: 1257
  7700: 1239
  7900: 1083
  AMD 7600: 1000
  Ryzen 7600 AMD: 757
  Ryzen 7 7700: 729
  Ryzen 9 7900: 716
  Ryzen 7600: 534
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.11 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inferences Per Minute, more is better):
  AMD 7700: 2550
  Ryzen 9 7900: 2508
  Ryzen 7600: 1920
  7900: 1735
  7700: 1588
  Ryzen 7 7700: 1580
  Ryzen 7600 AMD: 1252
  AMD 7600: 1107
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: regnety_400m (ms, fewer is better):
  AMD 7700: 4.61
  Ryzen 7600 AMD: 4.73
  AMD 7600: 4.78
  Ryzen 7600: 4.81
  Ryzen 7 7700: 4.83
  7700: 4.91
  7900: 10.34
  Ryzen 9 7900: 10.51
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: super-resolution-10 - Device: CPU - Executor: Standard (Inferences Per Minute, more is better):
  Ryzen 7 7700: 8381
  Ryzen 9 7900: 5796
  7900: 5676
  7700: 5333
  AMD 7700: 5282
  AMD 7600: 3994
  Ryzen 7600 AMD: 3992
  Ryzen 7600: 3985
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better):
  AMD 7700: 1.42
  7700: 1.50
  Ryzen 7 7700: 1.50
  Ryzen 7600 AMD: 1.53
  AMD 7600: 1.53
  Ryzen 7600: 1.58
  7900: 2.97
  Ryzen 9 7900: 2.98
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking rather than any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 - Model: mobilenetV3 (ms, fewer is better):
  AMD 7700: 0.725
  Ryzen 7 7700: 0.736
  AMD 7600: 0.737
  7700: 0.741
  Ryzen 7600 AMD: 0.783
  Ryzen 7600: 0.801
  Ryzen 9 7900: 1.493
  7900: 1.505
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: mnasnet (ms, fewer is better):
  AMD 7700: 1.50
  Ryzen 7 7700: 1.58
  Ryzen 7600 AMD: 1.61
  AMD 7600: 1.61
  Ryzen 7600: 1.67
  7700: 1.71
  7900: 3.06
  Ryzen 9 7900: 3.06
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking rather than any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 - Model: squeezenetv1.1 (ms, fewer is better):
  AMD 7700: 1.244
  7700: 1.272
  Ryzen 7 7700: 1.301
  AMD 7600: 1.375
  Ryzen 7600 AMD: 1.448
  Ryzen 7600: 1.466
  Ryzen 9 7900: 2.452
  7900: 2.522
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1 - Model: nasnet (ms, fewer is better):
  AMD 7600: 5.322
  AMD 7700: 5.522
  Ryzen 7 7700: 5.883
  7700: 5.943
  Ryzen 7600 AMD: 6.237
  Ryzen 7600: 6.505
  Ryzen 9 7900: 10.396
  7900: 10.564
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1 - Model: MobileNetV2_224 (ms, fewer is better):
  AMD 7600: 1.638
  AMD 7700: 1.735
  Ryzen 7600 AMD: 1.745
  Ryzen 7600: 1.770
  7700: 1.812
  Ryzen 7 7700: 1.827
  Ryzen 9 7900: 3.113
  7900: 3.150
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

C-Ray

This is a test of C-Ray, a simple raytracer designed to test floating-point CPU performance. The test is multi-threaded (16 threads per core), shoots 8 rays per pixel for anti-aliasing, and generates a 1600 x 1200 image by default; the result below uses the 4K, 16 rays per pixel configuration. Learn more via the OpenBenchmarking.org test page.

C-Ray 1.1 - Total Time - 4K, 16 Rays Per Pixel (Seconds, fewer is better):
  Ryzen 9 7900: 34.03
  7900: 34.54
  AMD 7700: 48.79
  7700: 48.95
  Ryzen 7 7700: 49.03
  Ryzen 7600: 64.33
  Ryzen 7600 AMD: 64.33
  AMD 7600: 64.37
  1. (CC) gcc options: -lm -lpthread -O3

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting among the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: EP.C (Total Mop/s, more is better):
  7900: 2155.92
  Ryzen 9 7900: 2150.58
  7700: 1588.73
  AMD 7700: 1561.40
  Ryzen 7 7700: 1514.07
  Ryzen 7600: 1192.54
  Ryzen 7600 AMD: 1173.77
  AMD 7600: 1143.01
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
  2. Open MPI 4.1.2

Stockfish

This is a test of Stockfish, an advanced open-source C++ chess engine that can scale up to 512 CPU threads. Learn more via the OpenBenchmarking.org test page.
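
Outside of the PTS harness, Stockfish can also be driven over UCI from Python with the python-chess package; a small sketch, assuming a stockfish binary on the PATH and an arbitrary search depth:

import chess
import chess.engine  # pip install python-chess

# Drive a local Stockfish binary over UCI and search the starting position
# to a fixed depth; the binary path and depth are illustrative only.
engine = chess.engine.SimpleEngine.popen_uci("stockfish")
try:
    info = engine.analyse(chess.Board(), chess.engine.Limit(depth=20))
    print("score:", info["score"], "nodes searched:", info.get("nodes"))
finally:
    engine.quit()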

Stockfish 15 - Total Time (Nodes Per Second, more is better):
  7900: 50796739
  Ryzen 9 7900: 47480935
  AMD 7700: 38273850
  Ryzen 7 7700: 35499955
  7700: 34904509
  Ryzen 7600: 28121767
  Ryzen 7600 AMD: 27957470
  AMD 7600: 27167188
  1. (CXX) g++ options: -lgcov -m64 -lpthread -fno-exceptions -std=c++17 -fno-peel-loops -fno-tracer -pedantic -O3 -msse -msse3 -mpopcnt -mavx2 -mavx512f -mavx512bw -mavx512vnni -mavx512dq -mavx512vl -msse4.1 -mssse3 -msse2 -mbmi2 -flto -flto=jobserver

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking rather than any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 - Model: resnet-v2-50 (ms, fewer is better):
  AMD 7700: 7.334
  7700: 7.871
  Ryzen 7 7700: 7.877
  AMD 7600: 11.558
  Ryzen 7600 AMD: 11.988
  Ryzen 7600: 12.085
  Ryzen 9 7900: 13.620
  7900: 13.653
  1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

OpenSSL

OpenSSL is an open-source toolkit that implements the SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.
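
The underlying "openssl speed" benchmark can be reproduced directly; a small sketch that shells out to the OpenSSL CLI for the RSA4096 algorithm reported below (single-process by default; openssl speed also accepts -multi to use more cores):

import subprocess

# Run OpenSSL's built-in speed benchmark for 4096-bit RSA sign/verify,
# the same algorithm reported below. Assumes the openssl CLI is installed.
proc = subprocess.run(["openssl", "speed", "rsa4096"],
                      capture_output=True, text=True, check=True)
# The summary table (sign/s and verify/s columns) is printed on stdout.
print(proc.stdout.strip().splitlines()[-1])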

OpenSSL 3.0 - Algorithm: RSA4096 (verify/s, more is better):
  Ryzen 9 7900: 275022.1
  7900: 274898.7
  AMD 7700: 194261.5
  7700: 194057.2
  Ryzen 7 7700: 193624.5
  Ryzen 7600 AMD: 148020.7
  AMD 7600: 147994.7
  Ryzen 7600: 147943.7
  1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
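
The effect of the compression level measured by this profile can be approximated in-process with the zstandard Python bindings; a minimal sketch using a synthetic, compressible payload rather than the FreeBSD disk image the profile uses:

import zstandard as zstd  # pip install zstandard

# Synthetic, compressible payload; the actual profile compresses a FreeBSD
# disk image. Levels 3, 8 and 19 are the ones that appear in this result file.
data = b"The quick brown fox jumps over the lazy dog. " * 200_000

for level in (3, 8, 19):
    compressed = zstd.ZstdCompressor(level=level).compress(data)
    assert zstd.ZstdDecompressor().decompress(compressed) == data
    print(f"level {level}: {len(data)} -> {len(compressed)} bytes")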

Zstd Compression 1.5.0 - Compression Level: 8 - Compression Speed (MB/s, more is better):
  Ryzen 9 7900: 2025.3
  7900: 1985.8
  Ryzen 7600 AMD: 1248.8
  AMD 7600: 1231.3
  Ryzen 7600: 1180.2
  AMD 7700: 1119.3
  7700: 1110.2
  Ryzen 7 7700: 1090.6
  1. (CC) gcc options: -O3 -pthread -lz -llzma

OpenSSL

OpenSSL is an open-source toolkit that implements the SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenSSL 3.0 - Algorithm: RSA4096 (sign/s, more is better):
  Ryzen 9 7900: 4203.4
  7900: 4201.8
  AMD 7700: 2968.4
  7700: 2958.4
  Ryzen 7 7700: 2954.2
  AMD 7600: 2265.5
  Ryzen 7600: 2265.4
  Ryzen 7600 AMD: 2265.1
  1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi carrying a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the processor across a wide variety of cryptocurrencies. The benchmark reports the CPU mining hash speed for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: DeepcoinRyzen 9 79007900AMD 77007700Ryzen 7 7700Ryzen 7600 AMDAMD 7600Ryzen 76003K6K9K12K15KSE +/- 16.47, N = 314700.0014700.0010440.0010410.0010380.007961.377944.477943.921. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: DeepcoinRyzen 9 79007900AMD 77007700Ryzen 7 7700Ryzen 7600 AMDAMD 7600Ryzen 76003K6K9K12K15KMin: 7944.38 / Avg: 7961.37 / Max: 7994.311. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.
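
As a rough sketch, and assuming an MPI build of NPB with a suitable config/make.def in place, the EP class D case below corresponds to something like:

    make ep CLASS=D
    mpirun -np 12 ./bin/ep.D.x

with -np matched to the core count of the processor being tested.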

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: EP.D7900Ryzen 9 7900Ryzen 7 77007700AMD 7700AMD 7600Ryzen 7600Ryzen 7600 AMD5001000150020002500SE +/- 9.35, N = 92158.452117.951562.221551.491507.331199.831197.051172.131. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2
OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: EP.D7900Ryzen 9 7900Ryzen 7 77007700AMD 7700AMD 7600Ryzen 7600Ryzen 7600 AMD400800120016002000Min: 1141.71 / Avg: 1172.13 / Max: 1201.461. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.
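
Each oneDNN result in this file maps onto a benchdnn invocation; as a hedged sketch (the driver flags and batch-file path here are assumptions and vary between oneDNN releases), the bf16 deconvolution case resembles:

    ./benchdnn --deconv --mode=P --cfg=bf16bf16bf16 --batch=inputs/deconv/shapes_1d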

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPURyzen 9 79007900Ryzen 7 77007700AMD 7700AMD 7600Ryzen 7600 AMDRyzen 76003691215SE +/- 0.03364, N = 35.517635.677607.415057.417137.427429.594649.6078010.12880MIN: 4.88MIN: 4.97MIN: 6.75MIN: 6.76MIN: 6.85MIN: 9.26MIN: 9.17MIN: 9.531. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPURyzen 9 79007900Ryzen 7 77007700AMD 7700AMD 7600Ryzen 7600 AMDRyzen 76003691215Min: 9.55 / Avg: 9.61 / Max: 9.661. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: MagiRyzen 9 79007900AMD 7700Ryzen 7 77007700Ryzen 7600Ryzen 7600 AMDAMD 7600160320480640800SE +/- 1.54, N = 3733.32726.49531.73517.61516.97406.47402.03400.761. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: MagiRyzen 9 79007900AMD 7700Ryzen 7 77007700Ryzen 7600Ryzen 7600 AMDAMD 7600130260390520650Min: 400.19 / Avg: 402.03 / Max: 405.081. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPURyzen 9 79007900AMD 7700AMD 7600Ryzen 7600 AMDRyzen 7600Ryzen 7 770077000.75941.51882.27823.03763.797SE +/- 0.01337, N = 31.847481.990783.182293.263863.265703.272543.363633.37523MIN: 1.79MIN: 1.8MIN: 3.09MIN: 3.2MIN: 3.2MIN: 3.2MIN: 3.08MIN: 3.091. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPURyzen 9 79007900AMD 7700AMD 7600Ryzen 7600 AMDRyzen 7600Ryzen 7 77007700246810Min: 3.25 / Avg: 3.27 / Max: 3.291. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgbyte/s, More Is BetterOpenSSL 3.0Algorithm: SHA2567900Ryzen 9 7900Ryzen 7 7700AMD 77007700AMD 7600Ryzen 7600Ryzen 7600 AMD5000M10000M15000M20000M25000MSE +/- 148039451.93, N = 324118303190240553723801754487013017538082780174907442601338815632013335266100132238381701. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl
OpenBenchmarking.orgbyte/s, More Is BetterOpenSSL 3.0Algorithm: SHA2567900Ryzen 9 7900Ryzen 7 7700AMD 77007700AMD 7600Ryzen 7600Ryzen 7600 AMD4000M8000M12000M16000M20000MMin: 12928360310 / Avg: 13223838170 / Max: 133879069401. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
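
NCNN ships a benchncnn utility that loops over its bundled model list; a CPU-only sketch (positional arguments are loop count, threads, power-save mode, GPU device where -1 disables the GPU, and cooling-down interval) would be:

    ./benchncnn 10 $(nproc) 0 -1 0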

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU-v2-v2 - Model: mobilenet-v2AMD 7700Ryzen 7600 AMDAMD 76007700Ryzen 7 7700Ryzen 76007900Ryzen 9 79000.76731.53462.30193.06923.8365SE +/- 0.00, N = 31.871.911.921.981.982.053.393.41MIN: 1.83 / MAX: 2.22MIN: 1.88 / MAX: 2.37MIN: 1.89 / MAX: 2.4MIN: 1.83 / MAX: 3.36MIN: 1.84 / MAX: 3.02MIN: 2 / MAX: 2.62MIN: 3.36 / MAX: 3.7MIN: 3.38 / MAX: 3.791. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU-v2-v2 - Model: mobilenet-v2AMD 7700Ryzen 7600 AMDAMD 76007700Ryzen 7 7700Ryzen 76007900Ryzen 9 7900246810Min: 1.91 / Avg: 1.91 / Max: 1.921. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

JPEG XL Decoding libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression than legacy JPEG. This test profile is suited for JPEG XL decode performance testing to a PNG output file; the pts/jpexl test is for encode performance. The JPEG XL encoding/decoding is done using the libjxl codebase. Learn more via the OpenBenchmarking.org test page.
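
Decode throughput of this kind can be approximated with libjxl's djxl tool; the file names below are placeholders and flag spellings can vary slightly between libjxl releases:

    djxl --num_threads=$(nproc) sample.jxl sample.png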

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL Decoding libjxl 0.7CPU Threads: AllAMD 77007700Ryzen 7 7700Ryzen 7600 AMDRyzen 7600AMD 76007900Ryzen 9 790080160240320400SE +/- 0.21, N = 3365.43361.28344.88336.39334.93318.81204.91201.39
OpenBenchmarking.orgMP/s, More Is BetterJPEG XL Decoding libjxl 0.7CPU Threads: AllAMD 77007700Ryzen 7 7700Ryzen 7600 AMDRyzen 7600AMD 76007900Ryzen 9 790070140210280350Min: 336.08 / Avg: 336.39 / Max: 336.8

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: Quad SHA-256, Pyrite7900Ryzen 9 7900AMD 77007700Ryzen 7 7700Ryzen 7600AMD 7600Ryzen 7600 AMD40K80K120K160K200KSE +/- 21.86, N = 32084702071401533501494501479101154301153301148931. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: Quad SHA-256, Pyrite7900Ryzen 9 7900AMD 77007700Ryzen 7 7700Ryzen 7600AMD 7600Ryzen 7600 AMD40K80K120K160K200KMin: 114850 / Avg: 114893.33 / Max: 1149201. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.
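
The throughput numbers are produced by OpenVINO's benchmark_app; a minimal sketch against an arbitrary IR model (model.xml is a placeholder) looks like:

    benchmark_app -m model.xml -d CPU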

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Vehicle Detection FP16 - Device: CPURyzen 9 790079007700Ryzen 7 7700AMD 7700Ryzen 7600AMD 7600Ryzen 7600 AMD150300450600750SE +/- 0.75, N = 3672.90672.13491.96490.93463.77375.11374.50370.901. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Vehicle Detection FP16 - Device: CPURyzen 9 790079007700Ryzen 7 7700AMD 7700Ryzen 7600AMD 7600Ryzen 7600 AMD120240360480600Min: 369.71 / Avg: 370.9 / Max: 372.291. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.
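
CoreMark is built and run straight from its Makefile; a multi-threaded run comparable to this one, using CoreMark's documented MULTITHREAD/USE_PTHREAD defines with the thread count matched to the CPU, can be sketched as:

    make XCFLAGS="-DMULTITHREAD=$(nproc) -DUSE_PTHREAD" REBUILD=1 run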

OpenBenchmarking.orgIterations/Sec, More Is BetterCoremark 1.0CoreMark Size 666 - Iterations Per SecondRyzen 9 79007900AMD 77007700Ryzen 7 7700AMD 7600Ryzen 7600 AMDRyzen 7600150K300K450K600K750KSE +/- 1264.79, N = 3695719.39691509.80509464.00497649.99494267.87385542.17385213.54384184.411. (CC) gcc options: -O2 -lrt" -lrt
OpenBenchmarking.orgIterations/Sec, More Is BetterCoremark 1.0CoreMark Size 666 - Iterations Per SecondRyzen 9 79007900AMD 77007700Ryzen 7 7700AMD 7600Ryzen 7600 AMDRyzen 7600120K240K360K480K600KMin: 383928.33 / Avg: 385213.54 / Max: 387743.011. (CC) gcc options: -O2 -lrt" -lrt

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: x25x7900Ryzen 9 7900AMD 7700Ryzen 7 77007700Ryzen 7600 AMDRyzen 7600AMD 76002004006008001000SE +/- 1.16, N = 3804.79804.66586.79583.56583.09448.72448.30446.021. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: x25x7900Ryzen 9 7900AMD 7700Ryzen 7 77007700Ryzen 7600 AMDRyzen 7600AMD 7600140280420560700Min: 446.67 / Avg: 448.72 / Max: 450.71. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: scrypt7900Ryzen 9 7900AMD 7700Ryzen 7 77007700Ryzen 7600 AMDRyzen 7600AMD 7600100200300400500SE +/- 1.68, N = 3451.13449.60332.81331.57325.46252.86251.35250.531. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: scrypt7900Ryzen 9 7900AMD 7700Ryzen 7 77007700Ryzen 7600 AMDRyzen 7600AMD 760080160240320400Min: 250.86 / Avg: 252.86 / Max: 256.21. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: Ringcoin7900Ryzen 9 7900Ryzen 7 77007700AMD 7700Ryzen 7600Ryzen 7600 AMDAMD 76006001200180024003000SE +/- 4.41, N = 33027.863027.672268.582226.512207.981709.611692.201684.911. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: Ringcoin7900Ryzen 9 7900Ryzen 7 77007700AMD 7700Ryzen 7600Ryzen 7600 AMDAMD 76005001000150020002500Min: 1684.16 / Avg: 1692.2 / Max: 1699.361. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.
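
The same kind of figure can be reproduced with 7-Zip's integrated benchmark command, e.g.:

    7z b -mmt$(nproc)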

OpenBenchmarking.orgMIPS, More Is Better7-Zip Compression 22.01Test: Decompression RatingRyzen 9 79007900AMD 7700Ryzen 7 77007700Ryzen 7600 AMDAMD 7600Ryzen 760030K60K90K120K150KSE +/- 90.75, N = 31225281216158683385433851666884568387682381. (CXX) g++ options: -lpthread -ldl -O2 -fPIC
OpenBenchmarking.orgMIPS, More Is Better7-Zip Compression 22.01Test: Decompression RatingRyzen 9 79007900AMD 7700Ryzen 7 77007700Ryzen 7600 AMDAMD 7600Ryzen 760020K40K60K80K100KMin: 68668 / Avg: 68844.67 / Max: 689691. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: Blake-2 S7900Ryzen 9 79007700Ryzen 7 7700AMD 7700Ryzen 7600 AMDAMD 7600Ryzen 7600300K600K900K1200K1500KSE +/- 2843.36, N = 3118912011822808795608688806915606641136627006624601. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: Blake-2 S7900Ryzen 9 79007700Ryzen 7 7700AMD 7700Ryzen 7600 AMDAMD 7600Ryzen 7600200K400K600K800K1000KMin: 661250 / Avg: 664113.33 / Max: 6698001. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Neural Magic DeepSparse

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-StreamRyzen 9 79007900AMD 77007700Ryzen 7 7700AMD 7600Ryzen 7600Ryzen 7600 AMD246810SE +/- 0.0028, N = 38.52678.51766.16446.14556.11824.76334.75804.7536
OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-StreamRyzen 9 79007900AMD 77007700Ryzen 7 7700AMD 7600Ryzen 7600Ryzen 7600 AMD3691215Min: 4.75 / Avg: 4.75 / Max: 4.76

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: LBC, LBRY Credits7900Ryzen 9 79007700AMD 7700Ryzen 7 7700AMD 7600Ryzen 7600Ryzen 7600 AMD20K40K60K80K100KSE +/- 26.67, N = 396310961507236071750707305384053810536931. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: LBC, LBRY Credits7900Ryzen 9 79007700AMD 7700Ryzen 7 7700AMD 7600Ryzen 7600Ryzen 7600 AMD20K40K60K80K100KMin: 53640 / Avg: 53693.33 / Max: 537201. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgM samples/s, More Is BetterIndigoBench 4.4Acceleration: CPU - Scene: Supercar7900Ryzen 9 7900AMD 7700Ryzen 7 77007700Ryzen 7600AMD 7600Ryzen 7600 AMD246810SE +/- 0.013, N = 38.0988.0595.8845.8405.7864.5824.5554.517
OpenBenchmarking.orgM samples/s, More Is BetterIndigoBench 4.4Acceleration: CPU - Scene: Supercar7900Ryzen 9 7900AMD 7700Ryzen 7 77007700Ryzen 7600AMD 7600Ryzen 7600 AMD3691215Min: 4.49 / Avg: 4.52 / Max: 4.54

Neural Magic DeepSparse

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-StreamRyzen 9 79007900AMD 77007700Ryzen 7 7700Ryzen 7600Ryzen 7600 AMDAMD 7600306090120150SE +/- 0.05, N = 3127.26126.3692.4592.1892.0171.8271.4071.08
OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-StreamRyzen 9 79007900AMD 77007700Ryzen 7 7700Ryzen 7600Ryzen 7600 AMDAMD 760020406080100Min: 71.34 / Avg: 71.4 / Max: 71.51

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream7900Ryzen 9 7900AMD 77007700Ryzen 7 7700AMD 7600Ryzen 7600Ryzen 7600 AMD246810SE +/- 0.0023, N = 38.50398.50386.16036.14256.13474.75764.75504.7544
OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream7900Ryzen 9 7900AMD 77007700Ryzen 7 7700AMD 7600Ryzen 7600Ryzen 7600 AMD3691215Min: 4.75 / Avg: 4.75 / Max: 4.76

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: Triple SHA-256, OnecoinRyzen 9 79007900AMD 77007700Ryzen 7 7700AMD 7600Ryzen 7600 AMDRyzen 760060K120K180K240K300KSE +/- 1138.26, N = 132956602951302209602205502178301688101673431656601. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: Triple SHA-256, OnecoinRyzen 9 79007900AMD 77007700Ryzen 7 7700AMD 7600Ryzen 7600 AMDRyzen 760050K100K150K200K250KMin: 159580 / Avg: 167343.08 / Max: 1766601. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Neural Magic DeepSparse

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream7900Ryzen 9 7900AMD 77007700Ryzen 7 7700Ryzen 7600AMD 7600Ryzen 7600 AMD4080120160200SE +/- 0.10, N = 3202.12200.92146.77146.43146.03113.83113.78113.45
OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream7900Ryzen 9 7900AMD 77007700Ryzen 7 7700Ryzen 7600AMD 7600Ryzen 7600 AMD4080120160200Min: 113.25 / Avg: 113.45 / Max: 113.55

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: SkeincoinRyzen 9 79007900AMD 77007700Ryzen 7 7700Ryzen 7600 AMDRyzen 7600AMD 760040K80K120K160K200KSE +/- 695.37, N = 31834101815401376501348001344501033371030401030301. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: SkeincoinRyzen 9 79007900AMD 77007700Ryzen 7 7700Ryzen 7600 AMDRyzen 7600AMD 760030K60K90K120K150KMin: 102460 / Avg: 103336.67 / Max: 1047101. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRender Ratio, More Is BetterStargate Digital Audio Workstation 22.11.5Sample Rate: 44100 - Buffer Size: 1024Ryzen 9 79007900AMD 7700Ryzen 7 77007700Ryzen 7600 AMDAMD 7600Ryzen 76001.21292.42583.63874.85166.0645SE +/- 0.000394, N = 35.3905235.3610254.3287694.2982684.2971273.0396273.0391883.0304731. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions
OpenBenchmarking.orgRender Ratio, More Is BetterStargate Digital Audio Workstation 22.11.5Sample Rate: 44100 - Buffer Size: 1024Ryzen 9 79007900AMD 7700Ryzen 7 77007700Ryzen 7600 AMDAMD 7600Ryzen 7600246810Min: 3.04 / Avg: 3.04 / Max: 3.041. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

Xmrig

Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.
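
Recent Xmrig releases include an offline benchmark mode; a 1M-hash RandomX sketch is shown below (the Wownero variant may additionally require an explicit --algo selection):

    ./xmrig --bench=1M --threads=$(nproc)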

OpenBenchmarking.orgH/s, More Is BetterXmrig 6.18.1Variant: Wownero - Hash Count: 1M7900Ryzen 9 7900AMD 77007700Ryzen 7 7700Ryzen 7600 AMDAMD 7600Ryzen 76003K6K9K12K15KSE +/- 6.89, N = 314144.314044.510277.310240.710175.67963.97962.27957.91. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
OpenBenchmarking.orgH/s, More Is BetterXmrig 6.18.1Variant: Wownero - Hash Count: 1M7900Ryzen 9 7900AMD 77007700Ryzen 7 7700Ryzen 7600 AMDAMD 7600Ryzen 76002K4K6K8K10KMin: 7950.1 / Avg: 7963.87 / Max: 7971.11. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.
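
These are plain command-line Cycles renders; assuming a local copy of one of the benchmark .blend files (scene.blend below is a placeholder), a CPU-only render of a single frame looks like:

    blender -b scene.blend -f 1 -- --cycles-device CPU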

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.4Blend File: Barbershop - Compute: CPU-Only7900Ryzen 9 7900AMD 77007700Ryzen 7 7700Ryzen 7600Ryzen 7600 AMDAMD 760030060090012001500SE +/- 2.65, N = 3723.13723.78973.45983.97989.721282.321284.021284.52
OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.4Blend File: Barbershop - Compute: CPU-Only7900Ryzen 9 7900AMD 77007700Ryzen 7 7700Ryzen 7600Ryzen 7600 AMDAMD 76002004006008001000Min: 1278.74 / Avg: 1284.02 / Max: 1287.02

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgM samples/s, More Is BetterIndigoBench 4.4Acceleration: CPU - Scene: BedroomRyzen 9 790079007700AMD 7700Ryzen 7 7700AMD 7600Ryzen 7600Ryzen 7600 AMD0.82311.64622.46933.29244.1155SE +/- 0.005, N = 33.6583.6582.7012.6992.6682.0962.0712.064
OpenBenchmarking.orgM samples/s, More Is BetterIndigoBench 4.4Acceleration: CPU - Scene: BedroomRyzen 9 790079007700AMD 7700Ryzen 7 7700AMD 7600Ryzen 7600Ryzen 7600 AMD246810Min: 2.06 / Avg: 2.06 / Max: 2.07

Tachyon

This is a test of the threaded Tachyon, a parallel ray-tracing system, measuring the time to ray-trace a sample scene. The sample scene used is the Teapot scene ray-traced to 8K x 8K with 32 samples. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTachyon 0.99.2Total Time7900Ryzen 9 7900AMD 77007700Ryzen 7 7700AMD 7600Ryzen 7600 AMDRyzen 7600306090120150SE +/- 0.64, N = 386.3087.44115.54115.75115.82150.66151.72152.681. (CC) gcc options: -m64 -O3 -fomit-frame-pointer -ffast-math -ltachyon -lm -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterTachyon 0.99.2Total Time7900Ryzen 9 7900AMD 77007700Ryzen 7 7700AMD 7600Ryzen 7600 AMDRyzen 7600306090120150Min: 150.46 / Avg: 151.72 / Max: 152.541. (CC) gcc options: -m64 -O3 -fomit-frame-pointer -ffast-math -ltachyon -lm -lpthread

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.4Blend File: BMW27 - Compute: CPU-Only7900Ryzen 9 7900AMD 77007700Ryzen 7 7700Ryzen 7600Ryzen 7600 AMDAMD 7600306090120150SE +/- 0.20, N = 376.3577.02103.29103.96104.03134.08134.80135.04
OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.4Blend File: BMW27 - Compute: CPU-Only7900Ryzen 9 7900AMD 77007700Ryzen 7 7700Ryzen 7600Ryzen 7600 AMDAMD 7600306090120150Min: 134.59 / Avg: 134.8 / Max: 135.19

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.4Blend File: Classroom - Compute: CPU-OnlyRyzen 9 79007900AMD 77007700Ryzen 7 7700Ryzen 7600 AMDAMD 7600Ryzen 760080160240320400SE +/- 0.20, N = 3202.13202.67271.59272.78273.36356.47357.10357.35
OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.4Blend File: Classroom - Compute: CPU-OnlyRyzen 9 79007900AMD 77007700Ryzen 7 7700Ryzen 7600 AMDAMD 7600Ryzen 760060120180240300Min: 356.16 / Avg: 356.47 / Max: 356.85

Xmrig

Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgH/s, More Is BetterXmrig 6.18.1Variant: Monero - Hash Count: 1MRyzen 9 79007900AMD 7700Ryzen 7 77007700Ryzen 7600Ryzen 7600 AMDAMD 76003K6K9K12K15KSE +/- 32.54, N = 312360.510059.77780.87745.27738.07171.77123.67002.21. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
OpenBenchmarking.orgH/s, More Is BetterXmrig 6.18.1Variant: Monero - Hash Count: 1MRyzen 9 79007900AMD 7700Ryzen 7 77007700Ryzen 7600Ryzen 7600 AMDAMD 76002K4K6K8K10KMin: 7063.2 / Avg: 7123.57 / Max: 7174.81. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRender Ratio, More Is BetterStargate Digital Audio Workstation 22.11.5Sample Rate: 480000 - Buffer Size: 1024Ryzen 9 79007900AMD 77007700Ryzen 7 7700AMD 7600Ryzen 7600 AMDRyzen 76001.18112.36223.54334.72445.9055SE +/- 0.001126, N = 35.2492155.2011374.2289764.2029634.1784612.9813092.9807822.9777731. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions
OpenBenchmarking.orgRender Ratio, More Is BetterStargate Digital Audio Workstation 22.11.5Sample Rate: 480000 - Buffer Size: 1024Ryzen 9 79007900AMD 77007700Ryzen 7 7700AMD 7600Ryzen 7600 AMDRyzen 7600246810Min: 2.98 / Avg: 2.98 / Max: 2.981. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

asmFish

This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNodes/second, More Is BetterasmFish 2018-07-231024 Hash Memory, 26 DepthRyzen 9 79007900AMD 77007700Ryzen 7 7700AMD 7600Ryzen 7600 AMDRyzen 760012M24M36M48M60MSE +/- 333145.62, N = 35636833055350145421269764209159441411532327599423202977632009539
OpenBenchmarking.orgNodes/second, More Is BetterasmFish 2018-07-231024 Hash Memory, 26 DepthRyzen 9 79007900AMD 77007700Ryzen 7 7700AMD 7600Ryzen 7600 AMDRyzen 760010M20M30M40M50MMin: 31421530 / Avg: 32029775.67 / Max: 32569453

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPURyzen 9 79007900AMD 77007700Ryzen 7 7700Ryzen 7600 AMDRyzen 7600AMD 76000.12390.24780.37170.49560.6195SE +/- 0.000941, N = 30.3128260.3131160.4150220.4355140.4368590.5488220.5503330.550724MIN: 0.28MIN: 0.28MIN: 0.39MIN: 0.39MIN: 0.38MIN: 0.52MIN: 0.52MIN: 0.531. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPURyzen 9 79007900AMD 77007700Ryzen 7 7700Ryzen 7600 AMDRyzen 7600AMD 7600246810Min: 0.55 / Avg: 0.55 / Max: 0.551. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.
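
astcenc exposes the presets used in these graphs directly on its command line; a sketch of a thorough-preset LDR compression (file names are placeholders, and release binaries are named per ISA variant such as astcenc-avx2) is:

    astcenc-avx2 -cl input.png output.astc 6x6 -thorough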

OpenBenchmarking.orgMT/s, More Is BetterASTC Encoder 4.0Preset: ExhaustiveRyzen 9 79007900AMD 77007700Ryzen 7 7700Ryzen 7600Ryzen 7600 AMDAMD 76000.26570.53140.79711.06281.3285SE +/- 0.0005, N = 31.18111.18090.87900.87770.87600.68150.67250.67171. (CXX) g++ options: -O3 -flto -pthread
OpenBenchmarking.orgMT/s, More Is BetterASTC Encoder 4.0Preset: ExhaustiveRyzen 9 79007900AMD 77007700Ryzen 7 7700Ryzen 7600Ryzen 7600 AMDAMD 7600246810Min: 0.67 / Avg: 0.67 / Max: 0.671. (CXX) g++ options: -O3 -flto -pthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Face Detection FP16-INT8 - Device: CPURyzen 9 79007900AMD 77007700Ryzen 7 7700Ryzen 7600AMD 7600Ryzen 7600 AMD48121620SE +/- 0.01, N = 318.1318.0713.6813.6113.5710.5210.3410.331. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Face Detection FP16-INT8 - Device: CPURyzen 9 79007900AMD 77007700Ryzen 7 7700Ryzen 7600AMD 7600Ryzen 7600 AMD510152025Min: 10.32 / Avg: 10.33 / Max: 10.341. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

Chaos Group V-RAY

This is a test of Chaos Group's V-RAY benchmark. V-RAY is a commercial renderer that can integrate with various creator software products like SketchUp and 3ds Max. The V-RAY benchmark is standalone and supports CPU and NVIDIA CUDA/RTX based rendering. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgvsamples, More Is BetterChaos Group V-RAY 5.02Mode: CPURyzen 9 79007900AMD 77007700Ryzen 7 7700Ryzen 7600 AMDAMD 7600Ryzen 76005K10K15K20K25KSE +/- 40.13, N = 32121221166157481564015594123521215612090
OpenBenchmarking.orgvsamples, More Is BetterChaos Group V-RAY 5.02Mode: CPURyzen 9 79007900AMD 77007700Ryzen 7 7700Ryzen 7600 AMDAMD 7600Ryzen 76004K8K12K16K20KMin: 12282 / Avg: 12352 / Max: 12421

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMT/s, More Is BetterASTC Encoder 4.0Preset: FastRyzen 9 79007900AMD 77007700Ryzen 7 7700AMD 7600Ryzen 7600 AMDRyzen 760060120180240300SE +/- 0.11, N = 3255.04254.36194.49193.44192.78147.45147.27145.521. (CXX) g++ options: -O3 -flto -pthread
OpenBenchmarking.orgMT/s, More Is BetterASTC Encoder 4.0Preset: FastRyzen 9 79007900AMD 77007700Ryzen 7 7700AMD 7600Ryzen 7600 AMDRyzen 760050100150200250Min: 147.13 / Avg: 147.27 / Max: 147.51. (CXX) g++ options: -O3 -flto -pthread

OpenBenchmarking.orgMT/s, More Is BetterASTC Encoder 4.0Preset: MediumRyzen 9 79007900AMD 77007700Ryzen 7 7700Ryzen 7600 AMDRyzen 7600AMD 760020406080100SE +/- 0.26, N = 389.4589.2967.6967.5567.5251.6851.1551.111. (CXX) g++ options: -O3 -flto -pthread
OpenBenchmarking.orgMT/s, More Is BetterASTC Encoder 4.0Preset: MediumRyzen 9 79007900AMD 77007700Ryzen 7 7700Ryzen 7600 AMDRyzen 7600AMD 760020406080100Min: 51.17 / Avg: 51.68 / Max: 52.051. (CXX) g++ options: -O3 -flto -pthread

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRender Ratio, More Is BetterStargate Digital Audio Workstation 22.11.5Sample Rate: 44100 - Buffer Size: 512Ryzen 9 79007900AMD 7700Ryzen 7 77007700Ryzen 7600AMD 7600Ryzen 7600 AMD1.16792.33583.50374.67165.8395SE +/- 0.002271, N = 35.1907015.1694644.1815694.1555654.1528342.9695582.9683642.9660521. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions
OpenBenchmarking.orgRender Ratio, More Is BetterStargate Digital Audio Workstation 22.11.5Sample Rate: 44100 - Buffer Size: 512Ryzen 9 79007900AMD 7700Ryzen 7 77007700Ryzen 7600AMD 7600Ryzen 7600 AMD246810Min: 2.96 / Avg: 2.97 / Max: 2.971. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMT/s, More Is BetterASTC Encoder 4.0Preset: Thorough7900Ryzen 9 7900Ryzen 7 77007700AMD 7700Ryzen 7600 AMDRyzen 7600AMD 76003691215SE +/- 0.0037, N = 311.203411.20118.44558.42768.42586.47856.47756.40471. (CXX) g++ options: -O3 -flto -pthread
OpenBenchmarking.orgMT/s, More Is BetterASTC Encoder 4.0Preset: Thorough7900Ryzen 9 7900Ryzen 7 77007700AMD 7700Ryzen 7600 AMDRyzen 7600AMD 76003691215Min: 6.47 / Avg: 6.48 / Max: 6.491. (CXX) g++ options: -O3 -flto -pthread

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: fcn-resnet101-11 - Device: CPU - Executor: StandardAMD 77007700Ryzen 7600AMD 76007900Ryzen 9 7900Ryzen 7600 AMDRyzen 7 7700306090120150SE +/- 6.47, N = 121241249898989379711. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: fcn-resnet101-11 - Device: CPU - Executor: StandardAMD 77007700Ryzen 7600AMD 76007900Ryzen 9 7900Ryzen 7600 AMDRyzen 7 770020406080100Min: 53.5 / Avg: 78.79 / Max: 98.51. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

Neural Magic DeepSparse

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream7900Ryzen 9 79007700AMD 7700Ryzen 7 7700Ryzen 7600AMD 7600Ryzen 7600 AMD1428425670SE +/- 0.07, N = 362.1261.8546.0246.0145.6235.7035.6135.60
OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream7900Ryzen 9 79007700AMD 7700Ryzen 7 7700Ryzen 7600AMD 7600Ryzen 7600 AMD1224364860Min: 35.46 / Avg: 35.6 / Max: 35.68

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Weld Porosity Detection FP16 - Device: CPURyzen 9 79007900AMD 77007700Ryzen 7 7700Ryzen 7600Ryzen 7600 AMDAMD 76002004006008001000SE +/- 3.29, N = 3918.15917.78692.33687.93686.56527.78527.63526.471. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Weld Porosity Detection FP16 - Device: CPURyzen 9 79007900AMD 77007700Ryzen 7 7700Ryzen 7600Ryzen 7600 AMDAMD 7600160320480640800Min: 523.88 / Avg: 527.63 / Max: 534.181. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRender Ratio, More Is BetterStargate Digital Audio Workstation 22.11.5Sample Rate: 96000 - Buffer Size: 1024Ryzen 9 79007900AMD 7700Ryzen 7 77007700AMD 7600Ryzen 7600 AMDRyzen 76000.87091.74182.61273.48364.3545SE +/- 0.001525, N = 33.8707273.8673513.1409103.1149243.1034172.2317982.2275092.2202511. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions
OpenBenchmarking.orgRender Ratio, More Is BetterStargate Digital Audio Workstation 22.11.5Sample Rate: 96000 - Buffer Size: 1024Ryzen 9 79007900AMD 7700Ryzen 7 77007700AMD 7600Ryzen 7600 AMDRyzen 7600246810Min: 2.23 / Avg: 2.23 / Max: 2.231. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested or alternatively an allmodconfig for building all possible kernel modules for the build. Learn more via the OpenBenchmarking.org test page.
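
The allmodconfig result boils down to a timed parallel build; from a kernel source tree this is essentially:

    make allmodconfig
    time make -j$(nproc)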

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Linux Kernel Compilation 6.1Build: allmodconfigRyzen 9 79007900AMD 7700Ryzen 7 77007700Ryzen 7600 AMDAMD 7600Ryzen 760030060090012001500SE +/- 6.89, N = 3738.33744.301005.661012.611019.451266.561280.931286.51
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Linux Kernel Compilation 6.1Build: allmodconfigRyzen 9 79007900AMD 7700Ryzen 7 77007700Ryzen 7600 AMDAMD 7600Ryzen 76002004006008001000Min: 1259.54 / Avg: 1266.56 / Max: 1280.34

Appleseed

Appleseed is an open-source production renderer focused on physically-based global illumination rendering, primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterAppleseed 2.0 BetaScene: Disney MaterialRyzen 9 790079007700AMD 7700Ryzen 7 7700AMD 7600Ryzen 7600Ryzen 7600 AMD50100150200250119.05119.53158.29158.42158.99206.46206.77207.29

Aircrack-ng

Aircrack-ng is a tool for assessing WiFi/WLAN network security. Learn more via the OpenBenchmarking.org test page.
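
Aircrack-ng includes a built-in CPU cracking-speed benchmark, which is presumably what this profile exercises; it can be sketched as:

    aircrack-ng -S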

OpenBenchmarking.orgk/s, More Is BetterAircrack-ng 1.77900Ryzen 9 7900AMD 7700Ryzen 7 77007700AMD 7600Ryzen 7600Ryzen 7600 AMD13K26K39K52K65KSE +/- 2.66, N = 362268.6462268.3847545.7346993.7046662.8735844.4435828.1435804.821. (CXX) g++ options: -std=gnu++17 -O3 -fvisibility=hidden -fcommon -rdynamic -lnl-3 -lnl-genl-3 -lpcre -lsqlite3 -lpthread -lz -lssl -lcrypto -lhwloc -ldl -lm -pthread
OpenBenchmarking.orgk/s, More Is BetterAircrack-ng 1.77900Ryzen 9 7900AMD 7700Ryzen 7 77007700AMD 7600Ryzen 7600Ryzen 7600 AMD11K22K33K44K55KMin: 35799.77 / Avg: 35804.82 / Max: 35808.771. (CXX) g++ options: -std=gnu++17 -O3 -fvisibility=hidden -fcommon -rdynamic -lnl-3 -lnl-genl-3 -lpcre -lsqlite3 -lpthread -lz -lssl -lcrypto -lhwloc -ldl -lm -pthread

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRender Ratio, More Is BetterStargate Digital Audio Workstation 22.11.5Sample Rate: 192000 - Buffer Size: 10247900Ryzen 9 7900AMD 7700Ryzen 7 77007700AMD 7600Ryzen 7600Ryzen 7600 AMD0.58221.16441.74662.32882.911SE +/- 0.001409, N = 32.5876192.5739042.0884382.0786132.0733211.4909011.4902001.4882261. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions
OpenBenchmarking.orgRender Ratio, More Is BetterStargate Digital Audio Workstation 22.11.5Sample Rate: 192000 - Buffer Size: 10247900Ryzen 9 7900AMD 7700Ryzen 7 77007700AMD 7600Ryzen 7600Ryzen 7600 AMD246810Min: 1.49 / Avg: 1.49 / Max: 1.491. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 2021.01.31Threads: 24 - Buffer Length: 256 - Filter Length: 57Ryzen 9 79007900AMD 77007700Ryzen 7 7700AMD 7600Ryzen 7600 AMDRyzen 7600200M400M600M800M1000MSE +/- 2020662.71, N = 3103300000010326000007765500007756600007728400005985100005962633335945800001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 2021.01.31Threads: 24 - Buffer Length: 256 - Filter Length: 57Ryzen 9 79007900AMD 77007700Ryzen 7 7700AMD 7600Ryzen 7600 AMDRyzen 7600200M400M600M800M1000MMin: 592230000 / Avg: 596263333.33 / Max: 5985000001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.4Blend File: Pabellon Barcelona - Compute: CPU-OnlyRyzen 9 79007900AMD 7700Ryzen 7 77007700Ryzen 7600 AMDAMD 7600Ryzen 760090180270360450SE +/- 0.21, N = 3249.95249.96333.25334.85335.44432.63433.06433.93
OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.4Blend File: Pabellon Barcelona - Compute: CPU-OnlyRyzen 9 79007900AMD 7700Ryzen 7 77007700Ryzen 7600 AMDAMD 7600Ryzen 760080160240320400Min: 432.37 / Avg: 432.63 / Max: 433.04

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPURyzen 9 79007900AMD 77007700Ryzen 7 7700Ryzen 7600AMD 7600Ryzen 7600 AMD4K8K12K16K20KSE +/- 0.99, N = 316466.7416311.4712608.1412534.5212469.449516.269510.359490.501. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPURyzen 9 79007900AMD 77007700Ryzen 7 7700Ryzen 7600AMD 7600Ryzen 7600 AMD3K6K9K12K15KMin: 9488.78 / Avg: 9490.5 / Max: 9492.21. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Weld Porosity Detection FP16-INT8 - Device: CPURyzen 9 790079007700AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 7600 AMDAMD 7600400800120016002000SE +/- 1.12, N = 31832.771830.421369.351368.951368.901058.541056.821056.401. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Weld Porosity Detection FP16-INT8 - Device: CPURyzen 9 790079007700AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 7600 AMDAMD 760030060090012001500Min: 1054.6 / Avg: 1056.82 / Max: 1058.191. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.4Blend File: Fishy Cat - Compute: CPU-OnlyRyzen 9 79007900AMD 77007700Ryzen 7 7700AMD 7600Ryzen 7600Ryzen 7600 AMD4080120160200SE +/- 0.26, N = 3100.47100.60134.16134.78135.18173.72174.02174.24
OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.4Blend File: Fishy Cat - Compute: CPU-OnlyRyzen 9 79007900AMD 77007700Ryzen 7 7700AMD 7600Ryzen 7600Ryzen 7600 AMD306090120150Min: 173.78 / Avg: 174.24 / Max: 174.69

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgdays/ns, Fewer Is BetterNAMD 2.14ATPase Simulation - 327,506 Atoms7900Ryzen 9 7900AMD 77007700Ryzen 7 7700AMD 7600Ryzen 7600 AMDRyzen 76000.46410.92821.39231.85642.3205SE +/- 0.00775, N = 31.190061.190441.600571.603831.605522.042442.062262.06285
OpenBenchmarking.orgdays/ns, Fewer Is BetterNAMD 2.14ATPase Simulation - 327,506 Atoms7900Ryzen 9 7900AMD 77007700Ryzen 7 7700AMD 7600Ryzen 7600 AMDRyzen 7600246810Min: 2.05 / Avg: 2.06 / Max: 2.08

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 2021.01.31Threads: 16 - Buffer Length: 256 - Filter Length: 57Ryzen 9 79007900Ryzen 7 7700AMD 77007700Ryzen 7600 AMDAMD 7600Ryzen 7600200M400M600M800M1000MSE +/- 86474.15, N = 3103400000010336000007861800007780000007741600005974733335974100005968500001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 2021.01.31Threads: 16 - Buffer Length: 256 - Filter Length: 57Ryzen 9 79007900Ryzen 7 7700AMD 77007700Ryzen 7600 AMDAMD 7600Ryzen 7600200M400M600M800M1000MMin: 597350000 / Avg: 597473333.33 / Max: 5976400001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU7900Ryzen 9 7900AMD 7700Ryzen 7 77007700Ryzen 7600AMD 7600Ryzen 7600 AMD6001200180024003000SE +/- 2.35, N = 31675.221675.602231.812292.332294.352897.052897.082897.71MIN: 1671.1MIN: 1671.36MIN: 2215.43MIN: 2268.98MIN: 2263.65MIN: 2882.96MIN: 2893.38MIN: 2889.511. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU7900Ryzen 9 7900AMD 7700Ryzen 7 77007700Ryzen 7600AMD 7600Ryzen 7600 AMD5001000150020002500Min: 2893.64 / Avg: 2897.71 / Max: 2901.791. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU7900Ryzen 9 7900AMD 7700Ryzen 7 77007700AMD 7600Ryzen 7600 AMDRyzen 76000.59951.1991.79852.3982.9975SE +/- 0.02309, N = 31.542071.544032.334992.452222.502292.573052.577692.66452MIN: 1.47MIN: 1.48MIN: 2.28MIN: 2.28MIN: 2.29MIN: 2.51MIN: 2.49MIN: 2.471. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU7900Ryzen 9 7900AMD 7700Ryzen 7 77007700AMD 7600Ryzen 7600 AMDRyzen 7600246810Min: 2.55 / Avg: 2.58 / Max: 2.621. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.
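
The Ninja build corresponds to a standard out-of-tree LLVM configure-and-build; a sketch from an llvm-project checkout (paths are placeholders):

    cmake -G Ninja -DCMAKE_BUILD_TYPE=Release ../llvm-project/llvm
    time ninja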

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed LLVM Compilation 13.0Build System: Ninja7900Ryzen 9 7900AMD 7700Ryzen 7 77007700Ryzen 7600AMD 7600Ryzen 7600 AMD130260390520650SE +/- 0.25, N = 3352.63353.54475.85487.73489.62602.78608.68609.12
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed LLVM Compilation 13.0Build System: Ninja7900Ryzen 9 7900AMD 7700Ryzen 7 77007700Ryzen 7600AMD 7600Ryzen 7600 AMD110220330440550Min: 608.62 / Avg: 609.12 / Max: 609.42

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRender Ratio, More Is BetterStargate Digital Audio Workstation 22.11.5Sample Rate: 480000 - Buffer Size: 5127900Ryzen 9 7900AMD 7700Ryzen 7 77007700AMD 7600Ryzen 7600 AMDRyzen 76001.1272.2543.3814.5085.635SE +/- 0.002307, N = 35.0088204.9949204.0796294.0622214.0526402.9080292.9059882.9011611. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions
OpenBenchmarking.orgRender Ratio, More Is BetterStargate Digital Audio Workstation 22.11.5Sample Rate: 480000 - Buffer Size: 5127900Ryzen 9 7900AMD 7700Ryzen 7 77007700AMD 7600Ryzen 7600 AMDRyzen 7600246810Min: 2.9 / Avg: 2.91 / Max: 2.911. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPURyzen 9 79007900AMD 7700Ryzen 7 77007700Ryzen 7600AMD 7600Ryzen 7600 AMD6001200180024003000SE +/- 2.85, N = 31677.701680.332223.942272.722285.982888.442892.902895.71MIN: 1673.08MIN: 1675.48MIN: 2210.21MIN: 2247.96MIN: 2258.7MIN: 2854.93MIN: 2888.21MIN: 2880.281. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPURyzen 9 79007900AMD 7700Ryzen 7 77007700Ryzen 7600AMD 7600Ryzen 7600 AMD5001000150020002500Min: 2891.5 / Avg: 2895.71 / Max: 2901.141. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU7900Ryzen 9 7900AMD 77007700Ryzen 7 7700AMD 7600Ryzen 7600Ryzen 7600 AMD6001200180024003000SE +/- 0.92, N = 31678.881683.032225.662282.732289.262893.482896.002896.31MIN: 1673.29MIN: 1677.55MIN: 2213.52MIN: 2256.18MIN: 2263.84MIN: 2889.6MIN: 2885.46MIN: 2889.011. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU7900Ryzen 9 7900AMD 77007700Ryzen 7 7700AMD 7600Ryzen 7600Ryzen 7600 AMD5001000150020002500Min: 2894.5 / Avg: 2896.31 / Max: 2897.511. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Face Detection FP16 - Device: CPURyzen 9 79007900AMD 77007700Ryzen 7 7700Ryzen 7600Ryzen 7600 AMDAMD 76003691215SE +/- 0.01, N = 39.199.196.936.896.875.365.345.331. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Face Detection FP16 - Device: CPURyzen 9 79007900AMD 77007700Ryzen 7 7700Ryzen 7600Ryzen 7600 AMDAMD 76003691215Min: 5.32 / Avg: 5.34 / Max: 5.361. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Vehicle Detection FP16-INT8 - Device: CPU7900Ryzen 9 7900AMD 7700Ryzen 7 77007700Ryzen 7600AMD 7600Ryzen 7600 AMD30060090012001500SE +/- 5.59, N = 31164.421163.45892.32879.31878.78691.78688.76676.761. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Vehicle Detection FP16-INT8 - Device: CPU7900Ryzen 9 7900AMD 7700Ryzen 7 77007700Ryzen 7600AMD 7600Ryzen 7600 AMD2004006008001000Min: 669.26 / Avg: 676.76 / Max: 687.691. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

Neural Magic DeepSparse

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-Stream7900Ryzen 9 7900AMD 7700Ryzen 7 77007700Ryzen 7600 AMDAMD 7600Ryzen 760020406080100SE +/- 0.26, N = 385.1985.1662.9562.7562.3049.7249.6549.63
OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-Stream7900Ryzen 9 7900AMD 7700Ryzen 7 77007700Ryzen 7600 AMDAMD 7600Ryzen 76001632486480Min: 49.45 / Avg: 49.72 / Max: 50.24

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0, Tuning: 1 - Input: Bosphorus 4K (Frames Per Second, more is better):
Ryzen 9 7900: 3.94 | 7900: 3.94 | AMD 7700: 2.98 | Ryzen 7 7700: 2.96 | 7700: 2.96 | AMD 7600: 2.31 | Ryzen 7600: 2.30 | Ryzen 7600 AMD: 2.30

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1, Test: OpenMP LavaMD (Seconds, fewer is better):
7900: 126.28 | Ryzen 9 7900: 127.05 | AMD 7700: 165.87 | 7700: 166.07 | Ryzen 7 7700: 167.47 | Ryzen 7600: 213.01 | AMD 7600: 214.15 | Ryzen 7600 AMD: 215.76

x265

This is a simple test of the x265 encoder run on the CPU, with 1080p and 4K input options, to measure H.265 video encode performance. Learn more via the OpenBenchmarking.org test page.

x265 3.4, Video Input: Bosphorus 4K (Frames Per Second, more is better):
7900: 28.04 | Ryzen 9 7900: 27.97 | Ryzen 7 7700: 17.97 | Ryzen 7600 AMD: 17.90 | Ryzen 7600: 17.77 | AMD 7700: 17.68 | AMD 7600: 17.19 | 7700: 16.42

OpenVINO

OpenVINO 2022.3, Model: Person Detection FP16 - Device: CPU (FPS, more is better):
7900: 5.19 | Ryzen 9 7900: 5.16 | 7700: 3.93 | AMD 7700: 3.92 | Ryzen 7 7700: 3.87 | AMD 7600: 3.07 | Ryzen 7600: 3.06 | Ryzen 7600 AMD: 3.04

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package offering "a unique and carefully curated experience", with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

Stargate Digital Audio Workstation 22.11.5, Sample Rate: 96000 - Buffer Size: 512 (Render Ratio, more is better):
7900: 3.670094 | Ryzen 9 7900: 3.665911 | AMD 7700: 3.000275 | 7700: 2.994469 | Ryzen 7 7700: 2.986516 | Ryzen 7600 AMD: 2.164760 | AMD 7600: 2.159707 | Ryzen 7600: 2.159067

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0, Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better):
Ryzen 9 7900: 855.55 | 7900: 855.91 | AMD 7700: 1113.96 | 7700: 1152.24 | Ryzen 7 7700: 1155.04 | Ryzen 7600 AMD: 1450.48 | Ryzen 7600: 1450.82 | AMD 7600: 1453.82

Appleseed

Appleseed is an open-source, physically-based global illumination production renderer primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

Appleseed 2.0 Beta, Scene: Emily (Seconds, fewer is better):
Ryzen 9 7900: 196.70 | 7900: 197.26 | AMD 7700: 261.24 | Ryzen 7 7700: 262.06 | 7700: 262.66 | AMD 7600: 331.38 | Ryzen 7600 AMD: 332.51 | Ryzen 7600: 334.20

oneDNN

oneDNN 3.0, Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better):
Ryzen 9 7900: 854.95 | 7900: 856.02 | AMD 7700: 1110.93 | 7700: 1149.16 | Ryzen 7 7700: 1156.11 | AMD 7600: 1450.63 | Ryzen 7600: 1450.70 | Ryzen 7600 AMD: 1452.40

oneDNN 3.0, Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
Ryzen 9 7900: 855.54 | 7900: 858.83 | AMD 7700: 1112.46 | 7700: 1154.55 | Ryzen 7 7700: 1155.40 | AMD 7600: 1444.74 | Ryzen 7600: 1446.49 | Ryzen 7600 AMD: 1452.69

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 23Jun2022, Model: Rhodopsin Protein (ns/day, more is better):
7900: 11.394 | Ryzen 9 7900: 11.211 | AMD 7700: 8.592 | 7700: 8.560 | Ryzen 7 7700: 8.547 | Ryzen 7600: 6.757 | AMD 7600: 6.731 | Ryzen 7600 AMD: 6.711

SVT-HEVC

SVT-HEVC 1.5.0, Tuning: 1 - Input: Bosphorus 1080p (Frames Per Second, more is better):
Ryzen 9 7900: 15.65 | 7900: 15.62 | AMD 7700: 11.93 | Ryzen 7 7700: 11.85 | 7700: 11.84 | AMD 7600: 9.23 | Ryzen 7600 AMD: 9.23 | Ryzen 7600: 9.22

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1, Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (items/sec, more is better):
7900: 80.95 | Ryzen 9 7900: 80.89 | AMD 7700: 58.04 | 7700: 57.45 | Ryzen 7 7700: 57.39 | Ryzen 7600 AMD: 48.09 | Ryzen 7600: 47.93 | AMD 7600: 47.78

Stargate Digital Audio Workstation

Stargate Digital Audio Workstation 22.11.5, Sample Rate: 192000 - Buffer Size: 512 (Render Ratio, more is better):
Ryzen 9 7900: 2.411040 | 7900: 2.401755 | AMD 7700: 1.983660 | Ryzen 7 7700: 1.975250 | 7700: 1.974448 | AMD 7600: 1.432454 | Ryzen 7600: 1.431009 | Ryzen 7600 AMD: 1.426898

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4, Test / Class: SP.B (Total Mop/s, more is better):
Ryzen 9 7900: 20602.99 | 7900: 20541.04 | AMD 7700: 12368.45 | AMD 7600: 12334.26 | Ryzen 7600 AMD: 12327.50 | Ryzen 7600: 12314.21 | Ryzen 7 7700: 12239.64 | 7700: 12225.69

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1, Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (ms/batch, fewer is better):
7900: 126.12 | Ryzen 9 7900: 126.52 | AMD 7700: 163.05 | 7700: 163.45 | Ryzen 7 7700: 164.06 | Ryzen 7600: 212.11 | AMD 7600: 212.12 | Ryzen 7600 AMD: 212.24

Neural Magic DeepSparse 1.1, Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (items/sec, more is better):
7900: 7.9284 | Ryzen 9 7900: 7.9037 | AMD 7700: 6.1330 | 7700: 6.1178 | Ryzen 7 7700: 6.0952 | Ryzen 7600: 4.7145 | AMD 7600: 4.7141 | Ryzen 7600 AMD: 4.7116

OpenVINO

OpenVINO 2022.3, Model: Person Detection FP32 - Device: CPU (FPS, more is better):
7900: 5.07 | Ryzen 9 7900: 5.02 | Ryzen 7 7700: 3.85 | 7700: 3.85 | AMD 7700: 3.84 | AMD 7600: 3.06 | Ryzen 7600 AMD: 3.05 | Ryzen 7600: 3.02

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1, Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better):
Ryzen 9 7900: 126.31 | 7900: 126.34 | AMD 7700: 162.58 | Ryzen 7 7700: 164.29 | 7700: 164.38 | Ryzen 7600 AMD: 211.80 | AMD 7600: 211.97 | Ryzen 7600: 212.01

Neural Magic DeepSparse 1.1, Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (items/sec, more is better):
Ryzen 9 7900: 7.9169 | 7900: 7.9151 | AMD 7700: 6.1507 | Ryzen 7 7700: 6.0866 | 7700: 6.0832 | Ryzen 7600 AMD: 4.7213 | AMD 7600: 4.7175 | Ryzen 7600: 4.7167

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729, Target: CPU - Model: efficientnet-b0 (ms, fewer is better):
AMD 7700: 2.50 | Ryzen 7600 AMD: 2.69 | Ryzen 7 7700: 2.70 | AMD 7600: 2.72 | Ryzen 7600: 2.86 | 7700: 2.91 | Ryzen 9 7900: 4.18 | 7900: 4.19

OpenVINO

OpenVINO 2022.3, Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (FPS, more is better):
Ryzen 9 7900: 27522.21 | 7900: 27518.78 | Ryzen 7 7700: 21184.20 | 7700: 21050.50 | AMD 7700: 20765.76 | Ryzen 7600 AMD: 16510.73 | AMD 7600: 16444.44 | Ryzen 7600: 16423.77

oneDNN

oneDNN 3.0, Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
7900: 0.188795 | Ryzen 9 7900: 0.189451 | AMD 7700: 0.241004 | 7700: 0.250483 | Ryzen 7 7700: 0.251777 | Ryzen 7600: 0.312250 | Ryzen 7600 AMD: 0.313129 | AMD 7600: 0.314764

oneDNN 3.0, Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
Ryzen 9 7900: 0.625231 | 7900: 0.626177 | AMD 7700: 0.794990 | 7700: 0.810815 | Ryzen 7 7700: 0.811090 | AMD 7600: 1.036130 | Ryzen 7600: 1.039730 | Ryzen 7600 AMD: 1.039960

Timed MPlayer Compilation

This test times how long it takes to build the MPlayer open-source media player program. Learn more via the OpenBenchmarking.org test page.

Timed MPlayer Compilation 1.5, Time To Compile (Seconds, fewer is better):
Ryzen 9 7900: 21.08 | 7900: 21.14 | Ryzen 7 7700: 27.56 | AMD 7700: 27.93 | 7700: 28.23 | Ryzen 7600: 34.92 | Ryzen 7600 AMD: 35.06 | AMD 7600: 35.07

Primesieve

Primesieve generates prime numbers using a highly optimized sieve of Eratosthenes implementation. Primesieve primarily benchmarks the CPU's L1/L2 cache performance. Learn more via the OpenBenchmarking.org test page.
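
As a rough illustration of the algorithm being exercised (primesieve itself uses a far more cache-friendly segmented sieve, which is exactly what stresses L1/L2 here), a basic sieve of Eratosthenes looks like this:

    # Plain sieve of Eratosthenes for counting primes up to a limit.
    # Illustrative only; primesieve's segmented implementation is much faster.
    def count_primes(limit: int) -> int:
        is_prime = bytearray([1]) * (limit + 1)
        is_prime[0:2] = b"\x00\x00"                  # 0 and 1 are not prime
        p = 2
        while p * p <= limit:
            if is_prime[p]:
                # mark every multiple of p from p*p onward as composite
                is_prime[p * p :: p] = bytearray(len(range(p * p, limit + 1, p)))
            p += 1
        return sum(is_prime)

    print(count_primes(10**6))                       # 78498 primes below 1e6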

Primesieve 8.0, Length: 1e13 (Seconds, fewer is better):
7900: 122.56 | Ryzen 9 7900: 122.64 | AMD 7700: 160.12 | 7700: 161.02 | Ryzen 7 7700: 161.06 | Ryzen 7600 AMD: 203.19 | AMD 7600: 203.44 | Ryzen 7600: 203.79

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 13.0, Build System: Unix Makefiles (Seconds, fewer is better):
Ryzen 9 7900: 374.36 | 7900: 375.35 | AMD 7700: 495.07 | Ryzen 7 7700: 503.16 | 7700: 511.00 | Ryzen 7600 AMD: 621.55 | Ryzen 7600: 621.82 | AMD 7600: 622.39

Primesieve

Primesieve 8.0, Length: 1e12 (Seconds, fewer is better):
7900: 10.06 | Ryzen 9 7900: 10.07 | AMD 7700: 13.08 | Ryzen 7 7700: 13.17 | 7700: 13.20 | AMD 7600: 16.69 | Ryzen 7600 AMD: 16.71 | Ryzen 7600: 16.72

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested or alternatively an allmodconfig for building all possible kernel modules for the build. Learn more via the OpenBenchmarking.org test page.
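
In essence the defconfig run is just a timed default-configuration build; a rough sketch of the idea follows (the source path and job count are illustrative, and the test profile handles the real setup and repetition):

    # Rough sketch of timing a defconfig kernel build.
    # Assumes a kernel source tree at ./linux and GNU make on PATH.
    import os
    import subprocess
    import time

    src = "./linux"                                   # placeholder source path
    subprocess.run(["make", "defconfig"], cwd=src, check=True)

    start = time.perf_counter()
    subprocess.run(["make", f"-j{os.cpu_count()}"], cwd=src, check=True)
    print(f"Time To Compile: {time.perf_counter() - start:.2f} seconds")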

Timed Linux Kernel Compilation 6.1, Build: defconfig (Seconds, fewer is better):
Ryzen 9 7900: 58.80 | 7900: 59.25 | AMD 7700: 77.66 | Ryzen 7 7700: 77.88 | 7700: 78.74 | Ryzen 7600 AMD: 96.22 | Ryzen 7600: 96.30 | AMD 7600: 97.40

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code with Cargo-like features. Learn more via the OpenBenchmarking.org test page.

Build2 0.13, Time To Compile (Seconds, fewer is better):
7900: 69.91 | Ryzen 9 7900: 70.29 | AMD 7700: 90.49 | 7700: 92.71 | Ryzen 7 7700: 93.34 | AMD 7600: 113.89 | Ryzen 7600 AMD: 114.28 | Ryzen 7600: 115.54

Rodinia

Rodinia 3.1, Test: OpenMP CFD Solver (Seconds, fewer is better):
Ryzen 9 7900: 11.01 | 7900: 11.03 | AMD 7700: 13.97 | 7700: 14.56 | Ryzen 7 7700: 14.60 | AMD 7600: 18.12 | Ryzen 7600 AMD: 18.16 | Ryzen 7600: 18.19

NAS Parallel Benchmarks

NAS Parallel Benchmarks 3.4, Test / Class: BT.C (Total Mop/s, more is better):
7900: 43264.20 | Ryzen 9 7900: 42376.80 | 7700: 26811.90 | AMD 7700: 26715.20 | Ryzen 7 7700: 26612.66 | Ryzen 7600 AMD: 26610.53 | Ryzen 7600: 26385.90 | AMD 7600: 26241.32

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.
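
The integrated benchmark is 7-Zip's own "b" command; a minimal way to invoke it (assuming the 7z binary from p7zip is installed) is:

    # Runs 7-Zip's built-in benchmark and prints its report, which contains
    # the compressing/decompressing MIPS ratings summarized below.
    import subprocess

    result = subprocess.run(["7z", "b"], capture_output=True, text=True, check=True)
    print(result.stdout)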

7-Zip Compression 22.01, Test: Compression Rating (MIPS, more is better):
7900: 148245 | Ryzen 9 7900: 148210 | AMD 7700: 112585 | Ryzen 7 7700: 111375 | 7700: 110914 | Ryzen 7600: 90349 | AMD 7600: 90334 | Ryzen 7600 AMD: 90178

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.

GROMACS 2022.1, Implementation: MPI CPU - Input: water_GMX50_bare (Ns Per Day, more is better):
Ryzen 9 7900: 2.020 | 7900: 2.018 | AMD 7700: 1.480 | 7700: 1.468 | Ryzen 7 7700: 1.465 | Ryzen 7600: 1.237 | AMD 7600: 1.234 | Ryzen 7600 AMD: 1.231

OpenVINO

OpenVINO 2022.3, Model: Machine Translation EN To DE FP16 - Device: CPU (FPS, more is better):
7900: 98.78 | Ryzen 9 7900: 97.49 | 7700: 80.70 | Ryzen 7 7700: 79.76 | AMD 7700: 73.65 | Ryzen 7600: 62.24 | AMD 7600: 60.24 | Ryzen 7600 AMD: 60.22

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3, Tuning: Visual Quality Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better):
Ryzen 9 7900: 294.30 | 7900: 293.13 | AMD 7700: 209.43 | 7700: 207.45 | Ryzen 7 7700: 206.90 | Ryzen 7600 AMD: 181.36 | AMD 7600: 181.12 | Ryzen 7600: 180.45

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.
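
The python_startup benchmark measures how quickly a fresh interpreter starts and exits; in spirit it is equivalent to timing "python -c pass" repeatedly, as in this rough sketch:

    # Rough equivalent of python_startup: average wall time to launch an
    # interpreter that does nothing and exits.
    import subprocess
    import sys
    import time

    runs = 50
    start = time.perf_counter()
    for _ in range(runs):
        subprocess.run([sys.executable, "-c", "pass"], check=True)
    print(f"python_startup: {(time.perf_counter() - start) / runs * 1000:.2f} ms")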

PyPerformance 1.0.0, Benchmark: python_startup (Milliseconds, fewer is better):
Ryzen 9 7900: 4.45 | 7900: 4.46 | 7700: 4.60 | Ryzen 7 7700: 4.61 | Ryzen 7600 AMD: 4.75 | AMD 7600: 4.75 | Ryzen 7600: 4.76 | AMD 7700: 7.24

Timed Godot Game Engine Compilation

This test times how long it takes to compile the Godot Game Engine. Godot is a popular, open-source, cross-platform 2D/3D game engine and is built using the SCons build system and targeting the X11 platform. Learn more via the OpenBenchmarking.org test page.

Timed Godot Game Engine Compilation 3.2.3, Time To Compile (Seconds, fewer is better):
7900: 69.53 | Ryzen 9 7900: 69.85 | AMD 7700: 90.79 | 7700: 91.23 | Ryzen 7 7700: 91.41 | AMD 7600: 110.91 | Ryzen 7600 AMD: 111.95 | Ryzen 7600: 112.85

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

Timed FFmpeg Compilation 4.4, Time To Compile (Seconds, fewer is better):
Ryzen 9 7900: 29.17 | 7900: 29.49 | AMD 7700: 38.28 | Ryzen 7 7700: 38.41 | 7700: 38.47 | Ryzen 7600 AMD: 46.85 | Ryzen 7600: 47.17 | AMD 7600: 47.28

SVT-HEVC

SVT-HEVC 1.5.0, Tuning: 7 - Input: Bosphorus 4K (Frames Per Second, more is better):
7900: 70.47 | Ryzen 9 7900: 69.91 | AMD 7700: 54.84 | 7700: 54.68 | Ryzen 7 7700: 54.62 | Ryzen 7600 AMD: 44.49 | AMD 7600: 43.73 | Ryzen 7600: 43.56

oneDNN

oneDNN 3.0, Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, fewer is better):
7900: 2.50650 | Ryzen 9 7900: 2.51776 | AMD 7700: 3.10451 | Ryzen 7 7700: 3.24532 | 7700: 3.25358 | AMD 7600: 4.02215 | Ryzen 7600 AMD: 4.02941 | Ryzen 7600: 4.03751

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.
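
A FIR filter is a sliding dot product of the sample stream with a fixed set of taps; this test chains five such filters and reports sample throughput. The NumPy sketch below only illustrates the operation being measured; it is not a GNU Radio flowgraph and its absolute numbers are not comparable to the VOLK-accelerated blocks.

    # Conceptual illustration of chaining five FIR filters over a float32
    # sample stream and reporting throughput in MiB/s.
    import time
    import numpy as np

    samples = np.random.randn(2_000_000).astype(np.float32)
    taps = np.ones(64, dtype=np.float32) / 64         # simple 64-tap filter

    start = time.perf_counter()
    out = samples
    for _ in range(5):                                # five back-to-back FIR filters
        out = np.convolve(out, taps, mode="same")
    elapsed = time.perf_counter() - start
    print(f"{samples.nbytes * 5 / elapsed / 2**20:.1f} MiB/s")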

GNU Radio 3.10.1.1, Test: Five Back to Back FIR Filters (MiB/s, more is better):
AMD 7700: 2128.8 | 7700: 2115.2 | Ryzen 7 7700: 2004.4 | Ryzen 7600 AMD: 1934.1 | Ryzen 7600: 1923.9 | AMD 7600: 1862.7 | 7900: 1430.4 | Ryzen 9 7900: 1322.9

SVT-VP9

SVT-VP9 0.3, Tuning: Visual Quality Optimized - Input: Bosphorus 4K (Frames Per Second, more is better):
7900: 85.60 | Ryzen 9 7900: 84.90 | AMD 7700: 64.52 | Ryzen 7 7700: 64.46 | 7700: 64.38 | Ryzen 7600 AMD: 53.52 | Ryzen 7600: 53.35 | AMD 7600: 53.26

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.
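
A minimal CPU inference loop with the ONNX Runtime Python API looks roughly like the sketch below; the model path and input shape are placeholders, and the test profile's own harness is what produces the inferences-per-minute figures.

    # Minimal ONNX Runtime CPU inference sketch; "model.onnx" stands in for a
    # model such as yolov4 or ArcFace ResNet-100 from the ONNX Model Zoo.
    import time
    import numpy as np
    import onnxruntime as ort

    sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
    inp = sess.get_inputs()[0]
    data = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder shape

    runs = 60
    start = time.perf_counter()
    for _ in range(runs):
        sess.run(None, {inp.name: data})
    print(f"{runs / (time.perf_counter() - start) * 60:.0f} inferences per minute")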

ONNX Runtime 1.11, Model: yolov4 - Device: CPU - Executor: Standard (Inferences Per Minute, more is better):
AMD 7700: 827 | 7900: 694 | Ryzen 9 7900: 693 | AMD 7600: 659 | Ryzen 7600: 656 | Ryzen 7 7700: 561 | Ryzen 7600 AMD: 546 | 7700: 515

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU threaded version for processor benchmarking, not a GPU-accelerated build. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1, Model: SqueezeNetV1.0 (ms, fewer is better):
AMD 7700: 2.420 | AMD 7600: 2.434 | Ryzen 7 7700: 2.557 | 7700: 2.559 | Ryzen 7600 AMD: 2.583 | Ryzen 7600: 2.618 | 7900: 3.771 | Ryzen 9 7900: 3.863

SVT-HEVC

SVT-HEVC 1.5.0, Tuning: 7 - Input: Bosphorus 1080p (Frames Per Second, more is better):
Ryzen 9 7900: 218.50 | 7900: 217.47 | AMD 7700: 174.52 | 7700: 172.81 | Ryzen 7 7700: 172.71 | Ryzen 7600: 137.30 | AMD 7600: 137.14 | Ryzen 7600 AMD: 137.00

Timed Mesa Compilation

This test profile times how long it takes to compile Mesa with Meson/Ninja. To minimize build dependencies and avoid versioning conflicts, this is just the core Mesa build without LLVM or the extra Gallium3D/Mesa drivers enabled. Learn more via the OpenBenchmarking.org test page.

Timed Mesa Compilation 21.0, Time To Compile (Seconds, fewer is better):
7900: 28.88 | Ryzen 9 7900: 29.01 | AMD 7700: 37.68 | 7700: 38.21 | Ryzen 7 7700: 38.25 | Ryzen 7600 AMD: 45.92 | AMD 7600: 45.97 | Ryzen 7600: 46.05

Mobile Neural Network

Mobile Neural Network 2.1, Model: inception-v3 (ms, fewer is better):
AMD 7600: 14.12 | AMD 7700: 14.60 | Ryzen 7 7700: 15.37 | 7700: 15.51 | Ryzen 7600 AMD: 16.39 | Ryzen 7600: 16.87 | 7900: 20.92 | Ryzen 9 7900: 22.47

OpenVINO

OpenVINO 2022.3, Model: Person Vehicle Bike Detection FP16 - Device: CPU (FPS, more is better):
Ryzen 9 7900: 1102.44 | 7900: 1102.44 | AMD 7700: 839.03 | Ryzen 7 7700: 834.42 | 7700: 822.33 | Ryzen 7600 AMD: 705.10 | AMD 7600: 699.47 | Ryzen 7600: 695.35

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.
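
The encoder speed setting trades encode time for compression effort (0 is the slowest, most exhaustive mode). A rough way to reproduce the two configurations used here, assuming the avifenc binary is on PATH (the exact flag spelling should be checked against avifenc --help):

    # Encode a JPEG to AVIF at the two speed levels benchmarked here.
    # "input.jpg" is a placeholder source image.
    import subprocess
    import time

    for speed in (6, 0):                              # 6 = faster, 0 = slowest/best
        start = time.perf_counter()
        subprocess.run(["avifenc", "--speed", str(speed), "input.jpg",
                        f"out_s{speed}.avif"], check=True)
        print(f"speed {speed}: {time.perf_counter() - start:.2f} s")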

libavif avifenc 0.11, Encoder Speed: 6 (Seconds, fewer is better):
7900: 4.240 | Ryzen 9 7900: 4.299 | AMD 7700: 5.288 | 7700: 5.323 | Ryzen 7 7700: 5.384 | Ryzen 7600: 6.653 | Ryzen 7600 AMD: 6.659 | AMD 7600: 6.700

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3, Algorithm: Myriad-Groestl (kH/s, more is better):
7900: 52490 | Ryzen 9 7900: 49110 | AMD 7700: 43190 | 7700: 43140 | Ryzen 7 7700: 43080 | Ryzen 7600 AMD: 33530 | AMD 7600: 33340 | Ryzen 7600: 33330

x264

This is a multi-threaded test of the x264 video encoder run on the CPU with a choice of 1080p or 4K video input. Learn more via the OpenBenchmarking.org test page.
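
A representative software encode with the x264 CLI is sketched below; the input file name and preset are assumptions for illustration, while the test profile feeds the Bosphorus sample clip and reads the frames-per-second figure from the encoder's own summary line.

    # Illustrative x264 CPU encode; Bosphorus_1080p.y4m is a placeholder
    # for the raw test clip and the preset is an assumption.
    import subprocess

    subprocess.run(
        ["x264", "--preset", "medium", "--output", "out.264", "Bosphorus_1080p.y4m"],
        check=True,
    )
    # x264 ends with a line like "encoded N frames, X fps", the metric shown here.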

x264 2022-02-22, Video Input: Bosphorus 4K (Frames Per Second, more is better):
7900: 47.13 | Ryzen 9 7900: 47.08 | AMD 7700: 37.39 | Ryzen 7 7700: 37.27 | 7700: 37.10 | Ryzen 7600 AMD: 31.89 | Ryzen 7600: 30.21 | AMD 7600: 30.00

oneDNN

oneDNN 3.0, Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better):
7900: 3.42107 | AMD 7700: 3.89095 | 7700: 3.90923 | Ryzen 7 7700: 3.91816 | Ryzen 9 7900: 3.97048 | AMD 7600: 5.33578 | Ryzen 7600: 5.34062 | Ryzen 7600 AMD: 5.34707

oneDNN 3.0, Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, fewer is better):
7900: 3.49424 | Ryzen 9 7900: 3.49499 | AMD 7700: 4.28496 | 7700: 4.37736 | Ryzen 7 7700: 4.39946 | AMD 7600: 5.41634 | Ryzen 7600 AMD: 5.43706 | Ryzen 7600: 5.45728

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenFOAM 10, Input: drivaerFastback, Small Mesh Size - Execution Time (Seconds, fewer is better):
Ryzen 9 7900: 176.07 | 7900: 176.83 | AMD 7700: 251.62 | Ryzen 7 7700: 254.33 | 7700: 254.50 | Ryzen 7600 AMD: 273.74 | AMD 7600: 274.56 | Ryzen 7600: 274.56

SVT-VP9

SVT-VP9 0.3, Tuning: VMAF Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better):
7900: 335.22 | Ryzen 9 7900: 334.75 | AMD 7700: 246.55 | 7700: 244.06 | Ryzen 7 7700: 243.98 | Ryzen 7600 AMD: 220.17 | AMD 7600: 215.04 | Ryzen 7600: 215.01

SVT-VP9 0.3, Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better):
7900: 345.40 | Ryzen 9 7900: 344.81 | AMD 7700: 261.80 | 7700: 254.60 | Ryzen 7 7700: 252.79 | Ryzen 7600 AMD: 227.72 | AMD 7600: 226.78 | Ryzen 7600: 222.08

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1, Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (ms/batch, fewer is better):
7900: 7.5604 | Ryzen 9 7900: 7.6063 | AMD 7700: 7.9751 | 7700: 8.0274 | Ryzen 7 7700: 8.0301 | Ryzen 7600: 11.6917 | Ryzen 7600 AMD: 11.7158 | AMD 7600: 11.7273

Neural Magic DeepSparse 1.1, Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (items/sec, more is better):
7900: 132.18 | Ryzen 9 7900: 131.38 | AMD 7700: 125.30 | 7700: 124.49 | Ryzen 7 7700: 124.44 | Ryzen 7600: 85.48 | Ryzen 7600 AMD: 85.31 | AMD 7600: 85.23

x264

x264 2022-02-22, Video Input: Bosphorus 1080p (Frames Per Second, more is better):
Ryzen 9 7900: 208.22 | 7900: 204.59 | Ryzen 7 7700: 168.28 | 7700: 167.51 | AMD 7700: 166.27 | Ryzen 7600 AMD: 136.26 | AMD 7600: 134.80 | Ryzen 7600: 134.66

oneDNN

oneDNN 3.0, Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
Ryzen 9 7900: 0.880093 | 7900: 0.881827 | AMD 7700: 0.989794 | 7700: 0.999360 | Ryzen 7 7700: 1.002700 | Ryzen 7600: 1.353100 | Ryzen 7600 AMD: 1.355480 | AMD 7600: 1.360780

Appleseed

Appleseed 2.0 Beta, Scene: Material Tester (Seconds, fewer is better):
7900: 123.05 | Ryzen 9 7900: 124.04 | AMD 7700: 144.68 | 7700: 145.29 | Ryzen 7 7700: 145.39 | Ryzen 7600 AMD: 188.46 | AMD 7600: 188.62 | Ryzen 7600: 189.54

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.1, Video Input: Bosphorus 4K - Video Preset: Medium (Frames Per Second, more is better):
7900: 13.97 | Ryzen 9 7900: 13.89 | AMD 7700: 11.36 | Ryzen 7 7700: 11.33 | 7700: 11.32 | Ryzen 7600 AMD: 9.12 | AMD 7600: 9.10 | Ryzen 7600: 9.09

SVT-HEVC

SVT-HEVC 1.5.0, Tuning: 10 - Input: Bosphorus 1080p (Frames Per Second, more is better):
7900: 439.56 | Ryzen 9 7900: 437.96 | AMD 7700: 362.54 | Ryzen 7 7700: 360.58 | 7700: 359.07 | Ryzen 7600 AMD: 286.86 | AMD 7600: 286.81 | Ryzen 7600: 286.26

oneDNN

oneDNN 3.0, Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better):
Ryzen 9 7900: 2.16726 | 7900: 2.26091 | AMD 7700: 2.46275 | 7700: 2.50232 | Ryzen 7 7700: 2.50718 | AMD 7600: 3.30342 | Ryzen 7600 AMD: 3.30514 | Ryzen 7600: 3.30903

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1, Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (ms/batch, fewer is better):
7900: 15.30 | Ryzen 9 7900: 15.36 | AMD 7700: 18.27 | Ryzen 7 7700: 18.31 | 7700: 18.42 | Ryzen 7600: 23.33 | AMD 7600: 23.35 | Ryzen 7600 AMD: 23.36

Neural Magic DeepSparse 1.1, Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (items/sec, more is better):
7900: 65.31 | Ryzen 9 7900: 65.06 | AMD 7700: 54.71 | Ryzen 7 7700: 54.59 | 7700: 54.26 | Ryzen 7600: 42.86 | AMD 7600: 42.82 | Ryzen 7600 AMD: 42.79

NCNN

NCNN 20220729, Target: CPU - Model: googlenet (ms, fewer is better):
AMD 7700: 5.30 | Ryzen 7600 AMD: 5.64 | Ryzen 7 7700: 5.69 | AMD 7600: 5.73 | Ryzen 7600: 5.95 | 7700: 6.70 | 7900: 8.01 | Ryzen 9 7900: 8.07

ONNX Runtime

ONNX Runtime 1.11, Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel (Inferences Per Minute, more is better):
Ryzen 9 7900: 1936 | 7900: 1919 | 7700: 1670 | Ryzen 7 7700: 1669 | AMD 7700: 1640 | Ryzen 7600: 1318 | Ryzen 7600 AMD: 1304 | AMD 7600: 1273

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.
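
As a point of reference, an encode similar to this test can be launched with the SvtAv1EncApp binary; the sketch below wraps it in Python, with the input file name assumed and the preset matching the Preset 8 run charted below.

    import subprocess

    # -i / -b / --preset are standard SvtAv1EncApp options; the input file name is a placeholder.
    subprocess.run(
        ["SvtAv1EncApp", "--preset", "8",
         "-i", "Bosphorus_3840x2160.y4m", "-b", "bosphorus_4k.ivf"],
        check=True,
    )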

OpenBenchmarking.org result summary - SVT-AV1 1.4 - Encoder Mode: Preset 8 - Input: Bosphorus 4K - Frames Per Second, more is better: Ryzen 9 7900: 56.40; 7900: 56.29; AMD 7700: 45.91; Ryzen 7 7700: 45.54; 7700: 45.47; Ryzen 7600 AMD: 37.28; Ryzen 7600: 37.24; AMD 7600: 37.18

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.
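
For reference, Darktable's command-line front end can reproduce this kind of export outside the GUI; a minimal sketch, with the input raw file name assumed.

    import subprocess

    # darktable-cli <input> <output>; the raw file name here is a placeholder.
    subprocess.run(["darktable-cli", "boat.nef", "boat.jpg"], check=True)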

OpenBenchmarking.org result summary - Darktable 3.8.1 - Test: Boat - Acceleration: CPU-only - Seconds, fewer is better: 7900: 3.028; Ryzen 9 7900: 3.056; AMD 7700: 3.667; Ryzen 7 7700: 3.742; 7700: 3.785; Ryzen 7600 AMD: 4.526; AMD 7600: 4.557; Ryzen 7600: 4.585

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result summary - SVT-HEVC 1.5.0 - Tuning: 10 - Input: Bosphorus 4K - Frames Per Second, more is better: 7900: 134.92; Ryzen 9 7900: 134.80; AMD 7700: 110.60; 7700: 110.11; Ryzen 7 7700: 109.97; AMD 7600: 89.96; Ryzen 7600 AMD: 89.65; Ryzen 7600: 89.26

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.
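
By way of illustration, the same kind of JPEG-to-AVIF encode can be invoked directly; speed 0 below matches the slowest setting in this run, and the file names are placeholders.

    import subprocess

    # avifenc -s selects the encoder speed (0 = slowest / highest effort); file names are placeholders.
    subprocess.run(["avifenc", "-s", "0", "input.jpg", "output.avif"], check=True)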

OpenBenchmarking.org result summary - libavif avifenc 0.11 - Encoder Speed: 0 - Seconds, fewer is better: 7900: 94.77; Ryzen 9 7900: 95.69; AMD 7700: 115.88; Ryzen 7 7700: 116.73; 7700: 116.80; Ryzen 7600 AMD: 142.81; Ryzen 7600: 142.85; AMD 7600: 143.24

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
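
For a feel of what level 19 does, the zstandard Python bindings expose the same compressor; a minimal sketch, assuming the sample FreeBSD image is present locally.

    import zstandard as zstd

    # Compress the sample disk image at level 19 and verify it round-trips.
    with open("FreeBSD-12.2-RELEASE-amd64-memstick.img", "rb") as f:
        data = f.read()

    compressed = zstd.ZstdCompressor(level=19).compress(data)
    restored = zstd.ZstdDecompressor().decompress(compressed)
    assert restored == data
    print(f"ratio: {len(data) / len(compressed):.2f}")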

OpenBenchmarking.org result summary - Zstd Compression 1.5.0 - Compression Level: 19 - Compression Speed - MB/s, more is better: 7900: 64.2; Ryzen 9 7900: 63.8; 7700: 52.6; AMD 7700: 52.5; Ryzen 7 7700: 52.4; Ryzen 7600 AMD: 43.6; Ryzen 7600: 43.1; AMD 7600: 42.8

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result summary - SVT-AV1 1.4 - Encoder Mode: Preset 8 - Input: Bosphorus 1080p - Frames Per Second, more is better: 7900: 158.66; Ryzen 9 7900: 158.21; AMD 7700: 129.95; Ryzen 7 7700: 129.25; 7700: 129.24; Ryzen 7600 AMD: 106.89; AMD 7600: 106.01; Ryzen 7600: 105.84

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.
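
For reference, a comparable encode can be started from the kvazaar CLI; the raw YUV input name is assumed, and --preset veryfast corresponds to the "Very Fast" preset charted below.

    import subprocess

    # Raw YUV input needs an explicit resolution; the file name is a placeholder.
    subprocess.run(
        ["kvazaar", "-i", "Bosphorus_3840x2160.yuv", "--input-res", "3840x2160",
         "--preset", "veryfast", "-o", "bosphorus_4k.hevc"],
        check=True,
    )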

OpenBenchmarking.org result summary - Kvazaar 2.1 - Video Input: Bosphorus 4K - Video Preset: Very Fast - Frames Per Second, more is better: 7900: 31.48; Ryzen 9 7900: 31.45; AMD 7700: 26.42; 7700: 26.20; Ryzen 7 7700: 26.18; AMD 7600: 21.15; Ryzen 7600: 21.14; Ryzen 7600 AMD: 21.10

Neural Magic DeepSparse

OpenBenchmarking.org result summary - Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream - ms/batch, fewer is better: 7900: 21.14; Ryzen 9 7900: 21.27; AMD 7700: 24.61; 7700: 24.76; Ryzen 7 7700: 24.96; Ryzen 7600 AMD: 31.26; AMD 7600: 31.30; Ryzen 7600: 31.43

OpenBenchmarking.org result summary - Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream - items/sec, more is better: 7900: 47.29; Ryzen 9 7900: 47.00; AMD 7700: 40.63; 7700: 40.38; Ryzen 7 7700: 40.05; Ryzen 7600 AMD: 31.99; AMD 7600: 31.94; Ryzen 7600: 31.81

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result summary - SVT-VP9 0.3 - Tuning: VMAF Optimized - Input: Bosphorus 4K - Frames Per Second, more is better: Ryzen 9 7900: 93.60; 7900: 90.79; AMD 7700: 72.63; Ryzen 7 7700: 72.25; 7700: 71.99; Ryzen 7600 AMD: 65.55; Ryzen 7600: 63.84; AMD 7600: 63.00

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result summary - Rodinia 3.1 - Test: OpenMP Leukocyte - Seconds, fewer is better: Ryzen 9 7900: 59.02; 7900: 59.73; Ryzen 7600 AMD: 81.95; AMD 7700: 83.79; 7700: 84.65; Ryzen 7 7700: 84.99; AMD 7600: 86.47; Ryzen 7600: 87.10

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result summary - SVT-AV1 1.4 - Encoder Mode: Preset 4 - Input: Bosphorus 4K - Frames Per Second, more is better: Ryzen 9 7900: 4.468; 7900: 4.452; 7700: 3.686; Ryzen 7 7700: 3.684; AMD 7700: 3.680; Ryzen 7600 AMD: 3.058; AMD 7600: 3.042; Ryzen 7600: 3.034

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result summary - libavif avifenc 0.11 - Encoder Speed: 6, Lossless - Seconds, fewer is better: Ryzen 9 7900: 6.765; 7900: 6.826; AMD 7700: 8.050; 7700: 8.171; Ryzen 7 7700: 8.174; Ryzen 7600: 9.860; Ryzen 7600 AMD: 9.902; AMD 7600: 9.957

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result summary - SVT-VP9 0.3 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 4K - Frames Per Second, more is better: Ryzen 9 7900: 100.14; 7900: 99.80; AMD 7700: 80.15; 7700: 78.29; Ryzen 7 7700: 77.30; Ryzen 7600 AMD: 69.18; AMD 7600: 68.55; Ryzen 7600: 68.23

Neural Magic DeepSparse

OpenBenchmarking.org result summary - Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream - ms/batch, fewer is better: 7900: 10.71; Ryzen 9 7900: 10.85; AMD 7700: 12.39; Ryzen 7 7700: 12.43; 7700: 12.48; Ryzen 7600: 15.63; AMD 7600: 15.64; Ryzen 7600 AMD: 15.68

OpenBenchmarking.org result summary - Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream - items/sec, more is better: 7900: 93.30; Ryzen 9 7900: 92.13; AMD 7700: 80.70; Ryzen 7 7700: 80.44; 7700: 80.09; Ryzen 7600: 63.95; AMD 7600: 63.90; Ryzen 7600 AMD: 63.76

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result summary - Rodinia 3.1 - Test: OpenMP Streamcluster - Seconds, fewer is better: Ryzen 9 7900: 8.298; 7900: 8.430; AMD 7700: 11.578; Ryzen 7600 AMD: 11.961; AMD 7600: 11.979; Ryzen 7600: 12.069; Ryzen 7 7700: 12.114; 7700: 12.127

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result summary - ONNX Runtime 1.11 - Model: super-resolution-10 - Device: CPU - Executor: Parallel - Inferences Per Minute, more is better: Ryzen 9 7900: 6791; 7900: 6715; 7700: 5892; Ryzen 7 7700: 5839; AMD 7700: 5747; AMD 7600: 4678; Ryzen 7600: 4674; Ryzen 7600 AMD: 4656

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result summary - oneDNN 3.0 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU - ms, fewer is better: 7900: 3.17749; Ryzen 9 7900: 3.17985; AMD 7700: 4.17400; Ryzen 7 7700: 4.31902; 7700: 4.32645; AMD 7600: 4.47534; Ryzen 7600 AMD: 4.48129; Ryzen 7600: 4.63234

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result summary - Zstd Compression 1.5.0 - Compression Level: 19, Long Mode - Compression Speed - MB/s, more is better: 7900: 51.6; Ryzen 9 7900: 51.0; AMD 7700: 46.8; 7700: 46.4; Ryzen 7 7700: 45.8; AMD 7600: 35.9; Ryzen 7600 AMD: 35.9; Ryzen 7600: 35.7

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result summary - SVT-AV1 1.4 - Encoder Mode: Preset 12 - Input: Bosphorus 4K - Frames Per Second, more is better: Ryzen 9 7900: 183.70; 7900: 181.98; AMD 7700: 152.05; 7700: 150.39; Ryzen 7 7700: 149.54; AMD 7600: 128.58; Ryzen 7600 AMD: 127.83; Ryzen 7600: 127.22

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result summary - Kvazaar 2.1 - Video Input: Bosphorus 4K - Video Preset: Ultra Fast - Frames Per Second, more is better: 7900: 54.71; Ryzen 9 7900: 54.60; AMD 7700: 47.01; Ryzen 7 7700: 46.89; 7700: 46.84; Ryzen 7600 AMD: 38.17; AMD 7600: 38.04; Ryzen 7600: 37.96

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result summary - libavif avifenc 0.11 - Encoder Speed: 2 - Seconds, fewer is better: 7900: 47.10; Ryzen 9 7900: 47.23; AMD 7700: 55.99; 7700: 56.44; Ryzen 7 7700: 56.57; Ryzen 7600 AMD: 67.69; AMD 7600: 67.81; Ryzen 7600: 67.82

Y-Cruncher

Y-Cruncher is a multi-threaded Pi benchmark capable of computing Pi to trillions of digits. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result summary - Y-Cruncher 0.7.10.9513 - Pi Digits To Calculate: 1B - Seconds, fewer is better: Ryzen 9 7900: 23.30; 7900: 23.32; AMD 7700: 27.89; Ryzen 7 7700: 28.00; 7700: 28.03; Ryzen 7600 AMD: 32.99; Ryzen 7600: 33.04; AMD 7600: 33.08

Timed PHP Compilation

This test times how long it takes to build PHP. Learn more via the OpenBenchmarking.org test page.
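
The test is essentially a stopwatch around a standard autotools build; a minimal sketch of the same idea, assuming a PHP source tree has already been extracted into php-src.

    import os
    import subprocess
    import time

    # Time a configure + parallel make of an extracted PHP source tree (path assumed).
    os.chdir("php-src")
    subprocess.run(["./configure"], check=True)
    start = time.time()
    subprocess.run(["make", f"-j{os.cpu_count()}"], check=True)
    print(f"build time: {time.time() - start:.1f}s")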

OpenBenchmarking.org result summary - Timed PHP Compilation 8.1.9 - Time To Compile - Seconds, fewer is better: 7900: 39.74; Ryzen 9 7900: 40.80; AMD 7700: 48.34; Ryzen 7 7700: 48.77; 7700: 48.93; Ryzen 7600 AMD: 55.80; Ryzen 7600: 56.33; AMD 7600: 56.41

GPAW

GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.
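
As a small, self-contained illustration of the kind of workload involved (not the benchmark's actual Carbon Nanotube input), GPAW is driven from Python through ASE:

    from ase.build import nanotube
    from gpaw import GPAW

    # Build a short (6,0) carbon nanotube and run a small DFT calculation on it.
    atoms = nanotube(6, 0, length=2)
    atoms.center(vacuum=4.0, axis=(0, 1))
    atoms.calc = GPAW(xc="PBE", h=0.25, txt="cnt.txt")
    print(atoms.get_potential_energy())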

OpenBenchmarking.org result summary - GPAW 22.1 - Input: Carbon Nanotube - Seconds, fewer is better: Ryzen 9 7900: 168.93; 7900: 169.05; AMD 7700: 207.91; 7700: 209.08; Ryzen 7 7700: 209.84; Ryzen 7600 AMD: 235.98; AMD 7600: 237.47; Ryzen 7600: 239.33

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.
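
For orientation, the underlying vpxenc invocation looks roughly like the sketch below; the input file name is assumed, and --cpu-used is the knob that the "Speed" settings map to.

    import subprocess

    # Single-pass VP9 encode; --cpu-used trades speed for quality (Speed 5 here).
    subprocess.run(
        ["vpxenc", "--codec=vp9", "--cpu-used=5", "--threads=16",
         "-o", "bosphorus_4k.webm", "Bosphorus_3840x2160.y4m"],
        check=True,
    )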

OpenBenchmarking.org result summary - VP9 libvpx Encoding 1.10.0 - Speed: Speed 5 - Input: Bosphorus 4K - Frames Per Second, more is better: 7700: 30.16; Ryzen 7 7700: 29.19; AMD 7700: 29.01; 7900: 24.36; Ryzen 9 7900: 23.99; AMD 7600: 22.36; Ryzen 7600 AMD: 21.96; Ryzen 7600: 21.31

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result summary - Kvazaar 2.1 - Video Input: Bosphorus 1080p - Video Preset: Medium - Frames Per Second, more is better: 7900: 63.17; Ryzen 9 7900: 63.16; AMD 7700: 58.86; Ryzen 7 7700: 58.76; 7700: 58.56; AMD 7600: 44.68; Ryzen 7600 AMD: 44.68; Ryzen 7600: 44.65

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result summary - Darktable 3.8.1 - Test: Server Rack - Acceleration: CPU-only - Seconds, fewer is better: Ryzen 9 7900: 0.126; 7900: 0.135; AMD 7700: 0.148; 7700: 0.156; Ryzen 7 7700: 0.158; AMD 7600: 0.176; Ryzen 7600 AMD: 0.177; Ryzen 7600: 0.178

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result summary - oneDNN 3.0 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU - ms, fewer is better: 7900: 5.21572; Ryzen 9 7900: 5.22674; Ryzen 7600: 7.06269; AMD 7700: 7.08048; AMD 7600: 7.09515; Ryzen 7600 AMD: 7.11121; Ryzen 7 7700: 7.33174; 7700: 7.34668

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.
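
For context, the MPI build of NPB is launched with one rank per logical CPU; a minimal sketch, assuming the LU class C binary has been built as bin/lu.C.x.

    import os
    import subprocess

    # Run the LU pseudo-application, class C, with one MPI rank per logical CPU.
    subprocess.run(["mpirun", "-np", str(os.cpu_count()), "./bin/lu.C.x"], check=True)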

OpenBenchmarking.org result summary - NAS Parallel Benchmarks 3.4 - Test / Class: LU.C - Total Mop/s, more is better: Ryzen 9 7900: 44554.53; 7900: 44535.60; AMD 7700: 44080.08; 7700: 43910.53; Ryzen 7 7700: 43900.91; Ryzen 7600: 31801.69; AMD 7600: 31784.52; Ryzen 7600 AMD: 31737.61

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result summary - ONNX Runtime 1.11 - Model: bertsquad-12 - Device: CPU - Executor: Parallel - Inferences Per Minute, more is better: 7900: 772; Ryzen 9 7900: 767; AMD 7700: 676; Ryzen 7 7700: 668; 7700: 667; AMD 7600: 555; Ryzen 7600 AMD: 552; Ryzen 7600: 550

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result summary - SVT-AV1 1.4 - Encoder Mode: Preset 13 - Input: Bosphorus 4K - Frames Per Second, more is better: Ryzen 9 7900: 190.96; 7900: 189.15; AMD 7700: 162.39; 7700: 162.38; Ryzen 7 7700: 161.95; AMD 7600: 136.85; Ryzen 7600: 136.43; Ryzen 7600 AMD: 136.23

Timed Wasmer Compilation

This test times how long it takes to compile Wasmer. Wasmer is written in the Rust programming language and is a WebAssembly runtime implementation that supports WASI and Emscripten. This test profile builds Wasmer with the Cranelift and Singlepass compiler features enabled. Learn more via the OpenBenchmarking.org test page.
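
The timed step is essentially a release cargo build of the Wasmer workspace; a minimal sketch, where the feature names follow the description above but should be treated as assumptions about the exact Cargo features used by the test profile.

    import subprocess
    import time

    # Time a release build of Wasmer with the Cranelift and Singlepass compilers enabled
    # (feature names and checkout path are assumptions).
    start = time.time()
    subprocess.run(
        ["cargo", "build", "--release", "--features", "cranelift,singlepass"],
        cwd="wasmer", check=True,
    )
    print(f"compile time: {time.time() - start:.1f}s")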

OpenBenchmarking.org result summary - Timed Wasmer Compilation 2.3 - Time To Compile - Seconds, fewer is better: 7900: 37.91; Ryzen 9 7900: 37.93; Ryzen 7 7700: 44.18; AMD 7700: 44.33; 7700: 44.70; Ryzen 7600 AMD: 50.77; AMD 7600: 50.81; Ryzen 7600: 53.13

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result summary - Darktable 3.8.1 - Test: Masskrug - Acceleration: CPU-only - Seconds, fewer is better: 7900: 3.071; Ryzen 9 7900: 3.174; AMD 7700: 3.584; Ryzen 7 7700: 3.652; 7700: 3.690; Ryzen 7600 AMD: 4.220; AMD 7600: 4.232; Ryzen 7600: 4.286

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference, high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result summary - Xcompact3d Incompact3d 2021-03-11 - Input: input.i3d 129 Cells Per Direction - Seconds, fewer is better: 7900: 15.07; Ryzen 9 7900: 15.08; AMD 7700: 19.44; Ryzen 7 7700: 19.56; 7700: 19.59; Ryzen 7600: 20.94; Ryzen 7600 AMD: 20.96; AMD 7600: 20.99

Y-Cruncher

Y-Cruncher is a multi-threaded Pi benchmark capable of computing Pi to trillions of digits. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result summary - Y-Cruncher 0.7.10.9513 - Pi Digits To Calculate: 500M - Seconds, fewer is better: Ryzen 9 7900: 10.85; 7900: 10.87; AMD 7700: 12.68; 7700: 12.72; Ryzen 7 7700: 12.77; Ryzen 7600: 15.04; Ryzen 7600 AMD: 15.05; AMD 7600: 15.08

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result summary - oneDNN 3.0 - Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU - ms, fewer is better: 7900: 1.14326; Ryzen 9 7900: 1.14443; AMD 7700: 1.23496; Ryzen 7 7700: 1.27209; 7700: 1.27245; Ryzen 7600: 1.57492; Ryzen 7600 AMD: 1.58054; AMD 7600: 1.58360

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program, otherwise on Windows it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result summary - GIMP 2.10.30 - Test: resize - Seconds, fewer is better: AMD 7700: 9.866; Ryzen 7 7700: 10.033; 7700: 10.183; Ryzen 7600 AMD: 10.306; Ryzen 7600: 10.417; AMD 7600: 10.421; 7900: 13.306; Ryzen 9 7900: 13.602

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result summary - NAS Parallel Benchmarks 3.4 - Test / Class: FT.C - Total Mop/s, more is better: 7900: 24918.34; Ryzen 9 7900: 24557.85; 7700: 24004.74; Ryzen 7 7700: 23930.17; AMD 7700: 23901.69; Ryzen 7600: 18217.43; AMD 7600: 18201.55; Ryzen 7600 AMD: 18180.58

OpenBenchmarking.org result summary - NAS Parallel Benchmarks 3.4 - Test / Class: CG.C - Total Mop/s, more is better: 7900: 11923.73; Ryzen 9 7900: 11860.08; Ryzen 7 7700: 9553.45; 7700: 9535.43; AMD 7700: 9511.86; Ryzen 7600: 8923.12; AMD 7600: 8741.36; Ryzen 7600 AMD: 8739.83

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result summary - ONNX Runtime 1.11 - Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel - Inferences Per Minute, more is better: 7900: 107; Ryzen 9 7900: 106; 7700: 95; Ryzen 7 7700: 94; AMD 7700: 94; Ryzen 7600 AMD: 80; Ryzen 7600: 79; AMD 7600: 79

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result summary - Kvazaar 2.1 - Video Input: Bosphorus 1080p - Video Preset: Ultra Fast - Frames Per Second, more is better: 7900: 216.49; Ryzen 9 7900: 215.98; AMD 7700: 202.80; Ryzen 7 7700: 201.88; 7700: 201.63; AMD 7600: 160.45; Ryzen 7600 AMD: 160.28; Ryzen 7600: 160.21

Timed GDB GNU Debugger Compilation

This test times how long it takes to build the GNU Debugger (GDB) in a default configuration. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result summary - Timed GDB GNU Debugger Compilation 10.2 - Time To Compile - Seconds, fewer is better: 7900: 38.61; Ryzen 9 7900: 38.67; AMD 7700: 44.08; Ryzen 7 7700: 44.57; 7700: 44.60; AMD 7600: 51.49; Ryzen 7600 AMD: 51.55; Ryzen 7600: 51.91

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result summary - oneDNN 3.0 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU - ms, fewer is better: Ryzen 9 7900: 5.70365; 7900: 5.72641; AMD 7700: 7.30768; AMD 7600: 7.37699; Ryzen 7600 AMD: 7.38014; Ryzen 7600: 7.39868; Ryzen 7 7700: 7.63993; 7700: 7.64763

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.
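
The numbers come from OpenVINO's bundled benchmark_app; a minimal sketch of an equivalent invocation, with the model file name assumed rather than taken from this test profile.

    import subprocess

    # benchmark_app reports throughput and latency for the given IR model on the CPU plugin.
    subprocess.run(
        ["benchmark_app", "-m", "face-detection-0206.xml", "-d", "CPU", "-t", "30"],
        check=True,
    )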

OpenBenchmarking.org result summary - OpenVINO 2022.3 - Model: Machine Translation EN To DE FP16 - Device: CPU - ms, fewer is better: 7700: 49.54; Ryzen 7 7700: 50.14; AMD 7700: 54.27; 7900: 60.71; Ryzen 9 7900: 61.53; Ryzen 7600: 64.24; AMD 7600: 66.37; Ryzen 7600 AMD: 66.40

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result summary - VP9 libvpx Encoding 1.10.0 - Speed: Speed 0 - Input: Bosphorus 1080p - Frames Per Second, more is better: AMD 7700: 22.97; 7700: 22.49; Ryzen 7 7700: 22.45; Ryzen 9 7900: 21.39; 7900: 21.17; AMD 7600: 17.30; Ryzen 7600: 17.23; Ryzen 7600 AMD: 17.20

OpenBenchmarking.org result summary - VP9 libvpx Encoding 1.10.0 - Speed: Speed 0 - Input: Bosphorus 4K - Frames Per Second, more is better: AMD 7700: 11.10; Ryzen 7 7700: 10.99; 7700: 10.96; 7900: 10.41; Ryzen 9 7900: 10.28; AMD 7600: 8.61; Ryzen 7600 AMD: 8.42; Ryzen 7600: 8.32

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.
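
As a rough sketch, cpuminer-opt has a built-in benchmark mode that hashes without connecting to a pool; the algorithm name below (allium, which Garlicoin uses) and the flags are assumptions to be checked against the cpuminer-opt help output.

    import os
    import subprocess

    # Offline hash-rate benchmark; the algorithm name and flags are assumptions.
    subprocess.run(
        ["cpuminer", "-a", "allium", "--benchmark", "-t", str(os.cpu_count())],
        check=True,
    )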

OpenBenchmarking.org result summary - Cpuminer-Opt 3.20.3 - Algorithm: Garlicoin - kH/s, more is better: AMD 7700: 4711.29; 7700: 4593.06; 7900: 4566.15; Ryzen 7 7700: 4553.53; Ryzen 9 7900: 4390.50; Ryzen 7600 AMD: 3598.29; AMD 7600: 3579.50; Ryzen 7600: 3534.18

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result summary - OpenFOAM 10 - Input: drivaerFastback, Small Mesh Size - Mesh Time - Seconds, fewer is better: 7900: 26.69; Ryzen 9 7900: 26.94; AMD 7700: 31.23; 7700: 31.41; Ryzen 7 7700: 31.59; Ryzen 7600: 35.29; Ryzen 7600 AMD: 35.45; AMD 7600: 35.47

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result summary - OpenVINO 2022.3 - Model: Vehicle Detection FP16 - Device: CPU - ms, fewer is better: 7700: 8.12; Ryzen 7 7700: 8.14; AMD 7700: 8.62; Ryzen 9 7900: 8.91; 7900: 8.92; Ryzen 7600: 10.66; AMD 7600: 10.67; Ryzen 7600 AMD: 10.78

nekRS

nekRS is an open-source Navier-Stokes solver based on the spectral element method. NekRS supports both CPU and GPU/accelerator execution, though this test profile is currently configured for CPU execution. NekRS is part of Nek5000 from the Mathematics and Computer Science (MCS) division at Argonne National Laboratory. This nekRS benchmark is primarily relevant to large core count HPC servers and otherwise may be very time consuming. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result summary - nekRS 22.0 - Input: TurboPipe Periodic - FLOP/s, more is better: Ryzen 9 7900: 65974500000; 7900: 64877800000; AMD 7700: 56008400000; Ryzen 7 7700: 55548200000; 7700: 55524300000; Ryzen 7600 AMD: 49808633333; AMD 7600: 49735600000; Ryzen 7600: 49717500000

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result summary - NCNN 20220729 - Target: CPU - Model: mobilenet - ms, fewer is better: AMD 7700: 6.54; Ryzen 7600 AMD: 6.89; AMD 7600: 6.90; Ryzen 7 7700: 7.12; Ryzen 7600: 7.13; 7700: 7.39; Ryzen 9 7900: 8.40; 7900: 8.67

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result summary - Liquid-DSP 2021.01.31 - Threads: 8 - Buffer Length: 256 - Filter Length: 57 - samples/s, more is better: Ryzen 7 7700: 759550000; AMD 7700: 759310000; 7700: 756340000; Ryzen 9 7900: 753250000; 7900: 739430000; Ryzen 7600: 585770000; AMD 7600: 583500000; Ryzen 7600 AMD: 573297500

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result summary - OpenVINO 2022.3 - Model: Face Detection FP16-INT8 - Device: CPU - ms, fewer is better: AMD 7700: 292.09; 7700: 293.37; Ryzen 7 7700: 294.43; Ryzen 9 7900: 330.58; 7900: 331.39; Ryzen 7600: 379.73; AMD 7600: 386.47; Ryzen 7600 AMD: 386.64

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result summary - SVT-AV1 1.4 - Encoder Mode: Preset 4 - Input: Bosphorus 1080p - Frames Per Second, more is better: Ryzen 9 7900: 13.49; 7900: 13.46; AMD 7700: 12.13; 7700: 12.11; Ryzen 7 7700: 12.10; AMD 7600: 10.30; Ryzen 7600 AMD: 10.30; Ryzen 7600: 10.23

OpenBenchmarking.org result summary - SVT-AV1 1.4 - Encoder Mode: Preset 12 - Input: Bosphorus 1080p - Frames Per Second, more is better: 7900: 687.93; Ryzen 9 7900: 649.49; AMD 7700: 616.10; Ryzen 7 7700: 615.79; 7700: 609.82; AMD 7600: 528.45; Ryzen 7600 AMD: 524.24; Ryzen 7600: 522.04

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result summary - Zstd Compression 1.5.0 - Compression Level: 3 - Compression Speed - MB/s, more is better: Ryzen 9 7900: 6289.1; 7900: 6230.7; Ryzen 7 7700: 5140.0; AMD 7700: 5126.4; 7700: 5055.3; Ryzen 7600: 4863.4; Ryzen 7600 AMD: 4776.6; AMD 7600: 4772.9

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result summary - ONNX Runtime 1.11 - Model: yolov4 - Device: CPU - Executor: Parallel - Inferences Per Minute, more is better: Ryzen 9 7900: 498; 7900: 495; 7700: 448; Ryzen 7 7700: 445; AMD 7700: 442; Ryzen 7600: 382; Ryzen 7600 AMD: 381; AMD 7600: 378

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result summary - OpenVINO 2022.3 - Model: Vehicle Detection FP16-INT8 - Device: CPU - ms, fewer is better: AMD 7700: 4.48; Ryzen 7 7700: 4.54; 7700: 4.55; 7900: 5.15; Ryzen 9 7900: 5.15; Ryzen 7600: 5.78; AMD 7600: 5.80; Ryzen 7600 AMD: 5.90

OpenBenchmarking.org result summary - OpenVINO 2022.3 - Model: Weld Porosity Detection FP16 - Device: CPU - ms, fewer is better: AMD 7700: 5.77; 7700: 5.81; Ryzen 7 7700: 5.82; 7900: 6.53; Ryzen 9 7900: 6.53; Ryzen 7600 AMD: 7.57; Ryzen 7600: 7.57; AMD 7600: 7.59

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org result summary - Kvazaar 2.1 - Video Input: Bosphorus 1080p - Video Preset: Very Fast - Frames Per Second, more is better: 7900: 118.78; Ryzen 9 7900: 118.71; AMD 7700: 111.63; Ryzen 7 7700: 111.56; 7700: 111.46; Ryzen 7600 AMD: 91.30; AMD 7600: 91.22; Ryzen 7600: 91.18

Neural Magic DeepSparse

OpenBenchmarking.org result summary - Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream - ms/batch, fewer is better: AMD 7700: 16.86; Ryzen 7 7700: 16.97; 7700: 16.99; Ryzen 9 7900: 18.25; 7900: 18.32; Ryzen 7600: 21.78; AMD 7600: 21.86; Ryzen 7600 AMD: 21.86

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Face Detection FP16 - Device: CPUAMD 77007700Ryzen 7 77007900Ryzen 9 7900Ryzen 7600AMD 7600Ryzen 7600 AMD160320480640800SE +/- 1.54, N = 3576.13578.82581.34651.18651.71746.14746.15746.94MIN: 505.33 / MAX: 597.67MIN: 394.1 / MAX: 612.37MIN: 454.21 / MAX: 613.21MIN: 573.67 / MAX: 673.86MIN: 621.09 / MAX: 672.91MIN: 723.99 / MAX: 764.64MIN: 724.14 / MAX: 766.22MIN: 714.17 / MAX: 765.411. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Face Detection FP16 - Device: CPUAMD 77007700Ryzen 7 77007900Ryzen 9 7900Ryzen 7600AMD 7600Ryzen 7600 AMD130260390520650Min: 744.37 / Avg: 746.94 / Max: 749.711. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

Neural Magic DeepSparse

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-StreamAMD 7700Ryzen 7 77007700Ryzen 9 79007900Ryzen 7600Ryzen 7600 AMDAMD 76001326395265SE +/- 0.07, N = 359.3058.9158.8454.7854.5845.9145.7445.74
OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-StreamAMD 7700Ryzen 7 77007700Ryzen 9 79007900Ryzen 7600Ryzen 7600 AMDAMD 76001224364860Min: 45.66 / Avg: 45.74 / Max: 45.88

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.
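
The CPU-only runs above amount to exporting a test image with darktable's command-line front end; a hedged sketch (file names are placeholders, and --core --disable-opencl assumes darktable-cli's usual option pass-through):

    import subprocess

    # Export an image with darktable-cli, forcing CPU-only processing (placeholder file names).
    subprocess.run([
        "darktable-cli", "input.raw", "output.jpg",
        "--core", "--disable-opencl",
    ], check=True)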

OpenBenchmarking.orgSeconds, Fewer Is BetterDarktable 3.8.1Test: Server Room - Acceleration: CPU-onlyRyzen 9 79007900AMD 77007700Ryzen 7 7700Ryzen 7600 AMDAMD 7600Ryzen 76000.73891.47782.21672.95563.6945SE +/- 0.036, N = 32.5392.5762.8312.8942.9023.2433.2503.284
OpenBenchmarking.orgSeconds, Fewer Is BetterDarktable 3.8.1Test: Server Room - Acceleration: CPU-onlyRyzen 9 79007900AMD 77007700Ryzen 7 7700Ryzen 7600 AMDAMD 7600Ryzen 7600246810Min: 3.2 / Avg: 3.24 / Max: 3.32

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
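
NCNN also ships a Python binding that mirrors its C++ API; a minimal sketch of loading a model and running one inference is shown below (the param/bin file names, blob names, and input shape are placeholders, not the files used by this test profile):

    import numpy as np
    import ncnn  # pip install ncnn

    net = ncnn.Net()
    net.load_param("squeezenet_ssd.param")  # placeholder model files
    net.load_model("squeezenet_ssd.bin")

    ex = net.create_extractor()
    dummy = np.random.rand(300, 300, 3).astype(np.float32)  # placeholder input
    ex.input("data", ncnn.Mat(dummy))                        # placeholder blob name
    ret, out = ex.extract("detection_out")                   # placeholder blob name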

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: squeezenet_ssdAMD 7700Ryzen 7600 AMDAMD 7600Ryzen 7600Ryzen 7 77007700Ryzen 9 790079003691215SE +/- 0.01, N = 39.099.129.179.2010.0210.3311.7311.74MIN: 8.76 / MAX: 10.49MIN: 8.99 / MAX: 9.91MIN: 9.02 / MAX: 9.75MIN: 8.98 / MAX: 10.63MIN: 9.18 / MAX: 19.45MIN: 9.5 / MAX: 11.46MIN: 11.48 / MAX: 12.69MIN: 11.43 / MAX: 21.521. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: squeezenet_ssdAMD 7700Ryzen 7600 AMDAMD 7600Ryzen 7600Ryzen 7 77007700Ryzen 9 790079003691215Min: 9.11 / Avg: 9.12 / Max: 9.131. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Person Detection FP16 - Device: CPU7700AMD 7700Ryzen 7 77007900Ryzen 9 7900AMD 7600Ryzen 7600Ryzen 7600 AMD30060090012001500SE +/- 5.45, N = 31015.271016.681032.671153.461159.381293.341303.001309.26MIN: 718.11 / MAX: 1124.23MIN: 897.5 / MAX: 1118.4MIN: 975.43 / MAX: 1129.72MIN: 731.93 / MAX: 1335.35MIN: 1036.64 / MAX: 1295.05MIN: 914.71 / MAX: 1411.42MIN: 1221.08 / MAX: 1432.12MIN: 749.1 / MAX: 1427.461. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Person Detection FP16 - Device: CPU7700AMD 7700Ryzen 7 77007900Ryzen 9 7900AMD 7600Ryzen 7600Ryzen 7600 AMD2004006008001000Min: 1298.37 / Avg: 1309.26 / Max: 1315.221. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: resnet18AMD 7600Ryzen 7600 AMDRyzen 7600AMD 7700Ryzen 7 77007700Ryzen 9 79007900246810SE +/- 0.09, N = 35.565.655.685.786.036.967.137.16MIN: 5.51 / MAX: 6.33MIN: 5.5 / MAX: 6.42MIN: 5.51 / MAX: 7.09MIN: 5.57 / MAX: 12.93MIN: 5.61 / MAX: 7.05MIN: 6.51 / MAX: 8.09MIN: 6.99 / MAX: 7.93MIN: 7 / MAX: 7.921. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: resnet18AMD 7600Ryzen 7600 AMDRyzen 7600AMD 7700Ryzen 7 77007700Ryzen 9 790079003691215Min: 5.55 / Avg: 5.65 / Max: 5.831. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterRodinia 3.1Test: OpenMP HotSpot3D7900AMD 7700Ryzen 7 7700AMD 7600Ryzen 9 79007700Ryzen 7600Ryzen 7600 AMD1530456075SE +/- 0.87, N = 352.0753.7653.9357.1158.2061.6362.4766.571. (CXX) g++ options: -O2 -lOpenCL
OpenBenchmarking.orgSeconds, Fewer Is BetterRodinia 3.1Test: OpenMP HotSpot3D7900AMD 7700Ryzen 7 7700AMD 7600Ryzen 9 79007700Ryzen 7600Ryzen 7600 AMD1326395265Min: 64.82 / Avg: 66.57 / Max: 67.481. (CXX) g++ options: -O2 -lOpenCL

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.
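
A rough reproduction of the Preset 13 1080p encode via the SvtAv1EncApp command-line encoder (the input clip and frame rate are placeholders):

    import subprocess

    subprocess.run([
        "SvtAv1EncApp",
        "-i", "input_1080p.yuv",
        "-w", "1920", "-h", "1080",
        "--fps", "30",
        "--preset", "13",
        "-b", "output.ivf",
    ], check=True)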

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.4Encoder Mode: Preset 13 - Input: Bosphorus 1080p7900Ryzen 9 7900AMD 7700Ryzen 7 77007700AMD 7600Ryzen 7600Ryzen 7600 AMD160320480640800SE +/- 0.75, N = 3739.51728.99677.96670.53668.09580.00579.19578.511. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.4Encoder Mode: Preset 13 - Input: Bosphorus 1080p7900Ryzen 9 7900AMD 7700Ryzen 7 77007700AMD 7600Ryzen 7600Ryzen 7600 AMD130260390520650Min: 577.42 / Avg: 578.51 / Max: 579.951. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPUAMD 77007900Ryzen 9 79007700Ryzen 7 7700Ryzen 7600AMD 7600Ryzen 7600 AMD0.16920.33840.50760.67680.846SE +/- 0.000677, N = 30.5886390.5906330.5952110.5998880.6002440.7500020.7519070.751935MIN: 0.56MIN: 0.54MIN: 0.54MIN: 0.56MIN: 0.55MIN: 0.72MIN: 0.73MIN: 0.731. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPUAMD 77007900Ryzen 9 79007700Ryzen 7 7700Ryzen 7600AMD 7600Ryzen 7600 AMD246810Min: 0.75 / Avg: 0.75 / Max: 0.751. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Person Detection FP32 - Device: CPURyzen 7 77007700AMD 77007900Ryzen 9 7900AMD 7600Ryzen 7600 AMDRyzen 760030060090012001500SE +/- 6.34, N = 31032.401034.061039.001179.601190.801298.341306.121315.78MIN: 820.72 / MAX: 1126.15MIN: 699.51 / MAX: 1128.89MIN: 913.61 / MAX: 1111.36MIN: 1028.22 / MAX: 1530.8MIN: 1049.79 / MAX: 1286.08MIN: 960.85 / MAX: 1409.2MIN: 773.95 / MAX: 1444.66MIN: 728.56 / MAX: 1437.751. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Person Detection FP32 - Device: CPURyzen 7 77007700AMD 77007900Ryzen 9 7900AMD 7600Ryzen 7600 AMDRyzen 76002004006008001000Min: 1295.28 / Avg: 1306.12 / Max: 1317.251. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

Timed CPython Compilation

This test times how long it takes to build the reference Python implementation, CPython, with optimizations and LTO enabled for a release build. Learn more via the OpenBenchmarking.org test page.
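
Timing the same kind of optimized release build by hand looks roughly like the sketch below (it assumes a CPython source checkout in the current directory):

    import os
    import subprocess
    import time

    # Configure CPython with PGO (--enable-optimizations) and LTO, then time the build.
    subprocess.run(["./configure", "--enable-optimizations", "--with-lto"], check=True)
    start = time.time()
    subprocess.run(["make", "-j", str(os.cpu_count())], check=True)
    print(f"Build took {time.time() - start:.1f} seconds")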

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed CPython Compilation 3.10.6Build Configuration: Default7900Ryzen 9 7900AMD 77007700Ryzen 7 7700AMD 7600Ryzen 7600 AMDRyzen 76004812162013.4613.5814.9415.1815.1816.8016.8816.98

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance with x265. Learn more via the OpenBenchmarking.org test page.
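
A rough equivalent of the 1080p encode with the x265 CLI (the .y4m source clip is a placeholder; a Y4M input carries its own resolution and frame rate):

    import subprocess

    subprocess.run([
        "x265",
        "--input", "input_1080p.y4m",  # placeholder source clip
        "--output", "output.hevc",
    ], check=True)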

OpenBenchmarking.orgFrames Per Second, More Is Betterx265 3.4Video Input: Bosphorus 1080p7900Ryzen 9 7900AMD 7700Ryzen 7 77007700AMD 7600Ryzen 7600Ryzen 7600 AMD20406080100SE +/- 0.55, N = 3103.73102.8488.8687.6087.5683.8282.6182.571. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.orgFrames Per Second, More Is Betterx265 3.4Video Input: Bosphorus 1080p7900Ryzen 9 7900AMD 7700Ryzen 7 77007700AMD 7600Ryzen 7600Ryzen 7600 AMD20406080100Min: 81.52 / Avg: 82.57 / Max: 83.371. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.
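
The "Speed: 10, Lossless" runs map onto avifenc's speed and lossless options; a hedged sketch (file names are placeholders):

    import subprocess

    # Encode a JPEG to lossless AVIF at the fastest speed setting (placeholder file names).
    subprocess.run(["avifenc", "-s", "10", "-l", "sample.jpg", "sample.avif"], check=True)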

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.11Encoder Speed: 10, Lossless7900Ryzen 9 79007700Ryzen 7 7700AMD 7700Ryzen 7600 AMDRyzen 7600AMD 76000.97921.95842.93763.91684.896SE +/- 0.019, N = 33.5013.6213.8723.8803.8974.3254.3414.3521. (CXX) g++ options: -O3 -fPIC -lm
OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.11Encoder Speed: 10, Lossless7900Ryzen 9 79007700Ryzen 7 7700AMD 7700Ryzen 7600 AMDRyzen 7600AMD 7600246810Min: 4.3 / Avg: 4.32 / Max: 4.361. (CXX) g++ options: -O3 -fPIC -lm

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
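
The level/long-mode combinations above correspond to ordinary zstd command-line flags; a sketch of the level-8 long-mode case follows (the input file name matches the description, the output path is a placeholder):

    import subprocess

    src = "FreeBSD-12.2-RELEASE-amd64-memstick.img"
    # Compress at level 8 with long-distance matching on all threads, then decompress.
    subprocess.run(["zstd", "-8", "--long", "-T0", "-f", src, "-o", src + ".zst"], check=True)
    subprocess.run(["zstd", "-d", "--long", "-f", src + ".zst", "-o", "/dev/null"], check=True)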

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 8, Long Mode - Compression Speed7900Ryzen 9 7900AMD 77007700Ryzen 7 7700Ryzen 7600 AMDRyzen 7600AMD 7600400800120016002000SE +/- 2.90, N = 31637.91636.71454.31445.11443.81326.31320.81319.11. (CC) gcc options: -O3 -pthread -lz -llzma
OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 8, Long Mode - Compression Speed7900Ryzen 9 7900AMD 77007700Ryzen 7 7700Ryzen 7600 AMDRyzen 7600AMD 760030060090012001500Min: 1321.8 / Avg: 1326.27 / Max: 1331.71. (CC) gcc options: -O3 -pthread -lz -llzma

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: yolov4-tinyAMD 7700Ryzen 7600 AMDRyzen 7 7700AMD 7600Ryzen 76007700Ryzen 9 7900790048121620SE +/- 0.12, N = 311.4012.0312.0612.1912.3413.1213.7014.09MIN: 11.18 / MAX: 13.15MIN: 11.74 / MAX: 13.04MIN: 11.25 / MAX: 14.07MIN: 11.76 / MAX: 12.93MIN: 11.85 / MAX: 14.12MIN: 12.22 / MAX: 14.69MIN: 13.55 / MAX: 14.36MIN: 13.9 / MAX: 14.241. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: yolov4-tinyAMD 7700Ryzen 7600 AMDRyzen 7 7700AMD 7600Ryzen 76007700Ryzen 9 7900790048121620Min: 11.9 / Avg: 12.03 / Max: 12.271. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.
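
DaCapo is driven as an ordinary Java jar; a hedged sketch of running the Tradebeans workload shown below (the jar file name is an assumption based on the 9.12-MR1 release naming):

    import subprocess

    # Run the Tradebeans workload from the DaCapo 9.12-MR1 benchmark jar.
    subprocess.run(["java", "-jar", "dacapo-9.12-MR1-bach.jar", "tradebeans"], check=True)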

OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 9.12-MR1Java Test: TradebeansAMD 76007700AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 7600 AMD7900Ryzen 9 7900400800120016002000SE +/- 19.03, N = 2013591371140314071424146116551676
OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 9.12-MR1Java Test: TradebeansAMD 76007700AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 7600 AMD7900Ryzen 9 790030060090012001500Min: 1377 / Avg: 1460.8 / Max: 1699

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: resnet50AMD 7700Ryzen 7 7700Ryzen 7600 AMDAMD 7600Ryzen 76007700Ryzen 9 790079003691215SE +/- 0.04, N = 39.9110.3411.0611.1111.1111.5212.0912.20MIN: 9.79 / MAX: 11.38MIN: 9.67 / MAX: 12.24MIN: 10.94 / MAX: 11.76MIN: 11.03 / MAX: 11.66MIN: 10.95 / MAX: 12.56MIN: 10.79 / MAX: 13.36MIN: 11.98 / MAX: 13.18MIN: 12.02 / MAX: 13.21. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: resnet50AMD 7700Ryzen 7 7700Ryzen 7600 AMDAMD 7600Ryzen 76007700Ryzen 9 7900790048121620Min: 11 / Avg: 11.06 / Max: 11.141. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: vision_transformerRyzen 9 79007900AMD 77007700Ryzen 7 7700AMD 7600Ryzen 7600 AMDRyzen 760020406080100SE +/- 0.04, N = 390.2193.5099.54100.13100.43110.10110.17110.67MIN: 89.92 / MAX: 96.8MIN: 93.19 / MAX: 97.76MIN: 98.35 / MAX: 108.29MIN: 98.97 / MAX: 104.21MIN: 98.86 / MAX: 105.15MIN: 109.45 / MAX: 116.06MIN: 109.46 / MAX: 111.5MIN: 109.86 / MAX: 120.661. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: vision_transformerRyzen 9 79007900AMD 77007700Ryzen 7 7700AMD 7600Ryzen 7600 AMDRyzen 760020406080100Min: 110.1 / Avg: 110.17 / Max: 110.211. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program; on Windows it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterGIMP 2.10.30Test: unsharp-maskAMD 7700Ryzen 7 77007700Ryzen 7600Ryzen 7600 AMDAMD 76007900Ryzen 9 79003691215SE +/- 0.02, N = 310.3110.5410.6710.7410.8010.8012.3812.63
OpenBenchmarking.orgSeconds, Fewer Is BetterGIMP 2.10.30Test: unsharp-maskAMD 7700Ryzen 7 77007700Ryzen 7600Ryzen 7600 AMDAMD 76007900Ryzen 9 790048121620Min: 10.76 / Avg: 10.8 / Max: 10.83

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.
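
The "Signal Source (Cosine)" style of test streams samples out of a GNU Radio source block; a minimal flowgraph sketch using GNU Radio's Python API is shown below (sample rate, tone frequency, and sample count are placeholders):

    from gnuradio import analog, blocks, gr

    samp_rate = 32_000_000  # placeholder sample rate
    tb = gr.top_block()

    # Cosine signal source -> fixed-length head -> null sink.
    src = analog.sig_source_f(samp_rate, analog.GR_COS_WAVE, 1_000, 1.0)
    head = blocks.head(gr.sizeof_float, 100_000_000)  # stop after N samples
    sink = blocks.null_sink(gr.sizeof_float)

    tb.connect(src, head, sink)
    tb.run()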

OpenBenchmarking.orgMiB/s, More Is BetterGNU RadioTest: Signal Source (Cosine)7700Ryzen 7 7700AMD 7700AMD 7600Ryzen 7600 AMDRyzen 7600Ryzen 9 7900790013002600390052006500SE +/- 6.70, N = 86105.96088.96084.95870.75865.65846.25162.05014.01. 3.10.1.1
OpenBenchmarking.orgMiB/s, More Is BetterGNU RadioTest: Signal Source (Cosine)7700Ryzen 7 7700AMD 7700AMD 7600Ryzen 7600 AMDRyzen 7600Ryzen 9 7900790011002200330044005500Min: 5831.1 / Avg: 5865.61 / Max: 5888.11. 3.10.1.1

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Person Vehicle Bike Detection FP16 - Device: CPUAMD 7700Ryzen 7 770077007900Ryzen 9 7900Ryzen 7600 AMDAMD 7600Ryzen 76001.29382.58763.88145.17526.469SE +/- 0.03, N = 34.764.794.865.445.445.675.715.75MIN: 3.73 / MAX: 12.41MIN: 3.62 / MAX: 17.47MIN: 3.43 / MAX: 7.38MIN: 3.77 / MAX: 13.94MIN: 3.81 / MAX: 14.78MIN: 4.46 / MAX: 8.96MIN: 4.14 / MAX: 17.41MIN: 4.12 / MAX: 15.91. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Person Vehicle Bike Detection FP16 - Device: CPUAMD 7700Ryzen 7 770077007900Ryzen 9 7900Ryzen 7600 AMDAMD 7600Ryzen 7600246810Min: 5.61 / Avg: 5.67 / Max: 5.71. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.
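
NPB's MPI binaries are launched through mpirun; a hedged sketch of the MG class C problem on 8 ranks (the binary name follows NPB's usual <benchmark>.<class>.x convention, and the rank count is a placeholder):

    import subprocess

    # Run the MG benchmark, class C, across 8 MPI ranks.
    subprocess.run(["mpirun", "-np", "8", "./bin/mg.C.x"], check=True)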

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: MG.CRyzen 9 79007900AMD 77007700Ryzen 7 7700Ryzen 7600 AMDRyzen 7600AMD 76005K10K15K20K25KSE +/- 5.90, N = 325644.0325610.9624079.9923860.3123843.5121442.6721396.5921352.011. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2
OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: MG.CRyzen 9 79007900AMD 77007700Ryzen 7 7700Ryzen 7600 AMDRyzen 7600AMD 76004K8K12K16K20KMin: 21435.91 / Avg: 21442.67 / Max: 21454.421. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Age Gender Recognition Retail 0013 FP16 - Device: CPURyzen 7600 AMDAMD 7600Ryzen 7600Ryzen 7 77007700AMD 77007900Ryzen 9 79000.09680.19360.29040.38720.484SE +/- 0.00, N = 30.360.360.360.370.380.380.430.43MIN: 0.22 / MAX: 8.48MIN: 0.22 / MAX: 7.98MIN: 0.22 / MAX: 12.84MIN: 0.23 / MAX: 12.31MIN: 0.23 / MAX: 13MIN: 0.24 / MAX: 2.35MIN: 0.26 / MAX: 9.51MIN: 0.26 / MAX: 8.941. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Age Gender Recognition Retail 0013 FP16 - Device: CPURyzen 7600 AMDAMD 7600Ryzen 7600Ryzen 7 77007700AMD 77007900Ryzen 9 790012345Min: 0.36 / Avg: 0.36 / Max: 0.361. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 9.12-MR1Java Test: TradesoapRyzen 9 790079007700AMD 7700AMD 7600Ryzen 7600Ryzen 7600 AMD5001000150020002500SE +/- 19.27, N = 41880193620152128215422012242
OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 9.12-MR1Java Test: TradesoapRyzen 9 790079007700AMD 7700AMD 7600Ryzen 7600Ryzen 7600 AMD400800120016002000Min: 2185 / Avg: 2241.5 / Max: 2270

Java Test: Tradesoap

Ryzen 7 7700: The test quit with a non-zero exit status.

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-StreamRyzen 7600 AMDRyzen 7600AMD 7600AMD 77007700Ryzen 7 77007900Ryzen 9 79001632486480SE +/- 0.12, N = 362.3862.5762.7768.9169.6169.6874.1074.15
OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-StreamRyzen 7600 AMDRyzen 7600AMD 7600AMD 77007700Ryzen 7 77007900Ryzen 9 79001428425670Min: 62.16 / Avg: 62.38 / Max: 62.59

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.
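
A minimal sketch of running a model through ONNX Runtime's Python API on the CPU execution provider (the model path and input shape are placeholders rather than the ONNX Zoo GPT-2 model used above):

    import numpy as np
    import onnxruntime as ort

    sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
    input_name = sess.get_inputs()[0].name
    dummy = np.zeros((1, 3, 224, 224), dtype=np.float32)  # placeholder input shape
    outputs = sess.run(None, {input_name: dummy})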

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: GPT-2 - Device: CPU - Executor: Standard7900Ryzen 9 7900AMD 7700AMD 7600Ryzen 7600 AMDRyzen 7600Ryzen 7 770077002K4K6K8K10KSE +/- 114.75, N = 12998597489254920489328623844484311. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: GPT-2 - Device: CPU - Executor: Standard7900Ryzen 9 7900AMD 7700AMD 7600Ryzen 7600 AMDRyzen 7600Ryzen 7 770077002K4K6K8K10KMin: 8284 / Avg: 8931.54 / Max: 91991. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: SP.C7900Ryzen 9 7900AMD 7700AMD 7600Ryzen 7600 AMDRyzen 76007700Ryzen 7 77003K6K9K12K15KSE +/- 8.95, N = 312794.7512761.8811060.8011044.4611043.4811031.3410989.4110954.111. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2
OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: SP.C7900Ryzen 9 7900AMD 7700AMD 7600Ryzen 7600 AMDRyzen 76007700Ryzen 7 77002K4K6K8K10KMin: 11028.44 / Avg: 11043.48 / Max: 11059.411. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-StreamRyzen 7600 AMDAMD 7600Ryzen 7600AMD 7700Ryzen 7 770077007900Ryzen 9 79001632486480SE +/- 0.31, N = 360.2960.3360.4163.5063.6764.1770.3270.33
OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-StreamRyzen 7600 AMDAMD 7600Ryzen 7600AMD 7700Ryzen 7 770077007900Ryzen 9 79001428425670Min: 59.67 / Avg: 60.29 / Max: 60.61

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.
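
The Speed 5 run maps to vpxenc's --cpu-used setting; a hedged sketch (the .y4m input clip is a placeholder and carries its own resolution and frame rate):

    import subprocess

    subprocess.run([
        "vpxenc", "--codec=vp9", "--good", "--cpu-used=5",
        "-o", "output.webm", "input_1080p.y4m",
    ], check=True)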

OpenBenchmarking.orgFrames Per Second, More Is BetterVP9 libvpx Encoding 1.10.0Speed: Speed 5 - Input: Bosphorus 1080p7700AMD 7700Ryzen 7 7700Ryzen 7600 AMDAMD 7600Ryzen 76007900Ryzen 9 79001224364860SE +/- 0.09, N = 354.4954.4353.3852.6852.0452.0346.8846.831. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11
OpenBenchmarking.orgFrames Per Second, More Is BetterVP9 libvpx Encoding 1.10.0Speed: Speed 5 - Input: Bosphorus 1080p7700AMD 7700Ryzen 7 7700Ryzen 7600 AMDAMD 7600Ryzen 76007900Ryzen 9 79001122334455Min: 52.51 / Avg: 52.68 / Max: 52.81. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPURyzen 7600 AMDAMD 7600AMD 7700Ryzen 76007700Ryzen 7 7700Ryzen 9 790079000.16430.32860.49290.65720.8215SE +/- 0.00, N = 30.630.630.630.630.640.640.720.73MIN: 0.34 / MAX: 8.02MIN: 0.39 / MAX: 1.84MIN: 0.38 / MAX: 8.56MIN: 0.34 / MAX: 12.67MIN: 0.36 / MAX: 12.27MIN: 0.37 / MAX: 13.2MIN: 0.4 / MAX: 8.98MIN: 0.39 / MAX: 9.371. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPURyzen 7600 AMDAMD 7600AMD 7700Ryzen 76007700Ryzen 7 7700Ryzen 9 79007900246810Min: 0.63 / Avg: 0.63 / Max: 0.631. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Weld Porosity Detection FP16-INT8 - Device: CPURyzen 7600Ryzen 7600 AMDAMD 76007700AMD 7700Ryzen 7 7700Ryzen 9 79007900246810SE +/- 0.01, N = 35.675.685.685.845.845.846.546.55MIN: 3.77 / MAX: 11.12MIN: 3.14 / MAX: 13.67MIN: 3.04 / MAX: 13.81MIN: 3.2 / MAX: 17.68MIN: 3.13 / MAX: 13.02MIN: 3.11 / MAX: 18.54MIN: 3.68 / MAX: 16.14MIN: 3.66 / MAX: 14.891. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Weld Porosity Detection FP16-INT8 - Device: CPURyzen 7600Ryzen 7600 AMDAMD 76007700AMD 7700Ryzen 7 7700Ryzen 9 790079003691215Min: 5.67 / Avg: 5.68 / Max: 5.691. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-StreamRyzen 7600AMD 7600Ryzen 7600 AMD7700AMD 7700Ryzen 7 77007900Ryzen 9 790020406080100SE +/- 0.16, N = 384.0184.2284.2686.9086.9287.6796.5596.89
OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-StreamRyzen 7600AMD 7600Ryzen 7600 AMD7700AMD 7700Ryzen 7 77007900Ryzen 9 790020406080100Min: 84.06 / Avg: 84.26 / Max: 84.59

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: IS.D7900Ryzen 9 79007700Ryzen 7 7700AMD 7700Ryzen 7600 AMDRyzen 7600AMD 760030060090012001500SE +/- 6.92, N = 31498.341464.481452.371428.571401.101322.961304.941299.851. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2
OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: IS.D7900Ryzen 9 79007700Ryzen 7 7700AMD 7700Ryzen 7600 AMDRyzen 7600AMD 760030060090012001500Min: 1314.23 / Avg: 1322.96 / Max: 1336.621. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterTNN 0.3Target: CPU - Model: SqueezeNet v27900Ryzen 7 7700AMD 77007700Ryzen 9 7900Ryzen 7600AMD 7600Ryzen 7600 AMD1122334455SE +/- 0.35, N = 1541.5242.0642.3342.3342.3743.9243.9247.65MIN: 41.17 / MAX: 42.01MIN: 41.98 / MAX: 42.14MIN: 42.3 / MAX: 42.43MIN: 42.27 / MAX: 42.4MIN: 42.26 / MAX: 42.44MIN: 43.87 / MAX: 44.14MIN: 43.9 / MAX: 44MIN: 43.99 / MAX: 49.871. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl
OpenBenchmarking.orgms, Fewer Is BetterTNN 0.3Target: CPU - Model: SqueezeNet v27900Ryzen 7 7700AMD 77007700Ryzen 9 7900Ryzen 7600AMD 7600Ryzen 7600 AMD1020304050Min: 44.17 / Avg: 47.65 / Max: 49.671. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 9.12-MR1Java Test: H2AMD 76007900Ryzen 9 79007700Ryzen 7600 AMDAMD 7700Ryzen 7600Ryzen 7 7700400800120016002000SE +/- 32.56, N = 2017001748179118281871187718811942
OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 9.12-MR1Java Test: H2AMD 76007900Ryzen 9 79007700Ryzen 7600 AMDAMD 7700Ryzen 7600Ryzen 7 770030060090012001500Min: 1603 / Avg: 1870.55 / Max: 2125

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-StreamRyzen 7600Ryzen 7600 AMDAMD 7600AMD 77007700Ryzen 7 7700Ryzen 9 790079001122334455SE +/- 0.03, N = 341.7642.0042.2043.2643.3843.4647.1247.47
OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-StreamRyzen 7600Ryzen 7600 AMDAMD 7600AMD 77007700Ryzen 7 7700Ryzen 9 790079001020304050Min: 41.94 / Avg: 42 / Max: 42.04

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
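
The encode settings map onto cwebp command-line options; a sketch of the Default and Quality 100 Lossless settings (file names are placeholders):

    import subprocess

    # Default-settings encode and a quality-100 lossless encode of the same JPEG.
    subprocess.run(["cwebp", "sample.jpg", "-o", "sample_default.webp"], check=True)
    subprocess.run(["cwebp", "-q", "100", "-lossless", "sample.jpg", "-o", "sample_lossless.webp"], check=True)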

OpenBenchmarking.orgMP/s, More Is BetterWebP Image Encode 1.2.4Encode Settings: Default7900Ryzen 7 7700AMD 77007700Ryzen 7600AMD 7600Ryzen 7600 AMDRyzen 9 7900612182430SE +/- 0.03, N = 327.2426.9426.9426.9425.9725.9525.8324.001. (CC) gcc options: -fvisibility=hidden -O2 -lm
OpenBenchmarking.orgMP/s, More Is BetterWebP Image Encode 1.2.4Encode Settings: Default7900Ryzen 7 7700AMD 77007700Ryzen 7600AMD 7600Ryzen 7600 AMDRyzen 9 7900612182430Min: 25.78 / Avg: 25.83 / Max: 25.861. (CC) gcc options: -fvisibility=hidden -O2 -lm

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-StreamRyzen 7600AMD 7600Ryzen 7600 AMDAMD 77007700Ryzen 7 77007900Ryzen 9 7900714212835SE +/- 0.02, N = 326.3426.3526.4327.2427.3127.3829.6729.84
OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-StreamRyzen 7600AMD 7600Ryzen 7600 AMDAMD 77007700Ryzen 7 77007900Ryzen 9 7900714212835Min: 26.41 / Avg: 26.43 / Max: 26.48

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.
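
The encode runs use libjxl's cjxl tool; a sketch of the JPEG-input, quality-90 case (file names are placeholders):

    import subprocess

    # Encode a JPEG source to JPEG XL at quality 90.
    subprocess.run(["cjxl", "sample.jpg", "sample.jxl", "-q", "90"], check=True)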

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL libjxl 0.7Input: JPEG - Quality: 100Ryzen 760079007700AMD 7700Ryzen 7 7700Ryzen 9 7900Ryzen 7600 AMDAMD 76000.21150.4230.63450.8461.0575SE +/- 0.02, N = 60.940.920.910.880.870.860.850.831. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic
OpenBenchmarking.orgMP/s, More Is BetterJPEG XL libjxl 0.7Input: JPEG - Quality: 100Ryzen 760079007700AMD 7700Ryzen 7 7700Ryzen 9 7900Ryzen 7600 AMDAMD 7600246810Min: 0.78 / Avg: 0.85 / Max: 0.921. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

Algebraic Multi-Grid Benchmark

AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. The driver provided with AMG builds linear systems for various 3-dimensional problems. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFigure Of Merit, More Is BetterAlgebraic Multi-Grid Benchmark 1.2Ryzen 9 79007900Ryzen 7600 AMDRyzen 7600AMD 7600AMD 7700Ryzen 7 7700770090M180M270M360M450MSE +/- 42003.58, N = 34356400004336033003926537333925822003919451003915619003859116003857551001. (CC) gcc options: -lparcsr_ls -lparcsr_mv -lseq_mv -lIJ_mv -lkrylov -lHYPRE_utilities -lm -fopenmp -lmpi
OpenBenchmarking.orgFigure Of Merit, More Is BetterAlgebraic Multi-Grid Benchmark 1.2Ryzen 9 79007900Ryzen 7600 AMDRyzen 7600AMD 7600AMD 7700Ryzen 7 7700770080M160M240M320M400MMin: 392572800 / Avg: 392653733.33 / Max: 3927137001. (CC) gcc options: -lparcsr_ls -lparcsr_mv -lseq_mv -lIJ_mv -lkrylov -lHYPRE_utilities -lm -fopenmp -lmpi

Timed Apache Compilation

This test times how long it takes to build the Apache HTTPD web server. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Apache Compilation 2.4.41Time To CompileRyzen 9 79007900AMD 7700Ryzen 7 77007700Ryzen 7600Ryzen 7600 AMDAMD 760048121620SE +/- 0.01, N = 314.0414.1014.6214.6914.7115.6815.7715.84
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Apache Compilation 2.4.41Time To CompileRyzen 9 79007900AMD 7700Ryzen 7 77007700Ryzen 7600Ryzen 7600 AMDAMD 760048121620Min: 15.75 / Avg: 15.77 / Max: 15.8

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-StreamAMD 7600Ryzen 7600Ryzen 7600 AMDAMD 77007700Ryzen 7 7700Ryzen 9 79007900150300450600750SE +/- 0.30, N = 3630.55630.90630.99649.26651.14651.97704.98705.41
OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-StreamAMD 7600Ryzen 7600Ryzen 7600 AMDAMD 77007700Ryzen 7 7700Ryzen 9 79007900120240360480600Min: 630.61 / Avg: 630.99 / Max: 631.58

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.
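
A hedged sketch of exercising lc0's CPU backends directly through its built-in benchmark mode (the backend names match the result identifiers above, but the exact flag spelling should be verified against the installed lc0 build):

    import subprocess

    # Nodes-per-second benchmark on the CPU-only Eigen and BLAS backends.
    for backend in ("eigen", "blas"):
        subprocess.run(["lc0", "benchmark", f"--backend={backend}"], check=True)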

OpenBenchmarking.orgNodes Per Second, More Is BetterLeelaChessZero 0.28Backend: EigenRyzen 9 79007900Ryzen 7 77007700AMD 7700Ryzen 7600Ryzen 7600 AMDAMD 7600400800120016002000SE +/- 14.96, N = 9165016071548154215401499149514751. (CXX) g++ options: -flto -pthread
OpenBenchmarking.orgNodes Per Second, More Is BetterLeelaChessZero 0.28Backend: EigenRyzen 9 79007900Ryzen 7 77007700AMD 7700Ryzen 7600Ryzen 7600 AMDAMD 760030060090012001500Min: 1410 / Avg: 1495.33 / Max: 15681. (CXX) g++ options: -flto -pthread

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-StreamAMD 7600Ryzen 7600Ryzen 7600 AMDAMD 77007700Ryzen 7 7700Ryzen 9 79007900150300450600750SE +/- 0.22, N = 3629.80630.50630.93647.74650.21650.70703.55703.74
OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-StreamAMD 7600Ryzen 7600Ryzen 7600 AMDAMD 77007700Ryzen 7 7700Ryzen 9 79007900120240360480600Min: 630.56 / Avg: 630.93 / Max: 631.33

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL libjxl 0.7Input: JPEG - Quality: 907900Ryzen 9 7900AMD 77007700Ryzen 7 7700Ryzen 7600Ryzen 7600 AMDAMD 76003691215SE +/- 0.01, N = 312.0011.7811.6111.5711.4310.9410.9310.761. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic
OpenBenchmarking.orgMP/s, More Is BetterJPEG XL libjxl 0.7Input: JPEG - Quality: 907900Ryzen 9 7900AMD 77007700Ryzen 7 7700Ryzen 7600Ryzen 7600 AMDAMD 76003691215Min: 10.92 / Avg: 10.93 / Max: 10.941. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL libjxl 0.7Input: PNG - Quality: 907900Ryzen 9 7900AMD 77007700Ryzen 7 7700Ryzen 7600Ryzen 7600 AMDAMD 76003691215SE +/- 0.00, N = 312.3212.0911.9211.9011.7611.2211.2111.071. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic
OpenBenchmarking.orgMP/s, More Is BetterJPEG XL libjxl 0.7Input: PNG - Quality: 907900Ryzen 9 7900AMD 77007700Ryzen 7 7700Ryzen 7600Ryzen 7600 AMDAMD 760048121620Min: 11.21 / Avg: 11.21 / Max: 11.221. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL libjxl 0.7Input: JPEG - Quality: 807900Ryzen 9 7900AMD 77007700Ryzen 7 7700Ryzen 7600 AMDRyzen 7600AMD 76003691215SE +/- 0.02, N = 312.2011.9811.8211.8011.6111.1311.1210.971. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic
OpenBenchmarking.orgMP/s, More Is BetterJPEG XL libjxl 0.7Input: JPEG - Quality: 807900Ryzen 9 7900AMD 77007700Ryzen 7 7700Ryzen 7600 AMDRyzen 7600AMD 760048121620Min: 11.11 / Avg: 11.13 / Max: 11.161. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: vgg167900Ryzen 9 7900AMD 7700Ryzen 7 7700Ryzen 7600 AMDAMD 7600Ryzen 76007700612182430SE +/- 0.03, N = 324.2024.2424.3525.4325.8525.8825.9126.91MIN: 23.94 / MAX: 25.17MIN: 23.97 / MAX: 25.1MIN: 24.09 / MAX: 34.85MIN: 24.43 / MAX: 27.56MIN: 25.64 / MAX: 31.08MIN: 25.66 / MAX: 26.69MIN: 25.66 / MAX: 27.54MIN: 25.94 / MAX: 29.021. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: vgg167900Ryzen 9 7900AMD 7700Ryzen 7 7700Ryzen 7600 AMDAMD 7600Ryzen 76007700612182430Min: 25.81 / Avg: 25.85 / Max: 25.91. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program; on Windows it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterGIMP 2.10.30Test: auto-levelsAMD 7700Ryzen 7 77007700Ryzen 7600AMD 7600Ryzen 7600 AMD7900Ryzen 9 79003691215SE +/- 0.012, N = 39.2479.4889.5229.7799.8019.80910.15010.279
OpenBenchmarking.orgSeconds, Fewer Is BetterGIMP 2.10.30Test: auto-levelsAMD 7700Ryzen 7 77007700Ryzen 7600AMD 7600Ryzen 7600 AMD7900Ryzen 9 79003691215Min: 9.79 / Avg: 9.81 / Max: 9.82

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL libjxl 0.7Input: PNG - Quality: 807900Ryzen 9 7900AMD 77007700Ryzen 7 7700Ryzen 7600Ryzen 7600 AMDAMD 76003691215SE +/- 0.01, N = 312.5312.2712.1112.0911.9611.4411.4111.281. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic
OpenBenchmarking.orgMP/s, More Is BetterJPEG XL libjxl 0.7Input: PNG - Quality: 807900Ryzen 9 7900AMD 77007700Ryzen 7 7700Ryzen 7600Ryzen 7600 AMDAMD 760048121620Min: 11.4 / Avg: 11.41 / Max: 11.431. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
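
simdjson is also exposed to Python through the pysimdjson binding; a hedged sketch of parsing a document with it (the JSON payload is a placeholder, and Python-side throughput would of course not match the native numbers above):

    import simdjson  # pip install pysimdjson

    parser = simdjson.Parser()
    doc = parser.parse(b'{"statuses": [{"id": 1, "text": "hello"}]}')  # placeholder payload
    print(doc["statuses"][0]["text"])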

OpenBenchmarking.orgGB/s, More Is Bettersimdjson 2.0Throughput Test: TopTweetRyzen 9 79007900AMD 7700Ryzen 7 77007700Ryzen 7600 AMDAMD 7600Ryzen 76003691215SE +/- 0.02, N = 310.1010.069.959.929.869.519.509.111. (CXX) g++ options: -O3
OpenBenchmarking.orgGB/s, More Is Bettersimdjson 2.0Throughput Test: TopTweetRyzen 9 79007900AMD 7700Ryzen 7 77007700Ryzen 7600 AMDAMD 7600Ryzen 76003691215Min: 9.47 / Avg: 9.51 / Max: 9.551. (CXX) g++ options: -O3

Ngspice

Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.
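
The ISCAS 85 circuits are run through ngspice in batch mode; a sketch (the netlist file name is a placeholder matching the C7552 circuit above):

    import subprocess

    # Run the C7552 benchmark circuit in ngspice batch mode, logging to a file.
    subprocess.run(["ngspice", "-b", "-o", "c7552.log", "c7552.cir"], check=True)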

OpenBenchmarking.orgSeconds, Fewer Is BetterNgspice 34Circuit: C7552Ryzen 9 79007900AMD 77007700Ryzen 7 7700AMD 7600Ryzen 7600Ryzen 7600 AMD1632486480SE +/- 0.33, N = 365.9566.5967.7870.9171.9371.9772.8773.081. (CC) gcc options: -O0 -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lXft -lfontconfig -lXrender -lfreetype -lSM -lICE
OpenBenchmarking.orgSeconds, Fewer Is BetterNgspice 34Circuit: C7552Ryzen 9 79007900AMD 77007700Ryzen 7 7700AMD 7600Ryzen 7600Ryzen 7600 AMD1428425670Min: 72.45 / Avg: 73.08 / Max: 73.571. (CC) gcc options: -O0 -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lXft -lfontconfig -lXrender -lfreetype -lSM -lICE

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: GPT-2 - Device: CPU - Executor: ParallelAMD 7700Ryzen 7 7700Ryzen 7600Ryzen 7600 AMDAMD 76007700Ryzen 9 790079002K4K6K8K10KSE +/- 23.08, N = 3778377657745768376797489719870251. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: GPT-2 - Device: CPU - Executor: ParallelAMD 7700Ryzen 7 7700Ryzen 7600Ryzen 7600 AMDAMD 76007700Ryzen 9 7900790014002800420056007000Min: 7637 / Avg: 7682.83 / Max: 7710.51. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

Timed CPython Compilation

This test times how long it takes to build the reference Python implementation, CPython, with optimizations and LTO enabled for a release build. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed CPython Compilation 3.10.6Build Configuration: Released Build, PGO + LTO Optimized7900Ryzen 9 7900AMD 77007700Ryzen 7 7700Ryzen 7600 AMDAMD 7600Ryzen 76004080120160200183.09183.17190.87192.36192.75201.93201.95202.76

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 9.12-MR1Java Test: Jython7900Ryzen 9 7900Ryzen 7 77007700Ryzen 7600 AMDRyzen 7600AMD 7600AMD 77005001000150020002500SE +/- 11.02, N = 422942348236024592538253825402540
OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 9.12-MR1Java Test: Jython7900Ryzen 9 7900Ryzen 7 77007700Ryzen 7600 AMDRyzen 7600AMD 7600AMD 7700400800120016002000Min: 2512 / Avg: 2537.75 / Max: 2565

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 3, Long Mode - Compression SpeedAMD 7700Ryzen 7 77007700Ryzen 7600 AMDAMD 7600Ryzen 76007900Ryzen 9 7900400800120016002000SE +/- 3.73, N = 32012.92009.02007.31942.81937.11936.91825.41824.81. (CC) gcc options: -O3 -pthread -lz -llzma
OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 3, Long Mode - Compression SpeedAMD 7700Ryzen 7 77007700Ryzen 7600 AMDAMD 7600Ryzen 76007900Ryzen 9 7900400800120016002000Min: 1936.5 / Avg: 1942.8 / Max: 1949.41. (CC) gcc options: -O3 -pthread -lz -llzma

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: alexnetRyzen 7600 AMDAMD 7600AMD 7700Ryzen 76007900Ryzen 9 7900Ryzen 7 770077001.05082.10163.15244.20325.254SE +/- 0.00, N = 34.244.244.254.434.624.624.664.67MIN: 4.21 / MAX: 4.84MIN: 4.22 / MAX: 4.83MIN: 4.19 / MAX: 5.65MIN: 4.39 / MAX: 5MIN: 4.55 / MAX: 5.18MIN: 4.55 / MAX: 5.51MIN: 4.36 / MAX: 5.78MIN: 4.37 / MAX: 5.861. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: alexnetRyzen 7600 AMDAMD 7600AMD 7700Ryzen 76007900Ryzen 9 7900Ryzen 7 77007700246810Min: 4.24 / Avg: 4.24 / Max: 4.241. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.
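
Individual PyPerformance workloads can be run directly with the pyperformance CLI; a hedged sketch for the crypto_pyaes benchmark shown below (the flag spelling follows pyperformance's usual interface):

    import subprocess

    # Run only the crypto_pyaes micro-benchmark.
    subprocess.run(["pyperformance", "run", "-b", "crypto_pyaes"], check=True)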

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: crypto_pyaesAMD 7700Ryzen 9 79007900Ryzen 7 77007700AMD 7600Ryzen 7600 AMDRyzen 76001530456075SE +/- 0.18, N = 360.260.460.660.662.363.063.666.1
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: crypto_pyaesAMD 7700Ryzen 9 79007900Ryzen 7 77007700AMD 7600Ryzen 7600 AMDRyzen 76001326395265Min: 63.3 / Avg: 63.57 / Max: 63.9

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMiB/s, More Is BetterGNU RadioTest: FIR FilterRyzen 7 7700AMD 77007700AMD 7600Ryzen 7600 AMDRyzen 7600Ryzen 9 7900790030060090012001500SE +/- 3.50, N = 81508.71498.31485.91448.51436.81436.21402.11374.71. 3.10.1.1
OpenBenchmarking.orgMiB/s, More Is BetterGNU RadioTest: FIR FilterRyzen 7 7700AMD 77007700AMD 7600Ryzen 7600 AMDRyzen 7600Ryzen 9 7900790030060090012001500Min: 1424.8 / Avg: 1436.78 / Max: 1448.71. 3.10.1.1

OpenBenchmarking.orgMiB/s, More Is BetterGNU RadioTest: Hilbert TransformAMD 7700Ryzen 7 770077007900Ryzen 7600 AMDAMD 7600Ryzen 9 7900Ryzen 76002004006008001000SE +/- 5.10, N = 8775.5760.3759.8741.4738.4725.5716.6709.61. 3.10.1.1
OpenBenchmarking.orgMiB/s, More Is BetterGNU RadioTest: Hilbert TransformAMD 7700Ryzen 7 770077007900Ryzen 7600 AMDAMD 7600Ryzen 9 7900Ryzen 7600140280420560700Min: 706.7 / Avg: 738.41 / Max: 750.11. 3.10.1.1

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterTNN 0.3Target: CPU - Model: DenseNetAMD 77007700Ryzen 7 77007900Ryzen 9 7900AMD 7600Ryzen 7600 AMDRyzen 76005001000150020002500SE +/- 0.40, N = 32066.072067.232069.752105.732112.152228.872235.312237.08MIN: 2022.59 / MAX: 2107.46MIN: 2021.19 / MAX: 2109.88MIN: 2024.37 / MAX: 2112.67MIN: 2070.42 / MAX: 2146.74MIN: 2081.33 / MAX: 2147.82MIN: 2216.04 / MAX: 2250MIN: 2217.92 / MAX: 2257.35MIN: 2217.88 / MAX: 2258.41. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl
OpenBenchmarking.orgms, Fewer Is BetterTNN 0.3Target: CPU - Model: DenseNetAMD 77007700Ryzen 7 77007900Ryzen 9 7900AMD 7600Ryzen 7600 AMDRyzen 7600400800120016002000Min: 2234.54 / Avg: 2235.31 / Max: 2235.871. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 8, Long Mode - Decompression SpeedRyzen 9 79007900AMD 7700Ryzen 7 77007700Ryzen 7600 AMDRyzen 7600AMD 760014002800420056007000SE +/- 3.15, N = 36696.26404.16347.16336.66311.66208.36200.66188.21. (CC) gcc options: -O3 -pthread -lz -llzma
OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 8, Long Mode - Decompression SpeedRyzen 9 79007900AMD 7700Ryzen 7 77007700Ryzen 7600 AMDRyzen 7600AMD 760012002400360048006000Min: 6202.8 / Avg: 6208.3 / Max: 6213.71. (CC) gcc options: -O3 -pthread -lz -llzma

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 2021.01.31Threads: 2 - Buffer Length: 256 - Filter Length: 5777007900Ryzen 7 7700AMD 7700Ryzen 7600AMD 7600Ryzen 7600 AMDRyzen 9 790040M80M120M160M200MSE +/- 1746140.93, N = 72080100002075200002070400002057600001999600001975100001968728571923100001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 2021.01.31Threads: 2 - Buffer Length: 256 - Filter Length: 5777007900Ryzen 7 7700AMD 7700Ryzen 7600AMD 7600Ryzen 7600 AMDRyzen 9 790040M80M120M160M200MMin: 187130000 / Avg: 196872857.14 / Max: 2000900001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 3, Long Mode - Decompression SpeedRyzen 9 7900Ryzen 76007900AMD 77007700Ryzen 7600 AMDRyzen 7 7700AMD 760014002800420056007000SE +/- 67.24, N = 36423.96167.66167.26118.56098.46086.86083.35943.01. (CC) gcc options: -O3 -pthread -lz -llzma
OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 3, Long Mode - Decompression SpeedRyzen 9 7900Ryzen 76007900AMD 77007700Ryzen 7600 AMDRyzen 7 7700AMD 760011002200330044005500Min: 5952.5 / Avg: 6086.83 / Max: 6159.41. (CC) gcc options: -O3 -pthread -lz -llzma

Ngspice

Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterNgspice 34Circuit: C26707900Ryzen 9 7900AMD 77007700AMD 7600Ryzen 7600Ryzen 7600 AMDRyzen 7 770020406080100SE +/- 0.07, N = 374.2575.5377.6179.4079.5579.7979.9880.191. (CC) gcc options: -O0 -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lXft -lfontconfig -lXrender -lfreetype -lSM -lICE
OpenBenchmarking.orgSeconds, Fewer Is BetterNgspice 34Circuit: C26707900Ryzen 9 7900AMD 77007700AMD 7600Ryzen 7600Ryzen 7600 AMDRyzen 7 77001530456075Min: 79.88 / Avg: 79.98 / Max: 80.121. (CC) gcc options: -O0 -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lXft -lfontconfig -lXrender -lfreetype -lSM -lICE

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: nbody7900AMD 7700Ryzen 9 7900Ryzen 7 77007700Ryzen 7600AMD 7600Ryzen 7600 AMD20406080100SE +/- 1.20, N = 377.477.578.479.479.679.981.983.5
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: nbody7900AMD 7700Ryzen 9 7900Ryzen 7 77007700Ryzen 7600AMD 7600Ryzen 7600 AMD1632486480Min: 81.3 / Avg: 83.53 / Max: 85.4

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgScore, More Is BetterPHPBench 0.8.1PHP Benchmark Suite77007900Ryzen 7 7700Ryzen 9 7900AMD 7700Ryzen 7600AMD 7600Ryzen 7600 AMD300K600K900K1200K1500KSE +/- 5632.48, N = 312331981227889121801911869951184135115424011495351143225
OpenBenchmarking.orgScore, More Is BetterPHPBench 0.8.1PHP Benchmark Suite77007900Ryzen 7 7700Ryzen 9 7900AMD 7700Ryzen 7600AMD 7600Ryzen 7600 AMD200K400K600K800K1000KMin: 1133685 / Avg: 1143224.67 / Max: 1153183

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterWebP Image Encode 1.2.4Encode Settings: Quality 100, LosslessRyzen 9 790077007900AMD 7700Ryzen 7 7700Ryzen 7600 AMDAMD 7600Ryzen 76000.49730.99461.49191.98922.4865SE +/- 0.00, N = 32.212.212.192.182.172.092.082.051. (CC) gcc options: -fvisibility=hidden -O2 -lm
OpenBenchmarking.orgMP/s, More Is BetterWebP Image Encode 1.2.4Encode Settings: Quality 100, LosslessRyzen 9 790077007900AMD 7700Ryzen 7 7700Ryzen 7600 AMDAMD 7600Ryzen 7600246810Min: 2.09 / Avg: 2.09 / Max: 2.11. (CC) gcc options: -fvisibility=hidden -O2 -lm

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 19 - Decompression Speed7900Ryzen 9 79007700AMD 7700Ryzen 7 7700Ryzen 7600 AMDAMD 7600Ryzen 760012002400360048006000SE +/- 7.45, N = 35452.45436.15364.15220.65185.25074.25072.45058.01. (CC) gcc options: -O3 -pthread -lz -llzma
OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 19 - Decompression Speed7900Ryzen 9 79007700AMD 7700Ryzen 7 7700Ryzen 7600 AMDAMD 7600Ryzen 76009001800270036004500Min: 5063.5 / Avg: 5074.17 / Max: 5088.51. (CC) gcc options: -O3 -pthread -lz -llzma

Zstd Compression 1.5.0 - Compression Level: 8 - Decompression Speed (MB/s, More Is Better)
  7900: 6277.1 | AMD 7700: 6198.9 | Ryzen 7 7700: 6188.3 | Ryzen 9 7900: 6049.2 | Ryzen 7600: 6021.3 | 7700: 5956.5 | Ryzen 7600 AMD: 5834.2 | AMD 7600: 5826.3
  SE +/- 7.12, N = 3; Min: 5823.7 / Avg: 5834.23 / Max: 5847.8
  1. (CC) gcc options: -O3 -pthread -lz -llzma

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.28 - Backend: BLAS (Nodes Per Second, More Is Better)
  Ryzen 9 7900: 1623 | 7900: 1601 | 7700: 1588 | AMD 7600: 1554 | Ryzen 7 7700: 1535 | Ryzen 7600: 1520 | AMD 7700: 1519 | Ryzen 7600 AMD: 1514
  SE +/- 21.50, N = 3; Min: 1471 / Avg: 1514 / Max: 1536
  1. (CXX) g++ options: -flto -pthread

JPEG XL Decoding libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is suited for JPEG XL decode performance testing to a PNG output file; the pts/jpexl test is for encode performance. The JPEG XL encoding/decoding is done using the libjxl codebase. Learn more via the OpenBenchmarking.org test page.
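
A minimal sketch of a single-threaded decode to PNG with the libjxl djxl tool, comparable to the "CPU Threads: 1" result below; the filenames are hypothetical and the --num_threads option is assumed to be available in the installed djxl build.

    # Timed JPEG XL decode to PNG, pinned to one worker thread.
    # Assumes the djxl tool is installed; filenames are hypothetical.
    import subprocess, time

    start = time.perf_counter()
    subprocess.run(["djxl", "--num_threads=1", "sample.jxl", "decoded.png"], check=True)
    print(f"decode took {time.perf_counter() - start:.2f} s")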

JPEG XL Decoding libjxl 0.7 - CPU Threads: 1 (MP/s, More Is Better)
  7900: 72.61 | AMD 7700: 71.90 | Ryzen 9 7900: 71.36 | 7700: 71.36 | Ryzen 7 7700: 70.44 | Ryzen 7600 AMD: 69.94 | Ryzen 7600: 69.36 | AMD 7600: 67.93
  SE +/- 0.09, N = 3; Min: 69.79 / Avg: 69.94 / Max: 70.11

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 19, Long Mode - Decompression Speed (MB/s, More Is Better)
  7700: 5379.0 | AMD 7600: 5239.2 | 7900: 5206.5 | Ryzen 7 7700: 5186.4 | Ryzen 9 7900: 5176.6 | Ryzen 7600 AMD: 5133.0 | AMD 7700: 5130.4 | Ryzen 7600: 5039.8
  SE +/- 54.15, N = 3; Min: 5069.6 / Avg: 5132.97 / Max: 5240.7
  1. (CC) gcc options: -O3 -pthread -lz -llzma

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
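
simdjson itself is a C++ library, so its results below come from compiled code. Purely as an illustration of how a GB/s parse-throughput figure is derived (bytes of JSON parsed divided by wall-clock seconds), here is the same calculation done with Python's built-in json module on a hypothetical input file; it is not simdjson and will be far slower.

    # Illustrative parse-throughput measurement: bytes parsed / elapsed seconds.
    # Uses the standard-library json module, not simdjson; "tweets.json" is hypothetical.
    import json, time

    data = open("tweets.json", "rb").read()
    start = time.perf_counter()
    json.loads(data)
    elapsed = time.perf_counter() - start
    print(f"{len(data) / elapsed / 1e9:.2f} GB/s")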

simdjson 2.0 - Throughput Test: DistinctUserID (GB/s, More Is Better)
  Ryzen 9 7900: 9.96 | 7700: 9.85 | AMD 7700: 9.83 | Ryzen 7 7700: 9.77 | 7900: 9.72 | Ryzen 7600: 9.46 | AMD 7600: 9.40 | Ryzen 7600 AMD: 9.34
  SE +/- 0.11, N = 3; Min: 9.11 / Avg: 9.34 / Max: 9.48
  1. (CXX) g++ options: -O3

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31 - Threads: 1 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better)
  Ryzen 9 7900: 106460000 | 7900: 105440000 | 7700: 104350000 | AMD 7700: 104300000 | Ryzen 7 7700: 104260000 | Ryzen 7600 AMD: 100353333 | Ryzen 7600: 100010000 | AMD 7600: 99900000
  SE +/- 98206.13, N = 3; Min: 100160000 / Avg: 100353333.33 / Max: 100480000
  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

QuantLib

QuantLib is an open-source library/framework around quantitative finance for modeling, trading and risk management scenarios. QuantLib is written in C++ with Boost, and its built-in benchmark reports the QuantLib Benchmark Index score. Learn more via the OpenBenchmarking.org test page.

QuantLib 1.21 (MFLOPS, More Is Better)
  Ryzen 7600 AMD: 4180.6 | 7900: 4141.9 | 7700: 4131.8 | Ryzen 7 7700: 4098.8 | AMD 7700: 4090.7 | Ryzen 9 7900: 3971.2 | Ryzen 7600: 3966.6 | AMD 7600: 3927.2
  SE +/- 40.10, N = 6; Min: 3982.5 / Avg: 4180.62 / Max: 4234.6
  1. (CXX) g++ options: -O3 -march=native -rdynamic

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: go (Milliseconds, Fewer Is Better)
  7900: 125 | Ryzen 9 7900: 125 | Ryzen 7 7700: 126 | AMD 7700: 127 | 7700: 128 | Ryzen 7600 AMD: 132 | AMD 7600: 132 | Ryzen 7600: 133
  SE +/- 0.00, N = 3; Min: 132 / Avg: 132 / Max: 132

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

GNU Radio - Test: IIR Filter (MiB/s, More Is Better)
  Ryzen 7 7700: 575.8 | AMD 7700: 570.8 | Ryzen 9 7900: 561.2 | 7700: 561.1 | 7900: 560.7 | Ryzen 7600: 544.3 | AMD 7600: 541.8 | Ryzen 7600 AMD: 541.5
  SE +/- 3.03, N = 8; Min: 531.5 / Avg: 541.51 / Max: 553.6
  1. 3.10.1.1

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: float (Milliseconds, Fewer Is Better)
  Ryzen 9 7900: 55.9 | 7900: 56.1 | AMD 7700: 56.1 | Ryzen 7 7700: 56.4 | 7700: 57.2 | AMD 7600: 58.7 | Ryzen 7600: 58.8 | Ryzen 7600 AMD: 59.4
  SE +/- 0.12, N = 3; Min: 59.2 / Avg: 59.37 / Max: 59.6

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.2.4 - Encode Settings: Quality 100, Lossless, Highest Compression (MP/s, More Is Better)
  Ryzen 9 7900: 0.86 | AMD 7700: 0.85 | 7900: 0.85 | Ryzen 7 7700: 0.84 | 7700: 0.84 | Ryzen 7600: 0.82 | Ryzen 7600 AMD: 0.82 | AMD 7600: 0.81
  SE +/- 0.00, N = 3; Min: 0.81 / Avg: 0.82 / Max: 0.82
  1. (CC) gcc options: -fvisibility=hidden -O2 -lm

Crafty

This is a performance test of Crafty, an advanced open-source chess engine. Learn more via the OpenBenchmarking.org test page.

Crafty 25.2 - Elapsed Time (Nodes Per Second, More Is Better)
  7900: 14804887 | Ryzen 9 7900: 14646827 | 7700: 14574274 | AMD 7700: 14572555 | Ryzen 7 7700: 14513008 | Ryzen 7600: 14170376 | AMD 7600: 14053268 | Ryzen 7600 AMD: 13949188
  SE +/- 122230.58, N = 8; Min: 13116005 / Avg: 13949188.38 / Max: 14154053
  1. (CC) gcc options: -pthread -lstdc++ -fprofile-use -lm

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.2.4 - Encode Settings: Quality 100, Highest Compression (MP/s, More Is Better)
  Ryzen 9 7900: 5.30 | 7900: 5.30 | Ryzen 7 7700: 5.20 | 7700: 5.20 | AMD 7700: 5.19 | Ryzen 7600: 5.01 | AMD 7600: 5.01 | Ryzen 7600 AMD: 5.00
  SE +/- 0.00, N = 3; Min: 5 / Avg: 5 / Max: 5.01
  1. (CC) gcc options: -fvisibility=hidden -O2 -lm

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program; on Windows it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

GIMP 2.10.30 - Test: rotate (Seconds, Fewer Is Better)
  AMD 7700: 8.971 | 7900: 9.182 | Ryzen 7 7700: 9.182 | 7700: 9.297 | Ryzen 9 7900: 9.406 | Ryzen 7600: 9.446 | AMD 7600: 9.505 | Ryzen 7600 AMD: 9.509
  SE +/- 0.015, N = 3; Min: 9.49 / Avg: 9.51 / Max: 9.54

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: 2to3 (Milliseconds, Fewer Is Better)
  7900: 167 | Ryzen 7 7700: 167 | Ryzen 9 7900: 167 | 7700: 168 | Ryzen 7600 AMD: 173 | Ryzen 7600: 173 | AMD 7600: 174 | AMD 7700: 177
  SE +/- 0.00, N = 3; Min: 173 / Avg: 173 / Max: 173

PyPerformance 1.0.0 - Benchmark: pathlib (Milliseconds, Fewer Is Better)
  7900: 10.1 | Ryzen 9 7900: 10.1 | 7700: 10.3 | AMD 7700: 10.3 | Ryzen 7 7700: 10.4 | Ryzen 7600 AMD: 10.6 | Ryzen 7600: 10.6 | AMD 7600: 10.7
  SE +/- 0.00, N = 3; Min: 10.6 / Avg: 10.6 / Max: 10.6

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: SqueezeNet v1.1 (ms, Fewer Is Better)
  7900: 180.38 (MIN: 180.33 / MAX: 180.52)
  Ryzen 9 7900: 182.77 (MIN: 182.69 / MAX: 182.89)
  AMD 7700: 183.89 (MIN: 183.77 / MAX: 184.05)
  Ryzen 7 7700: 184.08 (MIN: 183.99 / MAX: 184.18)
  7700: 184.21 (MIN: 184.13 / MAX: 184.34)
  Ryzen 7600: 191.08 (MIN: 191 / MAX: 191.26)
  AMD 7600: 191.09 (MIN: 191.02 / MAX: 191.19)
  Ryzen 7600 AMD: 191.09 (MIN: 190.97 / MAX: 191.36)
  SE +/- 0.02, N = 3; Min: 191.05 / Avg: 191.09 / Max: 191.13
  1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

GNU Radio - Test: FM Deemphasis Filter (MiB/s, More Is Better)
  7700: 958.4 | Ryzen 7 7700: 943.1 | AMD 7700: 941.1 | AMD 7600: 919.1 | Ryzen 7600 AMD: 913.7 | Ryzen 7600: 911.7 | Ryzen 9 7900: 907.5 | 7900: 905.0
  SE +/- 2.20, N = 8; Min: 901.4 / Avg: 913.66 / Max: 920.2
  1. 3.10.1.1

PyBench

This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate as to Python's average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.

PyBench 2018-02-16 - Total For Average Test Times (Milliseconds, Fewer Is Better)
  7900: 592 | AMD 7700: 598 | 7700: 601 | Ryzen 9 7900: 602 | Ryzen 7 7700: 606 | AMD 7600: 613 | Ryzen 7600: 615 | Ryzen 7600 AMD: 626
  SE +/- 2.40, N = 3; Min: 621 / Avg: 625.67 / Max: 629

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31 - Threads: 4 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better)
  7900: 411000000 | AMD 7700: 407210000 | Ryzen 9 7900: 402680000 | 7700: 401540000 | Ryzen 7 7700: 401240000 | Ryzen 7600 AMD: 400603333 | AMD 7600: 398100000 | Ryzen 7600: 388980000
  SE +/- 265476.51, N = 3; Min: 400110000 / Avg: 400603333.33 / Max: 401020000
  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.2.4 - Encode Settings: Quality 100 (MP/s, More Is Better)
  Ryzen 9 7900: 17.03 | 7900: 16.81 | AMD 7700: 16.79 | Ryzen 7 7700: 16.78 | 7700: 16.74 | Ryzen 7600 AMD: 16.13 | Ryzen 7600: 16.12 | AMD 7600: 16.12
  SE +/- 0.01, N = 3; Min: 16.12 / Avg: 16.13 / Max: 16.14
  1. (CC) gcc options: -fvisibility=hidden -O2 -lm

Git

This test measures the time needed to carry out some sample Git operations on an example, static repository that happens to be a copy of the GNOME GTK tool-kit repository. Learn more via the OpenBenchmarking.org test page.
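
The exact command mix is fixed by the test profile, so the operations below are only representative; this is a minimal sketch of timing a few everyday Git commands against a local clone at a hypothetical path.

    # Time a handful of common Git operations on an existing local repository.
    # The repository path and the command list are hypothetical/representative.
    import subprocess, time

    commands = [
        ["git", "status"],
        ["git", "log", "--oneline", "-n", "1000"],
        ["git", "gc"],
    ]
    start = time.perf_counter()
    for cmd in commands:
        subprocess.run(cmd, cwd="./gtk", check=True, capture_output=True)
    print(f"total: {time.perf_counter() - start:.2f} s")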

Git - Time To Complete Common Git Commands (Seconds, Fewer Is Better)
  Ryzen 9 7900: 31.43 | 7900: 31.72 | 7700: 32.05 | Ryzen 7 7700: 32.08 | AMD 7700: 32.37 | Ryzen 7600 AMD: 33.07 | Ryzen 7600: 33.13 | AMD 7600: 33.19
  SE +/- 0.04, N = 3; Min: 32.99 / Avg: 33.07 / Max: 33.14
  1. git version 2.34.1

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: regex_compile (Milliseconds, Fewer Is Better)
  Ryzen 9 7900: 78.6 | 7900: 79.2 | Ryzen 7 7700: 79.4 | 7700: 79.5 | AMD 7700: 79.5 | AMD 7600: 82.3 | Ryzen 7600 AMD: 82.7 | Ryzen 7600: 83.0
  SE +/- 0.00, N = 3; Min: 82.7 / Avg: 82.7 / Max: 82.7

PyPerformance 1.0.0 - Benchmark: django_template (Milliseconds, Fewer Is Better)
  Ryzen 9 7900: 24.5 | AMD 7700: 24.6 | Ryzen 7 7700: 24.6 | 7900: 24.7 | 7700: 24.8 | Ryzen 7600: 25.7 | Ryzen 7600 AMD: 25.8 | AMD 7600: 25.8
  SE +/- 0.00, N = 3; Min: 25.8 / Avg: 25.8 / Max: 25.8

PyPerformance 1.0.0 - Benchmark: raytrace (Milliseconds, Fewer Is Better)
  7900: 246 | Ryzen 7 7700: 246 | AMD 7700: 247 | Ryzen 9 7900: 247 | 7700: 248 | Ryzen 7600: 256 | AMD 7600: 257 | Ryzen 7600 AMD: 259
  SE +/- 0.33, N = 3; Min: 259 / Avg: 259.33 / Max: 260

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

JPEG XL libjxl 0.7 - Input: PNG - Quality: 100 (MP/s, More Is Better)
  AMD 7700: 1.00 | 7900: 1.00 | 7700: 0.98 | Ryzen 9 7900: 0.97 | Ryzen 7 7700: 0.97 | Ryzen 7600: 0.96 | AMD 7600: 0.96 | Ryzen 7600 AMD: 0.95
  SE +/- 0.01, N = 3; Min: 0.93 / Avg: 0.95 / Max: 0.97
  1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 2.0 - Throughput Test: LargeRandom (GB/s, More Is Better)
  Ryzen 9 7900: 1.81 | Ryzen 7 7700: 1.79 | AMD 7700: 1.79 | 7900: 1.79 | 7700: 1.79 | Ryzen 7600: 1.73 | AMD 7600: 1.72 | Ryzen 7600 AMD: 1.72
  SE +/- 0.00, N = 3; Min: 1.72 / Avg: 1.72 / Max: 1.72
  1. (CXX) g++ options: -O3

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: json_loads (Milliseconds, Fewer Is Better)
  7900: 11.8 | Ryzen 7 7700: 11.8 | Ryzen 9 7900: 11.8 | 7700: 11.9 | AMD 7700: 12.1 | Ryzen 7600 AMD: 12.3 | AMD 7600: 12.3 | Ryzen 7600: 12.4
  SE +/- 0.03, N = 3; Min: 12.3 / Avg: 12.33 / Max: 12.4

PyPerformance 1.0.0 - Benchmark: pickle_pure_python (Milliseconds, Fewer Is Better)
  AMD 7700: 221 | 7900: 223 | Ryzen 9 7900: 223 | 7700: 225 | Ryzen 7 7700: 225 | Ryzen 7600: 231 | Ryzen 7600 AMD: 232 | AMD 7600: 232
  SE +/- 0.33, N = 3; Min: 231 / Avg: 231.67 / Max: 232

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 2.0 - Throughput Test: PartialTweets (GB/s, More Is Better)
  Ryzen 9 7900: 8.27 | 7900: 8.24 | Ryzen 7 7700: 8.21 | AMD 7700: 8.17 | 7700: 8.16 | Ryzen 7600: 7.91 | AMD 7600: 7.90 | Ryzen 7600 AMD: 7.89
  SE +/- 0.00, N = 3; Min: 7.88 / Avg: 7.89 / Max: 7.89
  1. (CXX) g++ options: -O3

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.3 - Target: CPU - Model: MobileNet v2 (ms, Fewer Is Better)
  AMD 7700: 190.37 (MIN: 189.7 / MAX: 193.98)
  7900: 190.74 (MIN: 189.99 / MAX: 192.23)
  Ryzen 9 7900: 190.75 (MIN: 190.01 / MAX: 191.51)
  7700: 190.92 (MIN: 190.03 / MAX: 193.11)
  Ryzen 7 7700: 193.77 (MIN: 192.93 / MAX: 195.31)
  AMD 7600: 198.87 (MIN: 198.22 / MAX: 199.53)
  Ryzen 7600 AMD: 199.06 (MIN: 198.12 / MAX: 200.3)
  Ryzen 7600: 199.19 (MIN: 198.58 / MAX: 200.1)
  SE +/- 0.23, N = 3; Min: 198.72 / Avg: 199.06 / Max: 199.49
  1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

FLAC Audio Encoding

This test times how long it takes to encode a sample WAV file to FLAC audio format ten times using the --best preset settings. Learn more via the OpenBenchmarking.org test page.
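
A minimal sketch of that measurement: ten flac encodes with the --best preset, total wall-clock time reported. It assumes the flac command-line tool is installed; the WAV filename is hypothetical (the test profile supplies its own sample file).

    # Ten WAV-to-FLAC encodes at the --best preset, timed end to end.
    import subprocess, time

    start = time.perf_counter()
    for _ in range(10):
        subprocess.run(["flac", "--best", "-f", "-o", "out.flac", "sample.wav"],
                       check=True, capture_output=True)
    print(f"10 encodes took {time.perf_counter() - start:.2f} s")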

FLAC Audio Encoding 1.4 - WAV To FLAC (Seconds, Fewer Is Better)
  Ryzen 9 7900: 11.57 | 7900: 11.65 | Ryzen 7 7700: 11.67 | AMD 7700: 11.67 | 7700: 11.69 | AMD 7600: 12.10 | Ryzen 7600 AMD: 12.10 | Ryzen 7600: 12.10
  SE +/- 0.00, N = 5; Min: 12.09 / Avg: 12.1 / Max: 12.11
  1. (CXX) g++ options: -O3 -fvisibility=hidden -logg -lm

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

PyPerformance 1.0.0 - Benchmark: chaos (Milliseconds, Fewer Is Better)
  AMD 7700: 53.0 | 7900: 53.1 | 7700: 53.2 | Ryzen 7 7700: 53.2 | Ryzen 9 7900: 53.3 | Ryzen 7600: 55.1 | Ryzen 7600 AMD: 55.2 | AMD 7600: 55.4
  SE +/- 0.09, N = 3; Min: 55 / Avg: 55.17 / Max: 55.3

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 3 - Decompression Speed (MB/s, More Is Better)
  7900: 5794.7 | Ryzen 7600: 5745.4 | Ryzen 9 7900: 5722.7 | AMD 7700: 5716.2 | 7700: 5715.1 | Ryzen 7 7700: 5702.7 | Ryzen 7600 AMD: 5596.1 | AMD 7600: 5547.8
  SE +/- 86.19, N = 3; Min: 5449.9 / Avg: 5596.07 / Max: 5748.3
  1. (CC) gcc options: -O3 -pthread -lz -llzma

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 2.0 - Throughput Test: Kostya (GB/s, More Is Better)
  AMD 7700: 5.78 | 7900: 5.78 | 7700: 5.78 | Ryzen 7 7700: 5.77 | Ryzen 9 7900: 5.76 | Ryzen 7600: 5.56 | Ryzen 7600 AMD: 5.56 | AMD 7600: 5.55
  SE +/- 0.01, N = 3; Min: 5.55 / Avg: 5.56 / Max: 5.57
  1. (CXX) g++ options: -O3

LAME MP3 Encoding

LAME is an MP3 encoder licensed under the LGPL. This test measures the time required to encode a WAV file to MP3 format. Learn more via the OpenBenchmarking.org test page.
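
A minimal sketch of the same kind of measurement with the lame command-line tool, assuming it is installed; the filenames are hypothetical and the test profile's own sample WAV and encoder settings may differ.

    # Timed WAV-to-MP3 encode with lame.
    import subprocess, time

    start = time.perf_counter()
    subprocess.run(["lame", "sample.wav", "out.mp3"], check=True, capture_output=True)
    print(f"encode took {time.perf_counter() - start:.2f} s")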

LAME MP3 Encoding 3.100 - WAV To MP3 (Seconds, Fewer Is Better)
  AMD 7700: 4.792 | 7900: 4.795 | Ryzen 7 7700: 4.803 | 7700: 4.806 | Ryzen 9 7900: 4.809 | Ryzen 7600 AMD: 4.978 | AMD 7600: 4.978 | Ryzen 7600: 4.986
  SE +/- 0.003, N = 3; Min: 4.97 / Avg: 4.98 / Max: 4.98
  1. (CC) gcc options: -O3 -ffast-math -funroll-loops -fschedule-insns2 -fbranch-count-reg -fforce-addr -pipe -lncurses -lm

DaCapo Benchmark

This test runs the DaCapo Benchmarks, which are written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

Java Test: Eclipse

The test quit with a non-zero exit status on all eight configurations (7700, 7900, Ryzen 7600 AMD, AMD 7600, AMD 7700, Ryzen 7 7700, Ryzen 7600, Ryzen 9 7900).

325 Results Shown

oneDNN
NCNN
Mobile Neural Network
oneDNN
NCNN:
  CPU - FastestDet
  CPU - shufflenet-v2
ONNX Runtime:
  bertsquad-12 - CPU - Standard
  ArcFace ResNet-100 - CPU - Standard
NCNN
ONNX Runtime
NCNN
Mobile Neural Network
NCNN
Mobile Neural Network:
  squeezenetv1.1
  nasnet
  MobileNetV2_224
C-Ray
NAS Parallel Benchmarks
Stockfish
Mobile Neural Network
OpenSSL
Zstd Compression
OpenSSL
Cpuminer-Opt
NAS Parallel Benchmarks
oneDNN
Cpuminer-Opt
oneDNN
OpenSSL
NCNN
JPEG XL Decoding libjxl
Cpuminer-Opt
OpenVINO
Coremark
Cpuminer-Opt:
  x25x
  scrypt
  Ringcoin
7-Zip Compression
Cpuminer-Opt
Neural Magic DeepSparse
Cpuminer-Opt
IndigoBench
Neural Magic DeepSparse:
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream
Cpuminer-Opt
Neural Magic DeepSparse
Cpuminer-Opt
Stargate Digital Audio Workstation
Xmrig
Blender
IndigoBench
Tachyon
Blender:
  BMW27 - CPU-Only
  Classroom - CPU-Only
Xmrig
Stargate Digital Audio Workstation
asmFish
oneDNN
ASTC Encoder
OpenVINO
Chaos Group V-RAY
ASTC Encoder:
  Fast
  Medium
Stargate Digital Audio Workstation
ASTC Encoder
ONNX Runtime
Neural Magic DeepSparse
OpenVINO
Stargate Digital Audio Workstation
Timed Linux Kernel Compilation
Appleseed
Aircrack-ng
Stargate Digital Audio Workstation
Liquid-DSP
Blender
OpenVINO:
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU
  Weld Porosity Detection FP16-INT8 - CPU
Blender
NAMD
Liquid-DSP
oneDNN:
  Recurrent Neural Network Training - u8s8f32 - CPU
  IP Shapes 3D - bf16bf16bf16 - CPU
Timed LLVM Compilation
Stargate Digital Audio Workstation
oneDNN:
  Recurrent Neural Network Training - bf16bf16bf16 - CPU
  Recurrent Neural Network Training - f32 - CPU
OpenVINO:
  Face Detection FP16 - CPU
  Vehicle Detection FP16-INT8 - CPU
Neural Magic DeepSparse
SVT-HEVC
Rodinia
x265
OpenVINO
Stargate Digital Audio Workstation
oneDNN
Appleseed
oneDNN:
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU
  Recurrent Neural Network Inference - u8s8f32 - CPU
LAMMPS Molecular Dynamics Simulator
SVT-HEVC
Neural Magic DeepSparse
Stargate Digital Audio Workstation
NAS Parallel Benchmarks
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream:
    ms/batch
    items/sec
OpenVINO
Neural Magic DeepSparse:
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream:
    ms/batch
    items/sec
NCNN
OpenVINO
oneDNN:
  Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU
  Deconvolution Batch shapes_1d - u8s8f32 - CPU
Timed MPlayer Compilation
Primesieve
Timed LLVM Compilation
Primesieve
Timed Linux Kernel Compilation
Build2
Rodinia
NAS Parallel Benchmarks
7-Zip Compression
GROMACS
OpenVINO
SVT-VP9
PyPerformance
Timed Godot Game Engine Compilation
Timed FFmpeg Compilation
SVT-HEVC
oneDNN
GNU Radio
SVT-VP9
ONNX Runtime
Mobile Neural Network
SVT-HEVC
Timed Mesa Compilation
Mobile Neural Network
OpenVINO
libavif avifenc
Cpuminer-Opt
x264
oneDNN:
  Deconvolution Batch shapes_3d - f32 - CPU
  Deconvolution Batch shapes_1d - f32 - CPU
OpenFOAM
SVT-VP9:
  VMAF Optimized - Bosphorus 1080p
  PSNR/SSIM Optimized - Bosphorus 1080p
Neural Magic DeepSparse:
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream:
    ms/batch
    items/sec
x264
oneDNN
Appleseed
Kvazaar
SVT-HEVC
oneDNN
Neural Magic DeepSparse:
  CV Detection,YOLOv5s COCO - Synchronous Single-Stream:
    ms/batch
    items/sec
NCNN
ONNX Runtime
SVT-AV1
Darktable
SVT-HEVC
libavif avifenc
Zstd Compression
SVT-AV1
Kvazaar
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream:
    ms/batch
    items/sec
SVT-VP9
Rodinia
SVT-AV1
libavif avifenc
SVT-VP9
Neural Magic DeepSparse:
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream:
    ms/batch
    items/sec
Rodinia
ONNX Runtime
oneDNN
Zstd Compression
SVT-AV1
Kvazaar
libavif avifenc
Y-Cruncher
Timed PHP Compilation
GPAW
VP9 libvpx Encoding
Kvazaar
Darktable
oneDNN
NAS Parallel Benchmarks
ONNX Runtime
SVT-AV1
Timed Wasmer Compilation
Darktable
Xcompact3d Incompact3d
Y-Cruncher
oneDNN
GIMP
NAS Parallel Benchmarks:
  FT.C
  CG.C
ONNX Runtime
Kvazaar
Timed GDB GNU Debugger Compilation
oneDNN
OpenVINO
VP9 libvpx Encoding:
  Speed 0 - Bosphorus 1080p
  Speed 0 - Bosphorus 4K
Cpuminer-Opt
OpenFOAM
OpenVINO
nekRS
NCNN
Liquid-DSP
OpenVINO
SVT-AV1:
  Preset 4 - Bosphorus 1080p
  Preset 12 - Bosphorus 1080p
Zstd Compression
ONNX Runtime
OpenVINO:
  Vehicle Detection FP16-INT8 - CPU
  Weld Porosity Detection FP16 - CPU
Kvazaar
Neural Magic DeepSparse
OpenVINO
Neural Magic DeepSparse
Darktable
NCNN
OpenVINO
NCNN
Rodinia
SVT-AV1
oneDNN
OpenVINO
Timed CPython Compilation
x265
libavif avifenc
Zstd Compression
NCNN
DaCapo Benchmark
NCNN:
  CPU - resnet50
  CPU - vision_transformer
GIMP
GNU Radio
OpenVINO
NAS Parallel Benchmarks
OpenVINO
DaCapo Benchmark
Neural Magic DeepSparse
ONNX Runtime
NAS Parallel Benchmarks
Neural Magic DeepSparse
VP9 libvpx Encoding
OpenVINO:
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU
  Weld Porosity Detection FP16-INT8 - CPU
Neural Magic DeepSparse
NAS Parallel Benchmarks
TNN
DaCapo Benchmark
Neural Magic DeepSparse
WebP Image Encode
Neural Magic DeepSparse
JPEG XL libjxl
Algebraic Multi-Grid Benchmark
Timed Apache Compilation
Neural Magic DeepSparse
LeelaChessZero
Neural Magic DeepSparse
JPEG XL libjxl:
  JPEG - 90
  PNG - 90
  JPEG - 80
NCNN
GIMP
JPEG XL libjxl
simdjson
Ngspice
ONNX Runtime
Timed CPython Compilation
DaCapo Benchmark
Zstd Compression
NCNN
PyPerformance
GNU Radio:
  FIR Filter
  Hilbert Transform
TNN
Zstd Compression
Liquid-DSP
Zstd Compression
Ngspice
PyPerformance
PHPBench
WebP Image Encode
Zstd Compression:
  19 - Decompression Speed
  8 - Decompression Speed
LeelaChessZero
JPEG XL Decoding libjxl
Zstd Compression
simdjson
Liquid-DSP
QuantLib
PyPerformance
GNU Radio
PyPerformance
WebP Image Encode
Crafty
WebP Image Encode
GIMP
PyPerformance:
  2to3
  pathlib
TNN
GNU Radio
PyBench
Liquid-DSP
WebP Image Encode
Git
PyPerformance:
  regex_compile
  django_template
  raytrace
JPEG XL libjxl
simdjson
PyPerformance:
  json_loads
  pickle_pure_python
simdjson
TNN
FLAC Audio Encoding
PyPerformance
Zstd Compression
simdjson
LAME MP3 Encoding