AMD Ryzen zen4 Linux

Benchmarks for a future article.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2301094-PTS-EXTRANEW01
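
Individual test profiles can also be benchmarked on their own rather than re-running the entire result file; for example (illustrative, using the pts/openssl profile):

  phoronix-test-suite benchmark pts/openssl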

Run Management

Result Identifier    Date Run             Test Duration
7700                 December 31 2022     6 Hours, 29 Minutes
7900                 December 30 2022     5 Hours, 59 Minutes
Ryzen 7600 AMD       January 03 2023      1 Day, 4 Hours, 44 Minutes
AMD 7600             January 04 2023      7 Hours, 11 Minutes
AMD 7700             January 02 2023      6 Hours, 26 Minutes
Ryzen 7 7700         January 01 2023      6 Hours, 29 Minutes
Ryzen 7600           January 05 2023      7 Hours, 13 Minutes
Ryzen 9 7900         December 29 2022     6 Hours

AMD Ryzen zen4 Linux - System Details

Processor per result identifier:
  7700, AMD 7700, Ryzen 7 7700: AMD Ryzen 7 7700 8-Core @ 5.39GHz (8 Cores / 16 Threads)
  7900, Ryzen 9 7900: AMD Ryzen 9 7900 12-Core @ 5.48GHz (12 Cores / 24 Threads)
  Ryzen 7600 AMD, AMD 7600, Ryzen 7600: AMD Ryzen 5 7600 6-Core @ 5.17GHz (6 Cores / 12 Threads)

Common configuration across the runs:
  Motherboard: ASUS ROG CROSSHAIR X670E HERO (0805 BIOS)
  Chipset: AMD Device 14d8
  Memory: 32GB
  Disk: 2000GB Samsung SSD 980 PRO 2TB (some runs report an additional 2000GB drive)
  Graphics: AMD Radeon RX 6800 XT 16GB (2575/1000MHz)
  Audio: AMD Navi 21 HDMI Audio
  Monitor: ASUS MG28U
  Network: Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
  OS: Ubuntu 22.04
  Kernel: 6.0.0-060000rc1daily20220820-generic (x86_64)
  Desktop: GNOME Shell 42.2
  Display Server: X Server 1.21.1.3 + Wayland
  OpenGL: 4.6 Mesa 22.3.0-devel (git-4685385 2022-08-23 jammy-oibaf-ppa) (LLVM 14.0.6 DRM 3.48)
  Vulkan: 1.3.224
  Compiler: GCC 12.0.1 20220319
  File-System: ext4
  Screen Resolution: 3840x2160

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-OcsLtf/gcc-12-12-20220319/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-OcsLtf/gcc-12-12-20220319/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: amd-pstate performance (Boost: Enabled) - CPU Microcode: 0xa601203
Java Details: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)
Python Details: Python 3.10.4
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite)
Runs compared: 7700, 7900, Ryzen 7600 AMD, AMD 7600, AMD 7700, Ryzen 7 7700, Ryzen 7600, Ryzen 9 7900
Benchmarks included: C-Ray, Mobile Neural Network, Stockfish, OpenSSL, Coremark, IndigoBench, Tachyon, Xmrig, asmFish, Chaos Group V-RAY, Blender, ASTC Encoder, Aircrack-ng, NAMD, Stargate Digital Audio Workstation, Cpuminer-Opt, 7-Zip Compression, LAMMPS Molecular Dynamics Simulator, Timed LLVM Compilation, Timed Linux Kernel Compilation, Timed MPlayer Compilation, Primesieve, Appleseed, Build2, oneDNN, GROMACS, Timed Godot Game Engine Compilation, Timed FFmpeg Compilation, SVT-HEVC, Timed Mesa Compilation, NCNN, x264, SVT-VP9, Rodinia, NAS Parallel Benchmarks, libavif avifenc, OpenFOAM, x265, Kvazaar, Timed PHP Compilation, GPAW, Y-Cruncher, Timed Wasmer Compilation, SVT-AV1, Xcompact3d Incompact3d, Darktable, Neural Magic DeepSparse, JPEG XL Decoding libjxl, Timed GDB GNU Debugger Compilation, OpenVINO, nekRS, ONNX Runtime, Liquid-DSP, VP9 libvpx Encoding, GIMP, Timed CPython Compilation, Zstd Compression, GNU Radio, Algebraic Multi-Grid Benchmark, Timed Apache Compilation, JPEG XL libjxl, LeelaChessZero, Ngspice, DaCapo Benchmark, PHPBench, TNN, QuantLib, Crafty, simdjson, PyBench, Git, PyPerformance, WebP Image Encode, FLAC Audio Encoding, LAME MP3 Encoding

AMD Ryzen zen4 Linux - Detailed Result Table
(The condensed side-by-side table covering every test and all eight runs is too large to reproduce legibly here; the complete data set is available under result file 2301094-PTS-EXTRANEW01 on OpenBenchmarking.org. Per-test results follow below.)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU
ms, Fewer Is Better (SE +/- 0.008119, N = 3)
7700: 0.962285 | 7900: 0.388582 | Ryzen 7600 AMD: 1.023050 | AMD 7600: 1.010390 | AMD 7700: 0.916936 | Ryzen 7 7700: 0.951769 | Ryzen 7600: 1.015800 | Ryzen 9 7900: 0.359992
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
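
For reference, ncnn's bundled benchncnn tool takes positional arguments for loop count, thread count, power-save mode and GPU device; a CPU-only sketch (the argument values here are assumptions, not the exact PTS invocation) would be:

  ./benchncnn 8 16 0 -1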

NCNN 20220729 - Target: CPU - Model: blazeface
ms, Fewer Is Better (SE +/- 0.00, N = 3)
7700: 0.59 | 7900: 1.39 | Ryzen 7600 AMD: 0.53 | AMD 7600: 0.53 | AMD 7700: 0.54 | Ryzen 7 7700: 0.57 | Ryzen 7600: 0.53 | Ryzen 9 7900: 1.40
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking rather than any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 - Model: mobilenet-v1-1.0
ms, Fewer Is Better (SE +/- 0.002, N = 15)
7700: 1.558 | 7900: 3.701 | Ryzen 7600 AMD: 2.854 | AMD 7600: 2.848 | AMD 7700: 1.482 | Ryzen 7 7700: 1.518 | Ryzen 7600: 2.864 | Ryzen 9 7900: 3.560
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU
ms, Fewer Is Better (SE +/- 0.001514, N = 3)
7700: 1.534410 | 7900: 0.664113 | Ryzen 7600 AMD: 1.635430 | AMD 7600: 1.641670 | AMD 7700: 1.453490 | Ryzen 7 7700: 1.556490 | Ryzen 7600: 1.652570 | Ryzen 9 7900: 0.665636
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: FastestDet
ms, Fewer Is Better (SE +/- 0.01, N = 3)
7700: 1.90 | 7900: 4.16 | Ryzen 7600 AMD: 1.71 | AMD 7600: 1.71 | AMD 7700: 1.72 | Ryzen 7 7700: 1.76 | Ryzen 7600: 1.85 | Ryzen 9 7900: 4.17
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: CPU - Model: shufflenet-v2
ms, Fewer Is Better (SE +/- 0.00, N = 3)
7700: 1.53 | 7900: 3.48 | Ryzen 7600 AMD: 1.58 | AMD 7600: 1.58 | AMD 7700: 1.45 | Ryzen 7 7700: 1.49 | Ryzen 7600: 1.60 | Ryzen 9 7900: 3.48
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: bertsquad-12 - Device: CPU - Executor: Standard
Inferences Per Minute, More Is Better (SE +/- 73.47, N = 12)
7700: 1239 | 7900: 1083 | Ryzen 7600 AMD: 757 | AMD 7600: 1000 | AMD 7700: 1257 | Ryzen 7 7700: 729 | Ryzen 7600: 534 | Ryzen 9 7900: 716
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.11 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
Inferences Per Minute, More Is Better (SE +/- 92.79, N = 12)
7700: 1588 | 7900: 1735 | Ryzen 7600 AMD: 1252 | AMD 7600: 1107 | AMD 7700: 2550 | Ryzen 7 7700: 1580 | Ryzen 7600: 1920 | Ryzen 9 7900: 2508
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: regnety_400m
ms, Fewer Is Better (SE +/- 0.01, N = 3)
7700: 4.91 | 7900: 10.34 | Ryzen 7600 AMD: 4.73 | AMD 7600: 4.78 | AMD 7700: 4.61 | Ryzen 7 7700: 4.83 | Ryzen 7600: 4.81 | Ryzen 9 7900: 10.51
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: super-resolution-10 - Device: CPU - Executor: Standard
Inferences Per Minute, More Is Better (SE +/- 3.09, N = 3)
7700: 5333 | 7900: 5676 | Ryzen 7600 AMD: 3992 | AMD 7600: 3994 | AMD 7700: 5282 | Ryzen 7 7700: 8381 | Ryzen 7600: 3985 | Ryzen 9 7900: 5796
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU-v3-v3 - Model: mobilenet-v3
ms, Fewer Is Better (SE +/- 0.00, N = 3)
7700: 1.50 | 7900: 2.97 | Ryzen 7600 AMD: 1.53 | AMD 7600: 1.53 | AMD 7700: 1.42 | Ryzen 7 7700: 1.50 | Ryzen 7600: 1.58 | Ryzen 9 7900: 2.98
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking rather than any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 - Model: mobilenetV3
ms, Fewer Is Better (SE +/- 0.005, N = 15)
7700: 0.741 | 7900: 1.505 | Ryzen 7600 AMD: 0.783 | AMD 7600: 0.737 | AMD 7700: 0.725 | Ryzen 7 7700: 0.736 | Ryzen 7600: 0.801 | Ryzen 9 7900: 1.493
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: mnasnet
ms, Fewer Is Better (SE +/- 0.00, N = 3)
7700: 1.71 | 7900: 3.06 | Ryzen 7600 AMD: 1.61 | AMD 7600: 1.61 | AMD 7700: 1.50 | Ryzen 7 7700: 1.58 | Ryzen 7600: 1.67 | Ryzen 9 7900: 3.06
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking rather than any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 - Model: squeezenetv1.1
ms, Fewer Is Better (SE +/- 0.008, N = 15)
7700: 1.272 | 7900: 2.522 | Ryzen 7600 AMD: 1.448 | AMD 7600: 1.375 | AMD 7700: 1.244 | Ryzen 7 7700: 1.301 | Ryzen 7600: 1.466 | Ryzen 9 7900: 2.452
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1 - Model: nasnet
ms, Fewer Is Better (SE +/- 0.110, N = 15)
7700: 5.943 | 7900: 10.564 | Ryzen 7600 AMD: 6.237 | AMD 7600: 5.322 | AMD 7700: 5.522 | Ryzen 7 7700: 5.883 | Ryzen 7600: 6.505 | Ryzen 9 7900: 10.396
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1 - Model: MobileNetV2_224
ms, Fewer Is Better (SE +/- 0.011, N = 15)
7700: 1.812 | 7900: 3.150 | Ryzen 7600 AMD: 1.745 | AMD 7600: 1.638 | AMD 7700: 1.735 | Ryzen 7 7700: 1.827 | Ryzen 7600: 1.770 | Ryzen 9 7900: 3.113
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

C-Ray

This is a test of C-Ray, a simple raytracer designed to test the floating-point CPU performance. This test is multi-threaded (16 threads per core), will shoot 8 rays per pixel for anti-aliasing, and will generate a 1600 x 1200 image. Learn more via the OpenBenchmarking.org test page.
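
A standalone run of the multi-threaded c-ray binary along these lines (a sketch only; the sphfract scene file and the exact option set are assumptions) might look like:

  ./c-ray-mt -t 32 -s 3840x2160 -r 16 < sphfract > output.ppm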

C-Ray 1.1 - Total Time - 4K, 16 Rays Per Pixel
Seconds, Fewer Is Better (SE +/- 0.02, N = 3)
7700: 48.95 | 7900: 34.54 | Ryzen 7600 AMD: 64.33 | AMD 7600: 64.37 | AMD 7700: 48.79 | Ryzen 7 7700: 49.03 | Ryzen 7600: 64.33 | Ryzen 9 7900: 34.03
1. (CC) gcc options: -lm -lpthread -O3

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.
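
The NPB MPI binaries are built per kernel and problem class (for example ep.C.x for the EP kernel at class C) and launched through MPI; a sketch of such a run, with the rank count chosen arbitrarily for illustration, is:

  mpirun -np 16 ./ep.C.x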

NAS Parallel Benchmarks 3.4 - Test / Class: EP.C
Total Mop/s, More Is Better (SE +/- 13.02, N = 5)
7700: 1588.73 | 7900: 2155.92 | Ryzen 7600 AMD: 1173.77 | AMD 7600: 1143.01 | AMD 7700: 1561.40 | Ryzen 7 7700: 1514.07 | Ryzen 7600: 1192.54 | Ryzen 9 7900: 2150.58
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
2. Open MPI 4.1.2

Stockfish

This is a test of Stockfish, an advanced open-source C++11 chess benchmark that can scale up to 512 CPU threads. Learn more via the OpenBenchmarking.org test page.
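
Stockfish's built-in bench command takes the transposition-table size (MB), thread count and search depth as positional arguments; an illustrative invocation (parameter values chosen here only as an example) is:

  stockfish bench 1024 16 26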

Stockfish 15 - Total Time
Nodes Per Second, More Is Better (SE +/- 210730.41, N = 11)
7700: 34904509 | 7900: 50796739 | Ryzen 7600 AMD: 27957470 | AMD 7600: 27167188 | AMD 7700: 38273850 | Ryzen 7 7700: 35499955 | Ryzen 7600: 28121767 | Ryzen 9 7900: 47480935
1. (CXX) g++ options: -lgcov -m64 -lpthread -fno-exceptions -std=c++17 -fno-peel-loops -fno-tracer -pedantic -O3 -msse -msse3 -mpopcnt -mavx2 -mavx512f -mavx512bw -mavx512vnni -mavx512dq -mavx512vl -msse4.1 -mssse3 -msse2 -mbmi2 -flto -flto=jobserver

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking rather than any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 - Model: resnet-v2-50
ms, Fewer Is Better (SE +/- 0.045, N = 15)
7700: 7.871 | 7900: 13.653 | Ryzen 7600 AMD: 11.988 | AMD 7600: 11.558 | AMD 7700: 7.334 | Ryzen 7 7700: 7.877 | Ryzen 7600: 12.085 | Ryzen 9 7900: 13.620
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.
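
The figures below come from OpenSSL's built-in speed benchmark; comparable standalone invocations (illustrative) are:

  openssl speed rsa4096
  openssl speed -multi 16 -evp sha256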

OpenSSL 3.0 - Algorithm: RSA4096
verify/s, More Is Better (SE +/- 6.07, N = 3)
7700: 194057.2 | 7900: 274898.7 | Ryzen 7600 AMD: 148020.7 | AMD 7600: 147994.7 | AMD 7700: 194261.5 | Ryzen 7 7700: 193624.5 | Ryzen 7600: 147943.7 | Ryzen 9 7900: 275022.1
1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
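
Similar level/mode combinations can be exercised directly with zstd's built-in benchmark mode; for example (using the sample image named above):

  zstd -b8 -T0 FreeBSD-12.2-RELEASE-amd64-memstick.img
  zstd -b19 --long -T0 FreeBSD-12.2-RELEASE-amd64-memstick.img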

Zstd Compression 1.5.0 - Compression Level: 8 - Compression Speed
MB/s, More Is Better (SE +/- 9.87, N = 3)
7700: 1110.2 | 7900: 1985.8 | Ryzen 7600 AMD: 1248.8 | AMD 7600: 1231.3 | AMD 7700: 1119.3 | Ryzen 7 7700: 1090.6 | Ryzen 7600: 1180.2 | Ryzen 9 7900: 2025.3
1. (CC) gcc options: -O3 -pthread -lz -llzma

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenSSL 3.0 - Algorithm: RSA4096
sign/s, More Is Better (SE +/- 0.25, N = 3)
7700: 2958.4 | 7900: 4201.8 | Ryzen 7600 AMD: 2265.1 | AMD 7600: 2265.5 | AMD 7700: 2968.4 | Ryzen 7 7700: 2954.2 | Ryzen 7600: 2265.4 | Ryzen 9 7900: 4203.4
1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.
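
For a standalone run outside of PTS, cpuminer-opt's offline benchmark mode can be used, selecting the algorithm with -a and the thread count with -t (a sketch; the algorithm and thread count here are arbitrary examples):

  cpuminer -a scrypt --benchmark -t 16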

Cpuminer-Opt 3.20.3 - Algorithm: Deepcoin
kH/s, More Is Better (SE +/- 16.47, N = 3)
7700: 10410.00 | 7900: 14700.00 | Ryzen 7600 AMD: 7961.37 | AMD 7600: 7944.47 | AMD 7700: 10440.00 | Ryzen 7 7700: 10380.00 | Ryzen 7600: 7943.92 | Ryzen 9 7900: 14700.00
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: EP.D
Total Mop/s, More Is Better (SE +/- 9.35, N = 9)
7700: 1551.49 | 7900: 2158.45 | Ryzen 7600 AMD: 1172.13 | AMD 7600: 1199.83 | AMD 7700: 1507.33 | Ryzen 7 7700: 1562.22 | Ryzen 7600: 1197.05 | Ryzen 9 7900: 2117.95
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
2. Open MPI 4.1.2

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0 - Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU
ms, Fewer Is Better (SE +/- 0.03364, N = 3)
7700: 7.41713 | 7900: 5.67760 | Ryzen 7600 AMD: 9.60780 | AMD 7600: 9.59464 | AMD 7700: 7.42742 | Ryzen 7 7700: 7.41505 | Ryzen 7600: 10.12880 | Ryzen 9 7900: 5.51763
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3 - Algorithm: Magi
kH/s, More Is Better (SE +/- 1.54, N = 3)
7700: 516.97 | 7900: 726.49 | Ryzen 7600 AMD: 402.03 | AMD 7600: 400.76 | AMD 7700: 531.73 | Ryzen 7 7700: 517.61 | Ryzen 7600: 406.47 | Ryzen 9 7900: 733.32
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0 - Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU
ms, Fewer Is Better (SE +/- 0.01337, N = 3)
7700: 3.37523 | 7900: 1.99078 | Ryzen 7600 AMD: 3.26570 | AMD 7600: 3.26386 | AMD 7700: 3.18229 | Ryzen 7 7700: 3.36363 | Ryzen 7600: 3.27254 | Ryzen 9 7900: 1.84748
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenSSL 3.0 - Algorithm: SHA256
byte/s, More Is Better (SE +/- 148039451.93, N = 3)
7700: 17490744260 | 7900: 24118303190 | Ryzen 7600 AMD: 13223838170 | AMD 7600: 13388156320 | AMD 7700: 17538082780 | Ryzen 7 7700: 17544870130 | Ryzen 7600: 13335266100 | Ryzen 9 7900: 24055372380
1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU-v2-v2 - Model: mobilenet-v2
ms, Fewer Is Better (SE +/- 0.00, N = 3)
7700: 1.98 | 7900: 3.39 | Ryzen 7600 AMD: 1.91 | AMD 7600: 1.92 | AMD 7700: 1.87 | Ryzen 7 7700: 1.98 | Ryzen 7600: 2.05 | Ryzen 9 7900: 3.41
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

JPEG XL Decoding libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is suited for JPEG XL decode performance testing to a PNG output file; the pts/jpexl test is for encode performance. The JPEG XL encoding/decoding is done using the libjxl codebase. Learn more via the OpenBenchmarking.org test page.
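
Decoding with the libjxl tools outside of PTS is simply a matter of running the djxl decoder against a .jxl input; for example (file names are placeholders):

  djxl input.jxl output.png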

JPEG XL Decoding libjxl 0.7 - CPU Threads: All
MP/s, More Is Better (SE +/- 0.21, N = 3)
7700: 361.28 | 7900: 204.91 | Ryzen 7600 AMD: 336.39 | AMD 7600: 318.81 | AMD 7700: 365.43 | Ryzen 7 7700: 344.88 | Ryzen 7600: 334.93 | Ryzen 9 7900: 201.39

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3 - Algorithm: Quad SHA-256, Pyrite
kH/s, More Is Better (SE +/- 21.86, N = 3)
7700: 149450 | 7900: 208470 | Ryzen 7600 AMD: 114893 | AMD 7600: 115330 | AMD 7700: 153350 | Ryzen 7 7700: 147910 | Ryzen 7600: 115430 | Ryzen 9 7900: 207140
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support and analyzing the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.
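
The built-in benchmarking support presumably corresponds to OpenVINO's benchmark_app utility; a comparable CPU-only run (the model file name is a placeholder) would look like:

  benchmark_app -m vehicle-detection-0200.xml -d CPU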

OpenVINO 2022.3 - Model: Vehicle Detection FP16 - Device: CPU
FPS, More Is Better (SE +/- 0.75, N = 3)
7700: 491.96 | 7900: 672.13 | Ryzen 7600 AMD: 370.90 | AMD 7600: 374.50 | AMD 7700: 463.77 | Ryzen 7 7700: 490.93 | Ryzen 7600: 375.11 | Ryzen 9 7900: 672.90
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

Coremark 1.0 - CoreMark Size 666 - Iterations Per Second
Iterations/Sec, More Is Better (SE +/- 1264.79, N = 3)
7700: 497649.99 | 7900: 691509.80 | Ryzen 7600 AMD: 385213.54 | AMD 7600: 385542.17 | AMD 7700: 509464.00 | Ryzen 7 7700: 494267.87 | Ryzen 7600: 384184.41 | Ryzen 9 7900: 695719.39
1. (CC) gcc options: -O2 -lrt" -lrt

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3 - Algorithm: x25x
kH/s, More Is Better (SE +/- 1.16, N = 3)
7700: 583.09 | 7900: 804.79 | Ryzen 7600 AMD: 448.72 | AMD 7600: 446.02 | AMD 7700: 586.79 | Ryzen 7 7700: 583.56 | Ryzen 7600: 448.30 | Ryzen 9 7900: 804.66
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt 3.20.3 - Algorithm: scrypt
kH/s, More Is Better (SE +/- 1.68, N = 3)
7700: 325.46 | 7900: 451.13 | Ryzen 7600 AMD: 252.86 | AMD 7600: 250.53 | AMD 7700: 332.81 | Ryzen 7 7700: 331.57 | Ryzen 7600: 251.35 | Ryzen 9 7900: 449.60
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt 3.20.3 - Algorithm: Ringcoin
kH/s, More Is Better (SE +/- 4.41, N = 3)
7700: 2226.51 | 7900: 3027.86 | Ryzen 7600 AMD: 1692.20 | AMD 7600: 1684.91 | AMD 7700: 2207.98 | Ryzen 7 7700: 2268.58 | Ryzen 7600: 1709.61 | Ryzen 9 7900: 3027.67
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.
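
7-Zip's integrated benchmark exercises LZMA compression and decompression and converts the measured speeds into MIPS ratings. As a much simpler stand-in, this Python sketch times LZMA compression and decompression of an in-memory buffer with the standard lzma module; the buffer contents and size are arbitrary choices.

import lzma, os, time

# 4 MiB of mixed data: half incompressible, half trivially compressible
payload = os.urandom(2 * 1024 * 1024) + bytes(2 * 1024 * 1024)

start = time.perf_counter()
compressed = lzma.compress(payload, preset=6)
t_c = time.perf_counter() - start

start = time.perf_counter()
restored = lzma.decompress(compressed)
t_d = time.perf_counter() - start

assert restored == payload
mib = len(payload) / (1024 * 1024)
print(f"compress:   {mib / t_c:.1f} MiB/s")
print(f"decompress: {mib / t_d:.1f} MiB/s")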

OpenBenchmarking.orgMIPS, More Is Better7-Zip Compression 22.01Test: Decompression Rating77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790030K60K90K120K150KSE +/- 90.75, N = 38516612161568845683878683385433682381225281. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the processor across a variety of cryptocurrencies. The benchmark reports the CPU hash speed for the selected algorithm. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: Blake-2 S77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900300K600K900K1200K1500KSE +/- 2843.36, N = 3879560118912066411366270069156086888066246011822801. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Neural Magic DeepSparse

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900246810SE +/- 0.0028, N = 36.14558.51764.75364.76336.16446.11824.75808.5267

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the processor across a variety of cryptocurrencies. The benchmark reports the CPU hash speed for the selected algorithm. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: LBC, LBRY Credits77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790020K40K60K80K100KSE +/- 26.67, N = 372360963105369353840717507073053810961501. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgM samples/s, More Is BetterIndigoBench 4.4Acceleration: CPU - Scene: Supercar77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900246810SE +/- 0.013, N = 35.7868.0984.5174.5555.8845.8404.5828.059

Neural Magic DeepSparse

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900306090120150SE +/- 0.05, N = 392.18126.3671.4071.0892.4592.0171.82127.26

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900246810SE +/- 0.0023, N = 36.14258.50394.75444.75766.16036.13474.75508.5038

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the processor across a variety of cryptocurrencies. The benchmark reports the CPU hash speed for the selected algorithm. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: Triple SHA-256, Onecoin77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790060K120K180K240K300KSE +/- 1138.26, N = 132205502951301673431688102209602178301656602956601. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Neural Magic DeepSparse

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79004080120160200SE +/- 0.10, N = 3146.43202.12113.45113.78146.77146.03113.83200.92

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the processor across a variety of cryptocurrencies. The benchmark reports the CPU hash speed for the selected algorithm. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: Skeincoin77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790040K80K120K160K200KSE +/- 695.37, N = 31348001815401033371030301376501344501030401834101. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRender Ratio, More Is BetterStargate Digital Audio Workstation 22.11.5Sample Rate: 44100 - Buffer Size: 102477007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79001.21292.42583.63874.85166.0645SE +/- 0.000394, N = 34.2971275.3610253.0396273.0391884.3287694.2982683.0304735.3905231. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

Xmrig

Xmrig is an open-source, cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is set up to measure Xmrig's CPU mining performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgH/s, More Is BetterXmrig 6.18.1Variant: Wownero - Hash Count: 1M77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79003K6K9K12K15KSE +/- 6.89, N = 310240.714144.37963.97962.210277.310175.67957.914044.51. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.4Blend File: Barbershop - Compute: CPU-Only77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790030060090012001500SE +/- 2.65, N = 3983.97723.131284.021284.52973.45989.721282.32723.78

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgM samples/s, More Is BetterIndigoBench 4.4Acceleration: CPU - Scene: Bedroom77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79000.82311.64622.46933.29244.1155SE +/- 0.005, N = 32.7013.6582.0642.0962.6992.6682.0713.658

Tachyon

This is a test of the threaded Tachyon, a parallel ray-tracing system, measuring the time to ray-trace a sample scene. The sample scene used is the Teapot scene ray-traced to 8K x 8K with 32 samples. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTachyon 0.99.2Total Time77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900306090120150SE +/- 0.64, N = 3115.7586.30151.72150.66115.54115.82152.6887.441. (CC) gcc options: -m64 -O3 -fomit-frame-pointer -ffast-math -ltachyon -lm -lpthread

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.4Blend File: BMW27 - Compute: CPU-Only77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900306090120150SE +/- 0.20, N = 3103.9676.35134.80135.04103.29104.03134.0877.02

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.4Blend File: Classroom - Compute: CPU-Only77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790080160240320400SE +/- 0.20, N = 3272.78202.67356.47357.10271.59273.36357.35202.13

Xmrig

Xmrig is an open-source, cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is set up to measure Xmrig's CPU mining performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgH/s, More Is BetterXmrig 6.18.1Variant: Monero - Hash Count: 1M77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79003K6K9K12K15KSE +/- 32.54, N = 37738.010059.77123.67002.27780.87745.27171.712360.51. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRender Ratio, More Is BetterStargate Digital Audio Workstation 22.11.5Sample Rate: 480000 - Buffer Size: 102477007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79001.18112.36223.54334.72445.9055SE +/- 0.001126, N = 34.2029635.2011372.9807822.9813094.2289764.1784612.9777735.2492151. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

asmFish

This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNodes/second, More Is BetterasmFish 2018-07-231024 Hash Memory, 26 Depth77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790012M24M36M48M60MSE +/- 333145.62, N = 34209159455350145320297763275994242126976414115323200953956368330

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79000.12390.24780.37170.49560.6195SE +/- 0.000941, N = 30.4355140.3131160.5488220.5507240.4150220.4368590.5503330.312826MIN: 0.39MIN: 0.28MIN: 0.52MIN: 0.53MIN: 0.39MIN: 0.38MIN: 0.52MIN: 0.281. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

ASTC Encoder

ASTC Encoder (astcenc) is an encoder for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile exercises both compression and decompression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMT/s, More Is BetterASTC Encoder 4.0Preset: Exhaustive77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79000.26570.53140.79711.06281.3285SE +/- 0.0005, N = 30.87771.18090.67250.67170.87900.87600.68151.18111. (CXX) g++ options: -O3 -flto -pthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network inference, using its built-in benchmarking support to measure the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Face Detection FP16-INT8 - Device: CPU77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790048121620SE +/- 0.01, N = 313.6118.0710.3310.3413.6813.5710.5218.131. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

Chaos Group V-RAY

This is a test of Chaos Group's V-RAY benchmark. V-RAY is a commercial renderer that can integrate with various creator software products like SketchUp and 3ds Max. The V-RAY benchmark is standalone and supports CPU and NVIDIA CUDA/RTX based rendering. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgvsamples, More Is BetterChaos Group V-RAY 5.02Mode: CPU77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79005K10K15K20K25KSE +/- 40.13, N = 31564021166123521215615748155941209021212

ASTC Encoder

ASTC Encoder (astcenc) is an encoder for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile exercises both compression and decompression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMT/s, More Is BetterASTC Encoder 4.0Preset: Fast77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790060120180240300SE +/- 0.11, N = 3193.44254.36147.27147.45194.49192.78145.52255.041. (CXX) g++ options: -O3 -flto -pthread

OpenBenchmarking.orgMT/s, More Is BetterASTC Encoder 4.0Preset: Medium77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790020406080100SE +/- 0.26, N = 367.5589.2951.6851.1167.6967.5251.1589.451. (CXX) g++ options: -O3 -flto -pthread

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRender Ratio, More Is BetterStargate Digital Audio Workstation 22.11.5Sample Rate: 44100 - Buffer Size: 51277007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79001.16792.33583.50374.67165.8395SE +/- 0.002271, N = 34.1528345.1694642.9660522.9683644.1815694.1555652.9695585.1907011. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

ASTC Encoder

ASTC Encoder (astcenc) is an encoder for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile exercises both compression and decompression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMT/s, More Is BetterASTC Encoder 4.0Preset: Thorough77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79003691215SE +/- 0.0037, N = 38.427611.20346.47856.40478.42588.44556.477511.20111. (CXX) g++ options: -O3 -flto -pthread

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.
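
For reference, a hedged Python sketch of this kind of measurement: load a model such as fcn-resnet101-11 with onnxruntime's CPU execution provider and count inferences per minute. The model filename and input shape below are assumptions rather than values taken from the test profile.

import time
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("fcn-resnet101-11.onnx",            # assumed local model file
                               providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 520, 520).astype(np.float32)          # assumed input shape

n = 20
start = time.perf_counter()
for _ in range(n):
    session.run(None, {input_name: dummy})
elapsed = time.perf_counter() - start
print(f"{n / elapsed * 60:.1f} inferences per minute")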

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: fcn-resnet101-11 - Device: CPU - Executor: Standard77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900306090120150SE +/- 6.47, N = 121249879981247198931. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

Neural Magic DeepSparse

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79001428425670SE +/- 0.07, N = 346.0262.1235.6035.6146.0145.6235.7061.85

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network inference, using its built-in benchmarking support to measure the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Weld Porosity Detection FP16 - Device: CPU77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79002004006008001000SE +/- 3.29, N = 3687.93917.78527.63526.47692.33686.56527.78918.151. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRender Ratio, More Is BetterStargate Digital Audio Workstation 22.11.5Sample Rate: 96000 - Buffer Size: 102477007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79000.87091.74182.61273.48364.3545SE +/- 0.001525, N = 33.1034173.8673512.2275092.2317983.1409103.1149242.2202513.8707271. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested or alternatively an allmodconfig for building all possible kernel modules for the build. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Linux Kernel Compilation 6.1Build: allmodconfig77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790030060090012001500SE +/- 6.89, N = 31019.45744.301266.561280.931005.661012.611286.51738.33

Appleseed

Appleseed is an open-source production renderer built around a physically-based global illumination rendering engine, primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterAppleseed 2.0 BetaScene: Disney Material77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790050100150200250158.29119.53207.29206.46158.42158.99206.77119.05

Aircrack-ng

Aircrack-ng is a tool for assessing WiFi/WLAN network security. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgk/s, More Is BetterAircrack-ng 1.777007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790013K26K39K52K65KSE +/- 2.66, N = 346662.8762268.6435804.8235844.4447545.7346993.7035828.1462268.381. (CXX) g++ options: -std=gnu++17 -O3 -fvisibility=hidden -fcommon -rdynamic -lnl-3 -lnl-genl-3 -lpcre -lsqlite3 -lpthread -lz -lssl -lcrypto -lhwloc -ldl -lm -pthread

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRender Ratio, More Is BetterStargate Digital Audio Workstation 22.11.5Sample Rate: 192000 - Buffer Size: 102477007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79000.58221.16441.74662.32882.911SE +/- 0.001409, N = 32.0733212.5876191.4882261.4909012.0884382.0786131.4902002.5739041. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 2021.01.31Threads: 24 - Buffer Length: 256 - Filter Length: 5777007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900200M400M600M800M1000MSE +/- 2020662.71, N = 3775660000103260000059626333359851000077655000077284000059458000010330000001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.4Blend File: Pabellon Barcelona - Compute: CPU-Only77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790090180270360450SE +/- 0.21, N = 3335.44249.96432.63433.06333.25334.85433.93249.95

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network inference, using its built-in benchmarking support to measure the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79004K8K12K16K20KSE +/- 0.99, N = 312534.5216311.479490.509510.3512608.1412469.449516.2616466.741. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Weld Porosity Detection FP16-INT8 - Device: CPU77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900400800120016002000SE +/- 1.12, N = 31369.351830.421056.821056.401368.951368.901058.541832.771. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.4Blend File: Fishy Cat - Compute: CPU-Only77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79004080120160200SE +/- 0.26, N = 3134.78100.60174.24173.72134.16135.18174.02100.47

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.
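
NAMD reports days/ns, i.e. the wall-clock days needed to simulate one nanosecond, so lower is better; taking the reciprocal gives the more familiar ns/day figure, as in this small example using the Ryzen 9 7900 result shown below:

days_per_ns = 1.19               # roughly the Ryzen 9 7900 result below
ns_per_day = 1.0 / days_per_ns
print(f"{days_per_ns} days/ns is about {ns_per_day:.2f} ns/day")   # about 0.84 ns/day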

OpenBenchmarking.orgdays/ns, Fewer Is BetterNAMD 2.14ATPase Simulation - 327,506 Atoms77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79000.46410.92821.39231.85642.3205SE +/- 0.00775, N = 31.603831.190062.062262.042441.600571.605522.062851.19044

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 2021.01.31Threads: 16 - Buffer Length: 256 - Filter Length: 5777007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900200M400M600M800M1000MSE +/- 86474.15, N = 3774160000103360000059747333359741000077800000078618000059685000010340000001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79006001200180024003000SE +/- 2.35, N = 32294.351675.222897.712897.082231.812292.332897.051675.60MIN: 2263.65MIN: 1671.1MIN: 2889.51MIN: 2893.38MIN: 2215.43MIN: 2268.98MIN: 2882.96MIN: 1671.361. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79000.59951.1991.79852.3982.9975SE +/- 0.02309, N = 32.502291.542072.577692.573052.334992.452222.664521.54403MIN: 2.29MIN: 1.47MIN: 2.49MIN: 2.51MIN: 2.28MIN: 2.28MIN: 2.47MIN: 1.481. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed LLVM Compilation 13.0Build System: Ninja77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900130260390520650SE +/- 0.25, N = 3489.62352.63609.12608.68475.85487.73602.78353.54

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRender Ratio, More Is BetterStargate Digital Audio Workstation 22.11.5Sample Rate: 480000 - Buffer Size: 51277007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79001.1272.2543.3814.5085.635SE +/- 0.002307, N = 34.0526405.0088202.9059882.9080294.0796294.0622212.9011614.9949201. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79006001200180024003000SE +/- 2.85, N = 32285.981680.332895.712892.902223.942272.722888.441677.70MIN: 2258.7MIN: 1675.48MIN: 2880.28MIN: 2888.21MIN: 2210.21MIN: 2247.96MIN: 2854.93MIN: 1673.081. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79006001200180024003000SE +/- 0.92, N = 32282.731678.882896.312893.482225.662289.262896.001683.03MIN: 2256.18MIN: 1673.29MIN: 2889.01MIN: 2889.6MIN: 2213.52MIN: 2263.84MIN: 2885.46MIN: 1677.551. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network inference, using its built-in benchmarking support to measure the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Face Detection FP16 - Device: CPU77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79003691215SE +/- 0.01, N = 36.899.195.345.336.936.875.369.191. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Vehicle Detection FP16-INT8 - Device: CPU77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790030060090012001500SE +/- 5.59, N = 3878.781164.42676.76688.76892.32879.31691.781163.451. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

Neural Magic DeepSparse

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-Stream77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790020406080100SE +/- 0.26, N = 362.3085.1949.7249.6562.9562.7549.6385.16

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-HEVC 1.5.0Tuning: 1 - Input: Bosphorus 4K77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79000.88651.7732.65953.5464.4325SE +/- 0.00, N = 32.963.942.302.312.982.962.303.941. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterRodinia 3.1Test: OpenMP LavaMD77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790050100150200250SE +/- 2.30, N = 3166.07126.28215.76214.15165.87167.47213.01127.051. (CXX) g++ options: -O2 -lOpenCL

x265

This is a simple test of the x265 encoder run on the CPU, with 1080p and 4K input options, measuring H.265 video encode performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is Betterx265 3.4Video Input: Bosphorus 4K77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900714212835SE +/- 0.23, N = 316.4228.0417.9017.1917.6817.9717.7727.971. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network inference, using its built-in benchmarking support to measure the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Person Detection FP16 - Device: CPU77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79001.16782.33563.50344.67125.839SE +/- 0.02, N = 33.935.193.043.073.923.873.065.161. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRender Ratio, More Is BetterStargate Digital Audio Workstation 22.11.5Sample Rate: 96000 - Buffer Size: 51277007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79000.82581.65162.47743.30324.129SE +/- 0.003762, N = 32.9944693.6700942.1647602.1597073.0002752.9865162.1590673.6659111. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790030060090012001500SE +/- 1.23, N = 31152.24855.911450.481453.821113.961155.041450.82855.55MIN: 1133.82MIN: 851.85MIN: 1445.4MIN: 1450.65MIN: 1102.31MIN: 1129.24MIN: 1438.48MIN: 851.411. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Appleseed

Appleseed is an open-source production renderer built around a physically-based global illumination rendering engine, primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterAppleseed 2.0 BetaScene: Emily77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790070140210280350262.66197.26332.51331.38261.24262.06334.20196.70

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790030060090012001500SE +/- 0.41, N = 31149.16856.021452.401450.631110.931156.111450.70854.95MIN: 1126.63MIN: 851.48MIN: 1447.04MIN: 1448.24MIN: 1100.71MIN: 1135.88MIN: 1435.36MIN: 851.041. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790030060090012001500SE +/- 0.60, N = 31154.55858.831452.691444.741112.461155.401446.49855.54MIN: 1130.74MIN: 851.32MIN: 1448.33MIN: 1440.66MIN: 1094.16MIN: 1130.22MIN: 1430.15MIN: 851.941. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgns/day, More Is BetterLAMMPS Molecular Dynamics Simulator 23Jun2022Model: Rhodopsin Protein77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79003691215SE +/- 0.023, N = 38.56011.3946.7116.7318.5928.5476.75711.2111. (CXX) g++ options: -O3 -lm -ldl

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-HEVC 1.5.0Tuning: 1 - Input: Bosphorus 1080p77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790048121620SE +/- 0.00, N = 311.8415.629.239.2311.9311.859.2215.651. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

Neural Magic DeepSparse

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790020406080100SE +/- 0.10, N = 357.4580.9548.0947.7858.0457.3947.9380.89

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRender Ratio, More Is BetterStargate Digital Audio Workstation 22.11.5Sample Rate: 192000 - Buffer Size: 51277007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79000.54251.0851.62752.172.7125SE +/- 0.004557, N = 31.9744482.4017551.4268981.4324541.9836601.9752501.4310092.4110401. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting among the different NPB tests/problems and problem sizes. Learn more via the OpenBenchmarking.org test page.
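
NPB itself is Fortran + MPI, but the message-passing model it relies on can be sketched with mpi4py: each rank computes a partial result and the ranks combine them with an allreduce. This is purely illustrative and assumes mpi4py plus an MPI launcher such as mpirun are available (e.g. mpirun -np 4).

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# each rank sums its own slice of 0 .. 9,999,999
total_n = 10_000_000
chunk = total_n // size
start = rank * chunk
stop = total_n if rank == size - 1 else start + chunk
partial = sum(range(start, stop))

result = comm.allreduce(partial, op=MPI.SUM)
if rank == 0:
    print(f"sum over {size} ranks = {result}")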

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: SP.B77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79004K8K12K16K20KSE +/- 6.22, N = 312225.6920541.0412327.5012334.2612368.4512239.6412314.2120602.991. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790050100150200250SE +/- 0.15, N = 3163.45126.12212.24212.12163.05164.06212.11126.52

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900246810SE +/- 0.0033, N = 36.11787.92844.71164.71416.13306.09524.71457.9037

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network inference, using its built-in benchmarking support to measure the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Person Detection FP32 - Device: CPU77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79001.14082.28163.42244.56325.704SE +/- 0.01, N = 33.855.073.053.063.843.853.025.021. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790050100150200250SE +/- 0.11, N = 3164.38126.34211.80211.97162.58164.29212.01126.31

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900246810SE +/- 0.0024, N = 36.08327.91514.72134.71756.15076.08664.71677.9169

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: efficientnet-b077007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79000.94281.88562.82843.77124.714SE +/- 0.00, N = 32.914.192.692.722.502.702.864.18MIN: 2.68 / MAX: 4.44MIN: 4.14 / MAX: 4.62MIN: 2.66 / MAX: 3.21MIN: 2.68 / MAX: 3.26MIN: 2.46 / MAX: 3.01MIN: 2.48 / MAX: 5.14MIN: 2.81 / MAX: 4.44MIN: 4.15 / MAX: 4.531. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network inference, using its built-in benchmarking support to measure the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79006K12K18K24K30KSE +/- 16.00, N = 321050.5027518.7816510.7316444.4420765.7621184.2016423.7727522.211. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79000.07080.14160.21240.28320.354SE +/- 0.001519, N = 30.2504830.1887950.3131290.3147640.2410040.2517770.3122500.189451MIN: 0.22MIN: 0.17MIN: 0.29MIN: 0.3MIN: 0.22MIN: 0.22MIN: 0.29MIN: 0.171. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79000.2340.4680.7020.9361.17SE +/- 0.000920, N = 30.8108150.6261771.0399601.0361300.7949900.8110901.0397300.625231MIN: 0.74MIN: 0.56MIN: 1MIN: 1MIN: 0.73MIN: 0.74MIN: 1MIN: 0.561. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Timed MPlayer Compilation

This test times how long it takes to build the MPlayer open-source media player program. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed MPlayer Compilation 1.5Time To Compile77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900816243240SE +/- 0.08, N = 328.2321.1435.0635.0727.9327.5634.9221.08

Primesieve

Primesieve generates prime numbers using a highly optimized sieve of Eratosthenes implementation. Primesieve primarily benchmarks the CPU's L1/L2 cache performance. Learn more via the OpenBenchmarking.org test page.
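
Primesieve is a heavily optimized, segmented C++ sieve; the plain Python sieve of Eratosthenes below just shows the underlying algorithm, whose repeated strided passes over a large byte array are what make the workload so sensitive to L1/L2 cache behavior.

def sieve(limit: int) -> list[int]:
    is_prime = bytearray([1]) * (limit + 1)
    is_prime[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # clear every multiple of p starting at p*p
            is_prime[p * p :: p] = bytes(len(range(p * p, limit + 1, p)))
    return [i for i, flag in enumerate(is_prime) if flag]

print(len(sieve(10_000_000)), "primes up to 10 million")   # 664579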

OpenBenchmarking.orgSeconds, Fewer Is BetterPrimesieve 8.0Length: 1e1377007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79004080120160200SE +/- 0.48, N = 3161.02122.56203.19203.44160.12161.06203.79122.641. (CXX) g++ options: -O3

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed LLVM Compilation 13.0Build System: Unix Makefiles77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900130260390520650SE +/- 1.68, N = 3511.00375.35621.55622.39495.07503.16621.82374.36

Primesieve

Primesieve generates prime numbers using a highly optimized sieve of Eratosthenes implementation. Primesieve primarily benchmarks the CPU's L1/L2 cache performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterPrimesieve 8.0Length: 1e1277007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790048121620SE +/- 0.00, N = 313.2010.0616.7116.6913.0813.1716.7210.071. (CXX) g++ options: -O3

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested or alternatively an allmodconfig for building all possible kernel modules for the build. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Linux Kernel Compilation 6.1Build: defconfig77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790020406080100SE +/- 0.14, N = 378.7459.2596.2297.4077.6677.8896.3058.80

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code with Cargo-like features. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBuild2 0.13Time To Compile77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900306090120150SE +/- 0.35, N = 392.7169.91114.28113.8990.4993.34115.5470.29

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterRodinia 3.1Test: OpenMP CFD Solver77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790048121620SE +/- 0.04, N = 314.5611.0318.1618.1213.9714.6018.1911.011. (CXX) g++ options: -O2 -lOpenCL

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting among the different NPB tests/problems and problem sizes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: BT.C77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79009K18K27K36K45KSE +/- 42.32, N = 326811.9043264.2026610.5326241.3226715.2026612.6626385.9042376.801. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMIPS, More Is Better7-Zip Compression 22.01Test: Compression Rating77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790030K60K90K120K150KSE +/- 205.24, N = 31109141482459017890334112585111375903491482101. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNs Per Day, More Is BetterGROMACS 2022.1Implementation: MPI CPU - Input: water_GMX50_bare77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79000.45450.9091.36351.8182.2725SE +/- 0.006, N = 31.4682.0181.2311.2341.4801.4651.2372.0201. (CXX) g++ options: -O3

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network inference, using its built-in benchmarking support to measure the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Machine Translation EN To DE FP16 - Device: CPU77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790020406080100SE +/- 0.52, N = 380.7098.7860.2260.2473.6579.7662.2497.491. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.3Tuning: Visual Quality Optimized - Input: Bosphorus 1080p77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790060120180240300SE +/- 0.32, N = 3207.45293.13181.36181.12209.43206.90180.45294.301. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.
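
The python_startup benchmark measures how quickly a fresh interpreter starts. A rough at-home approximation, less careful than PyPerformance's harness (so expect slightly higher numbers), is to time running "python -c pass" repeatedly via subprocess:

import subprocess, sys, time

samples = []
for _ in range(20):
    start = time.perf_counter()
    subprocess.run([sys.executable, "-c", "pass"], check=True)
    samples.append((time.perf_counter() - start) * 1000)

samples.sort()
print(f"median interpreter startup: {samples[len(samples) // 2]:.2f} ms")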

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: python_startup77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900246810SE +/- 0.00, N = 34.604.464.754.757.244.614.764.45

Timed Godot Game Engine Compilation

This test times how long it takes to compile the Godot Game Engine. Godot is a popular, open-source, cross-platform 2D/3D game engine; here it is built using the SCons build system and targets the X11 platform. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Godot Game Engine Compilation 3.2.3Time To Compile77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900306090120150SE +/- 0.44, N = 391.2369.53111.95110.9190.7991.41112.8569.85

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed FFmpeg Compilation 4.4Time To Compile77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79001122334455SE +/- 0.23, N = 338.4729.4946.8547.2838.2838.4147.1729.17

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-HEVC 1.5.0Tuning: 7 - Input: Bosphorus 4K77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79001632486480SE +/- 0.03, N = 354.6870.4744.4943.7354.8454.6243.5669.911. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79000.90841.81682.72523.63364.542SE +/- 0.01473, N = 33.253582.506504.029414.022153.104513.245324.037512.51776MIN: 2.88MIN: 2.24MIN: 3.74MIN: 3.76MIN: 2.9MIN: 2.84MIN: 3.76MIN: 2.211. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.
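
As a rough stand-in for the Five Back to Back FIR Filters flow graph, this numpy sketch pushes a block of float32 samples through five chained FIR filters and reports MiB/s. The tap count and sample count are arbitrary assumptions, and numpy's direct convolution is not how GNU Radio implements its filter blocks.

import time
import numpy as np

taps = np.hamming(63).astype(np.float32)
taps /= taps.sum()                                # simple low-pass-ish FIR
samples = np.random.rand(4_000_000).astype(np.float32)

start = time.perf_counter()
out = samples
for _ in range(5):                                # five filters back to back
    out = np.convolve(out, taps, mode="same")
elapsed = time.perf_counter() - start

mib = samples.nbytes / (1024 * 1024)
print(f"{mib / elapsed:.1f} MiB/s through five chained FIR filters")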

OpenBenchmarking.orgMiB/s, More Is BetterGNU RadioTest: Five Back to Back FIR Filters77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79005001000150020002500SE +/- 16.32, N = 82115.21430.41934.11862.72128.82004.41923.91322.91. 3.10.1.1

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.3Tuning: Visual Quality Optimized - Input: Bosphorus 4K77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790020406080100SE +/- 0.02, N = 364.3885.6053.5253.2664.5264.4653.3584.901. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: yolov4 - Device: CPU - Executor: Standard77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79002004006008001000SE +/- 40.22, N = 125156945466598275616566931. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

Mobile Neural Network

MNN, the Mobile Neural Network, is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking rather than any GPU-accelerated test. MNN can make use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2.1Model: SqueezeNetV1.077007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79000.86921.73842.60763.47684.346SE +/- 0.017, N = 152.5593.7712.5832.4342.4202.5572.6183.863MIN: 2.41 / MAX: 4.85MIN: 3.69 / MAX: 4.19MIN: 2.41 / MAX: 9.64MIN: 2.4 / MAX: 9.97MIN: 2.38 / MAX: 13.73MIN: 2.4 / MAX: 5.3MIN: 2.59 / MAX: 4.5MIN: 3.79 / MAX: 5.061. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-HEVC 1.5.0Tuning: 7 - Input: Bosphorus 1080p77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790050100150200250SE +/- 0.12, N = 3172.81217.47137.00137.14174.52172.71137.30218.501. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

Timed Mesa Compilation

This test profile times how long it takes to compile Mesa with Meson/Ninja. To minimize build dependencies and avoid versioning conflicts, this test is just the core Mesa build without LLVM or the extra Gallium3D/Mesa drivers enabled. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Mesa Compilation 21.0Time To Compile77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79001020304050SE +/- 0.06, N = 338.2128.8845.9245.9737.6838.2546.0529.01

Mobile Neural Network

MNN, the Mobile Neural Network, is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking rather than any GPU-accelerated test. MNN can make use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2.1Model: inception-v377007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900510152025SE +/- 0.24, N = 1515.5120.9216.3914.1214.6015.3716.8722.47MIN: 14.26 / MAX: 35.72MIN: 20.57 / MAX: 28.87MIN: 13.9 / MAX: 24.09MIN: 14.05 / MAX: 15.71MIN: 14.2 / MAX: 25.71MIN: 14.16 / MAX: 25.66MIN: 16.72 / MAX: 18.95MIN: 22.2 / MAX: 30.151. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural network inference, using its built-in benchmarking support to measure the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Person Vehicle Bike Detection FP16 - Device: CPU77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79002004006008001000SE +/- 3.37, N = 3822.331102.44705.10699.47839.03834.42695.351102.441. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.11Encoder Speed: 677007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900246810SE +/- 0.014, N = 35.3234.2406.6596.7005.2885.3846.6534.2991. (CXX) g++ options: -O3 -fPIC -lm

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: Myriad-Groestl77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790011K22K33K44K55KSE +/- 250.27, N = 343140524903353033340431904308033330491101. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
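
As a rough illustration of what a kH/s figure measures, the sketch below times a tight hashing loop in Python. Myriad-Groestl is not available in the standard library, so SHA-256 stands in purely to demonstrate the measurement; it will not reproduce cpuminer-opt's numbers.

    import hashlib, os, time

    header = os.urandom(80)              # 80-byte block header, Bitcoin-style; purely illustrative
    n = 200_000
    start = time.perf_counter()
    for nonce in range(n):
        hashlib.sha256(header + nonce.to_bytes(4, "little")).digest()
    elapsed = time.perf_counter() - start
    print(f"{n / elapsed / 1000:.1f} kH/s (single thread, SHA-256 stand-in)")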

x264

This is a multi-threaded test of the x264 video encoder run on the CPU with a choice of 1080p or 4K video input. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is Betterx264 2022-02-22Video Input: Bosphorus 4K77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79001122334455SE +/- 0.31, N = 637.1047.1331.8930.0037.3937.2730.2147.081. (CC) gcc options: -ldl -lavformat -lavcodec -lavutil -lswscale -m64 -lm -lpthread -O3 -flto

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79001.20312.40623.60934.81246.0155SE +/- 0.00039, N = 33.909233.421075.347075.335783.890953.918165.340623.97048MIN: 3.68MIN: 3.32MIN: 5.3MIN: 5.3MIN: 3.71MIN: 3.63MIN: 5.3MIN: 3.341. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79001.22792.45583.68374.91166.1395SE +/- 0.00921, N = 34.377363.494245.437065.416344.284964.399465.457283.49499MIN: 3.85MIN: 3.05MIN: 5.22MIN: 5.23MIN: 3.88MIN: 3.84MIN: 5.22MIN: 3.061. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenFOAM 10Input: drivaerFastback, Small Mesh Size - Execution Time77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790060120180240300254.50176.83273.74274.56251.62254.33274.56176.071. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.3Tuning: VMAF Optimized - Input: Bosphorus 1080p77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790070140210280350SE +/- 2.29, N = 3244.06335.22220.17215.04246.55243.98215.01334.751. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.3Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790080160240320400SE +/- 0.15, N = 3254.60345.40227.72226.78261.80252.79222.08344.811. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79003691215SE +/- 0.0166, N = 38.02747.560411.715811.72737.97518.030111.69177.6063

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900306090120150SE +/- 0.12, N = 3124.49132.1885.3185.23125.30124.4485.48131.38

x264

This is a multi-threaded test of the x264 video encoder run on the CPU with a choice of 1080p or 4K video input. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is Betterx264 2022-02-22Video Input: Bosphorus 1080p77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790050100150200250SE +/- 1.62, N = 3167.51204.59136.26134.80166.27168.28134.66208.221. (CC) gcc options: -ldl -lavformat -lavcodec -lavutil -lswscale -m64 -lm -lpthread -O3 -flto

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79000.30620.61240.91861.22481.531SE +/- 0.005136, N = 30.9993600.8818271.3554801.3607800.9897941.0027001.3531000.880093MIN: 0.89MIN: 0.8MIN: 1.29MIN: 1.3MIN: 0.88MIN: 0.89MIN: 1.3MIN: 0.811. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Appleseed

Appleseed is an open-source production rendering engine focused on physically-based global illumination, primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterAppleseed 2.0 BetaScene: Material Tester77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79004080120160200145.29123.05188.46188.62144.68145.39189.54124.04

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.1Video Input: Bosphorus 4K - Video Preset: Medium77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790048121620SE +/- 0.00, N = 311.3213.979.129.1011.3611.339.0913.891. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-HEVC 1.5.0Tuning: 10 - Input: Bosphorus 1080p77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900100200300400500SE +/- 1.08, N = 3359.07439.56286.86286.81362.54360.58286.26437.961. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79000.74451.4892.23352.9783.7225SE +/- 0.00750, N = 32.502322.260913.305143.303422.462752.507183.309032.16726MIN: 2.34MIN: 2.08MIN: 3.14MIN: 3.14MIN: 2.26MIN: 2.33MIN: 3.15MIN: 2.111. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-Stream77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900612182430SE +/- 0.03, N = 318.4215.3023.3623.3518.2718.3123.3315.36

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-Stream77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79001530456075SE +/- 0.06, N = 354.2665.3142.7942.8254.7154.5942.8665.06

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: googlenet77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900246810SE +/- 0.04, N = 36.708.015.645.735.305.695.958.07MIN: 6.24 / MAX: 7.84MIN: 7.91 / MAX: 8.62MIN: 5.54 / MAX: 6.73MIN: 5.68 / MAX: 6.22MIN: 5.22 / MAX: 6.91MIN: 5.28 / MAX: 6.93MIN: 5.85 / MAX: 7.49MIN: 7.97 / MAX: 9.251. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900400800120016002000SE +/- 3.11, N = 3167019191304127316401669131819361. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
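
The "Executor: Parallel" runs correspond to ONNX Runtime's ORT_PARALLEL execution mode. A minimal Python sketch, assuming a placeholder model.onnx with a float32 input (any dynamic dimensions filled with 1):

    import numpy as np
    import onnxruntime as ort

    opts = ort.SessionOptions()
    opts.execution_mode = ort.ExecutionMode.ORT_PARALLEL        # "Executor: Parallel"

    sess = ort.InferenceSession("model.onnx", sess_options=opts,
                                providers=["CPUExecutionProvider"])
    inp = sess.get_inputs()[0]
    shape = [d if isinstance(d, int) else 1 for d in inp.shape]
    out = sess.run(None, {inp.name: np.random.rand(*shape).astype(np.float32)})
    print([o.shape for o in out])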

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.4Encoder Mode: Preset 8 - Input: Bosphorus 4K77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79001326395265SE +/- 0.10, N = 345.4756.2937.2837.1845.9145.5437.2456.401. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
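
Outside of the test harness, a comparable encode can be launched directly against the SvtAv1EncApp binary. The flags and input path below are assumptions for illustration, not the exact invocation used by the test profile:

    import subprocess

    # Preset 8 on a 4K Y4M source, roughly mirroring the "Preset 8 - Bosphorus 4K" runs above.
    subprocess.run([
        "SvtAv1EncApp",
        "-i", "Bosphorus_3840x2160.y4m",   # placeholder input clip
        "-b", "output.ivf",                # bitstream output
        "--preset", "8",
    ], check=True)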

Darktable

Darktable is an open-source photography / workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterDarktable 3.8.1Test: Boat - Acceleration: CPU-only77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79001.03162.06323.09484.12645.158SE +/- 0.018, N = 33.7853.0284.5264.5573.6673.7424.5853.056

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-HEVC 1.5.0Tuning: 10 - Input: Bosphorus 4K77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900306090120150SE +/- 0.10, N = 3110.11134.9289.6589.96110.60109.9789.26134.801. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.11Encoder Speed: 077007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900306090120150SE +/- 0.10, N = 3116.8094.77142.81143.24115.88116.73142.8595.691. (CXX) g++ options: -O3 -fPIC -lm

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 19 - Compression Speed77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79001428425670SE +/- 0.12, N = 352.664.243.642.852.552.443.163.81. (CC) gcc options: -O3 -pthread -lz -llzma
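
A minimal Python sketch of the same level-19 compression path via the zstandard bindings; the benchmark itself drives the zstd command-line tool against a FreeBSD disk image, so this only illustrates what the compression-level knob does:

    import zstandard as zstd

    data = open("sample.img", "rb").read()                 # placeholder input file
    compressed = zstd.ZstdCompressor(level=19).compress(data)
    restored = zstd.ZstdDecompressor().decompress(compressed)
    assert restored == data
    print(f"ratio {len(data) / len(compressed):.2f}, {len(compressed)} bytes")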

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.4Encoder Mode: Preset 8 - Input: Bosphorus 1080p77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79004080120160200SE +/- 0.38, N = 3129.24158.66106.89106.01129.95129.25105.84158.211. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.1Video Input: Bosphorus 4K - Video Preset: Very Fast77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900714212835SE +/- 0.05, N = 326.2031.4821.1021.1526.4226.1821.1431.451. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900714212835SE +/- 0.08, N = 324.7621.1431.2631.3024.6124.9631.4321.27

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79001122334455SE +/- 0.08, N = 340.3847.2931.9931.9440.6340.0531.8147.00

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.3Tuning: VMAF Optimized - Input: Bosphorus 4K77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790020406080100SE +/- 0.87, N = 371.9990.7965.5563.0072.6372.2563.8493.601. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterRodinia 3.1Test: OpenMP Leukocyte77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790020406080100SE +/- 0.15, N = 384.6559.7381.9586.4783.7984.9987.1059.021. (CXX) g++ options: -O2 -lOpenCL

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.4Encoder Mode: Preset 4 - Input: Bosphorus 4K77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79001.00532.01063.01594.02125.0265SE +/- 0.003, N = 33.6864.4523.0583.0423.6803.6843.0344.4681. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.11Encoder Speed: 6, Lossless77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79003691215SE +/- 0.014, N = 38.1716.8269.9029.9578.0508.1749.8606.7651. (CXX) g++ options: -O3 -fPIC -lm

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.3Tuning: PSNR/SSIM Optimized - Input: Bosphorus 4K77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790020406080100SE +/- 0.03, N = 378.2999.8069.1868.5580.1577.3068.23100.141. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790048121620SE +/- 0.02, N = 312.4810.7115.6815.6412.3912.4315.6310.85

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790020406080100SE +/- 0.08, N = 380.0993.3063.7663.9080.7080.4463.9592.13

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterRodinia 3.1Test: OpenMP Streamcluster77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79003691215SE +/- 0.011, N = 312.1278.43011.96111.97911.57812.11412.0698.2981. (CXX) g++ options: -O2 -lOpenCL

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: super-resolution-10 - Device: CPU - Executor: Parallel77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790015003000450060007500SE +/- 23.47, N = 3589267154656467857475839467467911. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79001.04232.08463.12694.16925.2115SE +/- 0.00604, N = 34.326453.177494.481294.475344.174004.319024.632343.17985MIN: 4.15MIN: 3.13MIN: 4.4MIN: 4.38MIN: 4.14MIN: 4.14MIN: 4.54MIN: 3.131. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 19, Long Mode - Compression Speed77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79001224364860SE +/- 0.03, N = 346.451.635.935.946.845.835.751.01. (CC) gcc options: -O3 -pthread -lz -llzma

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.4Encoder Mode: Preset 12 - Input: Bosphorus 4K77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79004080120160200SE +/- 0.28, N = 3150.39181.98127.83128.58152.05149.54127.22183.701. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.1Video Input: Bosphorus 4K - Video Preset: Ultra Fast77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79001224364860SE +/- 0.11, N = 346.8454.7138.1738.0447.0146.8937.9654.601. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.11Encoder Speed: 277007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79001530456075SE +/- 0.03, N = 356.4447.1067.6967.8155.9956.5767.8247.231. (CXX) g++ options: -O3 -fPIC -lm

Y-Cruncher

Y-Cruncher is a multi-threaded Pi benchmark capable of computing Pi to trillions of digits. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterY-Cruncher 0.7.10.9513Pi Digits To Calculate: 1B77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900816243240SE +/- 0.10, N = 328.0323.3232.9933.0827.8928.0033.0423.30
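
Y-Cruncher itself is a heavily optimized multi-threaded binary. Purely as a conceptual illustration of computing Pi to a fixed digit count, a single-threaded Python sketch with mpmath (at a far smaller precision) would look like:

    from mpmath import mp

    mp.dps = 10_000                      # decimal digits, vastly fewer than the 500M/1B above
    digits = mp.nstr(mp.pi, 10_000)
    print(digits[:60], "...")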

Timed PHP Compilation

This test times how long it takes to build PHP. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed PHP Compilation 8.1.9Time To Compile77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79001326395265SE +/- 0.29, N = 348.9339.7455.8056.4148.3448.7756.3340.80

GPAW

GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterGPAW 22.1Input: Carbon Nanotube77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790050100150200250SE +/- 0.83, N = 3209.08169.05235.98237.47207.91209.84239.33168.931. (CC) gcc options: -shared -fwrapv -O2 -lxc -lblas -lmpi
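
GPAW is itself a Python code, so the Carbon Nanotube input is essentially a script along these lines; the tube geometry and calculator settings below are illustrative assumptions, not the test profile's exact input:

    from ase.build import nanotube
    from gpaw import GPAW

    atoms = nanotube(6, 6, length=4)            # assumed (6,6) tube, 4 unit cells
    atoms.center(vacuum=4.0, axis=(0, 1))       # vacuum perpendicular to the tube axis
    atoms.calc = GPAW(mode='lcao', basis='dzp', txt='nanotube.txt')
    print(atoms.get_potential_energy())         # triggers the DFT calculation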

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterVP9 libvpx Encoding 1.10.0Speed: Speed 5 - Input: Bosphorus 4K77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900714212835SE +/- 0.11, N = 330.1624.3621.9622.3629.0129.1921.3123.991. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.1Video Input: Bosphorus 1080p - Video Preset: Medium77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79001428425670SE +/- 0.06, N = 358.5663.1744.6844.6858.8658.7644.6563.161. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Darktable

Darktable is an open-source photography / workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterDarktable 3.8.1Test: Server Rack - Acceleration: CPU-only77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79000.04010.08020.12030.16040.2005SE +/- 0.001, N = 30.1560.1350.1770.1760.1480.1580.1780.126

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900246810SE +/- 0.03190, N = 37.346685.215727.111217.095157.080487.331747.062695.22674MIN: 7MIN: 5.15MIN: 7MIN: 7MIN: 6.98MIN: 6.99MIN: 6.99MIN: 5.151. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: LU.C77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790010K20K30K40K50KSE +/- 16.61, N = 343910.5344535.6031737.6131784.5244080.0843900.9131801.6944554.531. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: bertsquad-12 - Device: CPU - Executor: Parallel77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900170340510680850SE +/- 0.60, N = 36677725525556766685507671. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.4Encoder Mode: Preset 13 - Input: Bosphorus 4K77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79004080120160200SE +/- 0.38, N = 3162.38189.15136.23136.85162.39161.95136.43190.961. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Timed Wasmer Compilation

This test times how long it takes to compile Wasmer. Wasmer is written in the Rust programming language and is a WebAssembly runtime implementation that supports WASI and Emscripten. This test profile builds Wasmer with the Cranelift and Singlepass compiler features enabled. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Wasmer Compilation 2.3Time To Compile77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79001224364860SE +/- 0.45, N = 344.7037.9150.7750.8144.3344.1853.1337.931. (CC) gcc options: -m64 -ldl -lgcc_s -lutil -lrt -lpthread -lm -lc -pie -nodefaultlibs

Darktable

Darktable is an open-source photography / workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterDarktable 3.8.1Test: Masskrug - Acceleration: CPU-only77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79000.96441.92882.89323.85764.822SE +/- 0.013, N = 33.6903.0714.2204.2323.5843.6524.2863.174

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterXcompact3d Incompact3d 2021-03-11Input: input.i3d 129 Cells Per Direction77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900510152025SE +/- 0.02, N = 319.5915.0720.9620.9919.4419.5620.9415.081. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

Y-Cruncher

Y-Cruncher is a multi-threaded Pi benchmark capable of computing Pi to trillions of digits. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterY-Cruncher 0.7.10.9513Pi Digits To Calculate: 500M77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790048121620SE +/- 0.01, N = 312.7210.8715.0515.0812.6812.7715.0410.85

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79000.35630.71261.06891.42521.7815SE +/- 0.00075, N = 31.272451.143261.580541.583601.234961.272091.574921.14443MIN: 1.16MIN: 1.05MIN: 1.49MIN: 1.52MIN: 1.13MIN: 1.16MIN: 1.5MIN: 1.011. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program, or otherwise on Windows relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterGIMP 2.10.30Test: resize77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79003691215SE +/- 0.062, N = 310.18313.30610.30610.4219.86610.03310.41713.602

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: FT.C77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79005K10K15K20K25KSE +/- 13.02, N = 324004.7424918.3418180.5818201.5523901.6923930.1718217.4324557.851. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: CG.C77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79003K6K9K12K15KSE +/- 33.87, N = 39535.4311923.738739.838741.369511.869553.458923.1211860.081. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790020406080100SE +/- 0.00, N = 39510780799494791061. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.1Video Input: Bosphorus 1080p - Video Preset: Ultra Fast77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790050100150200250SE +/- 0.22, N = 3201.63216.49160.28160.45202.80201.88160.21215.981. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Timed GDB GNU Debugger Compilation

This test times how long it takes to build the GNU Debugger (GDB) in a default configuration. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed GDB GNU Debugger Compilation 10.2Time To Compile77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79001224364860SE +/- 0.08, N = 344.6038.6151.5551.4944.0844.5751.9138.67

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900246810SE +/- 0.00408, N = 37.647635.726417.380147.376997.307687.639937.398685.70365MIN: 7.23MIN: 5.61MIN: 7.29MIN: 7.29MIN: 7.2MIN: 7.21MIN: 7.3MIN: 5.581. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Machine Translation EN To DE FP16 - Device: CPU77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79001530456075SE +/- 0.56, N = 349.5460.7166.4066.3754.2750.1464.2461.53MIN: 31.93 / MAX: 62.29MIN: 49.57 / MAX: 69.26MIN: 52.54 / MAX: 72.87MIN: 56.61 / MAX: 70.47MIN: 43.38 / MAX: 65.85MIN: 39.21 / MAX: 63.67MIN: 35.74 / MAX: 76.68MIN: 51.95 / MAX: 117.171. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterVP9 libvpx Encoding 1.10.0Speed: Speed 0 - Input: Bosphorus 1080p77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900612182430SE +/- 0.06, N = 322.4921.1717.2017.3022.9722.4517.2321.391. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

OpenBenchmarking.orgFrames Per Second, More Is BetterVP9 libvpx Encoding 1.10.0Speed: Speed 0 - Input: Bosphorus 4K77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79003691215SE +/- 0.02, N = 310.9610.418.428.6111.1010.998.3210.281. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: Garlicoin77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790010002000300040005000SE +/- 39.25, N = 34593.064566.153598.293579.504711.294553.533534.184390.501. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenFOAM 10Input: drivaerFastback, Small Mesh Size - Mesh Time77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790081624324031.4126.6935.4535.4731.2331.5935.2926.941. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Vehicle Detection FP16 - Device: CPU77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79003691215SE +/- 0.02, N = 38.128.9210.7810.678.628.1410.668.91MIN: 4.51 / MAX: 22.01MIN: 4.89 / MAX: 19.33MIN: 4.84 / MAX: 21.35MIN: 6.39 / MAX: 19.07MIN: 4.21 / MAX: 29.5MIN: 5.28 / MAX: 22.47MIN: 5.63 / MAX: 25.42MIN: 5.72 / MAX: 19.341. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

nekRS

nekRS is an open-source Navier-Stokes solver based on the spectral element method. nekRS supports both CPU and GPU/accelerator execution, though this test profile is currently configured for CPU execution. nekRS is part of the Nek5000 project from the Mathematics and Computer Science (MCS) division at Argonne National Laboratory. This nekRS benchmark is primarily relevant to large core count HPC servers and otherwise may be very time consuming. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFLOP/s, More Is BetternekRS 22.0Input: TurboPipe Periodic77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790014000M28000M42000M56000M70000MSE +/- 29429369.31, N = 355524300000648778000004980863333349735600000560084000005554820000049717500000659745000001. (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -lmpi_cxx -lmpi

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: mobilenet77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900246810SE +/- 0.00, N = 37.398.676.896.906.547.127.138.40MIN: 6.93 / MAX: 8.47MIN: 8.58 / MAX: 9.22MIN: 6.85 / MAX: 7.4MIN: 6.86 / MAX: 7.51MIN: 6.46 / MAX: 8.01MIN: 6.65 / MAX: 8.21MIN: 7.05 / MAX: 8.48MIN: 8.31 / MAX: 8.861. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 2021.01.31Threads: 8 - Buffer Length: 256 - Filter Length: 5777007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900160M320M480M640M800MSE +/- 6183407.06, N = 47563400007394300005732975005835000007593100007595500005857700007532500001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Face Detection FP16-INT8 - Device: CPU77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790080160240320400SE +/- 0.45, N = 3293.37331.39386.64386.47292.09294.43379.73330.58MIN: 221.79 / MAX: 312MIN: 290.18 / MAX: 342.26MIN: 366.98 / MAX: 393.57MIN: 369.88 / MAX: 390.09MIN: 151.29 / MAX: 340.73MIN: 255.5 / MAX: 313.05MIN: 333.61 / MAX: 390.72MIN: 304.74 / MAX: 342.31. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.4Encoder Mode: Preset 4 - Input: Bosphorus 1080p77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79003691215SE +/- 0.02, N = 312.1113.4610.3010.3012.1312.1010.2313.491. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.4Encoder Mode: Preset 12 - Input: Bosphorus 1080p77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900150300450600750SE +/- 2.69, N = 3609.82687.93524.24528.45616.10615.79522.04649.491. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 3 - Compression Speed77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790013002600390052006500SE +/- 11.07, N = 35055.36230.74776.64772.95126.45140.04863.46289.11. (CC) gcc options: -O3 -pthread -lz -llzma

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: yolov4 - Device: CPU - Executor: Parallel77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900110220330440550SE +/- 0.17, N = 34484953813784424453824981. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Vehicle Detection FP16-INT8 - Device: CPU77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79001.32752.6553.98255.316.6375SE +/- 0.05, N = 34.555.155.905.804.484.545.785.15MIN: 2.74 / MAX: 17.33MIN: 3.14 / MAX: 13.79MIN: 3.49 / MAX: 14.2MIN: 3.43 / MAX: 7.38MIN: 2.84 / MAX: 12.69MIN: 2.81 / MAX: 17.18MIN: 3.69 / MAX: 20.19MIN: 3.05 / MAX: 67.861. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Weld Porosity Detection FP16 - Device: CPU77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900246810SE +/- 0.05, N = 35.816.537.577.595.775.827.576.53MIN: 3.21 / MAX: 17.52MIN: 3.39 / MAX: 14.56MIN: 3.89 / MAX: 15.51MIN: 3.97 / MAX: 15.26MIN: 3.07 / MAX: 13.01MIN: 3.02 / MAX: 17.51MIN: 3.93 / MAX: 20.27MIN: 3.38 / MAX: 14.891. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.1Video Input: Bosphorus 1080p - Video Preset: Very Fast77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900306090120150SE +/- 0.03, N = 3111.46118.7891.3091.22111.63111.5691.18118.711. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900510152025SE +/- 0.03, N = 316.9918.3221.8621.8616.8616.9721.7818.25

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Face Detection FP16 - Device: CPU77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900160320480640800SE +/- 1.54, N = 3578.82651.18746.94746.15576.13581.34746.14651.71MIN: 394.1 / MAX: 612.37MIN: 573.67 / MAX: 673.86MIN: 714.17 / MAX: 765.41MIN: 724.14 / MAX: 766.22MIN: 505.33 / MAX: 597.67MIN: 454.21 / MAX: 613.21MIN: 723.99 / MAX: 764.64MIN: 621.09 / MAX: 672.911. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

Neural Magic DeepSparse

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79001326395265SE +/- 0.07, N = 358.8454.5845.7445.7459.3058.9145.9154.78

Darktable

Darktable is an open-source photography / workflow application. This test will use any system-installed Darktable program, or on Windows will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterDarktable 3.8.1Test: Server Room - Acceleration: CPU-only77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79000.73891.47782.21672.95563.6945SE +/- 0.036, N = 32.8942.5763.2433.2502.8312.9023.2842.539

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: squeezenet_ssd77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79003691215SE +/- 0.01, N = 310.3311.749.129.179.0910.029.2011.73MIN: 9.5 / MAX: 11.46MIN: 11.43 / MAX: 21.52MIN: 8.99 / MAX: 9.91MIN: 9.02 / MAX: 9.75MIN: 8.76 / MAX: 10.49MIN: 9.18 / MAX: 19.45MIN: 8.98 / MAX: 10.63MIN: 11.48 / MAX: 12.691. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Person Detection FP16 - Device: CPU77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790030060090012001500SE +/- 5.45, N = 31015.271153.461309.261293.341016.681032.671303.001159.38MIN: 718.11 / MAX: 1124.23MIN: 731.93 / MAX: 1335.35MIN: 749.1 / MAX: 1427.46MIN: 914.71 / MAX: 1411.42MIN: 897.5 / MAX: 1118.4MIN: 975.43 / MAX: 1129.72MIN: 1221.08 / MAX: 1432.12MIN: 1036.64 / MAX: 1295.051. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: resnet1877007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900246810SE +/- 0.09, N = 36.967.165.655.565.786.035.687.13MIN: 6.51 / MAX: 8.09MIN: 7 / MAX: 7.92MIN: 5.5 / MAX: 6.42MIN: 5.51 / MAX: 6.33MIN: 5.57 / MAX: 12.93MIN: 5.61 / MAX: 7.05MIN: 5.51 / MAX: 7.09MIN: 6.99 / MAX: 7.931. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterRodinia 3.1Test: OpenMP HotSpot3D77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79001530456075SE +/- 0.87, N = 361.6352.0766.5757.1153.7653.9362.4758.201. (CXX) g++ options: -O2 -lOpenCL

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.4Encoder Mode: Preset 13 - Input: Bosphorus 1080p77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900160320480640800SE +/- 0.75, N = 3668.09739.51578.51580.00677.96670.53579.19728.991. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79000.16920.33840.50760.67680.846SE +/- 0.000677, N = 30.5998880.5906330.7519350.7519070.5886390.6002440.7500020.595211MIN: 0.56MIN: 0.54MIN: 0.73MIN: 0.73MIN: 0.56MIN: 0.55MIN: 0.72MIN: 0.541. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Person Detection FP32 - Device: CPU77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790030060090012001500SE +/- 6.34, N = 31034.061179.601306.121298.341039.001032.401315.781190.80MIN: 699.51 / MAX: 1128.89MIN: 1028.22 / MAX: 1530.8MIN: 773.95 / MAX: 1444.66MIN: 960.85 / MAX: 1409.2MIN: 913.61 / MAX: 1111.36MIN: 820.72 / MAX: 1126.15MIN: 728.56 / MAX: 1437.75MIN: 1049.79 / MAX: 1286.081. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

Timed CPython Compilation

This test times how long it takes to build the reference Python implementation, CPython, with optimizations and LTO enabled for a release build. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed CPython Compilation 3.10.6Build Configuration: Default77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79004812162015.1813.4616.8816.8014.9415.1816.9813.58
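
For reference, the kind of optimized/LTO release configuration this test times looks roughly like the following; the exact configure flags used by the test profile are an assumption here:

    import subprocess

    # PGO ("--enable-optimizations") plus LTO release build of CPython, then a parallel make.
    subprocess.run(["./configure", "--enable-optimizations", "--with-lto"], check=True)
    subprocess.run(["make", "-j16"], check=True)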

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance with x265. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is Betterx265 3.4Video Input: Bosphorus 1080p77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790020406080100SE +/- 0.55, N = 387.56103.7382.5783.8288.8687.6082.61102.841. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.11Encoder Speed: 10, Lossless77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79000.97921.95842.93763.91684.896SE +/- 0.019, N = 33.8723.5014.3254.3523.8973.8804.3413.6211. (CXX) g++ options: -O3 -fPIC -lm

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
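
A minimal sketch of the compression-speed side of this test using the third-party zstandard Python bindings (an assumption of this example, not something the test profile itself uses); long-mode window settings are omitted here and only the compression level is set:

    import time
    import zstandard

    data = open("FreeBSD-12.2-RELEASE-amd64-memstick.img", "rb").read()

    cctx = zstandard.ZstdCompressor(level=8)
    start = time.perf_counter()
    compressed = cctx.compress(data)
    elapsed = time.perf_counter() - start
    print(f"compress: {len(data) / elapsed / 1e6:.1f} MB/s, ratio {len(data) / len(compressed):.2f}")

    dctx = zstandard.ZstdDecompressor()
    start = time.perf_counter()
    dctx.decompress(compressed)
    print(f"decompress: {len(data) / (time.perf_counter() - start) / 1e6:.1f} MB/s")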

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 8, Long Mode - Compression Speed77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900400800120016002000SE +/- 2.90, N = 31445.11637.91326.31319.11454.31443.81320.81636.71. (CC) gcc options: -O3 -pthread -lz -llzma

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: yolov4-tiny77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790048121620SE +/- 0.12, N = 313.1214.0912.0312.1911.4012.0612.3413.70MIN: 12.22 / MAX: 14.69MIN: 13.9 / MAX: 14.24MIN: 11.74 / MAX: 13.04MIN: 11.76 / MAX: 12.93MIN: 11.18 / MAX: 13.15MIN: 11.25 / MAX: 14.07MIN: 11.85 / MAX: 14.12MIN: 13.55 / MAX: 14.361. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

DaCapo Benchmark

This test runs the DaCapo Benchmarks, which are written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.
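
A hedged sketch of how a single DaCapo workload is launched, assuming a local copy of the 9.12-MR1 jar (the jar file name is an assumption); DaCapo prints its own elapsed msec figure, which is what gets reported, while the wall time below is only an approximation:

    import subprocess, time

    JAR = "dacapo-9.12-MR1-bach.jar"   # assumed jar name for the 9.12-MR1 release
    start = time.perf_counter()
    subprocess.run(["java", "-jar", JAR, "tradebeans"], check=True)
    print(f"wall time: {(time.perf_counter() - start) * 1000:.0f} msec")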

OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 9.12-MR1Java Test: Tradebeans77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900400800120016002000SE +/- 19.03, N = 2013711655146113591403140714241676

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: resnet5077007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79003691215SE +/- 0.04, N = 311.5212.2011.0611.119.9110.3411.1112.09MIN: 10.79 / MAX: 13.36MIN: 12.02 / MAX: 13.2MIN: 10.94 / MAX: 11.76MIN: 11.03 / MAX: 11.66MIN: 9.79 / MAX: 11.38MIN: 9.67 / MAX: 12.24MIN: 10.95 / MAX: 12.56MIN: 11.98 / MAX: 13.181. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: vision_transformer77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790020406080100SE +/- 0.04, N = 3100.1393.50110.17110.1099.54100.43110.6790.21MIN: 98.97 / MAX: 104.21MIN: 93.19 / MAX: 97.76MIN: 109.46 / MAX: 111.5MIN: 109.45 / MAX: 116.06MIN: 98.35 / MAX: 108.29MIN: 98.86 / MAX: 105.15MIN: 109.86 / MAX: 120.66MIN: 89.92 / MAX: 96.81. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program where available; on Windows it relies upon a pre-packaged binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterGIMP 2.10.30Test: unsharp-mask77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79003691215SE +/- 0.02, N = 310.6712.3810.8010.8010.3110.5410.7412.63

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.
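
A minimal GNU Radio flowgraph in the spirit of the Signal Source (Cosine) test, offered as an illustration rather than the test profile's actual flowgraph: a cosine source feeds a head block (to bound the run) and a null sink, and throughput is derived from the number of 4-byte float samples pushed through:

    import time
    from gnuradio import gr, blocks, analog

    SAMPLES = 50_000_000                       # arbitrary sample count for the illustration
    tb = gr.top_block()
    src = analog.sig_source_f(32000, analog.GR_COS_WAVE, 1000, 1.0)
    head = blocks.head(gr.sizeof_float, SAMPLES)
    sink = blocks.null_sink(gr.sizeof_float)
    tb.connect(src, head, sink)

    start = time.perf_counter()
    tb.run()
    print(f"{SAMPLES * gr.sizeof_float / (time.perf_counter() - start) / 2**20:.1f} MiB/s")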

OpenBenchmarking.orgMiB/s, More Is BetterGNU RadioTest: Signal Source (Cosine)77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790013002600390052006500SE +/- 6.70, N = 86105.95014.05865.65870.76084.96088.95846.25162.01. 3.10.1.1

OpenVINO

This is a test of Intel OpenVINO, a toolkit for deploying neural network models, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Person Vehicle Bike Detection FP16 - Device: CPU77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79001.29382.58763.88145.17526.469SE +/- 0.03, N = 34.865.445.675.714.764.795.755.44MIN: 3.43 / MAX: 7.38MIN: 3.77 / MAX: 13.94MIN: 4.46 / MAX: 8.96MIN: 4.14 / MAX: 17.41MIN: 3.73 / MAX: 12.41MIN: 3.62 / MAX: 17.47MIN: 4.12 / MAX: 15.9MIN: 3.81 / MAX: 14.781. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: MG.C77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79005K10K15K20K25KSE +/- 5.90, N = 323860.3125610.9621442.6721352.0124079.9923843.5121396.5925644.031. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

OpenVINO

This is a test of Intel OpenVINO, a toolkit for deploying neural network models, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79000.09680.19360.29040.38720.484SE +/- 0.00, N = 30.380.430.360.360.380.370.360.43MIN: 0.23 / MAX: 13MIN: 0.26 / MAX: 9.51MIN: 0.22 / MAX: 8.48MIN: 0.22 / MAX: 7.98MIN: 0.24 / MAX: 2.35MIN: 0.23 / MAX: 12.31MIN: 0.22 / MAX: 12.84MIN: 0.26 / MAX: 8.941. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

DaCapo Benchmark

This test runs the DaCapo Benchmarks, which are written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 9.12-MR1Java Test: Tradesoap77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7600Ryzen 9 79005001000150020002500SE +/- 19.27, N = 42015193622422154212822011880

Java Test: Tradesoap

Ryzen 7 7700: The test quit with a non-zero exit status.

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79001632486480SE +/- 0.12, N = 369.6174.1062.3862.7768.9169.6862.5774.15

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.
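
A hedged sketch of the distinction between the "Standard" and "Parallel" executors using the onnxruntime Python API; the model path and input tensor below are placeholders, since the real test pulls GPT-2 from the ONNX Zoo with its own input definition:

    import numpy as np
    import onnxruntime as ort

    opts = ort.SessionOptions()
    opts.execution_mode = ort.ExecutionMode.ORT_PARALLEL   # "Parallel"; ORT_SEQUENTIAL maps to "Standard"
    sess = ort.InferenceSession("gpt2.onnx", sess_options=opts,
                                providers=["CPUExecutionProvider"])

    tokens = np.random.randint(0, 50257, size=(1, 8), dtype=np.int64)   # placeholder token ids
    outputs = sess.run(None, {sess.get_inputs()[0].name: tokens})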

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: GPT-2 - Device: CPU - Executor: Standard77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79002K4K6K8K10KSE +/- 114.75, N = 12843199858932920492548444862397481. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: SP.C77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79003K6K9K12K15KSE +/- 8.95, N = 310989.4112794.7511043.4811044.4611060.8010954.1111031.3412761.881. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-Stream77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79001632486480SE +/- 0.31, N = 364.1770.3260.2960.3363.5063.6760.4170.33

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterVP9 libvpx Encoding 1.10.0Speed: Speed 5 - Input: Bosphorus 1080p77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79001224364860SE +/- 0.09, N = 354.4946.8852.6852.0454.4353.3852.0346.831. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

OpenVINO

This is a test of Intel OpenVINO, a toolkit for deploying neural network models, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79000.16430.32860.49290.65720.8215SE +/- 0.00, N = 30.640.730.630.630.630.640.630.72MIN: 0.36 / MAX: 12.27MIN: 0.39 / MAX: 9.37MIN: 0.34 / MAX: 8.02MIN: 0.39 / MAX: 1.84MIN: 0.38 / MAX: 8.56MIN: 0.37 / MAX: 13.2MIN: 0.34 / MAX: 12.67MIN: 0.4 / MAX: 8.981. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Weld Porosity Detection FP16-INT8 - Device: CPU77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900246810SE +/- 0.01, N = 35.846.555.685.685.845.845.676.54MIN: 3.2 / MAX: 17.68MIN: 3.66 / MAX: 14.89MIN: 3.14 / MAX: 13.67MIN: 3.04 / MAX: 13.81MIN: 3.13 / MAX: 13.02MIN: 3.11 / MAX: 18.54MIN: 3.77 / MAX: 11.12MIN: 3.68 / MAX: 16.141. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790020406080100SE +/- 0.16, N = 386.9096.5584.2684.2286.9287.6784.0196.89

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: IS.D77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790030060090012001500SE +/- 6.92, N = 31452.371498.341322.961299.851401.101428.571304.941464.481. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterTNN 0.3Target: CPU - Model: SqueezeNet v277007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79001122334455SE +/- 0.35, N = 1542.3341.5247.6543.9242.3342.0643.9242.37MIN: 42.27 / MAX: 42.4MIN: 41.17 / MAX: 42.01MIN: 43.99 / MAX: 49.87MIN: 43.9 / MAX: 44MIN: 42.3 / MAX: 42.43MIN: 41.98 / MAX: 42.14MIN: 43.87 / MAX: 44.14MIN: 42.26 / MAX: 42.441. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

DaCapo Benchmark

This test runs the DaCapo Benchmarks, which are written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 9.12-MR1Java Test: H277007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900400800120016002000SE +/- 32.56, N = 2018281748187117001877194218811791

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79001122334455SE +/- 0.03, N = 343.3847.4742.0042.2043.2643.4641.7647.12

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
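
A sketch of how an MP/s figure is derived for the cwebp runs, assuming cwebp is installed and using a placeholder file name for the 6000x4000 source image; the non-default "Encode Settings" add flags such as -q 100, -lossless, and -m 6 on top of this baseline invocation (that mapping is an assumption here):

    import subprocess, time

    SRC = "sample_6000x4000.jpg"   # hypothetical name for the sample input image
    PIXELS = 6000 * 4000

    start = time.perf_counter()
    subprocess.run(["cwebp", SRC, "-o", "out.webp"], check=True)
    print(f"{PIXELS / (time.perf_counter() - start) / 1e6:.2f} MP/s")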

OpenBenchmarking.orgMP/s, More Is BetterWebP Image Encode 1.2.4Encode Settings: Default77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900612182430SE +/- 0.03, N = 326.9427.2425.8325.9526.9426.9425.9724.001. (CC) gcc options: -fvisibility=hidden -O2 -lm

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900714212835SE +/- 0.02, N = 327.3129.6726.4326.3527.2427.3826.3429.84

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL libjxl 0.7Input: JPEG - Quality: 10077007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79000.21150.4230.63450.8461.0575SE +/- 0.02, N = 60.910.920.850.830.880.870.940.861. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

Algebraic Multi-Grid Benchmark

AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. The driver provided with AMG builds linear systems for various 3-dimensional problems. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFigure Of Merit, More Is BetterAlgebraic Multi-Grid Benchmark 1.277007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790090M180M270M360M450MSE +/- 42003.58, N = 33857551004336033003926537333919451003915619003859116003925822004356400001. (CC) gcc options: -lparcsr_ls -lparcsr_mv -lseq_mv -lIJ_mv -lkrylov -lHYPRE_utilities -lm -fopenmp -lmpi

Timed Apache Compilation

This test times how long it takes to build the Apache HTTPD web server. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Apache Compilation 2.4.41Time To Compile77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790048121620SE +/- 0.01, N = 314.7114.1015.7715.8414.6214.6915.6814.04

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900150300450600750SE +/- 0.30, N = 3651.14705.41630.99630.55649.26651.97630.90704.98

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine driven by neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNodes Per Second, More Is BetterLeelaChessZero 0.28Backend: Eigen77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900400800120016002000SE +/- 14.96, N = 9154216071495147515401548149916501. (CXX) g++ options: -flto -pthread

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900150300450600750SE +/- 0.22, N = 3650.21703.74630.93629.80647.74650.70630.50703.55

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL libjxl 0.7Input: JPEG - Quality: 9077007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79003691215SE +/- 0.01, N = 311.5712.0010.9310.7611.6111.4310.9411.781. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL libjxl 0.7Input: PNG - Quality: 9077007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79003691215SE +/- 0.00, N = 311.9012.3211.2111.0711.9211.7611.2212.091. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL libjxl 0.7Input: JPEG - Quality: 8077007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79003691215SE +/- 0.02, N = 311.8012.2011.1310.9711.8211.6111.1211.981. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: vgg1677007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900612182430SE +/- 0.03, N = 326.9124.2025.8525.8824.3525.4325.9124.24MIN: 25.94 / MAX: 29.02MIN: 23.94 / MAX: 25.17MIN: 25.64 / MAX: 31.08MIN: 25.66 / MAX: 26.69MIN: 24.09 / MAX: 34.85MIN: 24.43 / MAX: 27.56MIN: 25.66 / MAX: 27.54MIN: 23.97 / MAX: 25.11. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program where available; on Windows it relies upon a pre-packaged binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterGIMP 2.10.30Test: auto-levels77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79003691215SE +/- 0.012, N = 39.52210.1509.8099.8019.2479.4889.77910.279

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL libjxl 0.7Input: PNG - Quality: 8077007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79003691215SE +/- 0.01, N = 312.0912.5311.4111.2812.1111.9611.4412.271. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
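
The GB/s figures below are bytes of JSON parsed per second; a hedged illustration of that arithmetic using Python's standard json module (not simdjson's own API, and with a placeholder document name):

    import json, time

    raw = open("twitter.json", "rb").read()   # placeholder sample document
    ITERATIONS = 100

    start = time.perf_counter()
    for _ in range(ITERATIONS):
        json.loads(raw)
    elapsed = time.perf_counter() - start
    print(f"{len(raw) * ITERATIONS / elapsed / 1e9:.2f} GB/s")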

OpenBenchmarking.orgGB/s, More Is Bettersimdjson 2.0Throughput Test: TopTweet77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79003691215SE +/- 0.02, N = 39.8610.069.519.509.959.929.1110.101. (CXX) g++ options: -O3

Ngspice

Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.
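
A minimal sketch of a batch-mode run, assuming ngspice is installed; the netlist file name is a placeholder for whichever ISCAS 85 circuit (here C7552) the profile supplies:

    import subprocess, time

    start = time.perf_counter()
    subprocess.run(["ngspice", "-b", "c7552.cir"], check=True)   # -b = batch mode, no interactive shell
    print(f"{time.perf_counter() - start:.2f} s")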

OpenBenchmarking.orgSeconds, Fewer Is BetterNgspice 34Circuit: C755277007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79001632486480SE +/- 0.33, N = 370.9166.5973.0871.9767.7871.9372.8765.951. (CC) gcc options: -O0 -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lXft -lfontconfig -lXrender -lfreetype -lSM -lICE

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: GPT-2 - Device: CPU - Executor: Parallel77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79002K4K6K8K10KSE +/- 23.08, N = 3748970257683767977837765774571981. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

Timed CPython Compilation

This test times how long it takes to build the reference Python implementation, CPython, with optimizations and LTO enabled for a release build. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed CPython Compilation 3.10.6Build Configuration: Released Build, PGO + LTO Optimized77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79004080120160200192.36183.09201.93201.95190.87192.75202.76183.17

DaCapo Benchmark

This test runs the DaCapo Benchmarks, which are written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 9.12-MR1Java Test: Jython77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79005001000150020002500SE +/- 11.02, N = 424592294253825402540236025382348

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 3, Long Mode - Compression Speed77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900400800120016002000SE +/- 3.73, N = 32007.31825.41942.81937.12012.92009.01936.91824.81. (CC) gcc options: -O3 -pthread -lz -llzma

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: alexnet77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79001.05082.10163.15244.20325.254SE +/- 0.00, N = 34.674.624.244.244.254.664.434.62MIN: 4.37 / MAX: 5.86MIN: 4.55 / MAX: 5.18MIN: 4.21 / MAX: 4.84MIN: 4.22 / MAX: 4.83MIN: 4.19 / MAX: 5.65MIN: 4.36 / MAX: 5.78MIN: 4.39 / MAX: 5MIN: 4.55 / MAX: 5.511. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.
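
A hedged sketch of invoking a single benchmark from the suite via the pyperformance command-line tool (the benchmark-selection flag is assumed from its CLI and is not taken from the test profile):

    import subprocess

    # Run only the crypto_pyaes micro-benchmark from the suite.
    subprocess.run(["pyperformance", "run", "--benchmarks=crypto_pyaes"], check=True)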

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: crypto_pyaes77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79001530456075SE +/- 0.18, N = 362.360.663.663.060.260.666.160.4

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMiB/s, More Is BetterGNU RadioTest: FIR Filter77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790030060090012001500SE +/- 3.50, N = 81485.91374.71436.81448.51498.31508.71436.21402.11. 3.10.1.1

OpenBenchmarking.orgMiB/s, More Is BetterGNU RadioTest: Hilbert Transform77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79002004006008001000SE +/- 5.10, N = 8759.8741.4738.4725.5775.5760.3709.6716.61. 3.10.1.1

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterTNN 0.3Target: CPU - Model: DenseNet77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79005001000150020002500SE +/- 0.40, N = 32067.232105.732235.312228.872066.072069.752237.082112.15MIN: 2021.19 / MAX: 2109.88MIN: 2070.42 / MAX: 2146.74MIN: 2217.92 / MAX: 2257.35MIN: 2216.04 / MAX: 2250MIN: 2022.59 / MAX: 2107.46MIN: 2024.37 / MAX: 2112.67MIN: 2217.88 / MAX: 2258.4MIN: 2081.33 / MAX: 2147.821. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 8, Long Mode - Decompression Speed77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790014002800420056007000SE +/- 3.15, N = 36311.66404.16208.36188.26347.16336.66200.66696.21. (CC) gcc options: -O3 -pthread -lz -llzma

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 2021.01.31Threads: 2 - Buffer Length: 256 - Filter Length: 5777007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790040M80M120M160M200MSE +/- 1746140.93, N = 72080100002075200001968728571975100002057600002070400001999600001923100001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 3, Long Mode - Decompression Speed77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790014002800420056007000SE +/- 67.24, N = 36098.46167.26086.85943.06118.56083.36167.66423.91. (CC) gcc options: -O3 -pthread -lz -llzma

Ngspice

Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterNgspice 34Circuit: C267077007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790020406080100SE +/- 0.07, N = 379.4074.2579.9879.5577.6180.1979.7975.531. (CC) gcc options: -O0 -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lXft -lfontconfig -lXrender -lfreetype -lSM -lICE

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: nbody77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790020406080100SE +/- 1.20, N = 379.677.483.581.977.579.479.978.4

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgScore, More Is BetterPHPBench 0.8.1PHP Benchmark Suite77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900300K600K900K1200K1500KSE +/- 5632.48, N = 312331981227889114322511495351184135121801911542401186995

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterWebP Image Encode 1.2.4Encode Settings: Quality 100, Lossless77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79000.49730.99461.49191.98922.4865SE +/- 0.00, N = 32.212.192.092.082.182.172.052.211. (CC) gcc options: -fvisibility=hidden -O2 -lm

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 19 - Decompression Speed77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790012002400360048006000SE +/- 7.45, N = 35364.15452.45074.25072.45220.65185.25058.05436.11. (CC) gcc options: -O3 -pthread -lz -llzma

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 8 - Decompression Speed77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790013002600390052006500SE +/- 7.12, N = 35956.56277.15834.25826.36198.96188.36021.36049.21. (CC) gcc options: -O3 -pthread -lz -llzma

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine driven by neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNodes Per Second, More Is BetterLeelaChessZero 0.28Backend: BLAS77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790030060090012001500SE +/- 21.50, N = 3158816011514155415191535152016231. (CXX) g++ options: -flto -pthread

JPEG XL Decoding libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression than legacy JPEG. This test profile measures JPEG XL decode performance to a PNG output file; the pts/jpexl test profile covers encode performance. The JPEG XL encoding/decoding is done using the libjxl codebase. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL Decoding libjxl 0.7CPU Threads: 177007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79001632486480SE +/- 0.09, N = 371.3672.6169.9467.9371.9070.4469.3671.36

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 19, Long Mode - Decompression Speed77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790012002400360048006000SE +/- 54.15, N = 35379.05206.55133.05239.25130.45186.45039.85176.61. (CC) gcc options: -O3 -pthread -lz -llzma

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgGB/s, More Is Bettersimdjson 2.0Throughput Test: DistinctUserID77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79003691215SE +/- 0.11, N = 39.859.729.349.409.839.779.469.961. (CXX) g++ options: -O3

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 2021.01.31Threads: 1 - Buffer Length: 256 - Filter Length: 5777007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790020M40M60M80M100MSE +/- 98206.13, N = 3104350000105440000100353333999000001043000001042600001000100001064600001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

QuantLib

QuantLib is an open-source library/framework for quantitative finance, covering modeling, trading, and risk management scenarios. QuantLib is written in C++ with Boost, and its built-in benchmark reports the QuantLib Benchmark Index score. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMFLOPS, More Is BetterQuantLib 1.2177007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79009001800270036004500SE +/- 40.10, N = 64131.84141.94180.63927.24090.74098.83966.63971.21. (CXX) g++ options: -O3 -march=native -rdynamic

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: go77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900306090120150SE +/- 0.00, N = 3128125132132127126133125

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMiB/s, More Is BetterGNU RadioTest: IIR Filter77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900120240360480600SE +/- 3.03, N = 8561.1560.7541.5541.8570.8575.8544.3561.21. 3.10.1.1

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: float77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79001326395265SE +/- 0.12, N = 357.256.159.458.756.156.458.855.9

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterWebP Image Encode 1.2.4Encode Settings: Quality 100, Lossless, Highest Compression77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79000.19350.3870.58050.7740.9675SE +/- 0.00, N = 30.840.850.820.810.850.840.820.861. (CC) gcc options: -fvisibility=hidden -O2 -lm

Crafty

This is a performance test of Crafty, an advanced open-source chess engine. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNodes Per Second, More Is BetterCrafty 25.2Elapsed Time77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79003M6M9M12M15MSE +/- 122230.58, N = 814574274148048871394918814053268145725551451300814170376146468271. (CC) gcc options: -pthread -lstdc++ -fprofile-use -lm

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterWebP Image Encode 1.2.4Encode Settings: Quality 100, Highest Compression77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79001.19252.3853.57754.775.9625SE +/- 0.00, N = 35.205.305.005.015.195.205.015.301. (CC) gcc options: -fvisibility=hidden -O2 -lm

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program where available; on Windows it relies upon a pre-packaged binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterGIMP 2.10.30Test: rotate77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79003691215SE +/- 0.015, N = 39.2979.1829.5099.5058.9719.1829.4469.406

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: 2to377007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79004080120160200SE +/- 0.00, N = 3168167173174177167173167

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: pathlib77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79003691215SE +/- 0.00, N = 310.310.110.610.710.310.410.610.1

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterTNN 0.3Target: CPU - Model: SqueezeNet v1.177007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79004080120160200SE +/- 0.02, N = 3184.21180.38191.09191.09183.89184.08191.08182.77MIN: 184.13 / MAX: 184.34MIN: 180.33 / MAX: 180.52MIN: 190.97 / MAX: 191.36MIN: 191.02 / MAX: 191.19MIN: 183.77 / MAX: 184.05MIN: 183.99 / MAX: 184.18MIN: 191 / MAX: 191.26MIN: 182.69 / MAX: 182.891. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMiB/s, More Is BetterGNU RadioTest: FM Deemphasis Filter77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79002004006008001000SE +/- 2.20, N = 8958.4905.0913.7919.1941.1943.1911.7907.51. 3.10.1.1

PyBench

This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate as to Python's average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyBench 2018-02-16Total For Average Test Times77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900140280420560700SE +/- 2.40, N = 3601592626613598606615602

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 2021.01.31Threads: 4 - Buffer Length: 256 - Filter Length: 5777007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790090M180M270M360M450MSE +/- 265476.51, N = 34015400004110000004006033333981000004072100004012400003889800004026800001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterWebP Image Encode 1.2.4Encode Settings: Quality 10077007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790048121620SE +/- 0.01, N = 316.7416.8116.1316.1216.7916.7816.1217.031. (CC) gcc options: -fvisibility=hidden -O2 -lm

Git

This test measures the time needed to carry out some sample Git operations on an example static repository that happens to be a copy of the GNOME GTK tool-kit repository. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterGitTime To Complete Common Git Commands77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900816243240SE +/- 0.04, N = 332.0531.7233.0733.1932.3732.0833.1331.431. git version 2.34.1

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: regex_compile77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790020406080100SE +/- 0.00, N = 379.579.282.782.379.579.483.078.6

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: django_template77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900612182430SE +/- 0.00, N = 324.824.725.825.824.624.625.724.5

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: raytrace77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790060120180240300SE +/- 0.33, N = 3248246259257247246256247

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL libjxl 0.7Input: PNG - Quality: 10077007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79000.2250.450.6750.91.125SE +/- 0.01, N = 30.981.000.950.961.000.970.960.971. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgGB/s, More Is Bettersimdjson 2.0Throughput Test: LargeRandom77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79000.40730.81461.22191.62922.0365SE +/- 0.00, N = 31.791.791.721.721.791.791.731.811. (CXX) g++ options: -O3

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: json_loads77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79003691215SE +/- 0.03, N = 311.911.812.312.312.111.812.411.8

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: pickle_pure_python77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790050100150200250SE +/- 0.33, N = 3225223232232221225231223

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgGB/s, More Is Bettersimdjson 2.0Throughput Test: PartialTweets77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 7900246810SE +/- 0.00, N = 38.168.247.897.908.178.217.918.271. (CXX) g++ options: -O3

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterTNN 0.3Target: CPU - Model: MobileNet v277007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79004080120160200SE +/- 0.23, N = 3190.92190.74199.06198.87190.37193.77199.19190.75MIN: 190.03 / MAX: 193.11MIN: 189.99 / MAX: 192.23MIN: 198.12 / MAX: 200.3MIN: 198.22 / MAX: 199.53MIN: 189.7 / MAX: 193.98MIN: 192.93 / MAX: 195.31MIN: 198.58 / MAX: 200.1MIN: 190.01 / MAX: 191.511. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

FLAC Audio Encoding

This test times how long it takes to encode a sample WAV file to FLAC audio format ten times using the --best preset settings. Learn more via the OpenBenchmarking.org test page.
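
A sketch of the ten-pass --best encode that this test times, run here via subprocess and with a placeholder WAV file name:

    import subprocess, time

    start = time.perf_counter()
    for i in range(10):
        # --best selects the maximum compression preset; -f overwrites any existing output
        subprocess.run(["flac", "--best", "-f", "-o", f"out_{i}.flac", "sample.wav"], check=True)
    print(f"{time.perf_counter() - start:.2f} s for ten encodes")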

OpenBenchmarking.orgSeconds, Fewer Is BetterFLAC Audio Encoding 1.4WAV To FLAC77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79003691215SE +/- 0.00, N = 511.6911.6512.1012.1011.6711.6712.1011.571. (CXX) g++ options: -O3 -fvisibility=hidden -logg -lm

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: chaos77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79001224364860SE +/- 0.09, N = 353.253.155.255.453.053.255.153.3

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 3 - Decompression Speed77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 790012002400360048006000SE +/- 86.19, N = 35715.15794.75596.15547.85716.25702.75745.45722.71. (CC) gcc options: -O3 -pthread -lz -llzma

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgGB/s, More Is Bettersimdjson 2.0Throughput Test: Kostya77007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79001.30052.6013.90155.2026.5025SE +/- 0.01, N = 35.785.785.565.555.785.775.565.761. (CXX) g++ options: -O3

LAME MP3 Encoding

LAME is an MP3 encoder licensed under the LGPL. This test measures the time required to encode a WAV file to MP3 format. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterLAME MP3 Encoding 3.100WAV To MP377007900Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700Ryzen 7600Ryzen 9 79001.12192.24383.36574.48765.6095SE +/- 0.003, N = 34.8064.7954.9784.9784.7924.8034.9864.8091. (CC) gcc options: -O3 -ffast-math -funroll-loops -fschedule-insns2 -fbranch-count-reg -fforce-addr -pipe -lncurses -lm

DaCapo Benchmark

This test runs the DaCapo Benchmarks, which are written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

Java Test: Eclipse

7700: The test quit with a non-zero exit status.

7900: The test quit with a non-zero exit status.

Ryzen 7600 AMD: The test quit with a non-zero exit status.

AMD 7600: The test quit with a non-zero exit status.

AMD 7700: The test quit with a non-zero exit status.

Ryzen 7 7700: The test quit with a non-zero exit status.

Ryzen 7600: The test quit with a non-zero exit status.

Ryzen 9 7900: The test quit with a non-zero exit status.

325 Results Shown

oneDNN
NCNN
Mobile Neural Network
oneDNN
NCNN:
  CPU - FastestDet
  CPU - shufflenet-v2
ONNX Runtime:
  bertsquad-12 - CPU - Standard
  ArcFace ResNet-100 - CPU - Standard
NCNN
ONNX Runtime
NCNN
Mobile Neural Network
NCNN
Mobile Neural Network:
  squeezenetv1.1
  nasnet
  MobileNetV2_224
C-Ray
NAS Parallel Benchmarks
Stockfish
Mobile Neural Network
OpenSSL
Zstd Compression
OpenSSL
Cpuminer-Opt
NAS Parallel Benchmarks
oneDNN
Cpuminer-Opt
oneDNN
OpenSSL
NCNN
JPEG XL Decoding libjxl
Cpuminer-Opt
OpenVINO
Coremark
Cpuminer-Opt:
  x25x
  scrypt
  Ringcoin
7-Zip Compression
Cpuminer-Opt
Neural Magic DeepSparse
Cpuminer-Opt
IndigoBench
Neural Magic DeepSparse:
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream
Cpuminer-Opt
Neural Magic DeepSparse
Cpuminer-Opt
Stargate Digital Audio Workstation
Xmrig
Blender
IndigoBench
Tachyon
Blender:
  BMW27 - CPU-Only
  Classroom - CPU-Only
Xmrig
Stargate Digital Audio Workstation
asmFish
oneDNN
ASTC Encoder
OpenVINO
Chaos Group V-RAY
ASTC Encoder:
  Fast
  Medium
Stargate Digital Audio Workstation
ASTC Encoder
ONNX Runtime
Neural Magic DeepSparse
OpenVINO
Stargate Digital Audio Workstation
Timed Linux Kernel Compilation
Appleseed
Aircrack-ng
Stargate Digital Audio Workstation
Liquid-DSP
Blender
OpenVINO:
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU
  Weld Porosity Detection FP16-INT8 - CPU
Blender
NAMD
Liquid-DSP
oneDNN:
  Recurrent Neural Network Training - u8s8f32 - CPU
  IP Shapes 3D - bf16bf16bf16 - CPU
Timed LLVM Compilation
Stargate Digital Audio Workstation
oneDNN:
  Recurrent Neural Network Training - bf16bf16bf16 - CPU
  Recurrent Neural Network Training - f32 - CPU
OpenVINO:
  Face Detection FP16 - CPU
  Vehicle Detection FP16-INT8 - CPU
Neural Magic DeepSparse
SVT-HEVC
Rodinia
x265
OpenVINO
Stargate Digital Audio Workstation
oneDNN
Appleseed
oneDNN:
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU
  Recurrent Neural Network Inference - u8s8f32 - CPU
LAMMPS Molecular Dynamics Simulator
SVT-HEVC
Neural Magic DeepSparse
Stargate Digital Audio Workstation
NAS Parallel Benchmarks
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream:
    ms/batch
    items/sec
OpenVINO
Neural Magic DeepSparse:
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream:
    ms/batch
    items/sec
NCNN
OpenVINO
oneDNN:
  Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU
  Deconvolution Batch shapes_1d - u8s8f32 - CPU
Timed MPlayer Compilation
Primesieve
Timed LLVM Compilation
Primesieve
Timed Linux Kernel Compilation
Build2
Rodinia
NAS Parallel Benchmarks
7-Zip Compression
GROMACS
OpenVINO
SVT-VP9
PyPerformance
Timed Godot Game Engine Compilation
Timed FFmpeg Compilation
SVT-HEVC
oneDNN
GNU Radio
SVT-VP9
ONNX Runtime
Mobile Neural Network
SVT-HEVC
Timed Mesa Compilation
Mobile Neural Network
OpenVINO
libavif avifenc
Cpuminer-Opt
x264
oneDNN:
  Deconvolution Batch shapes_3d - f32 - CPU
  Deconvolution Batch shapes_1d - f32 - CPU
OpenFOAM
SVT-VP9:
  VMAF Optimized - Bosphorus 1080p
  PSNR/SSIM Optimized - Bosphorus 1080p
Neural Magic DeepSparse:
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream:
    ms/batch
    items/sec
x264
oneDNN
Appleseed
Kvazaar
SVT-HEVC
oneDNN
Neural Magic DeepSparse:
  CV Detection,YOLOv5s COCO - Synchronous Single-Stream:
    ms/batch
    items/sec
NCNN
ONNX Runtime
SVT-AV1
Darktable
SVT-HEVC
libavif avifenc
Zstd Compression
SVT-AV1
Kvazaar
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream:
    ms/batch
    items/sec
SVT-VP9
Rodinia
SVT-AV1
libavif avifenc
SVT-VP9
Neural Magic DeepSparse:
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream:
    ms/batch
    items/sec
Rodinia
ONNX Runtime
oneDNN
Zstd Compression
SVT-AV1
Kvazaar
libavif avifenc
Y-Cruncher
Timed PHP Compilation
GPAW
VP9 libvpx Encoding
Kvazaar
Darktable
oneDNN
NAS Parallel Benchmarks
ONNX Runtime
SVT-AV1
Timed Wasmer Compilation
Darktable
Xcompact3d Incompact3d
Y-Cruncher
oneDNN
GIMP
NAS Parallel Benchmarks:
  FT.C
  CG.C
ONNX Runtime
Kvazaar
Timed GDB GNU Debugger Compilation
oneDNN
OpenVINO
VP9 libvpx Encoding:
  Speed 0 - Bosphorus 1080p
  Speed 0 - Bosphorus 4K
Cpuminer-Opt
OpenFOAM
OpenVINO
nekRS
NCNN
Liquid-DSP
OpenVINO
SVT-AV1:
  Preset 4 - Bosphorus 1080p
  Preset 12 - Bosphorus 1080p
Zstd Compression
ONNX Runtime
OpenVINO:
  Vehicle Detection FP16-INT8 - CPU
  Weld Porosity Detection FP16 - CPU
Kvazaar
Neural Magic DeepSparse
OpenVINO
Neural Magic DeepSparse
Darktable
NCNN
OpenVINO
NCNN
Rodinia
SVT-AV1
oneDNN
OpenVINO
Timed CPython Compilation
x265
libavif avifenc
Zstd Compression
NCNN
DaCapo Benchmark
NCNN:
  CPU - resnet50
  CPU - vision_transformer
GIMP
GNU Radio
OpenVINO
NAS Parallel Benchmarks
OpenVINO
DaCapo Benchmark
Neural Magic DeepSparse
ONNX Runtime
NAS Parallel Benchmarks
Neural Magic DeepSparse
VP9 libvpx Encoding
OpenVINO:
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU
  Weld Porosity Detection FP16-INT8 - CPU
Neural Magic DeepSparse
NAS Parallel Benchmarks
TNN
DaCapo Benchmark
Neural Magic DeepSparse
WebP Image Encode
Neural Magic DeepSparse
JPEG XL libjxl
Algebraic Multi-Grid Benchmark
Timed Apache Compilation
Neural Magic DeepSparse
LeelaChessZero
Neural Magic DeepSparse
JPEG XL libjxl:
  JPEG - 90
  PNG - 90
  JPEG - 80
NCNN
GIMP
JPEG XL libjxl
simdjson
Ngspice
ONNX Runtime
Timed CPython Compilation
DaCapo Benchmark
Zstd Compression
NCNN
PyPerformance
GNU Radio:
  FIR Filter
  Hilbert Transform
TNN
Zstd Compression
Liquid-DSP
Zstd Compression
Ngspice
PyPerformance
PHPBench
WebP Image Encode
Zstd Compression:
  19 - Decompression Speed
  8 - Decompression Speed
LeelaChessZero
JPEG XL Decoding libjxl
Zstd Compression
simdjson
Liquid-DSP
QuantLib
PyPerformance
GNU Radio
PyPerformance
WebP Image Encode
Crafty
WebP Image Encode
GIMP
PyPerformance:
  2to3
  pathlib
TNN
GNU Radio
PyBench
Liquid-DSP
WebP Image Encode
Git
PyPerformance:
  regex_compile
  django_template
  raytrace
JPEG XL libjxl
simdjson
PyPerformance:
  json_loads
  pickle_pure_python
simdjson
TNN
FLAC Audio Encoding
PyPerformance
Zstd Compression
simdjson
LAME MP3 Encoding