extra new ryzen zen4

Benchmarks for a future article.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2301096-PTS-EXTRANEW19

Test Runs

  Result Identifier   Date Run           Test Duration
  7700                December 31 2022   6 Hours, 29 Minutes
  7900                December 30 2022   5 Hours, 59 Minutes
  Ryzen 7600 AMD      January 03 2023    1 Day, 4 Hours, 44 Minutes
  AMD 7600            January 04 2023    7 Hours, 11 Minutes
  AMD 7700            January 02 2023    6 Hours, 26 Minutes
  Ryzen 7 7700        January 01 2023    6 Hours, 29 Minutes
  Ryzen 7600          January 05 2023    7 Hours, 13 Minutes
  Ryzen 9 7900        December 29 2022   6 Hours



System Details

  Processor:
    7700 / AMD 7700 / Ryzen 7 7700:           AMD Ryzen 7 7700 8-Core @ 5.39GHz (8 Cores / 16 Threads)
    7900 / Ryzen 9 7900:                      AMD Ryzen 9 7900 12-Core @ 5.48GHz (12 Cores / 24 Threads)
    Ryzen 7600 AMD / AMD 7600 / Ryzen 7600:   AMD Ryzen 5 7600 6-Core @ 5.17GHz (6 Cores / 12 Threads)
  Motherboard:         ASUS ROG CROSSHAIR X670E HERO (0805 BIOS)
  Chipset:             AMD Device 14d8
  Memory:              32GB
  Disk:                2000GB Samsung SSD 980 PRO 2TB (a second 2000GB drive was present on some runs)
  Graphics:            AMD Radeon RX 6800 XT 16GB (2575/1000MHz)
  Audio:               AMD Navi 21 HDMI Audio
  Monitor:             ASUS MG28U
  Network:             Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
  OS:                  Ubuntu 22.04
  Kernel:              6.0.0-060000rc1daily20220820-generic (x86_64)
  Desktop:             GNOME Shell 42.2
  Display Server:      X Server 1.21.1.3 + Wayland
  OpenGL:              4.6 Mesa 22.3.0-devel (git-4685385 2022-08-23 jammy-oibaf-ppa) (LLVM 14.0.6 DRM 3.48)
  Vulkan:              1.3.224
  Compiler:            GCC 12.0.1 20220319
  File-System:         ext4
  Screen Resolution:   3840x2160

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-OcsLtf/gcc-12-12-20220319/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-OcsLtf/gcc-12-12-20220319/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: amd-pstate performance (Boost: Enabled) - CPU Microcode: 0xa601203
Java Details: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)
Python Details: Python 3.10.4
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

[Result Overview graph (Phoronix Test Suite): relative performance of the eight result identifiers, from 100% up to roughly 189%, across all tested suites in this comparison, from C-Ray, Mobile Neural Network, Stockfish, OpenSSL, Coremark and IndigoBench through WebP Image Encode, FLAC Audio Encoding and LAME MP3 Encoding.]

[Detailed System Result Table: side-by-side raw values for all eight result identifiers across every test in this comparison; the individual per-test results are presented in the graphs below.]

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better)
SE +/- 0.008119, N = 3
  Ryzen 7600 AMD   1.023050   MIN: 0.95
  Ryzen 7600       1.015800   MIN: 0.95
  AMD 7600         1.010390   MIN: 0.95
  7700             0.962285   MIN: 0.85
  Ryzen 7 7700     0.951769   MIN: 0.84
  AMD 7700         0.916936   MIN: 0.86
  7900             0.388582   MIN: 0.36
  Ryzen 9 7900     0.359992   MIN: 0.33
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: blazeface (ms; fewer is better)
SE +/- 0.00, N = 3
  Ryzen 9 7900     1.40   MIN: 1.36 / MAX: 3.18
  7900             1.39   MIN: 1.37 / MAX: 1.49
  7700             0.59   MIN: 0.56 / MAX: 1.26
  Ryzen 7 7700     0.57   MIN: 0.54 / MAX: 1.6
  AMD 7700         0.54   MAX: 0.72
  Ryzen 7600       0.53   MIN: 0.52 / MAX: 0.66
  AMD 7600         0.53   MIN: 0.52 / MAX: 0.87
  Ryzen 7600 AMD   0.53   MIN: 0.52 / MAX: 0.93
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Mobile Neural Network

MNN, the Mobile Neural Network, is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU threaded version for processor benchmarking, not any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 - Model: mobilenet-v1-1.0 (ms; fewer is better)
SE +/- 0.002, N = 15
  7900             3.701   MIN: 3.63 / MAX: 3.81
  Ryzen 9 7900     3.560   MIN: 3.49 / MAX: 4.01
  Ryzen 7600       2.864   MIN: 2.83 / MAX: 14.09
  Ryzen 7600 AMD   2.854   MIN: 2.82 / MAX: 9.6
  AMD 7600         2.848   MIN: 2.83 / MAX: 3.19
  7700             1.558   MIN: 1.47 / MAX: 4.02
  Ryzen 7 7700     1.518   MIN: 1.44 / MAX: 4.13
  AMD 7700         1.482   MIN: 1.47 / MAX: 3.45
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms; fewer is better)
SE +/- 0.001514, N = 3
  Ryzen 7600       1.652570   MIN: 1.62
  AMD 7600         1.641670   MIN: 1.62
  Ryzen 7600 AMD   1.635430   MIN: 1.61
  Ryzen 7 7700     1.556490   MIN: 1.41
  7700             1.534410   MIN: 1.41
  AMD 7700         1.453490   MIN: 1.41
  Ryzen 9 7900     0.665636   MIN: 0.59
  7900             0.664113   MIN: 0.62
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: FastestDet (ms; fewer is better)
SE +/- 0.01, N = 3
  Ryzen 9 7900     4.17   MIN: 4.14 / MAX: 4.86
  7900             4.16   MIN: 4.11 / MAX: 4.2
  7700             1.90   MIN: 1.83 / MAX: 2.67
  Ryzen 7600       1.85   MIN: 1.84 / MAX: 2.07
  Ryzen 7 7700     1.76   MIN: 1.7 / MAX: 3.08
  AMD 7700         1.72   MIN: 1.7 / MAX: 2.15
  AMD 7600         1.71   MIN: 1.7 / MAX: 1.75
  Ryzen 7600 AMD   1.71   MIN: 1.69 / MAX: 3.07
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20220729 - Target: CPU - Model: shufflenet-v2 (ms; fewer is better)
SE +/- 0.00, N = 3
  Ryzen 9 7900     3.48   MIN: 3.45 / MAX: 3.74
  7900             3.48   MIN: 3.44 / MAX: 3.9
  Ryzen 7600       1.60   MIN: 1.58 / MAX: 1.99
  AMD 7600         1.58   MIN: 1.57 / MAX: 1.93
  Ryzen 7600 AMD   1.58   MIN: 1.56 / MAX: 2.18
  7700             1.53   MIN: 1.47 / MAX: 3.01
  Ryzen 7 7700     1.49   MIN: 1.44 / MAX: 2.07
  AMD 7700         1.45   MIN: 1.43 / MAX: 1.65
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.
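
As a rough illustration of what this test profile exercises, the sketch below times CPU inference through ONNX Runtime's Python API. It is only a sketch: "model.onnx" and the (1, 3, 224, 224) input shape are placeholder assumptions, not the actual ONNX Zoo models or settings used by the pts/onnx test profile.

    # Hedged sketch: time CPU inference with ONNX Runtime's Python API.
    # "model.onnx" and the input shape are placeholders, not the exact
    # ONNX Zoo models or settings used by the pts/onnx test profile.
    import time
    import numpy as np
    import onnxruntime as ort

    sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
    inp = sess.get_inputs()[0]
    x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder input

    runs = 100
    start = time.perf_counter()
    for _ in range(runs):
        sess.run(None, {inp.name: x})
    elapsed = time.perf_counter() - start
    print(f"{runs / elapsed * 60:.1f} inferences per minute")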

ONNX Runtime 1.11 - Model: bertsquad-12 - Device: CPU - Executor: Standard (Inferences Per Minute; more is better)
SE +/- 73.47, N = 12
  Ryzen 7600        534
  Ryzen 9 7900      716
  Ryzen 7 7700      729
  Ryzen 7600 AMD    757
  AMD 7600         1000
  7900             1083
  7700             1239
  AMD 7700         1257
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.11 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inferences Per Minute; more is better)
SE +/- 92.79, N = 12
  AMD 7600         1107
  Ryzen 7600 AMD   1252
  Ryzen 7 7700     1580
  7700             1588
  7900             1735
  Ryzen 7600       1920
  Ryzen 9 7900     2508
  AMD 7700         2550
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: regnety_400m (ms; fewer is better)
SE +/- 0.01, N = 3
  Ryzen 9 7900     10.51   MIN: 10.31 / MAX: 17.48
  7900             10.34   MIN: 10.27 / MAX: 10.81
  7700              4.91   MIN: 4.56 / MAX: 6.15
  Ryzen 7 7700      4.83   MIN: 4.51 / MAX: 7.13
  Ryzen 7600        4.81   MIN: 4.74 / MAX: 6.32
  AMD 7600          4.78   MIN: 4.74 / MAX: 5.24
  Ryzen 7600 AMD    4.73   MIN: 4.68 / MAX: 5.18
  AMD 7700          4.61   MIN: 4.51 / MAX: 14.57
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: super-resolution-10 - Device: CPU - Executor: Standard (Inferences Per Minute; more is better)
SE +/- 3.09, N = 3
  Ryzen 7600       3985
  Ryzen 7600 AMD   3992
  AMD 7600         3994
  AMD 7700         5282
  7700             5333
  7900             5676
  Ryzen 9 7900     5796
  Ryzen 7 7700     8381
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms; fewer is better)
SE +/- 0.00, N = 3
  Ryzen 9 7900     2.98   MIN: 2.94 / MAX: 3.38
  7900             2.97   MIN: 2.94 / MAX: 3.32
  Ryzen 7600       1.58   MIN: 1.53 / MAX: 3.86
  AMD 7600         1.53   MIN: 1.51 / MAX: 1.94
  Ryzen 7600 AMD   1.53   MIN: 1.5 / MAX: 2.03
  Ryzen 7 7700     1.50   MIN: 1.39 / MAX: 2.44
  7700             1.50   MIN: 1.39 / MAX: 2.4
  AMD 7700         1.42   MIN: 1.4 / MAX: 1.85
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Mobile Neural Network

MNN, the Mobile Neural Network, is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU threaded version for processor benchmarking, not any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 - Model: mobilenetV3 (ms; fewer is better)
SE +/- 0.005, N = 15
  7900             1.505   MIN: 1.48 / MAX: 9.16
  Ryzen 9 7900     1.493   MIN: 1.47 / MAX: 1.96
  Ryzen 7600       0.801   MIN: 0.79 / MAX: 3.51
  Ryzen 7600 AMD   0.783   MIN: 0.73 / MAX: 2.85
  7700             0.741   MIN: 0.71 / MAX: 2.7
  AMD 7600         0.737   MIN: 0.73 / MAX: 1.1
  Ryzen 7 7700     0.736   MIN: 0.7 / MAX: 1.39
  AMD 7700         0.725   MIN: 0.72 / MAX: 1.2
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: mnasnet (ms; fewer is better)
SE +/- 0.00, N = 3
  Ryzen 9 7900     3.06   MIN: 3.03 / MAX: 3.57
  7900             3.06   MIN: 3.03 / MAX: 3.46
  7700             1.71   MIN: 1.61 / MAX: 2.55
  Ryzen 7600       1.67   MIN: 1.64 / MAX: 2.16
  AMD 7600         1.61   MIN: 1.59 / MAX: 2.01
  Ryzen 7600 AMD   1.61   MIN: 1.59 / MAX: 2
  Ryzen 7 7700     1.58   MIN: 1.48 / MAX: 2.46
  AMD 7700         1.50   MIN: 1.48 / MAX: 1.65
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Mobile Neural Network

MNN, the Mobile Neural Network, is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU threaded version for processor benchmarking, not any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 - Model: squeezenetv1.1 (ms; fewer is better)
SE +/- 0.008, N = 15
  7900             2.522   MIN: 2.47 / MAX: 2.98
  Ryzen 9 7900     2.452   MIN: 2.42 / MAX: 4.54
  Ryzen 7600       1.466   MIN: 1.45 / MAX: 3.38
  Ryzen 7600 AMD   1.448   MIN: 1.37 / MAX: 8.4
  AMD 7600         1.375   MIN: 1.36 / MAX: 1.66
  Ryzen 7 7700     1.301   MIN: 1.24 / MAX: 3.27
  7700             1.272   MIN: 1.2 / MAX: 3.38
  AMD 7700         1.244   MIN: 1.23 / MAX: 3.26
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1 - Model: nasnet (ms; fewer is better)
SE +/- 0.110, N = 15
  7900             10.564   MIN: 10.43 / MAX: 11.26
  Ryzen 9 7900     10.396   MIN: 10.29 / MAX: 18.12
  Ryzen 7600        6.505   MIN: 6.43 / MAX: 8.41
  Ryzen 7600 AMD    6.237   MIN: 5.32 / MAX: 16.05
  7700              5.943   MIN: 5.47 / MAX: 9.31
  Ryzen 7 7700      5.883   MIN: 5.44 / MAX: 8.54
  AMD 7700          5.522   MIN: 5.45 / MAX: 17.02
  AMD 7600          5.322   MIN: 5.29 / MAX: 6.27
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1 - Model: MobileNetV2_224 (ms; fewer is better)
SE +/- 0.011, N = 15
  7900             3.150   MIN: 3.1 / MAX: 3.75
  Ryzen 9 7900     3.113   MIN: 3.04 / MAX: 3.57
  Ryzen 7 7700     1.827   MIN: 1.72 / MAX: 3.71
  7700             1.812   MIN: 1.71 / MAX: 3.68
  Ryzen 7600       1.770   MIN: 1.74 / MAX: 3.67
  Ryzen 7600 AMD   1.745   MIN: 1.63 / MAX: 8.75
  AMD 7700         1.735   MIN: 1.71 / MAX: 3.74
  AMD 7600         1.638   MIN: 1.62 / MAX: 2.05
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

C-Ray

This is a test of C-Ray, a simple raytracer designed to test floating-point CPU performance. This test is multi-threaded (16 threads per core), shoots 8 rays per pixel for anti-aliasing, and generates a 1600 x 1200 image. Learn more via the OpenBenchmarking.org test page.

C-Ray 1.1 - Total Time - 4K, 16 Rays Per Pixel (Seconds; fewer is better)
SE +/- 0.02, N = 3
  AMD 7600         64.37
  Ryzen 7600 AMD   64.33
  Ryzen 7600       64.33
  Ryzen 7 7700     49.03
  7700             48.95
  AMD 7700         48.79
  7900             34.54
  Ryzen 9 7900     34.03
1. (CC) gcc options: -lm -lpthread -O3

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. It allows selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: EP.C (Total Mop/s; more is better)
SE +/- 13.02, N = 5
  AMD 7600         1143.01
  Ryzen 7600 AMD   1173.77
  Ryzen 7600       1192.54
  Ryzen 7 7700     1514.07
  AMD 7700         1561.40
  7700             1588.73
  Ryzen 9 7900     2150.58
  7900             2155.92
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
2. Open MPI 4.1.2

Stockfish

This is a test of Stockfish, an advanced open-source C++11 chess benchmark that can scale up to 512 CPU threads. Learn more via the OpenBenchmarking.org test page.
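
Figures like the nodes-per-second results below can be approximated from Python over UCI; a minimal sketch, assuming a "stockfish" binary on the PATH and the python-chess package (this is not the pts/stockfish benchmark itself):

    # Hedged sketch: query nodes and nodes-per-second from a Stockfish binary
    # over UCI using python-chess. Assumes "stockfish" is on the PATH.
    import chess
    import chess.engine

    engine = chess.engine.SimpleEngine.popen_uci("stockfish")
    engine.configure({"Threads": 16})  # scale the search across CPU threads
    board = chess.Board()              # analyse the starting position
    info = engine.analyse(board, chess.engine.Limit(time=10))
    print("nodes:", info.get("nodes"), "nps:", info.get("nps"))
    engine.quit()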

Stockfish 15 - Total Time (Nodes Per Second; more is better)
SE +/- 210730.41, N = 11
  AMD 7600         27167188
  Ryzen 7600 AMD   27957470
  Ryzen 7600       28121767
  7700             34904509
  Ryzen 7 7700     35499955
  AMD 7700         38273850
  Ryzen 9 7900     47480935
  7900             50796739
1. (CXX) g++ options: -lgcov -m64 -lpthread -fno-exceptions -std=c++17 -fno-peel-loops -fno-tracer -pedantic -O3 -msse -msse3 -mpopcnt -mavx2 -mavx512f -mavx512bw -mavx512vnni -mavx512dq -mavx512vl -msse4.1 -mssse3 -msse2 -mbmi2 -flto -flto=jobserver

Mobile Neural Network

MNN, the Mobile Neural Network, is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU threaded version for processor benchmarking, not any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 - Model: resnet-v2-50 (ms; fewer is better)
SE +/- 0.045, N = 15
  7900             13.653   MIN: 13.54 / MAX: 15.73
  Ryzen 9 7900     13.620   MIN: 13.5 / MAX: 15.47
  Ryzen 7600       12.085   MIN: 11.68 / MAX: 24.46
  Ryzen 7600 AMD   11.988   MIN: 11.25 / MAX: 19.5
  AMD 7600         11.558   MIN: 11.48 / MAX: 19.17
  Ryzen 7 7700      7.877   MIN: 7.19 / MAX: 10.65
  7700              7.871   MIN: 7.16 / MAX: 10.56
  AMD 7700          7.334   MIN: 7.13 / MAX: 19.12
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.
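
The RSA4096 numbers below come from "openssl speed"; the same idea can be approximated from Python with the cryptography package, as in this minimal sketch (a different code path, so not comparable to the figures reported here):

    # Hedged sketch: approximate RSA4096 sign/verify throughput in Python using
    # the "cryptography" package, mirroring the idea of "openssl speed rsa4096".
    import time
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
    pub = key.public_key()
    msg = b"benchmark payload"
    sig = key.sign(msg, padding.PKCS1v15(), hashes.SHA256())

    n = 1000
    start = time.perf_counter()
    for _ in range(n):
        pub.verify(sig, msg, padding.PKCS1v15(), hashes.SHA256())
    print(f"{n / (time.perf_counter() - start):.0f} verify/s")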

OpenSSL 3.0 - Algorithm: RSA4096 (verify/s; more is better)
SE +/- 6.07, N = 3
  Ryzen 7600       147943.7
  AMD 7600         147994.7
  Ryzen 7600 AMD   148020.7
  Ryzen 7 7700     193624.5
  7700             194057.2
  AMD 7700         194261.5
  7900             274898.7
  Ryzen 9 7900     275022.1
1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
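
For a sense of what the compression-speed numbers measure, here is a minimal sketch using the zstandard Python bindings; the synthetic input and thread count are assumptions, not the FreeBSD memstick image or exact settings of the pts/compress-zstd profile:

    # Hedged sketch: measure Zstd level-8 compression speed in MB/s with the
    # "zstandard" Python bindings on a synthetic, compressible payload.
    import os
    import time
    import zstandard as zstd

    data = b"the quick brown fox jumps over the lazy dog\n" * 1_500_000  # ~66 MB
    cctx = zstd.ZstdCompressor(level=8, threads=os.cpu_count())

    start = time.perf_counter()
    compressed = cctx.compress(data)
    elapsed = time.perf_counter() - start
    print(f"{len(data) / 1e6 / elapsed:.1f} MB/s, ratio {len(data) / len(compressed):.1f}")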

Zstd Compression 1.5.0 - Compression Level: 8 - Compression Speed (MB/s; more is better)
SE +/- 9.87, N = 3
  Ryzen 7 7700     1090.6
  7700             1110.2
  AMD 7700         1119.3
  Ryzen 7600       1180.2
  AMD 7600         1231.3
  Ryzen 7600 AMD   1248.8
  7900             1985.8
  Ryzen 9 7900     2025.3
1. (CC) gcc options: -O3 -pthread -lz -llzma

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenSSL 3.0 - Algorithm: RSA4096 (sign/s; more is better)
SE +/- 0.25, N = 3
  Ryzen 7600 AMD   2265.1
  Ryzen 7600       2265.4
  AMD 7600         2265.5
  Ryzen 7 7700     2954.2
  7700             2958.4
  AMD 7700         2968.4
  7900             4201.8
  Ryzen 9 7900     4203.4
1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed of CPU mining for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.
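
As a toy illustration of what a "kH/s" figure counts, the sketch below measures double SHA-256 hashing in pure Python (loosely related to the SHA-256-style algorithms here, such as the Quad/Triple SHA-256 tests); it is orders of magnitude slower than cpuminer-opt's optimized assembly and is only meant to show the unit:

    # Hedged sketch: a toy double SHA-256 hash-rate loop with hashlib.
    # Illustrates what kH/s counts; not cpuminer-opt and not representative
    # of its optimized per-algorithm kernels.
    import hashlib
    import time

    header = b"\x00" * 80  # placeholder 80-byte block header
    hashes = 0
    start = time.perf_counter()
    while time.perf_counter() - start < 2.0:
        hashlib.sha256(hashlib.sha256(header).digest()).digest()
        hashes += 1
    print(f"{hashes / (time.perf_counter() - start) / 1e3:.1f} kH/s")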

Cpuminer-Opt 3.20.3 - Algorithm: Deepcoin (kH/s; more is better)
SE +/- 16.47, N = 3
  Ryzen 7600        7943.92
  AMD 7600          7944.47
  Ryzen 7600 AMD    7961.37
  Ryzen 7 7700     10380.00
  7700             10410.00
  AMD 7700         10440.00
  7900             14700.00
  Ryzen 9 7900     14700.00
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. It allows selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: EP.D (Total Mop/s; more is better)
SE +/- 9.35, N = 9
  Ryzen 7600 AMD   1172.13
  Ryzen 7600       1197.05
  AMD 7600         1199.83
  AMD 7700         1507.33
  7700             1551.49
  Ryzen 7 7700     1562.22
  Ryzen 9 7900     2117.95
  7900             2158.45
1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
2. Open MPI 4.1.2

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0 - Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU (ms; fewer is better)
SE +/- 0.03364, N = 3
  Ryzen 7600       10.12880   MIN: 9.53
  Ryzen 7600 AMD    9.60780   MIN: 9.17
  AMD 7600          9.59464   MIN: 9.26
  AMD 7700          7.42742   MIN: 6.85
  7700              7.41713   MIN: 6.76
  Ryzen 7 7700      7.41505   MIN: 6.75
  7900              5.67760   MIN: 4.97
  Ryzen 9 7900      5.51763   MIN: 4.88
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed of CPU mining for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3 - Algorithm: Magi (kH/s; more is better)
SE +/- 1.54, N = 3
  AMD 7600         400.76
  Ryzen 7600 AMD   402.03
  Ryzen 7600       406.47
  7700             516.97
  Ryzen 7 7700     517.61
  AMD 7700         531.73
  7900             726.49
  Ryzen 9 7900     733.32
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0 - Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU (ms; fewer is better)
SE +/- 0.01337, N = 3
  7700             3.37523   MIN: 3.09
  Ryzen 7 7700     3.36363   MIN: 3.08
  Ryzen 7600       3.27254   MIN: 3.2
  Ryzen 7600 AMD   3.26570   MIN: 3.2
  AMD 7600         3.26386   MIN: 3.2
  AMD 7700         3.18229   MIN: 3.09
  7900             1.99078   MIN: 1.8
  Ryzen 9 7900     1.84748   MIN: 1.79
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenSSL 3.0 - Algorithm: SHA256 (byte/s; more is better)
SE +/- 148039451.93, N = 3
  Ryzen 7600 AMD   13223838170
  Ryzen 7600       13335266100
  AMD 7600         13388156320
  7700             17490744260
  AMD 7700         17538082780
  Ryzen 7 7700     17544870130
  Ryzen 9 7900     24055372380
  7900             24118303190
1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms; fewer is better)
SE +/- 0.00, N = 3
  Ryzen 9 7900     3.41   MIN: 3.38 / MAX: 3.79
  7900             3.39   MIN: 3.36 / MAX: 3.7
  Ryzen 7600       2.05   MIN: 2 / MAX: 2.62
  Ryzen 7 7700     1.98   MIN: 1.84 / MAX: 3.02
  7700             1.98   MIN: 1.83 / MAX: 3.36
  AMD 7600         1.92   MIN: 1.89 / MAX: 2.4
  Ryzen 7600 AMD   1.91   MIN: 1.88 / MAX: 2.37
  AMD 7700         1.87   MIN: 1.83 / MAX: 2.22
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

JPEG XL Decoding libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is suited for JPEG XL decode performance testing to a PNG output file; the pts/jpegxl test is for encode performance. The JPEG XL encoding/decoding is done using the libjxl codebase. Learn more via the OpenBenchmarking.org test page.

JPEG XL Decoding libjxl 0.7 - CPU Threads: All (MP/s; more is better)
SE +/- 0.21, N = 3
  Ryzen 9 7900     201.39
  7900             204.91
  AMD 7600         318.81
  Ryzen 7600       334.93
  Ryzen 7600 AMD   336.39
  Ryzen 7 7700     344.88
  7700             361.28
  AMD 7700         365.43

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed of CPU mining for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3 - Algorithm: Quad SHA-256, Pyrite (kH/s; more is better)
SE +/- 21.86, N = 3
  Ryzen 7600 AMD   114893
  AMD 7600         115330
  Ryzen 7600       115430
  Ryzen 7 7700     147910
  7700             149450
  AMD 7700         153350
  Ryzen 9 7900     207140
  7900             208470
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.
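
A minimal sketch of CPU inference with the OpenVINO Python API is shown below; "model.xml" and the input handling are assumptions for illustration, not the models or benchmark_app settings used by the pts/openvino test profile:

    # Hedged sketch: synchronous CPU inference with the OpenVINO runtime API,
    # reporting a rough FPS figure. "model.xml" is a placeholder IR model.
    import time
    import numpy as np
    from openvino.runtime import Core

    core = Core()
    model = core.read_model("model.xml")      # OpenVINO IR (xml + bin)
    compiled = core.compile_model(model, "CPU")
    inp = compiled.input(0)
    x = np.random.rand(*inp.shape).astype(np.float32)  # assumes a static shape

    runs = 200
    start = time.perf_counter()
    for _ in range(runs):
        compiled([x])                          # synchronous inference call
    print(f"{runs / (time.perf_counter() - start):.1f} FPS")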

OpenVINO 2022.3 - Model: Vehicle Detection FP16 - Device: CPU (FPS; more is better)
SE +/- 0.75, N = 3
  Ryzen 7600 AMD   370.90
  AMD 7600         374.50
  Ryzen 7600       375.11
  AMD 7700         463.77
  Ryzen 7 7700     490.93
  7700             491.96
  7900             672.13
  Ryzen 9 7900     672.90
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

Coremark 1.0 - CoreMark Size 666 - Iterations Per Second (Iterations/Sec; more is better)
SE +/- 1264.79, N = 3
  Ryzen 7600       384184.41
  Ryzen 7600 AMD   385213.54
  AMD 7600         385542.17
  Ryzen 7 7700     494267.87
  7700             497649.99
  AMD 7700         509464.00
  7900             691509.80
  Ryzen 9 7900     695719.39
1. (CC) gcc options: -O2 -lrt" -lrt

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed of CPU mining for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3 - Algorithm: x25x (kH/s; more is better)
SE +/- 1.16, N = 3
  AMD 7600         446.02
  Ryzen 7600       448.30
  Ryzen 7600 AMD   448.72
  7700             583.09
  Ryzen 7 7700     583.56
  AMD 7700         586.79
  Ryzen 9 7900     804.66
  7900             804.79
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt 3.20.3 - Algorithm: scrypt (kH/s; more is better)
SE +/- 1.68, N = 3
  AMD 7600         250.53
  Ryzen 7600       251.35
  Ryzen 7600 AMD   252.86
  7700             325.46
  Ryzen 7 7700     331.57
  AMD 7700         332.81
  Ryzen 9 7900     449.60
  7900             451.13
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Cpuminer-Opt 3.20.3 - Algorithm: Ringcoin (kH/s; more is better)
SE +/- 4.41, N = 3
  AMD 7600         1684.91
  Ryzen 7600 AMD   1692.20
  Ryzen 7600       1709.61
  AMD 7700         2207.98
  7700             2226.51
  Ryzen 7 7700     2268.58
  Ryzen 9 7900     3027.67
  7900             3027.86
1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.
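The integrated benchmark can be reproduced directly; the sketch below simply shells out to it (assuming a 7z binary on the PATH) and relies on 7-Zip's own MIPS reporting.

    import subprocess

    # "7z b" runs 7-Zip's built-in compression/decompression benchmark and prints MIPS ratings
    subprocess.run(["7z", "b"], check=True)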

OpenBenchmarking.orgMIPS, More Is Better7-Zip Compression 22.01Test: Decompression RatingRyzen 7600AMD 7600Ryzen 7600 AMD7700Ryzen 7 7700AMD 77007900Ryzen 9 790030K60K90K120K150KSE +/- 90.75, N = 36823868387688458516685433868331216151225281. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: Blake-2 SRyzen 7600AMD 7600Ryzen 7600 AMDAMD 7700Ryzen 7 77007700Ryzen 9 79007900300K600K900K1200K1500KSE +/- 2843.36, N = 3662460662700664113691560868880879560118228011891201. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Neural Magic DeepSparse

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-StreamRyzen 7600 AMDRyzen 7600AMD 7600Ryzen 7 77007700AMD 77007900Ryzen 9 7900246810SE +/- 0.0028, N = 34.75364.75804.76336.11826.14556.16448.51768.5267

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: LBC, LBRY CreditsRyzen 7600 AMDRyzen 7600AMD 7600Ryzen 7 7700AMD 77007700Ryzen 9 7900790020K40K60K80K100KSE +/- 26.67, N = 353693538105384070730717507236096150963101. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgM samples/s, More Is BetterIndigoBench 4.4Acceleration: CPU - Scene: SupercarRyzen 7600 AMDAMD 7600Ryzen 76007700Ryzen 7 7700AMD 7700Ryzen 9 79007900246810SE +/- 0.013, N = 34.5174.5554.5825.7865.8405.8848.0598.098

Neural Magic DeepSparse

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-StreamAMD 7600Ryzen 7600 AMDRyzen 7600Ryzen 7 77007700AMD 77007900Ryzen 9 7900306090120150SE +/- 0.05, N = 371.0871.4071.8292.0192.1892.45126.36127.26

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-StreamRyzen 7600 AMDRyzen 7600AMD 7600Ryzen 7 77007700AMD 7700Ryzen 9 79007900246810SE +/- 0.0023, N = 34.75444.75504.75766.13476.14256.16038.50388.5039

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: Triple SHA-256, OnecoinRyzen 7600Ryzen 7600 AMDAMD 7600Ryzen 7 77007700AMD 77007900Ryzen 9 790060K120K180K240K300KSE +/- 1138.26, N = 131656601673431688102178302205502209602951302956601. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Neural Magic DeepSparse

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-StreamRyzen 7600 AMDAMD 7600Ryzen 7600Ryzen 7 77007700AMD 7700Ryzen 9 790079004080120160200SE +/- 0.10, N = 3113.45113.78113.83146.03146.43146.77200.92202.12

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: SkeincoinAMD 7600Ryzen 7600Ryzen 7600 AMDRyzen 7 77007700AMD 77007900Ryzen 9 790040K80K120K160K200KSE +/- 695.37, N = 31030301030401033371344501348001376501815401834101. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package offering "a unique and carefully curated experience", with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user interface. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRender Ratio, More Is BetterStargate Digital Audio Workstation 22.11.5Sample Rate: 44100 - Buffer Size: 1024Ryzen 7600AMD 7600Ryzen 7600 AMD7700Ryzen 7 7700AMD 77007900Ryzen 9 79001.21292.42583.63874.85166.0645SE +/- 0.000394, N = 33.0304733.0391883.0396274.2971274.2982684.3287695.3610255.3905231. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

Xmrig

Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.
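If the installed xmrig build includes its built-in benchmark mode, a 1M-hash run comparable to these results could be started as sketched below (the --bench flag and its value are assumptions based on upstream documentation, not something confirmed by this result file).

    import subprocess

    # --bench=1M asks xmrig to run its built-in RandomX benchmark over one million hashes
    # (assumed flag; requires an xmrig build with benchmark support)
    subprocess.run(["xmrig", "--bench=1M"], check=True)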

OpenBenchmarking.orgH/s, More Is BetterXmrig 6.18.1Variant: Wownero - Hash Count: 1MRyzen 7600AMD 7600Ryzen 7600 AMDRyzen 7 77007700AMD 7700Ryzen 9 790079003K6K9K12K15KSE +/- 6.89, N = 37957.97962.27963.910175.610240.710277.314044.514144.31. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.
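Each result below boils down to timing a headless, CPU-only Cycles render of one frame; a minimal sketch (the .blend file name is a placeholder) is:

    import subprocess, time

    BLEND = "barbershop_interior.blend"   # placeholder scene file

    start = time.perf_counter()
    # -b runs Blender without a UI, -f 1 renders frame 1 with the scene's configured engine
    subprocess.run(["blender", "-b", BLEND, "-f", "1"], check=True)
    print(f"render time: {time.perf_counter() - start:.1f} s")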

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.4Blend File: Barbershop - Compute: CPU-OnlyAMD 7600Ryzen 7600 AMDRyzen 7600Ryzen 7 77007700AMD 7700Ryzen 9 7900790030060090012001500SE +/- 2.65, N = 31284.521284.021282.32989.72983.97973.45723.78723.13

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgM samples/s, More Is BetterIndigoBench 4.4Acceleration: CPU - Scene: BedroomRyzen 7600 AMDRyzen 7600AMD 7600Ryzen 7 7700AMD 770077007900Ryzen 9 79000.82311.64622.46933.29244.1155SE +/- 0.005, N = 32.0642.0712.0962.6682.6992.7013.6583.658

Tachyon

This is a test of the threaded Tachyon, a parallel ray-tracing system, measuring the time to ray-trace a sample scene. The sample scene used is the Teapot scene ray-traced to 8K x 8K with 32 samples. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTachyon 0.99.2Total TimeRyzen 7600Ryzen 7600 AMDAMD 7600Ryzen 7 77007700AMD 7700Ryzen 9 79007900306090120150SE +/- 0.64, N = 3152.68151.72150.66115.82115.75115.5487.4486.301. (CC) gcc options: -m64 -O3 -fomit-frame-pointer -ffast-math -ltachyon -lm -lpthread

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.4Blend File: BMW27 - Compute: CPU-OnlyAMD 7600Ryzen 7600 AMDRyzen 7600Ryzen 7 77007700AMD 7700Ryzen 9 79007900306090120150SE +/- 0.20, N = 3135.04134.80134.08104.03103.96103.2977.0276.35

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.4Blend File: Classroom - Compute: CPU-OnlyRyzen 7600AMD 7600Ryzen 7600 AMDRyzen 7 77007700AMD 77007900Ryzen 9 790080160240320400SE +/- 0.20, N = 3357.35357.10356.47273.36272.78271.59202.67202.13

Xmrig

Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgH/s, More Is BetterXmrig 6.18.1Variant: Monero - Hash Count: 1MAMD 7600Ryzen 7600 AMDRyzen 76007700Ryzen 7 7700AMD 77007900Ryzen 9 79003K6K9K12K15KSE +/- 32.54, N = 37002.27123.67171.77738.07745.27780.810059.712360.51. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package offering "a unique and carefully curated experience", with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user interface. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRender Ratio, More Is BetterStargate Digital Audio Workstation 22.11.5Sample Rate: 480000 - Buffer Size: 1024Ryzen 7600Ryzen 7600 AMDAMD 7600Ryzen 7 77007700AMD 77007900Ryzen 9 79001.18112.36223.54334.72445.9055SE +/- 0.001126, N = 32.9777732.9807822.9813094.1784614.2029634.2289765.2011375.2492151. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

asmFish

This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNodes/second, More Is BetterasmFish 2018-07-231024 Hash Memory, 26 DepthRyzen 7600Ryzen 7600 AMDAMD 7600Ryzen 7 77007700AMD 77007900Ryzen 9 790012M24M36M48M60MSE +/- 333145.62, N = 33200953932029776327599424141153242091594421269765535014556368330

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPUAMD 7600Ryzen 7600Ryzen 7600 AMDRyzen 7 77007700AMD 77007900Ryzen 9 79000.12390.24780.37170.49560.6195SE +/- 0.000941, N = 30.5507240.5503330.5488220.4368590.4355140.4150220.3131160.312826MIN: 0.53MIN: 0.52MIN: 0.52MIN: 0.38MIN: 0.39MIN: 0.39MIN: 0.28MIN: 0.281. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.
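A single compression pass follows the pattern sketched below (the binary name, input image, and 6x6 block size are assumptions); the test profile sweeps the -fast/-medium/-thorough/-exhaustive quality presets shown in the graphs.

    import subprocess

    # -cl compresses an LDR image; arguments are input, output, block footprint, quality preset
    subprocess.run([
        "astcenc-avx2", "-cl",
        "input.png", "output.astc",   # placeholder file names
        "6x6", "-medium",
    ], check=True)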

OpenBenchmarking.orgMT/s, More Is BetterASTC Encoder 4.0Preset: ExhaustiveAMD 7600Ryzen 7600 AMDRyzen 7600Ryzen 7 77007700AMD 77007900Ryzen 9 79000.26570.53140.79711.06281.3285SE +/- 0.0005, N = 30.67170.67250.68150.87600.87770.87901.18091.18111. (CXX) g++ options: -O3 -flto -pthread

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Face Detection FP16-INT8 - Device: CPURyzen 7600 AMDAMD 7600Ryzen 7600Ryzen 7 77007700AMD 77007900Ryzen 9 790048121620SE +/- 0.01, N = 310.3310.3410.5213.5713.6113.6818.0718.131. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

Chaos Group V-RAY

This is a test of Chaos Group's V-RAY benchmark. V-RAY is a commercial renderer that can integrate with various creator software products like SketchUp and 3ds Max. The V-RAY benchmark is standalone and supports CPU and NVIDIA CUDA/RTX based rendering. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgvsamples, More Is BetterChaos Group V-RAY 5.02Mode: CPURyzen 7600AMD 7600Ryzen 7600 AMDRyzen 7 77007700AMD 77007900Ryzen 9 79005K10K15K20K25KSE +/- 40.13, N = 31209012156123521559415640157482116621212

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMT/s, More Is BetterASTC Encoder 4.0Preset: FastRyzen 7600Ryzen 7600 AMDAMD 7600Ryzen 7 77007700AMD 77007900Ryzen 9 790060120180240300SE +/- 0.11, N = 3145.52147.27147.45192.78193.44194.49254.36255.041. (CXX) g++ options: -O3 -flto -pthread

OpenBenchmarking.orgMT/s, More Is BetterASTC Encoder 4.0Preset: MediumAMD 7600Ryzen 7600Ryzen 7600 AMDRyzen 7 77007700AMD 77007900Ryzen 9 790020406080100SE +/- 0.26, N = 351.1151.1551.6867.5267.5567.6989.2989.451. (CXX) g++ options: -O3 -flto -pthread

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package offering "a unique and carefully curated experience", with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user interface. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRender Ratio, More Is BetterStargate Digital Audio Workstation 22.11.5Sample Rate: 44100 - Buffer Size: 512Ryzen 7600 AMDAMD 7600Ryzen 76007700Ryzen 7 7700AMD 77007900Ryzen 9 79001.16792.33583.50374.67165.8395SE +/- 0.002271, N = 32.9660522.9683642.9695584.1528344.1555654.1815695.1694645.1907011. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMT/s, More Is BetterASTC Encoder 4.0Preset: ThoroughAMD 7600Ryzen 7600Ryzen 7600 AMDAMD 77007700Ryzen 7 7700Ryzen 9 790079003691215SE +/- 0.0037, N = 36.40476.47756.47858.42588.42768.445511.201111.20341. (CXX) g++ options: -O3 -flto -pthread

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.
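The same style of CPU inference loop can be reproduced with the onnxruntime Python package; the sketch below (model path and dummy input are placeholders) counts inferences per minute, mirroring the unit used in these graphs.

    import time
    import numpy as np
    import onnxruntime as ort

    sess = ort.InferenceSession("model.onnx",              # placeholder model path
                                providers=["CPUExecutionProvider"])
    inp = sess.get_inputs()[0]
    shape = [d if isinstance(d, int) else 1 for d in inp.shape]  # pick 1 for dynamic dims
    x = np.random.rand(*shape).astype(np.float32)

    runs, start = 0, time.perf_counter()
    while time.perf_counter() - start < 60:                # run for one minute
        sess.run(None, {inp.name: x})
        runs += 1
    print(f"{runs} inferences per minute")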

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: fcn-resnet101-11 - Device: CPU - Executor: StandardRyzen 7 7700Ryzen 7600 AMDRyzen 9 79007900AMD 7600Ryzen 76007700AMD 7700306090120150SE +/- 6.47, N = 127179939898981241241. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

Neural Magic DeepSparse

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-StreamRyzen 7600 AMDAMD 7600Ryzen 7600Ryzen 7 7700AMD 77007700Ryzen 9 790079001428425670SE +/- 0.07, N = 335.6035.6135.7045.6246.0146.0261.8562.12

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Weld Porosity Detection FP16 - Device: CPUAMD 7600Ryzen 7600 AMDRyzen 7600Ryzen 7 77007700AMD 77007900Ryzen 9 79002004006008001000SE +/- 3.29, N = 3526.47527.63527.78686.56687.93692.33917.78918.151. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package offering "a unique and carefully curated experience", with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user interface. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRender Ratio, More Is BetterStargate Digital Audio Workstation 22.11.5Sample Rate: 96000 - Buffer Size: 1024Ryzen 7600Ryzen 7600 AMDAMD 76007700Ryzen 7 7700AMD 77007900Ryzen 9 79000.87091.74182.61273.48364.3545SE +/- 0.001525, N = 32.2202512.2275092.2317983.1034173.1149243.1409103.8673513.8707271. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested or alternatively an allmodconfig for building all possible kernel modules for the build. Learn more via the OpenBenchmarking.org test page.
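The measurement amounts to timing a parallel kernel build; a minimal sketch of the defconfig case (run from an unpacked kernel source tree) is:

    import os, subprocess, time

    subprocess.run(["make", "defconfig"], check=True)   # generate the default configuration

    start = time.perf_counter()
    subprocess.run(["make", f"-j{os.cpu_count()}"], check=True)   # build across all threads
    print(f"build time: {time.perf_counter() - start:.1f} s")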

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Linux Kernel Compilation 6.1Build: allmodconfigRyzen 7600AMD 7600Ryzen 7600 AMD7700Ryzen 7 7700AMD 77007900Ryzen 9 790030060090012001500SE +/- 6.89, N = 31286.511280.931266.561019.451012.611005.66744.30738.33

Appleseed

Appleseed is an open-source production renderer focused on physically-based global illumination, primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterAppleseed 2.0 BetaScene: Disney MaterialRyzen 7600 AMDRyzen 7600AMD 7600Ryzen 7 7700AMD 770077007900Ryzen 9 790050100150200250207.29206.77206.46158.99158.42158.29119.53119.05

Aircrack-ng

Aircrack-ng is a tool for assessing WiFi/WLAN network security. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgk/s, More Is BetterAircrack-ng 1.7Ryzen 7600 AMDRyzen 7600AMD 76007700Ryzen 7 7700AMD 7700Ryzen 9 7900790013K26K39K52K65KSE +/- 2.66, N = 335804.8235828.1435844.4446662.8746993.7047545.7362268.3862268.641. (CXX) g++ options: -std=gnu++17 -O3 -fvisibility=hidden -fcommon -rdynamic -lnl-3 -lnl-genl-3 -lpcre -lsqlite3 -lpthread -lz -lssl -lcrypto -lhwloc -ldl -lm -pthread

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package offering "a unique and carefully curated experience", with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user interface. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRender Ratio, More Is BetterStargate Digital Audio Workstation 22.11.5Sample Rate: 192000 - Buffer Size: 1024Ryzen 7600 AMDRyzen 7600AMD 76007700Ryzen 7 7700AMD 7700Ryzen 9 790079000.58221.16441.74662.32882.911SE +/- 0.001409, N = 31.4882261.4902001.4909012.0733212.0786132.0884382.5739042.5876191. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 2021.01.31Threads: 24 - Buffer Length: 256 - Filter Length: 57Ryzen 7600Ryzen 7600 AMDAMD 7600Ryzen 7 77007700AMD 77007900Ryzen 9 7900200M400M600M800M1000MSE +/- 2020662.71, N = 3594580000596263333598510000772840000775660000776550000103260000010330000001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.4Blend File: Pabellon Barcelona - Compute: CPU-OnlyRyzen 7600AMD 7600Ryzen 7600 AMD7700Ryzen 7 7700AMD 77007900Ryzen 9 790090180270360450SE +/- 0.21, N = 3433.93433.06432.63335.44334.85333.25249.96249.95

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPURyzen 7600 AMDAMD 7600Ryzen 7600Ryzen 7 77007700AMD 77007900Ryzen 9 79004K8K12K16K20KSE +/- 0.99, N = 39490.509510.359516.2612469.4412534.5212608.1416311.4716466.741. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Weld Porosity Detection FP16-INT8 - Device: CPUAMD 7600Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 770077007900Ryzen 9 7900400800120016002000SE +/- 1.12, N = 31056.401056.821058.541368.901368.951369.351830.421832.771. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.4Blend File: Fishy Cat - Compute: CPU-OnlyRyzen 7600 AMDRyzen 7600AMD 7600Ryzen 7 77007700AMD 77007900Ryzen 9 79004080120160200SE +/- 0.26, N = 3174.24174.02173.72135.18134.78134.16100.60100.47

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgdays/ns, Fewer Is BetterNAMD 2.14ATPase Simulation - 327,506 AtomsRyzen 7600Ryzen 7600 AMDAMD 7600Ryzen 7 77007700AMD 7700Ryzen 9 790079000.46410.92821.39231.85642.3205SE +/- 0.00775, N = 32.062852.062262.042441.605521.603831.600571.190441.19006

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 2021.01.31Threads: 16 - Buffer Length: 256 - Filter Length: 57Ryzen 7600AMD 7600Ryzen 7600 AMD7700AMD 7700Ryzen 7 77007900Ryzen 9 7900200M400M600M800M1000MSE +/- 86474.15, N = 3596850000597410000597473333774160000778000000786180000103360000010340000001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPURyzen 7600 AMDAMD 7600Ryzen 76007700Ryzen 7 7700AMD 7700Ryzen 9 790079006001200180024003000SE +/- 2.35, N = 32897.712897.082897.052294.352292.332231.811675.601675.22MIN: 2889.51MIN: 2893.38MIN: 2882.96MIN: 2263.65MIN: 2268.98MIN: 2215.43MIN: 1671.36MIN: 1671.11. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPURyzen 7600Ryzen 7600 AMDAMD 76007700Ryzen 7 7700AMD 7700Ryzen 9 790079000.59951.1991.79852.3982.9975SE +/- 0.02309, N = 32.664522.577692.573052.502292.452222.334991.544031.54207MIN: 2.47MIN: 2.49MIN: 2.51MIN: 2.29MIN: 2.28MIN: 2.28MIN: 1.48MIN: 1.471. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.
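The Ninja and Unix Makefiles variants differ only in the generator handed to CMake; a sketch of the Ninja case (paths and a plain Release configuration are assumptions) is:

    import subprocess, time

    # configure a Release build of LLVM with the Ninja generator (run from an empty build dir)
    subprocess.run(["cmake", "-G", "Ninja",
                    "-DCMAKE_BUILD_TYPE=Release",
                    "../llvm"], check=True)

    start = time.perf_counter()
    subprocess.run(["ninja"], check=True)   # Ninja parallelizes across all cores by default
    print(f"build time: {time.perf_counter() - start:.1f} s")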

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed LLVM Compilation 13.0Build System: NinjaRyzen 7600 AMDAMD 7600Ryzen 76007700Ryzen 7 7700AMD 7700Ryzen 9 79007900130260390520650SE +/- 0.25, N = 3609.12608.68602.78489.62487.73475.85353.54352.63

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package offering "a unique and carefully curated experience", with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user interface. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRender Ratio, More Is BetterStargate Digital Audio Workstation 22.11.5Sample Rate: 480000 - Buffer Size: 512Ryzen 7600Ryzen 7600 AMDAMD 76007700Ryzen 7 7700AMD 7700Ryzen 9 790079001.1272.2543.3814.5085.635SE +/- 0.002307, N = 32.9011612.9059882.9080294.0526404.0622214.0796294.9949205.0088201. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPURyzen 7600 AMDAMD 7600Ryzen 76007700Ryzen 7 7700AMD 77007900Ryzen 9 79006001200180024003000SE +/- 2.85, N = 32895.712892.902888.442285.982272.722223.941680.331677.70MIN: 2880.28MIN: 2888.21MIN: 2854.93MIN: 2258.7MIN: 2247.96MIN: 2210.21MIN: 1675.48MIN: 1673.081. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPURyzen 7600 AMDRyzen 7600AMD 7600Ryzen 7 77007700AMD 7700Ryzen 9 790079006001200180024003000SE +/- 0.92, N = 32896.312896.002893.482289.262282.732225.661683.031678.88MIN: 2889.01MIN: 2885.46MIN: 2889.6MIN: 2263.84MIN: 2256.18MIN: 2213.52MIN: 1677.55MIN: 1673.291. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Face Detection FP16 - Device: CPUAMD 7600Ryzen 7600 AMDRyzen 7600Ryzen 7 77007700AMD 77007900Ryzen 9 79003691215SE +/- 0.01, N = 35.335.345.366.876.896.939.199.191. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Vehicle Detection FP16-INT8 - Device: CPURyzen 7600 AMDAMD 7600Ryzen 76007700Ryzen 7 7700AMD 7700Ryzen 9 7900790030060090012001500SE +/- 5.59, N = 3676.76688.76691.78878.78879.31892.321163.451164.421. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

Neural Magic DeepSparse

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-StreamRyzen 7600AMD 7600Ryzen 7600 AMD7700Ryzen 7 7700AMD 7700Ryzen 9 7900790020406080100SE +/- 0.26, N = 349.6349.6549.7262.3062.7562.9585.1685.19

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-HEVC 1.5.0Tuning: 1 - Input: Bosphorus 4KRyzen 7600 AMDRyzen 7600AMD 76007700Ryzen 7 7700AMD 77007900Ryzen 9 79000.88651.7732.65953.5464.4325SE +/- 0.00, N = 32.302.302.312.962.962.983.943.941. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterRodinia 3.1Test: OpenMP LavaMDRyzen 7600 AMDAMD 7600Ryzen 7600Ryzen 7 77007700AMD 7700Ryzen 9 7900790050100150200250SE +/- 2.30, N = 3215.76214.15213.01167.47166.07165.87127.05126.281. (CXX) g++ options: -O2 -lOpenCL

x265

This is a simple test of the x265 encoder run on the CPU, with 1080p and 4K input options, for H.265 video encode performance. Learn more via the OpenBenchmarking.org test page.
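A single 4K encode of the kind measured here can be reproduced from the x265 command line; the sketch below shells out to it (the input clip name is a placeholder; x265 prints its own frames-per-second figure on completion).

    import subprocess

    subprocess.run([
        "x265", "--preset", "medium",
        "--input", "Bosphorus_3840x2160.y4m",   # placeholder 4K source clip
        "--output", "bosphorus.hevc",
    ], check=True)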

OpenBenchmarking.orgFrames Per Second, More Is Betterx265 3.4Video Input: Bosphorus 4K7700AMD 7600AMD 7700Ryzen 7600Ryzen 7600 AMDRyzen 7 7700Ryzen 9 79007900714212835SE +/- 0.23, N = 316.4217.1917.6817.7717.9017.9727.9728.041. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Person Detection FP16 - Device: CPURyzen 7600 AMDRyzen 7600AMD 7600Ryzen 7 7700AMD 77007700Ryzen 9 790079001.16782.33563.50344.67125.839SE +/- 0.02, N = 33.043.063.073.873.923.935.165.191. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package offering "a unique and carefully curated experience", with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user interface. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRender Ratio, More Is BetterStargate Digital Audio Workstation 22.11.5Sample Rate: 96000 - Buffer Size: 512Ryzen 7600AMD 7600Ryzen 7600 AMDRyzen 7 77007700AMD 7700Ryzen 9 790079000.82581.65162.47743.30324.129SE +/- 0.003762, N = 32.1590672.1597072.1647602.9865162.9944693.0002753.6659113.6700941. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPUAMD 7600Ryzen 7600Ryzen 7600 AMDRyzen 7 77007700AMD 77007900Ryzen 9 790030060090012001500SE +/- 1.23, N = 31453.821450.821450.481155.041152.241113.96855.91855.55MIN: 1450.65MIN: 1438.48MIN: 1445.4MIN: 1129.24MIN: 1133.82MIN: 1102.31MIN: 851.85MIN: 851.411. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Appleseed

Appleseed is an open-source production renderer focused on physically-based global illumination, primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterAppleseed 2.0 BetaScene: EmilyRyzen 7600Ryzen 7600 AMDAMD 76007700Ryzen 7 7700AMD 77007900Ryzen 9 790070140210280350334.20332.51331.38262.66262.06261.24197.26196.70

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPURyzen 7600 AMDRyzen 7600AMD 7600Ryzen 7 77007700AMD 77007900Ryzen 9 790030060090012001500SE +/- 0.41, N = 31452.401450.701450.631156.111149.161110.93856.02854.95MIN: 1447.04MIN: 1435.36MIN: 1448.24MIN: 1135.88MIN: 1126.63MIN: 1100.71MIN: 851.48MIN: 851.041. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPURyzen 7600 AMDRyzen 7600AMD 7600Ryzen 7 77007700AMD 77007900Ryzen 9 790030060090012001500SE +/- 0.60, N = 31452.691446.491444.741155.401154.551112.46858.83855.54MIN: 1448.33MIN: 1430.15MIN: 1440.66MIN: 1130.22MIN: 1130.74MIN: 1094.16MIN: 851.32MIN: 851.941. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgns/day, More Is BetterLAMMPS Molecular Dynamics Simulator 23Jun2022Model: Rhodopsin ProteinRyzen 7600 AMDAMD 7600Ryzen 7600Ryzen 7 77007700AMD 7700Ryzen 9 790079003691215SE +/- 0.023, N = 36.7116.7316.7578.5478.5608.59211.21111.3941. (CXX) g++ options: -O3 -lm -ldl

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-HEVC 1.5.0Tuning: 1 - Input: Bosphorus 1080pRyzen 7600Ryzen 7600 AMDAMD 76007700Ryzen 7 7700AMD 77007900Ryzen 9 790048121620SE +/- 0.00, N = 39.229.239.2311.8411.8511.9315.6215.651. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

Neural Magic DeepSparse

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-StreamAMD 7600Ryzen 7600Ryzen 7600 AMDRyzen 7 77007700AMD 7700Ryzen 9 7900790020406080100SE +/- 0.10, N = 347.7847.9348.0957.3957.4558.0480.8980.95

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package offering "a unique and carefully curated experience", with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user interface. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRender Ratio, More Is BetterStargate Digital Audio Workstation 22.11.5Sample Rate: 192000 - Buffer Size: 512Ryzen 7600 AMDRyzen 7600AMD 76007700Ryzen 7 7700AMD 77007900Ryzen 9 79000.54251.0851.62752.172.7125SE +/- 0.004557, N = 31.4268981.4310091.4324541.9744481.9752501.9836602.4017552.4110401. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: SP.B7700Ryzen 7 7700Ryzen 7600Ryzen 7600 AMDAMD 7600AMD 77007900Ryzen 9 79004K8K12K16K20KSE +/- 6.22, N = 312225.6912239.6412314.2112327.5012334.2612368.4520541.0420602.991. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-StreamRyzen 7600 AMDAMD 7600Ryzen 7600Ryzen 7 77007700AMD 7700Ryzen 9 7900790050100150200250SE +/- 0.15, N = 3212.24212.12212.11164.06163.45163.05126.52126.12

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-StreamRyzen 7600 AMDAMD 7600Ryzen 7600Ryzen 7 77007700AMD 7700Ryzen 9 79007900246810SE +/- 0.0033, N = 34.71164.71414.71456.09526.11786.13307.90377.9284

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Person Detection FP32 - Device: CPURyzen 7600Ryzen 7600 AMDAMD 7600AMD 77007700Ryzen 7 7700Ryzen 9 790079001.14082.28163.42244.56325.704SE +/- 0.01, N = 33.023.053.063.843.853.855.025.071. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-StreamRyzen 7600AMD 7600Ryzen 7600 AMD7700Ryzen 7 7700AMD 77007900Ryzen 9 790050100150200250SE +/- 0.11, N = 3212.01211.97211.80164.38164.29162.58126.34126.31

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-StreamRyzen 7600AMD 7600Ryzen 7600 AMD7700Ryzen 7 7700AMD 77007900Ryzen 9 7900246810SE +/- 0.0024, N = 34.71674.71754.72136.08326.08666.15077.91517.9169

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: efficientnet-b07900Ryzen 9 79007700Ryzen 7600AMD 7600Ryzen 7 7700Ryzen 7600 AMDAMD 77000.94281.88562.82843.77124.714SE +/- 0.00, N = 34.194.182.912.862.722.702.692.50MIN: 4.14 / MAX: 4.62MIN: 4.15 / MAX: 4.53MIN: 2.68 / MAX: 4.44MIN: 2.81 / MAX: 4.44MIN: 2.68 / MAX: 3.26MIN: 2.48 / MAX: 5.14MIN: 2.66 / MAX: 3.21MIN: 2.46 / MAX: 3.011. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Age Gender Recognition Retail 0013 FP16 - Device: CPURyzen 7600AMD 7600Ryzen 7600 AMDAMD 77007700Ryzen 7 77007900Ryzen 9 79006K12K18K24K30KSE +/- 16.00, N = 316423.7716444.4416510.7320765.7621050.5021184.2027518.7827522.211. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPUAMD 7600Ryzen 7600 AMDRyzen 7600Ryzen 7 77007700AMD 7700Ryzen 9 790079000.07080.14160.21240.28320.354SE +/- 0.001519, N = 30.3147640.3131290.3122500.2517770.2504830.2410040.1894510.188795MIN: 0.3MIN: 0.29MIN: 0.29MIN: 0.22MIN: 0.22MIN: 0.22MIN: 0.17MIN: 0.171. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPURyzen 7600 AMDRyzen 7600AMD 7600Ryzen 7 77007700AMD 77007900Ryzen 9 79000.2340.4680.7020.9361.17SE +/- 0.000920, N = 31.0399601.0397301.0361300.8110900.8108150.7949900.6261770.625231MIN: 1MIN: 1MIN: 1MIN: 0.74MIN: 0.74MIN: 0.73MIN: 0.56MIN: 0.561. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Timed MPlayer Compilation

This test times how long it takes to build the MPlayer open-source media player program. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed MPlayer Compilation 1.5Time To CompileAMD 7600Ryzen 7600 AMDRyzen 76007700AMD 7700Ryzen 7 77007900Ryzen 9 7900816243240SE +/- 0.08, N = 335.0735.0634.9228.2327.9327.5621.1421.08

Primesieve

Primesieve generates prime numbers using a highly optimized sieve of Eratosthenes implementation. Primesieve primarily benchmarks the CPU's L1/L2 cache performance. Learn more via the OpenBenchmarking.org test page.
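For context on the algorithm being exercised, a plain (unoptimized) sieve of Eratosthenes is sketched below; primesieve's actual implementation is a heavily cache-tuned segmented sieve, which is why it stresses L1/L2 behaviour.

    def sieve(limit: int) -> list[int]:
        """Return all primes <= limit using a basic sieve of Eratosthenes."""
        is_prime = bytearray([1]) * (limit + 1)
        is_prime[0:2] = b"\x00\x00"                      # 0 and 1 are not prime
        for p in range(2, int(limit ** 0.5) + 1):
            if is_prime[p]:
                # strike out every multiple of p starting at p*p
                is_prime[p * p::p] = bytearray(len(is_prime[p * p::p]))
        return [n for n, flag in enumerate(is_prime) if flag]

    print(len(sieve(10 ** 6)))   # 78498 primes below one million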

OpenBenchmarking.orgSeconds, Fewer Is BetterPrimesieve 8.0Length: 1e13Ryzen 7600AMD 7600Ryzen 7600 AMDRyzen 7 77007700AMD 7700Ryzen 9 790079004080120160200SE +/- 0.48, N = 3203.79203.44203.19161.06161.02160.12122.64122.561. (CXX) g++ options: -O3

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed LLVM Compilation 13.0Build System: Unix MakefilesAMD 7600Ryzen 7600Ryzen 7600 AMD7700Ryzen 7 7700AMD 77007900Ryzen 9 7900130260390520650SE +/- 1.68, N = 3622.39621.82621.55511.00503.16495.07375.35374.36

Primesieve

Primesieve generates prime numbers using a highly optimized sieve of Eratosthenes implementation. Primesieve primarily benchmarks the CPU's L1/L2 cache performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterPrimesieve 8.0Length: 1e12Ryzen 7600Ryzen 7600 AMDAMD 76007700Ryzen 7 7700AMD 7700Ryzen 9 7900790048121620SE +/- 0.00, N = 316.7216.7116.6913.2013.1713.0810.0710.061. (CXX) g++ options: -O3

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested or alternatively an allmodconfig for building all possible kernel modules for the build. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Linux Kernel Compilation 6.1Build: defconfigAMD 7600Ryzen 7600Ryzen 7600 AMD7700Ryzen 7 7700AMD 77007900Ryzen 9 790020406080100SE +/- 0.14, N = 397.4096.3096.2278.7477.8877.6659.2558.80

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code that offers Cargo-like features. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBuild2 0.13Time To CompileRyzen 7600Ryzen 7600 AMDAMD 7600Ryzen 7 77007700AMD 7700Ryzen 9 79007900306090120150SE +/- 0.35, N = 3115.54114.28113.8993.3492.7190.4970.2969.91

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterRodinia 3.1Test: OpenMP CFD SolverRyzen 7600Ryzen 7600 AMDAMD 7600Ryzen 7 77007700AMD 77007900Ryzen 9 790048121620SE +/- 0.04, N = 318.1918.1618.1214.6014.5613.9711.0311.011. (CXX) g++ options: -O2 -lOpenCL

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: BT.CAMD 7600Ryzen 7600Ryzen 7600 AMDRyzen 7 7700AMD 77007700Ryzen 9 790079009K18K27K36K45KSE +/- 42.32, N = 326241.3226385.9026610.5326612.6626715.2026811.9042376.8043264.201. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMIPS, More Is Better7-Zip Compression 22.01Test: Compression RatingRyzen 7600 AMDAMD 7600Ryzen 76007700Ryzen 7 7700AMD 7700Ryzen 9 7900790030K60K90K120K150KSE +/- 205.24, N = 39017890334903491109141113751125851482101482451. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

GROMACS

The GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package testing with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNs Per Day, More Is BetterGROMACS 2022.1Implementation: MPI CPU - Input: water_GMX50_bareRyzen 7600 AMDAMD 7600Ryzen 7600Ryzen 7 77007700AMD 77007900Ryzen 9 79000.45450.9091.36351.8182.2725SE +/- 0.006, N = 31.2311.2341.2371.4651.4681.4802.0182.0201. (CXX) g++ options: -O3

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Machine Translation EN To DE FP16 - Device: CPURyzen 7600 AMDAMD 7600Ryzen 7600AMD 7700Ryzen 7 77007700Ryzen 9 7900790020406080100SE +/- 0.52, N = 360.2260.2462.2473.6579.7680.7097.4998.781. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.3Tuning: Visual Quality Optimized - Input: Bosphorus 1080pRyzen 7600AMD 7600Ryzen 7600 AMDRyzen 7 77007700AMD 77007900Ryzen 9 790060120180240300SE +/- 0.32, N = 3180.45181.12181.36206.90207.45209.43293.13294.301. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.
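The python_startup benchmark below essentially measures how quickly a fresh interpreter starts and exits; a rough hand-rolled equivalent (not pyperformance's calibrated harness) is:

    import subprocess, sys, time

    N = 50
    start = time.perf_counter()
    for _ in range(N):
        subprocess.run([sys.executable, "-c", "pass"], check=True)   # spawn and exit an interpreter
    avg_ms = (time.perf_counter() - start) / N * 1000
    print(f"average startup: {avg_ms:.2f} ms")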

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: python_startupAMD 7700Ryzen 7600AMD 7600Ryzen 7600 AMDRyzen 7 770077007900Ryzen 9 7900246810SE +/- 0.00, N = 37.244.764.754.754.614.604.464.45

Timed Godot Game Engine Compilation

This test times how long it takes to compile the Godot Game Engine. Godot is a popular, open-source, cross-platform 2D/3D game engine and is built using the SCons build system and targeting the X11 platform. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Godot Game Engine Compilation 3.2.3Time To CompileRyzen 7600Ryzen 7600 AMDAMD 7600Ryzen 7 77007700AMD 7700Ryzen 9 79007900306090120150SE +/- 0.44, N = 3112.85111.95110.9191.4191.2390.7969.8569.53

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed FFmpeg Compilation 4.4Time To CompileAMD 7600Ryzen 7600Ryzen 7600 AMD7700Ryzen 7 7700AMD 77007900Ryzen 9 79001122334455SE +/- 0.23, N = 347.2847.1746.8538.4738.4138.2829.4929.17

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-HEVC 1.5.0Tuning: 7 - Input: Bosphorus 4KRyzen 7600AMD 7600Ryzen 7600 AMDRyzen 7 77007700AMD 7700Ryzen 9 790079001632486480SE +/- 0.03, N = 343.5643.7344.4954.6254.6854.8469.9170.471. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: IP Shapes 1D - Data Type: f32 - Engine: CPURyzen 7600Ryzen 7600 AMDAMD 76007700Ryzen 7 7700AMD 7700Ryzen 9 790079000.90841.81682.72523.63364.542SE +/- 0.01473, N = 34.037514.029414.022153.253583.245323.104512.517762.50650MIN: 3.76MIN: 3.74MIN: 3.76MIN: 2.88MIN: 2.84MIN: 2.9MIN: 2.21MIN: 2.241. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMiB/s, More Is BetterGNU RadioTest: Five Back to Back FIR FiltersRyzen 9 79007900AMD 7600Ryzen 7600Ryzen 7600 AMDRyzen 7 77007700AMD 77005001000150020002500SE +/- 16.32, N = 81322.91430.41862.71923.91934.12004.42115.22128.81. 3.10.1.1

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.3Tuning: Visual Quality Optimized - Input: Bosphorus 4KAMD 7600Ryzen 7600Ryzen 7600 AMD7700Ryzen 7 7700AMD 7700Ryzen 9 7900790020406080100SE +/- 0.02, N = 353.2653.3553.5264.3864.4664.5284.9085.601. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: yolov4 - Device: CPU - Executor: Standard7700Ryzen 7600 AMDRyzen 7 7700Ryzen 7600AMD 7600Ryzen 9 79007900AMD 77002004006008001000SE +/- 40.22, N = 125155465616566596936948271. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU threaded version for processor benchmarking and not any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2.1Model: SqueezeNetV1.0Ryzen 9 79007900Ryzen 7600Ryzen 7600 AMD7700Ryzen 7 7700AMD 7600AMD 77000.86921.73842.60763.47684.346SE +/- 0.017, N = 153.8633.7712.6182.5832.5592.5572.4342.420MIN: 3.79 / MAX: 5.06MIN: 3.69 / MAX: 4.19MIN: 2.59 / MAX: 4.5MIN: 2.41 / MAX: 9.64MIN: 2.41 / MAX: 4.85MIN: 2.4 / MAX: 5.3MIN: 2.4 / MAX: 9.97MIN: 2.38 / MAX: 13.731. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-HEVC 1.5.0Tuning: 7 - Input: Bosphorus 1080pRyzen 7600 AMDAMD 7600Ryzen 7600Ryzen 7 77007700AMD 77007900Ryzen 9 790050100150200250SE +/- 0.12, N = 3137.00137.14137.30172.71172.81174.52217.47218.501. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

Timed Mesa Compilation

This test profile times how long it takes to compile Mesa with Meson/Ninja. To minimize build dependencies and avoid versioning conflicts, this test is just the core Mesa build without LLVM or the extra Gallium3D/Mesa drivers enabled. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Mesa Compilation 21.0Time To CompileRyzen 7600AMD 7600Ryzen 7600 AMDRyzen 7 77007700AMD 7700Ryzen 9 790079001020304050SE +/- 0.06, N = 346.0545.9745.9238.2538.2137.6829.0128.88

Mobile Neural Network

MNN is the Mobile Neural Network, a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU threaded version for processor benchmarking and not any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2.1Model: inception-v3Ryzen 9 79007900Ryzen 7600Ryzen 7600 AMD7700Ryzen 7 7700AMD 7700AMD 7600510152025SE +/- 0.24, N = 1522.4720.9216.8716.3915.5115.3714.6014.12MIN: 22.2 / MAX: 30.15MIN: 20.57 / MAX: 28.87MIN: 16.72 / MAX: 18.95MIN: 13.9 / MAX: 24.09MIN: 14.26 / MAX: 35.72MIN: 14.16 / MAX: 25.66MIN: 14.2 / MAX: 25.71MIN: 14.05 / MAX: 15.711. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Person Vehicle Bike Detection FP16 - Device: CPURyzen 7600AMD 7600Ryzen 7600 AMD7700Ryzen 7 7700AMD 77007900Ryzen 9 79002004006008001000SE +/- 3.37, N = 3695.35699.47705.10822.33834.42839.031102.441102.441. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.
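
As a hedged illustration of the encoder-speed parameter, the sketch below shells out to avifenc with --speed; the input and output file names are placeholders, and the test profile supplies its own source JPEG.

```python
import subprocess
import time

start = time.time()
# avifenc's --speed (0-10) trades encode time for compression efficiency; 6 matches
# the "Encoder Speed: 6" result here. File names are placeholders.
subprocess.run(["avifenc", "--speed", "6", "input.jpg", "output.avif"], check=True)
print(f"Encoder Speed 6: {time.time() - start:.3f} seconds")
```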

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.11Encoder Speed: 6AMD 7600Ryzen 7600 AMDRyzen 7600Ryzen 7 77007700AMD 7700Ryzen 9 79007900246810SE +/- 0.014, N = 36.7006.6596.6535.3845.3235.2884.2994.2401. (CXX) g++ options: -O3 -fPIC -lm

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the CPU mining hash speed for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.
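
A minimal sketch of an offline hash-rate run follows; the cpuminer binary name, the "myr-gr" algorithm identifier for Myriad-Groestl, and the thread count are assumptions based on cpuminer-opt's usual -a/--benchmark/-t options.

```python
import subprocess

# Offline benchmark: no pool connection, just hash the selected algorithm and report kH/s.
subprocess.run([
    "cpuminer",
    "-a", "myr-gr",      # assumed algorithm name for Myriad-Groestl
    "--benchmark",       # benchmark mode, no mining pool required
    "-t", "16",          # assumed thread count (match the CPU's thread count)
], check=True)
```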

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: Myriad-GroestlRyzen 7600AMD 7600Ryzen 7600 AMDRyzen 7 77007700AMD 7700Ryzen 9 7900790011K22K33K44K55KSE +/- 250.27, N = 333330333403353043080431404319049110524901. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

x264

This is a multi-threaded test of the x264 video encoder run on the CPU with a choice of 1080p or 4K video input. Learn more via the OpenBenchmarking.org test page.
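
To make the encoder invocation concrete, a hedged x264 command is sketched below; the input file name is a placeholder and the test profile handles its own sample video and options.

```python
import subprocess

subprocess.run([
    "x264",
    "--threads", "auto",           # let x264 use all available CPU threads
    "--output", "/dev/null",       # discard the bitstream; only the FPS figure matters
    "Bosphorus_3840x2160.y4m",     # placeholder input file
], check=True)
# x264 prints the achieved frames per second at the end of the encode.
```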

OpenBenchmarking.orgFrames Per Second, More Is Betterx264 2022-02-22Video Input: Bosphorus 4KAMD 7600Ryzen 7600Ryzen 7600 AMD7700Ryzen 7 7700AMD 7700Ryzen 9 790079001122334455SE +/- 0.31, N = 630.0030.2131.8937.1037.2737.3947.0847.131. (CC) gcc options: -ldl -lavformat -lavcodec -lavutil -lswscale -m64 -lm -lpthread -O3 -flto

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPURyzen 7600 AMDRyzen 7600AMD 7600Ryzen 9 7900Ryzen 7 77007700AMD 770079001.20312.40623.60934.81246.0155SE +/- 0.00039, N = 35.347075.340625.335783.970483.918163.909233.890953.42107MIN: 5.3MIN: 5.3MIN: 5.3MIN: 3.34MIN: 3.63MIN: 3.68MIN: 3.71MIN: 3.321. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPURyzen 7600Ryzen 7600 AMDAMD 7600Ryzen 7 77007700AMD 7700Ryzen 9 790079001.22792.45583.68374.91166.1395SE +/- 0.00921, N = 35.457285.437065.416344.399464.377364.284963.494993.49424MIN: 5.22MIN: 5.22MIN: 5.23MIN: 3.84MIN: 3.85MIN: 3.88MIN: 3.06MIN: 3.051. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenFOAM 10Input: drivaerFastback, Small Mesh Size - Execution TimeRyzen 7600AMD 7600Ryzen 7600 AMD7700Ryzen 7 7700AMD 77007900Ryzen 9 790060120180240300274.56274.56273.74254.50254.33251.62176.83176.071. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.3Tuning: VMAF Optimized - Input: Bosphorus 1080pRyzen 7600AMD 7600Ryzen 7600 AMDRyzen 7 77007700AMD 7700Ryzen 9 7900790070140210280350SE +/- 2.29, N = 3215.01215.04220.17243.98244.06246.55334.75335.221. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.3Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080pRyzen 7600AMD 7600Ryzen 7600 AMDRyzen 7 77007700AMD 7700Ryzen 9 7900790080160240320400SE +/- 0.15, N = 3222.08226.78227.72252.79254.60261.80344.81345.401. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-StreamAMD 7600Ryzen 7600 AMDRyzen 7600Ryzen 7 77007700AMD 7700Ryzen 9 790079003691215SE +/- 0.0166, N = 311.727311.715811.69178.03018.02747.97517.60637.5604

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-StreamAMD 7600Ryzen 7600 AMDRyzen 7600Ryzen 7 77007700AMD 7700Ryzen 9 79007900306090120150SE +/- 0.12, N = 385.2385.3185.48124.44124.49125.30131.38132.18

x264

This is a multi-threaded test of the x264 video encoder run on the CPU with a choice of 1080p or 4K video input. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is Betterx264 2022-02-22Video Input: Bosphorus 1080pRyzen 7600AMD 7600Ryzen 7600 AMDAMD 77007700Ryzen 7 77007900Ryzen 9 790050100150200250SE +/- 1.62, N = 3134.66134.80136.26166.27167.51168.28204.59208.221. (CC) gcc options: -ldl -lavformat -lavcodec -lavutil -lswscale -m64 -lm -lpthread -O3 -flto

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPUAMD 7600Ryzen 7600 AMDRyzen 7600Ryzen 7 77007700AMD 77007900Ryzen 9 79000.30620.61240.91861.22481.531SE +/- 0.005136, N = 31.3607801.3554801.3531001.0027000.9993600.9897940.8818270.880093MIN: 1.3MIN: 1.29MIN: 1.3MIN: 0.89MIN: 0.89MIN: 0.88MIN: 0.8MIN: 0.811. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Appleseed

Appleseed is an open-source production renderer focused on physically-based global illumination, primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterAppleseed 2.0 BetaScene: Material TesterRyzen 7600AMD 7600Ryzen 7600 AMDRyzen 7 77007700AMD 7700Ryzen 9 790079004080120160200189.54188.62188.46145.39145.29144.68124.04123.05

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.1Video Input: Bosphorus 4K - Video Preset: MediumRyzen 7600AMD 7600Ryzen 7600 AMD7700Ryzen 7 7700AMD 7700Ryzen 9 7900790048121620SE +/- 0.00, N = 39.099.109.1211.3211.3311.3613.8913.971. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-HEVC 1.5.0Tuning: 10 - Input: Bosphorus 1080pRyzen 7600AMD 7600Ryzen 7600 AMD7700Ryzen 7 7700AMD 7700Ryzen 9 79007900100200300400500SE +/- 1.08, N = 3286.26286.81286.86359.07360.58362.54437.96439.561. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPURyzen 7600Ryzen 7600 AMDAMD 7600Ryzen 7 77007700AMD 77007900Ryzen 9 79000.74451.4892.23352.9783.7225SE +/- 0.00750, N = 33.309033.305143.303422.507182.502322.462752.260912.16726MIN: 3.15MIN: 3.14MIN: 3.14MIN: 2.33MIN: 2.34MIN: 2.26MIN: 2.08MIN: 2.111. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-StreamRyzen 7600 AMDAMD 7600Ryzen 76007700Ryzen 7 7700AMD 7700Ryzen 9 79007900612182430SE +/- 0.03, N = 323.3623.3523.3318.4218.3118.2715.3615.30

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-StreamRyzen 7600 AMDAMD 7600Ryzen 76007700Ryzen 7 7700AMD 7700Ryzen 9 790079001530456075SE +/- 0.06, N = 342.7942.8242.8654.2654.5954.7165.0665.31

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: googlenetRyzen 9 790079007700Ryzen 7600AMD 7600Ryzen 7 7700Ryzen 7600 AMDAMD 7700246810SE +/- 0.04, N = 38.078.016.705.955.735.695.645.30MIN: 7.97 / MAX: 9.25MIN: 7.91 / MAX: 8.62MIN: 6.24 / MAX: 7.84MIN: 5.85 / MAX: 7.49MIN: 5.68 / MAX: 6.22MIN: 5.28 / MAX: 6.93MIN: 5.54 / MAX: 6.73MIN: 5.22 / MAX: 6.911. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.
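
For a sense of what is being measured, below is a minimal CPU inference sketch with the onnxruntime Python API; the model path, input shape, and iteration count are assumptions and are separate from how the test profile drives its own measurements.

```python
import time
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
name = sess.get_inputs()[0].name
data = np.random.rand(1, 3, 224, 224).astype(np.float32)  # assumed input shape

start = time.time()
for _ in range(60):                       # assumed iteration count
    sess.run(None, {name: data})
elapsed = time.time() - start
print(f"inferences per minute: {60 * 60 / elapsed:.0f}")
```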

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: ArcFace ResNet-100 - Device: CPU - Executor: ParallelAMD 7600Ryzen 7600 AMDRyzen 7600AMD 7700Ryzen 7 770077007900Ryzen 9 7900400800120016002000SE +/- 3.11, N = 3127313041318164016691670191919361. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.
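
A hedged sketch of how the "Encoder Mode: Preset N" parameter maps onto the encoder is shown below; the SvtAv1EncApp flag names and file names are assumptions for illustration.

```python
import subprocess

subprocess.run([
    "SvtAv1EncApp",
    "--preset", "8",                  # higher presets trade compression efficiency for speed
    "-i", "Bosphorus_3840x2160.y4m",  # placeholder input video
    "-b", "output.ivf",               # placeholder output bitstream
], check=True)
```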

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.4Encoder Mode: Preset 8 - Input: Bosphorus 4KAMD 7600Ryzen 7600Ryzen 7600 AMD7700Ryzen 7 7700AMD 77007900Ryzen 9 79001326395265SE +/- 0.10, N = 337.1837.2437.2845.4745.5445.9156.2956.401. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Darktable

Darktable is an open-source photography / workflow application. This test will use any system-installed Darktable program or, on Windows, will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterDarktable 3.8.1Test: Boat - Acceleration: CPU-onlyRyzen 7600AMD 7600Ryzen 7600 AMD7700Ryzen 7 7700AMD 7700Ryzen 9 790079001.03162.06323.09484.12645.158SE +/- 0.018, N = 34.5854.5574.5263.7853.7423.6673.0563.028

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-HEVC 1.5.0Tuning: 10 - Input: Bosphorus 4KRyzen 7600Ryzen 7600 AMDAMD 7600Ryzen 7 77007700AMD 7700Ryzen 9 79007900306090120150SE +/- 0.10, N = 389.2689.6589.96109.97110.11110.60134.80134.921. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.11Encoder Speed: 0AMD 7600Ryzen 7600Ryzen 7600 AMD7700Ryzen 7 7700AMD 7700Ryzen 9 79007900306090120150SE +/- 0.10, N = 3143.24142.85142.81116.80116.73115.8895.6994.771. (CXX) g++ options: -O3 -fPIC -lm

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
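
As an illustration, zstd's built-in benchmark mode reports compression and decompression MB/s for a given level; the sketch below uses the sample disk image named in the description, and the --long flag corresponds to the "Long Mode" results.

```python
import subprocess

IMG = "FreeBSD-12.2-RELEASE-amd64-memstick.img"

# Built-in benchmark at level 19, reporting compression/decompression MB/s.
subprocess.run(["zstd", "-b19", IMG], check=True)

# Same level with long-distance matching enabled ("Long Mode" results).
subprocess.run(["zstd", "-b19", "--long", IMG], check=True)
```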

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 19 - Compression SpeedAMD 7600Ryzen 7600Ryzen 7600 AMDRyzen 7 7700AMD 77007700Ryzen 9 790079001428425670SE +/- 0.12, N = 342.843.143.652.452.552.663.864.21. (CC) gcc options: -O3 -pthread -lz -llzma

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.4Encoder Mode: Preset 8 - Input: Bosphorus 1080pRyzen 7600AMD 7600Ryzen 7600 AMD7700Ryzen 7 7700AMD 7700Ryzen 9 790079004080120160200SE +/- 0.38, N = 3105.84106.01106.89129.24129.25129.95158.21158.661. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.1Video Input: Bosphorus 4K - Video Preset: Very FastRyzen 7600 AMDRyzen 7600AMD 7600Ryzen 7 77007700AMD 7700Ryzen 9 79007900714212835SE +/- 0.05, N = 321.1021.1421.1526.1826.2026.4231.4531.481. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-StreamRyzen 7600AMD 7600Ryzen 7600 AMDRyzen 7 77007700AMD 7700Ryzen 9 79007900714212835SE +/- 0.08, N = 331.4331.3031.2624.9624.7624.6121.2721.14

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-StreamRyzen 7600AMD 7600Ryzen 7600 AMDRyzen 7 77007700AMD 7700Ryzen 9 790079001122334455SE +/- 0.08, N = 331.8131.9431.9940.0540.3840.6347.0047.29

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.3Tuning: VMAF Optimized - Input: Bosphorus 4KAMD 7600Ryzen 7600Ryzen 7600 AMD7700Ryzen 7 7700AMD 77007900Ryzen 9 790020406080100SE +/- 0.87, N = 363.0063.8465.5571.9972.2572.6390.7993.601. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterRodinia 3.1Test: OpenMP LeukocyteRyzen 7600AMD 7600Ryzen 7 77007700AMD 7700Ryzen 7600 AMD7900Ryzen 9 790020406080100SE +/- 0.15, N = 387.1086.4784.9984.6583.7981.9559.7359.021. (CXX) g++ options: -O2 -lOpenCL

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.4Encoder Mode: Preset 4 - Input: Bosphorus 4KRyzen 7600AMD 7600Ryzen 7600 AMDAMD 7700Ryzen 7 770077007900Ryzen 9 79001.00532.01063.01594.02125.0265SE +/- 0.003, N = 33.0343.0423.0583.6803.6843.6864.4524.4681. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.11Encoder Speed: 6, LosslessAMD 7600Ryzen 7600 AMDRyzen 7600Ryzen 7 77007700AMD 77007900Ryzen 9 79003691215SE +/- 0.014, N = 39.9579.9029.8608.1748.1718.0506.8266.7651. (CXX) g++ options: -O3 -fPIC -lm

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.3Tuning: PSNR/SSIM Optimized - Input: Bosphorus 4KRyzen 7600AMD 7600Ryzen 7600 AMDRyzen 7 77007700AMD 77007900Ryzen 9 790020406080100SE +/- 0.03, N = 368.2368.5569.1877.3078.2980.1599.80100.141. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-StreamRyzen 7600 AMDAMD 7600Ryzen 76007700Ryzen 7 7700AMD 7700Ryzen 9 7900790048121620SE +/- 0.02, N = 315.6815.6415.6312.4812.4312.3910.8510.71

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-StreamRyzen 7600 AMDAMD 7600Ryzen 76007700Ryzen 7 7700AMD 7700Ryzen 9 7900790020406080100SE +/- 0.08, N = 363.7663.9063.9580.0980.4480.7092.1393.30

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterRodinia 3.1Test: OpenMP Streamcluster7700Ryzen 7 7700Ryzen 7600AMD 7600Ryzen 7600 AMDAMD 77007900Ryzen 9 79003691215SE +/- 0.011, N = 312.12712.11412.06911.97911.96111.5788.4308.2981. (CXX) g++ options: -O2 -lOpenCL

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: super-resolution-10 - Device: CPU - Executor: ParallelRyzen 7600 AMDRyzen 7600AMD 7600AMD 7700Ryzen 7 770077007900Ryzen 9 790015003000450060007500SE +/- 23.47, N = 3465646744678574758395892671567911. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: IP Shapes 3D - Data Type: f32 - Engine: CPURyzen 7600Ryzen 7600 AMDAMD 76007700Ryzen 7 7700AMD 7700Ryzen 9 790079001.04232.08463.12694.16925.2115SE +/- 0.00604, N = 34.632344.481294.475344.326454.319024.174003.179853.17749MIN: 4.54MIN: 4.4MIN: 4.38MIN: 4.15MIN: 4.14MIN: 4.14MIN: 3.13MIN: 3.131. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 19, Long Mode - Compression SpeedRyzen 7600Ryzen 7600 AMDAMD 7600Ryzen 7 77007700AMD 7700Ryzen 9 790079001224364860SE +/- 0.03, N = 335.735.935.945.846.446.851.051.61. (CC) gcc options: -O3 -pthread -lz -llzma

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.4Encoder Mode: Preset 12 - Input: Bosphorus 4KRyzen 7600Ryzen 7600 AMDAMD 7600Ryzen 7 77007700AMD 77007900Ryzen 9 79004080120160200SE +/- 0.28, N = 3127.22127.83128.58149.54150.39152.05181.98183.701. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.1Video Input: Bosphorus 4K - Video Preset: Ultra FastRyzen 7600AMD 7600Ryzen 7600 AMD7700Ryzen 7 7700AMD 7700Ryzen 9 790079001224364860SE +/- 0.11, N = 337.9638.0438.1746.8446.8947.0154.6054.711. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.11Encoder Speed: 2Ryzen 7600AMD 7600Ryzen 7600 AMDRyzen 7 77007700AMD 7700Ryzen 9 790079001530456075SE +/- 0.03, N = 367.8267.8167.6956.5756.4455.9947.2347.101. (CXX) g++ options: -O3 -fPIC -lm

Y-Cruncher

Y-Cruncher is a multi-threaded Pi benchmark capable of computing Pi to trillions of digits. Learn more via the OpenBenchmarking.org test page.
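
Y-Cruncher itself is a dedicated multi-threaded tool; purely as a toy illustration of arbitrary-precision Pi computation, a short mpmath sketch follows (the digit count here is tiny compared with the 500M/1B digit runs measured in these results).

```python
from mpmath import mp

mp.dps = 1000        # decimal digits of precision; y-cruncher targets 500M/1B digits
pi = +mp.pi          # evaluate Pi at the configured precision
print(str(pi)[:50], "...")
```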

OpenBenchmarking.orgSeconds, Fewer Is BetterY-Cruncher 0.7.10.9513Pi Digits To Calculate: 1BAMD 7600Ryzen 7600Ryzen 7600 AMD7700Ryzen 7 7700AMD 77007900Ryzen 9 7900816243240SE +/- 0.10, N = 333.0833.0432.9928.0328.0027.8923.3223.30

Timed PHP Compilation

This test times how long it takes to build PHP. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed PHP Compilation 8.1.9Time To CompileAMD 7600Ryzen 7600Ryzen 7600 AMD7700Ryzen 7 7700AMD 7700Ryzen 9 790079001326395265SE +/- 0.29, N = 356.4156.3355.8048.9348.7748.3440.8039.74

GPAW

GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.
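
A minimal GPAW/ASE sketch of a DFT calculation is shown below for orientation; the molecule, LCAO basis, and calculator settings are assumptions and differ from the carbon-nanotube input used by this test profile.

```python
from ase.build import molecule
from gpaw import GPAW

atoms = molecule("H2O")                 # assumed: a small test molecule
atoms.center(vacuum=3.0)                # add vacuum padding around the molecule
atoms.calc = GPAW(mode="lcao", basis="dzp", txt="gpaw.out")
print("potential energy (eV):", atoms.get_potential_energy())
```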

OpenBenchmarking.orgSeconds, Fewer Is BetterGPAW 22.1Input: Carbon NanotubeRyzen 7600AMD 7600Ryzen 7600 AMDRyzen 7 77007700AMD 77007900Ryzen 9 790050100150200250SE +/- 0.83, N = 3239.33237.47235.98209.84209.08207.91169.05168.931. (CC) gcc options: -shared -fwrapv -O2 -lxc -lblas -lmpi

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterVP9 libvpx Encoding 1.10.0Speed: Speed 5 - Input: Bosphorus 4KRyzen 7600Ryzen 7600 AMDAMD 7600Ryzen 9 79007900AMD 7700Ryzen 7 77007700714212835SE +/- 0.11, N = 321.3121.9622.3623.9924.3629.0129.1930.161. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.1Video Input: Bosphorus 1080p - Video Preset: MediumRyzen 7600Ryzen 7600 AMDAMD 76007700Ryzen 7 7700AMD 7700Ryzen 9 790079001428425670SE +/- 0.06, N = 344.6544.6844.6858.5658.7658.8663.1663.171. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Darktable

Darktable is an open-source photography / workflow application. This test will use any system-installed Darktable program or, on Windows, will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterDarktable 3.8.1Test: Server Rack - Acceleration: CPU-onlyRyzen 7600Ryzen 7600 AMDAMD 7600Ryzen 7 77007700AMD 77007900Ryzen 9 79000.04010.08020.12030.16040.2005SE +/- 0.001, N = 30.1780.1770.1760.1580.1560.1480.1350.126

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU7700Ryzen 7 7700Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7600Ryzen 9 79007900246810SE +/- 0.03190, N = 37.346687.331747.111217.095157.080487.062695.226745.21572MIN: 7MIN: 6.99MIN: 7MIN: 7MIN: 6.98MIN: 6.99MIN: 5.15MIN: 5.151. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.
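
For reference, NPB-MPI benchmarks are typically launched with mpirun against per-class binaries; the rank count and the lu.C.x binary name below follow common NPB naming conventions and are assumptions here.

```python
import subprocess

# Launch the class-C LU benchmark across 16 MPI ranks (both values are assumptions).
subprocess.run(["mpirun", "-np", "16", "./bin/lu.C.x"], check=True)
# NPB prints the Total Mop/s figure that is used as the result value.
```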

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: LU.CRyzen 7600 AMDAMD 7600Ryzen 7600Ryzen 7 77007700AMD 77007900Ryzen 9 790010K20K30K40K50KSE +/- 16.61, N = 331737.6131784.5231801.6943900.9143910.5344080.0844535.6044554.531. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: bertsquad-12 - Device: CPU - Executor: ParallelRyzen 7600Ryzen 7600 AMDAMD 76007700Ryzen 7 7700AMD 7700Ryzen 9 79007900170340510680850SE +/- 0.60, N = 35505525556676686767677721. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.4Encoder Mode: Preset 13 - Input: Bosphorus 4KRyzen 7600 AMDRyzen 7600AMD 7600Ryzen 7 77007700AMD 77007900Ryzen 9 79004080120160200SE +/- 0.38, N = 3136.23136.43136.85161.95162.38162.39189.15190.961. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Timed Wasmer Compilation

This test times how long it takes to compile Wasmer. Wasmer is written in the Rust programming language and is a WebAssembly runtime implementation that supports WASI and EmScripten. This test profile builds Wasmer with the Cranelift and Singlepass compiler features enabled. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Wasmer Compilation 2.3Time To CompileRyzen 7600AMD 7600Ryzen 7600 AMD7700AMD 7700Ryzen 7 7700Ryzen 9 790079001224364860SE +/- 0.45, N = 353.1350.8150.7744.7044.3344.1837.9337.911. (CC) gcc options: -m64 -ldl -lgcc_s -lutil -lrt -lpthread -lm -lc -pie -nodefaultlibs

Darktable

Darktable is an open-source photography / workflow application. This test will use any system-installed Darktable program or, on Windows, will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterDarktable 3.8.1Test: Masskrug - Acceleration: CPU-onlyRyzen 7600AMD 7600Ryzen 7600 AMD7700Ryzen 7 7700AMD 7700Ryzen 9 790079000.96441.92882.89323.85764.822SE +/- 0.013, N = 34.2864.2324.2203.6903.6523.5843.1743.071

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference high-performance code for solving the incompressible Navier-Stokes equation along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterXcompact3d Incompact3d 2021-03-11Input: input.i3d 129 Cells Per DirectionAMD 7600Ryzen 7600 AMDRyzen 76007700Ryzen 7 7700AMD 7700Ryzen 9 79007900510152025SE +/- 0.02, N = 320.9920.9620.9419.5919.5619.4415.0815.071. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

Y-Cruncher

Y-Cruncher is a multi-threaded Pi benchmark capable of computing Pi to trillions of digits. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterY-Cruncher 0.7.10.9513Pi Digits To Calculate: 500MAMD 7600Ryzen 7600 AMDRyzen 7600Ryzen 7 77007700AMD 77007900Ryzen 9 790048121620SE +/- 0.01, N = 315.0815.0515.0412.7712.7212.6810.8710.85

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPUAMD 7600Ryzen 7600 AMDRyzen 76007700Ryzen 7 7700AMD 7700Ryzen 9 790079000.35630.71261.06891.42521.7815SE +/- 0.00075, N = 31.583601.580541.574921.272451.272091.234961.144431.14326MIN: 1.52MIN: 1.49MIN: 1.5MIN: 1.16MIN: 1.16MIN: 1.13MIN: 1.01MIN: 1.051. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program or, on Windows, relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterGIMP 2.10.30Test: resizeRyzen 9 79007900AMD 7600Ryzen 7600Ryzen 7600 AMD7700Ryzen 7 7700AMD 77003691215SE +/- 0.062, N = 313.60213.30610.42110.41710.30610.18310.0339.866

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: FT.CRyzen 7600 AMDAMD 7600Ryzen 7600AMD 7700Ryzen 7 77007700Ryzen 9 790079005K10K15K20K25KSE +/- 13.02, N = 318180.5818201.5518217.4323901.6923930.1724004.7424557.8524918.341. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: CG.CRyzen 7600 AMDAMD 7600Ryzen 7600AMD 77007700Ryzen 7 7700Ryzen 9 790079003K6K9K12K15KSE +/- 33.87, N = 38739.838741.368923.129511.869535.439553.4511860.0811923.731. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: fcn-resnet101-11 - Device: CPU - Executor: ParallelAMD 7600Ryzen 7600Ryzen 7600 AMDAMD 7700Ryzen 7 77007700Ryzen 9 7900790020406080100SE +/- 0.00, N = 37979809494951061071. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.1Video Input: Bosphorus 1080p - Video Preset: Ultra FastRyzen 7600Ryzen 7600 AMDAMD 76007700Ryzen 7 7700AMD 7700Ryzen 9 7900790050100150200250SE +/- 0.22, N = 3160.21160.28160.45201.63201.88202.80215.98216.491. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Timed GDB GNU Debugger Compilation

This test times how long it takes to build the GNU Debugger (GDB) in a default configuration. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed GDB GNU Debugger Compilation 10.2Time To CompileRyzen 7600Ryzen 7600 AMDAMD 76007700Ryzen 7 7700AMD 7700Ryzen 9 790079001224364860SE +/- 0.08, N = 351.9151.5551.4944.6044.5744.0838.6738.61

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU7700Ryzen 7 7700Ryzen 7600Ryzen 7600 AMDAMD 7600AMD 77007900Ryzen 9 7900246810SE +/- 0.00408, N = 37.647637.639937.398687.380147.376997.307685.726415.70365MIN: 7.23MIN: 7.21MIN: 7.3MIN: 7.29MIN: 7.29MIN: 7.2MIN: 5.61MIN: 5.581. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Machine Translation EN To DE FP16 - Device: CPURyzen 7600 AMDAMD 7600Ryzen 7600Ryzen 9 79007900AMD 7700Ryzen 7 770077001530456075SE +/- 0.56, N = 366.4066.3764.2461.5360.7154.2750.1449.54MIN: 52.54 / MAX: 72.87MIN: 56.61 / MAX: 70.47MIN: 35.74 / MAX: 76.68MIN: 51.95 / MAX: 117.17MIN: 49.57 / MAX: 69.26MIN: 43.38 / MAX: 65.85MIN: 39.21 / MAX: 63.67MIN: 31.93 / MAX: 62.291. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterVP9 libvpx Encoding 1.10.0Speed: Speed 0 - Input: Bosphorus 1080pRyzen 7600 AMDRyzen 7600AMD 76007900Ryzen 9 7900Ryzen 7 77007700AMD 7700612182430SE +/- 0.06, N = 317.2017.2317.3021.1721.3922.4522.4922.971. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

OpenBenchmarking.orgFrames Per Second, More Is BetterVP9 libvpx Encoding 1.10.0Speed: Speed 0 - Input: Bosphorus 4KRyzen 7600Ryzen 7600 AMDAMD 7600Ryzen 9 790079007700Ryzen 7 7700AMD 77003691215SE +/- 0.02, N = 38.328.428.6110.2810.4110.9610.9911.101. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the CPU mining hash speed for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: GarlicoinRyzen 7600AMD 7600Ryzen 7600 AMDRyzen 9 7900Ryzen 7 770079007700AMD 770010002000300040005000SE +/- 39.25, N = 33534.183579.503598.294390.504553.534566.154593.064711.291. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenFOAM 10Input: drivaerFastback, Small Mesh Size - Mesh TimeAMD 7600Ryzen 7600 AMDRyzen 7600Ryzen 7 77007700AMD 7700Ryzen 9 7900790081624324035.4735.4535.2931.5931.4131.2326.9426.691. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Vehicle Detection FP16 - Device: CPURyzen 7600 AMDAMD 7600Ryzen 76007900Ryzen 9 7900AMD 7700Ryzen 7 770077003691215SE +/- 0.02, N = 310.7810.6710.668.928.918.628.148.12MIN: 4.84 / MAX: 21.35MIN: 6.39 / MAX: 19.07MIN: 5.63 / MAX: 25.42MIN: 4.89 / MAX: 19.33MIN: 5.72 / MAX: 19.34MIN: 4.21 / MAX: 29.5MIN: 5.28 / MAX: 22.47MIN: 4.51 / MAX: 22.011. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

nekRS

nekRS is an open-source Navier-Stokes solver based on the spectral element method. nekRS supports both CPU and GPU/accelerator execution, though this test profile is currently configured for CPU execution. nekRS is part of Nek5000 from the Mathematics and Computer Science (MCS) division at Argonne National Laboratory. This nekRS benchmark is primarily relevant to large core count HPC servers and otherwise may be very time consuming. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFLOP/s, More Is BetternekRS 22.0Input: TurboPipe PeriodicRyzen 7600AMD 7600Ryzen 7600 AMD7700Ryzen 7 7700AMD 77007900Ryzen 9 790014000M28000M42000M56000M70000MSE +/- 29429369.31, N = 349717500000497356000004980863333355524300000555482000005600840000064877800000659745000001. (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -lmpi_cxx -lmpi

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: mobilenet7900Ryzen 9 79007700Ryzen 7600Ryzen 7 7700AMD 7600Ryzen 7600 AMDAMD 7700246810SE +/- 0.00, N = 38.678.407.397.137.126.906.896.54MIN: 8.58 / MAX: 9.22MIN: 8.31 / MAX: 8.86MIN: 6.93 / MAX: 8.47MIN: 7.05 / MAX: 8.48MIN: 6.65 / MAX: 8.21MIN: 6.86 / MAX: 7.51MIN: 6.85 / MAX: 7.4MIN: 6.46 / MAX: 8.011. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 2021.01.31Threads: 8 - Buffer Length: 256 - Filter Length: 57Ryzen 7600 AMDAMD 7600Ryzen 76007900Ryzen 9 79007700AMD 7700Ryzen 7 7700160M320M480M640M800MSE +/- 6183407.06, N = 45732975005835000005857700007394300007532500007563400007593100007595500001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Face Detection FP16-INT8 - Device: CPURyzen 7600 AMDAMD 7600Ryzen 76007900Ryzen 9 7900Ryzen 7 77007700AMD 770080160240320400SE +/- 0.45, N = 3386.64386.47379.73331.39330.58294.43293.37292.09MIN: 366.98 / MAX: 393.57MIN: 369.88 / MAX: 390.09MIN: 333.61 / MAX: 390.72MIN: 290.18 / MAX: 342.26MIN: 304.74 / MAX: 342.3MIN: 255.5 / MAX: 313.05MIN: 221.79 / MAX: 312MIN: 151.29 / MAX: 340.731. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.4Encoder Mode: Preset 4 - Input: Bosphorus 1080pRyzen 7600Ryzen 7600 AMDAMD 7600Ryzen 7 77007700AMD 77007900Ryzen 9 79003691215SE +/- 0.02, N = 310.2310.3010.3012.1012.1112.1313.4613.491. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.4Encoder Mode: Preset 12 - Input: Bosphorus 1080pRyzen 7600Ryzen 7600 AMDAMD 76007700Ryzen 7 7700AMD 7700Ryzen 9 79007900150300450600750SE +/- 2.69, N = 3522.04524.24528.45609.82615.79616.10649.49687.931. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 3 - Compression SpeedAMD 7600Ryzen 7600 AMDRyzen 76007700AMD 7700Ryzen 7 77007900Ryzen 9 790013002600390052006500SE +/- 11.07, N = 34772.94776.64863.45055.35126.45140.06230.76289.11. (CC) gcc options: -O3 -pthread -lz -llzma

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: yolov4 - Device: CPU - Executor: ParallelAMD 7600Ryzen 7600 AMDRyzen 7600AMD 7700Ryzen 7 770077007900Ryzen 9 7900110220330440550SE +/- 0.17, N = 33783813824424454484954981. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Vehicle Detection FP16-INT8 - Device: CPURyzen 7600 AMDAMD 7600Ryzen 7600Ryzen 9 790079007700Ryzen 7 7700AMD 77001.32752.6553.98255.316.6375SE +/- 0.05, N = 35.905.805.785.155.154.554.544.48MIN: 3.49 / MAX: 14.2MIN: 3.43 / MAX: 7.38MIN: 3.69 / MAX: 20.19MIN: 3.05 / MAX: 67.86MIN: 3.14 / MAX: 13.79MIN: 2.74 / MAX: 17.33MIN: 2.81 / MAX: 17.18MIN: 2.84 / MAX: 12.691. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Weld Porosity Detection FP16 - Device: CPUAMD 7600Ryzen 7600Ryzen 7600 AMDRyzen 9 79007900Ryzen 7 77007700AMD 7700246810SE +/- 0.05, N = 37.597.577.576.536.535.825.815.77MIN: 3.97 / MAX: 15.26MIN: 3.93 / MAX: 20.27MIN: 3.89 / MAX: 15.51MIN: 3.38 / MAX: 14.89MIN: 3.39 / MAX: 14.56MIN: 3.02 / MAX: 17.51MIN: 3.21 / MAX: 17.52MIN: 3.07 / MAX: 13.011. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.1Video Input: Bosphorus 1080p - Video Preset: Very FastRyzen 7600AMD 7600Ryzen 7600 AMD7700Ryzen 7 7700AMD 7700Ryzen 9 79007900306090120150SE +/- 0.03, N = 391.1891.2291.30111.46111.56111.63118.71118.781. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-StreamRyzen 7600 AMDAMD 7600Ryzen 76007900Ryzen 9 79007700Ryzen 7 7700AMD 7700510152025SE +/- 0.03, N = 321.8621.8621.7818.3218.2516.9916.9716.86

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Face Detection FP16 - Device: CPURyzen 7600 AMDAMD 7600Ryzen 7600Ryzen 9 79007900Ryzen 7 77007700AMD 7700160320480640800SE +/- 1.54, N = 3746.94746.15746.14651.71651.18581.34578.82576.13MIN: 714.17 / MAX: 765.41MIN: 724.14 / MAX: 766.22MIN: 723.99 / MAX: 764.64MIN: 621.09 / MAX: 672.91MIN: 573.67 / MAX: 673.86MIN: 454.21 / MAX: 613.21MIN: 394.1 / MAX: 612.37MIN: 505.33 / MAX: 597.671. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

Neural Magic DeepSparse

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-StreamAMD 7600Ryzen 7600 AMDRyzen 76007900Ryzen 9 79007700Ryzen 7 7700AMD 77001326395265SE +/- 0.07, N = 345.7445.7445.9154.5854.7858.8458.9159.30

Darktable

Darktable is an open-source photography / workflow application. This test will use any system-installed Darktable program or, on Windows, will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterDarktable 3.8.1Test: Server Room - Acceleration: CPU-onlyRyzen 7600AMD 7600Ryzen 7600 AMDRyzen 7 77007700AMD 77007900Ryzen 9 79000.73891.47782.21672.95563.6945SE +/- 0.036, N = 33.2843.2503.2432.9022.8942.8312.5762.539

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: squeezenet_ssd7900Ryzen 9 79007700Ryzen 7 7700Ryzen 7600AMD 7600Ryzen 7600 AMDAMD 77003691215SE +/- 0.01, N = 311.7411.7310.3310.029.209.179.129.09MIN: 11.43 / MAX: 21.52MIN: 11.48 / MAX: 12.69MIN: 9.5 / MAX: 11.46MIN: 9.18 / MAX: 19.45MIN: 8.98 / MAX: 10.63MIN: 9.02 / MAX: 9.75MIN: 8.99 / MAX: 9.91MIN: 8.76 / MAX: 10.491. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Person Detection FP16 - Device: CPURyzen 7600 AMDRyzen 7600AMD 7600Ryzen 9 79007900Ryzen 7 7700AMD 7700770030060090012001500SE +/- 5.45, N = 31309.261303.001293.341159.381153.461032.671016.681015.27MIN: 749.1 / MAX: 1427.46MIN: 1221.08 / MAX: 1432.12MIN: 914.71 / MAX: 1411.42MIN: 1036.64 / MAX: 1295.05MIN: 731.93 / MAX: 1335.35MIN: 975.43 / MAX: 1129.72MIN: 897.5 / MAX: 1118.4MIN: 718.11 / MAX: 1124.231. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: resnet187900Ryzen 9 79007700Ryzen 7 7700AMD 7700Ryzen 7600Ryzen 7600 AMDAMD 7600246810SE +/- 0.09, N = 37.167.136.966.035.785.685.655.56MIN: 7 / MAX: 7.92MIN: 6.99 / MAX: 7.93MIN: 6.51 / MAX: 8.09MIN: 5.61 / MAX: 7.05MIN: 5.57 / MAX: 12.93MIN: 5.51 / MAX: 7.09MIN: 5.5 / MAX: 6.42MIN: 5.51 / MAX: 6.331. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterRodinia 3.1Test: OpenMP HotSpot3DRyzen 7600 AMDRyzen 76007700Ryzen 9 7900AMD 7600Ryzen 7 7700AMD 770079001530456075SE +/- 0.87, N = 366.5762.4761.6358.2057.1153.9353.7652.071. (CXX) g++ options: -O2 -lOpenCL

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.4Encoder Mode: Preset 13 - Input: Bosphorus 1080pRyzen 7600 AMDRyzen 7600AMD 76007700Ryzen 7 7700AMD 7700Ryzen 9 79007900160320480640800SE +/- 0.75, N = 3578.51579.19580.00668.09670.53677.96728.99739.511. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPURyzen 7600 AMDAMD 7600Ryzen 7600Ryzen 7 77007700Ryzen 9 79007900AMD 77000.16920.33840.50760.67680.846SE +/- 0.000677, N = 30.7519350.7519070.7500020.6002440.5998880.5952110.5906330.588639MIN: 0.73MIN: 0.73MIN: 0.72MIN: 0.55MIN: 0.56MIN: 0.54MIN: 0.54MIN: 0.561. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Person Detection FP32 - Device: CPURyzen 7600Ryzen 7600 AMDAMD 7600Ryzen 9 79007900AMD 77007700Ryzen 7 770030060090012001500SE +/- 6.34, N = 31315.781306.121298.341190.801179.601039.001034.061032.40MIN: 728.56 / MAX: 1437.75MIN: 773.95 / MAX: 1444.66MIN: 960.85 / MAX: 1409.2MIN: 1049.79 / MAX: 1286.08MIN: 1028.22 / MAX: 1530.8MIN: 913.61 / MAX: 1111.36MIN: 699.51 / MAX: 1128.89MIN: 820.72 / MAX: 1126.151. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

Timed CPython Compilation

This test times how long it takes to build the reference Python implementation, CPython, with optimizations and LTO enabled for a release build. Learn more via the OpenBenchmarking.org test page.
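
The snippet below sketches configuring and timing a CPython build from Python; the --enable-optimizations (PGO) and --with-lto switches correspond to the optimized release build described above, while the "Default" configuration shown in this result omits them. The source directory and job count are assumptions.

```python
import subprocess
import time

SRC = "cpython"   # assumed: a CPython source checkout

subprocess.run(["./configure", "--enable-optimizations", "--with-lto"], cwd=SRC, check=True)

start = time.time()
subprocess.run(["make", "-j8"], cwd=SRC, check=True)   # job count is an assumption
print(f"Time To Compile: {time.time() - start:.2f} seconds")
```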

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed CPython Compilation 3.10.6Build Configuration: DefaultRyzen 7600Ryzen 7600 AMDAMD 7600Ryzen 7 77007700AMD 7700Ryzen 9 790079004812162016.9816.8816.8015.1815.1814.9413.5813.46

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance with x265. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is Betterx265 3.4Video Input: Bosphorus 1080pRyzen 7600 AMDRyzen 7600AMD 76007700Ryzen 7 7700AMD 7700Ryzen 9 7900790020406080100SE +/- 0.55, N = 382.5782.6183.8287.5687.6088.86102.84103.731. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.11Encoder Speed: 10, LosslessAMD 7600Ryzen 7600Ryzen 7600 AMDAMD 7700Ryzen 7 77007700Ryzen 9 790079000.97921.95842.93763.91684.896SE +/- 0.019, N = 34.3524.3414.3253.8973.8803.8723.6213.5011. (CXX) g++ options: -O3 -fPIC -lm
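
The avifenc settings being exercised map to the tool's speed and lossless switches; a hedged sketch of the "Speed 10, Lossless" case, with placeholder filenames, is:

    import subprocess

    # Hedged sketch: lossless AVIF encode at the fastest encoder speed (10).
    subprocess.run([
        "avifenc", "-s", "10", "--lossless",
        "sample_photo.jpg",             # placeholder JPEG input
        "sample_photo.avif",
    ], check=True)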

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 8, Long Mode - Compression SpeedAMD 7600Ryzen 7600Ryzen 7600 AMDRyzen 7 77007700AMD 7700Ryzen 9 79007900400800120016002000SE +/- 2.90, N = 31319.11320.81326.31443.81445.11454.31636.71637.91. (CC) gcc options: -O3 -pthread -lz -llzma
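
Zstd's own benchmark mode reproduces this kind of measurement directly; a sketch of a level-8, long-mode, multi-threaded run on the same FreeBSD image (assuming the file is present locally) is:

    import subprocess

    # Hedged sketch: zstd built-in benchmark at level 8 with long-distance
    # matching, using all available threads (-T0).
    subprocess.run([
        "zstd", "-b8", "--long", "-T0",
        "FreeBSD-12.2-RELEASE-amd64-memstick.img",
    ], check=True)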

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: yolov4-tiny7900Ryzen 9 79007700Ryzen 7600AMD 7600Ryzen 7 7700Ryzen 7600 AMDAMD 770048121620SE +/- 0.12, N = 314.0913.7013.1212.3412.1912.0612.0311.40MIN: 13.9 / MAX: 14.24MIN: 13.55 / MAX: 14.36MIN: 12.22 / MAX: 14.69MIN: 11.85 / MAX: 14.12MIN: 11.76 / MAX: 12.93MIN: 11.25 / MAX: 14.07MIN: 11.74 / MAX: 13.04MIN: 11.18 / MAX: 13.151. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
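
NCNN ships a benchncnn tool that loops over its bundled models much like the graphs above. This is a hedged sketch, assuming benchncnn has been built from the ncnn tree and that its positional arguments are loop count, thread count, powersave mode, and GPU device (with -1 meaning CPU only).

    import os
    import subprocess

    # Hedged sketch: run ncnn's benchncnn on the CPU with one thread per core.
    # Assumed argument order: loops, threads, powersave, gpu_device (-1 = CPU).
    subprocess.run(["./benchncnn", "8", str(os.cpu_count()), "0", "-1"], check=True)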

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 9.12-MR1Java Test: TradebeansRyzen 9 79007900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 77007700AMD 7600400800120016002000SE +/- 19.03, N = 2016761655146114241407140313711359
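
Each DaCapo workload is launched as a single JVM invocation against the suite jar; a sketch of the Tradebeans case, with the jar filename assumed from the 9.12-MR1 release, is:

    import subprocess

    # Hedged sketch: run the DaCapo Tradebeans workload once on the system JVM.
    subprocess.run(["java", "-jar", "dacapo-9.12-MR1-bach.jar", "tradebeans"], check=True)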

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: resnet507900Ryzen 9 79007700Ryzen 7600AMD 7600Ryzen 7600 AMDRyzen 7 7700AMD 77003691215SE +/- 0.04, N = 312.2012.0911.5211.1111.1111.0610.349.91MIN: 12.02 / MAX: 13.2MIN: 11.98 / MAX: 13.18MIN: 10.79 / MAX: 13.36MIN: 10.95 / MAX: 12.56MIN: 11.03 / MAX: 11.66MIN: 10.94 / MAX: 11.76MIN: 9.67 / MAX: 12.24MIN: 9.79 / MAX: 11.381. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: vision_transformerRyzen 7600Ryzen 7600 AMDAMD 7600Ryzen 7 77007700AMD 77007900Ryzen 9 790020406080100SE +/- 0.04, N = 3110.67110.17110.10100.43100.1399.5493.5090.21MIN: 109.86 / MAX: 120.66MIN: 109.46 / MAX: 111.5MIN: 109.45 / MAX: 116.06MIN: 98.86 / MAX: 105.15MIN: 98.97 / MAX: 104.21MIN: 98.35 / MAX: 108.29MIN: 93.19 / MAX: 97.76MIN: 89.92 / MAX: 96.81. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program; on Windows it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterGIMP 2.10.30Test: unsharp-maskRyzen 9 79007900AMD 7600Ryzen 7600 AMDRyzen 76007700Ryzen 7 7700AMD 77003691215SE +/- 0.02, N = 312.6312.3810.8010.8010.7410.6710.5410.31

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMiB/s, More Is BetterGNU RadioTest: Signal Source (Cosine)7900Ryzen 9 7900Ryzen 7600Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 7700770013002600390052006500SE +/- 6.70, N = 85014.05162.05846.25865.65870.76084.96088.96105.91. 3.10.1.1
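
The "Signal Source (Cosine)" figure is a pure-throughput measurement of a simple flowgraph. A hedged Python sketch of a similar (but not identical) flowgraph, streaming cosine samples into a null sink and reporting MiB/s, is:

    import time

    from gnuradio import analog, blocks, gr

    # Hedged sketch: push N float samples from a cosine signal source into a
    # null sink and report the achieved throughput. Not the exact PTS flowgraph.
    N = 100_000_000
    tb = gr.top_block()
    src = analog.sig_source_f(32_000, analog.GR_COS_WAVE, 1_000, 1.0)
    head = blocks.head(gr.sizeof_float, N)
    sink = blocks.null_sink(gr.sizeof_float)
    tb.connect(src, head, sink)

    start = time.time()
    tb.run()
    elapsed = time.time() - start
    print(f"{N * 4 / elapsed / 2**20:.1f} MiB/s")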

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Person Vehicle Bike Detection FP16 - Device: CPURyzen 7600AMD 7600Ryzen 7600 AMDRyzen 9 790079007700Ryzen 7 7700AMD 77001.29382.58763.88145.17526.469SE +/- 0.03, N = 35.755.715.675.445.444.864.794.76MIN: 4.12 / MAX: 15.9MIN: 4.14 / MAX: 17.41MIN: 4.46 / MAX: 8.96MIN: 3.81 / MAX: 14.78MIN: 3.77 / MAX: 13.94MIN: 3.43 / MAX: 7.38MIN: 3.62 / MAX: 17.47MIN: 3.73 / MAX: 12.411. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: MG.CAMD 7600Ryzen 7600Ryzen 7600 AMDRyzen 7 77007700AMD 77007900Ryzen 9 79005K10K15K20K25KSE +/- 5.90, N = 321352.0121396.5921442.6723843.5123860.3124079.9925610.9625644.031. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Age Gender Recognition Retail 0013 FP16 - Device: CPURyzen 9 79007900AMD 77007700Ryzen 7 7700Ryzen 7600AMD 7600Ryzen 7600 AMD0.09680.19360.29040.38720.484SE +/- 0.00, N = 30.430.430.380.380.370.360.360.36MIN: 0.26 / MAX: 8.94MIN: 0.26 / MAX: 9.51MIN: 0.24 / MAX: 2.35MIN: 0.23 / MAX: 13MIN: 0.23 / MAX: 12.31MIN: 0.22 / MAX: 12.84MIN: 0.22 / MAX: 7.98MIN: 0.22 / MAX: 8.481. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 9.12-MR1Java Test: TradesoapRyzen 7600 AMDRyzen 7600AMD 7600AMD 770077007900Ryzen 9 79005001000150020002500SE +/- 19.27, N = 42242220121542128201519361880

Java Test: Tradesoap

Ryzen 7 7700: The test quit with a non-zero exit status.

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-StreamRyzen 9 79007900Ryzen 7 77007700AMD 7700AMD 7600Ryzen 7600Ryzen 7600 AMD1632486480SE +/- 0.12, N = 374.1574.1069.6869.6168.9162.7762.5762.38

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: GPT-2 - Device: CPU - Executor: Standard7700Ryzen 7 7700Ryzen 7600Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 9 790079002K4K6K8K10KSE +/- 114.75, N = 12843184448623893292049254974899851. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
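
The GPT-2 result above is produced through ONNX Runtime's CPU execution provider. A minimal Python sketch of the same kind of inference follows; the model path, input name, and token shape are placeholders rather than the exact ONNX Zoo configuration used by the test.

    import numpy as np
    import onnxruntime as ort

    # Hedged sketch: single CPU inference of a GPT-2 style ONNX model.
    sess = ort.InferenceSession("gpt2.onnx", providers=["CPUExecutionProvider"])
    input_name = sess.get_inputs()[0].name
    tokens = np.random.randint(0, 50257, size=(1, 64), dtype=np.int64)
    outputs = sess.run(None, {input_name: tokens})
    print(outputs[0].shape)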

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: SP.CRyzen 7 77007700Ryzen 7600Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 9 790079003K6K9K12K15KSE +/- 8.95, N = 310954.1110989.4111031.3411043.4811044.4611060.8012761.8812794.751. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-StreamRyzen 9 790079007700Ryzen 7 7700AMD 7700Ryzen 7600AMD 7600Ryzen 7600 AMD1632486480SE +/- 0.31, N = 370.3370.3264.1763.6763.5060.4160.3360.29

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterVP9 libvpx Encoding 1.10.0Speed: Speed 5 - Input: Bosphorus 1080pRyzen 9 79007900Ryzen 7600AMD 7600Ryzen 7600 AMDRyzen 7 7700AMD 770077001224364860SE +/- 0.09, N = 346.8346.8852.0352.0452.6853.3854.4354.491. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU7900Ryzen 9 7900Ryzen 7 77007700Ryzen 7600AMD 7700AMD 7600Ryzen 7600 AMD0.16430.32860.49290.65720.8215SE +/- 0.00, N = 30.730.720.640.640.630.630.630.63MIN: 0.39 / MAX: 9.37MIN: 0.4 / MAX: 8.98MIN: 0.37 / MAX: 13.2MIN: 0.36 / MAX: 12.27MIN: 0.34 / MAX: 12.67MIN: 0.38 / MAX: 8.56MIN: 0.39 / MAX: 1.84MIN: 0.34 / MAX: 8.021. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Weld Porosity Detection FP16-INT8 - Device: CPU7900Ryzen 9 7900Ryzen 7 7700AMD 77007700AMD 7600Ryzen 7600 AMDRyzen 7600246810SE +/- 0.01, N = 36.556.545.845.845.845.685.685.67MIN: 3.66 / MAX: 14.89MIN: 3.68 / MAX: 16.14MIN: 3.11 / MAX: 18.54MIN: 3.13 / MAX: 13.02MIN: 3.2 / MAX: 17.68MIN: 3.04 / MAX: 13.81MIN: 3.14 / MAX: 13.67MIN: 3.77 / MAX: 11.121. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-StreamRyzen 9 79007900Ryzen 7 7700AMD 77007700Ryzen 7600 AMDAMD 7600Ryzen 760020406080100SE +/- 0.16, N = 396.8996.5587.6786.9286.9084.2684.2284.01

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: IS.DAMD 7600Ryzen 7600Ryzen 7600 AMDAMD 7700Ryzen 7 77007700Ryzen 9 7900790030060090012001500SE +/- 6.92, N = 31299.851304.941322.961401.101428.571452.371464.481498.341. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterTNN 0.3Target: CPU - Model: SqueezeNet v2Ryzen 7600 AMDAMD 7600Ryzen 7600Ryzen 9 79007700AMD 7700Ryzen 7 770079001122334455SE +/- 0.35, N = 1547.6543.9243.9242.3742.3342.3342.0641.52MIN: 43.99 / MAX: 49.87MIN: 43.9 / MAX: 44MIN: 43.87 / MAX: 44.14MIN: 42.26 / MAX: 42.44MIN: 42.27 / MAX: 42.4MIN: 42.3 / MAX: 42.43MIN: 41.98 / MAX: 42.14MIN: 41.17 / MAX: 42.011. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 9.12-MR1Java Test: H2Ryzen 7 7700Ryzen 7600AMD 7700Ryzen 7600 AMD7700Ryzen 9 79007900AMD 7600400800120016002000SE +/- 32.56, N = 2019421881187718711828179117481700

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream7900Ryzen 9 7900Ryzen 7 77007700AMD 7700AMD 7600Ryzen 7600 AMDRyzen 76001122334455SE +/- 0.03, N = 347.4747.1243.4643.3843.2642.2042.0041.76

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterWebP Image Encode 1.2.4Encode Settings: DefaultRyzen 9 7900Ryzen 7600 AMDAMD 7600Ryzen 76007700AMD 7700Ryzen 7 77007900612182430SE +/- 0.03, N = 324.0025.8325.9525.9726.9426.9426.9427.241. (CC) gcc options: -fvisibility=hidden -O2 -lm
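
The various encode settings correspond to cwebp's quality, method, and lossless switches; a hedged sketch of the "Quality 100, Lossless, Highest Compression" case, with placeholder filenames, is:

    import subprocess

    # Hedged sketch: lossless WebP encode at quality 100 with the slowest,
    # strongest compression method (-m 6).
    subprocess.run([
        "cwebp", "-lossless", "-q", "100", "-m", "6",
        "sample_photo_6000x4000.jpg",
        "-o", "sample_photo.webp",
    ], check=True)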

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-StreamRyzen 9 79007900Ryzen 7 77007700AMD 7700Ryzen 7600 AMDAMD 7600Ryzen 7600714212835SE +/- 0.02, N = 329.8429.6727.3827.3127.2426.4326.3526.34

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL libjxl 0.7Input: JPEG - Quality: 100AMD 7600Ryzen 7600 AMDRyzen 9 7900Ryzen 7 7700AMD 770077007900Ryzen 76000.21150.4230.63450.8461.0575SE +/- 0.02, N = 60.830.850.860.870.880.910.920.941. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic
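
libjxl's reference cjxl encoder exposes the quality settings swept above; a hedged sketch of a quality-90 encode, with placeholder filenames, is:

    import subprocess

    # Hedged sketch: JPEG XL encode at quality 90 with the reference cjxl tool.
    subprocess.run(["cjxl", "sample.png", "sample_q90.jxl", "-q", "90"], check=True)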

Algebraic Multi-Grid Benchmark

AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. The driver provided with AMG builds linear systems for various 3-dimensional problems. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFigure Of Merit, More Is BetterAlgebraic Multi-Grid Benchmark 1.27700Ryzen 7 7700AMD 7700AMD 7600Ryzen 7600Ryzen 7600 AMD7900Ryzen 9 790090M180M270M360M450MSE +/- 42003.58, N = 33857551003859116003915619003919451003925822003926537334336033004356400001. (CC) gcc options: -lparcsr_ls -lparcsr_mv -lseq_mv -lIJ_mv -lkrylov -lHYPRE_utilities -lm -fopenmp -lmpi

Timed Apache Compilation

This test times how long it takes to build the Apache HTTPD web server. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Apache Compilation 2.4.41Time To CompileAMD 7600Ryzen 7600 AMDRyzen 76007700Ryzen 7 7700AMD 77007900Ryzen 9 790048121620SE +/- 0.01, N = 315.8415.7715.6814.7114.6914.6214.1014.04

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream7900Ryzen 9 7900Ryzen 7 77007700AMD 7700Ryzen 7600 AMDRyzen 7600AMD 7600150300450600750SE +/- 0.30, N = 3705.41704.98651.97651.14649.26630.99630.90630.55

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNodes Per Second, More Is BetterLeelaChessZero 0.28Backend: EigenAMD 7600Ryzen 7600 AMDRyzen 7600AMD 77007700Ryzen 7 77007900Ryzen 9 7900400800120016002000SE +/- 14.96, N = 9147514951499154015421548160716501. (CXX) g++ options: -flto -pthread

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream7900Ryzen 9 7900Ryzen 7 77007700AMD 7700Ryzen 7600 AMDRyzen 7600AMD 7600150300450600750SE +/- 0.22, N = 3703.74703.55650.70650.21647.74630.93630.50629.80

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL libjxl 0.7Input: JPEG - Quality: 90AMD 7600Ryzen 7600 AMDRyzen 7600Ryzen 7 77007700AMD 7700Ryzen 9 790079003691215SE +/- 0.01, N = 310.7610.9310.9411.4311.5711.6111.7812.001. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL libjxl 0.7Input: PNG - Quality: 90AMD 7600Ryzen 7600 AMDRyzen 7600Ryzen 7 77007700AMD 7700Ryzen 9 790079003691215SE +/- 0.00, N = 311.0711.2111.2211.7611.9011.9212.0912.321. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL libjxl 0.7Input: JPEG - Quality: 80AMD 7600Ryzen 7600Ryzen 7600 AMDRyzen 7 77007700AMD 7700Ryzen 9 790079003691215SE +/- 0.02, N = 310.9711.1211.1311.6111.8011.8211.9812.201. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: vgg167700Ryzen 7600AMD 7600Ryzen 7600 AMDRyzen 7 7700AMD 7700Ryzen 9 79007900612182430SE +/- 0.03, N = 326.9125.9125.8825.8525.4324.3524.2424.20MIN: 25.94 / MAX: 29.02MIN: 25.66 / MAX: 27.54MIN: 25.66 / MAX: 26.69MIN: 25.64 / MAX: 31.08MIN: 24.43 / MAX: 27.56MIN: 24.09 / MAX: 34.85MIN: 23.97 / MAX: 25.1MIN: 23.94 / MAX: 25.171. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program; on Windows it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterGIMP 2.10.30Test: auto-levelsRyzen 9 79007900Ryzen 7600 AMDAMD 7600Ryzen 76007700Ryzen 7 7700AMD 77003691215SE +/- 0.012, N = 310.27910.1509.8099.8019.7799.5229.4889.247

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL libjxl 0.7Input: PNG - Quality: 80AMD 7600Ryzen 7600 AMDRyzen 7600Ryzen 7 77007700AMD 7700Ryzen 9 790079003691215SE +/- 0.01, N = 311.2811.4111.4411.9612.0912.1112.2712.531. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgGB/s, More Is Bettersimdjson 2.0Throughput Test: TopTweetRyzen 7600AMD 7600Ryzen 7600 AMD7700Ryzen 7 7700AMD 77007900Ryzen 9 79003691215SE +/- 0.02, N = 39.119.509.519.869.929.9510.0610.101. (CXX) g++ options: -O3

Ngspice

Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterNgspice 34Circuit: C7552Ryzen 7600 AMDRyzen 7600AMD 7600Ryzen 7 77007700AMD 77007900Ryzen 9 79001632486480SE +/- 0.33, N = 373.0872.8771.9771.9370.9167.7866.5965.951. (CC) gcc options: -O0 -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lXft -lfontconfig -lXrender -lfreetype -lSM -lICE
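
Each Ngspice circuit is simulated in batch mode; a hedged sketch against the C7552 ISCAS 85 netlist, with placeholder filenames, is:

    import subprocess

    # Hedged sketch: batch-mode ngspice run writing its output to a log file.
    subprocess.run(["ngspice", "-b", "-o", "c7552.log", "c7552.cir"], check=True)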

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: GPT-2 - Device: CPU - Executor: Parallel7900Ryzen 9 79007700AMD 7600Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 77002K4K6K8K10KSE +/- 23.08, N = 3702571987489767976837745776577831. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

Timed CPython Compilation

This test times how long it takes to build the reference Python implementation, CPython, with optimizations and LTO enabled for a release build. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed CPython Compilation 3.10.6Build Configuration: Released Build, PGO + LTO OptimizedRyzen 7600AMD 7600Ryzen 7600 AMDRyzen 7 77007700AMD 7700Ryzen 9 790079004080120160200202.76201.95201.93192.75192.36190.87183.17183.09

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 9.12-MR1Java Test: JythonAMD 7700AMD 7600Ryzen 7600Ryzen 7600 AMD7700Ryzen 7 7700Ryzen 9 790079005001000150020002500SE +/- 11.02, N = 425402540253825382459236023482294

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 3, Long Mode - Compression SpeedRyzen 9 79007900Ryzen 7600AMD 7600Ryzen 7600 AMD7700Ryzen 7 7700AMD 7700400800120016002000SE +/- 3.73, N = 31824.81825.41936.91937.11942.82007.32009.02012.91. (CC) gcc options: -O3 -pthread -lz -llzma

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: alexnet7700Ryzen 7 7700Ryzen 9 79007900Ryzen 7600AMD 7700AMD 7600Ryzen 7600 AMD1.05082.10163.15244.20325.254SE +/- 0.00, N = 34.674.664.624.624.434.254.244.24MIN: 4.37 / MAX: 5.86MIN: 4.36 / MAX: 5.78MIN: 4.55 / MAX: 5.51MIN: 4.55 / MAX: 5.18MIN: 4.39 / MAX: 5MIN: 4.19 / MAX: 5.65MIN: 4.22 / MAX: 4.83MIN: 4.21 / MAX: 4.841. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: crypto_pyaesRyzen 7600Ryzen 7600 AMDAMD 76007700Ryzen 7 77007900Ryzen 9 7900AMD 77001530456075SE +/- 0.18, N = 366.163.663.062.360.660.660.460.2
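
Individual PyPerformance benchmarks such as crypto_pyaes can be reproduced directly from the pyperformance CLI; a sketch, assuming a pip-installed pyperformance, is:

    import subprocess

    # Hedged sketch: run only the crypto_pyaes benchmark from the suite.
    subprocess.run(["pyperformance", "run", "--benchmarks", "crypto_pyaes"], check=True)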

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMiB/s, More Is BetterGNU RadioTest: FIR Filter7900Ryzen 9 7900Ryzen 7600Ryzen 7600 AMDAMD 76007700AMD 7700Ryzen 7 770030060090012001500SE +/- 3.50, N = 81374.71402.11436.21436.81448.51485.91498.31508.71. 3.10.1.1

OpenBenchmarking.orgMiB/s, More Is BetterGNU RadioTest: Hilbert TransformRyzen 7600Ryzen 9 7900AMD 7600Ryzen 7600 AMD79007700Ryzen 7 7700AMD 77002004006008001000SE +/- 5.10, N = 8709.6716.6725.5738.4741.4759.8760.3775.51. 3.10.1.1

TNN

TNN is an open-source deep learning reasoning framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterTNN 0.3Target: CPU - Model: DenseNetRyzen 7600Ryzen 7600 AMDAMD 7600Ryzen 9 79007900Ryzen 7 77007700AMD 77005001000150020002500SE +/- 0.40, N = 32237.082235.312228.872112.152105.732069.752067.232066.07MIN: 2217.88 / MAX: 2258.4MIN: 2217.92 / MAX: 2257.35MIN: 2216.04 / MAX: 2250MIN: 2081.33 / MAX: 2147.82MIN: 2070.42 / MAX: 2146.74MIN: 2024.37 / MAX: 2112.67MIN: 2021.19 / MAX: 2109.88MIN: 2022.59 / MAX: 2107.461. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 8, Long Mode - Decompression SpeedAMD 7600Ryzen 7600Ryzen 7600 AMD7700Ryzen 7 7700AMD 77007900Ryzen 9 790014002800420056007000SE +/- 3.15, N = 36188.26200.66208.36311.66336.66347.16404.16696.21. (CC) gcc options: -O3 -pthread -lz -llzma

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 2021.01.31Threads: 2 - Buffer Length: 256 - Filter Length: 57Ryzen 9 7900Ryzen 7600 AMDAMD 7600Ryzen 7600AMD 7700Ryzen 7 77007900770040M80M120M160M200MSE +/- 1746140.93, N = 71923100001968728571975100001999600002057600002070400002075200002080100001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 3, Long Mode - Decompression SpeedAMD 7600Ryzen 7 7700Ryzen 7600 AMD7700AMD 77007900Ryzen 7600Ryzen 9 790014002800420056007000SE +/- 67.24, N = 35943.06083.36086.86098.46118.56167.26167.66423.91. (CC) gcc options: -O3 -pthread -lz -llzma

Ngspice

Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterNgspice 34Circuit: C2670Ryzen 7 7700Ryzen 7600 AMDRyzen 7600AMD 76007700AMD 7700Ryzen 9 7900790020406080100SE +/- 0.07, N = 380.1979.9879.7979.5579.4077.6175.5374.251. (CC) gcc options: -O0 -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lXft -lfontconfig -lXrender -lfreetype -lSM -lICE

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: nbodyRyzen 7600 AMDAMD 7600Ryzen 76007700Ryzen 7 7700Ryzen 9 7900AMD 7700790020406080100SE +/- 1.20, N = 383.581.979.979.679.478.477.577.4

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgScore, More Is BetterPHPBench 0.8.1PHP Benchmark SuiteRyzen 7600 AMDAMD 7600Ryzen 7600AMD 7700Ryzen 9 7900Ryzen 7 770079007700300K600K900K1200K1500KSE +/- 5632.48, N = 311432251149535115424011841351186995121801912278891233198

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterWebP Image Encode 1.2.4Encode Settings: Quality 100, LosslessRyzen 7600AMD 7600Ryzen 7600 AMDRyzen 7 7700AMD 770079007700Ryzen 9 79000.49730.99461.49191.98922.4865SE +/- 0.00, N = 32.052.082.092.172.182.192.212.211. (CC) gcc options: -fvisibility=hidden -O2 -lm

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 19 - Decompression SpeedRyzen 7600AMD 7600Ryzen 7600 AMDRyzen 7 7700AMD 77007700Ryzen 9 7900790012002400360048006000SE +/- 7.45, N = 35058.05072.45074.25185.25220.65364.15436.15452.41. (CC) gcc options: -O3 -pthread -lz -llzma

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 8 - Decompression SpeedAMD 7600Ryzen 7600 AMD7700Ryzen 7600Ryzen 9 7900Ryzen 7 7700AMD 7700790013002600390052006500SE +/- 7.12, N = 35826.35834.25956.56021.36049.26188.36198.96277.11. (CC) gcc options: -O3 -pthread -lz -llzma

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNodes Per Second, More Is BetterLeelaChessZero 0.28Backend: BLASRyzen 7600 AMDAMD 7700Ryzen 7600Ryzen 7 7700AMD 760077007900Ryzen 9 790030060090012001500SE +/- 21.50, N = 3151415191520153515541588160116231. (CXX) g++ options: -flto -pthread

JPEG XL Decoding libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is suited for JPEG XL decode performance testing to a PNG output file; the pts/jpexl test covers encode performance. The JPEG XL encoding/decoding is done using the libjxl codebase. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL Decoding libjxl 0.7CPU Threads: 1AMD 7600Ryzen 7600Ryzen 7600 AMDRyzen 7 77007700Ryzen 9 7900AMD 770079001632486480SE +/- 0.09, N = 367.9369.3669.9470.4471.3671.3671.9072.61

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 19, Long Mode - Decompression SpeedRyzen 7600AMD 7700Ryzen 7600 AMDRyzen 9 7900Ryzen 7 77007900AMD 7600770012002400360048006000SE +/- 54.15, N = 35039.85130.45133.05176.65186.45206.55239.25379.01. (CC) gcc options: -O3 -pthread -lz -llzma

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgGB/s, More Is Bettersimdjson 2.0Throughput Test: DistinctUserIDRyzen 7600 AMDAMD 7600Ryzen 76007900Ryzen 7 7700AMD 77007700Ryzen 9 79003691215SE +/- 0.11, N = 39.349.409.469.729.779.839.859.961. (CXX) g++ options: -O3

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 2021.01.31Threads: 1 - Buffer Length: 256 - Filter Length: 57AMD 7600Ryzen 7600Ryzen 7600 AMDRyzen 7 7700AMD 770077007900Ryzen 9 790020M40M60M80M100MSE +/- 98206.13, N = 3999000001000100001003533331042600001043000001043500001054400001064600001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

QuantLib

QuantLib is an open-source library/framework around quantitative finance for modeling, trading and risk management scenarios. QuantLib is written in C++ with Boost, and its built-in benchmark reports the QuantLib Benchmark Index score. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMFLOPS, More Is BetterQuantLib 1.21AMD 7600Ryzen 7600Ryzen 9 7900AMD 7700Ryzen 7 770077007900Ryzen 7600 AMD9001800270036004500SE +/- 40.10, N = 63927.23966.63971.24090.74098.84131.84141.94180.61. (CXX) g++ options: -O3 -march=native -rdynamic

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: goRyzen 7600AMD 7600Ryzen 7600 AMD7700AMD 7700Ryzen 7 7700Ryzen 9 79007900306090120150SE +/- 0.00, N = 3133132132128127126125125

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMiB/s, More Is BetterGNU RadioTest: IIR FilterRyzen 7600 AMDAMD 7600Ryzen 760079007700Ryzen 9 7900AMD 7700Ryzen 7 7700120240360480600SE +/- 3.03, N = 8541.5541.8544.3560.7561.1561.2570.8575.81. 3.10.1.1

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: floatRyzen 7600 AMDRyzen 7600AMD 76007700Ryzen 7 7700AMD 77007900Ryzen 9 79001326395265SE +/- 0.12, N = 359.458.858.757.256.456.156.155.9

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterWebP Image Encode 1.2.4Encode Settings: Quality 100, Lossless, Highest CompressionAMD 7600Ryzen 7600 AMDRyzen 76007700Ryzen 7 77007900AMD 7700Ryzen 9 79000.19350.3870.58050.7740.9675SE +/- 0.00, N = 30.810.820.820.840.840.850.850.861. (CC) gcc options: -fvisibility=hidden -O2 -lm

Crafty

This is a performance test of Crafty, an advanced open-source chess engine. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNodes Per Second, More Is BetterCrafty 25.2Elapsed TimeRyzen 7600 AMDAMD 7600Ryzen 7600Ryzen 7 7700AMD 77007700Ryzen 9 790079003M6M9M12M15MSE +/- 122230.58, N = 813949188140532681417037614513008145725551457427414646827148048871. (CC) gcc options: -pthread -lstdc++ -fprofile-use -lm

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterWebP Image Encode 1.2.4Encode Settings: Quality 100, Highest CompressionRyzen 7600 AMDAMD 7600Ryzen 7600AMD 77007700Ryzen 7 77007900Ryzen 9 79001.19252.3853.57754.775.9625SE +/- 0.00, N = 35.005.015.015.195.205.205.305.301. (CC) gcc options: -fvisibility=hidden -O2 -lm

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program; on Windows it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterGIMP 2.10.30Test: rotateRyzen 7600 AMDAMD 7600Ryzen 7600Ryzen 9 79007700Ryzen 7 77007900AMD 77003691215SE +/- 0.015, N = 39.5099.5059.4469.4069.2979.1829.1828.971

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: 2to3AMD 7700AMD 7600Ryzen 7600Ryzen 7600 AMD7700Ryzen 9 7900Ryzen 7 770079004080120160200SE +/- 0.00, N = 3177174173173168167167167

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: pathlibAMD 7600Ryzen 7600Ryzen 7600 AMDRyzen 7 7700AMD 77007700Ryzen 9 790079003691215SE +/- 0.00, N = 310.710.610.610.410.310.310.110.1

TNN

TNN is an open-source deep learning reasoning framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterTNN 0.3Target: CPU - Model: SqueezeNet v1.1Ryzen 7600 AMDAMD 7600Ryzen 76007700Ryzen 7 7700AMD 7700Ryzen 9 790079004080120160200SE +/- 0.02, N = 3191.09191.09191.08184.21184.08183.89182.77180.38MIN: 190.97 / MAX: 191.36MIN: 191.02 / MAX: 191.19MIN: 191 / MAX: 191.26MIN: 184.13 / MAX: 184.34MIN: 183.99 / MAX: 184.18MIN: 183.77 / MAX: 184.05MIN: 182.69 / MAX: 182.89MIN: 180.33 / MAX: 180.521. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMiB/s, More Is BetterGNU RadioTest: FM Deemphasis Filter7900Ryzen 9 7900Ryzen 7600Ryzen 7600 AMDAMD 7600AMD 7700Ryzen 7 770077002004006008001000SE +/- 2.20, N = 8905.0907.5911.7913.7919.1941.1943.1958.41. 3.10.1.1

PyBench

This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate as to Python's average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyBench 2018-02-16Total For Average Test TimesRyzen 7600 AMDRyzen 7600AMD 7600Ryzen 7 7700Ryzen 9 79007700AMD 77007900140280420560700SE +/- 2.40, N = 3626615613606602601598592

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 2021.01.31Threads: 4 - Buffer Length: 256 - Filter Length: 57Ryzen 7600AMD 7600Ryzen 7600 AMDRyzen 7 77007700Ryzen 9 7900AMD 7700790090M180M270M360M450MSE +/- 265476.51, N = 33889800003981000004006033334012400004015400004026800004072100004110000001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterWebP Image Encode 1.2.4Encode Settings: Quality 100AMD 7600Ryzen 7600Ryzen 7600 AMD7700Ryzen 7 7700AMD 77007900Ryzen 9 790048121620SE +/- 0.01, N = 316.1216.1216.1316.7416.7816.7916.8117.031. (CC) gcc options: -fvisibility=hidden -O2 -lm

Git

This test measures the time needed to carry out some sample Git operations on an example, static repository that happens to be a copy of the GNOME GTK tool-kit repository. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterGitTime To Complete Common Git CommandsAMD 7600Ryzen 7600Ryzen 7600 AMDAMD 7700Ryzen 7 770077007900Ryzen 9 7900816243240SE +/- 0.04, N = 333.1933.1333.0732.3732.0832.0531.7231.431. git version 2.34.1
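
The Git test times a mix of everyday commands against a static clone. A hedged sketch that times a few representative operations follows; the repository path and command mix are assumptions, not the profile's exact sequence.

    import subprocess
    import time

    # Hedged sketch: time a few common Git operations on a local clone.
    repo = "gtk"                              # placeholder path to a static clone
    start = time.time()
    subprocess.run(["git", "-C", repo, "status"], check=True)
    subprocess.run(["git", "-C", repo, "log", "--oneline", "-n", "1000"],
                   check=True, stdout=subprocess.DEVNULL)
    subprocess.run(["git", "-C", repo, "gc"], check=True)
    print(f"{time.time() - start:.2f} s")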

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: regex_compileRyzen 7600Ryzen 7600 AMDAMD 7600AMD 77007700Ryzen 7 77007900Ryzen 9 790020406080100SE +/- 0.00, N = 383.082.782.379.579.579.479.278.6

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: django_templateAMD 7600Ryzen 7600 AMDRyzen 760077007900Ryzen 7 7700AMD 7700Ryzen 9 7900612182430SE +/- 0.00, N = 325.825.825.724.824.724.624.624.5

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: raytraceRyzen 7600 AMDAMD 7600Ryzen 76007700Ryzen 9 7900AMD 7700Ryzen 7 7700790060120180240300SE +/- 0.33, N = 3259257256248247247246246

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL libjxl 0.7Input: PNG - Quality: 100Ryzen 7600 AMDAMD 7600Ryzen 7600Ryzen 7 7700Ryzen 9 790077007900AMD 77000.2250.450.6750.91.125SE +/- 0.01, N = 30.950.960.960.970.970.981.001.001. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgGB/s, More Is Bettersimdjson 2.0Throughput Test: LargeRandomRyzen 7600 AMDAMD 7600Ryzen 760077007900AMD 7700Ryzen 7 7700Ryzen 9 79000.40730.81461.22191.62922.0365SE +/- 0.00, N = 31.721.721.731.791.791.791.791.811. (CXX) g++ options: -O3

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: json_loadsRyzen 7600AMD 7600Ryzen 7600 AMDAMD 77007700Ryzen 9 7900Ryzen 7 770079003691215SE +/- 0.03, N = 312.412.312.312.111.911.811.811.8

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: pickle_pure_pythonAMD 7600Ryzen 7600 AMDRyzen 7600Ryzen 7 77007700Ryzen 9 79007900AMD 770050100150200250SE +/- 0.33, N = 3232232231225225223223221

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgGB/s, More Is Bettersimdjson 2.0Throughput Test: PartialTweetsRyzen 7600 AMDAMD 7600Ryzen 76007700AMD 7700Ryzen 7 77007900Ryzen 9 7900246810SE +/- 0.00, N = 37.897.907.918.168.178.218.248.271. (CXX) g++ options: -O3

TNN

TNN is an open-source deep learning reasoning framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterTNN 0.3Target: CPU - Model: MobileNet v2Ryzen 7600Ryzen 7600 AMDAMD 7600Ryzen 7 77007700Ryzen 9 79007900AMD 77004080120160200SE +/- 0.23, N = 3199.19199.06198.87193.77190.92190.75190.74190.37MIN: 198.58 / MAX: 200.1MIN: 198.12 / MAX: 200.3MIN: 198.22 / MAX: 199.53MIN: 192.93 / MAX: 195.31MIN: 190.03 / MAX: 193.11MIN: 190.01 / MAX: 191.51MIN: 189.99 / MAX: 192.23MIN: 189.7 / MAX: 193.981. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

FLAC Audio Encoding

This test times how long it takes to encode a sample WAV file to FLAC audio format ten times using the --best preset settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterFLAC Audio Encoding 1.4WAV To FLACRyzen 7600Ryzen 7600 AMDAMD 76007700AMD 7700Ryzen 7 77007900Ryzen 9 79003691215SE +/- 0.00, N = 512.1012.1012.1011.6911.6711.6711.6511.571. (CXX) g++ options: -O3 -fvisibility=hidden -logg -lm
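
The FLAC measurement is simply ten consecutive --best encodes of one WAV file; a sketch with a placeholder filename is:

    import subprocess

    # Hedged sketch: encode the sample WAV ten times with FLAC's --best preset,
    # forcing overwrite of the previous output each time (-f).
    for _ in range(10):
        subprocess.run(["flac", "--best", "-f", "sample.wav"], check=True)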

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: chaosAMD 7600Ryzen 7600 AMDRyzen 7600Ryzen 9 7900Ryzen 7 770077007900AMD 77001224364860SE +/- 0.09, N = 355.455.255.153.353.253.253.153.0

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 3 - Decompression SpeedAMD 7600Ryzen 7600 AMDRyzen 7 77007700AMD 7700Ryzen 9 7900Ryzen 7600790012002400360048006000SE +/- 86.19, N = 35547.85596.15702.75715.15716.25722.75745.45794.71. (CC) gcc options: -O3 -pthread -lz -llzma

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgGB/s, More Is Bettersimdjson 2.0Throughput Test: KostyaAMD 7600Ryzen 7600 AMDRyzen 7600Ryzen 9 7900Ryzen 7 770077007900AMD 77001.30052.6013.90155.2026.5025SE +/- 0.01, N = 35.555.565.565.765.775.785.785.781. (CXX) g++ options: -O3

LAME MP3 Encoding

LAME is an MP3 encoder licensed under the LGPL. This test measures the time required to encode a WAV file to MP3 format. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterLAME MP3 Encoding 3.100WAV To MP3Ryzen 7600AMD 7600Ryzen 7600 AMDRyzen 9 79007700Ryzen 7 77007900AMD 77001.12192.24383.36574.48765.6095SE +/- 0.003, N = 34.9864.9784.9784.8094.8064.8034.7954.7921. (CC) gcc options: -O3 -ffast-math -funroll-loops -fschedule-insns2 -fbranch-count-reg -fforce-addr -pipe -lncurses -lm
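
The LAME measurement is a single WAV-to-MP3 encode; a sketch with placeholder filenames is:

    import subprocess

    # Hedged sketch: encode a WAV file to MP3 with LAME's default settings.
    subprocess.run(["lame", "sample.wav", "sample.mp3"], check=True)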

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

Java Test: Eclipse

7700: The test quit with a non-zero exit status.

7900: The test quit with a non-zero exit status.

Ryzen 7600 AMD: The test quit with a non-zero exit status.

AMD 7600: The test quit with a non-zero exit status.

AMD 7700: The test quit with a non-zero exit status.

Ryzen 7 7700: The test quit with a non-zero exit status.

Ryzen 7600: The test quit with a non-zero exit status.

Ryzen 9 7900: The test quit with a non-zero exit status.

325 Results Shown

oneDNN
NCNN
Mobile Neural Network
oneDNN
NCNN:
  CPU - FastestDet
  CPU - shufflenet-v2
ONNX Runtime:
  bertsquad-12 - CPU - Standard
  ArcFace ResNet-100 - CPU - Standard
NCNN
ONNX Runtime
NCNN
Mobile Neural Network
NCNN
Mobile Neural Network:
  squeezenetv1.1
  nasnet
  MobileNetV2_224
C-Ray
NAS Parallel Benchmarks
Stockfish
Mobile Neural Network
OpenSSL
Zstd Compression
OpenSSL
Cpuminer-Opt
NAS Parallel Benchmarks
oneDNN
Cpuminer-Opt
oneDNN
OpenSSL
NCNN
JPEG XL Decoding libjxl
Cpuminer-Opt
OpenVINO
Coremark
Cpuminer-Opt:
  x25x
  scrypt
  Ringcoin
7-Zip Compression
Cpuminer-Opt
Neural Magic DeepSparse
Cpuminer-Opt
IndigoBench
Neural Magic DeepSparse:
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream
Cpuminer-Opt
Neural Magic DeepSparse
Cpuminer-Opt
Stargate Digital Audio Workstation
Xmrig
Blender
IndigoBench
Tachyon
Blender:
  BMW27 - CPU-Only
  Classroom - CPU-Only
Xmrig
Stargate Digital Audio Workstation
asmFish
oneDNN
ASTC Encoder
OpenVINO
Chaos Group V-RAY
ASTC Encoder:
  Fast
  Medium
Stargate Digital Audio Workstation
ASTC Encoder
ONNX Runtime
Neural Magic DeepSparse
OpenVINO
Stargate Digital Audio Workstation
Timed Linux Kernel Compilation
Appleseed
Aircrack-ng
Stargate Digital Audio Workstation
Liquid-DSP
Blender
OpenVINO:
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU
  Weld Porosity Detection FP16-INT8 - CPU
Blender
NAMD
Liquid-DSP
oneDNN:
  Recurrent Neural Network Training - u8s8f32 - CPU
  IP Shapes 3D - bf16bf16bf16 - CPU
Timed LLVM Compilation
Stargate Digital Audio Workstation
oneDNN:
  Recurrent Neural Network Training - bf16bf16bf16 - CPU
  Recurrent Neural Network Training - f32 - CPU
OpenVINO:
  Face Detection FP16 - CPU
  Vehicle Detection FP16-INT8 - CPU
Neural Magic DeepSparse
SVT-HEVC
Rodinia
x265
OpenVINO
Stargate Digital Audio Workstation
oneDNN
Appleseed
oneDNN:
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU
  Recurrent Neural Network Inference - u8s8f32 - CPU
LAMMPS Molecular Dynamics Simulator
SVT-HEVC
Neural Magic DeepSparse
Stargate Digital Audio Workstation
NAS Parallel Benchmarks
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream:
    ms/batch
    items/sec
OpenVINO
Neural Magic DeepSparse:
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream:
    ms/batch
    items/sec
NCNN
OpenVINO
oneDNN:
  Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU
  Deconvolution Batch shapes_1d - u8s8f32 - CPU
Timed MPlayer Compilation
Primesieve
Timed LLVM Compilation
Primesieve
Timed Linux Kernel Compilation
Build2
Rodinia
NAS Parallel Benchmarks
7-Zip Compression
GROMACS
OpenVINO
SVT-VP9
PyPerformance
Timed Godot Game Engine Compilation
Timed FFmpeg Compilation
SVT-HEVC
oneDNN
GNU Radio
SVT-VP9
ONNX Runtime
Mobile Neural Network
SVT-HEVC
Timed Mesa Compilation
Mobile Neural Network
OpenVINO
libavif avifenc
Cpuminer-Opt
x264
oneDNN:
  Deconvolution Batch shapes_3d - f32 - CPU
  Deconvolution Batch shapes_1d - f32 - CPU
OpenFOAM
SVT-VP9:
  VMAF Optimized - Bosphorus 1080p
  PSNR/SSIM Optimized - Bosphorus 1080p
Neural Magic DeepSparse:
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream:
    ms/batch
    items/sec
x264
oneDNN
Appleseed
Kvazaar
SVT-HEVC
oneDNN
Neural Magic DeepSparse:
  CV Detection,YOLOv5s COCO - Synchronous Single-Stream:
    ms/batch
    items/sec
NCNN
ONNX Runtime
SVT-AV1
Darktable
SVT-HEVC
libavif avifenc
Zstd Compression
SVT-AV1
Kvazaar
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream:
    ms/batch
    items/sec
SVT-VP9
Rodinia
SVT-AV1
libavif avifenc
SVT-VP9
Neural Magic DeepSparse:
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream:
    ms/batch
    items/sec
Rodinia
ONNX Runtime
oneDNN
Zstd Compression
SVT-AV1
Kvazaar
libavif avifenc
Y-Cruncher
Timed PHP Compilation
GPAW
VP9 libvpx Encoding
Kvazaar
Darktable
oneDNN
NAS Parallel Benchmarks
ONNX Runtime
SVT-AV1
Timed Wasmer Compilation
Darktable
Xcompact3d Incompact3d
Y-Cruncher
oneDNN
GIMP
NAS Parallel Benchmarks:
  FT.C
  CG.C
ONNX Runtime
Kvazaar
Timed GDB GNU Debugger Compilation
oneDNN
OpenVINO
VP9 libvpx Encoding:
  Speed 0 - Bosphorus 1080p
  Speed 0 - Bosphorus 4K
Cpuminer-Opt
OpenFOAM
OpenVINO
nekRS
NCNN
Liquid-DSP
OpenVINO
SVT-AV1:
  Preset 4 - Bosphorus 1080p
  Preset 12 - Bosphorus 1080p
Zstd Compression
ONNX Runtime
OpenVINO:
  Vehicle Detection FP16-INT8 - CPU
  Weld Porosity Detection FP16 - CPU
Kvazaar
Neural Magic DeepSparse
OpenVINO
Neural Magic DeepSparse
Darktable
NCNN
OpenVINO
NCNN
Rodinia
SVT-AV1
oneDNN
OpenVINO
Timed CPython Compilation
x265
libavif avifenc
Zstd Compression
NCNN
DaCapo Benchmark
NCNN:
  CPU - resnet50
  CPU - vision_transformer
GIMP
GNU Radio
OpenVINO
NAS Parallel Benchmarks
OpenVINO
DaCapo Benchmark
Neural Magic DeepSparse
ONNX Runtime
NAS Parallel Benchmarks
Neural Magic DeepSparse
VP9 libvpx Encoding
OpenVINO:
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU
  Weld Porosity Detection FP16-INT8 - CPU
Neural Magic DeepSparse
NAS Parallel Benchmarks
TNN
DaCapo Benchmark
Neural Magic DeepSparse
WebP Image Encode
Neural Magic DeepSparse
JPEG XL libjxl
Algebraic Multi-Grid Benchmark
Timed Apache Compilation
Neural Magic DeepSparse
LeelaChessZero
Neural Magic DeepSparse
JPEG XL libjxl:
  JPEG - 90
  PNG - 90
  JPEG - 80
NCNN
GIMP
JPEG XL libjxl
simdjson
Ngspice
ONNX Runtime
Timed CPython Compilation
DaCapo Benchmark
Zstd Compression
NCNN
PyPerformance
GNU Radio:
  FIR Filter
  Hilbert Transform
TNN
Zstd Compression
Liquid-DSP
Zstd Compression
Ngspice
PyPerformance
PHPBench
WebP Image Encode
Zstd Compression:
  19 - Decompression Speed
  8 - Decompression Speed
LeelaChessZero
JPEG XL Decoding libjxl
Zstd Compression
simdjson
Liquid-DSP
QuantLib
PyPerformance
GNU Radio
PyPerformance
WebP Image Encode
Crafty
WebP Image Encode
GIMP
PyPerformance:
  2to3
  pathlib
TNN
GNU Radio
PyBench
Liquid-DSP
WebP Image Encode
Git
PyPerformance:
  regex_compile
  django_template
  raytrace
JPEG XL libjxl
simdjson
PyPerformance:
  json_loads
  pickle_pure_python
simdjson
TNN
FLAC Audio Encoding
PyPerformance
Zstd Compression
simdjson
LAME MP3 Encoding