AMD Ryzen zen4 Linux

Benchmarks for a future article.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2301094-PTS-EXTRANEW01
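
For readers new to the Phoronix Test Suite, a minimal sketch of that comparison workflow is shown below. The result identifier comes from this page; the package name assumes a Debian/Ubuntu system where the phoronix-test-suite package is available.

    # Install the Phoronix Test Suite (package name assumed for Debian/Ubuntu)
    sudo apt install phoronix-test-suite

    # Fetch the tests used by this result file, run them locally, and
    # merge your numbers into the comparison for side-by-side graphs.
    phoronix-test-suite benchmark 2301094-PTS-EXTRANEW01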

Run Management

Result Identifier     Date Run            Test Duration
7700                  December 31 2022    6 Hours, 29 Minutes
7900                  December 30 2022    5 Hours, 59 Minutes
Ryzen 7600 AMD        January 03 2023     1 Day, 4 Hours, 44 Minutes
AMD 7600              January 04 2023     7 Hours, 11 Minutes
AMD 7700              January 02 2023     6 Hours, 26 Minutes
Ryzen 7 7700          January 01 2023     6 Hours, 29 Minutes
Ryzen 7600            January 05 2023     7 Hours, 13 Minutes
Ryzen 9 7900          December 29 2022    6 Hours

AMD Ryzen zen4 Linux - System Details

Processor (per run):
  7700, AMD 7700, Ryzen 7 7700: AMD Ryzen 7 7700 8-Core @ 5.39GHz (8 Cores / 16 Threads)
  7900, Ryzen 9 7900: AMD Ryzen 9 7900 12-Core @ 5.48GHz (12 Cores / 24 Threads)
  Ryzen 7600 AMD, AMD 7600, Ryzen 7600: AMD Ryzen 5 7600 6-Core @ 5.17GHz (6 Cores / 12 Threads)

Common hardware and software across the runs:
  Motherboard: ASUS ROG CROSSHAIR X670E HERO (0805 BIOS)
  Chipset: AMD Device 14d8
  Memory: 32GB
  Disk: 2000GB Samsung SSD 980 PRO 2TB (an additional 2000GB drive is listed for some runs)
  Graphics: AMD Radeon RX 6800 XT 16GB (2575/1000MHz)
  Audio: AMD Navi 21 HDMI Audio
  Monitor: ASUS MG28U
  Network: Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
  OS: Ubuntu 22.04
  Kernel: 6.0.0-060000rc1daily20220820-generic (x86_64)
  Desktop: GNOME Shell 42.2
  Display Server: X Server 1.21.1.3 + Wayland
  OpenGL: 4.6 Mesa 22.3.0-devel (git-4685385 2022-08-23 jammy-oibaf-ppa) (LLVM 14.0.6 DRM 3.48)
  Vulkan: 1.3.224
  Compiler: GCC 12.0.1 20220319
  File-System: ext4
  Screen Resolution: 3840x2160

Kernel Details: Transparent Huge Pages: madvise
Processor Details: Scaling Governor: amd-pstate performance (Boost: Enabled) - CPU Microcode: 0xa601203
Java Details: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)
Python Details: Python 3.10.4
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-OcsLtf/gcc-12-12-20220319/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-OcsLtf/gcc-12-12-20220319/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite; relative performance of the eight runs, normalized to 100%). The comparison covers: C-Ray, Mobile Neural Network, Stockfish, OpenSSL, Coremark, IndigoBench, Tachyon, Xmrig, asmFish, Chaos Group V-RAY, Blender, ASTC Encoder, Aircrack-ng, NAMD, Stargate Digital Audio Workstation, Cpuminer-Opt, 7-Zip Compression, LAMMPS Molecular Dynamics Simulator, Timed LLVM Compilation, Timed Linux Kernel Compilation, Timed MPlayer Compilation, Primesieve, Appleseed, Build2, oneDNN, GROMACS, Timed Godot Game Engine Compilation, Timed FFmpeg Compilation, SVT-HEVC, Timed Mesa Compilation, NCNN, x264, SVT-VP9, Rodinia, NAS Parallel Benchmarks, libavif avifenc, OpenFOAM, x265, Kvazaar, Timed PHP Compilation, GPAW, Y-Cruncher, Timed Wasmer Compilation, SVT-AV1, Xcompact3d Incompact3d, Darktable, Neural Magic DeepSparse, JPEG XL Decoding libjxl, Timed GDB GNU Debugger Compilation, OpenVINO, nekRS, ONNX Runtime, Liquid-DSP, VP9 libvpx Encoding, GIMP, Timed CPython Compilation, Zstd Compression, GNU Radio, Algebraic Multi-Grid Benchmark, Timed Apache Compilation, JPEG XL libjxl, LeelaChessZero, Ngspice, DaCapo Benchmark, PHPBench, TNN, QuantLib, Crafty, simdjson, PyBench, Git, PyPerformance, WebP Image Encode, FLAC Audio Encoding, and LAME MP3 Encoding.

The complete side-by-side result table for all eight runs covers every test listed above; the full raw data set is available via the OpenBenchmarking.org result file 2301094-PTS-EXTRANEW01. Selected per-test results follow below.

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.
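
To repeat only the oneDNN numbers rather than the full suite, the individual test profile can be run on its own. The profile name pts/onednn below is an assumption based on OpenBenchmarking.org naming; the suite prompts for the harness, data type, and engine combinations to run.

    # Run just the oneDNN/benchdnn test profile (profile name assumed);
    # the Phoronix Test Suite asks which harness/data type/engine to use.
    phoronix-test-suite benchmark pts/onednn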

oneDNN 3.0 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better; SE +/- 0.008119, N = 3)
  Ryzen 9 7900: 0.359992 | Ryzen 7600 AMD: 1.023050 | Ryzen 7600: 1.015800 | Ryzen 7 7700: 0.951769 | AMD 7700: 0.916936 | AMD 7600: 1.010390 | 7900: 0.388582 | 7700: 0.962285

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: blazeface (ms, fewer is better; SE +/- 0.00, N = 3)
  Ryzen 9 7900: 1.40 | Ryzen 7600 AMD: 0.53 | Ryzen 7600: 0.53 | Ryzen 7 7700: 0.57 | AMD 7700: 0.54 | AMD 7600: 0.53 | 7900: 1.39 | 7700: 0.59

Mobile Neural Network

MNN, the Mobile Neural Network, is a highly efficient and lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking rather than any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 - Model: mobilenet-v1-1.0 (ms, fewer is better; SE +/- 0.002, N = 15)
  Ryzen 9 7900: 3.560 | Ryzen 7600 AMD: 2.854 | Ryzen 7600: 2.864 | Ryzen 7 7700: 1.518 | AMD 7700: 1.482 | AMD 7600: 2.848 | 7900: 3.701 | 7700: 1.558

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, fewer is better; SE +/- 0.001514, N = 3)
  Ryzen 9 7900: 0.665636 | Ryzen 7600 AMD: 1.635430 | Ryzen 7600: 1.652570 | Ryzen 7 7700: 1.556490 | AMD 7700: 1.453490 | AMD 7600: 1.641670 | 7900: 0.664113 | 7700: 1.534410

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: FastestDet (ms, fewer is better; SE +/- 0.01, N = 3)
  Ryzen 9 7900: 4.17 | Ryzen 7600 AMD: 1.71 | Ryzen 7600: 1.85 | Ryzen 7 7700: 1.76 | AMD 7700: 1.72 | AMD 7600: 1.71 | 7900: 4.16 | 7700: 1.90

NCNN 20220729 - Target: CPU - Model: shufflenet-v2 (ms, fewer is better; SE +/- 0.00, N = 3)
  Ryzen 9 7900: 3.48 | Ryzen 7600 AMD: 1.58 | Ryzen 7600: 1.60 | Ryzen 7 7700: 1.49 | AMD 7700: 1.45 | AMD 7600: 1.58 | 7900: 3.48 | 7700: 1.53

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: bertsquad-12 - Device: CPU - Executor: Standard (Inferences Per Minute, more is better; SE +/- 73.47, N = 12)
  Ryzen 9 7900: 716 | Ryzen 7600 AMD: 757 | Ryzen 7600: 534 | Ryzen 7 7700: 729 | AMD 7700: 1257 | AMD 7600: 1000 | 7900: 1083 | 7700: 1239

ONNX Runtime 1.11 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inferences Per Minute, more is better; SE +/- 92.79, N = 12)
  Ryzen 9 7900: 2508 | Ryzen 7600 AMD: 1252 | Ryzen 7600: 1920 | Ryzen 7 7700: 1580 | AMD 7700: 2550 | AMD 7600: 1107 | 7900: 1735 | 7700: 1588

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: regnety_400m (ms, fewer is better; SE +/- 0.01, N = 3)
  Ryzen 9 7900: 10.51 | Ryzen 7600 AMD: 4.73 | Ryzen 7600: 4.81 | Ryzen 7 7700: 4.83 | AMD 7700: 4.61 | AMD 7600: 4.78 | 7900: 10.34 | 7700: 4.91

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: super-resolution-10 - Device: CPU - Executor: Standard (Inferences Per Minute, more is better; SE +/- 3.09, N = 3)
  Ryzen 9 7900: 5796 | Ryzen 7600 AMD: 3992 | Ryzen 7600: 3985 | Ryzen 7 7700: 8381 | AMD 7700: 5282 | AMD 7600: 3994 | 7900: 5676 | 7700: 5333

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better; SE +/- 0.00, N = 3)
  Ryzen 9 7900: 2.98 | Ryzen 7600 AMD: 1.53 | Ryzen 7600: 1.58 | Ryzen 7 7700: 1.50 | AMD 7700: 1.42 | AMD 7600: 1.53 | 7900: 2.97 | 7700: 1.50

Mobile Neural Network

MNN, the Mobile Neural Network, is a highly efficient and lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking rather than any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 - Model: mobilenetV3 (ms, fewer is better; SE +/- 0.005, N = 15)
  Ryzen 9 7900: 1.493 | Ryzen 7600 AMD: 0.783 | Ryzen 7600: 0.801 | Ryzen 7 7700: 0.736 | AMD 7700: 0.725 | AMD 7600: 0.737 | 7900: 1.505 | 7700: 0.741

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: mnasnet (ms, fewer is better; SE +/- 0.00, N = 3)
  Ryzen 9 7900: 3.06 | Ryzen 7600 AMD: 1.61 | Ryzen 7600: 1.67 | Ryzen 7 7700: 1.58 | AMD 7700: 1.50 | AMD 7600: 1.61 | 7900: 3.06 | 7700: 1.71

Mobile Neural Network

MNN, the Mobile Neural Network, is a highly efficient and lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking rather than any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 - Model: squeezenetv1.1 (ms, fewer is better; SE +/- 0.008, N = 15)
  Ryzen 9 7900: 2.452 | Ryzen 7600 AMD: 1.448 | Ryzen 7600: 1.466 | Ryzen 7 7700: 1.301 | AMD 7700: 1.244 | AMD 7600: 1.375 | 7900: 2.522 | 7700: 1.272

Mobile Neural Network 2.1 - Model: nasnet (ms, fewer is better; SE +/- 0.110, N = 15)
  Ryzen 9 7900: 10.396 | Ryzen 7600 AMD: 6.237 | Ryzen 7600: 6.505 | Ryzen 7 7700: 5.883 | AMD 7700: 5.522 | AMD 7600: 5.322 | 7900: 10.564 | 7700: 5.943

Mobile Neural Network 2.1 - Model: MobileNetV2_224 (ms, fewer is better; SE +/- 0.011, N = 15)
  Ryzen 9 7900: 3.113 | Ryzen 7600 AMD: 1.745 | Ryzen 7600: 1.770 | Ryzen 7 7700: 1.827 | AMD 7700: 1.735 | AMD 7600: 1.638 | 7900: 3.150 | 7700: 1.812

C-Ray

This is a test of C-Ray, a simple raytracer designed to test floating-point CPU performance. This test is multi-threaded (16 threads per core), shoots 8 rays per pixel for anti-aliasing, and generates a 1600 x 1200 image. Learn more via the OpenBenchmarking.org test page.

C-Ray 1.1 - Total Time - 4K, 16 Rays Per Pixel (Seconds, fewer is better; SE +/- 0.02, N = 3)
  Ryzen 9 7900: 34.03 | Ryzen 7600 AMD: 64.33 | Ryzen 7600: 64.33 | Ryzen 7 7700: 49.03 | AMD 7700: 48.79 | AMD 7600: 64.37 | 7900: 34.54 | 7700: 48.95

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.
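
As a rough sketch of how the MPI build of NPB is exercised outside the test profile, the EP kernel at class C can be built and launched as below; the make target, bin/ path, and binary naming are assumptions based on the stock NPB 3.4 MPI distribution, and the rank count should match the CPU under test.

    # Inside the NPB3.4-MPI source tree: build the EP kernel for class C
    # (target and binary naming assumed from the stock NPB distribution).
    make ep CLASS=C

    # Launch across 16 ranks, e.g. matching an 8-core/16-thread Ryzen 7 7700.
    mpirun -np 16 ./bin/ep.C.x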

NAS Parallel Benchmarks 3.4 - Test / Class: EP.C (Total Mop/s, more is better; SE +/- 13.02, N = 5)
  Ryzen 9 7900: 2150.58 | Ryzen 7600 AMD: 1173.77 | Ryzen 7600: 1192.54 | Ryzen 7 7700: 1514.07 | AMD 7700: 1561.40 | AMD 7600: 1143.01 | 7900: 2155.92 | 7700: 1588.73

Stockfish

This is a test of Stockfish, an advanced open-source C++11 chess engine benchmark that can scale up to 512 CPU threads. Learn more via the OpenBenchmarking.org test page.
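
Stockfish ships a built-in bench command that drives this style of nodes-per-second measurement and can be run by hand. The example below is a sketch; the positional arguments (hash size in MB, thread count, search depth) are assumptions based on the upstream Stockfish bench usage.

    # Built-in Stockfish benchmark: 1024 MB hash, 16 threads, depth 26
    # (argument order assumed: hash size, threads, search limit).
    ./stockfish bench 1024 16 26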

Stockfish 15 - Total Time (Nodes Per Second, more is better; SE +/- 210730.41, N = 11)
  Ryzen 9 7900: 47480935 | Ryzen 7600 AMD: 27957470 | Ryzen 7600: 28121767 | Ryzen 7 7700: 35499955 | AMD 7700: 38273850 | AMD 7600: 27167188 | 7900: 50796739 | 7700: 34904509

Mobile Neural Network

MNN, the Mobile Neural Network, is a highly efficient and lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking rather than any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 - Model: resnet-v2-50 (ms, fewer is better; SE +/- 0.045, N = 15)
  Ryzen 9 7900: 13.620 | Ryzen 7600 AMD: 11.988 | Ryzen 7600: 12.085 | Ryzen 7 7700: 7.877 | AMD 7700: 7.334 | AMD 7600: 11.558 | 7900: 13.653 | 7700: 7.871

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.
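
Because the underlying measurement is the stock "openssl speed" utility, the same figures can be approximated by hand. The examples below are a sketch; -multi spreads the work across processes and should be set to the thread count of the CPU under test.

    # RSA 4096-bit sign/verify throughput across all hardware threads
    openssl speed -multi $(nproc) rsa4096

    # SHA-256 bulk hashing throughput (bytes/second at several block sizes)
    openssl speed -multi $(nproc) sha256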

OpenSSL 3.0 - Algorithm: RSA4096 (verify/s, more is better; SE +/- 6.07, N = 3)
  Ryzen 9 7900: 275022.1 | Ryzen 7600 AMD: 148020.7 | Ryzen 7600: 147943.7 | Ryzen 7 7700: 193624.5 | AMD 7700: 194261.5 | AMD 7600: 147994.7 | 7900: 274898.7 | 7700: 194057.2

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
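
Zstandard's integrated benchmark mode gives a reasonable approximation of what this profile measures. A sketch using the same FreeBSD image and compression level 8 follows; -b selects benchmark mode at the given level and -T0 uses all available threads.

    # Benchmark zstd level 8 (compression and decompression speed) on the
    # same sample file, using every available core.
    zstd -b8 -T0 FreeBSD-12.2-RELEASE-amd64-memstick.img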

Zstd Compression 1.5.0 - Compression Level: 8 - Compression Speed (MB/s, more is better; SE +/- 9.87, N = 3)
  Ryzen 9 7900: 2025.3 | Ryzen 7600 AMD: 1248.8 | Ryzen 7600: 1180.2 | Ryzen 7 7700: 1090.6 | AMD 7700: 1119.3 | AMD 7600: 1231.3 | 7900: 1985.8 | 7700: 1110.2

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenSSL 3.0 - Algorithm: RSA4096 (sign/s, more is better; SE +/- 0.25, N = 3)
  Ryzen 9 7900: 4203.4 | Ryzen 7600 AMD: 2265.1 | Ryzen 7600: 2265.4 | Ryzen 7 7700: 2954.2 | AMD 7700: 2968.4 | AMD 7600: 2265.5 | 7900: 4201.8 | 7700: 2958.4

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the processor across a wide variety of cryptocurrencies. The benchmark reports the hash speed of CPU mining for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.
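
Cpuminer-Opt can be exercised without connecting to a mining pool via its offline benchmark mode. The sketch below assumes the upstream cpuminer-opt/cpuminer-multi flag names (--benchmark, -a for algorithm, -t for threads); scrypt is used as an example algorithm from this comparison.

    # Offline hash-rate benchmark of the scrypt algorithm on 16 threads
    # (flag names assumed from the cpuminer-opt usage text).
    ./cpuminer --benchmark -a scrypt -t 16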

Cpuminer-Opt 3.20.3 - Algorithm: Deepcoin (kH/s, more is better; SE +/- 16.47, N = 3)
  Ryzen 9 7900: 14700.00 | Ryzen 7600 AMD: 7961.37 | Ryzen 7600: 7943.92 | Ryzen 7 7700: 10380.00 | AMD 7700: 10440.00 | AMD 7600: 7944.47 | 7900: 14700.00 | 7700: 10410.00

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 - Test / Class: EP.D (Total Mop/s, more is better; SE +/- 9.35, N = 9)
  Ryzen 9 7900: 2117.95 | Ryzen 7600 AMD: 1172.13 | Ryzen 7600: 1197.05 | Ryzen 7 7700: 1562.22 | AMD 7700: 1507.33 | AMD 7600: 1199.83 | 7900: 2158.45 | 7700: 1551.49

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0 - Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better; SE +/- 0.03364, N = 3)
  Ryzen 9 7900: 5.51763 | Ryzen 7600 AMD: 9.60780 | Ryzen 7600: 10.12880 | Ryzen 7 7700: 7.41505 | AMD 7700: 7.42742 | AMD 7600: 9.59464 | 7900: 5.67760 | 7700: 7.41713

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the processor across a wide variety of cryptocurrencies. The benchmark reports the hash speed of CPU mining for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3 - Algorithm: Magi (kH/s, more is better; SE +/- 1.54, N = 3)
  Ryzen 9 7900: 733.32 | Ryzen 7600 AMD: 402.03 | Ryzen 7600: 406.47 | Ryzen 7 7700: 517.61 | AMD 7700: 531.73 | AMD 7600: 400.76 | 7900: 726.49 | 7700: 516.97

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0 - Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better; SE +/- 0.01337, N = 3)
  Ryzen 9 7900: 1.84748 | Ryzen 7600 AMD: 3.26570 | Ryzen 7600: 3.27254 | Ryzen 7 7700: 3.36363 | AMD 7700: 3.18229 | AMD 7600: 3.26386 | 7900: 1.99078 | 7700: 3.37523

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenSSL 3.0 - Algorithm: SHA256 (byte/s, more is better; SE +/- 148039451.93, N = 3)
  Ryzen 9 7900: 24055372380 | Ryzen 7600 AMD: 13223838170 | Ryzen 7600: 13335266100 | Ryzen 7 7700: 17544870130 | AMD 7700: 17538082780 | AMD 7600: 13388156320 | 7900: 24118303190 | 7700: 17490744260

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better; SE +/- 0.00, N = 3)
  Ryzen 9 7900: 3.41 | Ryzen 7600 AMD: 1.91 | Ryzen 7600: 2.05 | Ryzen 7 7700: 1.98 | AMD 7700: 1.87 | AMD 7600: 1.92 | 7900: 3.39 | 7700: 1.98

JPEG XL Decoding libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is suited for JPEG XL decode performance testing to a PNG output file; the pts/jpegxl test is for encode performance. The JPEG XL encoding/decoding is done using the libjxl codebase. Learn more via the OpenBenchmarking.org test page.
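
Decode throughput of this kind can be reproduced with the djxl tool from libjxl, which converts a .jxl input back to PNG. The input filename below is a placeholder and --num_threads is assumed from the libjxl tool options; omit it to let djxl pick a default.

    # Decode a JPEG XL file to PNG using 16 worker threads
    # (sample.jxl is a placeholder input; --num_threads assumed from libjxl tools).
    djxl --num_threads=16 sample.jxl decoded.png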

JPEG XL Decoding libjxl 0.7 - CPU Threads: All (MP/s, more is better; SE +/- 0.21, N = 3)
  Ryzen 9 7900: 201.39 | Ryzen 7600 AMD: 336.39 | Ryzen 7600: 334.93 | Ryzen 7 7700: 344.88 | AMD 7700: 365.43 | AMD 7600: 318.81 | 7900: 204.91 | 7700: 361.28

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the processor across a wide variety of cryptocurrencies. The benchmark reports the hash speed of CPU mining for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3 - Algorithm: Quad SHA-256, Pyrite (kH/s, more is better; SE +/- 21.86, N = 3)
  Ryzen 9 7900: 207140 | Ryzen 7600 AMD: 114893 | Ryzen 7600: 115430 | Ryzen 7 7700: 147910 | AMD 7700: 153350 | AMD 7600: 115330 | 7900: 208470 | 7700: 149450

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.
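
The throughput figures come from OpenVINO's bundled benchmark_app utility run against the various models. A minimal sketch is shown below; the model path is a placeholder and only the basic -m / -d options are assumed.

    # Measure CPU inference throughput for one IR model with benchmark_app
    # (the vehicle-detection model path is a placeholder).
    benchmark_app -m vehicle-detection-fp16.xml -d CPU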

OpenVINO 2022.3 - Model: Vehicle Detection FP16 - Device: CPU (FPS, more is better; SE +/- 0.75, N = 3)
  Ryzen 9 7900: 672.90 | Ryzen 7600 AMD: 370.90 | Ryzen 7600: 375.11 | Ryzen 7 7700: 490.93 | AMD 7700: 463.77 | AMD 7600: 374.50 | 7900: 672.13 | 7700: 491.96

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

Coremark 1.0 - CoreMark Size 666 - Iterations Per Second (Iterations/Sec, more is better; SE +/- 1264.79, N = 3)
  Ryzen 9 7900: 695719.39 | Ryzen 7600 AMD: 385213.54 | Ryzen 7600: 384184.41 | Ryzen 7 7700: 494267.87 | AMD 7700: 509464.00 | AMD 7600: 385542.17 | 7900: 691509.80 | 7700: 497649.99

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the processor across a wide variety of cryptocurrencies. The benchmark reports the hash speed of CPU mining for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3 - Algorithm: x25x (kH/s, more is better; SE +/- 1.16, N = 3)
  Ryzen 9 7900: 804.66 | Ryzen 7600 AMD: 448.72 | Ryzen 7600: 448.30 | Ryzen 7 7700: 583.56 | AMD 7700: 586.79 | AMD 7600: 446.02 | 7900: 804.79 | 7700: 583.09

Cpuminer-Opt 3.20.3 - Algorithm: scrypt (kH/s, more is better; SE +/- 1.68, N = 3)
  Ryzen 9 7900: 449.60 | Ryzen 7600 AMD: 252.86 | Ryzen 7600: 251.35 | Ryzen 7 7700: 331.57 | AMD 7700: 332.81 | AMD 7600: 250.53 | 7900: 451.13 | 7700: 325.46

Cpuminer-Opt 3.20.3 - Algorithm: Ringcoin (kH/s, more is better; SE +/- 4.41, N = 3)
  Ryzen 9 7900: 3027.67 | Ryzen 7600 AMD: 1692.20 | Ryzen 7600: 1709.61 | Ryzen 7 7700: 2268.58 | AMD 7700: 2207.98 | AMD 7600: 1684.91 | 7900: 3027.86 | 7700: 2226.51

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMIPS, More Is Better7-Zip Compression 22.01Test: Decompression RatingRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770030K60K90K120K150KSE +/- 90.75, N = 31225286884568238854338683368387121615851661. (CXX) g++ options: -lpthread -ldl -O2 -fPIC
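The MIPS ratings above come from 7-Zip's built-in benchmark. As a much simpler stand-in, the following sketch measures raw LZMA compress/decompress throughput with Python's standard lzma module; it does not reproduce 7-Zip's rating formula or its multi-threaded dictionary settings, and the data size and preset are arbitrary assumptions.

# Simple LZMA compress/decompress throughput check with Python's lzma module.
# Only a stand-in illustration of compression throughput, not 7-Zip's MIPS rating.
import lzma
import os
import time

data = os.urandom(4 << 20) + bytes(12 << 20)   # 16 MiB of mixed data (assumed)

t0 = time.perf_counter()
packed = lzma.compress(data, preset=6)
t1 = time.perf_counter()
unpacked = lzma.decompress(packed)
t2 = time.perf_counter()

assert unpacked == data
mib = len(data) / (1 << 20)
print(f"compress:   {mib / (t1 - t0):.1f} MiB/s")
print(f"decompress: {mib / (t2 - t1):.1f} MiB/s")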

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: Blake-2 SRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700300K600K900K1200K1500KSE +/- 2843.36, N = 3118228066411366246086888069156066270011891208795601. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Neural Magic DeepSparse

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-StreamRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700246810SE +/- 0.0028, N = 38.52674.75364.75806.11826.16444.76338.51766.1455

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: LBC, LBRY CreditsRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770020K40K60K80K100KSE +/- 26.67, N = 396150536935381070730717505384096310723601. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgM samples/s, More Is BetterIndigoBench 4.4Acceleration: CPU - Scene: SupercarRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700246810SE +/- 0.013, N = 38.0594.5174.5825.8405.8844.5558.0985.786

Neural Magic DeepSparse

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-StreamRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700306090120150SE +/- 0.05, N = 3127.2671.4071.8292.0192.4571.08126.3692.18

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-StreamRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700246810SE +/- 0.0023, N = 38.50384.75444.75506.13476.16034.75768.50396.1425

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: Triple SHA-256, OnecoinRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770060K120K180K240K300KSE +/- 1138.26, N = 132956601673431656602178302209601688102951302205501. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Neural Magic DeepSparse

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-StreamRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077004080120160200SE +/- 0.10, N = 3200.92113.45113.83146.03146.77113.78202.12146.43

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: SkeincoinRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770040K80K120K160K200KSE +/- 695.37, N = 31834101033371030401344501376501030301815401348001. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRender Ratio, More Is BetterStargate Digital Audio Workstation 22.11.5Sample Rate: 44100 - Buffer Size: 1024Ryzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077001.21292.42583.63874.85166.0645SE +/- 0.000394, N = 35.3905233.0396273.0304734.2982684.3287693.0391885.3610254.2971271. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

Xmrig

Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgH/s, More Is BetterXmrig 6.18.1Variant: Wownero - Hash Count: 1MRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077003K6K9K12K15KSE +/- 6.89, N = 314044.57963.97957.910175.610277.37962.214144.310240.71. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.4Blend File: Barbershop - Compute: CPU-OnlyRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770030060090012001500SE +/- 2.65, N = 3723.781284.021282.32989.72973.451284.52723.13983.97

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgM samples/s, More Is BetterIndigoBench 4.4Acceleration: CPU - Scene: BedroomRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077000.82311.64622.46933.29244.1155SE +/- 0.005, N = 33.6582.0642.0712.6682.6992.0963.6582.701

Tachyon

This is a test of the threaded Tachyon, a parallel ray-tracing system, measuring the time to ray-trace a sample scene. The sample scene used is the Teapot scene ray-traced to 8K x 8K with 32 samples. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTachyon 0.99.2Total TimeRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700306090120150SE +/- 0.64, N = 387.44151.72152.68115.82115.54150.6686.30115.751. (CC) gcc options: -m64 -O3 -fomit-frame-pointer -ffast-math -ltachyon -lm -lpthread

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.4Blend File: BMW27 - Compute: CPU-OnlyRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700306090120150SE +/- 0.20, N = 377.02134.80134.08104.03103.29135.0476.35103.96

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.4Blend File: Classroom - Compute: CPU-OnlyRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770080160240320400SE +/- 0.20, N = 3202.13356.47357.35273.36271.59357.10202.67272.78

Xmrig

Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is set up to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgH/s, More Is BetterXmrig 6.18.1Variant: Monero - Hash Count: 1MRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077003K6K9K12K15KSE +/- 32.54, N = 312360.57123.67171.77745.27780.87002.210059.77738.01. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRender Ratio, More Is BetterStargate Digital Audio Workstation 22.11.5Sample Rate: 480000 - Buffer Size: 1024Ryzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077001.18112.36223.54334.72445.9055SE +/- 0.001126, N = 35.2492152.9807822.9777734.1784614.2289762.9813095.2011374.2029631. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

asmFish

This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNodes/second, More Is BetterasmFish 2018-07-231024 Hash Memory, 26 DepthRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770012M24M36M48M60MSE +/- 333145.62, N = 35636833032029776320095394141153242126976327599425535014542091594

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPURyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077000.12390.24780.37170.49560.6195SE +/- 0.000941, N = 30.3128260.5488220.5503330.4368590.4150220.5507240.3131160.435514MIN: 0.28MIN: 0.52MIN: 0.52MIN: 0.38MIN: 0.39MIN: 0.53MIN: 0.28MIN: 0.391. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMT/s, More Is BetterASTC Encoder 4.0Preset: ExhaustiveRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077000.26570.53140.79711.06281.3285SE +/- 0.0005, N = 31.18110.67250.68150.87600.87900.67171.18090.87771. (CXX) g++ options: -O3 -flto -pthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Face Detection FP16-INT8 - Device: CPURyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770048121620SE +/- 0.01, N = 318.1310.3310.5213.5713.6810.3418.0713.611. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

Chaos Group V-RAY

This is a test of Chaos Group's V-RAY benchmark. V-RAY is a commercial renderer that can integrate with various creator software products like SketchUp and 3ds Max. The V-RAY benchmark is standalone and supports CPU and NVIDIA CUDA/RTX based rendering. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgvsamples, More Is BetterChaos Group V-RAY 5.02Mode: CPURyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077005K10K15K20K25KSE +/- 40.13, N = 32121212352120901559415748121562116615640

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMT/s, More Is BetterASTC Encoder 4.0Preset: FastRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770060120180240300SE +/- 0.11, N = 3255.04147.27145.52192.78194.49147.45254.36193.441. (CXX) g++ options: -O3 -flto -pthread

OpenBenchmarking.orgMT/s, More Is BetterASTC Encoder 4.0Preset: MediumRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770020406080100SE +/- 0.26, N = 389.4551.6851.1567.5267.6951.1189.2967.551. (CXX) g++ options: -O3 -flto -pthread

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRender Ratio, More Is BetterStargate Digital Audio Workstation 22.11.5Sample Rate: 44100 - Buffer Size: 512Ryzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077001.16792.33583.50374.67165.8395SE +/- 0.002271, N = 35.1907012.9660522.9695584.1555654.1815692.9683645.1694644.1528341. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMT/s, More Is BetterASTC Encoder 4.0Preset: ThoroughRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077003691215SE +/- 0.0037, N = 311.20116.47856.47758.44558.42586.404711.20348.42761. (CXX) g++ options: -O3 -flto -pthread

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: fcn-resnet101-11 - Device: CPU - Executor: StandardRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700306090120150SE +/- 6.47, N = 129379987112498981241. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
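A hedged sketch of how an inferences-per-minute number like the above can be gathered with the onnxruntime Python API follows; the model file name and input shape are assumptions, and ONNX Runtime's own benchmarking harness handles warm-up and run counts more carefully than this loop does.

# Sketch of an inferences-per-minute measurement with the onnxruntime Python API.
# The model path and input shape are assumptions; any single-input image model
# from the ONNX Zoo would work the same way.
import time
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("fcn-resnet101-11.onnx",
                            providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]
x = np.random.rand(1, 3, 480, 640).astype(np.float32)  # assumed NCHW shape

runs = 10
start = time.perf_counter()
for _ in range(runs):
    sess.run(None, {inp.name: x})
elapsed = time.perf_counter() - start

print(f"{runs / elapsed * 60:.1f} inferences per minute")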

Neural Magic DeepSparse

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-StreamRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077001428425670SE +/- 0.07, N = 361.8535.6035.7045.6246.0135.6162.1246.02

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Weld Porosity Detection FP16 - Device: CPURyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077002004006008001000SE +/- 3.29, N = 3918.15527.63527.78686.56692.33526.47917.78687.931. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRender Ratio, More Is BetterStargate Digital Audio Workstation 22.11.5Sample Rate: 96000 - Buffer Size: 1024Ryzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077000.87091.74182.61273.48364.3545SE +/- 0.001525, N = 33.8707272.2275092.2202513.1149243.1409102.2317983.8673513.1034171. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested or alternatively an allmodconfig for building all possible kernel modules for the build. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Linux Kernel Compilation 6.1Build: allmodconfigRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770030060090012001500SE +/- 6.89, N = 3738.331266.561286.511012.611005.661280.93744.301019.45
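A minimal sketch of the kind of timed build this test performs is shown below, assuming an already-unpacked kernel tree in ./linux; the Phoronix Test Suite itself handles source download, configuration (defconfig or allmodconfig), and run averaging.

# Rough sketch of a timed kernel build, assuming an unpacked kernel tree in
# ./linux and enough disk space; PTS handles source setup and averaging itself.
import multiprocessing
import subprocess
import time

SRC = "./linux"                      # assumed kernel source directory
JOBS = str(multiprocessing.cpu_count())

subprocess.run(["make", "-C", SRC, "defconfig"], check=True)   # or allmodconfig
subprocess.run(["make", "-C", SRC, "clean"], check=True)

start = time.perf_counter()
subprocess.run(["make", "-C", SRC, f"-j{JOBS}"], check=True)
print(f"defconfig build: {time.perf_counter() - start:.1f} seconds")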

Appleseed

Appleseed is an open-source production renderer focused on physically-based global illumination, primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterAppleseed 2.0 BetaScene: Disney MaterialRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770050100150200250119.05207.29206.77158.99158.42206.46119.53158.29

Aircrack-ng

Aircrack-ng is a tool for assessing WiFi/WLAN network security. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgk/s, More Is BetterAircrack-ng 1.7Ryzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770013K26K39K52K65KSE +/- 2.66, N = 362268.3835804.8235828.1446993.7047545.7335844.4462268.6446662.871. (CXX) g++ options: -std=gnu++17 -O3 -fvisibility=hidden -fcommon -rdynamic -lnl-3 -lnl-genl-3 -lpcre -lsqlite3 -lpthread -lz -lssl -lcrypto -lhwloc -ldl -lm -pthread

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRender Ratio, More Is BetterStargate Digital Audio Workstation 22.11.5Sample Rate: 192000 - Buffer Size: 1024Ryzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077000.58221.16441.74662.32882.911SE +/- 0.001409, N = 32.5739041.4882261.4902002.0786132.0884381.4909012.5876192.0733211. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 2021.01.31Threads: 24 - Buffer Length: 256 - Filter Length: 57Ryzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700200M400M600M800M1000MSE +/- 2020662.71, N = 3103300000059626333359458000077284000077655000059851000010326000007756600001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
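For a sense of what the samples/s metric means, the single-threaded NumPy sketch below pushes 256-sample buffers through a 57-tap FIR filter and reports processed samples per second. It is only an illustration; liquid-dsp's SIMD kernels and the 24-thread configuration above are far faster, and the tap values and iteration count here are arbitrary assumptions.

# Single-threaded NumPy stand-in for the samples/s metric: push fixed-size
# buffers through a 57-tap FIR filter and count processed samples per second.
import time
import numpy as np

taps = np.random.randn(57).astype(np.float32)      # filter length 57
buf = np.random.randn(256).astype(np.float32)      # buffer length 256

iters = 20_000
start = time.perf_counter()
for _ in range(iters):
    np.convolve(buf, taps, mode="same")
elapsed = time.perf_counter() - start

print(f"{iters * buf.size / elapsed / 1e6:.1f} M samples/s (one thread)")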

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.4Blend File: Pabellon Barcelona - Compute: CPU-OnlyRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770090180270360450SE +/- 0.21, N = 3249.95432.63433.93334.85333.25433.06249.96335.44

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPURyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077004K8K12K16K20KSE +/- 0.99, N = 316466.749490.509516.2612469.4412608.149510.3516311.4712534.521. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Weld Porosity Detection FP16-INT8 - Device: CPURyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700400800120016002000SE +/- 1.12, N = 31832.771056.821058.541368.901368.951056.401830.421369.351. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.4Blend File: Fishy Cat - Compute: CPU-OnlyRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077004080120160200SE +/- 0.26, N = 3100.47174.24174.02135.18134.16173.72100.60134.78

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgdays/ns, Fewer Is BetterNAMD 2.14ATPase Simulation - 327,506 AtomsRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077000.46410.92821.39231.85642.3205SE +/- 0.00775, N = 31.190442.062262.062851.605521.600572.042441.190061.60383

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 2021.01.31Threads: 16 - Buffer Length: 256 - Filter Length: 57Ryzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700200M400M600M800M1000MSE +/- 86474.15, N = 3103400000059747333359685000078618000077800000059741000010336000007741600001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPURyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077006001200180024003000SE +/- 2.35, N = 31675.602897.712897.052292.332231.812897.081675.222294.35MIN: 1671.36MIN: 2889.51MIN: 2882.96MIN: 2268.98MIN: 2215.43MIN: 2893.38MIN: 1671.1MIN: 2263.651. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPURyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077000.59951.1991.79852.3982.9975SE +/- 0.02309, N = 31.544032.577692.664522.452222.334992.573051.542072.50229MIN: 1.48MIN: 2.49MIN: 2.47MIN: 2.28MIN: 2.28MIN: 2.51MIN: 1.47MIN: 2.291. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed LLVM Compilation 13.0Build System: NinjaRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700130260390520650SE +/- 0.25, N = 3353.54609.12602.78487.73475.85608.68352.63489.62

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRender Ratio, More Is BetterStargate Digital Audio Workstation 22.11.5Sample Rate: 480000 - Buffer Size: 512Ryzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077001.1272.2543.3814.5085.635SE +/- 0.002307, N = 34.9949202.9059882.9011614.0622214.0796292.9080295.0088204.0526401. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPURyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077006001200180024003000SE +/- 2.85, N = 31677.702895.712888.442272.722223.942892.901680.332285.98MIN: 1673.08MIN: 2880.28MIN: 2854.93MIN: 2247.96MIN: 2210.21MIN: 2888.21MIN: 1675.48MIN: 2258.71. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPURyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077006001200180024003000SE +/- 0.92, N = 31683.032896.312896.002289.262225.662893.481678.882282.73MIN: 1677.55MIN: 2889.01MIN: 2885.46MIN: 2263.84MIN: 2213.52MIN: 2889.6MIN: 1673.29MIN: 2256.181. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Face Detection FP16 - Device: CPURyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077003691215SE +/- 0.01, N = 39.195.345.366.876.935.339.196.891. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Vehicle Detection FP16-INT8 - Device: CPURyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770030060090012001500SE +/- 5.59, N = 31163.45676.76691.78879.31892.32688.761164.42878.781. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

Neural Magic DeepSparse

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-StreamRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770020406080100SE +/- 0.26, N = 385.1649.7249.6362.7562.9549.6585.1962.30

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-HEVC 1.5.0Tuning: 1 - Input: Bosphorus 4KRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077000.88651.7732.65953.5464.4325SE +/- 0.00, N = 33.942.302.302.962.982.313.942.961. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterRodinia 3.1Test: OpenMP LavaMDRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770050100150200250SE +/- 2.30, N = 3127.05215.76213.01167.47165.87214.15126.28166.071. (CXX) g++ options: -O2 -lOpenCL

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is Betterx265 3.4Video Input: Bosphorus 4KRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700714212835SE +/- 0.23, N = 327.9717.9017.7717.9717.6817.1928.0416.421. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Person Detection FP16 - Device: CPURyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077001.16782.33563.50344.67125.839SE +/- 0.02, N = 35.163.043.063.873.923.075.193.931. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRender Ratio, More Is BetterStargate Digital Audio Workstation 22.11.5Sample Rate: 96000 - Buffer Size: 512Ryzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077000.82581.65162.47743.30324.129SE +/- 0.003762, N = 33.6659112.1647602.1590672.9865163.0002752.1597073.6700942.9944691. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPURyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770030060090012001500SE +/- 1.23, N = 3855.551450.481450.821155.041113.961453.82855.911152.24MIN: 851.41MIN: 1445.4MIN: 1438.48MIN: 1129.24MIN: 1102.31MIN: 1450.65MIN: 851.85MIN: 1133.821. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Appleseed

Appleseed is an open-source production renderer focused on physically-based global illumination, primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterAppleseed 2.0 BetaScene: EmilyRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770070140210280350196.70332.51334.20262.06261.24331.38197.26262.66

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPURyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770030060090012001500SE +/- 0.41, N = 3854.951452.401450.701156.111110.931450.63856.021149.16MIN: 851.04MIN: 1447.04MIN: 1435.36MIN: 1135.88MIN: 1100.71MIN: 1448.24MIN: 851.48MIN: 1126.631. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPURyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770030060090012001500SE +/- 0.60, N = 3855.541452.691446.491155.401112.461444.74858.831154.55MIN: 851.94MIN: 1448.33MIN: 1430.15MIN: 1130.22MIN: 1094.16MIN: 1440.66MIN: 851.32MIN: 1130.741. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgns/day, More Is BetterLAMMPS Molecular Dynamics Simulator 23Jun2022Model: Rhodopsin ProteinRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077003691215SE +/- 0.023, N = 311.2116.7116.7578.5478.5926.73111.3948.5601. (CXX) g++ options: -O3 -lm -ldl

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-HEVC 1.5.0Tuning: 1 - Input: Bosphorus 1080pRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770048121620SE +/- 0.00, N = 315.659.239.2211.8511.939.2315.6211.841. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

Neural Magic DeepSparse

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-StreamRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770020406080100SE +/- 0.10, N = 380.8948.0947.9357.3958.0447.7880.9557.45

Stargate Digital Audio Workstation

Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRender Ratio, More Is BetterStargate Digital Audio Workstation 22.11.5Sample Rate: 192000 - Buffer Size: 512Ryzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077000.54251.0851.62752.172.7125SE +/- 0.004557, N = 32.4110401.4268981.4310091.9752501.9836601.4324542.4017551.9744481. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: SP.BRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077004K8K12K16K20KSE +/- 6.22, N = 320602.9912327.5012314.2112239.6412368.4512334.2620541.0412225.691. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-StreamRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770050100150200250SE +/- 0.15, N = 3126.52212.24212.11164.06163.05212.12126.12163.45

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-StreamRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700246810SE +/- 0.0033, N = 37.90374.71164.71456.09526.13304.71417.92846.1178

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Person Detection FP32 - Device: CPURyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077001.14082.28163.42244.56325.704SE +/- 0.01, N = 35.023.053.023.853.843.065.073.851. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-StreamRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770050100150200250SE +/- 0.11, N = 3126.31211.80212.01164.29162.58211.97126.34164.38

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-StreamRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700246810SE +/- 0.0024, N = 37.91694.72134.71676.08666.15074.71757.91516.0832

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: efficientnet-b0Ryzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077000.94281.88562.82843.77124.714SE +/- 0.00, N = 34.182.692.862.702.502.724.192.91MIN: 4.15 / MAX: 4.53MIN: 2.66 / MAX: 3.21MIN: 2.81 / MAX: 4.44MIN: 2.48 / MAX: 5.14MIN: 2.46 / MAX: 3.01MIN: 2.68 / MAX: 3.26MIN: 4.14 / MAX: 4.62MIN: 2.68 / MAX: 4.441. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Age Gender Recognition Retail 0013 FP16 - Device: CPURyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077006K12K18K24K30KSE +/- 16.00, N = 327522.2116510.7316423.7721184.2020765.7616444.4427518.7821050.501. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPURyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077000.07080.14160.21240.28320.354SE +/- 0.001519, N = 30.1894510.3131290.3122500.2517770.2410040.3147640.1887950.250483MIN: 0.17MIN: 0.29MIN: 0.29MIN: 0.22MIN: 0.22MIN: 0.3MIN: 0.17MIN: 0.221. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPURyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077000.2340.4680.7020.9361.17SE +/- 0.000920, N = 30.6252311.0399601.0397300.8110900.7949901.0361300.6261770.810815MIN: 0.56MIN: 1MIN: 1MIN: 0.74MIN: 0.73MIN: 1MIN: 0.56MIN: 0.741. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Timed MPlayer Compilation

This test times how long it takes to build the MPlayer open-source media player program. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed MPlayer Compilation 1.5Time To CompileRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700816243240SE +/- 0.08, N = 321.0835.0634.9227.5627.9335.0721.1428.23

Primesieve

Primesieve generates prime numbers using a highly optimized sieve of Eratosthenes implementation. Primesieve primarily benchmarks the CPU's L1/L2 cache performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterPrimesieve 8.0Length: 1e13Ryzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077004080120160200SE +/- 0.48, N = 3122.64203.19203.79161.06160.12203.44122.56161.021. (CXX) g++ options: -O3
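Since the description names the sieve of Eratosthenes, a plain and deliberately unoptimized Python version of the algorithm is sketched below; primesieve itself uses a segmented, wheel-factorized C++ sieve sized to the CPU's L1/L2 caches, which is why it is cache-bound rather than ALU-bound.

# Plain sieve of Eratosthenes for illustration only; primesieve's segmented
# sieve works on small cache-sized chunks instead of one big array like this.
def sieve(limit):
    is_prime = bytearray([1]) * (limit + 1)
    is_prime[0:2] = b"\x00\x00"                      # 0 and 1 are not prime
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # Cross off every multiple of p starting at p*p.
            is_prime[p * p::p] = bytearray(len(is_prime[p * p::p]))
    return [i for i, flag in enumerate(is_prime) if flag]

print(sieve(50))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]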

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed LLVM Compilation 13.0Build System: Unix MakefilesRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700130260390520650SE +/- 1.68, N = 3374.36621.55621.82503.16495.07622.39375.35511.00

Primesieve

Primesieve generates prime numbers using a highly optimized sieve of Eratosthenes implementation. Primesieve primarily benchmarks the CPU's L1/L2 cache performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterPrimesieve 8.0Length: 1e12Ryzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770048121620SE +/- 0.00, N = 310.0716.7116.7213.1713.0816.6910.0613.201. (CXX) g++ options: -O3

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested or alternatively an allmodconfig for building all possible kernel modules for the build. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Linux Kernel Compilation 6.1Build: defconfigRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770020406080100SE +/- 0.14, N = 358.8096.2296.3077.8877.6697.4059.2578.74

Build2

This test profile measures the time to bootstrap/install the build2 C++ build toolchain from source. Build2 is a cross-platform build toolchain for C/C++ code with Cargo-like functionality. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBuild2 0.13Time To CompileRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700306090120150SE +/- 0.35, N = 370.29114.28115.5493.3490.49113.8969.9192.71

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterRodinia 3.1Test: OpenMP CFD SolverRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770048121620SE +/- 0.04, N = 311.0118.1618.1914.6013.9718.1211.0314.561. (CXX) g++ options: -O2 -lOpenCL

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: BT.CRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077009K18K27K36K45KSE +/- 42.32, N = 342376.8026610.5326385.9026612.6626715.2026241.3243264.2026811.901. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMIPS, More Is Better7-Zip Compression 22.01Test: Compression RatingRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770030K60K90K120K150KSE +/- 205.24, N = 31482109017890349111375112585903341482451109141. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

GROMACS

This is a test of the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package with the water_GMX50 data. This test profile allows selecting between CPU and GPU-based GROMACS builds. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNs Per Day, More Is BetterGROMACS 2022.1Implementation: MPI CPU - Input: water_GMX50_bareRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077000.45450.9091.36351.8182.2725SE +/- 0.006, N = 32.0201.2311.2371.4651.4801.2342.0181.4681. (CXX) g++ options: -O3

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Machine Translation EN To DE FP16 - Device: CPURyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770020406080100SE +/- 0.52, N = 397.4960.2262.2479.7673.6560.2498.7880.701. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.3Tuning: Visual Quality Optimized - Input: Bosphorus 1080pRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770060120180240300SE +/- 0.32, N = 3294.30181.36180.45206.90209.43181.12293.13207.451. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: python_startupRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700246810SE +/- 0.00, N = 34.454.754.764.617.244.754.464.60
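The python_startup benchmark reports how long the Python interpreter takes to start. A crude way to approximate the same measurement is sketched below; PyPerformance applies tighter controls (warm-up, statistics, isolated environments), so the numbers will not match exactly.

# Crude estimate of Python interpreter startup time: repeatedly spawn
# `python3 -c pass` and report the best wall-clock time in milliseconds.
import subprocess
import sys
import time

best = float("inf")
for _ in range(20):
    start = time.perf_counter()
    subprocess.run([sys.executable, "-c", "pass"], check=True)
    best = min(best, time.perf_counter() - start)

print(f"python_startup ~ {best * 1000:.2f} ms")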

Timed Godot Game Engine Compilation

This test times how long it takes to compile the Godot Game Engine. Godot is a popular, open-source, cross-platform 2D/3D game engine and is built using the SCons build system and targeting the X11 platform. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Godot Game Engine Compilation 3.2.3Time To CompileRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700306090120150SE +/- 0.44, N = 369.85111.95112.8591.4190.79110.9169.5391.23

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed FFmpeg Compilation 4.4Time To CompileRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077001122334455SE +/- 0.23, N = 329.1746.8547.1738.4138.2847.2829.4938.47

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-HEVC 1.5.0Tuning: 7 - Input: Bosphorus 4KRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077001632486480SE +/- 0.03, N = 369.9144.4943.5654.6254.8443.7370.4754.681. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: IP Shapes 1D - Data Type: f32 - Engine: CPURyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077000.90841.81682.72523.63364.542SE +/- 0.01473, N = 32.517764.029414.037513.245323.104514.022152.506503.25358MIN: 2.21MIN: 3.74MIN: 3.76MIN: 2.84MIN: 2.9MIN: 3.76MIN: 2.24MIN: 2.881. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMiB/s, More Is BetterGNU RadioTest: Five Back to Back FIR FiltersRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077005001000150020002500SE +/- 16.32, N = 81322.91934.11923.92004.42128.81862.71430.42115.21. 3.10.1.1

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.3Tuning: Visual Quality Optimized - Input: Bosphorus 4KRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770020406080100SE +/- 0.02, N = 384.9053.5253.3564.4664.5253.2685.6064.381. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: yolov4 - Device: CPU - Executor: StandardRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077002004006008001000SE +/- 40.22, N = 126935466565618276596945151. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

Mobile Neural Network

MNN, the Mobile Neural Network, is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking, not any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2.1Model: SqueezeNetV1.0Ryzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077000.86921.73842.60763.47684.346SE +/- 0.017, N = 153.8632.5832.6182.5572.4202.4343.7712.559MIN: 3.79 / MAX: 5.06MIN: 2.41 / MAX: 9.64MIN: 2.59 / MAX: 4.5MIN: 2.4 / MAX: 5.3MIN: 2.38 / MAX: 13.73MIN: 2.4 / MAX: 9.97MIN: 3.69 / MAX: 4.19MIN: 2.41 / MAX: 4.851. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-HEVC 1.5.0Tuning: 7 - Input: Bosphorus 1080pRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770050100150200250SE +/- 0.12, N = 3218.50137.00137.30172.71174.52137.14217.47172.811. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

Timed Mesa Compilation

This test profile times how long it takes to compile Mesa with Meson/Ninja. To minimize build dependencies and avoid versioning conflicts, this test is just the core Mesa build without LLVM or the extra Gallium3D/Mesa drivers enabled. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Mesa Compilation 21.0Time To CompileRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077001020304050SE +/- 0.06, N = 329.0145.9246.0538.2537.6845.9728.8838.21

Mobile Neural Network

MNN, the Mobile Neural Network, is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking, not any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterMobile Neural Network 2.1Model: inception-v3Ryzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700510152025SE +/- 0.24, N = 1522.4716.3916.8715.3714.6014.1220.9215.51MIN: 22.2 / MAX: 30.15MIN: 13.9 / MAX: 24.09MIN: 16.72 / MAX: 18.95MIN: 14.16 / MAX: 25.66MIN: 14.2 / MAX: 25.71MIN: 14.05 / MAX: 15.71MIN: 20.57 / MAX: 28.87MIN: 14.26 / MAX: 35.721. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2022.3Model: Person Vehicle Bike Detection FP16 - Device: CPURyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077002004006008001000SE +/- 3.37, N = 31102.44705.10695.35834.42839.03699.471102.44822.331. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.11, Encoder Speed: 6 (Seconds, fewer is better; SE +/- 0.014, N = 3): Ryzen 9 7900 = 4.299, Ryzen 7600 AMD = 6.659, Ryzen 7600 = 6.653, Ryzen 7 7700 = 5.384, AMD 7700 = 5.288, AMD 7600 = 6.700, 7900 = 4.240, 7700 = 5.323
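
The encoder speed in these graphs corresponds to avifenc's speed setting. As a rough sketch of what is being timed (file names are placeholders, not the test profile's sample image):

import subprocess
# JPEG to AVIF conversion at encoder speed 6; input/output names are placeholders
subprocess.run(["avifenc", "--speed", "6", "input.jpg", "output.avif"], check=True)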

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the processor across a variety of cryptocurrencies. The benchmark reports the CPU mining hash speed for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3, Algorithm: Myriad-Groestl (kH/s, more is better; SE +/- 250.27, N = 3): Ryzen 9 7900 = 49110, Ryzen 7600 AMD = 33530, Ryzen 7600 = 33330, Ryzen 7 7700 = 43080, AMD 7700 = 43190, AMD 7600 = 33340, 7900 = 52490, 7700 = 43140
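
As a rough illustration of how such a hash-rate figure is produced, cpuminer-opt can be run in its built-in offline benchmark mode. The algorithm name "myr-gr" for Myriad-Groestl and the thread count below are assumptions, not values taken from this result file.

import subprocess
# offline benchmark of the Myriad-Groestl algorithm; no mining pool connection is made
subprocess.run(["cpuminer", "--algo=myr-gr", "--benchmark", "--threads=16"], check=True)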

x264

This is a multi-threaded test of the x264 video encoder run on the CPU with a choice of 1080p or 4K video input. Learn more via the OpenBenchmarking.org test page.

x264 2022-02-22, Video Input: Bosphorus 4K (Frames Per Second, more is better; SE +/- 0.31, N = 6): Ryzen 9 7900 = 47.08, Ryzen 7600 AMD = 31.89, Ryzen 7600 = 30.21, Ryzen 7 7700 = 37.27, AMD 7700 = 37.39, AMD 7600 = 30.00, 7900 = 47.13, 7700 = 37.10
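
A minimal sketch of a comparable CPU-only x264 encode is shown below; the preset and file names are assumptions rather than the exact settings used by the test profile.

import subprocess
# encode a 4K Y4M clip on the CPU; x264 picks its thread count automatically
subprocess.run(["x264", "--preset", "medium",
                "-o", "bosphorus_4k.264", "Bosphorus_3840x2160.y4m"], check=True)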

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPURyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077001.20312.40623.60934.81246.0155SE +/- 0.00039, N = 33.970485.347075.340623.918163.890955.335783.421073.90923MIN: 3.34MIN: 5.3MIN: 5.3MIN: 3.63MIN: 3.71MIN: 5.3MIN: 3.32MIN: 3.681. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPURyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077001.22792.45583.68374.91166.1395SE +/- 0.00921, N = 33.494995.437065.457284.399464.284965.416343.494244.37736MIN: 3.06MIN: 5.22MIN: 5.22MIN: 3.84MIN: 3.88MIN: 5.23MIN: 3.05MIN: 3.851. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
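
These harness names map onto oneDNN's bundled benchdnn tool. A hypothetical invocation is sketched below; the driver flag and batch-file path are assumptions and may not match the exact arguments the test profile passes.

import subprocess
# performance-mode (--mode=P) run of a deconvolution shape batch on the CPU engine
subprocess.run(["./benchdnn", "--deconv", "--mode=P",
                "--batch=inputs/deconv/shapes_3d"], check=True)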

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenFOAM 10, Input: drivaerFastback, Small Mesh Size - Execution Time (Seconds, fewer is better): Ryzen 9 7900 = 176.07, Ryzen 7600 AMD = 273.74, Ryzen 7600 = 274.56, Ryzen 7 7700 = 254.33, AMD 7700 = 251.62, AMD 7600 = 274.56, 7900 = 176.83, 7700 = 254.50

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.3Tuning: VMAF Optimized - Input: Bosphorus 1080pRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770070140210280350SE +/- 2.29, N = 3334.75220.17215.01243.98246.55215.04335.22244.061. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.3Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080pRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770080160240320400SE +/- 0.15, N = 3344.81227.72222.08252.79261.80226.78345.40254.601. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-StreamRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077003691215SE +/- 0.0166, N = 37.606311.715811.69178.03017.975111.72737.56048.0274

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-StreamRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700306090120150SE +/- 0.12, N = 3131.3885.3185.48124.44125.3085.23132.18124.49

x264

This is a multi-threaded test of the x264 video encoder run on the CPU with a choice of 1080p or 4K video input. Learn more via the OpenBenchmarking.org test page.

x264 2022-02-22, Video Input: Bosphorus 1080p (Frames Per Second, more is better; SE +/- 1.62, N = 3): Ryzen 9 7900 = 208.22, Ryzen 7600 AMD = 136.26, Ryzen 7600 = 134.66, Ryzen 7 7700 = 168.28, AMD 7700 = 166.27, AMD 7600 = 134.80, 7900 = 204.59, 7700 = 167.51

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPURyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077000.30620.61240.91861.22481.531SE +/- 0.005136, N = 30.8800931.3554801.3531001.0027000.9897941.3607800.8818270.999360MIN: 0.81MIN: 1.29MIN: 1.3MIN: 0.89MIN: 0.88MIN: 1.3MIN: 0.8MIN: 0.891. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Appleseed

Appleseed is an open-source production renderer built around a physically-based global illumination rendering engine and primarily designed for animation and visual effects. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterAppleseed 2.0 BetaScene: Material TesterRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077004080120160200124.04188.46189.54145.39144.68188.62123.05145.29

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.1Video Input: Bosphorus 4K - Video Preset: MediumRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770048121620SE +/- 0.00, N = 313.899.129.0911.3311.369.1013.9711.321. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
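
For orientation, a Kvazaar encode at the medium preset looks roughly like the sketch below; the raw input file and resolution flag are placeholders.

import subprocess
# 4K raw YUV input, medium preset, HEVC bitstream out
subprocess.run(["kvazaar", "-i", "Bosphorus_3840x2160.yuv", "--input-res", "3840x2160",
                "--preset", "medium", "-o", "bosphorus.hevc"], check=True)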

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-HEVC 1.5.0Tuning: 10 - Input: Bosphorus 1080pRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700100200300400500SE +/- 1.08, N = 3437.96286.86286.26360.58362.54286.81439.56359.071. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPURyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077000.74451.4892.23352.9783.7225SE +/- 0.00750, N = 32.167263.305143.309032.507182.462753.303422.260912.50232MIN: 2.11MIN: 3.14MIN: 3.15MIN: 2.33MIN: 2.26MIN: 3.14MIN: 2.08MIN: 2.341. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-StreamRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700612182430SE +/- 0.03, N = 315.3623.3623.3318.3118.2723.3515.3018.42

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-StreamRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077001530456075SE +/- 0.06, N = 365.0642.7942.8654.5954.7142.8265.3154.26

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: googlenetRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700246810SE +/- 0.04, N = 38.075.645.955.695.305.738.016.70MIN: 7.97 / MAX: 9.25MIN: 5.54 / MAX: 6.73MIN: 5.85 / MAX: 7.49MIN: 5.28 / MAX: 6.93MIN: 5.22 / MAX: 6.91MIN: 5.68 / MAX: 6.22MIN: 7.91 / MAX: 8.62MIN: 6.24 / MAX: 7.841. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11, Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel (Inferences Per Minute, more is better; SE +/- 3.11, N = 3): Ryzen 9 7900 = 1936, Ryzen 7600 AMD = 1304, Ryzen 7600 = 1318, Ryzen 7 7700 = 1669, AMD 7700 = 1640, AMD 7600 = 1273, 7900 = 1919, 7700 = 1670
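
The "Executor: Parallel" results correspond to ONNX Runtime's parallel execution mode. A minimal Python sketch is below; the model file name and the zero-filled input are placeholders for whatever ONNX Zoo model is being measured.

import numpy as np
import onnxruntime as ort

opts = ort.SessionOptions()
opts.execution_mode = ort.ExecutionMode.ORT_PARALLEL  # parallel executor, as in these results
sess = ort.InferenceSession("arcfaceresnet100.onnx", sess_options=opts,
                            providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]
dummy = np.zeros([d if isinstance(d, int) else 1 for d in inp.shape], dtype=np.float32)
outputs = sess.run(None, {inp.name: dummy})  # returns a list of output arrays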

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.4, Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, more is better; SE +/- 0.10, N = 3): Ryzen 9 7900 = 56.40, Ryzen 7600 AMD = 37.28, Ryzen 7600 = 37.24, Ryzen 7 7700 = 45.54, AMD 7700 = 45.91, AMD 7600 = 37.18, 7900 = 56.29, 7700 = 45.47
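
As a sketch of what "Preset 8" means in practice, an equivalent standalone SVT-AV1 encode would look roughly like this; the input clip and output name are placeholders.

import subprocess
# faster preset (8) AV1 encode of a 4K clip into an IVF bitstream
subprocess.run(["SvtAv1EncApp", "--preset", "8",
                "-i", "Bosphorus_3840x2160.y4m", "-b", "bosphorus_av1.ivf"], check=True)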

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program; on Windows it will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterDarktable 3.8.1Test: Boat - Acceleration: CPU-onlyRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077001.03162.06323.09484.12645.158SE +/- 0.018, N = 33.0564.5264.5853.7423.6674.5573.0283.785

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-HEVC 1.5.0Tuning: 10 - Input: Bosphorus 4KRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700306090120150SE +/- 0.10, N = 3134.8089.6589.26109.97110.6089.96134.92110.111. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.11Encoder Speed: 0Ryzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700306090120150SE +/- 0.10, N = 395.69142.81142.85116.73115.88143.2494.77116.801. (CXX) g++ options: -O3 -fPIC -lm

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0, Compression Level: 19 - Compression Speed (MB/s, more is better; SE +/- 0.12, N = 3): Ryzen 9 7900 = 63.8, Ryzen 7600 AMD = 43.6, Ryzen 7600 = 43.1, Ryzen 7 7700 = 52.4, AMD 7700 = 52.5, AMD 7600 = 42.8, 7900 = 64.2, 7700 = 52.6
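
A rough sketch of the level-19 compression being measured is below, using the zstd command-line tool on the disk image named in the description; -T0 lets zstd use all cores, and adding --long would enable the long-mode variants also shown in these results.

import subprocess
# level-19 compression of the FreeBSD memstick image, multi-threaded
subprocess.run(["zstd", "-19", "-T0", "-o", "memstick.img.zst",
                "FreeBSD-12.2-RELEASE-amd64-memstick.img"], check=True)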

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.4Encoder Mode: Preset 8 - Input: Bosphorus 1080pRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077004080120160200SE +/- 0.38, N = 3158.21106.89105.84129.25129.95106.01158.66129.241. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.1Video Input: Bosphorus 4K - Video Preset: Very FastRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700714212835SE +/- 0.05, N = 331.4521.1021.1426.1826.4221.1531.4826.201. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-StreamRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700714212835SE +/- 0.08, N = 321.2731.2631.4324.9624.6131.3021.1424.76

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-StreamRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077001122334455SE +/- 0.08, N = 347.0031.9931.8140.0540.6331.9447.2940.38

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.3Tuning: VMAF Optimized - Input: Bosphorus 4KRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770020406080100SE +/- 0.87, N = 393.6065.5563.8472.2572.6363.0090.7971.991. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterRodinia 3.1Test: OpenMP LeukocyteRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770020406080100SE +/- 0.15, N = 359.0281.9587.1084.9983.7986.4759.7384.651. (CXX) g++ options: -O2 -lOpenCL

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.4Encoder Mode: Preset 4 - Input: Bosphorus 4KRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077001.00532.01063.01594.02125.0265SE +/- 0.003, N = 34.4683.0583.0343.6843.6803.0424.4523.6861. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.11Encoder Speed: 6, LosslessRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077003691215SE +/- 0.014, N = 36.7659.9029.8608.1748.0509.9576.8268.1711. (CXX) g++ options: -O3 -fPIC -lm

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.3Tuning: PSNR/SSIM Optimized - Input: Bosphorus 4KRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770020406080100SE +/- 0.03, N = 3100.1469.1868.2377.3080.1568.5599.8078.291. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-StreamRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770048121620SE +/- 0.02, N = 310.8515.6815.6312.4312.3915.6410.7112.48

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-StreamRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770020406080100SE +/- 0.08, N = 392.1363.7663.9580.4480.7063.9093.3080.09

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterRodinia 3.1Test: OpenMP StreamclusterRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077003691215SE +/- 0.011, N = 38.29811.96112.06912.11411.57811.9798.43012.1271. (CXX) g++ options: -O2 -lOpenCL

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11, Model: super-resolution-10 - Device: CPU - Executor: Parallel (Inferences Per Minute, more is better; SE +/- 23.47, N = 3): Ryzen 9 7900 = 6791, Ryzen 7600 AMD = 4656, Ryzen 7600 = 4674, Ryzen 7 7700 = 5839, AMD 7700 = 5747, AMD 7600 = 4678, 7900 = 6715, 7700 = 5892

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: IP Shapes 3D - Data Type: f32 - Engine: CPURyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077001.04232.08463.12694.16925.2115SE +/- 0.00604, N = 33.179854.481294.632344.319024.174004.475343.177494.32645MIN: 3.13MIN: 4.4MIN: 4.54MIN: 4.14MIN: 4.14MIN: 4.38MIN: 3.13MIN: 4.151. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 19, Long Mode - Compression SpeedRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077001224364860SE +/- 0.03, N = 351.035.935.745.846.835.951.646.41. (CC) gcc options: -O3 -pthread -lz -llzma

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.4Encoder Mode: Preset 12 - Input: Bosphorus 4KRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077004080120160200SE +/- 0.28, N = 3183.70127.83127.22149.54152.05128.58181.98150.391. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.1Video Input: Bosphorus 4K - Video Preset: Ultra FastRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077001224364860SE +/- 0.11, N = 354.6038.1737.9646.8947.0138.0454.7146.841. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.11Encoder Speed: 2Ryzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077001530456075SE +/- 0.03, N = 347.2367.6967.8256.5755.9967.8147.1056.441. (CXX) g++ options: -O3 -fPIC -lm

Y-Cruncher

Y-Cruncher is a multi-threaded Pi benchmark capable of computing Pi to trillions of digits. Learn more via the OpenBenchmarking.org test page.

Y-Cruncher 0.7.10.9513, Pi Digits To Calculate: 1B (Seconds, fewer is better; SE +/- 0.10, N = 3): Ryzen 9 7900 = 23.30, Ryzen 7600 AMD = 32.99, Ryzen 7600 = 33.04, Ryzen 7 7700 = 28.00, AMD 7700 = 27.89, AMD 7600 = 33.08, 7900 = 23.32, 7700 = 28.03

Timed PHP Compilation

This test times how long it takes to build PHP. Learn more via the OpenBenchmarking.org test page.

Timed PHP Compilation 8.1.9, Time To Compile (Seconds, fewer is better; SE +/- 0.29, N = 3): Ryzen 9 7900 = 40.80, Ryzen 7600 AMD = 55.80, Ryzen 7600 = 56.33, Ryzen 7 7700 = 48.77, AMD 7700 = 48.34, AMD 7600 = 56.41, 7900 = 39.74, 7700 = 48.93

GPAW

GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterGPAW 22.1Input: Carbon NanotubeRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770050100150200250SE +/- 0.83, N = 3168.93235.98239.33209.84207.91237.47169.05209.081. (CC) gcc options: -shared -fwrapv -O2 -lxc -lblas -lmpi
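
Since GPAW is itself a Python package driven through ASE, a minimal carbon-nanotube calculation looks roughly like the sketch below; the chirality, length, and calculator settings are arbitrary choices, not the benchmark's actual input.

from ase.build import nanotube
from gpaw import GPAW

atoms = nanotube(6, 0, length=4)          # small (6,0) carbon nanotube segment
atoms.center(vacuum=4.0, axis=(0, 1))     # add vacuum around the non-periodic directions
atoms.calc = GPAW(mode="lcao", xc="PBE", txt="cnt.txt")
print(atoms.get_potential_energy())       # triggers the DFT calculation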

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterVP9 libvpx Encoding 1.10.0Speed: Speed 5 - Input: Bosphorus 4KRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700714212835SE +/- 0.11, N = 323.9921.9621.3129.1929.0122.3624.3630.161. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11
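
The "Speed 5" setting corresponds to libvpx's cpu-used control. A hedged sketch of an equivalent vpxenc run is below; the input file name and the use of the "good" quality deadline are assumptions.

import subprocess
# VP9 encode of a 4K clip at cpu-used 5
subprocess.run(["vpxenc", "--codec=vp9", "--good", "--cpu-used=5",
                "-o", "bosphorus_vp9.webm", "Bosphorus_3840x2160.y4m"], check=True)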

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.1Video Input: Bosphorus 1080p - Video Preset: MediumRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077001428425670SE +/- 0.06, N = 363.1644.6844.6558.7658.8644.6863.1758.561. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program; on Windows it will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterDarktable 3.8.1Test: Server Rack - Acceleration: CPU-onlyRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077000.04010.08020.12030.16040.2005SE +/- 0.001, N = 30.1260.1770.1780.1580.1480.1760.1350.156

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPURyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700246810SE +/- 0.03190, N = 35.226747.111217.062697.331747.080487.095155.215727.34668MIN: 5.15MIN: 7MIN: 6.99MIN: 6.99MIN: 6.98MIN: 7MIN: 5.15MIN: 71. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting among the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: LU.CRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770010K20K30K40K50KSE +/- 16.61, N = 344554.5331737.6131801.6943900.9144080.0831784.5244535.6043910.531. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2
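
LU.C denotes the LU pseudo-application at problem class C. With the MPI build of NPB it would be launched roughly as below; the binary path follows the usual NPB naming convention and the rank count is arbitrary.

import subprocess
# run the class-C LU benchmark across 12 MPI ranks
subprocess.run(["mpirun", "-np", "12", "./bin/lu.C.x"], check=True)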

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: bertsquad-12 - Device: CPU - Executor: ParallelRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700170340510680850SE +/- 0.60, N = 37675525506686765557726671. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.4Encoder Mode: Preset 13 - Input: Bosphorus 4KRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077004080120160200SE +/- 0.38, N = 3190.96136.23136.43161.95162.39136.85189.15162.381. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Timed Wasmer Compilation

This test times how long it takes to compile Wasmer. Wasmer is written in the Rust programming language and is a WebAssembly runtime implementation that supports WASI and Emscripten. This test profile builds Wasmer with the Cranelift and Singlepass compiler features enabled. Learn more via the OpenBenchmarking.org test page.

Timed Wasmer Compilation 2.3, Time To Compile (Seconds, fewer is better; SE +/- 0.45, N = 3): Ryzen 9 7900 = 37.93, Ryzen 7600 AMD = 50.77, Ryzen 7600 = 53.13, Ryzen 7 7700 = 44.18, AMD 7700 = 44.33, AMD 7600 = 50.81, 7900 = 37.91, 7700 = 44.70

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program; on Windows it will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterDarktable 3.8.1Test: Masskrug - Acceleration: CPU-onlyRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077000.96441.92882.89323.85764.822SE +/- 0.013, N = 33.1744.2204.2863.6523.5844.2323.0713.690

Xcompact3d Incompact3d

Xcompact3d Incompact3d is a Fortran-MPI based, finite-difference, high-performance code for solving the incompressible Navier-Stokes equations along with as many scalar transport equations as needed. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterXcompact3d Incompact3d 2021-03-11Input: input.i3d 129 Cells Per DirectionRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700510152025SE +/- 0.02, N = 315.0820.9620.9419.5619.4420.9915.0719.591. (F9X) gfortran options: -cpp -O2 -funroll-loops -floop-optimize -fcray-pointer -fbacktrace -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

Y-Cruncher

Y-Cruncher is a multi-threaded Pi benchmark capable of computing Pi to trillions of digits. Learn more via the OpenBenchmarking.org test page.

Y-Cruncher 0.7.10.9513, Pi Digits To Calculate: 500M (Seconds, fewer is better; SE +/- 0.01, N = 3): Ryzen 9 7900 = 10.85, Ryzen 7600 AMD = 15.05, Ryzen 7600 = 15.04, Ryzen 7 7700 = 12.77, AMD 7700 = 12.68, AMD 7600 = 15.08, 7900 = 10.87, 7700 = 12.72

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPURyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077000.35630.71261.06891.42521.7815SE +/- 0.00075, N = 31.144431.580541.574921.272091.234961.583601.143261.27245MIN: 1.01MIN: 1.49MIN: 1.5MIN: 1.16MIN: 1.13MIN: 1.52MIN: 1.05MIN: 1.161. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

GIMP

GIMP is an open-source image manipulation program. This test profile will use the system-provided GIMP program; on Windows it relies upon a pre-packaged binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterGIMP 2.10.30Test: resizeRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077003691215SE +/- 0.062, N = 313.60210.30610.41710.0339.86610.42113.30610.183

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB and allows selecting among the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: FT.CRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077005K10K15K20K25KSE +/- 13.02, N = 324557.8518180.5818217.4323930.1723901.6918201.5524918.3424004.741. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: CG.CRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077003K6K9K12K15KSE +/- 33.87, N = 311860.088739.838923.129553.459511.868741.3611923.739535.431. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: fcn-resnet101-11 - Device: CPU - Executor: ParallelRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770020406080100SE +/- 0.00, N = 31068079949479107951. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.1Video Input: Bosphorus 1080p - Video Preset: Ultra FastRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770050100150200250SE +/- 0.22, N = 3215.98160.28160.21201.88202.80160.45216.49201.631. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Timed GDB GNU Debugger Compilation

This test times how long it takes to build the GNU Debugger (GDB) in a default configuration. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed GDB GNU Debugger Compilation 10.2Time To CompileRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077001224364860SE +/- 0.08, N = 338.6751.5551.9144.5744.0851.4938.6144.60

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPURyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700246810SE +/- 0.00408, N = 35.703657.380147.398687.639937.307687.376995.726417.64763MIN: 5.58MIN: 7.29MIN: 7.3MIN: 7.21MIN: 7.2MIN: 7.29MIN: 5.61MIN: 7.231. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Machine Translation EN To DE FP16 - Device: CPURyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077001530456075SE +/- 0.56, N = 361.5366.4064.2450.1454.2766.3760.7149.54MIN: 51.95 / MAX: 117.17MIN: 52.54 / MAX: 72.87MIN: 35.74 / MAX: 76.68MIN: 39.21 / MAX: 63.67MIN: 43.38 / MAX: 65.85MIN: 56.61 / MAX: 70.47MIN: 49.57 / MAX: 69.26MIN: 31.93 / MAX: 62.291. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterVP9 libvpx Encoding 1.10.0Speed: Speed 0 - Input: Bosphorus 1080pRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700612182430SE +/- 0.06, N = 321.3917.2017.2322.4522.9717.3021.1722.491. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

OpenBenchmarking.orgFrames Per Second, More Is BetterVP9 libvpx Encoding 1.10.0Speed: Speed 0 - Input: Bosphorus 4KRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077003691215SE +/- 0.02, N = 310.288.428.3210.9911.108.6110.4110.961. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the processor across a variety of cryptocurrencies. The benchmark reports the CPU mining hash speed for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgkH/s, More Is BetterCpuminer-Opt 3.20.3Algorithm: GarlicoinRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770010002000300040005000SE +/- 39.25, N = 34390.503598.293534.184553.534711.293579.504566.154593.061. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenFOAM 10Input: drivaerFastback, Small Mesh Size - Mesh TimeRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770081624324026.9435.4535.2931.5931.2335.4726.6931.411. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Vehicle Detection FP16 - Device: CPURyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077003691215SE +/- 0.02, N = 38.9110.7810.668.148.6210.678.928.12MIN: 5.72 / MAX: 19.34MIN: 4.84 / MAX: 21.35MIN: 5.63 / MAX: 25.42MIN: 5.28 / MAX: 22.47MIN: 4.21 / MAX: 29.5MIN: 6.39 / MAX: 19.07MIN: 4.89 / MAX: 19.33MIN: 4.51 / MAX: 22.011. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

nekRS

nekRS is an open-source Navier-Stokes solver based on the spectral element method. nekRS supports both CPU and GPU/accelerator execution, though this test profile is currently configured for CPU execution. nekRS is part of Nek5000 from the Mathematics and Computer Science (MCS) division at Argonne National Laboratory. This nekRS benchmark is primarily relevant to large-core-count HPC servers and otherwise may be very time consuming. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFLOP/s, More Is BetternekRS 22.0Input: TurboPipe PeriodicRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770014000M28000M42000M56000M70000MSE +/- 29429369.31, N = 365974500000498086333334971750000055548200000560084000004973560000064877800000555243000001. (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -lmpi_cxx -lmpi

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: mobilenetRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700246810SE +/- 0.00, N = 38.406.897.137.126.546.908.677.39MIN: 8.31 / MAX: 8.86MIN: 6.85 / MAX: 7.4MIN: 7.05 / MAX: 8.48MIN: 6.65 / MAX: 8.21MIN: 6.46 / MAX: 8.01MIN: 6.86 / MAX: 7.51MIN: 8.58 / MAX: 9.22MIN: 6.93 / MAX: 8.471. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 2021.01.31Threads: 8 - Buffer Length: 256 - Filter Length: 57Ryzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700160M320M480M640M800MSE +/- 6183407.06, N = 47532500005732975005857700007595500007593100005835000007394300007563400001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Face Detection FP16-INT8 - Device: CPURyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770080160240320400SE +/- 0.45, N = 3330.58386.64379.73294.43292.09386.47331.39293.37MIN: 304.74 / MAX: 342.3MIN: 366.98 / MAX: 393.57MIN: 333.61 / MAX: 390.72MIN: 255.5 / MAX: 313.05MIN: 151.29 / MAX: 340.73MIN: 369.88 / MAX: 390.09MIN: 290.18 / MAX: 342.26MIN: 221.79 / MAX: 3121. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.4Encoder Mode: Preset 4 - Input: Bosphorus 1080pRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077003691215SE +/- 0.02, N = 313.4910.3010.2312.1012.1310.3013.4612.111. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.4Encoder Mode: Preset 12 - Input: Bosphorus 1080pRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700150300450600750SE +/- 2.69, N = 3649.49524.24522.04615.79616.10528.45687.93609.821. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 3 - Compression SpeedRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770013002600390052006500SE +/- 11.07, N = 36289.14776.64863.45140.05126.44772.96230.75055.31. (CC) gcc options: -O3 -pthread -lz -llzma

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: yolov4 - Device: CPU - Executor: ParallelRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700110220330440550SE +/- 0.17, N = 34983813824454423784954481. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Vehicle Detection FP16-INT8 - Device: CPURyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077001.32752.6553.98255.316.6375SE +/- 0.05, N = 35.155.905.784.544.485.805.154.55MIN: 3.05 / MAX: 67.86MIN: 3.49 / MAX: 14.2MIN: 3.69 / MAX: 20.19MIN: 2.81 / MAX: 17.18MIN: 2.84 / MAX: 12.69MIN: 3.43 / MAX: 7.38MIN: 3.14 / MAX: 13.79MIN: 2.74 / MAX: 17.331. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Weld Porosity Detection FP16 - Device: CPURyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700246810SE +/- 0.05, N = 36.537.577.575.825.777.596.535.81MIN: 3.38 / MAX: 14.89MIN: 3.89 / MAX: 15.51MIN: 3.93 / MAX: 20.27MIN: 3.02 / MAX: 17.51MIN: 3.07 / MAX: 13.01MIN: 3.97 / MAX: 15.26MIN: 3.39 / MAX: 14.56MIN: 3.21 / MAX: 17.521. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

Kvazaar

This is a test of Kvazaar as a CPU-based H.265/HEVC video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.1Video Input: Bosphorus 1080p - Video Preset: Very FastRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700306090120150SE +/- 0.03, N = 3118.7191.3091.18111.56111.6391.22118.78111.461. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-StreamRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700510152025SE +/- 0.03, N = 318.2521.8621.7816.9716.8621.8618.3216.99

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Face Detection FP16 - Device: CPURyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700160320480640800SE +/- 1.54, N = 3651.71746.94746.14581.34576.13746.15651.18578.82MIN: 621.09 / MAX: 672.91MIN: 714.17 / MAX: 765.41MIN: 723.99 / MAX: 764.64MIN: 454.21 / MAX: 613.21MIN: 505.33 / MAX: 597.67MIN: 724.14 / MAX: 766.22MIN: 573.67 / MAX: 673.86MIN: 394.1 / MAX: 612.371. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

Neural Magic DeepSparse

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.1Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-StreamRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077001326395265SE +/- 0.07, N = 354.7845.7445.9158.9159.3045.7454.5858.84

Darktable

Darktable is an open-source photography workflow application. This test will use any system-installed Darktable program; on Windows it will automatically download the pre-built binary from the project. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterDarktable 3.8.1Test: Server Room - Acceleration: CPU-onlyRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077000.73891.47782.21672.95563.6945SE +/- 0.036, N = 32.5393.2433.2842.9022.8313.2502.5762.894

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: squeezenet_ssdRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077003691215SE +/- 0.01, N = 311.739.129.2010.029.099.1711.7410.33MIN: 11.48 / MAX: 12.69MIN: 8.99 / MAX: 9.91MIN: 8.98 / MAX: 10.63MIN: 9.18 / MAX: 19.45MIN: 8.76 / MAX: 10.49MIN: 9.02 / MAX: 9.75MIN: 11.43 / MAX: 21.52MIN: 9.5 / MAX: 11.461. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Person Detection FP16 - Device: CPURyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770030060090012001500SE +/- 5.45, N = 31159.381309.261303.001032.671016.681293.341153.461015.27MIN: 1036.64 / MAX: 1295.05MIN: 749.1 / MAX: 1427.46MIN: 1221.08 / MAX: 1432.12MIN: 975.43 / MAX: 1129.72MIN: 897.5 / MAX: 1118.4MIN: 914.71 / MAX: 1411.42MIN: 731.93 / MAX: 1335.35MIN: 718.11 / MAX: 1124.231. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: resnet18Ryzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700246810SE +/- 0.09, N = 37.135.655.686.035.785.567.166.96MIN: 6.99 / MAX: 7.93MIN: 5.5 / MAX: 6.42MIN: 5.51 / MAX: 7.09MIN: 5.61 / MAX: 7.05MIN: 5.57 / MAX: 12.93MIN: 5.51 / MAX: 6.33MIN: 7 / MAX: 7.92MIN: 6.51 / MAX: 8.091. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterRodinia 3.1Test: OpenMP HotSpot3DRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077001530456075SE +/- 0.87, N = 358.2066.5762.4753.9353.7657.1152.0761.631. (CXX) g++ options: -O2 -lOpenCL

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.4Encoder Mode: Preset 13 - Input: Bosphorus 1080pRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700160320480640800SE +/- 0.75, N = 3728.99578.51579.19670.53677.96580.00739.51668.091. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.0Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPURyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077000.16920.33840.50760.67680.846SE +/- 0.000677, N = 30.5952110.7519350.7500020.6002440.5886390.7519070.5906330.599888MIN: 0.54MIN: 0.73MIN: 0.72MIN: 0.55MIN: 0.56MIN: 0.73MIN: 0.54MIN: 0.561. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenVINO

This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Person Detection FP32 - Device: CPURyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770030060090012001500SE +/- 6.34, N = 31190.801306.121315.781032.401039.001298.341179.601034.06MIN: 1049.79 / MAX: 1286.08MIN: 773.95 / MAX: 1444.66MIN: 728.56 / MAX: 1437.75MIN: 820.72 / MAX: 1126.15MIN: 913.61 / MAX: 1111.36MIN: 960.85 / MAX: 1409.2MIN: 1028.22 / MAX: 1530.8MIN: 699.51 / MAX: 1128.891. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

Timed CPython Compilation

This test times how long it takes to build the reference Python implementation, CPython, with optimizations and LTO enabled for a release build. Learn more via the OpenBenchmarking.org test page.

Timed CPython Compilation 3.10.6, Build Configuration: Default (Seconds, fewer is better): Ryzen 9 7900 = 13.58, Ryzen 7600 AMD = 16.88, Ryzen 7600 = 16.98, Ryzen 7 7700 = 15.18, AMD 7700 = 14.94, AMD 7600 = 16.80, 7900 = 13.46, 7700 = 15.18
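
As a sketch of the optimizations-plus-LTO configuration mentioned in the description (the "Default" configuration graphed here presumably omits them), a CPython release build would be driven roughly as below; the flags and job count are illustrative, not the test profile's exact arguments.

import subprocess
# configure CPython with profile-guided optimizations and link-time optimization, then build
subprocess.run(["./configure", "--enable-optimizations", "--with-lto"], check=True)
subprocess.run(["make", "-j", "16"], check=True)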

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance with x265. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is Betterx265 3.4Video Input: Bosphorus 1080pRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770020406080100SE +/- 0.55, N = 3102.8482.5782.6187.6088.8683.82103.7387.561. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.11Encoder Speed: 10, LosslessRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077000.97921.95842.93763.91684.896SE +/- 0.019, N = 33.6214.3254.3413.8803.8974.3523.5013.8721. (CXX) g++ options: -O3 -fPIC -lm

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 8, Long Mode - Compression Speed (MB/s, More Is Better; SE +/- 2.90, N = 3)
Ryzen 9 7900: 1636.7 | Ryzen 7600 AMD: 1326.3 | Ryzen 7600: 1320.8 | Ryzen 7 7700: 1443.8 | AMD 7700: 1454.3 | AMD 7600: 1319.1 | 7900: 1637.9 | 7700: 1445.1
1. (CC) gcc options: -O3 -pthread -lz -llzma
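
The compression levels and long mode in these results map onto zstd's C API; below is a minimal one-shot round trip at level 8, with a synthetic buffer standing in for the FreeBSD disk image the test actually uses. Long mode itself would additionally require the advanced API (ZSTD_CCtx_setParameter with ZSTD_c_enableLongDistanceMatching), which is omitted here.

// One-shot zstd compress/decompress round trip at level 8 (synthetic input data).
#include <zstd.h>
#include <cstdio>
#include <string>
#include <vector>

int main() {
    std::string input(1 << 20, 'a');   // 1 MiB of dummy, highly compressible data
    std::vector<char> compressed(ZSTD_compressBound(input.size()));

    size_t csize = ZSTD_compress(compressed.data(), compressed.size(),
                                 input.data(), input.size(),
                                 8 /* compression level, as in "Level: 8" above */);
    if (ZSTD_isError(csize)) return 1;

    std::vector<char> restored(input.size());
    size_t dsize = ZSTD_decompress(restored.data(), restored.size(),
                                   compressed.data(), csize);
    if (ZSTD_isError(dsize)) return 1;

    std::printf("compressed %zu -> %zu bytes\n", input.size(), csize);
}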

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20220729 - Target: CPU - Model: yolov4-tiny (ms, Fewer Is Better; SE +/- 0.12, N = 3)
Ryzen 9 7900: 13.70 (MIN 13.55 / MAX 14.36) | Ryzen 7600 AMD: 12.03 (MIN 11.74 / MAX 13.04) | Ryzen 7600: 12.34 (MIN 11.85 / MAX 14.12) | Ryzen 7 7700: 12.06 (MIN 11.25 / MAX 14.07) | AMD 7700: 11.40 (MIN 11.18 / MAX 13.15) | AMD 7600: 12.19 (MIN 11.76 / MAX 12.93) | 7900: 14.09 (MIN 13.9 / MAX 14.24) | 7700: 13.12 (MIN 12.22 / MAX 14.69)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
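
ncnn inference generally follows a load-param/load-model/Extractor pattern; the sketch below shows that flow under stated assumptions. The file names, the "data"/"output" blob names, the thread count, and the dummy 224x224x3 input are placeholders that differ for a converted model such as yolov4-tiny.

// Minimal ncnn inference sketch (model files, blob names and shapes are placeholders).
#include <ncnn/net.h>   // header path varies by installation (often just "net.h")

int main() {
    ncnn::Net net;
    net.opt.num_threads = 8;          // ncnn is tuned for multi-threaded CPU inference
    net.load_param("model.param");    // network structure
    net.load_model("model.bin");      // weights

    // Dummy input; a real pipeline would build this from image pixels with
    // ncnn::Mat::from_pixels_resize() and then normalize it.
    ncnn::Mat in(224, 224, 3);
    in.fill(0.0f);

    ncnn::Extractor ex = net.create_extractor();
    ex.input("data", in);
    ncnn::Mat out;
    ex.extract("output", out);
    return 0;
}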

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 9.12-MR1Java Test: TradebeansRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700400800120016002000SE +/- 19.03, N = 2016761461142414071403135916551371

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: resnet50Ryzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077003691215SE +/- 0.04, N = 312.0911.0611.1110.349.9111.1112.2011.52MIN: 11.98 / MAX: 13.18MIN: 10.94 / MAX: 11.76MIN: 10.95 / MAX: 12.56MIN: 9.67 / MAX: 12.24MIN: 9.79 / MAX: 11.38MIN: 11.03 / MAX: 11.66MIN: 12.02 / MAX: 13.2MIN: 10.79 / MAX: 13.361. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: vision_transformerRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770020406080100SE +/- 0.04, N = 390.21110.17110.67100.4399.54110.1093.50100.13MIN: 89.92 / MAX: 96.8MIN: 109.46 / MAX: 111.5MIN: 109.86 / MAX: 120.66MIN: 98.86 / MAX: 105.15MIN: 98.35 / MAX: 108.29MIN: 109.45 / MAX: 116.06MIN: 93.19 / MAX: 97.76MIN: 98.97 / MAX: 104.211. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program; on Windows it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterGIMP 2.10.30Test: unsharp-maskRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077003691215SE +/- 0.02, N = 312.6310.8010.7410.5410.3110.8012.3810.67

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMiB/s, More Is BetterGNU RadioTest: Signal Source (Cosine)Ryzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770013002600390052006500SE +/- 6.70, N = 85162.05865.65846.26088.96084.95870.75014.06105.91. 3.10.1.1

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Person Vehicle Bike Detection FP16 - Device: CPURyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077001.29382.58763.88145.17526.469SE +/- 0.03, N = 35.445.675.754.794.765.715.444.86MIN: 3.81 / MAX: 14.78MIN: 4.46 / MAX: 8.96MIN: 4.12 / MAX: 15.9MIN: 3.62 / MAX: 17.47MIN: 3.73 / MAX: 12.41MIN: 4.14 / MAX: 17.41MIN: 3.77 / MAX: 13.94MIN: 3.43 / MAX: 7.381. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: MG.CRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077005K10K15K20K25KSE +/- 5.90, N = 325644.0321442.6721396.5923843.5124079.9921352.0125610.9623860.311. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Age Gender Recognition Retail 0013 FP16 - Device: CPURyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077000.09680.19360.29040.38720.484SE +/- 0.00, N = 30.430.360.360.370.380.360.430.38MIN: 0.26 / MAX: 8.94MIN: 0.22 / MAX: 8.48MIN: 0.22 / MAX: 12.84MIN: 0.23 / MAX: 12.31MIN: 0.24 / MAX: 2.35MIN: 0.22 / MAX: 7.98MIN: 0.26 / MAX: 9.51MIN: 0.23 / MAX: 131. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 9.12-MR1Java Test: TradesoapRyzen 9 7900Ryzen 7600 AMDRyzen 7600AMD 7700AMD 7600790077005001000150020002500SE +/- 19.27, N = 41880224222012128215419362015

Java Test: Tradesoap

Ryzen 7 7700: The test quit with a non-zero exit status.

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-StreamRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077001632486480SE +/- 0.12, N = 374.1562.3862.5769.6868.9162.7774.1069.61

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11 - Model: GPT-2 - Device: CPU - Executor: Standard (Inferences Per Minute, More Is Better; SE +/- 114.75, N = 12)
Ryzen 9 7900: 9748 | Ryzen 7600 AMD: 8932 | Ryzen 7600: 8623 | Ryzen 7 7700: 8444 | AMD 7700: 9254 | AMD 7600: 9204 | 7900: 9985 | 7700: 8431
1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
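
The Standard/Parallel executor split in these ONNX Runtime results is a session-level choice; a bare-bones C++ session setup is sketched below. The model path ("gpt2.onnx") and thread count are placeholders, and the mapping of the test's "Executor" option onto ExecutionMode is an assumption rather than something stated in the result file.

// ONNX Runtime C++ session setup sketch (model path and thread count are placeholders).
#include <onnxruntime_cxx_api.h>

int main() {
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "ort-bench");

    Ort::SessionOptions opts;
    opts.SetIntraOpNumThreads(16);
    opts.SetGraphOptimizationLevel(GraphOptimizationLevel::ORT_ENABLE_ALL);
    // ORT_SEQUENTIAL is the default executor; ORT_PARALLEL enables the parallel
    // executor, presumably what the "Executor: Parallel" runs exercise.
    opts.SetExecutionMode(ExecutionMode::ORT_SEQUENTIAL);

    Ort::Session session(env, "gpt2.onnx", opts);
    // Inference itself would call session.Run() with input/output tensor names and
    // Ort::Value tensors created via Ort::Value::CreateTensor().
    return 0;
}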

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: SP.CRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077003K6K9K12K15KSE +/- 8.95, N = 312761.8811043.4811031.3410954.1111060.8011044.4612794.7510989.411. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-StreamRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077001632486480SE +/- 0.31, N = 370.3360.2960.4163.6763.5060.3370.3264.17

VP9 libvpx Encoding

This is a standard video encoding performance test of Google's libvpx library and the vpxenc command for the VP9 video format. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterVP9 libvpx Encoding 1.10.0Speed: Speed 5 - Input: Bosphorus 1080pRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077001224364860SE +/- 0.09, N = 346.8352.6852.0353.3854.4352.0446.8854.491. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=gnu++11

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPURyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077000.16430.32860.49290.65720.8215SE +/- 0.00, N = 30.720.630.630.640.630.630.730.64MIN: 0.4 / MAX: 8.98MIN: 0.34 / MAX: 8.02MIN: 0.34 / MAX: 12.67MIN: 0.37 / MAX: 13.2MIN: 0.38 / MAX: 8.56MIN: 0.39 / MAX: 1.84MIN: 0.39 / MAX: 9.37MIN: 0.36 / MAX: 12.271. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Weld Porosity Detection FP16-INT8 - Device: CPURyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700246810SE +/- 0.01, N = 36.545.685.675.845.845.686.555.84MIN: 3.68 / MAX: 16.14MIN: 3.14 / MAX: 13.67MIN: 3.77 / MAX: 11.12MIN: 3.11 / MAX: 18.54MIN: 3.13 / MAX: 13.02MIN: 3.04 / MAX: 13.81MIN: 3.66 / MAX: 14.89MIN: 3.2 / MAX: 17.681. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-StreamRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770020406080100SE +/- 0.16, N = 396.8984.2684.0187.6786.9284.2296.5586.90

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: IS.DRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770030060090012001500SE +/- 6.92, N = 31464.481322.961304.941428.571401.101299.851498.341452.371. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.2

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterTNN 0.3Target: CPU - Model: SqueezeNet v2Ryzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077001122334455SE +/- 0.35, N = 1542.3747.6543.9242.0642.3343.9241.5242.33MIN: 42.26 / MAX: 42.44MIN: 43.99 / MAX: 49.87MIN: 43.87 / MAX: 44.14MIN: 41.98 / MAX: 42.14MIN: 42.3 / MAX: 42.43MIN: 43.9 / MAX: 44MIN: 41.17 / MAX: 42.01MIN: 42.27 / MAX: 42.41. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 9.12-MR1Java Test: H2Ryzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700400800120016002000SE +/- 32.56, N = 2017911871188119421877170017481828

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-StreamRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077001122334455SE +/- 0.03, N = 347.1242.0041.7643.4643.2642.2047.4743.38

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.2.4 - Encode Settings: Default (MP/s, More Is Better; SE +/- 0.03, N = 3)
Ryzen 9 7900: 24.00 | Ryzen 7600 AMD: 25.83 | Ryzen 7600: 25.97 | Ryzen 7 7700: 26.94 | AMD 7700: 26.94 | AMD 7600: 25.95 | 7900: 27.24 | 7700: 26.94
1. (CC) gcc options: -fvisibility=hidden -O2 -lm
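
cwebp's default settings reduce to a single call in libwebp's simple encoding API; a hedged sketch follows, with a small synthetic RGB buffer standing in for the 6000x4000 JPEG input and quality 75 assumed as the cwebp default. The lossless results further down would instead use WebPEncodeLosslessRGB().

// libwebp simple-API encode of a synthetic RGB buffer (input image is a placeholder).
#include <webp/encode.h>
#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
    const int width = 640, height = 480;
    std::vector<uint8_t> rgb(width * height * 3, 128);   // flat grey test image

    uint8_t* output = nullptr;
    size_t size = WebPEncodeRGB(rgb.data(), width, height,
                                width * 3 /* stride in bytes */,
                                75.0f /* quality factor */, &output);
    if (size == 0) return 1;

    std::printf("encoded %zu bytes of WebP\n", size);
    WebPFree(output);   // release the buffer allocated by the encoder
}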

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-StreamRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700714212835SE +/- 0.02, N = 329.8426.4326.3427.3827.2426.3529.6727.31

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL libjxl 0.7Input: JPEG - Quality: 100Ryzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077000.21150.4230.63450.8461.0575SE +/- 0.02, N = 60.860.850.940.870.880.830.920.911. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

Algebraic Multi-Grid Benchmark

AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. The driver provided with AMG builds linear systems for various 3-dimensional problems. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFigure Of Merit, More Is BetterAlgebraic Multi-Grid Benchmark 1.2Ryzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770090M180M270M360M450MSE +/- 42003.58, N = 34356400003926537333925822003859116003915619003919451004336033003857551001. (CC) gcc options: -lparcsr_ls -lparcsr_mv -lseq_mv -lIJ_mv -lkrylov -lHYPRE_utilities -lm -fopenmp -lmpi

Timed Apache Compilation

This test times how long it takes to build the Apache HTTPD web server. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Apache Compilation 2.4.41Time To CompileRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770048121620SE +/- 0.01, N = 314.0415.7715.6814.6914.6215.8414.1014.71

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-StreamRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700150300450600750SE +/- 0.30, N = 3704.98630.99630.90651.97649.26630.55705.41651.14

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNodes Per Second, More Is BetterLeelaChessZero 0.28Backend: EigenRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700400800120016002000SE +/- 14.96, N = 9165014951499154815401475160715421. (CXX) g++ options: -flto -pthread

Neural Magic DeepSparse

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.1Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-StreamRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700150300450600750SE +/- 0.22, N = 3703.55630.93630.50650.70647.74629.80703.74650.21

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL libjxl 0.7Input: JPEG - Quality: 90Ryzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077003691215SE +/- 0.01, N = 311.7810.9310.9411.4311.6110.7612.0011.571. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL libjxl 0.7Input: PNG - Quality: 90Ryzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077003691215SE +/- 0.00, N = 312.0911.2111.2211.7611.9211.0712.3211.901. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL libjxl 0.7Input: JPEG - Quality: 80Ryzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077003691215SE +/- 0.02, N = 311.9811.1311.1211.6111.8210.9712.2011.801. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: vgg16Ryzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700612182430SE +/- 0.03, N = 324.2425.8525.9125.4324.3525.8824.2026.91MIN: 23.97 / MAX: 25.1MIN: 25.64 / MAX: 31.08MIN: 25.66 / MAX: 27.54MIN: 24.43 / MAX: 27.56MIN: 24.09 / MAX: 34.85MIN: 25.66 / MAX: 26.69MIN: 23.94 / MAX: 25.17MIN: 25.94 / MAX: 29.021. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program; on Windows it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterGIMP 2.10.30Test: auto-levelsRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077003691215SE +/- 0.012, N = 310.2799.8099.7799.4889.2479.80110.1509.522

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL libjxl 0.7Input: PNG - Quality: 80Ryzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077003691215SE +/- 0.01, N = 312.2711.4111.4411.9612.1111.2812.5312.091. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 2.0 - Throughput Test: TopTweet (GB/s, More Is Better; SE +/- 0.02, N = 3)
Ryzen 9 7900: 10.10 | Ryzen 7600 AMD: 9.51 | Ryzen 7600: 9.11 | Ryzen 7 7700: 9.92 | AMD 7700: 9.95 | AMD 7600: 9.50 | 7900: 10.06 | 7700: 9.86
1. (CXX) g++ options: -O3
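
These throughput numbers come from parsing canned JSON documents; simdjson's recommended On-Demand API looks roughly like the sketch below. The file name and the field being read are placeholders patterned on the twitter.json-style inputs the benchmark uses.

// simdjson On-Demand parsing sketch (file name and field lookups are placeholders).
#include "simdjson.h"
#include <cstdint>
#include <iostream>

int main() {
    simdjson::ondemand::parser parser;
    simdjson::padded_string json = simdjson::padded_string::load("twitter.json");
    simdjson::ondemand::document doc = parser.iterate(json);

    // Pull one field; in this usage style, errors surface as exceptions.
    uint64_t count = doc["search_metadata"]["count"].get_uint64();
    std::cout << "count = " << count << "\n";
}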

Ngspice

Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterNgspice 34Circuit: C7552Ryzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077001632486480SE +/- 0.33, N = 365.9573.0872.8771.9367.7871.9766.5970.911. (CC) gcc options: -O0 -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lXft -lfontconfig -lXrender -lfreetype -lSM -lICE

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgInferences Per Minute, More Is BetterONNX Runtime 1.11Model: GPT-2 - Device: CPU - Executor: ParallelRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077002K4K6K8K10KSE +/- 23.08, N = 3719876837745776577837679702574891. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt

Timed CPython Compilation

This test times how long it takes to build the reference Python implementation, CPython, with optimizations and LTO enabled for a release build. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed CPython Compilation 3.10.6Build Configuration: Released Build, PGO + LTO OptimizedRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077004080120160200183.17201.93202.76192.75190.87201.95183.09192.36

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 9.12-MR1Java Test: JythonRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077005001000150020002500SE +/- 11.02, N = 423482538253823602540254022942459

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 3, Long Mode - Compression SpeedRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700400800120016002000SE +/- 3.73, N = 31824.81942.81936.92009.02012.91937.11825.42007.31. (CC) gcc options: -O3 -pthread -lz -llzma

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20220729Target: CPU - Model: alexnetRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077001.05082.10163.15244.20325.254SE +/- 0.00, N = 34.624.244.434.664.254.244.624.67MIN: 4.55 / MAX: 5.51MIN: 4.21 / MAX: 4.84MIN: 4.39 / MAX: 5MIN: 4.36 / MAX: 5.78MIN: 4.19 / MAX: 5.65MIN: 4.22 / MAX: 4.83MIN: 4.55 / MAX: 5.18MIN: 4.37 / MAX: 5.861. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: crypto_pyaesRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077001530456075SE +/- 0.18, N = 360.463.666.160.660.263.060.662.3

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMiB/s, More Is BetterGNU RadioTest: FIR FilterRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770030060090012001500SE +/- 3.50, N = 81402.11436.81436.21508.71498.31448.51374.71485.91. 3.10.1.1

OpenBenchmarking.orgMiB/s, More Is BetterGNU RadioTest: Hilbert TransformRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077002004006008001000SE +/- 5.10, N = 8716.6738.4709.6760.3775.5725.5741.4759.81. 3.10.1.1

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterTNN 0.3Target: CPU - Model: DenseNetRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077005001000150020002500SE +/- 0.40, N = 32112.152235.312237.082069.752066.072228.872105.732067.23MIN: 2081.33 / MAX: 2147.82MIN: 2217.92 / MAX: 2257.35MIN: 2217.88 / MAX: 2258.4MIN: 2024.37 / MAX: 2112.67MIN: 2022.59 / MAX: 2107.46MIN: 2216.04 / MAX: 2250MIN: 2070.42 / MAX: 2146.74MIN: 2021.19 / MAX: 2109.881. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 8, Long Mode - Decompression SpeedRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770014002800420056007000SE +/- 3.15, N = 36696.26208.36200.66336.66347.16188.26404.16311.61. (CC) gcc options: -O3 -pthread -lz -llzma

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 2021.01.31Threads: 2 - Buffer Length: 256 - Filter Length: 57Ryzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770040M80M120M160M200MSE +/- 1746140.93, N = 71923100001968728571999600002070400002057600001975100002075200002080100001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 3, Long Mode - Decompression SpeedRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770014002800420056007000SE +/- 67.24, N = 36423.96086.86167.66083.36118.55943.06167.26098.41. (CC) gcc options: -O3 -pthread -lz -llzma

Ngspice

Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterNgspice 34Circuit: C2670Ryzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770020406080100SE +/- 0.07, N = 375.5379.9879.7980.1977.6179.5574.2579.401. (CC) gcc options: -O0 -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lXft -lfontconfig -lXrender -lfreetype -lSM -lICE

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: nbodyRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770020406080100SE +/- 1.20, N = 378.483.579.979.477.581.977.479.6

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgScore, More Is BetterPHPBench 0.8.1PHP Benchmark SuiteRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700300K600K900K1200K1500KSE +/- 5632.48, N = 311869951143225115424012180191184135114953512278891233198

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterWebP Image Encode 1.2.4Encode Settings: Quality 100, LosslessRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077000.49730.99461.49191.98922.4865SE +/- 0.00, N = 32.212.092.052.172.182.082.192.211. (CC) gcc options: -fvisibility=hidden -O2 -lm

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 19 - Decompression SpeedRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770012002400360048006000SE +/- 7.45, N = 35436.15074.25058.05185.25220.65072.45452.45364.11. (CC) gcc options: -O3 -pthread -lz -llzma

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 8 - Decompression SpeedRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770013002600390052006500SE +/- 7.12, N = 36049.25834.26021.36188.36198.95826.36277.15956.51. (CC) gcc options: -O3 -pthread -lz -llzma

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNodes Per Second, More Is BetterLeelaChessZero 0.28Backend: BLASRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770030060090012001500SE +/- 21.50, N = 3162315141520153515191554160115881. (CXX) g++ options: -flto -pthread

JPEG XL Decoding libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is suited for JPEG XL decode performance testing to a PNG output file; the pts/jpexl test is for encode performance. The JPEG XL encoding/decoding is done using the libjxl codebase. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL Decoding libjxl 0.7CPU Threads: 1Ryzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077001632486480SE +/- 0.09, N = 371.3669.9469.3670.4471.9067.9372.6171.36

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 19, Long Mode - Decompression SpeedRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770012002400360048006000SE +/- 54.15, N = 35176.65133.05039.85186.45130.45239.25206.55379.01. (CC) gcc options: -O3 -pthread -lz -llzma

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgGB/s, More Is Bettersimdjson 2.0Throughput Test: DistinctUserIDRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077003691215SE +/- 0.11, N = 39.969.349.469.779.839.409.729.851. (CXX) g++ options: -O3

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 2021.01.31Threads: 1 - Buffer Length: 256 - Filter Length: 57Ryzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770020M40M60M80M100MSE +/- 98206.13, N = 3106460000100353333100010000104260000104300000999000001054400001043500001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

QuantLib

QuantLib is an open-source library/framework around quantitative finance for modeling, trading and risk management scenarios. QuantLib is written in C++ with Boost, and its built-in benchmark reports the QuantLib Benchmark Index score. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMFLOPS, More Is BetterQuantLib 1.21Ryzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077009001800270036004500SE +/- 40.10, N = 63971.24180.63966.64098.84090.73927.24141.94131.81. (CXX) g++ options: -O3 -march=native -rdynamic

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: goRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700306090120150SE +/- 0.00, N = 3125132133126127132125128

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMiB/s, More Is BetterGNU RadioTest: IIR FilterRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700120240360480600SE +/- 3.03, N = 8561.2541.5544.3575.8570.8541.8560.7561.11. 3.10.1.1

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: floatRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077001326395265SE +/- 0.12, N = 355.959.458.856.456.158.756.157.2

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterWebP Image Encode 1.2.4Encode Settings: Quality 100, Lossless, Highest CompressionRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077000.19350.3870.58050.7740.9675SE +/- 0.00, N = 30.860.820.820.840.850.810.850.841. (CC) gcc options: -fvisibility=hidden -O2 -lm

Crafty

This is a performance test of Crafty, an advanced open-source chess engine. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNodes Per Second, More Is BetterCrafty 25.2Elapsed TimeRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077003M6M9M12M15MSE +/- 122230.58, N = 814646827139491881417037614513008145725551405326814804887145742741. (CC) gcc options: -pthread -lstdc++ -fprofile-use -lm

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterWebP Image Encode 1.2.4Encode Settings: Quality 100, Highest CompressionRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077001.19252.3853.57754.775.9625SE +/- 0.00, N = 35.305.005.015.205.195.015.305.201. (CC) gcc options: -fvisibility=hidden -O2 -lm

GIMP

GIMP is an open-source image manipulation program. This test profile uses the system-provided GIMP program; on Windows it relies upon a pre-packaged Windows binary from upstream GIMP.org. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterGIMP 2.10.30Test: rotateRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077003691215SE +/- 0.015, N = 39.4069.5099.4469.1828.9719.5059.1829.297

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: 2to3Ryzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077004080120160200SE +/- 0.00, N = 3167173173167177174167168

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: pathlibRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077003691215SE +/- 0.00, N = 310.110.610.610.410.310.710.110.3

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterTNN 0.3Target: CPU - Model: SqueezeNet v1.1Ryzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077004080120160200SE +/- 0.02, N = 3182.77191.09191.08184.08183.89191.09180.38184.21MIN: 182.69 / MAX: 182.89MIN: 190.97 / MAX: 191.36MIN: 191 / MAX: 191.26MIN: 183.99 / MAX: 184.18MIN: 183.77 / MAX: 184.05MIN: 191.02 / MAX: 191.19MIN: 180.33 / MAX: 180.52MIN: 184.13 / MAX: 184.341. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMiB/s, More Is BetterGNU RadioTest: FM Deemphasis FilterRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077002004006008001000SE +/- 2.20, N = 8907.5913.7911.7943.1941.1919.1905.0958.41. 3.10.1.1

PyBench

This test profile reports the total time of the different average timed test results from PyBench. PyBench reports average test times for different functions such as BuiltinFunctionCalls and NestedForLoops, with this total result providing a rough estimate as to Python's average performance on a given system. This test profile runs PyBench each time for 20 rounds. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyBench 2018-02-16Total For Average Test TimesRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700140280420560700SE +/- 2.40, N = 3602626615606598613592601

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 2021.01.31Threads: 4 - Buffer Length: 256 - Filter Length: 57Ryzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770090M180M270M360M450MSE +/- 265476.51, N = 34026800004006033333889800004012400004072100003981000004110000004015400001. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterWebP Image Encode 1.2.4Encode Settings: Quality 100Ryzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770048121620SE +/- 0.01, N = 317.0316.1316.1216.7816.7916.1216.8116.741. (CC) gcc options: -fvisibility=hidden -O2 -lm

Git

This test measures the time needed to carry out some sample Git operations on an example, static repository that happens to be a copy of the GNOME GTK tool-kit repository. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterGitTime To Complete Common Git CommandsRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700816243240SE +/- 0.04, N = 331.4333.0733.1332.0832.3733.1931.7232.051. git version 2.34.1

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: regex_compileRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770020406080100SE +/- 0.00, N = 378.682.783.079.479.582.379.279.5

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: django_templateRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700612182430SE +/- 0.00, N = 324.525.825.724.624.625.824.724.8

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: raytraceRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770060120180240300SE +/- 0.33, N = 3247259256246247257246248

JPEG XL libjxl

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMP/s, More Is BetterJPEG XL libjxl 0.7Input: PNG - Quality: 100Ryzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077000.2250.450.6750.91.125SE +/- 0.01, N = 30.970.950.960.971.000.961.000.981. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgGB/s, More Is Bettersimdjson 2.0Throughput Test: LargeRandomRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077000.40730.81461.22191.62922.0365SE +/- 0.00, N = 31.811.721.731.791.791.721.791.791. (CXX) g++ options: -O3

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: json_loadsRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077003691215SE +/- 0.03, N = 311.812.312.411.812.112.311.811.9

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: pickle_pure_pythonRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770050100150200250SE +/- 0.33, N = 3223232231225221232223225

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgGB/s, More Is Bettersimdjson 2.0Throughput Test: PartialTweetsRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 760079007700246810SE +/- 0.00, N = 38.277.897.918.218.177.908.248.161. (CXX) g++ options: -O3

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterTNN 0.3Target: CPU - Model: MobileNet v2Ryzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077004080120160200SE +/- 0.23, N = 3190.75199.06199.19193.77190.37198.87190.74190.92MIN: 190.01 / MAX: 191.51MIN: 198.12 / MAX: 200.3MIN: 198.58 / MAX: 200.1MIN: 192.93 / MAX: 195.31MIN: 189.7 / MAX: 193.98MIN: 198.22 / MAX: 199.53MIN: 189.99 / MAX: 192.23MIN: 190.03 / MAX: 193.111. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -fvisibility=default -O3 -rdynamic -ldl

FLAC Audio Encoding

This test times how long it takes to encode a sample WAV file to FLAC audio format ten times using the --best preset settings. Learn more via the OpenBenchmarking.org test page.

FLAC Audio Encoding 1.4 - WAV To FLAC (Seconds, Fewer Is Better; SE +/- 0.00, N = 5)
Ryzen 9 7900: 11.57 | Ryzen 7600 AMD: 12.10 | Ryzen 7600: 12.10 | Ryzen 7 7700: 11.67 | AMD 7700: 11.67 | AMD 7600: 12.10 | 7900: 11.65 | 7700: 11.69
1. (CXX) g++ options: -O3 -fvisibility=hidden -logg -lm
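
The --best preset noted in the description corresponds to compression level 8 in libFLAC; a trimmed encoder setup is sketched below, with one second of synthetic silence standing in for the sample WAV file and the output name chosen arbitrarily.

// libFLAC encode of a short silent buffer at compression level 8 (the --best preset).
#include <FLAC/stream_encoder.h>
#include <vector>

int main() {
    FLAC__StreamEncoder* enc = FLAC__stream_encoder_new();
    FLAC__stream_encoder_set_channels(enc, 2);
    FLAC__stream_encoder_set_bits_per_sample(enc, 16);
    FLAC__stream_encoder_set_sample_rate(enc, 44100);
    FLAC__stream_encoder_set_compression_level(enc, 8);

    if (FLAC__stream_encoder_init_file(enc, "out.flac", nullptr, nullptr)
            != FLAC__STREAM_ENCODER_INIT_STATUS_OK)
        return 1;

    // One second of interleaved stereo silence; real use would feed WAV samples here.
    std::vector<FLAC__int32> pcm(44100 * 2, 0);
    FLAC__stream_encoder_process_interleaved(enc, pcm.data(), 44100);

    FLAC__stream_encoder_finish(enc);
    FLAC__stream_encoder_delete(enc);
}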

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: chaosRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077001224364860SE +/- 0.09, N = 353.355.255.153.253.055.453.153.2

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.0Compression Level: 3 - Decompression SpeedRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 76007900770012002400360048006000SE +/- 86.19, N = 35722.75596.15745.45702.75716.25547.85794.75715.11. (CC) gcc options: -O3 -pthread -lz -llzma

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgGB/s, More Is Bettersimdjson 2.0Throughput Test: KostyaRyzen 9 7900Ryzen 7600 AMDRyzen 7600Ryzen 7 7700AMD 7700AMD 7600790077001.30052.6013.90155.2026.5025SE +/- 0.01, N = 35.765.565.565.775.785.555.785.781. (CXX) g++ options: -O3

LAME MP3 Encoding

LAME is an MP3 encoder licensed under the LGPL. This test measures the time required to encode a WAV file to MP3 format. Learn more via the OpenBenchmarking.org test page.

LAME MP3 Encoding 3.100 - WAV To MP3 (Seconds, Fewer Is Better; SE +/- 0.003, N = 3)
Ryzen 9 7900: 4.809 | Ryzen 7600 AMD: 4.978 | Ryzen 7600: 4.986 | Ryzen 7 7700: 4.803 | AMD 7700: 4.792 | AMD 7600: 4.978 | 7900: 4.795 | 7700: 4.806
1. (CC) gcc options: -O3 -ffast-math -funroll-loops -fschedule-insns2 -fbranch-count-reg -fforce-addr -pipe -lncurses -lm
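
The encode path being timed here is essentially the classic lame_encode_buffer loop; a compact sketch follows, with synthetic stereo silence standing in for the test's WAV input and the quality setting chosen as an illustrative value rather than the exact flags the test profile passes.

// LAME MP3 encode of a short synthetic PCM buffer (input data and settings are placeholders).
#include <lame/lame.h>
#include <cstdio>
#include <vector>

int main() {
    lame_global_flags* gf = lame_init();
    lame_set_in_samplerate(gf, 44100);
    lame_set_num_channels(gf, 2);
    lame_set_quality(gf, 2);                 // high-quality setting, similar to -h
    if (lame_init_params(gf) < 0) return 1;

    // One second of interleaved stereo silence stands in for real WAV data.
    std::vector<short> pcm(44100 * 2, 0);
    std::vector<unsigned char> mp3(1 << 20);

    int n = lame_encode_buffer_interleaved(gf, pcm.data(), 44100,
                                           mp3.data(), (int)mp3.size());
    if (n < 0) return 1;
    n += lame_encode_flush(gf, mp3.data() + n, (int)mp3.size() - n);

    std::printf("encoded %d MP3 bytes\n", n);
    lame_close(gf);
}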

DaCapo Benchmark

This test runs the DaCapo Benchmarks written in Java and intended to test system/CPU performance. Learn more via the OpenBenchmarking.org test page.

Java Test: Eclipse

7700: The test quit with a non-zero exit status.

7900: The test quit with a non-zero exit status.

Ryzen 7600 AMD: The test quit with a non-zero exit status.

AMD 7600: The test quit with a non-zero exit status.

AMD 7700: The test quit with a non-zero exit status.

Ryzen 7 7700: The test quit with a non-zero exit status.

Ryzen 7600: The test quit with a non-zero exit status.

Ryzen 9 7900: The test quit with a non-zero exit status.

325 Results Shown

oneDNN
NCNN
Mobile Neural Network
oneDNN
NCNN:
  CPU - FastestDet
  CPU - shufflenet-v2
ONNX Runtime:
  bertsquad-12 - CPU - Standard
  ArcFace ResNet-100 - CPU - Standard
NCNN
ONNX Runtime
NCNN
Mobile Neural Network
NCNN
Mobile Neural Network:
  squeezenetv1.1
  nasnet
  MobileNetV2_224
C-Ray
NAS Parallel Benchmarks
Stockfish
Mobile Neural Network
OpenSSL
Zstd Compression
OpenSSL
Cpuminer-Opt
NAS Parallel Benchmarks
oneDNN
Cpuminer-Opt
oneDNN
OpenSSL
NCNN
JPEG XL Decoding libjxl
Cpuminer-Opt
OpenVINO
Coremark
Cpuminer-Opt:
  x25x
  scrypt
  Ringcoin
7-Zip Compression
Cpuminer-Opt
Neural Magic DeepSparse
Cpuminer-Opt
IndigoBench
Neural Magic DeepSparse:
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream
Cpuminer-Opt
Neural Magic DeepSparse
Cpuminer-Opt
Stargate Digital Audio Workstation
Xmrig
Blender
IndigoBench
Tachyon
Blender:
  BMW27 - CPU-Only
  Classroom - CPU-Only
Xmrig
Stargate Digital Audio Workstation
asmFish
oneDNN
ASTC Encoder
OpenVINO
Chaos Group V-RAY
ASTC Encoder:
  Fast
  Medium
Stargate Digital Audio Workstation
ASTC Encoder
ONNX Runtime
Neural Magic DeepSparse
OpenVINO
Stargate Digital Audio Workstation
Timed Linux Kernel Compilation
Appleseed
Aircrack-ng
Stargate Digital Audio Workstation
Liquid-DSP
Blender
OpenVINO:
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU
  Weld Porosity Detection FP16-INT8 - CPU
Blender
NAMD
Liquid-DSP
oneDNN:
  Recurrent Neural Network Training - u8s8f32 - CPU
  IP Shapes 3D - bf16bf16bf16 - CPU
Timed LLVM Compilation
Stargate Digital Audio Workstation
oneDNN:
  Recurrent Neural Network Training - bf16bf16bf16 - CPU
  Recurrent Neural Network Training - f32 - CPU
OpenVINO:
  Face Detection FP16 - CPU
  Vehicle Detection FP16-INT8 - CPU
Neural Magic DeepSparse
SVT-HEVC
Rodinia
x265
OpenVINO
Stargate Digital Audio Workstation
oneDNN
Appleseed
oneDNN:
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU
  Recurrent Neural Network Inference - u8s8f32 - CPU
LAMMPS Molecular Dynamics Simulator
SVT-HEVC
Neural Magic DeepSparse
Stargate Digital Audio Workstation
NAS Parallel Benchmarks
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream:
    ms/batch
    items/sec
OpenVINO
Neural Magic DeepSparse:
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream:
    ms/batch
    items/sec
NCNN
OpenVINO
oneDNN:
  Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU
  Deconvolution Batch shapes_1d - u8s8f32 - CPU
Timed MPlayer Compilation
Primesieve
Timed LLVM Compilation
Primesieve
Timed Linux Kernel Compilation
Build2
Rodinia
NAS Parallel Benchmarks
7-Zip Compression
GROMACS
OpenVINO
SVT-VP9
PyPerformance
Timed Godot Game Engine Compilation
Timed FFmpeg Compilation
SVT-HEVC
oneDNN
GNU Radio
SVT-VP9
ONNX Runtime
Mobile Neural Network
SVT-HEVC
Timed Mesa Compilation
Mobile Neural Network
OpenVINO
libavif avifenc
Cpuminer-Opt
x264
oneDNN:
  Deconvolution Batch shapes_3d - f32 - CPU
  Deconvolution Batch shapes_1d - f32 - CPU
OpenFOAM
SVT-VP9:
  VMAF Optimized - Bosphorus 1080p
  PSNR/SSIM Optimized - Bosphorus 1080p
Neural Magic DeepSparse:
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream:
    ms/batch
    items/sec
x264
oneDNN
Appleseed
Kvazaar
SVT-HEVC
oneDNN
Neural Magic DeepSparse:
  CV Detection,YOLOv5s COCO - Synchronous Single-Stream:
    ms/batch
    items/sec
NCNN
ONNX Runtime
SVT-AV1
Darktable
SVT-HEVC
libavif avifenc
Zstd Compression
SVT-AV1
Kvazaar
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream:
    ms/batch
    items/sec
SVT-VP9
Rodinia
SVT-AV1
libavif avifenc
SVT-VP9
Neural Magic DeepSparse:
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream:
    ms/batch
    items/sec
Rodinia
ONNX Runtime
oneDNN
Zstd Compression
SVT-AV1
Kvazaar
libavif avifenc
Y-Cruncher
Timed PHP Compilation
GPAW
VP9 libvpx Encoding
Kvazaar
Darktable
oneDNN
NAS Parallel Benchmarks
ONNX Runtime
SVT-AV1
Timed Wasmer Compilation
Darktable
Xcompact3d Incompact3d
Y-Cruncher
oneDNN
GIMP
NAS Parallel Benchmarks:
  FT.C
  CG.C
ONNX Runtime
Kvazaar
Timed GDB GNU Debugger Compilation
oneDNN
OpenVINO
VP9 libvpx Encoding:
  Speed 0 - Bosphorus 1080p
  Speed 0 - Bosphorus 4K
Cpuminer-Opt
OpenFOAM
OpenVINO
nekRS
NCNN
Liquid-DSP
OpenVINO
SVT-AV1:
  Preset 4 - Bosphorus 1080p
  Preset 12 - Bosphorus 1080p
Zstd Compression
ONNX Runtime
OpenVINO:
  Vehicle Detection FP16-INT8 - CPU
  Weld Porosity Detection FP16 - CPU
Kvazaar
Neural Magic DeepSparse
OpenVINO
Neural Magic DeepSparse
Darktable
NCNN
OpenVINO
NCNN
Rodinia
SVT-AV1
oneDNN
OpenVINO
Timed CPython Compilation
x265
libavif avifenc
Zstd Compression
NCNN
DaCapo Benchmark
NCNN:
  CPU - resnet50
  CPU - vision_transformer
GIMP
GNU Radio
OpenVINO
NAS Parallel Benchmarks
OpenVINO
DaCapo Benchmark
Neural Magic DeepSparse
ONNX Runtime
NAS Parallel Benchmarks
Neural Magic DeepSparse
VP9 libvpx Encoding
OpenVINO:
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU
  Weld Porosity Detection FP16-INT8 - CPU
Neural Magic DeepSparse
NAS Parallel Benchmarks
TNN
DaCapo Benchmark
Neural Magic DeepSparse
WebP Image Encode
Neural Magic DeepSparse
JPEG XL libjxl
Algebraic Multi-Grid Benchmark
Timed Apache Compilation
Neural Magic DeepSparse
LeelaChessZero
Neural Magic DeepSparse
JPEG XL libjxl:
  JPEG - 90
  PNG - 90
  JPEG - 80
NCNN
GIMP
JPEG XL libjxl
simdjson
Ngspice
ONNX Runtime
Timed CPython Compilation
DaCapo Benchmark
Zstd Compression
NCNN
PyPerformance
GNU Radio:
  FIR Filter
  Hilbert Transform
TNN
Zstd Compression
Liquid-DSP
Zstd Compression
Ngspice
PyPerformance
PHPBench
WebP Image Encode
Zstd Compression:
  19 - Decompression Speed
  8 - Decompression Speed
LeelaChessZero
JPEG XL Decoding libjxl
Zstd Compression
simdjson
Liquid-DSP
QuantLib
PyPerformance
GNU Radio
PyPerformance
WebP Image Encode
Crafty
WebP Image Encode
GIMP
PyPerformance:
  2to3
  pathlib
TNN
GNU Radio
PyBench
Liquid-DSP
WebP Image Encode
Git
PyPerformance:
  regex_compile
  django_template
  raytrace
JPEG XL libjxl
simdjson
PyPerformance:
  json_loads
  pickle_pure_python
simdjson
TNN
FLAC Audio Encoding
PyPerformance
Zstd Compression
simdjson
LAME MP3 Encoding