Intel Core Ultra 7 155H vs. AMD Ryzen 7 7840U Linux Benchmarks

Intel Core Ultra 7 155H benchmarks by Michael Larabel for a future article.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2312190-NE-COREULTRA78
Jump To Table - Results

View

Do Not Show Noisy Results
Do Not Show Results With Incomplete Data
Do Not Show Results With Little Change/Spread
List Notable Results

Limit displaying results to tests within:

Audio Encoding 3 Tests
AV1 2 Tests
Web Browsers 1 Tests
Chess Test Suite 3 Tests
Timed Code Compilation 6 Tests
C/C++ Compiler Tests 19 Tests
Compression Tests 3 Tests
CPU Massive 31 Tests
Creator Workloads 37 Tests
Cryptography 5 Tests
Database Test Suite 2 Tests
Encoding 11 Tests
Fortran Tests 2 Tests
Game Development 5 Tests
HPC - High Performance Computing 20 Tests
Imaging 9 Tests
Java Tests 2 Tests
Common Kernel Benchmarks 4 Tests
Machine Learning 14 Tests
MPI Benchmarks 3 Tests
Multi-Core 37 Tests
NVIDIA GPU Compute 7 Tests
Intel oneAPI 6 Tests
OpenMPI Tests 4 Tests
Productivity 4 Tests
Programmer / Developer System Benchmarks 14 Tests
Python 6 Tests
Raytracing 3 Tests
Renderers 8 Tests
Scientific Computing 2 Tests
Software Defined Radio 3 Tests
Server 8 Tests
Server CPU Tests 22 Tests
Single-Threaded 12 Tests
Speech 2 Tests
Telephony 2 Tests
Video Encoding 8 Tests
Common Workstation Benchmarks 3 Tests

Statistics

Show Overall Harmonic Mean(s)
Show Overall Geometric Mean
Show Geometric Means Per-Suite/Category
Show Wins / Losses Counts (Pie Chart)
Normalize Results
Remove Outliers Before Calculating Averages

Graph Settings

Force Line Graphs Where Applicable
Convert To Scalar Where Applicable
Disable Color Branding
Prefer Vertical Bar Graphs
No Box Plots
On Line Graphs With Missing Data, Connect The Line Gaps

Additional Graphs

Show Perf Per Core/Thread Calculation Graphs Where Applicable
Show Perf Per Clock Calculation Graphs Where Applicable

Multi-Way Comparison

Condense Multi-Option Tests Into Single Result Graphs
Condense Test Profiles With Multiple Version Results Into Single Result Graphs

Table

Show Detailed System Result Table

Run Management

Highlight
Result
Hide
Result
Result
Identifier
Performance Per
Dollar
Date
Run
  Test
  Duration
Ryzen 7 7840U
December 14 2023
  1 Day, 13 Hours, 52 Minutes
Core Ultra 7 155H
December 16 2023
  2 Days, 23 Hours, 19 Minutes
Invert Hiding All Results Option
  2 Days, 6 Hours, 35 Minutes
Only show results matching title/arguments (delimit multiple options with a comma):
Do not show results matching title/arguments (delimit multiple options with a comma):


Intel Core Ultra 7 155H vs. AMD Ryzen 7 7840U Linux BenchmarksProcessorMotherboardChipsetMemoryDiskGraphicsAudioNetworkOSKernelDesktopDisplay ServerOpenGLCompilerFile-SystemScreen ResolutionRyzen 7 7840UCore Ultra 7 155HAMD Ryzen 7 7840U @ 5.13GHz (8 Cores / 16 Threads)Framework FRANMDCP07 (03.03 BIOS)AMD Device 14e816GB512GB Western Digital WD PC SN740 SDDPNQD-512GAMD Phoenix1 512MB (2700/2800MHz)AMD Rembrandt Radeon HD AudioMEDIATEK MT7922 802.11ax PCIUbuntu 23.106.7.0-060700rc5-generic (x86_64)GNOME Shell 45.1X Server 1.21.1.7 + Wayland4.6 Mesa 24.0~git2312160600.5d937f~oibaf~m (git-5d937f0 2023-12-16 mantic-oibaf-ppa)GCC 13.2.0ext42256x1504Intel Core Ultra 7 155H @ 4.80GHz (16 Cores / 22 Threads)MTL Coral_MTH (V1.01 BIOS)Intel Device 7e7f1024GB Micron_2550_MTFDKBA1T0TGEIntel Arc MTL 15GB (2250MHz)Intel Meteor Lake-P HD AudioIntel Device 7e401920x1200OpenBenchmarking.orgKernel Details- Transparent Huge Pages: madviseCompiler Details- --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v Disk Details- NONE / relatime,rw / Block Size: 4096Processor Details- Ryzen 7 7840U: Scaling Governor: amd-pstate-epp powersave - Platform Profile: balanced - CPU Microcode: 0xa704103 - ACPI Profile: balanced - Core Ultra 7 155H: Scaling Governor: intel_pstate powersave - CPU Microcode: 0x13 - Thermald 2.5.4Java Details- OpenJDK Runtime Environment (build 17.0.9+9-Ubuntu-123.10)Python Details- Python 3.11.6Security Details- Ryzen 7 7840U: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Vulnerable: Safe RET no microcode + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected - Core Ultra 7 155H: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

Ryzen 7 7840U vs. Core Ultra 7 155H ComparisonPhoronix Test SuiteBaseline+82.2%+82.2%+164.4%+164.4%+246.6%+246.6%+328.8%+328.8%114.7%108.6%91.1%88.8%84%69.3%68.9%64.3%48.9%47.7%46.2%45.7%44.7%44.4%43.1%38.6%37.7%37.1%24.8%24.1%24%23.5%23%22.7%22.2%22.2%20.6%20.3%18.8%18.6%18.2%17.2%17%16.1%14.8%11.8%11.1%10.7%10.4%8.6%8.4%7.7%6.9%6.6%6.5%6.5%6.3%5.8%5.6%5.5%5.5%5.2%5%4.9%4.9%4.7%4.5%4.4%4.4%4.3%4%3.1%2.7%2.6%2.4%2.4%C.S.9.P.Y.P - A.M.S328.9%H.G.B.A316.2%I.R.V308.5%H.E.R.F.I - CPU306.4%P.V.B.D.F - CPU258.3%N.T.C.B.b.u.S.S.I - A.M.S234.5%ggml-small.en - 2.S.o.t.U233.7%A.G.R.R.0.F.I - CPU231.6%V.D.F.I - CPU222.9%B.L.N.Q.A.S.I - A.M.S220.8%W.P.D.F.I - CPU220%P.D.F - CPU217%CPU - regnety_400m204.1%B.L.N.Q.A - A.M.S199.1%N.T.C.B.b.u.c - A.M.S196.3%H.G.B193.8%R.5.S.I - A.M.S193.3%N.D.C.o.b.u.o.I - A.M.S190.7%R.5.B - A.M.S189.7%C.C.R.5.I - A.M.S187.8%F.B.t.B.F.F183.7%ggml-base.en - 2.S.o.t.U181.9%CPU - mnasnet176.6%C.D.Y.C.S.I - A.M.S175.4%C.D.Y.C - A.M.S174.6%N.T.C.D.m - A.M.S174.3%F.D.F.I - CPU172.6%SqueezeNet158.5%CPU-v3-v3 - mobilenet-v3154.9%RSA4096150.1%R.S.A.F.I - CPU149.8%CPU - shufflenet-v2147.2%CPU - efficientnet-b0141.8%CPU - blazeface140.5%TSNE MNIST Dataset134.6%CPU - 32 - Efficientnet_v2_l125.2%Inception V4123.3%tConvolve MPI - Degridding120.8%CPU-v2-v2 - mobilenet-v2119.1%Mobilenet Float118.5%CPU - 16 - Efficientnet_v2_l116.8%8 - Compression SpeedCPU - resnet50112.5%CPU - 1 - ResNet-152111.6%BLAS108.9%F.F.TChaCha20-Poly1305107.2%CPU - 32 - ResNet-152106.3%P.V.B.D.F - CPU104.3%CPU - 16 - ResNet-152102.6%ChaCha20102.3%CPU - FastestDet99%python_startup94.2%CPU - googlenet91.9%tConvolve MT - GriddingH.E.R.F.I - CPU90.4%8, Long Mode - Compression SpeedtConvolve MPI - Gridding87.8%C.S.9.P.Y.P - A.M.S86.8%CPU - mobilenet85.4%CPU - 32 - ResNet-5084.5%V.D.F.I - CPU84.2%RotateCPU - resnet1882.4%F.D.R.F.I - CPU81.9%CPU - 16 - ResNet-5081.2%P.D.F - CPU81.1%CPU - 1 - Efficientnet_v2_l80%Eigen76.9%CPU - 1 - ResNet-5075.9%Update Rand71.8%AES-128-GCM69.4%SP.Bgravity_spheres_volume/dim_512/ao/real_time69.2%12 - Compression Speedgravity_spheres_volume/dim_512/scivis/real_time68.1%3 - D.S67.5%9 - D.S67.1%CPU - yolov4-tiny66.5%1 - D.S65.9%R.O.R.S.I64.6%H.C.OMobilenet Quant62.9%CPU - squeezenet_ssd61.8%Bumper Beam61.6%1 - Compression Speed61%AES-256-GCM60.6%F.D.R.F.I - CPU60.1%CPU - Numpy - 4194304 - Equation of State58.4%F.D.F.I - CPU56.5%Monte Carlo56.2%D.L.M.F54.3%A.G.R.R.0.F.I - CPU53.5%CPU - 16 - ResNet-5052.4%2.N.L.R51.1%W.P.D.F.I - CPU50.6%1e1249.6%A.X.2.ECPU - 32 - ResNet-5048.2%8 - 256 - 51247.9%A.X.5.ECore47.6%1 - 4K - 32 - Path Tracer - CPU46.6%tConvolve MT - Degriddingresize46.2%CPU - Bedroom46.1%Hilbert Transform16 - 256 - 51244.8%A.X.2.DA.X.5.D3 - 4K - 32 - Path Tracer - CPU44.4%1 - 4K - 16 - Path Tracer - CPU44.1%C.P.D.T44.1%gravity_spheres_volume/dim_512/pathtracer/real_time44%N.T.C.B.b.u.S.S.I - A.M.S43.3%libx265 - Live43.2%1 - 1080p - 1 - Path Tracer - CPU43.1%Hilbert Transform3 - 1080p - 1 - Path Tracer - CPU42.9%3 - 4K - 16 - Path Tracer - CPU42.7%R.S.A.F.I - CPU42.5%CPU - alexnet42.4%B.S.o.W42.3%1 - 1080p - 32 - Path Tracer - CPU42%CPU - Numpy - 4194304 - Isoneutral Mixing41.8%libx265 - Video On Demand41.3%3 - 1080p - 32 - Path Tracer - CPU40.8%libx265 - Platform40.7%Scale39%1 - 4K - 1 - Path Tracer - CPU38.7%BT.CKNN CADB.L.N.Q.A.S.I - A.M.S37.5%3 - 4K - 1 - Path Tracer - CPU37.5%libx265 - Upload37.3%SharpenPreset 8 - Bosphorus 4K36.3%R.R.W.R36.2%RT.ldr_alb_nrm.3840x2160 - CPU-Only36%Bosphorus 1080p - Super Fast35.7%Barbershop - CPU-Only35%34.8%D.R33.8%3 - 1080p - 16 - Path Tracer - CPU33.6%ARES-6 - Google Chrome33.2%14 digit32.9%Emily32.9%Danish Mood - CPU32.7%Bosphorus 4K - Super Fast32.4%1 - 1080p - 16 - Path Tracer - CPU32.1%Bosphorus 1080p - Very Fast31.9%Bosphorus 1080p - Very Fast31.2%Material Tester30.8%DistinctUserID30.7%Composite30.6%B.L.N.Q.A - A.M.S30.3%30.3%Bosphorus 4K - Very Fast30.2%CPU - vgg1630.2%N.T.C.B.b.u.c - A.M.S30%1.H.M.2.D29.9%TopTweet29.8%Bosphorus 4K - Ultra Fast29.6%Bosphorus 1080p - Super Fast29.2%Total Time28%Bosphorus 4K - Very Fast28%Bosphorus 1080p - Ultra Fast27.9%Bosphorus 4K - Super Fast27.7%Bosphorus 1080p - Ultra Fast27.6%v.I27.3%Preset 4 - Bosphorus 4K27.3%FIR Filter27.2%LuxCore Benchmark - CPU27.1%Blowfish27%bcrypt27%R.C.a.P - CPU26.3%N.D.C.o.b.u.o.I - A.M.S26.3%Preset 4 - Bosphorus 1080p26.1%Rotate 90 Degrees26%Preset 8 - Bosphorus 1080p26%EP.C25.9%2.smt225.9%R.5.S.I - A.M.S25.3%F.D.FFishy Cat - CPU-Only24.6%R.5.B - A.M.S24.3%Timed Time - Size 1,000particle_volume/scivis/real_timeBosphorus 4K - Ultra Fast23.9%Pathtracer ISPC - Crown23.6%19 - D.SC.C.R.5.I - A.M.S23.5%PartialTweets23.1%KawPow - 1MBosphorus 1080p - Faster22.9%particle_volume/ao/real_timeregex_compile22.7%Pabellon Barcelona - CPU-Only22.6%Bosphorus 1080p - Fast22.4%1 - 256 - 32F.D.FKostya21.9%Bosphorus 4K - Fast21.5%4 - 256 - 51221.4%Bosphorus 1080p21.3%Speedometer - Firefox20.9%Classroom - CPU-Only20.7%2 - 256 - 32CPU - Supercar20.6%Kraken - Firefox20.6%Monero - 1MCPU20.3%Bosphorus 4K - Faster19.6%T.B.T19.4%PBKDF2-sha51219.4%Pathtracer ISPC - Asian Dragon18.9%CryptoNight-Heavy - 1MC.F.U - 1MC.D.Y.C.S.I - A.M.S18.3%1.smt2Relative Entropy18.1%C.D.Y.C - A.M.S18%N.T.C.D.m - A.M.S17.7%SVG Files To PNG17.7%GhostRider - 1M17.5%Time To Compile17.4%VMAF Optimized - Bosphorus 4KServer Rack - CPU-only17.2%P.B.S12 digit16.8%Noise-Gaussian16.2%W.c - Google ChromeTime To Compile15.9%Kraken - Google Chrome15.5%Pathtracer ISPC - Asian Dragon Obj15.1%B.C15.1%Preset 12 - Bosphorus 1080p15%J.S.O.RARES-6 - Firefox14.4%allmodconfig14.2%S.X.5.E13.8%16 - 256 - 3213.8%defconfig13.4%MPI CPU - water_GMX50_bare13.2%Swirl12.6%D.T12.2%SGD Regression12.1%linux-5.19.tar.xz12%19, Long Mode - D.S2 - 256 - 51211.6%Jetstream 2 - Firefox11.6%4 - 256 - 32RSA409611%unsharp-mask10.9%10.8%particle_volume/pathtracer/real_time10.8%P.S.O - Bosphorus 4KS.X.2.E10.7%Read While Writing10.6%EnhancedDisney Material10.3%HWB Color Space10.3%Jetstream 2 - Google Chrome9.9%PSPDFKit WASM - Google Chrome9.8%Preset 13 - Bosphorus 1080p9.8%BMW27 - CPU-Only9.7%Isolation Forest9.3%WAV To FLAC9.3%9.3%Exhaustive9.3%Boat - CPU-only9.1%Orange Juice - CPU8.8%W.i - Firefox8.7%T.T.C.C.G.C8.7%Rand ReadIIR FilterTime To Compile8%Crop7.9%auto-levels7.9%S.S.C7.8%7.7%Bosphorus 4KTime To Compile7.6%PSPDFKit WASM - Firefox7.4%1 - 256 - 5127.4%WAV To MP37.2%Earthgecko Skyline7%Speedometer - Google Chrome6.9%Preset 12 - Bosphorus 4K6.9%Multi-Threaded6.9%T.F.A.T.T26.6%ReflectW.i - Google Chrome6.5%6, LosslessSingle-Threaded44100 - 1024rotate6.3%DLSC - CPU6.2%S.X.5.D6%6Preset 13 - Bosphorus 4KResizing5.6%P.6.P.P.D5.5%12 - D.SS.X.2.D5.5%Time To Compile5.3%C.A.D.OTotal Time - 4.1.R.P.PPBKDF2-whirlpoolT.T.S.SPod2html4.8%Compression Ratingraytrace4.6%T.X.5.D4.5%Q.1.C.E.519, Long Mode - Compression SpeedWritesInterpreterSHA5124.2%L.P.C.S4.2%3 - Compression Speed4%W.c - FirefoxT.X.2.D3.8%crypto_pyaes3.7%8 - 256 - 323.7%10, Lossless3.6%T.X.2.E3.4%Fast3.4%T.X.5.E3.3%Octane - Google Chrome3.2%F.B.t.B.F.F2.9%Mini-ITX Case2.8%Complex Phase2.8%Antialias19 - Compression SpeedWindowed Gaussian2.6%ThoroughDefault2.4%480000 - 1024Pistol2.1%Neural Magic DeepSparseScikit-LearnTensorFlow LiteOpenVINOOpenVINONeural Magic DeepSparseWhisper.cppOpenVINOOpenVINONeural Magic DeepSparseOpenVINOOpenVINONCNNNeural Magic DeepSparseNeural Magic DeepSparseScikit-LearnNeural Magic DeepSparseNeural Magic DeepSparseNeural Magic DeepSparseNeural Magic DeepSparseLuaRadioWhisper.cppNCNNNeural Magic DeepSparseNeural Magic DeepSparseNeural Magic DeepSparseOpenVINOTensorFlow LiteNCNNOpenSSLOpenVINONCNNNCNNNCNNScikit-LearnPyTorchTensorFlow LiteASKAPNCNNTensorFlow LitePyTorchZstd CompressionNCNNPyTorchLeelaChessZeroJava SciMarkOpenSSLPyTorchOpenVINOPyTorchOpenSSLNCNNPyPerformanceNCNNASKAPOpenVINOZstd CompressionASKAPNeural Magic DeepSparseNCNNPyTorchOpenVINOGraphicsMagickNCNNOpenVINOPyTorchOpenVINOPyTorchLeelaChessZeroPyTorchRocksDBOpenSSLNAS Parallel BenchmarksOSPRayZstd CompressionOSPRayLZ4 CompressionLZ4 CompressionNCNNLZ4 CompressionOpenRadiossASKAPTensorFlow LiteNCNNOpenRadiossLZ4 CompressionOpenSSLOpenVINOPyHPC BenchmarksOpenVINOJava SciMarkJava SciMarkOpenVINOTensorFlowScikit-LearnOpenVINOPrimesieveCryptsetupTensorFlowLiquid-DSPCryptsetupOpenCVOSPRay StudioASKAPGIMPIndigoBenchGNU RadioLiquid-DSPCryptsetupCryptsetupOSPRay StudioOSPRay StudioOpenRadiossOSPRayNeural Magic DeepSparseFFmpegOSPRay StudioLuaRadioOSPRay StudioOSPRay StudioOpenVINONCNNOpenRadiossOSPRay StudioPyHPC BenchmarksFFmpegOSPRay StudioFFmpegGEGLOSPRay StudioNAS Parallel BenchmarksNumenta Anomaly BenchmarkNeural Magic DeepSparseOSPRay StudioFFmpegGraphicsMagickSVT-AV1RocksDBIntel Open Image DenoiseKvazaarBlenderAircrack-ng7-Zip CompressionOSPRay StudioSeleniumHelsingAppleseedLuxCoreRenderKvazaarOSPRay StudioKvazaaruvg266AppleseedsimdjsonJava SciMarkNeural Magic DeepSparseGNU Octave BenchmarkKvazaarNCNNNeural Magic DeepSparseasmFishsimdjsonKvazaaruvg266Stockfishuvg266Kvazaaruvg266uvg266OpenVKLSVT-AV1GNU RadioLuxCoreRenderJohn The RipperJohn The RipperLuxCoreRenderNeural Magic DeepSparseSVT-AV1GEGLSVT-AV1NAS Parallel BenchmarksZ3 Theorem ProverNeural Magic DeepSparseLuaRadioBlenderNeural Magic DeepSparseSQLite SpeedtestOSPRayuvg266EmbreeZstd CompressionNeural Magic DeepSparsesimdjsonXmrigVVenCOSPRayPyPerformanceBlenderVVenCLiquid-DSPGNU RadiosimdjsonVVenCLiquid-DSPx265SeleniumBlenderLiquid-DSPIndigoBenchSeleniumXmrigChaos Group V-RAYVVenCRawTherapeeCryptsetupEmbreeXmrigXmrigNeural Magic DeepSparseZ3 Theorem ProverNumenta Anomaly BenchmarkNeural Magic DeepSparseNeural Magic DeepSparseInkscapeXmrigTimed FFmpeg CompilationSVT-VP9DarktablePHPBenchHelsingGraphicsMagickSeleniumTimed GDB GNU Debugger CompilationSeleniumEmbreeNumenta Anomaly BenchmarkSVT-AV1Java SciMarkSeleniumTimed Linux Kernel CompilationCryptsetupLiquid-DSPTimed Linux Kernel CompilationGROMACSGraphicsMagicklibjpeg-turbo tjbenchScikit-LearnUnpacking The Linux KernelZstd CompressionLiquid-DSPSeleniumLiquid-DSPOpenSSLGIMPNumpy BenchmarkOSPRaySVT-VP9CryptsetupRocksDBGraphicsMagickAppleseedGraphicsMagickSeleniumSeleniumSVT-AV1BlenderScikit-LearnFLAC Audio Encodinglibavif avifencASTC EncoderDarktableLuxCoreRenderSeleniumGitRocksDBGNU RadioTimed Godot Game Engine CompilationGEGLGIMPGNU RadioR Benchmarkx265Timed PHP CompilationSeleniumLiquid-DSPLAME MP3 EncodingNumenta Anomaly BenchmarkSeleniumSVT-AV1QuantLibPyBenchlibavif avifencGEGLSeleniumlibavif avifencQuantLibStargate Digital Audio WorkstationGIMPLuxCoreRenderCryptsetuplibavif avifencSVT-AV1GraphicsMagickOCRMyPDFNode.js V8 Web Tooling BenchmarkZstd CompressionCryptsetupTimed Wasmer CompilationNumenta Anomaly BenchmarkC-RayCryptsetupeSpeak-NG Speech EnginePerl Benchmarks7-Zip CompressionPyPerformanceCryptsetupWebP2 Image EncodeZstd CompressionScyllaDBPerl BenchmarksOpenSSLOpenSCADLZ4 CompressionSeleniumCryptsetupPyPerformanceLiquid-DSPlibavif avifencCryptsetupASTC EncoderCryptsetupSeleniumWireGuard + Linux Networking Stack Stress TestGNU RadioOpenSCADLuaRadioGEGLZstd CompressionNumenta Anomaly BenchmarkASTC EncoderWebP2 Image EncodeStargate Digital Audio WorkstationOpenSCADRyzen 7 7840UCore Ultra 7 155H

Intel Core Ultra 7 155H vs. AMD Ryzen 7 7840U Linux Benchmarksdeepsparse: CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Streamopenvino: Handwritten English Recognition FP16-INT8 - CPUopenvino: Person Vehicle Bike Detection FP16 - CPUdeepsparse: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Streamwhisper-cpp: ggml-small.en - 2016 State of the Unionopenvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPUopenvino: Vehicle Detection FP16-INT8 - CPUdeepsparse: BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Streamopenvino: Weld Porosity Detection FP16-INT8 - CPUopenvino: Person Detection FP16 - CPUdeepsparse: BERT-Large, NLP Question Answering - Asynchronous Multi-Streamdeepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Streamdeepsparse: ResNet-50, Sparse INT8 - Asynchronous Multi-Streamdeepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Streamdeepsparse: ResNet-50, Baseline - Asynchronous Multi-Streamdeepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Streamluaradio: Five Back to Back FIR Filterswhisper-cpp: ggml-base.en - 2016 State of the Uniondeepsparse: CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Streamdeepsparse: CV Detection, YOLOv5s COCO - Asynchronous Multi-Streamdeepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Streamopenvino: Face Detection FP16-INT8 - CPUtensorflow-lite: SqueezeNetopenssl: RSA4096openvino: Road Segmentation ADAS FP16-INT8 - CPUscikit-learn: TSNE MNIST Datasettensorflow-lite: Inception V4askap: tConvolve MPI - Degriddingtensorflow-lite: Mobilenet Floatpytorch: CPU - 16 - Efficientnet_v2_lcompress-zstd: 8 - Compression Speedlczero: BLASjava-scimark2: Fast Fourier Transformopenssl: ChaCha20-Poly1305pytorch: CPU - 32 - ResNet-152openvino: Person Vehicle Bike Detection FP16 - CPUpytorch: CPU - 16 - ResNet-152openssl: ChaCha20pyperformance: python_startupaskap: tConvolve MT - Griddingopenvino: Handwritten English Recognition FP16-INT8 - CPUcompress-zstd: 8, Long Mode - Compression Speeddeepsparse: CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Streampytorch: CPU - 32 - ResNet-50openvino: Vehicle Detection FP16-INT8 - CPUgraphics-magick: Rotateopenvino: Face Detection Retail FP16-INT8 - CPUpytorch: CPU - 16 - ResNet-50openvino: Person Detection FP16 - CPUpytorch: CPU - 1 - Efficientnet_v2_llczero: Eigenpytorch: CPU - 1 - ResNet-50rocksdb: Update Randopenssl: AES-128-GCMnpb: SP.Bospray: gravity_spheres_volume/dim_512/ao/real_timecompress-zstd: 12 - Compression Speedospray: gravity_spheres_volume/dim_512/scivis/real_timecompress-lz4: 3 - Decompression Speedcompress-lz4: 9 - Decompression Speedcompress-lz4: 1 - Decompression Speedopenradioss: Rubber O-Ring Seal Installationtensorflow-lite: Mobilenet Quantopenradioss: Bumper Beamcompress-lz4: 1 - Compression Speedopenssl: AES-256-GCMopenvino: Face Detection Retail FP16-INT8 - CPUpyhpc: CPU - Numpy - 4194304 - Equation of Stateopenvino: Face Detection FP16-INT8 - CPUjava-scimark2: Monte Carlojava-scimark2: Dense LU Matrix Factorizationopenvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPUtensorflow: CPU - 16 - ResNet-50scikit-learn: 20 Newsgroups / Logistic Regressionopenvino: Weld Porosity Detection FP16-INT8 - CPUprimesieve: 1e12cryptsetup: AES-XTS 256b Encryptiontensorflow: CPU - 32 - ResNet-50liquid-dsp: 8 - 256 - 512cryptsetup: AES-XTS 512b Encryptionopencv: Coreospray-studio: 1 - 4K - 32 - Path Tracer - CPUaskap: tConvolve MT - Degriddinggimp: resizeindigobench: CPU - Bedroomgnuradio: Hilbert Transformliquid-dsp: 16 - 256 - 512cryptsetup: AES-XTS 256b Decryptioncryptsetup: AES-XTS 512b Decryptionospray-studio: 3 - 4K - 32 - Path Tracer - CPUospray-studio: 1 - 4K - 16 - Path Tracer - CPUopenradioss: Cell Phone Drop Testospray: gravity_spheres_volume/dim_512/pathtracer/real_timedeepsparse: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Streamffmpeg: libx265 - Liveospray-studio: 1 - 1080p - 1 - Path Tracer - CPUluaradio: Hilbert Transformospray-studio: 3 - 1080p - 1 - Path Tracer - CPUospray-studio: 3 - 4K - 16 - Path Tracer - CPUopenvino: Road Segmentation ADAS FP16-INT8 - CPUopenradioss: Bird Strike on Windshieldospray-studio: 1 - 1080p - 32 - Path Tracer - CPUpyhpc: CPU - Numpy - 4194304 - Isoneutral Mixingffmpeg: libx265 - Video On Demandospray-studio: 3 - 1080p - 32 - Path Tracer - CPUffmpeg: libx265 - Platformgegl: Scaleospray-studio: 1 - 4K - 1 - Path Tracer - CPUnpb: BT.Cnumenta-nab: KNN CADdeepsparse: BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Streamospray-studio: 3 - 4K - 1 - Path Tracer - CPUffmpeg: libx265 - Uploadgraphics-magick: Sharpenrocksdb: Read Rand Write Randoidn: RT.ldr_alb_nrm.3840x2160 - CPU-Onlykvazaar: Bosphorus 1080p - Super Fastblender: Barbershop - CPU-Onlyaircrack-ng: compress-7zip: Decompression Ratingospray-studio: 3 - 1080p - 16 - Path Tracer - CPUselenium: ARES-6 - Google Chromehelsing: 14 digitappleseed: Emilyluxcorerender: Danish Mood - CPUkvazaar: Bosphorus 4K - Super Fastospray-studio: 1 - 1080p - 16 - Path Tracer - CPUkvazaar: Bosphorus 1080p - Very Fastuvg266: Bosphorus 1080p - Very Fastappleseed: Material Testersimdjson: DistinctUserIDjava-scimark2: Compositedeepsparse: BERT-Large, NLP Question Answering - Asynchronous Multi-Streamoctave-benchmark: kvazaar: Bosphorus 4K - Very Fastdeepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Streamsimdjson: TopTweetkvazaar: Bosphorus 4K - Ultra Fastuvg266: Bosphorus 1080p - Super Faststockfish: Total Timeuvg266: Bosphorus 4K - Very Fastkvazaar: Bosphorus 1080p - Ultra Fastuvg266: Bosphorus 4K - Super Fastuvg266: Bosphorus 1080p - Ultra Fastopenvkl: vklBenchmarkCPU ISPCsvt-av1: Preset 4 - Bosphorus 4Kgnuradio: FIR Filterluxcorerender: LuxCore Benchmark - CPUjohn-the-ripper: Blowfishluxcorerender: Rainbow Colors and Prism - CPUdeepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Streamsvt-av1: Preset 4 - Bosphorus 1080pgegl: Rotate 90 Degreesnpb: EP.Cz3: 2.smt2deepsparse: ResNet-50, Sparse INT8 - Asynchronous Multi-Streamluaradio: FM Deemphasis Filterblender: Fishy Cat - CPU-Onlydeepsparse: ResNet-50, Baseline - Asynchronous Multi-Streamsqlite-speedtest: Timed Time - Size 1,000ospray: particle_volume/scivis/real_timeuvg266: Bosphorus 4K - Ultra Fastembree: Pathtracer ISPC - Crowncompress-zstd: 19 - Decompression Speeddeepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Streamsimdjson: PartialTweetsxmrig: KawPow - 1Mvvenc: Bosphorus 1080p - Fasterospray: particle_volume/ao/real_timepyperformance: regex_compileblender: Pabellon Barcelona - CPU-Onlyvvenc: Bosphorus 1080p - Fastliquid-dsp: 1 - 256 - 32gnuradio: FM Deemphasis Filtersimdjson: Kostyavvenc: Bosphorus 4K - Fastliquid-dsp: 4 - 256 - 512x265: Bosphorus 1080pselenium: Speedometer - Firefoxblender: Classroom - CPU-Onlyliquid-dsp: 2 - 256 - 32indigobench: CPU - Supercarselenium: Kraken - Firefoxxmrig: Monero - 1Mv-ray: CPUvvenc: Bosphorus 4K - Fasterrawtherapee: Total Benchmark Timecryptsetup: PBKDF2-sha512embree: Pathtracer ISPC - Asian Dragonxmrig: CryptoNight-Heavy - 1Mxmrig: CryptoNight-Femto UPX2 - 1Mdeepsparse: CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Streamz3: 1.smt2numenta-nab: Relative Entropydeepsparse: CV Detection, YOLOv5s COCO - Asynchronous Multi-Streamdeepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Streaminkscape: SVG Files To PNGxmrig: GhostRider - 1Mbuild-ffmpeg: Time To Compilesvt-vp9: VMAF Optimized - Bosphorus 4Kphpbench: PHP Benchmark Suitehelsing: 12 digitgraphics-magick: Noise-Gaussianselenium: WASM collisionDetection - Google Chromebuild-gdb: Time To Compileselenium: Kraken - Google Chromeembree: Pathtracer ISPC - Asian Dragon Objnumenta-nab: Bayesian Changepointsvt-av1: Preset 12 - Bosphorus 1080pjava-scimark2: Jacobi Successive Over-Relaxationselenium: ARES-6 - Firefoxbuild-linux-kernel: allmodconfigcryptsetup: Serpent-XTS 512b Encryptionliquid-dsp: 16 - 256 - 32build-linux-kernel: defconfiggromacs: MPI CPU - water_GMX50_baregraphics-magick: Swirltjbench: Decompression Throughputscikit-learn: SGD Regressionunpack-linux: linux-5.19.tar.xzcompress-zstd: 19, Long Mode - Decompression Speedliquid-dsp: 2 - 256 - 512selenium: Jetstream 2 - Firefoxliquid-dsp: 4 - 256 - 32openssl: RSA4096gimp: unsharp-masknumpy: ospray: particle_volume/pathtracer/real_timesvt-vp9: PSNR/SSIM Optimized - Bosphorus 4Kcryptsetup: Serpent-XTS 256b Encryptiongraphics-magick: Enhancedappleseed: Disney Materialgraphics-magick: HWB Color Spaceselenium: Jetstream 2 - Google Chromeselenium: PSPDFKit WASM - Google Chromesvt-av1: Preset 13 - Bosphorus 1080pblender: BMW27 - CPU-Onlyscikit-learn: Isolation Forestencode-flac: WAV To FLACavifenc: 0astcenc: Exhaustivedarktable: Boat - CPU-onlyluxcorerender: Orange Juice - CPUselenium: WASM imageConvolute - Firefoxgit: Time To Complete Common Git Commandsrocksdb: Rand Readgnuradio: IIR Filterbuild-godot: Time To Compilegegl: Cropgimp: auto-levelsgnuradio: Signal Source (Cosine)rbenchmark: x265: Bosphorus 4Kbuild-php: Time To Compileselenium: PSPDFKit WASM - Firefoxliquid-dsp: 1 - 256 - 512encode-mp3: WAV To MP3numenta-nab: Earthgecko Skylineselenium: Speedometer - Google Chromequantlib: Multi-Threadedpybench: Total For Average Test Timesavifenc: 2gegl: Reflectselenium: WASM imageConvolute - Google Chromeavifenc: 6, Losslessquantlib: Single-Threadedstargate: 44100 - 1024gimp: rotateluxcorerender: DLSC - CPUcryptsetup: Serpent-XTS 512b Decryptionavifenc: 6graphics-magick: Resizingocrmypdf: Processing 60 Page PDF Documentnode-web-tooling: compress-zstd: 12 - Decompression Speedcryptsetup: Serpent-XTS 256b Decryptionbuild-wasmer: Time To Compilenumenta-nab: Contextual Anomaly Detector OSEc-ray: Total Time - 4K, 16 Rays Per Pixelcryptsetup: PBKDF2-whirlpoolespeak: Text-To-Speech Synthesisperl-benchmark: Pod2htmlcompress-7zip: Compression Ratingpyperformance: raytracecryptsetup: Twofish-XTS 512b Decryptionwebp2: Quality 100, Compression Effort 5compress-zstd: 19, Long Mode - Compression Speedperl-benchmark: Interpreteropenssl: SHA512openscad: Leonardo Phone Case Slimcompress-lz4: 3 - Compression Speedselenium: WASM collisionDetection - Firefoxcryptsetup: Twofish-XTS 256b Decryptionpyperformance: crypto_pyaesliquid-dsp: 8 - 256 - 32avifenc: 10, Losslesscryptsetup: Twofish-XTS 256b Encryptionastcenc: Fastcryptsetup: Twofish-XTS 512b Encryptionselenium: Octane - Google Chromewireguard: gnuradio: Five Back to Back FIR Filtersopenscad: Mini-ITX Caseluaradio: Complex Phasegegl: Antialiascompress-zstd: 19 - Compression Speednumenta-nab: Windowed Gaussianastcenc: Thoroughwebp2: Defaultstargate: 480000 - 1024openscad: Pistolcompress-zstd: 8 - Decompression Speedpyperformance: json_loadscompress-lz4: 9 - Compression Speedgegl: Color Enhancernnoise: xmrig: Wownero - 1Mopenssl: SHA256unpack-firefox: firefox-84.0.source.tar.xzselenium: Octane - Firefoxastcenc: Mediumdarktable: Masskrug - CPU-onlyopenscad: Projector Mount Swivelsimdjson: LargeRandstargate: 192000 - 1024openscad: Retro Cargegl: Cartoonstargate: 96000 - 1024scikit-learn: MNIST Datasetgegl: Tile Glassdarktable: Server Room - CPU-onlygegl: Wavelet Blurcompress-zstd: 8, Long Mode - Decompression Speedwebp2: Quality 100, Lossless Compressionwebp2: Quality 95, Compression Effort 7webp2: Quality 75, Compression Effort 7scylladb: Writesrocksdb: Read While Writingtensorflow-lite: Inception ResNet V2pytorch: CPU - 32 - Efficientnet_v2_lpytorch: CPU - 1 - ResNet-152darktable: Server Rack - CPU-onlyscikit-learn: Hist Gradient Boosting Adultscikit-learn: Hist Gradient Boostingncnn: CPU - FastestDetncnn: CPU - regnety_400mncnn: CPU - squeezenet_ssdncnn: CPU - yolov4-tinyncnn: CPU - resnet50ncnn: CPU - alexnetncnn: CPU - resnet18ncnn: CPU - vgg16ncnn: CPU - googlenetncnn: CPU - blazefacencnn: CPU - efficientnet-b0ncnn: CPU - mnasnetncnn: CPU - shufflenet-v2ncnn: CPU-v3-v3 - mobilenet-v3ncnn: CPU-v2-v2 - mobilenet-v2ncnn: CPU - mobilenetaskap: Hogbom Clean OpenMPaskap: tConvolve MPI - Griddingasmfish: 1024 Hash Memory, 26 Depthsvt-vp9: Visual Quality Optimized - Bosphorus 4Ksvt-av1: Preset 8 - Bosphorus 1080psvt-av1: Preset 13 - Bosphorus 4Ksvt-av1: Preset 12 - Bosphorus 4Ksvt-av1: Preset 8 - Bosphorus 4Kjohn-the-ripper: bcryptjava-scimark2: Sparse Matrix MultiplyRyzen 7 7840UCore Ultra 7 155H230.845738.058.3010.1981376.501790.387.5722.41098.09113.02323.9412411.60004.0891410.987631.995332.06581645.2134.7917667.460468.695146.3041405.332363.215348.923.29219.82332952.42753.291690.548.50217.694340.383640932716012.54480.0112.56515742803776.851269.30208.88235.812.990129.22525.965611668.1229.0335.3511.976948.72523026961362973137551.202.8123597.32.6936116186.616182.916757.2201.543381.29181.1014693.92829774085234.740.9899.862079.5615816.0418984.5521.8634.843976.2717.8513667.622.191207933333374.0994143817961886.4513.0101.882633.51600333333659.93369.9454383193022110.213.36988293.5236134.442990129.13565228806171.23318.30989851.45250.2911741150.344.6731187120064.32234.008133.68271411525.0410515627780.34127.061363.8639922.89768828606777.14541.428364.2379521.3027.445147497.5583.44196.018358.384844.329.24266.54221.917.26518.3236.9490.112615180017.80174.7118.86108.962473.6621342.81.50186499.227.283713.69634.3381286.4461.172730.0072443.5186.0793.679553.7523.0350022.7211.21271315.093.46497.043501.628.9233.0419571.0467.1913.639462736671054.45.074.1216762566766.90307363.54920456674.162511.83488.6107938.20756.655261169713.80053519.13507.244.435026.90011.67743.630964.747724.586792.555.78750.7110750145.152266252.1066.229397.111.824725.366415.6392225.4216.261435.522869.4563880000112.5801.006457277.10538996.5844.6501397.533822000200.975181153333154053.014.489657.86120.10153.35863.9202210.079705892308.8202480503.207145.91223.47413.425142.1460.66935.6163.1020.737.80854818670478.6421.5535.52212.9734593.50.136813.3765.0242381172730005.38189.04935431965.359066.55623.32819.1010.2963967.43.12355212.4051.89861.36.72685311.45017.661831.7859.252.31835.22863.11889520622.4260.0759425677484175553.33.786.320.00182758441069277316.64770.20286.3553.345.53399700004.971546.1161.0215548.787357183.6511418.725.3041084.327.28911.56.9547.12848.423.09031654.2961896.511.868.0239.45115.1395210.91356169334314.0715020257.26143.5634.2381.571.5067662.42467.4372.28246655.62821.8402.76143.8411872.80.010.060.1365439177416130731.68.4920.140.34949.65371.1222.926.057.8717.2113.205.735.8641.128.270.794.002.562.292.682.9910.54239.6532806.453260531344.3195.34187.91691.52329.334186313760.21990.1998154.6329.7434.11611256.458711.2624.4471.886925.89358.29968.91581219.600111.99271194.749492.693092.2964580.0379.91964185.7702188.6114127.00091105.086109.592138.858.18515.60473566.51246.983694.573.92467.245710.18175749838866.08234.996.202549630916413.32425.20109.71445.36.952915.84285.561032916.9516.0219.526.653927.703044825674690960312780.781.66251164.31.602499662.69682.010102.0331.705506.56292.589126.54516681898207.591.5676.301331.1210250.9012364.8714.3452.663648.2426.7085462.114.97816613334982.81466965595662758.5719.0181.288923.31105333335297.74867.6656126278225158.812.34088204.794493.884280184.75095326459120.16452.851405572.05935.5916533935.786.4961646527810.56169.88297.19531940318.2414411475240.2593.631840.7829616.49251438810729.51719.827483.9351130.9820.736800473.9563.58256.3371086.413708.797.09298.52216.835.59066.4128.5169.742042907213.91136.5614.7785.381942.8771055.31.18146807.305.769210.86143.2701021.4277.009582.4983553.6231.9075.369143.3293.7648218.339.07161624.375.71075.724307.423.5353.7335087.1572.7311.144565303331288.04.163.3915570866755.15254438.911110166673.451617.04197.489736.86367.670218687311.60624181.54160.837.562722.75313.78536.961255.01228.926674.265.48859.4412582736.017229217.1276.736458.710.271629.201361.5492555.5918.601638.806763.7495346429127.6130.889406246.934506108.3165.2061562.830305667180.143201206667138795.816.067593.50108.37059.04780.7223231.820691809280.9272723458.372160.08244.33214.678155.3320.61256.1272.8522.541.08059535959518.7455.1665.96013.9984261.60.147414.4069.9502557160846675.76695.30233129903.155270.93921.88820.349.6694224.53.31911613.1811.78812.66.35880810.85316.741931.8814.755.10733.47160.12393933521.3730.0795742981100183529.33.956.600.00175259423160382017.33867.52275.4533.247.23277600005.148528.0155.7553531.384689178.1761379.126.0201054.926.56611.87.1327.30208.223.16331655.4261861.912.066.9838.87414.9325271.51341159549714.2184974156.78663.5384.2671.561.5161412.43667.1282.29144955.81021.7752.76943.9291870.60.010.060.136832116042291255523.779.520.409206.641208.9545.8118.4012.7328.6528.058.1610.6953.5215.871.909.677.085.666.836.5519.54393.8531494.152510917444.4375.69692.87085.58521.514146673696.16OpenBenchmarking.org

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-StreamRyzen 7 7840UCore Ultra 7 155H2004006008001000SE +/- 0.16, N = 3SE +/- 4.85, N = 3230.85990.20
OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-StreamRyzen 7 7840UCore Ultra 7 155H2004006008001000Min: 230.55 / Avg: 230.85 / Max: 231.08Min: 982.41 / Avg: 990.2 / Max: 999.09

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Handwritten English Recognition FP16-INT8 - Device: CPURyzen 7 7840UCore Ultra 7 155H306090120150SE +/- 0.06, N = 3SE +/- 1.18, N = 1538.05154.63-shared -ldl - MIN: 23.35 / MAX: 47.88-isystem -std=c++11 -fvisibility=hidden -mavx2 -mfma -MD -MT -MF - MIN: 82.86 / MAX: 237.891. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Handwritten English Recognition FP16-INT8 - Device: CPURyzen 7 7840UCore Ultra 7 155H306090120150Min: 37.96 / Avg: 38.05 / Max: 38.17Min: 144.49 / Avg: 154.63 / Max: 162.511. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Person Vehicle Bike Detection FP16 - Device: CPURyzen 7 7840UCore Ultra 7 155H714212835SE +/- 0.01, N = 3SE +/- 0.24, N = 158.3029.74-shared -ldl - MIN: 4.8 / MAX: 23.8-isystem -std=c++11 -fvisibility=hidden -mavx2 -mfma -MD -MT -MF - MIN: 13.33 / MAX: 63.081. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Person Vehicle Bike Detection FP16 - Device: CPURyzen 7 7840UCore Ultra 7 155H714212835Min: 8.29 / Avg: 8.3 / Max: 8.31Min: 28.08 / Avg: 29.74 / Max: 31.281. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-StreamRyzen 7 7840UCore Ultra 7 155H816243240SE +/- 0.01, N = 3SE +/- 0.06, N = 310.2034.12
OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-StreamRyzen 7 7840UCore Ultra 7 155H714212835Min: 10.18 / Avg: 10.2 / Max: 10.22Min: 33.99 / Avg: 34.12 / Max: 34.19

Whisper.cpp

Whisper.cpp is a port of OpenAI's Whisper model in C/C++. Whisper.cpp is developed by Georgi Gerganov for transcribing WAV audio files to text / speech recognition. Whisper.cpp supports ARM NEON, x86 AVX, and other advanced CPU features. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterWhisper.cpp 1.4Model: ggml-small.en - Input: 2016 State of the UnionRyzen 7 7840UCore Ultra 7 155H30060090012001500SE +/- 2.21, N = 3SE +/- 2.40, N = 3376.501256.461. (CXX) g++ options: -O3 -std=c++11 -fPIC -pthread
OpenBenchmarking.orgSeconds, Fewer Is BetterWhisper.cpp 1.4Model: ggml-small.en - Input: 2016 State of the UnionRyzen 7 7840UCore Ultra 7 155H2004006008001000Min: 372.18 / Avg: 376.5 / Max: 379.41Min: 1254.02 / Avg: 1256.46 / Max: 1261.251. (CXX) g++ options: -O3 -std=c++11 -fPIC -pthread

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPURyzen 7 7840UCore Ultra 7 155H0.28350.5670.85051.1341.4175SE +/- 0.00, N = 3SE +/- 0.01, N = 100.381.26-shared -ldl - MIN: 0.19 / MAX: 7.48-isystem -std=c++11 -fvisibility=hidden -mavx2 -mfma -MD -MT -MF - MIN: 0.38 / MAX: 8.731. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPURyzen 7 7840UCore Ultra 7 155H246810Min: 0.38 / Avg: 0.38 / Max: 0.38Min: 1.21 / Avg: 1.26 / Max: 1.31. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Vehicle Detection FP16-INT8 - Device: CPURyzen 7 7840UCore Ultra 7 155H612182430SE +/- 0.01, N = 3SE +/- 0.27, N = 57.5724.44-shared -ldl - MIN: 3.81 / MAX: 24.08-isystem -std=c++11 -fvisibility=hidden -mavx2 -mfma -MD -MT -MF - MIN: 11.36 / MAX: 49.161. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Vehicle Detection FP16-INT8 - Device: CPURyzen 7 7840UCore Ultra 7 155H612182430Min: 7.54 / Avg: 7.57 / Max: 7.59Min: 23.45 / Avg: 24.44 / Max: 24.981. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-StreamRyzen 7 7840UCore Ultra 7 155H1632486480SE +/- 0.02, N = 3SE +/- 0.50, N = 322.4171.89
OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-StreamRyzen 7 7840UCore Ultra 7 155H1428425670Min: 22.36 / Avg: 22.41 / Max: 22.44Min: 71.3 / Avg: 71.89 / Max: 72.88

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Weld Porosity Detection FP16-INT8 - Device: CPURyzen 7 7840UCore Ultra 7 155H612182430SE +/- 0.01, N = 3SE +/- 0.22, N = 158.0925.89-shared -ldl - MIN: 3.61 / MAX: 21.87-isystem -std=c++11 -fvisibility=hidden -mavx2 -mfma -MD -MT -MF - MIN: 12.01 / MAX: 76.151. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Weld Porosity Detection FP16-INT8 - Device: CPURyzen 7 7840UCore Ultra 7 155H612182430Min: 8.07 / Avg: 8.09 / Max: 8.1Min: 24.26 / Avg: 25.89 / Max: 27.221. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Person Detection FP16 - Device: CPURyzen 7 7840UCore Ultra 7 155H80160240320400SE +/- 0.05, N = 3SE +/- 3.66, N = 15113.02358.29-shared -ldl - MIN: 75.9 / MAX: 137.69-isystem -std=c++11 -fvisibility=hidden -mavx2 -mfma -MD -MT -MF - MIN: 189.82 / MAX: 500.521. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Person Detection FP16 - Device: CPURyzen 7 7840UCore Ultra 7 155H60120180240300Min: 112.96 / Avg: 113.02 / Max: 113.12Min: 340.08 / Avg: 358.29 / Max: 378.261. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-StreamRyzen 7 7840UCore Ultra 7 155H2004006008001000SE +/- 1.12, N = 3SE +/- 2.88, N = 3323.94968.92
OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-StreamRyzen 7 7840UCore Ultra 7 155H2004006008001000Min: 322.69 / Avg: 323.94 / Max: 326.19Min: 965.16 / Avg: 968.92 / Max: 974.57

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-StreamRyzen 7 7840UCore Ultra 7 155H30060090012001500SE +/- 0.22, N = 3SE +/- 4.45, N = 3411.601219.60
OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-StreamRyzen 7 7840UCore Ultra 7 155H2004006008001000Min: 411.18 / Avg: 411.6 / Max: 411.89Min: 1211.94 / Avg: 1219.6 / Max: 1227.37

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-StreamRyzen 7 7840UCore Ultra 7 155H3691215SE +/- 0.0053, N = 3SE +/- 0.1010, N = 34.089111.9927
OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-StreamRyzen 7 7840UCore Ultra 7 155H3691215Min: 4.08 / Avg: 4.09 / Max: 4.1Min: 11.81 / Avg: 11.99 / Max: 12.16

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-StreamRyzen 7 7840UCore Ultra 7 155H30060090012001500SE +/- 0.93, N = 3SE +/- 8.92, N = 3410.991194.75
OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-StreamRyzen 7 7840UCore Ultra 7 155H2004006008001000Min: 409.94 / Avg: 410.99 / Max: 412.83Min: 1182.6 / Avg: 1194.75 / Max: 1212.13

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-StreamRyzen 7 7840UCore Ultra 7 155H20406080100SE +/- 0.01, N = 3SE +/- 0.81, N = 332.0092.69
OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-StreamRyzen 7 7840UCore Ultra 7 155H20406080100Min: 31.98 / Avg: 32 / Max: 32.02Min: 91.07 / Avg: 92.69 / Max: 93.6

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-StreamRyzen 7 7840UCore Ultra 7 155H20406080100SE +/- 0.08, N = 3SE +/- 0.98, N = 332.0792.30
OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-StreamRyzen 7 7840UCore Ultra 7 155H20406080100Min: 31.95 / Avg: 32.07 / Max: 32.22Min: 90.34 / Avg: 92.3 / Max: 93.36

LuaRadio

LuaRadio is a lightweight software-defined radio (SDR) framework built atop LuaJIT. LuaRadio provides a suite of source, sink, and processing blocks, with a simple API for defining flow graphs, running flow graphs, creating blocks, and creating data types. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMiB/s, More Is BetterLuaRadio 0.9.1Test: Five Back to Back FIR FiltersRyzen 7 7840UCore Ultra 7 155H400800120016002000SE +/- 5.35, N = 3SE +/- 4.56, N = 31645.2580.0
OpenBenchmarking.orgMiB/s, More Is BetterLuaRadio 0.9.1Test: Five Back to Back FIR FiltersRyzen 7 7840UCore Ultra 7 155H30060090012001500Min: 1635.6 / Avg: 1645.17 / Max: 1654.1Min: 573.1 / Avg: 579.97 / Max: 588.6

Whisper.cpp

Whisper.cpp is a port of OpenAI's Whisper model in C/C++. Whisper.cpp is developed by Georgi Gerganov for transcribing WAV audio files to text / speech recognition. Whisper.cpp supports ARM NEON, x86 AVX, and other advanced CPU features. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterWhisper.cpp 1.4Model: ggml-base.en - Input: 2016 State of the UnionRyzen 7 7840UCore Ultra 7 155H80160240320400SE +/- 0.81, N = 3SE +/- 4.62, N = 4134.79379.921. (CXX) g++ options: -O3 -std=c++11 -fPIC -pthread
OpenBenchmarking.orgSeconds, Fewer Is BetterWhisper.cpp 1.4Model: ggml-base.en - Input: 2016 State of the UnionRyzen 7 7840UCore Ultra 7 155H70140210280350Min: 133.21 / Avg: 134.79 / Max: 135.86Min: 366.67 / Avg: 379.92 / Max: 388.141. (CXX) g++ options: -O3 -std=c++11 -fPIC -pthread

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-StreamRyzen 7 7840UCore Ultra 7 155H4080120160200SE +/- 0.16, N = 3SE +/- 0.41, N = 367.46185.77
OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-StreamRyzen 7 7840UCore Ultra 7 155H306090120150Min: 67.21 / Avg: 67.46 / Max: 67.77Min: 185.09 / Avg: 185.77 / Max: 186.51

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-StreamRyzen 7 7840UCore Ultra 7 155H4080120160200SE +/- 0.28, N = 3SE +/- 0.60, N = 368.70188.61
OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-StreamRyzen 7 7840UCore Ultra 7 155H306090120150Min: 68.24 / Avg: 68.7 / Max: 69.2Min: 187.69 / Avg: 188.61 / Max: 189.75

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-StreamRyzen 7 7840UCore Ultra 7 155H306090120150SE +/- 0.05, N = 3SE +/- 1.03, N = 346.30127.00
OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-StreamRyzen 7 7840UCore Ultra 7 155H20406080100Min: 46.21 / Avg: 46.3 / Max: 46.37Min: 124.99 / Avg: 127 / Max: 128.38

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Face Detection FP16-INT8 - Device: CPURyzen 7 7840UCore Ultra 7 155H2004006008001000SE +/- 1.59, N = 3SE +/- 7.14, N = 15405.331105.08-shared -ldl - MIN: 342.91 / MAX: 427.35-isystem -std=c++11 -fvisibility=hidden -mavx2 -mfma -MD -MT -MF - MIN: 458.17 / MAX: 1748.771. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Face Detection FP16-INT8 - Device: CPURyzen 7 7840UCore Ultra 7 155H2004006008001000Min: 402.21 / Avg: 405.33 / Max: 407.38Min: 1059.94 / Avg: 1105.08 / Max: 1155.511. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2022-05-18Model: SqueezeNetRyzen 7 7840UCore Ultra 7 155H13002600390052006500SE +/- 22.28, N = 7SE +/- 63.74, N = 42363.216109.59
OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2022-05-18Model: SqueezeNetRyzen 7 7840UCore Ultra 7 155H11002200330044005500Min: 2259.11 / Avg: 2363.21 / Max: 2445.57Min: 5948.98 / Avg: 6109.59 / Max: 6247.24

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgsign/s, More Is BetterOpenSSL 3.1Algorithm: RSA4096Ryzen 7 7840UCore Ultra 7 155H11002200330044005500SE +/- 23.60, N = 3SE +/- 16.23, N = 105348.92138.81. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl
OpenBenchmarking.orgsign/s, More Is BetterOpenSSL 3.1Algorithm: RSA4096Ryzen 7 7840UCore Ultra 7 155H9001800270036004500Min: 5322.6 / Avg: 5348.9 / Max: 5396Min: 2073.2 / Avg: 2138.76 / Max: 2267.41. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Road Segmentation ADAS FP16-INT8 - Device: CPURyzen 7 7840UCore Ultra 7 155H1326395265SE +/- 0.13, N = 3SE +/- 0.54, N = 723.2958.18-shared -ldl - MIN: 11.62 / MAX: 32.63-isystem -std=c++11 -fvisibility=hidden -mavx2 -mfma -MD -MT -MF - MIN: 27.84 / MAX: 100.271. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Road Segmentation ADAS FP16-INT8 - Device: CPURyzen 7 7840UCore Ultra 7 155H1224364860Min: 23.05 / Avg: 23.29 / Max: 23.5Min: 55.96 / Avg: 58.18 / Max: 59.61. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv

Scikit-Learn

Scikit-learn is a Python module for machine learning built on NumPy, SciPy, and is BSD-licensed. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterScikit-Learn 1.2.2Benchmark: TSNE MNIST DatasetRyzen 7 7840UCore Ultra 7 155H110220330440550SE +/- 0.32, N = 3SE +/- 0.25, N = 3219.82515.601. (F9X) gfortran options: -O0
OpenBenchmarking.orgSeconds, Fewer Is BetterScikit-Learn 1.2.2Benchmark: TSNE MNIST DatasetRyzen 7 7840UCore Ultra 7 155H90180270360450Min: 219.24 / Avg: 219.82 / Max: 220.34Min: 515.14 / Avg: 515.6 / Max: 515.971. (F9X) gfortran options: -O0

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2022-05-18Model: Inception V4Ryzen 7 7840UCore Ultra 7 155H16K32K48K64K80KSE +/- 123.63, N = 3SE +/- 942.37, N = 1332952.473566.5
OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2022-05-18Model: Inception V4Ryzen 7 7840UCore Ultra 7 155H13K26K39K52K65KMin: 32705.4 / Avg: 32952.43 / Max: 33085.2Min: 69570.3 / Avg: 73566.52 / Max: 81429.8

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and Convolutional Resamping Benchmark (tConvolve) as well as some previous ASKAP benchmarks being included as well for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMpix/sec, More Is BetterASKAP 1.0Test: tConvolve MPI - DegriddingRyzen 7 7840UCore Ultra 7 155H6001200180024003000SE +/- 35.04, N = 3SE +/- 16.69, N = 32753.291246.981. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
OpenBenchmarking.orgMpix/sec, More Is BetterASKAP 1.0Test: tConvolve MPI - DegriddingRyzen 7 7840UCore Ultra 7 155H5001000150020002500Min: 2705.11 / Avg: 2753.29 / Max: 2821.46Min: 1226.15 / Avg: 1246.98 / Max: 1279.981. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation focused on TensorFlow machine learning for mobile, IoT, edge, and other cases. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2022-05-18Model: Mobilenet FloatRyzen 7 7840UCore Ultra 7 155H8001600240032004000SE +/- 2.40, N = 3SE +/- 57.37, N = 121690.543694.57
OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2022-05-18Model: Mobilenet FloatRyzen 7 7840UCore Ultra 7 155H6001200180024003000Min: 1687.13 / Avg: 1690.54 / Max: 1695.17Min: 3443.31 / Avg: 3694.57 / Max: 4047.6

PyTorch

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_lRyzen 7 7840UCore Ultra 7 155H246810SE +/- 0.05, N = 3SE +/- 0.06, N = 98.503.92MIN: 6.64 / MAX: 8.87MIN: 2.17 / MAX: 5.21
OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_lRyzen 7 7840UCore Ultra 7 155H3691215Min: 8.41 / Avg: 8.5 / Max: 8.59Min: 3.6 / Avg: 3.92 / Max: 4.13

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.4Compression Level: 8 - Compression SpeedRyzen 7 7840UCore Ultra 7 155H100200300400500SE +/- 1.77, N = 15SE +/- 6.52, N = 3217.6467.2-llzma1. (CC) gcc options: -O3 -pthread -lz
OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.4Compression Level: 8 - Compression SpeedRyzen 7 7840UCore Ultra 7 155H80160240320400Min: 204.8 / Avg: 217.59 / Max: 228.4Min: 457.8 / Avg: 467.17 / Max: 479.71. (CC) gcc options: -O3 -pthread -lz

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated vian neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNodes Per Second, More Is BetterLeelaChessZero 0.30Backend: BLASRyzen 7 7840UCore Ultra 7 155H20406080100SE +/- 1.12, N = 9SE +/- 0.62, N = 994451. (CXX) g++ options: -flto -pthread
OpenBenchmarking.orgNodes Per Second, More Is BetterLeelaChessZero 0.30Backend: BLASRyzen 7 7840UCore Ultra 7 155H20406080100Min: 91 / Avg: 94 / Max: 101Min: 43 / Avg: 44.78 / Max: 481. (CXX) g++ options: -flto -pthread

Java SciMark

This test runs the Java version of SciMark 2, which is a benchmark for scientific and numerical computing developed by programmers at the National Institute of Standards and Technology. This benchmark is made up of Fast Foruier Transform, Jacobi Successive Over-relaxation, Monte Carlo, Sparse Matrix Multiply, and dense LU matrix factorization benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMflops, More Is BetterJava SciMark 2.2Computational Test: Fast Fourier TransformRyzen 7 7840UCore Ultra 7 155H150300450600750SE +/- 2.39, N = 3SE +/- 14.60, N = 3340.38710.18
OpenBenchmarking.orgMflops, More Is BetterJava SciMark 2.2Computational Test: Fast Fourier TransformRyzen 7 7840UCore Ultra 7 155H130260390520650Min: 337.66 / Avg: 340.38 / Max: 345.15Min: 684.51 / Avg: 710.18 / Max: 735.08

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgbyte/s, More Is BetterOpenSSL 3.1Algorithm: ChaCha20-Poly1305Ryzen 7 7840UCore Ultra 7 155H8000M16000M24000M32000M40000MSE +/- 15102420.85, N = 3SE +/- 183459477.47, N = 1236409327160175749838861. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl
OpenBenchmarking.orgbyte/s, More Is BetterOpenSSL 3.1Algorithm: ChaCha20-Poly1305Ryzen 7 7840UCore Ultra 7 155H6000M12000M18000M24000M30000MMin: 36381224960 / Avg: 36409327160 / Max: 36432967270Min: 16647641770 / Avg: 17574983885.83 / Max: 182164627501. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

PyTorch

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 32 - Model: ResNet-152Ryzen 7 7840UCore Ultra 7 155H3691215SE +/- 0.09, N = 3SE +/- 0.08, N = 912.546.08MIN: 9.91 / MAX: 13.32MIN: 2.91 / MAX: 7.76
OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 32 - Model: ResNet-152Ryzen 7 7840UCore Ultra 7 155H48121620Min: 12.36 / Avg: 12.54 / Max: 12.65Min: 5.61 / Avg: 6.08 / Max: 6.34

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Person Vehicle Bike Detection FP16 - Device: CPURyzen 7 7840UCore Ultra 7 155H100200300400500SE +/- 0.36, N = 3SE +/- 1.90, N = 15480.01234.99-shared -ldl-isystem -std=c++11 -fvisibility=hidden -mavx2 -mfma -MD -MT -MF1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Person Vehicle Bike Detection FP16 - Device: CPURyzen 7 7840UCore Ultra 7 155H90180270360450Min: 479.3 / Avg: 480.01 / Max: 480.4Min: 223.17 / Avg: 234.99 / Max: 248.561. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv

PyTorch

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 16 - Model: ResNet-152Ryzen 7 7840UCore Ultra 7 155H3691215SE +/- 0.02, N = 3SE +/- 0.06, N = 712.566.20MIN: 11.12 / MAX: 13.19MIN: 3.26 / MAX: 7.73
OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 16 - Model: ResNet-152Ryzen 7 7840UCore Ultra 7 155H48121620Min: 12.54 / Avg: 12.56 / Max: 12.59Min: 5.9 / Avg: 6.2 / Max: 6.37

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgbyte/s, More Is BetterOpenSSL 3.1Algorithm: ChaCha20Ryzen 7 7840UCore Ultra 7 155H11000M22000M33000M44000M55000MSE +/- 10619881.22, N = 3SE +/- 189080382.65, N = 1151574280377254963091641. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl
OpenBenchmarking.orgbyte/s, More Is BetterOpenSSL 3.1Algorithm: ChaCha20Ryzen 7 7840UCore Ultra 7 155H9000M18000M27000M36000M45000MMin: 51553109060 / Avg: 51574280376.67 / Max: 51586341550Min: 24041682810 / Avg: 25496309163.64 / Max: 261209666901. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl

PyPerformance

PyPerformance is the reference Python performance benchmark suite. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: python_startupRyzen 7 7840UCore Ultra 7 155H3691215SE +/- 0.01, N = 3SE +/- 0.00, N = 36.8513.30
OpenBenchmarking.orgMilliseconds, Fewer Is BetterPyPerformance 1.0.0Benchmark: python_startupRyzen 7 7840UCore Ultra 7 155H48121620Min: 6.84 / Avg: 6.85 / Max: 6.86Min: 13.3 / Avg: 13.3 / Max: 13.3

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and Convolutional Resamping Benchmark (tConvolve) as well as some previous ASKAP benchmarks being included as well for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMillion Grid Points Per Second, More Is BetterASKAP 1.0Test: tConvolve MT - GriddingRyzen 7 7840UCore Ultra 7 155H5001000150020002500SE +/- 4.65, N = 3SE +/- 3.94, N = 31269.302425.201. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp
OpenBenchmarking.orgMillion Grid Points Per Second, More Is BetterASKAP 1.0Test: tConvolve MT - GriddingRyzen 7 7840UCore Ultra 7 155H400800120016002000Min: 1260.01 / Avg: 1269.3 / Max: 1274.33Min: 2417.51 / Avg: 2425.2 / Max: 2430.551. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Handwritten English Recognition FP16-INT8 - Device: CPURyzen 7 7840UCore Ultra 7 155H50100150200250SE +/- 0.26, N = 3SE +/- 0.85, N = 15208.88109.71-shared -ldl-isystem -std=c++11 -fvisibility=hidden -mavx2 -mfma -MD -MT -MF1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Handwritten English Recognition FP16-INT8 - Device: CPURyzen 7 7840UCore Ultra 7 155H4080120160200Min: 208.37 / Avg: 208.88 / Max: 209.21Min: 104.31 / Avg: 109.71 / Max: 117.311. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.4Compression Level: 8, Long Mode - Compression SpeedRyzen 7 7840UCore Ultra 7 155H100200300400500SE +/- 2.40, N = 6SE +/- 1.88, N = 3235.8445.3-llzma1. (CC) gcc options: -O3 -pthread -lz
OpenBenchmarking.orgMB/s, More Is BetterZstd Compression 1.5.4Compression Level: 8, Long Mode - Compression SpeedRyzen 7 7840UCore Ultra 7 155H80160240320400Min: 229.8 / Avg: 235.78 / Max: 243.8Min: 442.5 / Avg: 445.33 / Max: 448.91. (CC) gcc options: -O3 -pthread -lz

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-StreamRyzen 7 7840UCore Ultra 7 155H3691215SE +/- 0.0105, N = 3SE +/- 0.0147, N = 312.99016.9529
OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-StreamRyzen 7 7840UCore Ultra 7 155H48121620Min: 12.97 / Avg: 12.99 / Max: 13.01Min: 6.93 / Avg: 6.95 / Max: 6.98

PyTorch

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 32 - Model: ResNet-50Ryzen 7 7840UCore Ultra 7 155H714212835SE +/- 0.14, N = 3SE +/- 0.21, N = 329.2215.84MIN: 26.95 / MAX: 29.98MIN: 11.12 / MAX: 20.44
OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 32 - Model: ResNet-50Ryzen 7 7840UCore Ultra 7 155H612182430Min: 28.96 / Avg: 29.22 / Max: 29.45Min: 15.48 / Avg: 15.84 / Max: 16.21

OpenVINO

This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Vehicle Detection FP16-INT8 - Device: CPURyzen 7 7840UCore Ultra 7 155H110220330440550SE +/- 0.91, N = 3SE +/- 3.17, N = 5525.96285.56-shared -ldl-isystem -std=c++11 -fvisibility=hidden -mavx2 -mfma -MD -MT -MF1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv