AMD Ryzen 9 7950X3D Modes On Linux

Ryzen 9 7950X3D benchmarks for a future article by Michael Larabel.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2302261-NE-7950X3DMO02
Tests in this result file span the following categories (number of tests in parentheses):

Audio Encoding (2), AV1 (5), Bioinformatics (2), BLAS - Basic Linear Algebra Sub-Routine (2), C++ Boost (4), Web Browsers (1), Chess Test Suite (4), Timed Code Compilation (7), C/C++ Compiler (21), Compression (3), CPU Massive (41), Creator Workloads (42), Cryptocurrency Benchmarks / CPU Mining (2), Cryptography (3), Database Test Suite (3), Encoding (13), Fortran (4), Game Development (6), HPC - High Performance Computing (28), Imaging (8), Java (2), Common Kernel Benchmarks (4), Linear Algebra (2), Machine Learning (11), Molecular Dynamics (8), MPI Benchmarks (8), Multi-Core (47), Node.js + NPM (2), NVIDIA GPU Compute (7), Intel oneAPI (6), OpenMPI (11), Productivity (3), Programmer / Developer System Benchmarks (14), Python (4), Raytracing (3), Renderers (10), Scientific Computing (14), Software Defined Radio (4), Server (7), Server CPU (28), Single-Threaded (8), Speech (2), Telephony (2), Texture Compression (3), Video Encoding (11), Common Workstation Benchmarks (5).


Run Management

Result Identifier   Date                Test Run Duration
Auto                February 17 2023    1 Day, 8 Hours, 38 Minutes
Prefer Cache        February 19 2023    1 Day, 9 Hours, 1 Minute
Prefer Freq         February 20 2023    1 Day, 12 Hours, 27 Minutes

Average test run duration: 1 Day, 10 Hours, 2 Minutes.



System Under Test:

  Processor:         AMD Ryzen 9 7950X3D 16-Core @ 5.76GHz (16 Cores / 32 Threads)
  Motherboard:       ASUS ROG CROSSHAIR X670E HERO (9922 BIOS)
  Chipset:           AMD Device 14d8
  Memory:            32GB
  Disk:              Western Digital WD_BLACK SN850X 1000GB + 2000GB
  Graphics:          AMD Radeon RX 7900 XTX 24GB (2304/1249MHz)
  Audio:             AMD Device ab30
  Monitor:           ASUS MG28U
  Network:           Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
  OS:                Ubuntu 23.04
  Kernel:            6.2.0-060200rc8daily20230213-generic (x86_64)
  Desktop:           GNOME Shell 43.2
  Display Server:    X Server 1.21.1.6
  OpenGL:            4.6 Mesa 23.1.0-devel (git-efcb639 2023-02-13 lunar-oibaf-ppa) (LLVM 15.0.7 DRM 3.49)
  Compiler:          GCC 12.2.0
  File-System:       ext4
  Screen Resolution: 3840x2160

System Logs / Notes:

- Transparent Huge Pages: madvise
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-AKimc9/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-AKimc9/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Disk mount options: NONE / errors=remount-ro,relatime,rw / Block Size: 4096
- Scaling Governor: amd-pstate performance (Boost: Enabled); CPU Microcode: 0xa601203
- OpenJDK Runtime Environment (build 17.0.6+10-Ubuntu-0ubuntu1)
- Python 3.11.1
- Security: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected; srbds: Not affected; tsx_async_abort: Not affected

[Chart: Result Overview - Auto vs. Prefer Cache vs. Prefer Freq, relative performance on a 100% to 112% axis. Tests shown: Google Draco, Himeno Benchmark, OSPRay Studio, PyBench, PHPBench, QuantLib, DeepSpeech, Stress-NG, srsRAN, Numpy Benchmark, simdjson, LAME MP3 Encoding, KTX-Software toktx, ACES DGEMM, Pennant, Radiance Benchmark, SQLite Speedtest, RNNoise, TensorFlow Lite, oneDNN.]

[Chart: Performance-Per-Watt Result Overview (geometric means) - Auto vs. Prefer Cache vs. Prefer Freq, relative efficiency on a 100% to 120% axis. Tests shown: Numpy Benchmark, PHPBench, QuantLib, simdjson, Himeno Benchmark, LZ4 Compression, Stress-NG, srsRAN, ACES DGEMM, ASKAP, ClickHouse, GraphicsMagick, Node.js V8 Web Tooling Benchmark, libjpeg-turbo tjbench, Algebraic Multi-Grid Benchmark, Liquid-DSP, ASTC Encoder, WebP Image Encode, BRL-CAD, Node.js Express HTTP Load Test, OpenVKL, Chaos Group V-RAY, LuxCoreRender, Stargate Digital Audio Workstation, IndigoBench, Crafty, Natron, Selenium, LuaRadio, High Performance Conjugate Gradient, x265, 7-Zip Compression, Xmrig, OpenEMS, LULESH, OSPRay, SVT-VP9, GNU Radio, dav1d, Embree, SVT-HEVC, VP9 libvpx Encoding, Sysbench, x264, LAMMPS Molecular Dynamics Simulator, SVT-AV1, Cpuminer-Opt, Kvazaar, Zstd Compression, LeelaChessZero, RocksDB, AOM AV1, rav1e, GROMACS.]

[Detailed results table: the full per-test data for the Auto, Prefer Cache, and Prefer Freq runs (several hundred test cases spanning GNU Radio, LAMMPS, timed code compilation, ONNX Runtime, OSPRay, Stress-NG, video encoding, and the other categories listed above) was flattened beyond recovery during text extraction. The complete table is available in the OpenBenchmarking.org result file 2302261-NE-7950X3DMO02.]

GNU Radio

GNU Radio is a free software development toolkit providing signal processing blocks to implement software-defined radios (SDR) and signal processing systems. Learn more via the OpenBenchmarking.org test page.
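The FIR-based GNU Radio results below (FIR Filter, Five Back to Back FIR Filters) stress a sliding dot product of an input sample stream against a vector of filter taps. A minimal NumPy sketch of that computation (illustrative only, not GNU Radio's own SIMD-optimized blocks; the taps and test signal here are arbitrary):

```python
import numpy as np

def fir_filter(samples: np.ndarray, taps: np.ndarray) -> np.ndarray:
    """FIR filtering: each output sample is the dot product of the
    most recent len(taps) input samples with the tap vector."""
    return np.convolve(samples, taps, mode="valid")

# Arbitrary illustration: a 5-tap moving-average (low-pass) filter
# applied to a synthetic tone at 0.05 cycles/sample.
taps = np.ones(5) / 5.0
samples = np.sin(2 * np.pi * 0.05 * np.arange(1000))

out = fir_filter(samples, taps)
print(out.shape)  # (996,): "valid" convolution drops len(taps) - 1 samples
```

The benchmark throughput numbers (MiB/s) essentially measure how many such multiply-accumulate passes per second the CPU sustains over a continuous sample stream.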

GNU Radio 3.10.5.1 - Test: Hilbert Transform (MiB/s; more is better)

  Auto          720.2  (SE +/- 3.80, N = 9; Min 703.7 / Max 738)
  Prefer Cache  721.3  (SE +/- 3.09, N = 9; Min 702.8 / Max 735.1)
  Prefer Freq   687.4  (SE +/- 3.07, N = 9; Min 674.7 / Max 697.5)

GNU Radio 3.10.5.1 - Test: FM Deemphasis Filter (MiB/s; more is better)

  Auto          1119.7  (SE +/- 2.62, N = 9; Min 1100.8 / Max 1128.7)
  Prefer Cache  1115.0  (SE +/- 4.98, N = 9; Min 1096.5 / Max 1139.1)
  Prefer Freq   1136.9  (SE +/- 3.20, N = 9; Min 1121 / Max 1153.5)

GNU Radio 3.10.5.1 - Test: IIR Filter (MiB/s; more is better)

  Auto          520.0  (SE +/- 2.24, N = 9; Min 512.4 / Max 534.1)
  Prefer Cache  518.2  (SE +/- 1.65, N = 9; Min 511.7 / Max 526.2)
  Prefer Freq   524.7  (SE +/- 0.92, N = 9; Min 518.5 / Max 527.1)

GNU Radio 3.10.5.1 - Test: FIR Filter (MiB/s; more is better)

  Auto          1390.7  (SE +/- 2.89, N = 9; Min 1379.7 / Max 1404.2)
  Prefer Cache  1390.5  (SE +/- 3.63, N = 9; Min 1372.7 / Max 1408.1)
  Prefer Freq   1267.7  (SE +/- 2.90, N = 9; Min 1255.6 / Max 1282.4)

GNU Radio 3.10.5.1 - Test: Signal Source (Cosine) (MiB/s; more is better)

  Auto          4813.0  (SE +/- 48.78, N = 9; Min 4541 / Max 5000.6)
  Prefer Cache  4918.1  (SE +/- 45.51, N = 9; Min 4753.7 / Max 5121.9)
  Prefer Freq   5008.9  (SE +/- 41.28, N = 9; Min 4777.3 / Max 5150.7)

GNU Radio 3.10.5.1 - Test: Five Back to Back FIR Filters (MiB/s; more is better)

  Auto          1356.5  (SE +/- 22.39, N = 9; Min 1269.3 / Max 1506.6)
  Prefer Cache  1404.4  (SE +/- 25.73, N = 9; Min 1300 / Max 1509.6)
  Prefer Freq   1352.3  (SE +/- 17.40, N = 9; Min 1232.5 / Max 1395.5)

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.
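LAMMPS itself is a large parallel C++ code, but the heart of any molecular dynamics timestep is a numerical integrator; LAMMPS' default run style uses velocity Verlet integration. As a rough illustration only (a single particle in a harmonic well, nothing like a real 20k-atom force field), the per-step arithmetic looks like:

```python
import numpy as np

def velocity_verlet(x, v, force, dt, steps):
    """Advance Newton's equations with the velocity Verlet scheme --
    the same integrator family MD codes apply to every atom, every step."""
    a = force(x)
    positions = []
    for _ in range(steps):
        x = x + v * dt + 0.5 * a * dt * dt   # position update
        a_new = force(x)                     # forces at the new positions
        v = v + 0.5 * (a + a_new) * dt       # velocity update
        a = a_new
        positions.append(x)
    return x, v, np.array(positions)

# Toy system: one unit-mass particle in a harmonic well, F(x) = -k*x.
k = 1.0
x, v, traj = velocity_verlet(x=1.0, v=0.0, force=lambda q: -k * q,
                             dt=0.01, steps=10_000)

# Velocity Verlet is symplectic: total energy stays close to the
# initial 0.5 over the whole run instead of drifting.
energy = 0.5 * v**2 + 0.5 * k * x**2
print(abs(energy - 0.5))  # stays tiny: the error is bounded, not secular
```

The ns/day metric below reports how much simulated time such stepping covers per day of wall-clock compute.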

LAMMPS Molecular Dynamics Simulator 23Jun2022 - Model: 20k Atoms (ns/day; more is better)

  Auto          16.34  (SE +/- 0.09, N = 3; Min 16.19 / Max 16.5)
  Prefer Cache  16.33  (SE +/- 0.10, N = 3; Min 16.13 / Max 16.45)
  Prefer Freq   16.39  (SE +/- 0.09, N = 3; Min 16.21 / Max 16.52)

  Compiled with: (CXX) g++ options: -O3 -lm -ldl

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture under test, or alternatively an allmodconfig build that enables all possible kernel modules. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 6.1 - Build: allmodconfig (Seconds; fewer is better)

  Auto          523.31  (SE +/- 0.26, N = 3; Min 522.79 / Max 523.6)
  Prefer Cache  517.73  (SE +/- 0.45, N = 3; Min 516.97 / Max 518.51)
  Prefer Freq   519.52  (SE +/- 0.32, N = 3; Min 518.89 / Max 519.88)

Blender

Blender is an open-source 3D creation and modeling software project. This test measures the performance of Blender's Cycles renderer with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported, as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

Blender 3.4 - Blend File: Barbershop - Compute: CPU-Only (Seconds; fewer is better)

  Auto          490.19  (SE +/- 0.17, N = 3; Min 489.98 / Max 490.52)
  Prefer Cache  489.98  (SE +/- 0.55, N = 3; Min 489.16 / Max 491.03)
  Prefer Freq   488.91  (SE +/- 0.59, N = 3; Min 488.32 / Max 490.1)

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.14 - Model: bertsquad-12 - Device: CPU - Executor: Standard (Inference Time Cost, ms; fewer is better)
  Auto:         59.64  (SE +/- 2.18, N = 15; Min: 49.39 / Max: 68.44)
  Prefer Cache: 58.06  (SE +/- 2.61, N = 15; Min: 49.04 / Max: 70.11)
  Prefer Freq:  55.59  (SE +/- 2.10, N = 15; Min: 49.03 / Max: 68.82)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: bertsquad-12 - Device: CPU - Executor: Standard (Inferences Per Second; more is better)
  Auto:         17.10  (SE +/- 0.65, N = 15; Min: 14.61 / Max: 20.25)
  Prefer Cache: 17.69  (SE +/- 0.75, N = 15; Min: 14.26 / Max: 20.39)
  Prefer Freq:  18.32  (SE +/- 0.63, N = 15; Min: 14.53 / Max: 20.39)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ASKAP

ASKAP is a set of benchmarks from the Australian SKA Pathfinder. The principal ASKAP benchmarks are the Hogbom Clean Benchmark (tHogbomClean) and the Convolutional Resampling Benchmark (tConvolve), with some earlier ASKAP benchmarks also included for OpenCL and CUDA execution of tConvolve. Learn more via the OpenBenchmarking.org test page.

ASKAP 1.0 - Test: tConvolve MT - Degridding (Million Grid Points Per Second; more is better)
  Auto:         2964.86  (SE +/- 12.65, N = 15; Min: 2860.08 / Max: 3004.3)
  Prefer Cache: 2970.04  (SE +/- 9.57, N = 15; Min: 2877.47 / Max: 3006.42)
  Prefer Freq:  2959.18  (SE +/- 11.63, N = 15; Min: 2884.29 / Max: 3006.42)
  1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

ASKAP 1.0 - Test: tConvolve MT - Gridding (Million Grid Points Per Second; more is better)
  Auto:         2371.38  (SE +/- 34.96, N = 15; Min: 2242.16 / Max: 2672.58)
  Prefer Cache: 2530.47  (SE +/- 31.33, N = 15; Min: 2333.66 / Max: 2694.56)
  Prefer Freq:  2322.20  (SE +/- 18.48, N = 15; Min: 2248.67 / Max: 2498.59)
  1. (CXX) g++ options: -O3 -fstrict-aliasing -fopenmp

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.14 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inference Time Cost, ms; fewer is better)
  Auto:         346.46  (SE +/- 20.18, N = 15; Min: 276.66 / Max: 473.82)
  Prefer Cache: 297.82  (SE +/- 15.84, N = 12; Min: 277.23 / Max: 470.93)
  Prefer Freq:  337.22  (SE +/- 19.31, N = 15; Min: 276.35 / Max: 469.59)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inferences Per Second; more is better)
  Auto:         3.01355  (SE +/- 0.15740, N = 15; Min: 2.11 / Max: 3.61)
  Prefer Cache: 3.42808  (SE +/- 0.12057, N = 12; Min: 2.12 / Max: 3.61)
  Prefer Freq:  3.08702  (SE +/- 0.15247, N = 15; Min: 2.13 / Max: 3.62)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inference Time Cost, ms; fewer is better)
  Auto:         23.65  (SE +/- 0.72, N = 15; Min: 21.59 / Max: 30.66)
  Prefer Cache: 24.14  (SE +/- 0.98, N = 15; Min: 21.52 / Max: 30.7)
  Prefer Freq:  22.98  (SE +/- 0.52, N = 12; Min: 21.57 / Max: 26.12)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inferences Per Second; more is better)
  Auto:         42.76  (SE +/- 1.11, N = 15; Min: 32.61 / Max: 46.32)
  Prefer Cache: 42.27  (SE +/- 1.48, N = 15; Min: 32.57 / Max: 46.46)
  Prefer Freq:  43.74  (SE +/- 0.93, N = 12; Min: 38.28 / Max: 46.35)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: super-resolution-10 - Device: CPU - Executor: Standard (Inference Time Cost, ms; fewer is better)
  Auto:         5.52425  (SE +/- 0.23907, N = 15; Min: 4.69 / Max: 7.15)
  Prefer Cache: 5.55023  (SE +/- 0.21808, N = 15; Min: 4.71 / Max: 6.72)
  Prefer Freq:  5.53549  (SE +/- 0.29232, N = 12; Min: 4.7 / Max: 7.11)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: super-resolution-10 - Device: CPU - Executor: Standard (Inferences Per Second; more is better)
  Auto:         185.54  (SE +/- 7.51, N = 15; Min: 139.91 / Max: 213.06)
  Prefer Cache: 184.02  (SE +/- 7.02, N = 15; Min: 148.88 / Max: 212.43)
  Prefer Freq:  185.95  (SE +/- 9.16, N = 12; Min: 140.55 / Max: 212.97)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

LuaRadio

LuaRadio is a lightweight software-defined radio (SDR) framework built atop LuaJIT. LuaRadio provides a suite of source, sink, and processing blocks, with a simple API for defining flow graphs, running flow graphs, creating blocks, and creating data types. Learn more via the OpenBenchmarking.org test page.

LuaRadio 0.9.1 - Test: Complex Phase (MiB/s; more is better)
  Auto:         1078.4  (SE +/- 4.07, N = 3; Min: 1071.9 / Max: 1085.9)
  Prefer Cache: 1089.7  (SE +/- 4.40, N = 5; Min: 1080.7 / Max: 1102.1)
  Prefer Freq:  1071.4  (SE +/- 4.41, N = 7; Min: 1056.1 / Max: 1088.3)

LuaRadio 0.9.1 - Test: Hilbert Transform (MiB/s; more is better)
  Auto:         153.7  (SE +/- 1.47, N = 3; Min: 150.8 / Max: 155.3)
  Prefer Cache: 155.6  (SE +/- 1.10, N = 5; Min: 152.9 / Max: 158.4)
  Prefer Freq:  154.6  (SE +/- 0.35, N = 7; Min: 153.9 / Max: 156.5)

LuaRadio 0.9.1 - Test: FM Deemphasis Filter (MiB/s; more is better)
  Auto:         527.5  (SE +/- 4.15, N = 3; Min: 519.4 / Max: 533)
  Prefer Cache: 527.7  (SE +/- 3.13, N = 5; Min: 515.7 / Max: 532.2)
  Prefer Freq:  527.9  (SE +/- 2.13, N = 7; Min: 518.6 / Max: 534)

LuaRadio 0.9.1 - Test: Five Back to Back FIR Filters (MiB/s; more is better)
  Auto:         1959.1  (SE +/- 22.01, N = 3; Min: 1915.1 / Max: 1982.9)
  Prefer Cache: 1945.7  (SE +/- 20.86, N = 5; Min: 1885.2 / Max: 2004.1)
  Prefer Freq:  2002.5  (SE +/- 17.45, N = 7; Min: 1926.2 / Max: 2045.3)

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library, which offers high-performance volume computation kernels and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 1.3.1 - Benchmark: vklBenchmark ISPC (Items / Sec; more is better)
  Auto:         407  (SE +/- 0.33, N = 3; Min: 407 / Max: 408; MIN: 53 / MAX: 5149)
  Prefer Cache: 407  (SE +/- 0.33, N = 3; Min: 407 / Max: 408; MIN: 53 / MAX: 5165)
  Prefer Freq:  419  (SE +/- 1.86, N = 3; Min: 415 / Max: 421; MIN: 53 / MAX: 5936)

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10 - Benchmark: particle_volume/pathtracer/real_time (Items Per Second; more is better)
  Auto:         235.95  (SE +/- 0.40, N = 3; Min: 235.18 / Max: 236.51)
  Prefer Cache: 235.72  (SE +/- 0.12, N = 3; Min: 235.53 / Max: 235.95)
  Prefer Freq:  235.28  (SE +/- 0.84, N = 3; Min: 233.72 / Max: 236.58)

OpenEMS

OpenEMS is a free and open electromagnetic field solver using the FDTD method. This test profile runs OpenEMS and pyEMS benchmark demos. Learn more via the OpenBenchmarking.org test page.
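
The FDTD method that OpenEMS implements leapfrogs electric and magnetic field updates on a staggered grid. A minimal one-dimensional sketch in normalized units (purely illustrative; this is not OpenEMS code, and the grid sizes and source parameters are arbitrary assumptions):

```python
import numpy as np

def fdtd_1d(nx=200, nt=300, courant=0.5):
    # Normalized 1D FDTD: E and H live on staggered grid points and are
    # updated alternately from each other's spatial differences (the curls).
    ez = np.zeros(nx)       # electric field at integer grid points
    hy = np.zeros(nx - 1)   # magnetic field at half-integer grid points
    for t in range(nt):
        hy += courant * np.diff(ez)        # H update from curl of E
        ez[1:-1] += courant * np.diff(hy)  # E update from curl of H
        ez[nx // 2] += np.exp(-((t - 30.0) / 10.0) ** 2)  # soft Gaussian source
    return ez
```

The Courant number of 0.5 keeps the scheme stable; the grid ends act as perfectly reflecting boundaries since the edge E values are never updated.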

OpenEMS 0.0.35-86 - Test: pyEMS Coupler (MCells/s; more is better)
  Auto:         61.61  (SE +/- 0.35, N = 3; Min: 61.01 / Max: 62.21)
  Prefer Cache: 62.01  (SE +/- 0.12, N = 3; Min: 61.81 / Max: 62.21)
  Prefer Freq:  60.92  (SE +/- 0.05, N = 3; Min: 60.82 / Max: 61)
  1. (CXX) g++ options: -O3 -rdynamic -ltinyxml -lcrypto -lcurl -lpthread -lsz -lz -ldl -lm -lexpat

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library, which offers high-performance volume computation kernels and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 1.3.1 - Benchmark: vklBenchmark Scalar (Items / Sec; more is better)
  Auto:         198  (SE +/- 0.67, N = 3; Min: 197 / Max: 199; MIN: 17 / MAX: 3749)
  Prefer Cache: 198  (SE +/- 0.33, N = 3; Min: 197 / Max: 198; MIN: 18 / MAX: 3736)
  Prefer Freq:  199  (SE +/- 0.00, N = 3; Min: 199 / Max: 199; MIN: 18 / MAX: 3749)

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.28 - Backend: BLAS (Nodes Per Second; more is better)
  Auto:         1928  (SE +/- 22.10, N = 3; Min: 1897 / Max: 1971)
  Prefer Cache: 1926  (SE +/- 6.23, N = 3; Min: 1914 / Max: 1934)
  Prefer Freq:  1931  (SE +/- 2.96, N = 3; Min: 1927 / Max: 1937)
  1. (CXX) g++ options: -flto -pthread

LeelaChessZero 0.28 - Backend: Eigen (Nodes Per Second; more is better)
  Auto:         1825  (SE +/- 12.03, N = 3; Min: 1807 / Max: 1848)
  Prefer Cache: 1810  (SE +/- 19.08, N = 3; Min: 1778 / Max: 1844)
  Prefer Freq:  1795  (SE +/- 14.57, N = 3; Min: 1767 / Max: 1816)
  1. (CXX) g++ options: -flto -pthread

Zstd Compression

This test measures the time needed to compress/decompress a sample file (silesia.tar) using Zstd (Zstandard) compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
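
The trade-off being measured is compression level versus throughput and ratio. A rough sketch of the same kind of measurement using Python's stdlib zlib as a stand-in (the standard library has no Zstd binding, so this illustrates the methodology rather than Zstd itself; the sample data is an arbitrary assumption):

```python
import time
import zlib

def compression_benchmark(data, levels=(1, 6, 9)):
    # Time compression at several levels; returns {level: (MB/s, ratio)}.
    results = {}
    for level in levels:
        t0 = time.perf_counter()
        out = zlib.compress(data, level)
        elapsed = time.perf_counter() - t0
        results[level] = (len(data) / elapsed / 1e6, len(data) / len(out))
    return results

sample = b"the quick brown fox jumps over the lazy dog " * 25_000  # ~1 MB
for level, (mb_s, ratio) in compression_benchmark(sample).items():
    print(f"level {level}: {mb_s:8.1f} MB/s, ratio {ratio:.1f}x")
```

Higher levels generally trade compression speed for ratio, which is exactly the axis the level-19 Zstd results below sit at the far end of.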

Zstd Compression 1.5.4 - Compression Level: 19 - Decompression Speed (MB/s; more is better)
  Auto:         2004.9  (SE +/- 26.46, N = 15; Min: 1938.2 / Max: 2201.5)
  Prefer Cache: 1949.1  (SE +/- 2.99, N = 15; Min: 1925.1 / Max: 1975.7)
  Prefer Freq:  1990.6  (SE +/- 22.67, N = 15; Min: 1925 / Max: 2190.1)
  1. (CC) gcc options: -O3 -pthread -lz -llzma -llz4

Zstd Compression 1.5.4 - Compression Level: 19 - Compression Speed (MB/s; more is better)
  Auto:         26.0  (SE +/- 0.22, N = 15; Min: 24.2 / Max: 26.9)
  Prefer Cache: 25.9  (SE +/- 0.28, N = 15; Min: 24 / Max: 27.1)
  Prefer Freq:  25.4  (SE +/- 0.35, N = 15; Min: 23 / Max: 26.9)
  1. (CC) gcc options: -O3 -pthread -lz -llzma -llz4

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10 - Benchmark: particle_volume/scivis/real_time (Items Per Second; more is better)
  Auto:         7.56423  (SE +/- 0.00618, N = 3; Min: 7.55 / Max: 7.57)
  Prefer Cache: 7.59681  (SE +/- 0.00464, N = 3; Min: 7.59 / Max: 7.61)
  Prefer Freq:  7.53018  (SE +/- 0.03588, N = 3; Min: 7.46 / Max: 7.57)

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.14 - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard (Inference Time Cost, ms; fewer is better)
  Auto:         14.70  (SE +/- 0.51, N = 15; Min: 13.27 / Max: 18)
  Prefer Cache: 13.42  (SE +/- 0.06, N = 3; Min: 13.3 / Max: 13.49)
  Prefer Freq:  14.39  (SE +/- 0.43, N = 12; Min: 13.39 / Max: 17.57)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard (Inferences Per Second; more is better)
  Auto:         69.08  (SE +/- 2.15, N = 15; Min: 55.55 / Max: 75.38)
  Prefer Cache: 74.51  (SE +/- 0.35, N = 3; Min: 74.11 / Max: 75.2)
  Prefer Freq:  70.09  (SE +/- 1.82, N = 12; Min: 56.91 / Max: 74.69)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

Himeno Benchmark

The Himeno benchmark is a linear solver for the pressure Poisson equation using a point-Jacobi method. Learn more via the OpenBenchmarking.org test page.
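
A point-Jacobi sweep for the pressure Poisson equation replaces each interior value with the average of its neighbors minus the source term. A minimal 2-D sketch (unit grid spacing and zero Dirichlet boundaries; illustrative only, not the Himeno kernel itself):

```python
import numpy as np

def jacobi_poisson_2d(f, iters=200):
    # Iteratively solve laplacian(p) = f with p = 0 on the boundary.
    p = np.zeros_like(f)
    for _ in range(iters):
        p_new = p.copy()
        p_new[1:-1, 1:-1] = 0.25 * (
            p[2:, 1:-1] + p[:-2, 1:-1] + p[1:-1, 2:] + p[1:-1, :-2]
            - f[1:-1, 1:-1]  # unit grid spacing, so h^2 = 1
        )
        p = p_new
    return p

f = np.zeros((16, 16))
f[8, 8] = 1.0  # point source
p = jacobi_poisson_2d(f)
```

Each sweep reads only the previous iterate, so the update is trivially parallel across grid points; Himeno's MFLOPS figure essentially measures how fast the memory system can feed this stencil.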

Himeno Benchmark 3.0 - Poisson Pressure Solver (MFLOPS; more is better)
  Auto:         5350.25  (SE +/- 140.79, N = 15; Min: 4682.76 / Max: 6576.31)
  Prefer Cache: 5163.44  (SE +/- 120.69, N = 15; Min: 4533.32 / Max: 5888.94)
  Prefer Freq:  4638.29  (SE +/- 83.07, N = 15; Min: 4161.95 / Max: 5055.63)
  1. (CC) gcc options: -O3 -mavx2

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 13.0 - Build System: Unix Makefiles (Seconds; fewer is better)
  Auto:         285.53  (SE +/- 0.94, N = 3; Min: 283.92 / Max: 287.16)
  Prefer Cache: 282.59  (SE +/- 3.72, N = 3; Min: 275.48 / Max: 288.07)
  Prefer Freq:  276.21  (SE +/- 2.01, N = 3; Min: 273.48 / Max: 280.13)

Timed LLVM Compilation 13.0 - Build System: Ninja (Seconds; fewer is better)
  Auto:         252.37  (SE +/- 0.14, N = 3; Min: 252.1 / Max: 252.58)
  Prefer Cache: 252.32  (SE +/- 0.39, N = 3; Min: 251.79 / Max: 253.09)
  Prefer Freq:  252.07  (SE +/- 0.26, N = 3; Min: 251.67 / Max: 252.56)

Numpy Benchmark

This test measures general NumPy performance. Learn more via the OpenBenchmarking.org test page.
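
Benchmarks like this time a battery of NumPy kernels and aggregate the results into a score. A best-of-N timing sketch of the same idea (the kernel and sizes here are illustrative assumptions, not the actual benchmark's workload):

```python
import time
import numpy as np

def best_of(fn, repeats=5):
    # Best-of-N wall time in seconds, which minimizes scheduler noise.
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - t0)
    return best

a = np.random.rand(256, 256)
t = best_of(lambda: a @ a)
print(f"256x256 matmul: {t * 1e3:.3f} ms")
```

Because NumPy's matmul dispatches to the linked BLAS, results like those below are sensitive to which cores (and which CCD on the 7950X3D) the BLAS threads land on.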

Numpy Benchmark (Score; more is better)
  Auto:         899.73  (SE +/- 6.01, N = 15; Min: 872.04 / Max: 977.41)
  Prefer Cache: 958.48  (SE +/- 9.21, N = 6; Min: 914.45 / Max: 979.92)
  Prefer Freq:  885.61  (SE +/- 1.73, N = 3; Min: 883.71 / Max: 889.07)

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs ONNX Runtime with various models available from the ONNX Model Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.14 - Model: yolov4 - Device: CPU - Executor: Standard (Inference Time Cost, ms; fewer is better)
  Auto:         81.70  (SE +/- 0.82, N = 5; Min: 78.51 / Max: 82.84)
  Prefer Cache: 94.13  (SE +/- 3.28, N = 15; Min: 82.6 / Max: 113.58)
  Prefer Freq:  82.10  (SE +/- 0.73, N = 3; Min: 80.65 / Max: 82.88)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: yolov4 - Device: CPU - Executor: Standard (Inferences Per Second; more is better)
  Auto:         12.24  (SE +/- 0.13, N = 5; Min: 12.07 / Max: 12.74)
  Prefer Cache: 10.79  (SE +/- 0.35, N = 15; Min: 8.8 / Max: 12.11)
  Prefer Freq:  12.18  (SE +/- 0.11, N = 3; Min: 12.06 / Max: 12.4)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: GPT-2 - Device: CPU - Executor: Standard (Inference Time Cost, ms; fewer is better)
  Auto:         5.39574  (SE +/- 0.06583, N = 4; Min: 5.22 / Max: 5.54)
  Prefer Cache: 5.29251  (SE +/- 0.01610, N = 3; Min: 5.27 / Max: 5.32)
  Prefer Freq:  5.46230  (SE +/- 0.08402, N = 15; Min: 5.22 / Max: 6.11)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: GPT-2 - Device: CPU - Executor: Standard (Inferences Per Second; more is better)
  Auto:         185.34  (SE +/- 2.28, N = 4; Min: 180.45 / Max: 191.43)
  Prefer Cache: 188.88  (SE +/- 0.57, N = 3; Min: 187.79 / Max: 189.73)
  Prefer Freq:  183.57  (SE +/- 2.62, N = 15; Min: 163.72 / Max: 191.45)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard (Inference Time Cost, ms; fewer is better)
  Auto:         2.11966  (SE +/- 0.04888, N = 15; Min: 1.98 / Max: 2.53)
  Prefer Cache: 2.00833  (SE +/- 0.00651, N = 3; Min: 2 / Max: 2.02)
  Prefer Freq:  2.00100  (SE +/- 0.00189, N = 3; Min: 2 / Max: 2)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

ONNX Runtime 1.14 - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard (Inferences Per Second; more is better)
  Auto:         474.94  (SE +/- 9.98, N = 15; Min: 395.89 / Max: 503.77)
  Prefer Cache: 497.88  (SE +/- 1.61, N = 3; Min: 494.71 / Max: 499.94)
  Prefer Freq:  499.70  (SE +/- 0.47, N = 3; Min: 498.89 / Max: 500.52)
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto=auto -fno-fat-lto-objects -ldl -lrt

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.