june

Intel Core i9-10980XE testing with an ASRock X299 Steel Legend (P1.30 BIOS) and llvmpipe on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2306252-PTS-JUNE436037

This result file spans the following test categories:

AV1 2 Tests
C++ Boost Tests 2 Tests
C/C++ Compiler Tests 2 Tests
CPU Massive 4 Tests
Creator Workloads 6 Tests
Encoding 4 Tests
Fortran Tests 2 Tests
HPC - High Performance Computing 7 Tests
Machine Learning 2 Tests
MPI Benchmarks 3 Tests
Multi-Core 5 Tests
OpenMPI Tests 5 Tests
Python Tests 3 Tests
Scientific Computing 3 Tests
Software Defined Radio 2 Tests
Server CPU Tests 3 Tests
Video Encoding 3 Tests

Test Runs

Result Identifier   Date           Test Duration
a                   June 25 2023   2 Hours, 44 Minutes
b                   June 25 2023   2 Hours, 46 Minutes
c                   June 25 2023   1 Hour, 55 Minutes
Average run time                   2 Hours, 28 Minutes

june - System Information

Processor: Intel Core i9-10980XE @ 4.80GHz (18 Cores / 36 Threads)
Motherboard: ASRock X299 Steel Legend (P1.30 BIOS)
Chipset: Intel Sky Lake-E DMI3 Registers
Memory: 32GB
Disk: Samsung SSD 970 PRO 512GB
Graphics: llvmpipe
Audio: Realtek ALC1220
Network: Intel I219-V + Intel I211
OS: Ubuntu 22.04
Kernel: 5.19.0-051900rc7-generic (x86_64)
Desktop: GNOME Shell 42.2
Display Server: X Server 1.21.1.3
OpenGL: 4.5 Mesa 22.0.1 (LLVM 13.0.1 256 bits)
Vulkan: 1.2.204
Compiler: GCC 11.3.0
File-System: ext4
Screen Resolution: 1024x768

June Benchmarks - System Logs
- Transparent Huge Pages: madvise
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-aYxV0E/gcc-11-11.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-aYxV0E/gcc-11-11.3.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Disk mount options: NONE / errors=remount-ro,relatime,rw / Block Size: 4096
- Processor: Scaling Governor: intel_cpufreq schedutil - CPU Microcode: 0x5003303
- Python 3.10.6
- Security: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Mitigation of Enhanced IBRS + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Mitigation of TSX disabled

Result Overview (runs a, b, c; relative performance normalized to 100%, spread up to roughly 109%): RELION, QuantLib, Monte Carlo Simulations of Ionised Nebulae, eSpeak-NG Speech Engine, SVT-AV1, Z3 Theorem Prover, dav1d, srsRAN Project, libxsmm, Embree, Opus Codec Encoding, SQLite, High Performance Conjugate Gradient, VVenC, Liquid-DSP, Neural Magic DeepSparse, Stress-NG

(Condensed side-by-side table of all 140 results for runs a, b, and c; the individual results are broken out per test below.)

SQLite

This is a simple benchmark of SQLite. At present this test profile just measures the time to perform a pre-defined number of insertions on an indexed database with a variable number of concurrent repetitions -- up to the maximum number of CPU threads available. Learn more via the OpenBenchmarking.org test page.
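
As a rough illustration of what this test exercises, the sketch below (not the actual test profile; the table schema, insert count, and single transaction are assumptions) times a batch of insertions into an indexed SQLite table from C:

  /* Minimal sketch: time N insertions into an indexed SQLite table,
   * roughly mirroring what this benchmark measures. */
  #include <sqlite3.h>
  #include <stdio.h>
  #include <time.h>

  int main(void)
  {
      sqlite3 *db;
      if (sqlite3_open("bench.db", &db) != SQLITE_OK) return 1;

      sqlite3_exec(db, "CREATE TABLE t (id INTEGER, val TEXT);"
                       "CREATE INDEX t_idx ON t(val);", 0, 0, 0);

      const int n = 100000;                 /* illustrative insert count */
      clock_t start = clock();
      sqlite3_exec(db, "BEGIN;", 0, 0, 0);
      for (int i = 0; i < n; i++) {
          char sql[128];
          snprintf(sql, sizeof sql,
                   "INSERT INTO t VALUES (%d, 'row-%d');", i, i);
          sqlite3_exec(db, sql, 0, 0, 0);   /* insert hits the index each time */
      }
      sqlite3_exec(db, "COMMIT;", 0, 0, 0);
      printf("%d inserts: %.2f s\n", n,
             (double)(clock() - start) / CLOCKS_PER_SEC);

      sqlite3_close(db);
      return 0;
  }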

SQLite 3.41.2 (Seconds, Fewer Is Better)
Threads / Copies: 1  -  a: 48.86    b: 47.95    c: 48.01
Threads / Copies: 2  -  a: 114.62   b: 115.27   c: 115.04
Threads / Copies: 4  -  a: 138.51   b: 137.37   c: 137.41
Threads / Copies: 8  -  a: 177.00   b: 182.14   c: 179.77
1. (CC) gcc options: -O2 -lreadline -ltermcap -lz -lm

QuantLib

QuantLib is an open-source library/framework around quantitative finance for modeling, trading and risk management scenarios. QuantLib is written in C++ with Boost and its built-in benchmark used reports the QuantLib Benchmark Index benchmark score. Learn more via the OpenBenchmarking.org test page.

QuantLib 1.30 (MFLOPS, More Is Better)
a: 2297.4   b: 2386.9   c: 2365.5
1. (CXX) g++ options: -O3 -march=native -fPIE -pie

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient and is a new scientific benchmark from Sandia National Laboratories focused on supercomputer testing with modern real-world workloads compared to HPCC. Learn more via the OpenBenchmarking.org test page.

High Performance Conjugate Gradient 3.1 (GFLOP/s, More Is Better)
X Y Z: 104 104 104 - RT: 60  -  a: 7.56073   b: 7.54527   c: 7.54020
1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi_cxx -lmpi

X Y Z: 144 144 144 - RT: 60

a: The test quit with a non-zero exit status. E: cat: 'HPCG-Benchmark*.txt': No such file or directory

b: The test quit with a non-zero exit status. E: cat: 'HPCG-Benchmark*.txt': No such file or directory

c: The test quit with a non-zero exit status. E: cat: 'HPCG-Benchmark*.txt': No such file or directory

X Y Z: 160 160 160 - RT: 60

a: The test quit with a non-zero exit status. E: cat: 'HPCG-Benchmark*.txt': No such file or directory

b: The test quit with a non-zero exit status. E: cat: 'HPCG-Benchmark*.txt': No such file or directory

c: The test quit with a non-zero exit status. E: cat: 'HPCG-Benchmark*.txt': No such file or directory

libxsmm

Libxsmm is an open-source library for specialized dense and sparse matrix operations and deep learning primitives. Libxsmm supports making use of Intel AMX, AVX-512, and other modern CPU instruction set capabilities. Learn more via the OpenBenchmarking.org test page.

libxsmm 2-1.17-3645 (GFLOPS/s, More Is Better)
M N K: 128  -  a: 392.9   b: 397.3   c: 398.6
M N K: 256  -  a: 156.6   b: 156.8   c: 156.9
M N K: 32   -  a: 107.2   b: 107.2   c: 107.5
M N K: 64   -  a: 222.1   b: 222.8   c: 223.2
1. (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -pedantic -O2 -fopenmp -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -march=core-avx2

Monte Carlo Simulations of Ionised Nebulae

MOCASSIN (Monte Carlo Simulations of Ionised Nebulae) is a fully 3D or 2D photoionisation and dust radiative transfer code which employs a Monte Carlo approach to the transfer of radiation through media of arbitrary geometry and density distribution. Learn more via the OpenBenchmarking.org test page.

Monte Carlo Simulations of Ionised Nebulae 2.02.73.3 (Seconds, Fewer Is Better)
Input: Gas HII40         -  a: 18.28    b: 19.26    c: 19.32
Input: Dust 2D tau100.0  -  a: 195.53   b: 195.67   c: 195.37
1. (F9X) gfortran options: -cpp -Jsource/ -ffree-line-length-0 -lm -std=legacy -O2 -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lz

RELION

RELION - REgularised LIkelihood OptimisatioN - is a stand-alone computer program for Maximum A Posteriori refinement of (multiple) 3D reconstructions or 2D class averages in cryo-electron microscopy (cryo-EM). It is developed in the research group of Sjors Scheres at the MRC Laboratory of Molecular Biology. Learn more via the OpenBenchmarking.org test page.

RELION 4.0.1 (Seconds, Fewer Is Better)
Test: Basic - Device: CPU  -  a: 1354.22   b: 1458.82   c: 1470.98
1. (CXX) g++ options: -fopenmp -std=c++11 -O3 -rdynamic -ldl -ltiff -lfftw3f -lfftw3 -lpng -ljpeg -lmpi_cxx -lmpi

Z3 Theorem Prover

The Z3 Theorem Prover / SMT solver is developed by Microsoft Research under the MIT license. Learn more via the OpenBenchmarking.org test page.

Z3 Theorem Prover 4.12.1 (Seconds, Fewer Is Better)
SMT File: 1.smt2  -  a: 37.59    b: 37.58    c: 38.07
SMT File: 2.smt2  -  a: 136.37   b: 138.13   c: 138.13
1. (CXX) g++ options: -lpthread -std=c++17 -fvisibility=hidden -mfpmath=sse -msse -msse2 -O3 -fPIC

srsRAN Project

srsRAN Project is a complete ORAN-native 5G RAN solution created by Software Radio Systems (SRS). The srsRAN Project radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN Project 23.5 (Mbps, More Is Better)
Test: Downlink Processor Benchmark                   -  a: 413.6    b: 416.0    c: 413.5
Test: PUSCH Processor Benchmark, Throughput Total    -  a: 2199.7   b: 2159.6   c: 2203.9
Test: PUSCH Processor Benchmark, Throughput Thread   -  a: 164.4    b: 163.8    c: 161.0
1. (CXX) g++ options: -march=native -mfma -O3 -fno-trapping-math -fno-math-errno -lgtest

dav1d

Dav1d is an open-source, speedy AV1 video decoder supporting modern SIMD CPU features. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 1.2.1 (FPS, More Is Better)
Video Input: Chimera 1080p          -  a: 230.12   b: 230.26   c: 230.84
Video Input: Summer Nature 4K       -  a: 224.08   b: 225.15   c: 227.44
Video Input: Summer Nature 1080p    -  a: 596.16   b: 602.88   c: 618.43
Video Input: Chimera 1080p 10-bit   -  a: 205.71   b: 205.66   c: 204.78
1. (CC) gcc options: -pthread

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL) and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 4.1 (Frames Per Second, More Is Better)
Binary: Pathtracer - Model: Crown                  -  a: 19.34 (min 19.08 / max 19.68)   b: 19.18 (min 18.89 / max 19.63)   c: 19.32 (min 19.14 / max 19.63)
Binary: Pathtracer ISPC - Model: Crown             -  a: 17.62 (min 17.34 / max 17.94)   b: 17.62 (min 17.37 / max 17.92)   c: 17.68 (min 17.42 / max 17.97)
Binary: Pathtracer - Model: Asian Dragon           -  a: 22.27 (min 22.13 / max 22.54)   b: 22.24 (min 22.1 / max 22.51)    c: 22.19 (min 22.05 / max 22.48)
Binary: Pathtracer - Model: Asian Dragon Obj       -  a: 20.12 (min 20 / max 20.46)      b: 19.95 (min 19.84 / max 20.15)   c: 20.09 (min 19.97 / max 20.31)
Binary: Pathtracer ISPC - Model: Asian Dragon      -  a: 22.97 (min 22.75 / max 23.26)   b: 22.92 (min 22.69 / max 23.17)   c: 23.06 (min 22.87 / max 23.31)
Binary: Pathtracer ISPC - Model: Asian Dragon Obj  -  a: 19.73 (min 19.53 / max 19.99)   b: 19.73 (min 19.52 / max 20.03)   c: 19.77 (min 19.59 / max 20.01)

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.6 (Frames Per Second, More Is Better)
Encoder Mode: Preset 4 - Input: Bosphorus 4K       -  a: 2.670    b: 2.668    c: 2.665
Encoder Mode: Preset 8 - Input: Bosphorus 4K       -  a: 35.93    b: 36.10    c: 35.77
Encoder Mode: Preset 12 - Input: Bosphorus 4K      -  a: 98.96    b: 98.19    c: 99.18
Encoder Mode: Preset 13 - Input: Bosphorus 4K      -  a: 100.18   b: 99.42    c: 99.11
Encoder Mode: Preset 4 - Input: Bosphorus 1080p    -  a: 7.817    b: 7.680    c: 7.839
Encoder Mode: Preset 8 - Input: Bosphorus 1080p    -  a: 50.51    b: 49.28    c: 49.22
Encoder Mode: Preset 12 - Input: Bosphorus 1080p   -  a: 157.71   b: 160.45   c: 154.21
Encoder Mode: Preset 13 - Input: Bosphorus 1080p   -  a: 189.51   b: 188.50   c: 179.15
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

VVenC

VVenC is the Fraunhofer Versatile Video Encoder as a fast/efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.8 (Frames Per Second, More Is Better)
Video Input: Bosphorus 4K - Video Preset: Fast        -  a: 4.200    b: 4.198    c: 4.209
Video Input: Bosphorus 4K - Video Preset: Faster      -  a: 8.083    b: 8.086    c: 8.083
Video Input: Bosphorus 1080p - Video Preset: Fast     -  a: 11.36    b: 11.43    c: 11.30
Video Input: Bosphorus 1080p - Video Preset: Faster   -  a: 21.78    b: 21.79    c: 21.79
1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

Opus Codec Encoding

Opus is an open audio codec. Opus is a lossy audio compression format designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus five times. Learn more via the OpenBenchmarking.org test page.
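
For context, a minimal libopus encoding sketch is shown below. It is not the opusenc tool this test runs, and the 48 kHz stereo / 20 ms frame parameters are common-case assumptions, but it shows the API the encode path is built on:

  /* Minimal sketch: encode one frame of 16-bit PCM with libopus. */
  #include <opus/opus.h>
  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
      int err = 0;
      OpusEncoder *enc = opus_encoder_create(48000, 2, OPUS_APPLICATION_AUDIO, &err);
      if (err != OPUS_OK) return 1;

      opus_int16 pcm[960 * 2];        /* one 20 ms stereo frame (silence here) */
      unsigned char packet[4000];
      memset(pcm, 0, sizeof pcm);

      opus_int32 len = opus_encode(enc, pcm, 960, packet, sizeof packet);
      if (len > 0)
          printf("encoded one frame into %d bytes\n", (int)len);

      opus_encoder_destroy(enc);
      return 0;
  }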

Opus Codec Encoding 1.4 (Seconds, Fewer Is Better)
WAV To Opus Encode  -  a: 39.97   b: 39.91   c: 39.83
1. (CXX) g++ options: -O3 -fvisibility=hidden -logg -lm

eSpeak-NG Speech Engine

This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.

eSpeak-NG Speech Engine 1.51 (Seconds, Fewer Is Better)
Text-To-Speech Synthesis  -  a: 36.85   b: 37.00   c: 37.61
1. (CXX) g++ options: -O2

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.
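
As a rough sketch of the operation being timed (not the benchmark harness itself), the C snippet below pushes a 256-sample buffer through a 57-tap complex FIR filter using liquid-dsp's firfilt interface; the Kaiser design parameters and zeroed input are illustrative assumptions:

  /* Minimal sketch: 57-tap complex FIR filter over a 256-sample buffer. */
  #include <complex.h>
  #include <liquid/liquid.h>

  int main(void)
  {
      unsigned int h_len = 57, buf_len = 256;
      float h[57];
      /* design a 57-tap Kaiser-windowed low-pass prototype (cutoff/attenuation assumed) */
      liquid_firdes_kaiser(h_len, 0.25f, 60.0f, 0.0f, h);

      firfilt_crcf q = firfilt_crcf_create(h, h_len);

      float complex x[256], y[256];
      for (unsigned int i = 0; i < buf_len; i++)
          x[i] = 0.0f;                     /* placeholder input samples */

      for (unsigned int i = 0; i < buf_len; i++) {
          firfilt_crcf_push(q, x[i]);      /* push one sample into the delay line */
          firfilt_crcf_execute(q, &y[i]);  /* compute one filtered output sample */
      }

      firfilt_crcf_destroy(q);
      return 0;
  }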

Liquid-DSP 1.6 (samples/s, More Is Better)
Threads: 1 - Buffer Length: 256 - Filter Length: 32    -  a: 38047000     b: 37991000     c: 37996000
Threads: 1 - Buffer Length: 256 - Filter Length: 57    -  a: 47089000     b: 47036000     c: 46996000
Threads: 2 - Buffer Length: 256 - Filter Length: 32    -  a: 74320000     b: 74473000     c: 73914000
Threads: 2 - Buffer Length: 256 - Filter Length: 57    -  a: 83707000     b: 83589000     c: 83637000
Threads: 4 - Buffer Length: 256 - Filter Length: 32    -  a: 143830000    b: 143510000    c: 142900000
Threads: 4 - Buffer Length: 256 - Filter Length: 57    -  a: 147040000    b: 144900000    c: 145610000
Threads: 8 - Buffer Length: 256 - Filter Length: 32    -  a: 281500000    b: 283260000    c: 281220000
Threads: 8 - Buffer Length: 256 - Filter Length: 57    -  a: 262290000    b: 264750000    c: 264540000
Threads: 1 - Buffer Length: 256 - Filter Length: 512   -  a: 12299000     b: 12292000     c: 12273000
Threads: 16 - Buffer Length: 256 - Filter Length: 32   -  a: 528520000    b: 527920000    c: 527780000
Threads: 16 - Buffer Length: 256 - Filter Length: 57   -  a: 472070000    b: 471920000    c: 468410000
Threads: 2 - Buffer Length: 256 - Filter Length: 512   -  a: 25027000     b: 24980000     c: 25043000
Threads: 32 - Buffer Length: 256 - Filter Length: 32   -  a: 973050000    b: 973270000    c: 976240000
Threads: 32 - Buffer Length: 256 - Filter Length: 57   -  a: 700150000    b: 707420000    c: 697890000
Threads: 36 - Buffer Length: 256 - Filter Length: 32   -  a: 1070600000   b: 1057600000   c: 1063900000
Threads: 36 - Buffer Length: 256 - Filter Length: 57   -  a: 736850000    b: 750100000    c: 742360000
Threads: 4 - Buffer Length: 256 - Filter Length: 512   -  a: 49144000     b: 49260000     c: 49141000
Threads: 8 - Buffer Length: 256 - Filter Length: 512   -  a: 95954000     b: 96260000     c: 96340000
Threads: 16 - Buffer Length: 256 - Filter Length: 512  -  a: 179950000    b: 180140000    c: 180230000
Threads: 32 - Buffer Length: 256 - Filter Length: 512  -  a: 257910000    b: 257140000    c: 258540000
Threads: 36 - Buffer Length: 256 - Filter Length: 512  -  a: 273040000    b: 273140000    c: 272650000
1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Neural Magic DeepSparse

This is a benchmark of Neural Magic's DeepSparse using its built-in deepsparse.benchmark utility and various models from their SparseZoo (https://sparsezoo.neuralmagic.com/). Learn more via the OpenBenchmarking.org test page.

Neural Magic DeepSparse 1.5
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream:
  items/sec (More Is Better)  -  a: 17.97     b: 17.96     c: 17.82
  ms/batch (Fewer Is Better)  -  a: 497.58    b: 498.40    c: 501.26
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream:
  items/sec (More Is Better)  -  a: 16.52     b: 16.55     c: 16.57
  ms/batch (Fewer Is Better)  -  a: 60.51     b: 60.42     c: 60.35
Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream:
  items/sec (More Is Better)  -  a: 232.45    b: 233.56    c: 233.84
  ms/batch (Fewer Is Better)  -  a: 38.70     b: 38.51     c: 38.46
Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream:
  items/sec (More Is Better)  -  a: 123.43    b: 122.66    c: 123.45
  ms/batch (Fewer Is Better)  -  a: 8.0921    b: 8.1430    c: 8.0913
Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream:
  items/sec (More Is Better)  -  a: 52.82     b: 53.19     c: 52.91
  ms/batch (Fewer Is Better)  -  a: 170.12    b: 169.04    c: 169.81
Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream:
  items/sec (More Is Better)  -  a: 49.30     b: 49.31     c: 49.08
  ms/batch (Fewer Is Better)  -  a: 20.27     b: 20.27     c: 20.36
Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream:
  items/sec (More Is Better)  -  a: 103.33    b: 102.80    c: 103.29
  ms/batch (Fewer Is Better)  -  a: 87.06     b: 87.51     c: 87.05
Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream:
  items/sec (More Is Better)  -  a: 79.83     b: 79.95     c: 79.65
  ms/batch (Fewer Is Better)  -  a: 12.51     b: 12.49     c: 12.54
Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream:
  items/sec (More Is Better)  -  a: 257.95    b: 257.42    c: 258.11
  ms/batch (Fewer Is Better)  -  a: 34.86     b: 34.93     c: 34.84
Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream:
  items/sec (More Is Better)  -  a: 137.72    b: 135.69    c: 137.94
  ms/batch (Fewer Is Better)  -  a: 7.2500    b: 7.3587    c: 7.2377
Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream:
  items/sec (More Is Better)  -  a: 144.10    b: 144.71    c: 148.40
  ms/batch (Fewer Is Better)  -  a: 62.38     b: 62.16     c: 60.63
Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream:
  items/sec (More Is Better)  -  a: 96.64     b: 94.97     c: 95.81
  ms/batch (Fewer Is Better)  -  a: 10.34     b: 10.52     c: 10.43
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream:
  items/sec (More Is Better)  -  a: 22.83     b: 22.71     c: 22.67
  ms/batch (Fewer Is Better)  -  a: 393.76    b: 395.87    c: 394.44
Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream:
  items/sec (More Is Better)  -  a: 21.95     b: 22.00     c: 21.96
  ms/batch (Fewer Is Better)  -  a: 45.54     b: 45.44     c: 45.51
Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream:
  items/sec (More Is Better)  -  a: 71.96     b: 73.10     c: 71.89
  ms/batch (Fewer Is Better)  -  a: 125.04    b: 123.09    c: 125.17
Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream:
  items/sec (More Is Better)  -  a: 48.65     b: 48.94     c: 48.95
  ms/batch (Fewer Is Better)  -  a: 20.55     b: 20.42     c: 20.42
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream:
  items/sec (More Is Better)  -  a: 17.65     b: 17.74     c: 17.69
  ms/batch (Fewer Is Better)  -  a: 506.53    b: 505.34    c: 506.70
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream:
  items/sec (More Is Better)  -  a: 16.10     b: 16.32     c: 16.23
  ms/batch (Fewer Is Better)  -  a: 62.10     b: 61.27     c: 61.62

Stress-NG

Stress-NG is a Linux stress tool developed by Colin Ian King. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.15.10 (Bogo Ops/s, More Is Better)
Test: Hash                      -  a: 3160968.94    b: 3293772.69    c: 3269490.82
Test: MMAP                      -  a: 377.69        b: 381.37        c: 378.29
Test: NUMA                      -  a: 383.40        b: 388.63        c: 385.82
Test: Pipe                      -  a: 9319148.71    b: 9374931.95    c: 9360534.19
Test: Poll                      -  a: 2356731.34    b: 2357112.22    c: 2355403.88
Test: Zlib                      -  a: 1926.82       b: 1923.67       c: 1917.90
Test: Futex                     -  a: 2450295.89    b: 2257619.63    c: 2364953.68
Test: MEMFD                     -  a: 356.35        b: 362.51        c: 359.02
Test: Mutex                     -  a: 11151019.20   b: 11163728.77
Test: Atomic                    -  a: 268.45        b: 266.98
Test: Crypto                    -  a: 30778.89      b: 30768.49
Test: Malloc                    -  a: 27513666.08   b: 27457307.30
Test: Cloning                   -  a: 1657.21       b: 1666.79
Test: Forking                   -  a: 54709.75      b: 55280.88
Test: Pthread                   -  a: 139152.26     b: 138545.44
Test: AVL Tree                  -  a: 111.01        b: 111.05
Test: IO_uring                  -  a: 238314.47     b: 239747.21
Test: SENDFILE                  -  a: 325600.51     b: 325816.87
Test: CPU Cache                 -  a: 2923392.1     b: 2576358.0
Test: CPU Stress                -  a: 42674.60      b: 42701.63
Test: Semaphores                -  a: 41039242.26   b: 41174375.79
Test: Matrix Math               -  a: 99971.52      b: 98339.79
Test: Vector Math               -  a: 69200.14      b: 69079.55
Test: Function Call             -  a: 10795.42      b: 10785.12
Test: x86_64 RdRand             -  a: 182253.68     b: 182243.26
Test: Floating Point            -  a: 3901.00       b: 3892.41
Test: Matrix 3D Math            -  a: 1403.77       b: 1404.98
Test: Memory Copying            -  a: 4840.85       b: 4835.24
Test: Vector Shuffle            -  a: 12117.87      b: 12090.17
Test: Socket Activity           -  a: 11498.67      b: 11930.75
Test: Wide Vector Math          -  a: 726737.43     b: 730513.78
Test: Context Switching         -  a: 2769182.16    b: 2862261.77
Test: Fused Multiply-Add        -  a: 17168287.23   b: 17113542.40
Test: Vector Floating Point     -  a: 39425.24      b: 37514.93
Test: Glibc C String Functions  -  a: 6643031.84    b: 6629878.42
Test: Glibc Qsort Data Sorting  -  a: 427.85        b: 426.51
Test: System V Message Passing  -  a: 7183098.54    b: 7136960.22
1. (CXX) g++ options: -lm -laio -lapparmor -latomic -lbsd -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lmd -lmpfr -lpthread -lrt -lsctp -lz

GPAW

GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.

GPAW 23.6 (Seconds, Fewer Is Better)
Input: Carbon Nanotube  -  a: 207.78   b: 207.09
1. (CC) gcc options: -shared -fwrapv -O2 -lxc -lblas -lmpi

Whisper.cpp

Whisper.cpp is a port of OpenAI's Whisper model in C/C++. Whisper.cpp is developed by Georgi Gerganov for transcribing WAV audio files to text / speech recognition. Whisper.cpp supports ARM NEON, x86 AVX, and other advanced CPU features. Learn more via the OpenBenchmarking.org test page.
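
For reference, a minimal sketch of driving the whisper.cpp C API of this era is shown below; the model path and the silent one-second input buffer are placeholders rather than the benchmark's actual inputs:

  /* Minimal sketch: load a ggml Whisper model and transcribe 16 kHz mono float PCM. */
  #include <stdio.h>
  #include <string.h>
  #include "whisper.h"

  int main(void)
  {
      struct whisper_context *ctx = whisper_init_from_file("ggml-base.en.bin");
      if (!ctx) return 1;

      float samples[16000];                /* one second of silence as placeholder input */
      memset(samples, 0, sizeof samples);

      struct whisper_full_params params =
          whisper_full_default_params(WHISPER_SAMPLING_GREEDY);

      if (whisper_full(ctx, params, samples, 16000) == 0) {
          int n = whisper_full_n_segments(ctx);
          for (int i = 0; i < n; i++)
              printf("%s\n", whisper_full_get_segment_text(ctx, i));
      }

      whisper_free(ctx);
      return 0;
  }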

Whisper.cpp 1.4 (Seconds, Fewer Is Better)
Model: ggml-base.en - Input: 2016 State of the Union     -  a: 173.92    b: 173.15
Model: ggml-small.en - Input: 2016 State of the Union    -  a: 488.90    b: 481.10
Model: ggml-medium.en - Input: 2016 State of the Union   -  a: 1343.53   b: 1366.65
1. (CXX) g++ options: -O3 -std=c++11 -fPIC -pthread

Kripke

Kripke is a simple, scalable, 3D Sn deterministic particle transport code. Its primary purpose is to research how data layout, programming paradigms and architectures affect the implementation and performance of Sn transport. Kripke is developed by LLNL. Learn more via the OpenBenchmarking.org test page.

a: The test quit with a non-zero exit status.

b: The test quit with a non-zero exit status.

140 Results Shown

SQLite:
  1
  2
  4
  8
QuantLib
High Performance Conjugate Gradient
libxsmm:
  128
  256
  32
  64
Monte Carlo Simulations of Ionised Nebulae:
  Gas HII40
  Dust 2D tau100.0
RELION
Z3 Theorem Prover:
  1.smt2
  2.smt2
srsRAN Project:
  Downlink Processor Benchmark
  PUSCH Processor Benchmark, Throughput Total
  PUSCH Processor Benchmark, Throughput Thread
dav1d:
  Chimera 1080p
  Summer Nature 4K
  Summer Nature 1080p
  Chimera 1080p 10-bit
Embree:
  Pathtracer - Crown
  Pathtracer ISPC - Crown
  Pathtracer - Asian Dragon
  Pathtracer - Asian Dragon Obj
  Pathtracer ISPC - Asian Dragon
  Pathtracer ISPC - Asian Dragon Obj
SVT-AV1:
  Preset 4 - Bosphorus 4K
  Preset 8 - Bosphorus 4K
  Preset 12 - Bosphorus 4K
  Preset 13 - Bosphorus 4K
  Preset 4 - Bosphorus 1080p
  Preset 8 - Bosphorus 1080p
  Preset 12 - Bosphorus 1080p
  Preset 13 - Bosphorus 1080p
VVenC:
  Bosphorus 4K - Fast
  Bosphorus 4K - Faster
  Bosphorus 1080p - Fast
  Bosphorus 1080p - Faster
Opus Codec Encoding
eSpeak-NG Speech Engine
Liquid-DSP:
  1 - 256 - 32
  1 - 256 - 57
  2 - 256 - 32
  2 - 256 - 57
  4 - 256 - 32
  4 - 256 - 57
  8 - 256 - 32
  8 - 256 - 57
  1 - 256 - 512
  16 - 256 - 32
  16 - 256 - 57
  2 - 256 - 512
  32 - 256 - 32
  32 - 256 - 57
  36 - 256 - 32
  36 - 256 - 57
  4 - 256 - 512
  8 - 256 - 512
  16 - 256 - 512
  32 - 256 - 512
  36 - 256 - 512
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream:
    items/sec
    ms/batch
  CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Detection, YOLOv5s COCO - Synchronous Single-Stream:
    items/sec
    ms/batch
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream:
    items/sec
    ms/batch
  CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream:
    items/sec
    ms/batch
Stress-NG:
  Hash
  MMAP
  NUMA
  Pipe
  Poll
  Zlib
  Futex
  MEMFD
  Mutex
  Atomic
  Crypto
  Malloc
  Cloning
  Forking
  Pthread
  AVL Tree
  IO_uring
  SENDFILE
  CPU Cache
  CPU Stress
  Semaphores
  Matrix Math
  Vector Math
  Function Call
  x86_64 RdRand
  Floating Point
  Matrix 3D Math
  Memory Copying
  Vector Shuffle
  Socket Activity
  Wide Vector Math
  Context Switching
  Fused Multiply-Add
  Vector Floating Point
  Glibc C String Functions
  Glibc Qsort Data Sorting
  System V Message Passing
GPAW
Whisper.cpp:
  ggml-base.en - 2016 State of the Union
  ggml-small.en - 2016 State of the Union
  ggml-medium.en - 2016 State of the Union